\section{Introduction} Biometrics~\cite{Fons2010} is a technology that allows people to be recognized through various personal characteristics. It is an active research field in which new biometric traits are regularly designed (such as finger knuckle recognition~\cite{kumar-human}). The various biometric modalities can be classified into three main families: \begin{itemize} \item \emph{Biological}: recognition is based on the analysis of biological data linked to an individual (\emph{e.g.}, DNA, EEG analysis). \item \emph{Behavioural}: recognition is based on the analysis of the behaviour of an individual while performing a specific task (\emph{e.g.}, signature dynamics, gait). \item \emph{Morphological}: recognition is based on different physical patterns, which are, in general, permanent and unique (\emph{e.g.}, fingerprint, face). \end{itemize} These biometric systems must be evaluated in order to quantify their performance and compare them. To evaluate a biometric system, a database must be acquired (or a common public dataset must be used). This database must contain as many users as possible and provide a large number of captures of their biometric data. These data are separated into two different sets: \begin{itemize} \item \emph{the learning set}, which serves to compute the biometric reference of each user; \item \emph{the validation set}, which serves to compute the system's performance.
\end{itemize} When comparing test samples to biometric references, we obtain two different kinds of scores: \begin{itemize} \item \emph{the intrascores} are comparison scores between the biometric reference of an individual (computed from the learning set) and his own biometric query samples (contained in the validation set); \item \emph{the interscores} are comparison scores between the biometric reference of an individual and the biometric query samples of the other individuals. \end{itemize} From these two sets of scores, we can compute various error rates, among which the EER (Equal Error Rate) is an operating point often used to compare biometric systems. In order to obtain reliable results, it is necessary to evaluate the performance of biometric systems on huge datasets. These huge datasets produce large numbers of scores. As the time needed to evaluate the performance of a biometric system depends on the quantity of available scores, evaluation may become very long on these large datasets. In this paper, we present a very fast way to compute this error rate, as well as its confidence interval in a non-parametric way, on different datasets of the literature. Nevertheless, there will always be users for whom one modality (or one method applied to this modality) gives bad results. These low performances can be caused by different factors: the quality of the capture, the acquisition conditions, or the individual himself. Biometric multi-modality (or multibiometrics) makes it possible to compensate for this problem while obtaining better biometric performance (\emph{i.e.}, better security by accepting fewer impostors, and better usability by rejecting fewer genuine users), by expecting that the errors of the different modalities are not correlated. The aim of multibiometrics is thus to protect logical or physical access to a resource by using different biometric captures. We can find different types of multibiometrics systems.
Most of them are listed in~\cite{ross2006handbook}; they use: \begin{enumerate} \item Different sensors of the same modality (\emph{i.e.}, capacitive or resistive sensors for fingerprint acquisition); \item Different representations of the same capture (\emph{i.e.}, use of points of interest or texture); \item Different biometric modalities (\emph{i.e.}, face and fingerprint); \item Several instances of the same modality (\emph{i.e.}, left and right eye for iris recognition); \item Multiple captures (\emph{i.e.}, 25 images per second in a video used for face recognition); \item A hybrid system composed of the association of the previous ones. \end{enumerate} In the proposed study, we are interested in the first four kinds of multi-modality. We also present in this paper a new multibiometrics approach using various fusion functions parameterized by genetic algorithms, which relies on a fast EER (Equal Error Rate) computing method to speed up the fitness evaluation. This paper is related to high performance computing, because the algorithms are designed to work in infrastructures managing the biometric authentication of millions of individuals (\emph{i.e.}, border access control, logical access control to webservices). To improve the recognition rate of biometric systems, it is necessary to regularly update the biometric reference to take into account intra-class variability. With the proposed approach, the time taken to update the biometric reference is lowered: the faster the proposed method, the more often the updating process can be launched (or the more users can be added to it). We also propose an adaptation of the proposed EER computing method which gives confidence intervals in a non-parametric way (\emph{i.e.}, by computing the EER several times through a bootstrapping method).
The confidence intervals are computed on a single CPU, on several CPUs of the same machine, and on several machines.\\ The main contributions of this paper are: \begin{itemize} \item the proposition of a new method to approximate the EER and its confidence interval in a fast way; \item the proposition of two original functions for multibiometrics fusion. \end{itemize} The remainder of this paper is organized as follows. Section 2 presents the background of the proposed work. Section 3 presents the proposed method for computing the approximated value of the EER and its confidence interval. Section 4 validates them. Section 5 presents the proposed multibiometrics fusion functions and their performance in terms of biometric recognition and computation time against the baseline. Section 6 gives the perspectives and conclusions of this paper. \section{Background} \subsection{Evaluation of Biometric Systems} \label{sec:evaluation_overview} Despite the obvious advantages of this technology in enhancing and facilitating the authentication process, its adoption is not yet as widespread as expected \cite{jain2004biometrics}. As argued in the previous section, biometric systems present several drawbacks in terms of precision, acceptability, quality and security. Hence, evaluating biometric systems is considered a challenge in this research field. Several works in the literature address the evaluation of such systems. The evaluation of biometric systems is generally carried out along three aspects, as illustrated in Figure~\ref{fig:evaluation_aspects}: usability, data quality and security. \begin{figure}[h!]
\begin{tikzpicture}[ event/.style={rectangle, thick, draw, text width=2.0cm, text centered,font=\sffamily,anchor=north}, edge from parent/.style={very thick, draw=black!70}, edge from parent path = {(\tikzparentnode.south) -- ++(0,-1.05cm) -| (\tikzchildnode.north)}, level 1/.style={sibling distance=9cm,level distance=1.4cm, growth parent anchor=south,nodes=event}, level 2/.style={sibling distance=8cm}, scale=0.6 ] \node(root) [event] {Evaluation of Biometric systems} child{node (Data quality) {Data quality}} child{node (Usability) {Usability} child{node (Quantitative) {Quantitative} child{node (Efficiency) {Efficiency}} child{node (Effectiveness) {Effectiveness}} } child{node (Qualitative) {Qualitative} child{node (Acceptance and User Satisfaction) {Acceptance and User Satisfaction}} } } child{node (Security) {Security}}; \end{tikzpicture} \caption{Evaluation aspects of Biometric Systems.} \label{fig:evaluation_aspects} \end{figure} \subsubsection{Usability} According to the International Organization for Standardization ISO 13407:1999 \citep{iso1999human}, usability is defined as ``\textit{The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use}''. It covers three criteria: \begin{itemize} \item \emph{Efficiency}, which means that users must be able to accomplish the tasks easily and in a timely manner. It is generally measured as task time; \item \emph{Effectiveness}, which means that users are able to complete the desired tasks without too much effort. It is generally measured by metrics such as the completion rate and the number of errors, such as the failure-to-enroll rate (FTE) \cite{iso2006performance}; \item \emph{User satisfaction}, which measures users' acceptance and satisfaction regarding the system.
It is generally measured by studying several properties such as ease of use, trust in the system, \emph{etc.} The acceptability of biometric systems is affected by several factors. According to \cite{smith2003human}, some members of the human-computer interaction (HCI) community believe that the interfaces of security systems do not reflect good thinking in terms of creating a system that is easy to use while maintaining an acceptable level of security. Existing works \cite{coventry2003usability, gunson2011usability} also show that there is a potential concern about the misuse of personal data (\emph{i.e.}, templates), which is seen as violating users' privacy and civil liberties. Moreover, one of our previous works \cite{elabed2010usability} shows the necessity of taking into account users' acceptance and satisfaction when designing and evaluating biometric systems. More generally, even if one biometric system outperforms another, this does not necessarily mean that it will be more operational or acceptable.\\ \end{itemize} \subsubsection{Data quality} This aspect measures the quality of the raw biometric data \cite{tabassi2005quality, elabed2010quality}. Low quality samples increase the enrollment failure rate and decrease system performance. Therefore, quality assessment is considered a crucial factor required in both the enrollment and verification phases. Using quality information, bad quality samples can be removed during enrollment or rejected during verification. Such information could also be used in soft biometrics or multimodal approaches \cite{Krzysztof2009quality}. This type of assessment is generally used to benchmark biometric sensors, and can also be used to enhance system performance.\\ \subsubsection{Security} It measures the robustness of a biometric system (algorithms, architectures and devices) against attacks.
Many works in the literature \citep{ratha2001security, uludag2004attacks, galbally2010attacks} show the vulnerabilities of biometric systems, which can considerably decrease their security. Hence, the evaluation of biometric systems in terms of security is considered an important factor to ensure their functionality. The International Organization for Standardization ISO/IEC FCD 19792 \cite{iso2008security} addresses the aspects of security evaluation of such systems. The report presents an overview of the vulnerabilities of biometric systems and provides some recommendations to be taken into account during the evaluation process. Nowadays, only a few partial security analysis studies related to biometric authentication systems exist. According to ISO/IEC FCD 19792 \cite{iso2008security}, the security evaluation of biometric systems is generally divided into two complementary assessments: 1) assessment of the biometric system itself (devices and algorithms) and 2) assessment of the environmental conditions (for example, is the system used indoors or outdoors?) and operational conditions (for example, the tasks performed by system administrators to ensure that the identities claimed by users during enrolment are valid). A type-1 security assessment method is presented in one of our previous works \cite{elabed2011security}. The proposed method has shown its efficiency in evaluating and comparing biometric systems. \subsection{Performance Evaluation of Biometric Systems} \label{sec:performance_evaluation} The performance evaluation of biometric systems is now carefully considered in the biometric research field. A reliable evaluation methodology is needed to highlight the benefit of a new biometric system. Many efforts have been made to achieve this objective.
We present below an overview of the performance metrics, followed by the biometric research benchmarks, as an illustration of the evaluation methodologies used in the literature for comparing biometric systems. \subsubsection{Performance metrics} \label{sec:performance_metrics} Unlike traditional authentication methods, biometric systems do not provide a one hundred percent reliable answer, and such a response is in fact impossible to obtain. The comparison result between the acquired biometric sample and its corresponding stored template is expressed as a distance score. If the score is lower than the predefined decision threshold, then the system accepts the claimant; otherwise the claimant is rejected. This threshold is defined according to the security level required by the application. Figure \ref{fig:scores_distribution} illustrates the theoretical distribution of the genuine and impostor scores. This figure shows that the errors depend on the chosen threshold. Hence, it is important to quantify the performance of biometric systems. The International Organization for Standardization ISO/IEC 19795-1 \cite{iso2006performance} proposes several statistical metrics to characterize the performance of a biometric system, such as: \begin{itemize} \item \emph{Failure-to-enroll rate (FTE)}: proportion of the user population for whom the biometric system fails to capture or extract usable information from a biometric sample; \item \emph{Failure-to-acquire rate (FTA)}: proportion of verification or identification attempts for which a biometric system is unable to capture a sample or locate an image or signal of sufficient quality; \item \emph{False Acceptance Rate (FAR)} and \emph{False Rejection Rate (FRR)}: the FAR is the proportion of impostors that are accepted by the biometric system, while the FRR is the proportion of genuine users that are incorrectly denied.
The computation of these error rates is based on the comparison of the scores against a threshold (the direction of the comparison is reversed if the scores represent similarities instead of distances). The FRR and FAR are respectively computed (in the case of distance scores) as in (\ref{eq:frr}) and (\ref{eq:far}), where $intra$ (respectively $inter$) denotes the set of intrascores (respectively interscores), $Card(set)$ is the cardinality of the set given in argument, $thr$ is the decision threshold, and $\mathds{1}$ is the indicator function. \begin{equation} \label{eq:frr} FRR = \frac{\sum_{score \in intra}\mathds{1}\{score > thr\}}{Card(intra)} \end{equation} \begin{equation} \label{eq:far} FAR = \frac{\sum_{score \in inter}\mathds{1}\{score \le thr\}}{Card(inter)} \end{equation} \item \emph{Receiver operating characteristic (ROC) curve}: the ROC curve is obtained by computing the pair (FAR, FRR) for each tested threshold. It plots the FRR versus the FAR. The aim of this curve is to present the tradeoff between FAR and FRR and to give a quick overview of the system performance and security for all parameter configurations. \item \emph{Equal error rate (EER)}: it is the value where both error rates, FAR and FRR, are equal (\emph{i.e.}, FAR = FRR). It is a good indicator, and the most commonly used one, for evaluating and comparing biometric systems. In other words, the lower the EER value, the higher the accuracy of the system.
Using the ROC curve, the EER is computed by selecting the pair (FAR, FRR) having the smallest absolute difference (\ref{eq:precision}), giving the threshold $\tau^*$: \begin{equation} \label{eq:precision} \tau^* = \underset{\tau \in ROC}{\operatorname{argmin}} \; \lvert FAR_{\tau}-FRR_{\tau} \rvert \nonumber \end{equation} \noindent and returning their average (\ref{eq:eer}): \begin{equation} \label{eq:eer} EER = \frac{FAR_{\tau^*} + FRR_{\tau^*}}{2} \end{equation} \noindent In this way, we obtain the closest approximation of the EER with the smallest precision error. The classical EER computing algorithm is presented in Figure~\ref{fig:sloweer} \footnote{Another, slower, way of computing it would be to test each unique score of the intrascores and interscores sets, but this would require far too many iterations. We refer to it as ``whole'' later in the paper.}. From Figure~\ref{fig:sloweer}, we can see that the complexity is in $O(n \cdot m)$, with $n$ the number of thresholds used in the computation and $m$ the number of scores in the dataset. As it is impossible to reduce $m$, we have to find a better method which reduces $n$. We did not find, in the literature, any method reducing the computation time needed to obtain this EER. \begin{figure}[h!]
\footnotesize \begin{algorithmic} \STATE $ROC \leftarrow [] $ \STATE $EER \leftarrow 1.0 $ \STATE $DIFF \leftarrow 1.0 $ \STATE $START \leftarrow min(scores)$ \STATE $END \leftarrow max(scores)$ \FOR{$\tau$ from $START$ to $END$ in $N$ steps} \STATE $FAR \leftarrow$ compute $FAR$ for $\tau$ \STATE $FRR \leftarrow$ compute $FRR$ for $\tau$ \STATE append $(FAR,FRR)$ to $ROC$ \IF{$abs(FAR-FRR)<DIFF$} \STATE $DIFF \leftarrow abs(FAR-FRR)$ \STATE $EER \leftarrow (FAR+FRR)/2$ \ENDIF \ENDFOR \RETURN $EER$, $ROC$ \end{algorithmic} \caption{Classical EER computing algorithm.} \label{fig:sloweer} \end{figure} \end{itemize} \begin{figure}[!ht] \centering \includegraphics[width=0.7\linewidth]{scores_distribution.png} \caption{Distribution of genuine users and impostor scores.} \label{fig:scores_distribution} \end{figure} \subsubsection{Biometrics Datasets} \label{sec:biometrics_benchmarks} A public dataset allows researchers to test their algorithms and compare them with those from the state of the art. Building a large and significant dataset takes a lot of time and energy, so it is very convenient to download one for research purposes. We present in this section an overview of the datasets used in this paper. Table \ref{tab:databases} presents a summary of these datasets. \begin{itemize} \item \textbf{Biometric Scores Set - Release 1 (BSSR1)}\\ The BSSR1~\cite{bssr1url} database is a collection of score sets from different biometric systems. In this study, we are interested in the subset containing the scores of two facial recognition systems and the scores of a fingerprint recognition system applied to two different fingers, for $512$ users. This database has been used many times in the literature~\cite{nandakumar2008likelihood,sedgwick2005preliminary}. \item \textbf{BANCA}\\ The second database is a subset of scores produced from the BANCA database~\cite{bancascores}.
The selected scores correspond to the following labelled systems: \begin{enumerate} \item IDIAP\_voice\_gmm\_auto\_scale\_25\_100\_pca.scores \item SURREY\_face\_nc\_man\_scale\_100.scores \item SURREY\_face\_svm\_man\_scale\_0.13.scores \item UC3M\_voice\_gmm\_auto\_scale\_10\_100.scores \end{enumerate} The database has two subsets, G1 and G2. The G1 set is used as the learning set, while the G2 set is used as the validation set. \item \textbf{PRIVATE}\\ The last database is a chimeric one we have created for this purpose by combining two public biometric template databases: the AR database~\cite{martinez1998ar} for facial recognition and the GREYC keystroke database~\cite{giot2009database} for keystroke dynamics~\cite{Monrose2000351,giot2011keystroke}. The AR database is composed of frontal facial images of $126$ individuals under different facial expressions, illumination conditions or occlusions. These images have been taken during two different sessions with $13$ captures per session. The GREYC keystroke database contains captures from $133$ individuals acquired over several sessions spanning two months. Users were asked to type the password "greyc laboratory" $6$ times on a laptop keyboard and $6$ times on a USB keyboard, interlacing the typings (once on one keyboard, then once on the other). We have selected the first $100$ individuals of the AR database and associated each of them with an individual from a subset of the GREYC keystroke database having $5$ sessions of captures. We then used the first $10$ captures to create the biometric reference of each user and the $16$ remaining ones to compute the intra and inter scores. These scores have been computed by using two different methods for face recognition and two other ones for keystroke dynamics.
\end{itemize} \begin{table}[!ht] \caption{Summary of the different databases used to validate the proposed method} \small \label{tab:databases} \centering \begin{tabular}{|l|r|r|r|}\hline \textbf{Nb of} & \textbf{BSSR1} & \textbf{PRIVATE} & \textbf{BANCA}\\\hline\hline users & 512 & 100 & 208 \\ \hline intra tuples & 512 & 1600 & 467 \\ \hline inter tuples & 261632 & 158400 & 624 \\ \hline items/tuples & 4 & 5 & 4 \\ \hline \end{tabular} \end{table} \subsection{Multibiometrics} In this part, we focus on the state of the art of multimodal systems involving biometric modalities usable on any computer (keystroke, face, voice, ...). Score fusion is the main process in multimodal systems. Fusion can be applied to the scores provided by the algorithms or to the templates themselves~\cite{raghavendra2009pva}. In the first case, it is necessary to normalize the different scores, as they may not lie in the same range. Different methods can be used for this purpose; the most efficient ones are \emph{zscore}, \emph{tanh} and \emph{minmax}~\cite{jain2005snm}. Different kinds of fusion methods have been applied to biometric systems. The fusion can be done with multiple algorithms of the same modality. For example, in~\cite{hocquet2007}, three different keystroke dynamics implementations are fused with an improvement of the EER, but fewer than 40 users are involved in the database. In~\cite{teh2007statistical}, two keystroke dynamics systems are fused together by using weighted sums for 50 users, but no information on the weight computation is provided. The fusion can also be done across different modalities in order to improve the authentication process. In~\cite{roli2008adaptive}, the authors use both face and fingerprint recognition, and the resulting error rate reduction is exploited when adapting the user's biometric reference.
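For illustration, the three normalization schemes mentioned above can be sketched in Python as follows. These are minimal, assumed implementations: in particular, the \emph{tanh} estimator is simplified to use the plain mean and standard deviation rather than the robust Hampel estimators of the original proposal.

```python
# Minimal sketches of the minmax, zscore and tanh score-normalization
# schemes; the tanh version uses the plain mean/std instead of the
# robust Hampel estimators of the original proposal.
import numpy as np

def minmax_norm(scores):
    """Map scores linearly into [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def zscore_norm(scores):
    """Center on the mean and scale by the standard deviation."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def tanh_norm(scores):
    """Tanh estimator mapping scores into (0, 1), simplified version."""
    s = np.asarray(scores, dtype=float)
    return 0.5 * (np.tanh(0.01 * (s - s.mean()) / s.std()) + 1.0)
```

After such a normalization, scores coming from heterogeneous systems lie in comparable ranges and can be combined, for instance by a weighted sum.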
To our knowledge, there is only one paper on the fusion of keystroke dynamics with another kind of biometric modality (voice recognition): it is presented in~\cite{montalvao2006mbf}, but only 10 users are involved in the experiment. In~\cite{fierrez2003comparative}, multi-modality is applied to fingerprints, speech and face images of 50 individuals. Fusion has been done with an SVM~\cite{vapnik1996theory} with good improvements, especially when using user-specific classifiers. Very few multimodal systems have been proposed for use on classical computers, and the published ones have been validated on small databases. To help solve this problem, we propose a new approach in the following section. \section{Fast EER and Confidence Intervals Estimation} We propose a dichotomic-like EER computing function, in order to quickly approximate its value. Thanks to this computing speed-up, we can use it in time-consuming applications. Finally, we present a confidence interval computing method based on our approximated EER calculation, associated with parallel and distributed computing. \subsection{EER Estimation} The computation time needed to obtain the EER can be quite long. When the EER value needs to be computed many times, a faster way than the standard one is necessary. In the biometric community, the shape of the ROC curve always follows the same pattern: it is a monotonically decreasing function (when we plot the FRR against the FAR, or increasing when we plot 1-FRR against the FAR), and the EER value is the curve's point where $x_{ROC}=y_{ROC}$ (or $FAR=FRR$). Thanks to this fact, the curve representing the difference of $y_{ROC}$ against $x_{ROC}$ is also a monotonically decreasing function from $1$ to $-1$, where the point at $y_{DIFF}=0$ represents the EER (and its value is $x_{DIFF}$, because $x_{ROC}=y_{ROC}$ or $FAR=FRR$).
With this information, we know that to get the EER, we need to find the $x_{DIFF}$ for which $y_{DIFF}$ is as close as possible to zero. An analogue of the classical EER computation would be to incrementally compute $y_{DIFF}$ for each threshold in increasing order and stop when $y_{DIFF}$ changes sign. In this way, we can expect to perform half as many threshold comparisons as in the classical way, if the scores are correctly distributed. A cleverer way is to use something equivalent to a divide and conquer algorithm, like the binary search, and obtain a mean complexity closer to $O(\log(n))$. That is why we have implemented a polytomous version of EER computing: \begin{enumerate} \item We choose $i$ thresholds linearly distributed over the score sets. \item For each threshold $t$ among the $i$ thresholds, we compute the FAR and FRR values ($FAR_t$, $FRR_t$). \item We take the two consecutive thresholds $t_1$ and $t_2$ having $sign(FRR_{t_1}-FAR_{t_1})$ different from $sign(FRR_{t_2}-FAR_{t_2})$. \item We repeat from step 2, selecting $i$ thresholds between $t_1$ and $t_2$ (included), while $|FRR_{t}-FAR_{t}|$ does not reach the expected precision. \end{enumerate} In this way, the number of threshold comparisons is far smaller than in the classical way. The complexity analysis is not an easy task because it depends both on the expected precision and on the choice of $i$; the complexity can be estimated as $O(\log(N))$. Figure~\ref{fig:fasteer} presents the algorithm, while Figure~\ref{fig:expleer} illustrates it by showing the different iterations until the EER value is obtained on a real world dataset. We have chosen $i=5$ points to compute during each iteration. The EER is obtained in five iterations. Circle symbols show the points computed at the current iteration, triangle symbols show the points computed at the previous iterations, and the dotted curve shows the ROC curve obtained if all the points are computed. Very few points are computed to obtain the EER value.
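To make the comparison concrete, the following Python sketch implements both the classical sweep of Figure~\ref{fig:sloweer} and the polytomous search enumerated above, for distance scores. The function names and the handling of degenerate cases (no remaining sign change, iteration cap) are our own illustrative choices, not taken from the paper's implementation.

```python
# Illustrative implementations of the classical EER sweep and of the
# polytomous (fast) search described above; distance scores are assumed,
# so FRR counts intrascores above the threshold and FAR counts
# interscores at or below it.
import numpy as np

def frr(intra, thr):
    return float(np.mean(np.asarray(intra) > thr))

def far(inter, thr):
    return float(np.mean(np.asarray(inter) <= thr))

def classical_eer(intra, inter, n_steps=1000):
    """Linear sweep over n_steps thresholds, keeping the (FAR, FRR)
    pair with the smallest absolute difference."""
    scores = np.concatenate([np.asarray(intra), np.asarray(inter)])
    eer, diff = 1.0, 1.0
    for thr in np.linspace(scores.min(), scores.max(), n_steps):
        fa, fr = far(inter, thr), frr(intra, thr)
        if abs(fa - fr) < diff:
            diff, eer = abs(fa - fr), (fa + fr) / 2.0
    return eer

def fast_eer(intra, inter, i=5, precision=0.003, max_iter=50):
    """Polytomous search: i thresholds per iteration, recursing on the
    sub-interval where FRR - FAR changes sign."""
    intra, inter = np.asarray(intra), np.asarray(inter)
    start = min(intra.min(), inter.min())
    end = max(intra.max(), inter.max())
    for _ in range(max_iter):
        thresholds = np.linspace(start, end, i)
        fars = np.array([far(inter, t) for t in thresholds])
        frrs = np.array([frr(intra, t) for t in thresholds])
        diffs = frrs - fars
        best = int(np.argmin(np.abs(diffs)))
        if abs(diffs[best]) < precision:
            break
        for k in range(i - 1):  # locate the sign change of FRR - FAR
            if np.sign(diffs[k]) != np.sign(diffs[k + 1]):
                start, end = thresholds[k], thresholds[k + 1]
                break
        else:  # no sign change left: stop refining
            break
    return (fars[best] + frrs[best]) / 2.0
```

On a synthetic, well-separated score set, both functions closely agree, while fast_eer evaluates far fewer thresholds than the linear sweep.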
Figure~\ref{fig:expleer_roc} presents the real ROC curve and the ROC curve obtained with the proposed method. We can see that, even if we obtain an approximated version of the real ROC curve, it is really similar around the EER value (cross with the dotted lines). \begin{figure}[!htb] \footnotesize \begin{algorithmic} \STATE $ROC \leftarrow [] $ \STATE $CACHE \leftarrow \{\} $ \STATE $START \leftarrow min(scores)$ \STATE $END \leftarrow max(scores)$ \WHILE{True} \STATE $SDIFF \leftarrow []$ \STATE $THRESHOLDS \leftarrow []$ \FOR{$THRESHOLD$ from $START$ to $END$ in $N$ steps} \IF{not empty $CACHE[THRESHOLD]$} \STATE $FAR,FRR \leftarrow CACHE[THRESHOLD]$ \ELSE \STATE $FAR \leftarrow$ compute $FAR$ for $THRESHOLD$ \STATE $FRR \leftarrow$ compute $FRR$ for $THRESHOLD$ \STATE append $(FAR,FRR)$ to $ROC$ \STATE $CACHE[THRESHOLD] \leftarrow (FAR, FRR)$ \ENDIF \IF{ $abs(FAR-FRR) < PRECISION$ } \STATE $EER \leftarrow (FAR+FRR)/2$ \RETURN $EER$, $ROC$ \ENDIF \STATE append $FAR-FRR$ to $SDIFF$ \STATE append $THRESHOLD$ to $THRESHOLDS$ \ENDFOR \STATE $PSTART \leftarrow -1$ \STATE $PEND \leftarrow -1$ \FOR{ $PIVOT=0$ to $N-2$} \IF{ $sign(SDIFF[PIVOT]) \ne sign(SDIFF[PIVOT+1])$} \STATE $PSTART \leftarrow PIVOT$ \STATE $PEND \leftarrow PIVOT+1$ \STATE \textbf{break} \ENDIF \ENDFOR \COMMENT{PSTART and PEND are set} \STATE $START \leftarrow THRESHOLDS[PSTART]$ \STATE $END \leftarrow THRESHOLDS[PEND]$ \ENDWHILE \end{algorithmic} \caption{Fast EER computing algorithm.} \label{fig:fasteer} \end{figure} \begin{figure}[!htb] \subfloat[Iteration 1]{\includegraphics[width=0.5\linewidth]{explain_0.png}} \subfloat[Iteration 2]{\includegraphics[width=0.5\linewidth]{explain_1.png}}\\ \subfloat[Iteration 3]{\includegraphics[width=0.5\linewidth]{explain_2.png}} \subfloat[Iteration 4]{\includegraphics[width=0.5\linewidth]{explain_3.png}}\\ \subfloat[Iteration 5]{\includegraphics[width=0.5\linewidth]{explain_4.png}} \subfloat[ROC
curve]{\label{fig:expleer_roc}\includegraphics[width=0.45\linewidth]{explain_final.png}} \caption{Points computed by the proposed algorithm when $i=5$. In this case, the EER value is found in five iterations. Each image represents an iteration, with: the real ROC curve, the points computed at the current iteration and the points computed at the previous iterations (different thresholds may produce the same points).} \label{fig:expleer} \end{figure} \clearpage \subsection{Confidence Intervals Estimation} We also provide a method to compute the confidence interval of the EER value. It is based on a bootstrapping method and can be used in a parallelized way. \subsubsection{Bootstrapping} It is interesting to give a confidence interval for an EER value, because we are never totally sure of its exact value. One way is to obtain this confidence interval parametrically, but this requires strong hypotheses on the distribution of the scores (the scores come from independent and identically distributed variables, even for the scores of the same user). As such an assumption is too strict (and probably false), it is possible to use non-parametric confidence intervals instead. One of these non-parametric methods is called the ``bootstrap''~\cite{johnson2001introduction}. Such a method is often used when the distribution is unknown or the number of samples is too low to correctly estimate the interval. The main idea is to re-sample the scores several times and compute the EER value for each of these re-samplings. The bootstrapping method works as follows: \begin{enumerate} \item Use the $intra$ and $inter$ scores to compute the EER $\hat{\chi}$. \item Resample the $intra$ and $inter$ scores $K$ times and store them in $intra^i$ and $inter^i$ ($0<i \le K$). \begin{itemize} \item Generate the resampled $intra^i$ scores by sampling $Card(intra)$ scores with replacement from $intra$. \item Generate the $inter^i$ scores by sampling $Card(inter)$ scores with replacement from $inter$.
\end{itemize} \item Compute the $K$ EERs ($\chi^i$) for each couple of $intra^i$ and $inter^i$ ($0<i \le K$). \item Store the $K$ residuals $e^i = \hat{\chi} - \chi^i$. \item The $100(1-\alpha)$\% confidence interval of the EER is formed by taking the interval from $\hat{\chi}-e_{Upper}$ to $\hat{\chi}-e_{Lower}$, where $e_{Lower}$ and $e_{Upper}$ respectively represent the $\alpha/2$th and the $1-\alpha/2$th percentiles of the distribution of $e$. \end{enumerate} Figure~\ref{fig:residuals} presents the residuals of one run of the bootstrapped method on a real world dataset, with the lower and upper limits used to compute the confidence interval. \begin{figure}[!htb] \includegraphics[width=.9\linewidth]{residuals_4_dicho.png} \caption{Histogram of residuals and $\alpha/2$ and $1-\alpha/2$ percentiles for a confidence interval at 90\%.} \label{fig:residuals} \end{figure} We can see that the time-consuming part of this algorithm is computing the EER $K$ times. As all the EER computations are independent of each other, the $K$ computations can be done in parallel. \subsubsection{Parallelization} We propose three different ways to compute the confidence interval: \begin{itemize} \item \emph{Single}. The single version does all the computations in a sequential manner (\emph{cf.} figure~\ref{fig_archi_conf_single}). The EER with the original scores is computed. The whole set of resampled scores is created. The EER of each new distribution is computed. The confidence interval is computed from the results. \item \emph{Parallel}. The parallel version uses the several cores or processors of the computer used for the computation (\emph{cf.} figure~\ref{fig_archi_conf_parallel}). In this case, several EERs may be computed at the same time (in the best case, with a computer having $n$ processing units, we can compute $n$ different results at the same time). The procedure is the following: the EER with the original scores is computed.
The whole set of resampled scores is created. The main program distributes the $K$ EER computations over the different processing units. Each processing unit computes an EER and returns the result, until the main program stops sending it new data. The results are merged together. The confidence interval is computed from the results. \item \emph{Distributed}. The distributed version uses several computers to speed up the computation (\emph{cf.} figure~\ref{fig_archi_conf_distributed}). The computation is done in a parallelized way on each computer. In this case, many more EERs can be computed at the same time. The main program generates a set $S=\{S_1,...S_T\}$ of $T$ values representing $T$ subworks, where subwork $i$ must compute $S_i$ EERs (thus $\sum_{i} S_i=K$). The procedure is the following: the EER with the original scores is computed. The main program sends the intra and inter scores to each worker (\emph{i.e.}, a computer). The main program distributes the $T$ numbers to the workers. Each time a worker receives such a number ($S_i$), it computes the $S_i$ resampled sets. Then, it computes (using the parallel scheme) the $S_i$ EERs by distributing them over its processing units. It merges the $S_i$ EERs together and sends them to the main program. The main program merges all the results together. The confidence interval is computed from the results. \end{itemize} \begin{figure}[!htb] \centering \subfloat[Single]{\includegraphics[width=.35\linewidth]{single_computing}\label{fig_archi_conf_single}} \subfloat[Parallel]{\includegraphics[width=.55\linewidth]{parallel_computing}\label{fig_archi_conf_parallel}}\\ \subfloat[Distributed]{\includegraphics[width=.55\linewidth]{distributed_computing}\label{fig_archi_conf_distributed}} \caption{Different architectures to compute the confidence interval.\label{fig_archi_conf}} \end{figure} We can see that the three schemes are totally different.
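The bootstrap procedure and its parallel variant can be sketched in Python (the language used for the test scripts). This is a minimal illustration under stated assumptions: `compute_eer` below is a plain fixed-grid estimator standing in for the polytomous method, and `ProcessPoolExecutor` stands in for the joblib-based parallelization described later.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def compute_eer(intra, inter, steps=200):
    """Fixed-grid EER estimate (stand-in for the polytomous method):
    sweep thresholds and return the point where FAR ~= FRR.
    Similarity scores are assumed (higher = more similar)."""
    thresholds = np.linspace(min(intra.min(), inter.min()),
                             max(intra.max(), inter.max()), steps)
    frr = np.array([(intra < t).mean() for t in thresholds])   # genuine rejected
    far = np.array([(inter >= t).mean() for t in thresholds])  # impostors accepted
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2

def bootstrap_ci(intra, inter, K=1000, alpha=0.10, workers=4, seed=0):
    """100(1-alpha)% residual-bootstrap confidence interval of the EER."""
    rng = np.random.default_rng(seed)
    eer_hat = compute_eer(intra, inter)
    # K resampled (intra, inter) pairs, drawn with replacement
    intras = [rng.choice(intra, size=intra.size, replace=True) for _ in range(K)]
    inters = [rng.choice(inter, size=inter.size, replace=True) for _ in range(K)]
    # the K EER computations are independent, so distribute them over workers
    with ProcessPoolExecutor(max_workers=workers) as pool:
        eers = np.array(list(pool.map(compute_eer, intras, inters)))
    residuals = eer_hat - eers
    e_lower = np.percentile(residuals, 100 * alpha / 2)
    e_upper = np.percentile(residuals, 100 * (1 - alpha / 2))
    return eer_hat - e_upper, eer_hat - e_lower
```

The distributed scheme simply replaces the local pool with workers on several machines, each receiving its share $S_i$ of the $K$ resamplings.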
The parallel version may be used on all recent computers having several processing units, while the distributed version needs several computers connected through a network. \clearpage \section{Protocol} \subsection{Database Sets} In order to perform these evaluations, we have used three different biometric databases presented in section~\ref{sec:biometrics_benchmarks}. \subsection{Evaluation of the EER Computing} The two different algorithms for EER computing have been run on five different sets of scores (three of keystroke dynamics and two of face recognition, generated with the PRIVATE database) with various parameters. We call \emph{classic} the classical way of computing the EER and \emph{polyto} the proposed version of the algorithm. The classic way is tested using 50, 100, 500 and 1000 steps to compute the EER. The polytomous way is tested using between 3 and 7 steps and a precision of 0.01, 0.005 and 0.003. The aim of these tests is to determine whether the proposed method outperforms the classical one, and which parameters suit it best.\\ \subsection{Utility for Non-parametric Confidence Interval} Confidence intervals also provide interesting information on the performance of a system. The properties of the score distribution may forbid the use of parametric methods to compute them. This is where the bootstrap method helps, by computing the EER several times on resampled score sets. We have measured the computation time of the confidence interval for two systems of each database, giving six different systems. We have computed the EER value in three different ways: the polytomous one, the classical one (with 1000 steps), and another one we call whole.
The whole method is similar to the classical one, except that it uses all the scores present in the intra- and inter-score arrays as thresholds, instead of artificially generating them in a predefined interval (thus, there may be far more thresholds than in the classic method, or far fewer, depending on the number of available scores in the database). The results are then validated with confidence intervals. The test scripts were written in the Python language. The EER computing methods and the resampling methods have been compiled to machine code thanks to the Cython~\cite{seljebotn2009fast} framework (which speeds up computation). The parallelization is done with the joblib~\cite{joblib} library. The distributed version uses the task management provided by IPython~\cite{perez2007ipython}. The single and parallel versions have been launched on a recent Linux machine with an Intel® Core™ i5 with 4 cores at 3.20GHz and 4GB of RAM. Four processes are launched at the same time. For the distributed setting, the orchestrator is the same machine. It distributes the jobs over three multicore machines: itself, another similar machine, and a third machine with an Intel® Xeon® with 8 cores at 2.27GHz and 4GB of RAM (the controller sends two more jobs to this last machine). \subsection{Experimental Results} \subsubsection{Evaluation of the EER Computing} Table~\ref{tab:resspeed1} presents the results obtained with the first tested biometric system. We present the name of the method, the precision error while computing the EER, the computation time in milliseconds and the number of comparisons involved (each comparison corresponds to the comparison of a threshold against the whole set of intra and inter scores). The real computation time taken by the comparisons is given in~(\ref{eq:time}), where $n$ is the number of thresholds to compare, $A$ is the time needed to do one comparison, and $B$ and $C$ depend on the algorithm.
\begin{equation} \label{eq:time} T = n * (A*(Card\{intra\} + Card\{inter\}) + B) + C \end{equation} We can see that the computation time is highly related to the number of comparisons and the size of the score set. Using the Kruskal-Wallis test at a $95\%$ confidence level, the proposed method significantly outperforms the classical method in terms of error (\emph{p} value $=0.0305$) and computation time (\emph{p} value $=0.002562$). The obtained results are broadly similar for the five tested biometric modalities. We can observe that, in the classic method, using 50 steps does not give precise enough results, while using 1000 steps gives a very good precision but is really time consuming; depending on the dataset, 500 steps seems to be a good compromise between precision error and computation time. In all the polytomous configurations, the computation time is far better than that of the fastest classic method (50 steps), while the precision is greater. This precision is always better than the classic method with 100 steps and approaches or exceeds the precision obtained with 1000 steps. This time gain is due to the lower number of comparisons involved. In an $n$-step classical computation, we need to check $n$ thresholds, while in the polytomous way this number depends both on the dataset and the required precision: with our datasets, it varies from 8 to 35, which is always lower than 50. As the computation time depends only on this value, the fastest algorithms are the ones having the smallest number of tests to complete. \\ Based on the number of comparisons (and the timing computation), Table~\ref{tab:resspeedtot} presents the best results for each modality (when several methods return the same number of iterations, the most precise is chosen). \begin{table}[!tb] \centering \small \caption{ Comparison of the Different EER Computing Methods And Configurations On The First Test Set. LABEL presents the used method.
ERROR is the difference between the FAR and FRR values. TIME is the time needed to compute the EER value. COMP. is the number of threshold comparisons performed.} \label{tab:resspeed1} \begin{tabular}{|l|c|c|c|}\hline \textbf{LABEL} & \textbf{ERROR (\%)} & \textbf{TIME (ms.)} & \textbf{COMP.}\\\hline classic\_50 & 8.37 & 459 & 50\\\hline classic\_100 & 4.13 & 940 & 100\\\hline classic\_500 & 0.20 & 4700 & 500\\\hline classic\_1000 & 0.20 & 9310 & 1000\\\hline \hline polyto\_3\_0.010 & 0.30 & \textbf{110} & \textbf{11}\\\hline polyto\_3\_0.005 & \textbf{0.07} & 139 & 14\\\hline polyto\_3\_0.003 & \textbf{0.07} & 140 & 14\\\hline polyto\_4\_0.010 & 0.40 & 140 & 15\\\hline polyto\_4\_0.005 & 0.20 & 149 & 16\\\hline polyto\_4\_0.003 & 0.10 & 169 & 18\\\hline polyto\_5\_0.010 & 0.30 & 150 & 16\\\hline polyto\_5\_0.005 & \textbf{0.07} & 190 & 20\\\hline polyto\_5\_0.003 & \textbf{0.07} & 179 & 20\\\hline polyto\_6\_0.010 & 0.40 & 140 & 15\\\hline polyto\_6\_0.005 & 0.10 & 179 & 19\\\hline polyto\_6\_0.003 & 0.10 & 179 & 19\\\hline polyto\_7\_0.010 & \textbf{0.07} & 190 & 21\\\hline polyto\_7\_0.005 & \textbf{0.07} & 190 & 21\\\hline polyto\_7\_0.003 & \textbf{0.07} & 200 & 21\\\hline \end{tabular} \end{table} \begin{table}[!tb] \centering \caption{Fastest EER Computing Parameters For Each Modality} \label{tab:resspeedtot} \small \begin{tabular}{|l|l|c|c|c|}\hline \textbf{DB}&\textbf{LABEL} & \textbf{ERROR (\%)} & \textbf{TIME} & \textbf{COMP.}\\\hline 1& polyto\_3\_0.010 & 0.30 & 110 & 11\\\hline 2& polyto\_3\_0.010 & 0.05 & 50 & 5\\\hline 3& polyto\_6\_0.003 & 0.09 & 60 & 7\\\hline 4& polyto\_3\_0.010 & 0.14 & 89 & 10\\\hline 5& polyto\_4\_0.010 & 0.29 & 70 & 7\\\hline \end{tabular} \end{table} We can argue that the proposed method is better, both in terms of speed and precision error, than the classical way of computing. Based on the results on our datasets, the configuration using 3 steps and a precision of 0.010 seems to be the best compromise between speed and precision.
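To illustrate why the number of threshold evaluations drives the cost, the sketch below contrasts the classic fixed-grid sweep with an iterative interval-narrowing search in the spirit of the polytomous method. This is a simplified reconstruction under the monotonic-ROC assumption, not the exact algorithm of the paper.

```python
import numpy as np

def rates(intra, inter, t):
    # Similarity scores assumed: genuine scores below t are rejected (FRR),
    # impostor scores at or above t are accepted (FAR).
    return (inter >= t).mean(), (intra < t).mean()

def eer_classic(intra, inter, steps=1000):
    """Fixed grid: always evaluates `steps` thresholds."""
    lo = min(intra.min(), inter.min()); hi = max(intra.max(), inter.max())
    far, frr = zip(*(rates(intra, inter, t) for t in np.linspace(lo, hi, steps)))
    far, frr = np.array(far), np.array(frr)
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2, steps

def eer_iterative(intra, inter, steps=3, precision=0.01, max_rounds=50):
    """Narrow the interval around the FAR/FRR crossing until the gap falls
    below `precision`; far fewer evaluations on a monotonic ROC."""
    lo = min(intra.min(), inter.min()); hi = max(intra.max(), inter.max())
    n_eval, best = 0, (1.0, None)
    for _ in range(max_rounds):
        grid = np.linspace(lo, hi, steps + 2)
        diffs = np.empty(len(grid))
        for j, t in enumerate(grid):
            far, frr = rates(intra, inter, t); n_eval += 1
            diffs[j] = far - frr
            if abs(diffs[j]) < best[0]:
                best = (abs(diffs[j]), (far + frr) / 2)
            if abs(diffs[j]) <= precision:
                return (far + frr) / 2, n_eval
        # FAR - FRR decreases with t: keep the sub-interval where it changes sign.
        k = int(np.argmax(diffs <= 0))
        lo, hi = grid[max(k - 1, 0)], grid[k]
    return best[1], n_eval  # fallback: best gap seen if precision is unreachable
```

Both functions also return the number of threshold evaluations, which is the quantity reported in the COMP. column above.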
We can now argue that the proposed EER computation will speed up genetic algorithms using the EER as fitness function. \subsubsection{Utility for Non-parametric Confidence Interval} All the methods, under all the implementations, give similar confidence intervals. We do not discuss this point further, because we are only interested in the computation time. Table~\ref{tab_computation_confidence} gives, for each method under each implementation, the mean computation time over the six different biometric systems. We can observe that, on average, the polytomous version is far faster than the other methods (classic and whole). The distributed implementation is also faster than the other implementations (parallel, single). Using the Kruskal-Wallis test at a $95\%$ confidence level, the computation time of the proposed method is significantly lower than both the classical (\emph{p} value $=0.00651$) and whole (\emph{p} value $=0.004407$) schemes. There was no significant difference in computation time between the classical and whole schemes (\emph{p} value $=0.8002$). \begin{table} \centering \caption{Summary of the computation time in seconds. Times are averaged over all the sets of scores} \label{tab_computation_confidence} \begin{tabular}{|c|c|c|c||c|}\hline & Polytomous & Classic & Whole & Mean \\ \hline Distributed & 14.93 & 94.25 & 1009.28 & 372.82\\ \hline Parallel & 14.95 & 276.60 & 3154.32 & 1148.63 \\ \hline Single & 18.83 & 523.79 & 6733.68 & 2425.43\\ \hline \hline Mean & 16.24 & 298.22 & 3632.43 & \\ \hline \end{tabular} \end{table} \subsubsection{Discussion} We have demonstrated the superiority of our EER estimation method over the classical method concerning the computation time. However, the method stops when the required precision is obtained. As the method is iterative, it is not parallelizable when we only want a single EER. However, using a grid computing method greatly improves the computation of confidence intervals.
\section{Application to Multibiometrics Fusion Function Configuration} We propose a biometric fusion system based on the generation of a fusion function parametrized by a genetic algorithm, together with a fast method to compute the EER (used as the fitness function) in order to speed up the computing time of the genetic algorithm. \subsection{Method} We have tested three different kinds of score fusion methods whose parameters are automatically set by genetic algorithms~\cite{mitchell1998introduction}. These functions are presented in (\ref{eq:ga1}), (\ref{eq:ga2}) and (\ref{eq:ga3}), where $n$ is the number of available scores (\emph{i.e.}, the number of biometric systems involved in the fusion process), $w_i$ the multiplication weight of score $i$, $s_i$ the score $i$ and $x_i$ the exponent weight of score $i$. (\ref{eq:ga1}) is the commonly used weighted sum (note that in this version, the sum of the weights is not constrained to equal 1), while the two others, to our knowledge, have never been used in multibiometrics. We have empirically designed them in order to give more weight to higher scores. \begin{equation} \label{eq:ga1} ga1 = \sum_{i=0}^{n}{w_i*s_i} \end{equation} \begin{equation} \label{eq:ga2} ga2 = \prod_{i=0}^{n}{s_i^{x_i}} \end{equation} \begin{equation} \label{eq:ga3} ga3 = \sum_{i=0}^{n}{w_i*s_i^{x_i}} \end{equation} The aim of the genetic algorithm is to optimize the parameters of each function in order to obtain the best fusion function. Each parameter (the $w_i$ and $x_i$) is stored in a chromosome of real numbers. The fitness function is the same for the three genetic algorithms. It is processed in two steps: \begin{itemize} \item \emph{fusion}: the generated function (\ref{eq:ga1}), (\ref{eq:ga2}) or (\ref{eq:ga3}) is evaluated on the whole set of scores; \item \emph{error computing}: the EER is computed on the result of the fusion. We use the polytomous version of the computation in order to greatly speed up the total computation time.
\end{itemize} \subsection{Experimental Protocol} \subsubsection{Design of Fusion Functions} Table~\ref{tab:gaparams} presents the parameters of the genetic algorithms. The genetic algorithms have been trained on a learning set composed of half of the intrascores and half of the interscores of a database, and they have been verified with a validation set composed of the other scores. The three databases have been used separately. \begin{table}[!tb] \caption{Configuration of the Genetic Algorithms} \centering \label{tab:gaparams} \small \begin{tabular}{|l|p{6cm}|}\hline \textbf{Parameter} & \textbf{Value}\\\hline\hline Population & 5000 \\\hline Generations & 500 \\\hline Chromosome signification & weights and powers of the fusion functions \\\hline Chromosome values interval & $[-10;10]$ \\\hline Fitness & polytomous EER on the generated function\\\hline Selection & normalized geometric selection (probability of 0.9)\\\hline Mutation & boundary, multi non uniform, non uniform, uniform \\\hline Cross-over & heuristic crossover\\\hline Elitism & True \\\hline \end{tabular} \end{table} The generated functions are compared to three methods of the state of the art, $sum$, $mul$ and $min$, which have been explored in~\cite{jain2005snm,Kittler1998CC}. Table~\ref{tab:initperf} presents, for each database, the EER value of each of its biometric methods (noted $sn$ for method $n$), as well as the performance of the fusion functions of the state of the art.
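The three candidate fusion functions are straightforward to evaluate over a whole score set at once. The sketch below uses illustrative weights and exponents (in the paper these are the values found by the genetic algorithm within $[-10;10]$) and assumes scores normalised to $(0,1]$, with one row per comparison and one column per biometric system:

```python
import numpy as np

def ga1(S, w):
    """Weighted sum of scores (weights need not sum to 1)."""
    return S @ w

def ga2(S, x):
    """Product of scores raised to learned exponents."""
    return np.prod(S ** x, axis=1)

def ga3(S, w, x):
    """Weighted sum of powered scores."""
    return (w * S ** x).sum(axis=1)

# Illustrative score matrix: 2 comparisons, 3 biometric systems.
S = np.array([[0.9, 0.8, 0.7],
              [0.2, 0.3, 0.1]])
w = np.array([2.0, 1.0, 0.5])   # hypothetical weights
x = np.array([1.5, 0.8, 1.0])   # hypothetical exponents
```

The fitness of a candidate chromosome $(w, x)$ is then the EER computed on the fused intra- and inter-scores, which is where the fast polytomous estimator pays off.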
\begin{table}[!tb] \caption{Performance (EER) of the Biometric Systems ($s1$, $s2$, $s3$, $s4$), and the State Of The Art Fusion Functions ($sum$,$min$,$mul$) on the Three Databases} \label{tab:initperf}\centering \small \begin{tabular}{|ll|c|c|}\hline \multicolumn{2}{|l|}{\textbf{Method}} & \textbf{Learning} & \textbf{Validation}\\\hline \multicolumn{4}{|c|}{BANCA}\\\hline \multirow{4}{*}{Biometric systems} &$s1$&0.0310&0.0438\\%\hline &$s2$&0.0680&0.1154\\%\hline &$s3$&0.0824&0.0897\\%\hline &$s4$&0.0974&0.0732\\\hline \multirow{3}{*}{State of the art fusion} &$sum$&\textbf{0.0128}&\textbf{0.0128}\\%\hline &$min$&0.0385&0.0438\\%\hline &$mul$&\textbf{0.0128}&\textbf{0.0128}\\\hline \multicolumn{4}{|c|}{BSSR1}\\\hline \multirow{4}{*}{Biometric systems} &$s1$&0.0425&0.0430\\%\hline &$s2$&0.0553&0.0620\\%\hline &$s3$&0.0861&0.0841\\%\hline &$s4$&0.0511&0.0454\\\hline \multirow{3}{*}{State of the art fusion} &$sum$&\textbf{0.0116}&\textbf{0.0070}\\%\hline &$min$&0.0436&0.0504\\%\hline &$mul$&0.0117&\textbf{0.0070}\\\hline \multicolumn{4}{|c|}{PRIVATE}\\\hline \multirow{4}{*}{Biometric systems} &$s1$&0.1161&0.1153\\%\hline &$s2$&0.1522&0.1569\\%\hline &$s3$&0.0603& 0.0621\\%\hline &$s4$&0.2815&0.3143\\\hline \multirow{3}{*}{State of the art fusion} &$sum$&0.0256&\textbf{0.0278}\\%\hline &$min$&0.1397&0.1471\\%\hline &$mul$&\textbf{0.0252}&0.0281\\\hline \end{tabular} \end{table} We can see that the biometric methods from PRIVATE have more biometric verification errors than those of the other databases. Using the Kruskal-Wallis test, the $sum$ (\emph{p} value $=0.0038$) and $mul$ (\emph{p} value $=0.0038$) operators outperformed the $min$ operator. There was no significant difference (\emph{p} value $=0.935$) between the $sum$ and $mul$ operators.\\ \subsubsection{Magnitude Of the Gain in Computation Time} We also want to prove that using the proposed EER computation method improves the computation time of the genetic algorithm run.
To do that, the previously described process has been repeated twice: \begin{itemize} \item using the proposed EER computing method with the following configuration: 5 steps and stop at a precision of 0.01; \item using the classical EER computing method with 100 steps. \end{itemize}~ The total computation time is saved in order to compare the speed of the two systems. These tests have been done on a Pentium IV machine with 512MB of RAM, with the Matlab programming language.\\ \subsection{Experimental Results} \subsubsection{Design of Fusion Functions} The EER of each generated function on each database is presented in Table~\ref{tab:globalres} for the learning and validation sets, while Figure~\ref{fig:resfusion} presents their ROC curves on the validation set. We can see that the proposed generated functions are all globally better than the ones from the state of the art (by comparison with Table~\ref{tab:initperf}), and the obtained EER is always better than those of $sum$ and $mul$. The two new fusion functions ((\ref{eq:ga2}) and (\ref{eq:ga3})) give similar or better results than the weighted sum (\ref{eq:ga1}). We do not observe over-fitting problems: the results are promising both on the learning and validation sets.
\begin{table*}[!tb] \centering \caption{Configurations Of The Three Weighted Functions For Each Database} \label{tab:gares} \small \begin{tabular}{|l|l|}\hline \multicolumn{2}{|c|}{BANCA}\\\hline GA & configuration \\\hline ga1 &$8.7229*s_0 + 2.3092*s_1 + 2.0626*s_2 + 2.9687*s_3 $ \\\hline ga2 &$s_0^{7.4721}*s_1^{1.8091}*s_2^{2.1255}*s_3^{0.8874} $ \\\hline ga3 &$-2.3079*s_0^{-6.0105} + -8.5217*s_1^{-3.1367} + -8.6644*s_2^{-3.5730} + -7.0890*s_3^{7.4634} $ \\\hline \hline \multicolumn{2}{|c|}{BSSR1}\\\hline GA & configuration \\\hline ga1 & $2.3270*s_0 + 0.8790*s_1 + 0.3661*s_2 + 9.4978*s_3 $\\\hline ga2 & $s_0^{2.0650} * s_1^{0.5660} * s_2^{1.6168} * s_3^{9.1864} $\\\hline ga3 & $5.7285*s_0^{4.6227} + 4.2471*s_1^{6.8192} + 9.7541*s_2^{6.7588} + 5.9431*s_3^{0.9251} $\\\hline \hline \multicolumn{2}{|c|}{PRIVATE}\\\hline GA & configuration \\\hline ga1 & $6.7755*s_0 + 2.3841*s_1 + 5.9128*s_2 + 2.6919*s_3 $\\\hline ga2 & $ s_0^{6.2215} * s_1^{4.1538} * s_2^{6.6853} * s_3^{3.9254} $\\\hline ga3 & $4.8647*s_0^{1.6977} + 8.3564*s_1^{5.9125} + 4.7450*s_2^{2.2407} + 2.0707*s_3^{0.7681} $\\\hline \end{tabular} \end{table*} \begin{figure*}[!tb] \centering {\includegraphics[width=0.9\linewidth]{ROC_dbI.png}} \caption{ROC Curve Of The Generated Multibiometrics Fusion Functions on the Validation Set of the BANCA dataset} \end{figure*} \begin{figure*}[!tb] {\includegraphics[width=0.9\linewidth]{ROC_dbII.png}} \caption{ROC Curve Of The Generated Multibiometrics Fusion Functions on the Validation Set of the BSSR1 dataset} \end{figure*} \begin{figure*}[!tb] {\includegraphics[width=0.9\linewidth]{ROC_dbIII.png}} \caption{ROC Curve Of The Generated Multibiometrics Fusion Functions on the Validation Set of the Private dataset} \label{fig:resfusion} \end{figure*} \begin{table}[!tb] \centering \caption{ EER For Training and Validation Sets And Computation Time Gain By Using Our EER Computation Method} \label{tab:globalres} \small \begin{tabular}{|l|c|c||c|}\hline \textbf{Function}&
\textbf{Train EER} & \textbf{Test EER} & \textbf{Gain (\%)}\\\hline \multicolumn{4}{|c|}{BANCA}\\\hline (\ref{eq:ga1}): ga1 & \textbf{0.0032} & 0.0091 &\textbf{61.29} \\\hline (\ref{eq:ga2}): ga2 & \textbf{0.0032} & 0.0091 &41.84 \\\hline (\ref{eq:ga3}): ga3 & 0.0037 & \textbf{0.0053} &43 \\\hline \multicolumn{4}{|c|}{BSSR1}\\\hline (\ref{eq:ga1}): ga1 & 0.000596 & \textbf{0.0038} & \textbf{78.32} \\\hline (\ref{eq:ga2}): ga2 & \textbf{0.000532} & \textbf{0.0038} & 64.77 \\\hline (\ref{eq:ga3}): ga3 & 0.000626 & \textbf{0.0038} & 28.49 \\\hline \multicolumn{4}{|c|}{PRIVATE}\\\hline (\ref{eq:ga1}): ga1 & 0.019899& 0.0241 & \textbf{77.66} \\\hline (\ref{eq:ga2}): ga2 & \textbf{0.019653} & 0.0244 & 46.5 \\\hline (\ref{eq:ga3}): ga3 & 0.020152 & \textbf{0.0217} & 55.03 \\\hline \end{tabular} \end{table} We could also expect to obtain even better performance by using more individuals or more generations in the genetic algorithm process, but, in this case, the computation time would become too large. There will always be a tradeoff between security (biometric performance) and computation speed (genetic algorithm performance). In practice, the best individuals were obtained within the first 10 generations, and several runs give approximately the same results, so we may already be close to a global optimum. As a conclusion of this part, we increased the performance of multibiometrics systems with respect to the state of the art by reducing errors by 58\% for BANCA, 45\% for BSSR1 and 22\% for PRIVATE. \subsubsection{Magnitude Of the Gain in Computation Time} Table~\ref{tab:globalres} presents a summary of the performance of the generated methods both in terms of EER and of computation time improvement. The Gain column presents the improvement in computation time between the proposed polytomous EER computation and the classical one with 100 steps. We can observe that, in all cases, the proposed computation method outperforms the classical one (which is not even its slowest configuration).
We can see that this improvement depends both on the cardinality of the set of scores and on the function to evaluate: there are better improvements for (\ref{eq:ga1}). The best gain is about 78\%, while the smallest is about 28\%. \subsubsection{Discussion} Once again, we can observe the interest of our EER estimation method, which allows results to be obtained far more quickly. We can note that the generated functions all produce a monotonically decreasing ROC curve, which allows our method to be used. If the ROC curve did not present this shape, we would be unable to obtain the estimated EER (such drawbacks have been experienced using genetic programming~\cite{giot2011genetic} instead of genetic algorithms). \section{Conclusion and Perspectives} The contribution of this paper is twofold: a fast approximated EER computing method (associated with its confidence interval), and two score fusion functions parametrized thanks to genetic algorithms. Using these two contributions together speeds up the computation time of the genetic algorithm, because its fitness function consists in computing the EER (and thus allows the use of a bigger population). The fast EER computing method has been validated on five different biometric systems and compared to the classical way. Experimental results showed the benefit of the proposed method in terms of precision of the EER value and computation time. The score fusion functions have been validated on three significant multibiometrics databases (two real and one chimeric) having different numbers of scores. The fusion functions parametrized by genetic algorithms always outperform simple state-of-the-art functions ($sum$, $min$, $mul$), and the two new fusion functions have given better or equal results compared to the simple weighted sum. Using the proposed fast EER computing method also considerably speeds up the computation time of the genetic algorithms.
These better results imply that the multibiometrics system has better security (fewer impostors are accepted) and is more pleasant to use (fewer genuine users are rejected). One limitation of the proposed method is related to the shape of the ROC curve and the required precision. In some cases, the method is unable to get the EER at the wanted precision and cannot return a result (we did not encounter this case in these experiments). Our next research will focus on the use of different evolutionary algorithms in order to generate other kinds of complex functions allowing better results. \section*{Acknowledgements} This work was carried out with the financial support of the ANR ASAP project, the Lower Normandy French Region and the French research ministry.
package org.asynchttpclient.providers.grizzly.filters; import org.asynchttpclient.providers.grizzly.filters.events.SSLSwitchingEvent; import org.glassfish.grizzly.Connection; import org.glassfish.grizzly.EmptyCompletionHandler; import org.glassfish.grizzly.Grizzly; import org.glassfish.grizzly.IOEvent; import org.glassfish.grizzly.attributes.Attribute; import org.glassfish.grizzly.filterchain.FilterChain; import org.glassfish.grizzly.filterchain.FilterChainContext; import org.glassfish.grizzly.filterchain.FilterChainEvent; import org.glassfish.grizzly.filterchain.NextAction; import org.glassfish.grizzly.ssl.SSLEngineConfigurator; import org.glassfish.grizzly.ssl.SSLFilter; import javax.net.ssl.SSLEngine; import javax.net.ssl.SSLHandshakeException; import java.io.IOException; import java.util.concurrent.ConcurrentHashMap; /** * SSL Filter that may be present within the FilterChain and may be * enabled/disabled by sending the appropriate {@link SSLSwitchingEvent}. * * @since 2.0 * @author The Grizzly Team */ public final class SwitchingSSLFilter extends SSLFilter { private static final Attribute<Boolean> CONNECTION_IS_SECURE = Grizzly.DEFAULT_ATTRIBUTE_BUILDER.createAttribute(SwitchingSSLFilter.class.getName()); private static final Attribute<Throwable> HANDSHAKE_ERROR = Grizzly.DEFAULT_ATTRIBUTE_BUILDER.createAttribute(SwitchingSSLFilter.class.getName() + "-HANDSHAKE-ERROR"); // ------------------------------------------------------------ Constructors public SwitchingSSLFilter(final SSLEngineConfigurator clientConfig) { super(null, clientConfig); addHandshakeListener(new ProtocolHandshakeListener()); } // -------------------------------------------------- Methods from SSLFilter @Override protected void notifyHandshakeFailed(Connection connection, Throwable t) { setError(connection, t); } @Override public NextAction handleConnect(final FilterChainContext ctx) throws IOException { // Suspend further handleConnect processing. 
We do this to ensure that // the SSL handshake has been completed before returning the connection // for use in processing user requests. Additionally, this allows us // to determine if a connection is SPDY or HTTP as early as possible. ctx.suspend(); final Connection c = ctx.getConnection(); handshake(ctx.getConnection(), new EmptyCompletionHandler<SSLEngine>() { @Override public void completed(SSLEngine result) { // Handshake was successful. Resume the handleConnect // processing. We pass in Invoke Action so the filter // chain will call handleConnect on the next filter. ctx.resume(ctx.getInvokeAction()); } @Override public void cancelled() { // Handshake was cancelled. Stop the handleConnect // processing. The exception will be checked and // passed to the user later. setError(c, new SSLHandshakeException( "Handshake canceled.")); ctx.resume(ctx.getStopAction()); } @Override public void failed(Throwable throwable) { // Handshake failed. Stop the handleConnect // processing. The exception will be checked and // passed to the user later. setError(c, throwable); ctx.resume(ctx.getStopAction()); } }); // This typically isn't advised, however, we need to be able to // read the response from the proxy and OP_READ isn't typically // enabled on the connection until all of the handleConnect() // processing is complete. enableRead(c); // Tell the FilterChain that we're suspending further handleConnect // processing. 
return ctx.getSuspendAction(); } @Override public NextAction handleEvent(final FilterChainContext ctx, final FilterChainEvent event) throws IOException { if (event.type() == SSLSwitchingEvent.class) { final SSLSwitchingEvent se = (SSLSwitchingEvent) event; setSecureStatus(se.getConnection(), se.isSecure()); return ctx.getStopAction(); } return ctx.getInvokeAction(); } @Override public NextAction handleRead(final FilterChainContext ctx) throws IOException { if (isSecure(ctx.getConnection())) { return super.handleRead(ctx); } return ctx.getInvokeAction(); } @Override public NextAction handleWrite(final FilterChainContext ctx) throws IOException { if (isSecure(ctx.getConnection())) { return super.handleWrite(ctx); } return ctx.getInvokeAction(); } @Override public void onFilterChainChanged(final FilterChain filterChain) { // no-op } public static Throwable getHandshakeError(final Connection c) { return HANDSHAKE_ERROR.remove(c); } // --------------------------------------------------------- Private Methods private static boolean isSecure(final Connection c) { Boolean secStatus = CONNECTION_IS_SECURE.get(c); return (secStatus == null ? 
true : secStatus); } private static void setSecureStatus(final Connection c, final boolean secure) { CONNECTION_IS_SECURE.set(c, secure); } private static void setError(final Connection c, Throwable t) { HANDSHAKE_ERROR.set(c, t); } private static void enableRead(final Connection c) throws IOException { c.enableIOEvent(IOEvent.READ); } // ---------------------------------------------------------- Nested Classes private static interface HandshakeCompleteListener { void complete(); } private static final class ProtocolHandshakeListener implements HandshakeListener { static final ConcurrentHashMap<Connection,HandshakeCompleteListener> listeners = new ConcurrentHashMap<Connection,HandshakeCompleteListener>(); // --------------------------------------- Method from HandshakeListener @Override public void onStart(Connection connection) { // no-op } @Override public void onComplete(Connection connection) { final HandshakeCompleteListener listener = listeners.get(connection); if (listener != null) { removeListener(connection); listener.complete(); } } // --------------------------------------------- Package Private Methods public static void addListener(final Connection c, final HandshakeCompleteListener listener) { listeners.putIfAbsent(c, listener); } static void removeListener(final Connection c) { listeners.remove(c); } } }
The Tour of the Faroe Islands (Kring Føroyar in Faroese) is a stage cycling race organised in the Faroe Islands. The race is held in men's, women's and junior categories over a prologue and four or five stages, whose distances vary with the category. It takes place in July, just before Ólavsøka (the national festival of the archipelago), although in some years it has been held on other dates, as with the 2019 edition, which took place in August. Although the Kring Føroyar was already held in 1996, the first official edition was in 1997. Initially, for sponsorship reasons, the race was called the Statoil Kring Føroyar, and when the company Statoil changed its name to Effo it became the Effo Kring Føroyar. In 2015 the sponsor changed to Volvo, and since then the race has been called the Volvo Kring Føroyar. The Tour of the Faroe Islands always finishes in Tórshavn, the capital of the archipelago. As of 2019, Torkil Veyhe is the rider with the most victories, 6 in total (2009, 2010, 2012, 2014, 2015 and 2017). Palmarès: below are the results of all editions of the Kring Føroyar held in the men's category. References Cycling competitions Sport in the Faroe Islands
{ "redpajama_set_name": "RedPajamaWikipedia" }
9,969
Tag Archives: Margaret Thatcher General, Psychology, Society Society: Why must people of Britain dance on the grave of Margaret Thatcher? 10 Apr 2013 Mani Navasothy 7 Comments Margaret Thatcher (c) Mani Navasothy 2013 When I say `people of Britain' I mean all those who at this moment have their feet on the soil of Great Britain. So let's get to the ones who are currently rejoicing, splitting their own spleens, plotting celebrations, protest marches and demonstrations – all against an already dead 87 year old woman – namely Margaret Thatcher! Here's a Question to those who are so angry that they are planning demo's …or blasting such negativity on various internet & social network platforms..: What is wrong with you people? And why now? What do you hope to achieve by it all? The woman – Margaret Thatcher – who passed away few days ago at the age of 87, was 3-times democratically elected Prime Minister of Great Britain. Let's put a spot light on some key points here – Democratically elected! Not once or twice..but three times. The country chose her. Did I say 3 times? If people didn't like her style or ways of politics or the way she was running the Country, then why keep electing her ..again..and again? But it happened. And as far as I know, there were no Vote-fixing, or military intimidation..or anything of the sort that has happened in some other so-called democratic countries.. No one forced anyone to vote for her.. yet people did. So no point then putting the blame (if there is any) on the woman. As to the question of why now..? For haven's sake.. if there ever was a time to protest – democratically – it was while Margaret Thatcher was in power as Prime Minister. People had had 11 years to do it then.. and if people did, and it didn't go anywhere…well, then I can only conclude that not enough of the majority (did I mention democracy) wanted change bad enough to all come together and protest..despite whatever circumstances opposed them. 
And as Great Britain was not under military or any sort of Dictatorial leadership, people had the power and freedom.. full stop. Okay, so perhaps people felt powerless or intimidated by her leadership.. (we are talking about billions of people in the whole of UK here!). And then the inevitable happened. Her own cabinet plotted against her.. (People power.. democracy at work..in mysterious ways). And she finally left her post.. If there was another moment to rejoice and plan demonstrations, there was the chance. She was no longer in power. People could have had their street parties then.. No.. people waited.. and got on about their lives.. for almost 2 decades.. And then ..`after' she dies…of natural ill-health.. people now plot and plan demonstrations.. against dead woman.. the first woman to become Prime Minister in the country that created the first people-power parliament in the world! This `dancing on the grave of a dead person' is absolutely pointless, and utterly cowardice! Collective expression. It's been common for mass of people ..or rather the emotions that are sinking into their human Collective (psyche) to surface at perhaps unexpected times ..and triggers. Uk mourned for Princes Diana when she died in a car accident. It's not that everyone loved her, but everyone's own feelings of suppressed grief needed an outlet..and the death of Diana provided it. People rejoiced at the wedding of Prince William & Kate.. It's not that everyone loved them…but people had a need for celebrating, and their wedding provided the opportunity. So did the Queen's Jubilee events.. And now, with the nation's Economy and people's finances in dire nose dive.. (not just hitting the ground, but seem to be heading for the depths of Underworld), with no end in sight, with crisis after crisis, faith in government and banks…and papers all fast decaying.. people need to vent their frustrations.. and anger.. And so..here's an opportunity .. to dig out a 20 year old repressed feeling.. 
(some people who are about to do these demo's weren't even born 20 yrs ago..that's quite a joke)… Trouble with the world is, while someone pushes their neck in the grand noose, and takes on the weight of the people ..and works hard (doing their best), everyone lets that happen.. And then years or decades later.. people choose to act.. It's not just unfair, but inhuman.. to want to protest against an ex-leader who did her best.. in dire circumstances. People should not blame Margaret Thatcher now.. as they had 11 years to get rid of her then. People should not protest now..because they've had almost 2 decades to do that too. (If people do want to protest, do it for a better reason.. to make a point to the current government or banks about Next blog: How to send Love & Light energy..! current circumstances! !) The sad thing is, even in death, Margaret Thatcher is serving the Nation- by giving people a trigger to expressed their current suppressed frustrations (even though they are retro-projecting it all onto her). Rest in peace, Margaret Thatcher… Iron Lady..! My respects to you!! Dancing on gravesdeathMargaret ThatcherPoliticsprotests & demonstrations
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
388
W Virginia county sues drug distributors over opioid epidemic © Hyungwon Kang / Reuters West Virginia's Cabell County Commission has launched a lawsuit against numerous major drug wholesalers and retailers, claiming they flooded fewer than 100,000 people with 40 million doses of hydrocodone and oxycodone. A lawsuit against AmerisourceBergen, CVS, Cardinal Health, HD Smith, Kroger, McKesson, Rite Aid, Walgreens and Walmart claims that the drug distributors breached their duty in reporting suspicious opioid prescriptions. The suit was filed by the Cabell County Commission and seeks to have the behavior of the distributors considered a "public nuisance." Escalating drug deaths in West Virginia drains funds for burials The behavior by the pharmacy chains is seen by some as predatory. In December, the West Virginia Gazette reported that the state received such large shipments of opioid drugs that there were 433 hydrocodone and oxycodone pills for every men, woman and child in the state. Cabell County Commission is arguing that the behavior of drug providers violates the Controlled Substance Act (CSA) that imposes restrictions on the distribution of high risk drugs to prevent illicit sales. The lawsuit cites the wholesale dealers' "duty to exercise due diligence to avoid filing suspicious orders that might be diverted into other than legitimate medical, scientific and industrial channels." Cardinal Health denied any wrongdoing and told Axois it would fight the lawsuit "vigorously." The opioid epidemic has wreaked havoc on West Virginia, where the rate of overdose deaths is the highest in the country. The problem reached such a fever pitch that the state fund to help pay for impoverished citizens' funerals. Within the first seven months of 2015, the fund was completely depleted.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,199
<?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:paddingBottom="@dimen/activity_vertical_margin" android:paddingLeft="@dimen/activity_horizontal_margin" android:paddingRight="@dimen/activity_horizontal_margin" android:paddingTop="@dimen/activity_vertical_margin" > <TextView android:id="@+id/tv" android:layout_marginLeft="10dp" android:layout_marginRight="10dp" android:lineSpacingExtra="4dp" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_centerInParent="true" android:text="正在请求..." /> </RelativeLayout>
{ "redpajama_set_name": "RedPajamaGithub" }
8,264
import sys import argparse from GoogleApi import GoogleApi from YahooApi import YahooApi import email_base parser = argparse.ArgumentParser(description='Gets stock prices of scrips listed in NSE') parser.add_argument('scrips', metavar='s', action="store", help='Comma separated list of scrips by scrip_code in NSE') MAX_RETRIES = 5 def get_stock_price(scrips, api_preference='Google', retries=0): """ Fetches stock prices of scrips supplied as a list of strings """ api_obj = None if retries > MAX_RETRIES: print >> sys.stderr, "Retries exceeded" return None if api_preference == 'Google': api_obj = GoogleApi() else: api_obj = YahooApi() scrip_data = None try: scrip_data = api_obj.get_data(scrips) except Exception, err_msg: email_base.send_mail('%s API call failed with error msg %s'%(api_preference, err_msg)) if api_preference == 'Google': return get_stock_price(scrips, api_preference='Yahoo', retries=retries+1) else: return get_stock_price(scrips, api_preference='Google',retries=retries+1) return scrip_data if __name__ == '__main__': args = parser.parse_args() print get_stock_price (args.scrips.split(','))
{ "redpajama_set_name": "RedPajamaGithub" }
764
\section{Introduction} Capacitive proximity sensing quantitatively measures the variation of the Electric Field (EF) around a sensor based on the EF disturbance physically triggered by a nearby dielectric object or conductor. By exploiting the EF disturbance caused by peripheral objects, it provides a novel methodology that can be utilized in diverse domains to detect target signals and control machines digitally. Of the two main capacitive sensing mechanisms [1, 2], contact sensing and non-contact sensing, our focus lies on non-contact proximity sensing. In this paper, we propose an automated real-time hand motion gesture detection and classification system whose gesture classification output is used to control an application's User Interface (UI). The architecture is composed of the following activities: extracting raw signals from sensors, signal processing, motion gesture detection, gesture frame extraction, gesture classification with a GRU classifier, and socket communication to control the foreground UI process. Each step in the framework is selected with an optimal mechanism after a thorough step-wise analysis in order to achieve efficiency and feasibility, considering future utilization in applications that control their UI through physical hand motion gestures. In our experiments, we collected hand motion gesture data from the capacitive sensors and derived datasets for 10 gesture types (1,000 samples in total) from three subjects, which are publicly available in our GitHub repository [3]. Our experimental results validate our approach with a 98.8\% detection rate, a 98.4\% extraction rate, and 98.79\% classification accuracy. Fig.\ref{SystemOverview} displays the overall pipeline of our proposed system.\\ The main contributions of this paper are as follows:\\ 1. 
This paper presents an end-to-end pipeline and a step-wise evaluation of an automated, real-time interface control system with hand motion gesture recognition based on non-contact capacitive sensing. To evaluate the performance of the proposed architecture, multiple experiments were conducted over the datasets acquired for this scenario using three metrics: detection rate, frame extraction rate, and gesture classification accuracy.\\ 2. Our end-to-end system demonstrates controllability of a User Interface, which suggests the feasibility of new product commercialization in the domains of IoT, HCI (Human-Computer Interaction), NUI (Natural User Interface), and HMI (Human-Machine Interface) through non-contact proximity sensing with low cost, low computation, and low latency compared to existing sensing mechanisms.\\ 3. We open-sourced our benchmark dataset and algorithm design on GitHub [3] to support further development of capacitive sensing systems through crowdsourcing.\\ \begin{figure} \centering{\includegraphics[width=\textwidth]{Fig1.PNG}} \caption{Framework overview: Our proposed Motion Gesture Recognition System using non-contact capacitive sensing.} \label{SystemOverview} \end{figure} This paper is organized as follows. In the remainder of this section, we review existing studies of non-contact capacitive sensing applications. In Section 2, we introduce our methodology, implementing signal processing mechanisms by thoroughly analyzing the signals extracted from our capacitive sensing system from multiple perspectives. We also propose an adaptive threshold in order to detect and extract the signal frame that covers the relevant signals. Section 3 presents the Deep Learning classifier used to train on and categorize the signals into the correct gesture. Furthermore, we implement the classifier to control the UI and suggest its potential usability. 
In Section 4, we evaluate our system's performance by conducting multiple experiments with three metrics. Finally, we conclude the paper with conclusions and future work. \subsection{Related Works and Motivation} As the major objective in capacitive sensing is utilization, a variety of new applications have been designed and implemented in prominent domains. Capacitive sensing can be categorized into two major fields: contact sensing and non-contact sensing. Diverse studies have been conducted in capacitive contact sensing [4, 5, 6, 7], and remarkable approaches have emerged that advanced the usability of capacitive sensing. Capacitive touch sensing technology has been successfully introduced as a low-cost and effective method to control the interfaces of IT devices and other controller systems. In addition, contact sensing has been widely utilized in the field of health care [8] to monitor electric signals from the body such as the electrocardiogram [9, 10], electromyography [11], and electroencephalography. Moreover, systems that detect a person's presence and compute their localization through capacitive sensing have been devised in [12-15]. Diverse studies have been performed on hand motion detection, especially ones implementing visual processing techniques based on footage or image recognition through image sensors [16-23]. Recently, as capacitive sensing tends to be more cost-effective from a computational perspective than image sensing, research trends are shifting towards capacitive sensing to obtain an accurate and efficient methodology for recognizing hand motions. Wong et al. [6] designed a wearable capacitive sensing unit, sensing capacitive values through sensors attached to a hand and recognizing specific hand gestures using a machine learning model. Similarly, Kalpattu et al. 
[7] developed a gesture recognition glove device that utilizes capacitive sensors to extract voltage signals, perceiving a certain hand gesture. Likewise, most systems that detect hand motions involve recognizing static poses with certain hand forms such as a thumbs-up pose, pouch pose, open-hand pose, or American Sign Language (ASL) [6, 7, 12, 24, 25]. \\ On the other hand, our hand gesture work focuses on capturing dynamic motions in a non-contact fashion, such as waving a hand in different directions. Aezinia et al. [26, 27] implemented three copper plates to measure the amplitude when a finger is nearby, and observed the EF difference around the plates, suggesting the feasibility of a motion tracking system through capacitive sensing. Arshad et al. [28] conducted a study to track a person's presence through capacitive proximity sensing, where the sensed signals contain distinct uniqueness that is sufficient to determine the presence of a specific person based on their experiments. Chu et al. [29, 30] suggested the possibility of utilizing a neural network and a Hidden Markov Model to recognize hand gesture EF signals. However, their study lacks thorough experimental validation and therefore practical feasibility. Jung et al. [31, 32] designed a signal offset computation method for accurate zero setting and calculated the empirical threshold of the EPS (Electric Potential Sensor) capacitive sensor. In addition, this study builds on our past studies [33, 34], which presented the dynamic threshold and frame extraction based on an EPS capacitive sensor, classifying four hand gesture types through a CNN model. \subsection{System Overview} Our system contains four capacitive sensors in the form of copper plates followed by middleware, as shown in Fig.\ref{SystemOverview}. All the signal data and information from the sensors were collected using Arduino devices. 
Once the data are collected, they are automatically passed through a pipeline of pre-processing steps involving signal processing, signal detection, and frame extraction computations. Once the data processing pipeline finishes, pre-trained neural network models determine the gesture type of the incoming gesture sample. As the final step for inference and real-time usage of this system, we integrated it into a Python-based GUI framework that is controlled by the classification output of the neural network for activity near the sensors. \section{Problem Formulation and Methodology} \subsection{Computing Optimal Threshold through Signal Analysis} In this section, we analyze the extracted raw sensor signals and compute the optimal threshold by interpreting their internal properties. As EF signals can easily be influenced by the surrounding environment, which causes unstable disturbances or noise in our data collection, our primary objective is to detect the relevant signals by reducing intrinsic noise as much as possible. In general, there are two main approaches to diminishing noise: a) a hardware approach, such as designing the circuit structure, and b) a software approach, applying mathematical filtering computations that make the target signals more distinctive. In the following sections, our aim is to discover the optimal software approach to effectively suppress noise, suggesting an algorithmic resolution based on property analysis. \subsection{Capacitive Theories} In order to design a comprehensive framework grounded in fundamental background, comprehending the physical theories behind capacitive sensing is essential [1]. A capacitive sensor measures the mutual capacitance interacting with nearby objects. Let $s=\left\{\left(\boldsymbol{s}_{i}\right)\right\}_{i=1}^{N} $ be a set of sensors where $n(\bigcup_{\forall i}s_{i})=N$. 
The total self-capacitance $\mathrm{C}=\sum_{\forall i}\mathrm{c}_{i}$, where $\mathrm{c}_{i}$ indicates each sensor's (${s}_{i}$) capacitance since they are connected in parallel, and based on $Q_{(o,s)}=\mathrm{C}V$, $\mathrm{C}=\frac{\epsilon A_{s}}{d_{(o,s)}}$. $\epsilon$ denotes the dielectric constant, $o$ is an object, $d$ indicates a distance and $A_{s}$ is the overlapping area between sensor $s$ and a nearby object. Our final measurement $V$ can be approximated as in Equation (1), where $Q$ denotes the amount of charge. \begin{equation} { V \approx \bigcup_{\forall s} V_{s} \,\,\, s.t. \,\,\, V_{s} \approx \frac{Q_{(o,s)}}{ \frac{ \epsilon \cdot A_{s}}{d_{(o,s)}}} } \end{equation} \subsection{Signal Processing} \begin{enumerate} \item \textbf{Signal Analysis}\\ Let $x_i$ be our raw input signal. $X_s=\{x_{\left(s,i\right)}|1\le s\le 4,\ (s,i\mathrm{)}\in \mathbb{N}\mathrm{\}}$ and $I=\{i|1\le i\le n(X_s)\}$ where $n\left(X_{\exists s}\right)=n(I)$, and $i$ is an index of input $x$. Based on our empirical signal observation, $\left|x_{(s,i)}\right|$ fluctuates within $\mu \left(\left|x_{\left(s,i\right)}\right|\right)-C\le \left|x_{\left(s,i\right)}\right|\le \mu \left(\left|x_{\left(s,i\right)}\right|\right)+C$, with $C=\sigma \left(\left|x_{\left(s,i\right)}\right|\right)\approx 2.03$ when there are no disturbances from the surrounding environment, where $\sigma$ denotes the standard deviation and the unit is voltage $V$. Compared with this inherent oscillation, when a certain hand gesture takes place near a sensor at a vertical height of 0.1$\mathrm{\sim}$15cm, the signal amplitude exhibits a non-linear relation between distance and amplitude based on Coulomb's law $F=k_e\frac{|q_{1}q_{2}|}{r^{2}}$, where $k_e$ is the Coulomb constant, $q_1$ and $q_2$ denote the magnitudes of charges, and $r$ is the distance between $q_{1}$ and $q_{2}$. 
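As a quick numerical illustration of Equation (1), the sketch below shows how the approximated voltage $V_{s}$ changes as the object-sensor distance $d_{(o,s)}$ grows and the capacitance $\frac{\epsilon A_{s}}{d}$ shrinks. The constants for $Q$, $\epsilon$, and $A_{s}$ are purely illustrative placeholders, not values from our hardware.

```python
# Illustrative sketch of Equation (1): V_s ~ Q / (eps * A_s / d).
# The constants below are placeholders, not measured hardware values.

EPS = 8.85e-12    # permittivity (F/m); free-space value for illustration
AREA = 25e-4      # overlapping area A_s in m^2 (a 5 cm x 5 cm plate)
CHARGE = 1e-9     # induced charge Q in coulombs (assumed)

def sensor_voltage(distance_m: float) -> float:
    """Approximate V_s = Q / C with C = eps * A_s / d."""
    capacitance = EPS * AREA / distance_m
    return CHARGE / capacitance

# A closer object yields a larger capacitance, hence a smaller voltage
# for the same fixed charge.
```

Under this idealized parallel-plate model the voltage scales linearly with $d$ for fixed $Q$; the non-linear amplitude-distance behaviour observed in practice comes from the induced charge itself varying with distance per Coulomb's law.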
Another property of a capacitive sensor is that it periodically discharges its internal charges, as the sensor absorbs electrons through nearby substances such as air and objects, when it surpasses the intrinsic capacity $\omega_s$, having $|x_s|=\sum_{i=1}^p|q_{i}|$, where $p$ indicates a certain period, and at $p+1$, $x_{(s,i=p+1)}=\sum_{i=1}^q|q_{i}-\omega_s|$. The relation of $x_s$ is as follows in Equation (2). \begin{equation} \label{eqn:capactive} |x_{(s,i^{'})}|\ll |x_{(s,i^{'}+p)}| \,\, and \,\, |x_{(s,i^{'}+p)}|\gg |x_{(s,i^{'}+p+1)}| \,\, s.t. \,\, |x_{(s,i+p)}| \approx \omega_s \,\, and \,\, |x_{(s,i+p)}|>\omega_s \end{equation} In Equation (2), $i+p$ indicates the moment of the discharge, and $i+p+1$ denotes a certain period after the discharge. $p$ is determined based on the variation of signal $\Delta x_s$, and our system displays $p\approx 1499.4\ \left(\pm 7.65\right),\ $that is, $19.6\ (\pm 0.10)$ seconds when the sampling rate is 76.5 Hz, with $\sigma \left|x_{(s,i^{'})}\right|\approx \mathrm{11}\pm 4.73$. The reason that $p_{\forall s}$ are identical is that $A_{\forall s}$ are equivalent. However, $X_{\forall s}$ have their own values, with $\mu \left(X_{s}\right)\napprox \mu \left(X_{s^{'}}\right)$ where $s\neq s{'}$, but $\sigma \left(X_{s}\right)\approx \sigma \left(X_{s^{'}}\right)$, which implies that the overall range of variation is equivalent. Another property is that $x_{\left(s,i\right)}$ starts off from its own distinct range, indicating the potential energy it possesses. Furthermore, $\mu \left({\tilde{X}}_{\left(s,j\neq j^{'}\right)}\right)\napprox \mu \left({\tilde{X}}_{\left(s,j^{'}\neq j\right)}\right)$, which entails that the overall signal amplitude varies over time, where $\tilde{X}\mathrm{\in }X$, having $X=\bigcup_{\forall j}{{\tilde{X}}_j}$, and $j$ and $j^{'}$ are both indexes of $\tilde{X}$. 
Due to these diversities, setting the average values to zero is constantly required to reduce heterogeneity, by computing the offset ${\lambda}_s$ of each $s$, which we will introduce in Section 3.1. \item \textbf{Frequency Analysis}\\ Mapping our time-series dataset into the frequency domain offers intuition about the intrinsic frequency components and insight into manipulating the frequency range using filtering mechanisms. We use the Fast Fourier Transform (FFT) in Equation (3), where \textit{f(x)} is an input signal, \textit{F(u)} is a linear aggregation of the continuous periodic function ${\cos 2\pi ux + v\sin 2\pi ux}$, $v$ denotes the imaginary unit, and \textit{u} indicates a frequency. Fig.\ref{FFT}(a) displays the FFT decomposition result when no hand gesture is presented and Fig.\ref{FFT}(b) shows the transformed signal when multiple hand gestures are presented. \begin{equation} {F(u) = \int^{\infty}_{-\infty} f(x) ({\cos 2\pi ux + v\sin 2\pi ux}) dx } \end{equation} \begin{figure}% \centering \subfloat[\centering ]{{\includegraphics[width=6.5cm]{Fig2_a.png} }}% \subfloat[\centering ]{{\includegraphics[width=6.5cm]{Fig2_b.png} }}% \caption{FFT result: (a) without hand gestures; (b) with hand gestures (Left to Right).} \label{FFT}% \end{figure} The result in Fig.\ref{FFT} indicates that both the raw signal without hand gestures (a) and the hand motion gesture signal (b) contain a significant amount of amplitude in the frequency range under 50 Hz. The statistics, mean ($\mu (\cdot)$) and standard deviation ($\sigma (\cdot)$), computed over different frequency ranges as $\frac{1}{4} \sum_{s=1}^{4} (\sigma(x_{(s,j)}))$ and $\frac{1}{4} \sum_{s=1}^{4}(\mu(x_{s,j}))$ for the frequency range $[a_n,b_n]$, are given in Table 1, with $\bigcup^{5}_{n=1}{a_{n}} = \{1,100,200,300,400|a_{1 \leq n \leq 5} \}$ and $\bigcup^{5}_{n=1}{b_{n}} = \{100,200,300,400,500|b_{1 \leq n \leq 5} \}$. 
For example, 1:100 in the first row of Table 1 indicates the frequency range [1,100] Hz, and the statistics in the same column below are the values after a band-pass filter with passband [1, 100] Hz. \begin{table} \caption{Statistical comparison} \setlength{\tabcolsep}{4.5pt} \begin{center} \begin{tabular}{c c c c c c c } \hline \hline Statistic & Gesture & 1:100 & 100:200 & 200:300 & 300:400 & 400:500 \\ [0.75ex] \hline $\mu(\hat{x}_{(s,j)})$ & F & 7.3 ($\pm2.8$) & 1.1 ($\pm0.1$) & 1.1 ($\pm0.1$) & 0.8 ($\pm0.1$) & 0.8 ($\pm0.1$) \\ \hline $\sigma(\hat{x}_{(s,j)})$ & F & 16.2 ($\pm4.7$) & 0.7 ($\pm0.1$) & 0.9 ($\pm0.1$) & 0.5 ($\pm0.1$) & 0.5 ($\pm0.1$) \\ \hline $\mu(\hat{x}_{(s,j)})$ & T & 16.59 ($\pm2.0$) & 1.4 ($\pm0.1$) & 1.3 ($\pm0.1$) & 0.9 ($\pm0.0$) & 0.9 ($\pm0.0$) \\ \hline $\sigma(\hat{x}_{(s,j)})$ & T & 35.95 ($\pm6.3$) & 0.7 ($\pm0.1$) & 1.0 ($\pm0.0$) & 0.5 ($\pm0.0$) & 0.5 ($\pm0.0$) \\ [0.75ex] \hline \hline \end{tabular} \end{center} \end{table} \item \textbf{Processing Input Signal}\\ In order to effectively reduce internal noise, we consider the following three signal processing methods: a Low Pass Filter (LPF), differentiation across sensors, and differentiation across sequential time steps. As the internal frequencies of input signals are usually dense in the ultra-low frequency range, filtering the signals with an LPF with a 50 Hz cut-off frequency $f$ is a practical technique, which permits $F(u\leq f)$ in Equation (3) and reverts by $\sum^{f}_{u=0} F(u) = f(\Bar{x}_{(s,j)})$ where $f(\Bar{x}_{(s,j)})\in f(x_{(s,j)})$. The major drawback of utilizing filters is the latency of the window length $w$, for $\frac{1}{w} \sum_{j=1}^{w} |x_{(s,j)}|=\hat{x}_{(s,j)}$, where $\hat{x}_{(s,j)} \in \hat{X}_s$ is our final processed signal. However, since the objective of our system is to control a UI, it requires a rapid response with low latency. 
Furthermore, this case requires an additional zero setting, which involves higher computation. The second option is to differentiate sensor signals in a pairwise manner, as indicated in Equation (4), where $s \neq s^{'}$. This new $\hat{X}_{(s,s^{'})}$ is effective for minimizing noise, as the sensors are proximal to each other and we assume that noise triggered by external factors affects all $\forall s$ almost identically. We tested all six combinations, since $1\leq (s, s^{'}) \leq 4$; however, the results were not satisfactory, as internal noise was still clearly visible. \begin{equation} {\hat{X}_{(s,s^{'})} = \bigcup_{\forall j} \hat{x}_{(s,s^{'},j)} \,\,\, \textit{s.t.} \,\,\, \hat{x}_{(s,s^{'},j)} = |x_{(s,j)} - x_{(s^{'},j)}|} \end{equation} The final option we inspect is shown in Equation (5), and our empirical study shows that this scheme is the most effective among the three options. Equation (5) differentiates the sequential values between $x_j$ and $x_{(j-1)}$, having $\hat{x}_{(s,j)} \approx 0$ when $\sigma^2(\bigcup_{j}^{j+p_{1}} x_{(s,j)})$ is low, providing automatic zero setting with low computational cost, where $p_{1} \in \mathbb{N}$ is a constant that denotes a certain period. Therefore, we select Equation (5) as our final processing mechanism, and add sensitivity weights $\tau_{s}$ and $1-\tau_{s}$ to the terms $x_{(s,j)}$ and $x_{(s,j-1)}$ respectively, as well as a moving average to smooth the values, as shown in Equation (6), where $\Breve{x}_s$ indicates the processed signals, and $\Breve{X}_s$ denotes the set of $\forall \Breve{x}_{\exists s}$. Moreover, $\lambda_{s}$ indicates the offset of each $\exists s$, an auxiliary measure to set $\mu(\Breve{x}_{(s,j^{'} \leq j \leq j^{'}+p_{1})})\approx 0$, where $\lambda_{s} = \frac{1}{p_{1}} \sum_{j=j^{'}}^{j^{'}+p_{1}} \Breve{x}_{(s,j)}$. 
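One possible single-sensor reading of this processing chain (blend consecutive samples with sensitivity $\tau_{s}$, smooth with a moving average over $p_{1}$, then subtract the offset $\lambda_{s}$) can be sketched as follows; the values of `tau` and `p1` are illustrative, not our tuned parameters.

```python
import numpy as np

def preprocess(x, tau=0.7, p1=8):
    """Sketch of the Equation (6) chain for one sensor: blend consecutive
    samples with weights (tau, 1 - tau), smooth with a length-p1 moving
    average, and subtract an offset estimated from the first p1 outputs."""
    x = np.asarray(x, dtype=float)
    blended = tau * x[1:] + (1.0 - tau) * x[:-1]   # tau*x_j + (1-tau)*x_{j-1}
    kernel = np.ones(p1) / p1
    smoothed = np.convolve(blended, kernel, mode="valid")
    lam = smoothed[:p1].mean()                     # offset lambda_s
    return smoothed - lam
```

On a steady baseline the output sits near zero, which is the automatic zero-setting property attributed to this scheme, while a gesture-like burst survives the smoothing.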
Fig.\ref{signal}(a) shows the raw signals $X_{\forall s}$, while Fig.\ref{signal}(b) shows the processed signals $\Breve{X}_{\forall s}$. \begin{equation} {\hat{X}_{s} = \bigcup_{\forall j} \hat{x}_{(s,j)} \,\,\, \textit{s.t.} \,\,\, \hat{x}_{(s,j)} = |x_{(s,j)} - x_{(s,j-1)}|} \end{equation} \begin{equation} \Breve{X}_s = \bigcup_{\forall j} (\Breve{x}_{(s,j)} - \lambda_{s}) \,\,\,\, \textit{s.t.} \,\,\, \Breve{x}_{(s,j)} = \frac{1}{p_{1}} \sum^{p_{1}}_{j} (\tau_{s} x_{(s,j)} + (1-\tau_{s}) x_{(s,j-1)}) \end{equation} \begin{figure}% \centering \subfloat[\centering ]{{\includegraphics[width=6.5cm]{Fig3_a.png} }}% \subfloat[\centering ]{{\includegraphics[width=6.5cm]{Fig3_b.png} }}% \caption{(a) raw signals with a single hand gesture presented, (b) processed signals covering two hand gestures (Left to Right).} \label{signal}% \end{figure} \end{enumerate} \subsection{Setting Adaptive Threshold} As EF signals from sensors are highly variable throughout the extraction, setting an optimal threshold that enables accurate detection is essential. We suggest an adaptive threshold that changes dynamically: by periodically recomputing the threshold, the system adapts to the inconsistent signal range and provides reliable detection performance. The main purpose of the adaptive threshold is to detect the signal triggered by a hand gesture and to extract the frame that encompasses the authentic parts of the gesture. Let $\delta_{s}$ be the threshold of an individual sensor $s$ and $\phi$ a constant that determines the amplitude of $\delta_{s}$. $\delta_{s}$ is updated with period $p_{1}$ as shown in Equation (7). $\phi$ is determined from empirical observations and directly influences the detection algorithm by determining the lower bound of $\delta_{s}$, such that $\inf \delta_{s} \approx \phi$ when the base term $\frac{1}{p_{1}} \sum_{j=1}^{p_{1}} \Breve{x}_{(s,j)} \approx 0$. 
When $(\Breve{x}_{(s,j)} - \lambda_{s}) \geq \delta_{s}$ and $(\Breve{x}_{(s,j^{'} > j)} - \lambda_{s}) \leq \delta_{s}$ where $(j+50) \geq j^{'}$, the system recognizes this as a detection and extracts $\Breve{x}_{(s,j-p_{s}\leq j \leq j^{'}+p_{e})}$, where $p_{s}$ and $p_{e}$ are constants that indicate periods, in order to secure the whole frame that shifts forward from the start of the detection (i.e., $\Breve{x}_{(s,j)} - \lambda_{s} \geq \delta_{s}$), which is at index $j$, and moves backward from the end of the detection (i.e., $\Breve{x}_{(s,j^{'} > j)} - \lambda_{s} \leq \delta_{s}$), which is at index $j^{'}$. The system consists of two parts: offset initialization (Algorithm 1), and hand gesture detection through an adaptive threshold that returns the frame automatically in real-time (Algorithm 2). \begin{equation} \delta_{s} = \frac{1}{p_{1}} \sum_{j=1}^{p_{1}} (\Breve{x}_{(s,j)} - \lambda_{s}) + \phi \end{equation} \begin{algorithm} \caption{Offset Initialization}\label{alg:offset} \begin{algorithmic}\\ \textbf{Input: Initialize period $p_{0A}$} \\ \textbf{Output: Offset $\lambda_{s}$} \\ \For {each $s$} \textbf{in parallel} \State Initialize $(start_{s}, end_{s}) \gets 0$ \State Initialize $\delta_{s} \gets \phi$ \State Initialize empty list $\Breve{X}_{s}, init_{s}$ \For \, $j$ = 1, {\dots}, \, $p_{0A}$ \State $\Breve{X}_{s} \bigoplus \Breve{x}_{(s,j)}$ \Comment{append} \EndFor \State $\lambda_{s} \gets \frac{1}{n(\Breve{X}_{s})} \cdot \sum_{\forall j} \Breve{x}_{(s,j)} $ \State \textbf{Return} $\lambda_{s}, start_{s}, end_{s}, \delta_{s}, init_{s}$ \EndFor \end{algorithmic} \end{algorithm} Furthermore, in preparation for the case where sensor values suddenly surge when a dielectric object directly contacts the sensor (a huge amount of charge flows in this case compared with non-contact operation), an additional precautionary measure needs to be designed. 
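Algorithm 1 amounts to averaging an initial window of processed samples per sensor; a minimal sketch (the default for $p_{0A}$ below is arbitrary):

```python
def init_offset(stream, p0a=100):
    """Sketch of Algorithm 1: consume up to the first p0a samples of a
    sensor stream and return their mean as the initial offset lambda_s."""
    window = []
    for sample in stream:
        window.append(float(sample))
        if len(window) == p0a:
            break
    if not window:
        raise ValueError("empty stream: cannot initialize offset")
    return sum(window) / len(window)
```

In the full system this runs once per sensor, in parallel, before the detection loop starts.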
When $|j-j^{'}| > p_{safe}$, the system perceives this outlier as malfunction and recomputes until $\Breve{X}_{s}$ is stabilized using offset $\lambda_{s}$. The overall steps of detection \& frame extraction are shown in Algorithm 2. Before running Algorithm 2, we compute the initial $\lambda_{s}$ for a certain period $p_{0}$ where $A \bigoplus B$ indicates the appending $B$ to $A$ function, and $start_{s}$ and $end_{s}$ denote the starting frame and ending frame respectively. \begin{algorithm}[t] \caption{Hand Gesture Detection through Adaptive Threshold}\label{alg:cap} \begin{algorithmic}\\ \textbf{Input}: {input signal $\Breve{x}_{(s,j)}$, offset/threshold update period $p_{1}$, period that adds forward from the start $p_{s}$, period that adds backward from the end $p_{e}$, auxiliary period $p_{safe}$, auxiliary period $p_{0B}$ } \\ \textbf{Output: $FRAME$ } \\ \For {each $s$} \textbf{in parallel} \State initialize empty list $frame_{s}$ \EndFor \State $START, END \gets 0$ \State $(\lambda_{s}, start_{s}, end_{s}, \delta_{s}, init_{s}) \gets \text{Offset Initialization}$ \Comment{Algorithm 1} \While {$\Breve{x}_{(s,j)} \neq \emptyset$} \Comment{run until input signal halts} \For {each $s$} \textbf{in parallel} \State $\Breve{X}_{s} \bigoplus (\Breve{x}_{(s,j)}-\lambda_{s})$ \If{$start_{s}$ = 0 and $end_{s}$ = 0} \State $init_{s} \bigoplus \Breve{x}_{(s,j)}$ \Comment{Compute offset when frame is not set} \EndIf \If {$(\Breve{x}_{(x,j)} - \lambda_{s}) > \delta_{s}$ and $(\Breve{x}_{(s, j-1)} - \lambda_s) < \delta_{s}$} \Comment{$\Breve{x}_{(s,j)}$ passed the $\delta_{s}$} \State $start_{s} \gets j - p_{s}$ \Comment{Shift forward to cover the authentic signal} \EndIf \State \textbf{else if} {$(\Breve{x}_{(s,j)}-\lambda_{s}) > \delta_s$} \textbf{then} \Comment{In the middle of higher than $\delta_{s}$} \State $cnt \gets cnt + 1$ \Comment{Safety measure} \State \textbf{else if} {$(\Breve{x}_{(s,j)}-\lambda_{s}) < \delta_s$ and ($\Breve{x}_{(s,j-1)}-\lambda_{s}) > 
\delta_s$ } \textbf{then} \State $end_{s} \gets j + p_{e}$ \Comment{Shift backward to cover the authentic signal} \State $cnt \gets 0$ \State \textbf{else if} {$start_{s} = 0$ and $end_{s} = 0$ and $j \% p_{1}=0$} \textbf{then} \State $\lambda_{s} \gets \mu(init_{s}) $ \State $\delta_{s} \gets \frac{1}{p_{1}} \cdot \sum_{k=j-p_{1}}^{j} \Breve{x}_{(s,k)} + \phi$ \State $init_s \gets empty list$ \If{ $\Breve{x}_{(s,j)} > \delta_{s}$ and $cnt > p_{safe}$ } \Comment{safety measure} \State $\delta_{s} \gets \frac{1}{p_{1}} \cdot \sum_{k=j-p_{1}}^{j} \Breve{x}_{(s,k)}$ \Comment{offset recompute} \State $cnt \gets 0$ \EndIf \If {$j = end_{s}$ and $j > p_{0B}$} \Comment{Another initial period} \State $frame_{s} \bigoplus array(start_{s}, end_{s})$ \State $(start_{s}, end_{s}) \gets 0$ \EndIf \EndFor \If {$p_{0B} < j$} \For {each $s$} \textbf{in parallel} \If {$frame_{s} \neq \emptyset$} \State $START \gets frame_{s}[n(frame_{s})][0]$ \State $END \gets frame_{s}[n(frame_{s})][1]$ \EndIf \EndFor \If {$j = END$ and $START \neq 0$ and $END \neq 0$} \State $FRAME \bigoplus array(START, END)$ \For {each $s$} \textbf{in parallel} \State $frame_{s} \gets empty list$ \EndFor \State $(START, END) \gets 0$ \State return $FRAME$ \EndIf \EndIf \EndWhile \end{algorithmic} \end{algorithm} In Algorithm 2, $init_{s}$ is an auxiliary buffer used to recompute $\lambda_{s}$, storing $\Breve{x}_{(s,j)}$ over the period $p_{1}$. In order to determine the final frame, let $START$ and $END$ be local volatile variables that denote the temporary indices of the start and end frame obtained from $frame_{s}$. Then we append $array(START, END)$ to $FRAME$, so that $FRAME[j][0] = START$ and $FRAME[j][1] = END$ where $j$ is the current index. We define the final frame as $\psi_{k}$, and its mathematical expression is shown in Equation (8), where $k$ is an index of the extracted frame. Note that $\lambda_{s}$ is updated iteratively.
\begin{equation} \psi_{k} = \bigcup_{j=Frame[k][0]}^{Frame[k][1]} (\Breve{x}_{(s,j)}-\lambda_{s}) \end{equation} Based on the designed algorithms, we automatically detect a hand gesture signal using an adaptive threshold and extract, in real-time, the signal frame that covers the authentic section triggered by the hand movement. The adaptive threshold and extracted motion frame are shown in Fig.\ref{threshold}. The parameter settings, including the various periods, and the algorithm performance are presented in Section 5. \begin{figure}% \centering \subfloat[\centering ]{{\includegraphics[height=4.5cm,width=6.5cm]{Fig4_a.png} }}% \subfloat[\centering ]{{\includegraphics[height=4.5cm,width=6.5cm]{Fig4_b.png} }}% \caption{ (a) Adaptive threshold (red horizontal bars) which is periodically updated based on the amplitude of the input signal. (b) The frame extracted based on Algorithm 2.} \label{threshold}% \end{figure} \subsection{System Architecture and Components} The hardware in our system includes three main components: four sensors (copper plates) for collecting EF signals, middleware (Arduino UNO) that aggregates the sensor signals and transmits them to the local device through serial communication, and the local device. The sensors are located on top of a PVC plate, which is made of non-conductive material to preserve the sensor signals. Recall that the input raw sensor signals are processed inside a device based on the methodologies in Section 3. Using the incoming signals, the system detects a motion gesture, extracts its frame, and classifies the frame implementing the algorithms proposed in Section 4. Finally, the classification result is transmitted through socket communication to a foreground application for UI control, which manipulates the interface with built-in conditions such as {\it Left to Right} to move to the next page and {\it Right to Left} to go back to the previous page. The functions are listed in Table 2.
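The detection logic of Algorithms 1 and 2 can be sketched, for a single sensor, as follows (a minimal illustration in Python; the function and variable names are ours, and the multi-sensor parallel loop and the $p_{safe}$ safety measure are omitted):

```python
# Minimal single-sensor sketch of the adaptive-threshold detection in
# Algorithms 1-2. Names (detect_frames, phi, ...) are illustrative only;
# the safety measure (p_safe) and the per-sensor parallel loop are omitted.

def detect_frames(signal, p0, p1, p_s, p_e, phi):
    lam = sum(signal[:p0]) / p0          # offset lambda_s (Algorithm 1)
    delta = phi                          # delta_s initialized to phi
    frames = []
    start = None
    for j in range(p0, len(signal)):
        # periodic threshold update: mean of (x - lam) over p1 samples, plus phi
        if start is None and j % p1 == 0:
            window = signal[j - p1:j]
            delta = sum(v - lam for v in window) / p1 + phi
        x = signal[j] - lam
        prev = signal[j - 1] - lam
        if x > delta and prev <= delta and start is None:
            start = j - p_s              # shift forward to cover the onset
        elif x < delta and prev >= delta and start is not None:
            # shift backward to cover the tail, then close the frame
            frames.append((max(start, 0), min(j + p_e, len(signal) - 1)))
            start = None
    return frames

# a flat baseline with one burst yields a single padded frame
frames = detect_frames([0] * 30 + [10] * 10 + [0] * 20,
                       p0=10, p1=10, p_s=2, p_e=2, phi=3)
```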
\section{Training Classifier and Controlling UI} \subsection{Training AI Classifier} As deep learning methods have been shown to perform exceptionally well at mapping input data to the correct class labels in a given feature space, we adopt a deep learning classifier to train on our hand gesture dataset and determine the class label among 10 gesture types. We employ the pre-trained classifier to conduct the classification of the extracted frames in real-time, and utilize the prediction result as a conditional input signal to control the UI. Our objective function is shown in Equation (9), where $\mathscr{L}$ is the categorical cross entropy loss function with inputs of the true label $y_{k}$, the predicted label $g(\psi,M)$, and $M_{(s,k)}\ni {\{weight_{(s,k)}, bias_{(s,k)}\}}$ on frame index $k$. We update $M$ through gradient descent: $M^{(t+1)}_{(s,k)}=M^{(t)}_{(s,k)}-\eta \, \frac{\partial {\mathscr{L}}(M^{(t)}_{(s,k)})}{\partial M^{(t)}_{(s,k)}}$ where $\eta$ denotes the learning rate. \begin{equation} {\mathop{\mathrm{min}}_{M_k\in {\mathbb{R}}^d}} \frac{1}{n(\psi_{(s,k)})}\sum_{\forall k}{{\mathscr{L}}(y_{(s,k)},g(\psi_{(s,k)},M_{(s,k)})) } \end{equation} Since our dataset is in a time-series format, we implement two deep learning models which are well suited to handling time-series data: Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM). The trained network returns a predicted label. The overall training performance is presented in Section 5-2. \subsection{UI Control and Optimal System Structure} The application UI is directly controlled by a human in a foreground process; therefore, the overall pipeline including signal processing, thresholding, frame extraction, and classification should operate as a background process. The daemon process transmits the classification result to the foreground application through socket communication inside a single local device (Intel i7-1165G7, 2.8GHz, 16GB RAM).
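The objective of Equation (9) and the gradient-descent update above can be illustrated with a from-scratch sketch (our own simplification: a single linear layer stands in for the GRU/LSTM, and plain Python replaces a deep learning framework):

```python
import math

# Minimal sketch of Equation (9): categorical cross-entropy L(y, g(psi, M))
# minimized by gradient descent M <- M - eta * dL/dM. A single linear layer
# stands in for the GRU/LSTM; all names here are illustrative.

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def cross_entropy(y_true, probs):
    return -math.log(probs[y_true])

def gd_step(M, psi, y_true, eta):
    # forward pass: logits z_c = sum_i M[c][i] * psi[i]
    z = [sum(w * x for w, x in zip(row, psi)) for row in M]
    p = softmax(z)
    # gradient of cross-entropy w.r.t. the logits is (p - onehot(y))
    for c, row in enumerate(M):
        g = p[c] - (1.0 if c == y_true else 0.0)
        for i in range(len(row)):
            row[i] -= eta * g * psi[i]
    return cross_entropy(y_true, p)   # loss before this update

M = [[0.0, 0.0], [0.0, 0.0]]          # weights for 2 classes, 2 features
psi, y = [1.0, -1.0], 0               # one extracted frame and its label
losses = [gd_step(M, psi, y, eta=0.5) for _ in range(20)]
```

Repeating the update on the same sample drives the loss monotonically toward zero, which is the behavior the full GRU/LSTM training loop generalizes over the whole dataset.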
Depending on the gesture type, each has a designated function that controls the given UI. For example, when the classifier recognizes the hand gesture signal as {\it Left to Right}, it issues the command to move to the next page. The other functions of the motion class labels are shown in Table 2. We demonstrate a simple UI window containing small GUI elements. Using the Python GUI library Tkinter, we construct a window displaying a string that signifies the current hand motion gesture, for example showing ``Left to Right'' when that input is sent to the UI process. Again, this demonstrates the possibility of implementing further diverse hand motion gesture class-label inputs, which can be utilized to effectively control the target device. \begin{table} \caption{Types of hand motion gestures} \setlength{\tabcolsep}{4.5pt} \begin{center} \begin{tabular}{c c c c c } \hline \hline Left to Right & Right to Left & UP & DOWN & Down to Left \\ [0.75ex] \hline Class 1 & Class 2 & Class 3 & Class 4 & Class 5 \\ Next page & Previous page & Scroll up & Scroll down & Previous 2 pages\\ \hline Down to Right & Left to Down & Right to Down & Up to Left & Up to Right \\ \hline Class 6 & Class 7 & Class 8 & Class 9 & Class 10 \\ Next 2 pages & Off & On & Volume down & Volume up\\ \hline \end{tabular} \end{center} \end{table} \section{Experimental Evaluation} \subsection{Experimental Setup} The framework's environmental setup consists of two main entities: four copper sensor plates for collecting EF signals, and middleware (Arduino UNO) that aggregates the raw sensor signal data and transmits it to the local device through serial communication. Once the data have been collected, they are passed through our pipeline as shown in Fig.\ref{SystemOverview}, in which the different pre-processing steps happen in sequence.
Once the data processing pipeline finishes, the processed signal is fed to the pre-trained neural network model for gesture classification. Finally, the classification result is transmitted through socket communication to another Python GUI application process, and the results of the gesture motions are shown on the interface for any activity that takes place near the sensors. We performed the 10 different hand motion gestures listed in Table 2 and extracted the optimal signal frame. The designated parameters are: $p_{s} = 70$, $p_{e} = 70$, $\phi = 20$, $p_{0A} = 10 ~seconds$, $p_{0B} = 8 ~seconds$, $p_{safe} = 3 ~seconds$, and $p_{1} = Sampling ~rate \cdot Initialization ~time$ where $Sampling ~rate = 53$ and $Initialization ~time = 6 ~seconds$. Based on the preset hyper-parameters, we operate our system and perform hand gesture motions near the sensor plates, measuring the classification accuracy of the 10 gesture types and verifying accurate controllability of the UI. Furthermore, we compare the accuracy and loss of two high-performance deep learning models: LSTM and GRU. The layer structure of LSTM and GRU is: LSTM (20) / GRU (20) – Dense (32) – Dense (64) – Dense (32) – Dense (10), with 60 epochs, a batch size of 10, $\eta = 0.005$, and the ReLU activation function for the dense layers, except for the final layer which uses the softmax. \subsection{Experiment results} In order to evaluate our proposed framework, we use three evaluation metrics to assess the adaptive threshold, the coverage of the extracted frame, and the trainability of the deep learning classifier. The first metric quantitatively estimates the performance of the adaptive threshold in each sensor, which is solely for detecting the occurrence of hand gestures near the sensor plates; we define this as the Detection rate.
The second metric assesses the coverage area of the extracted frame signal after detection, as the extracted frame is the key source for recognizing authentic hand motion types; we denote this as the Frame Extraction rate. Finally, in order to conduct accurate classification on the real-time hand gesture frames, a well-trained AI model is required, and the final metric evaluates the trainability of the classifier. Our overall evaluation shows a 98.8\% Detection rate, a 98.4\% Frame Extraction rate, and an accuracy of 98.79\% ($\pm$0.72) for GRU and 98.05\% ($\pm$0.68) for LSTM. Fig.\ref{performance} shows the average value of the validation dataset's accuracy and loss of the pre-trained classifiers after 30 independent training trials, with the standard deviation shown at each epoch step. The macro-averages of precision, recall, and F1 score for all labels are approximately 1.0. Fig.\ref{performance} shows that GRU has more stable convergence with slightly higher accuracy at the end. Therefore, we applied the pre-trained GRU classifier in the final background process; based on the classification result, it dynamically controls the foreground application UI with 10 options. Quantitatively, the overall system operates with an average accuracy of 98.67\% ($\pm$2.005). \begin{figure} \centering \subfloat[\centering ]{{\includegraphics[width=6.5cm]{Fig5_a.png} }}% \subfloat[\centering ]{{\includegraphics[width=6.5cm]{Fig5_b.png} }}% \caption{Performance of GRU and LSTM: (a) Average GRU test accuracy and loss with standard deviation after 30 training trials. (b) Average LSTM test accuracy and loss with standard deviation after 30 training trials.} \label{performance}% \end{figure} \section{Conclusion and Future Work} This paper proposes a real-time user interface control system based on hand motion gesture recognition through non-contact capacitive sensing.
Our designed framework processes the raw capacitance signal data from the sensors with sequential differentiation, selected as the optimal option among filtering schemes via spectrum analysis; two differentiation computations are performed after observing each result. By dynamically computing the adaptive threshold, the system detects a movement near the sensors and automatically extracts the motion gesture frame that covers the authentic EF disturbance signal triggered by the hand. This frame is sent to a pre-trained GRU classifier, which categorizes the input frame among 10 predefined hand motion gesture types. Furthermore, the classification result is transmitted to another application process that maneuvers the UI and controls the interface with built-in commands. This work demonstrates the feasibility of adopting a capacitive sensing system and the possibility of commercializing application products built on capacitive proximity sensing, which is emerging as a promising sensing technology that enables the implementation of new applications in the domains of IoT, VR, AR, touchless interfaces, HMI, robotics, HCI, and NUI, with cost-efficiency. The dataset used to support the study is uploaded to GitHub [3] and is publicly accessible. The limitations of our proposed system include the limited distance between a sensor and a dielectric object, and an unstable signal trend affected by the surroundings. Limited distance is an intrinsic drawback of a capacitive sensor, as it relies on non-contact proximity sensing. The height range within which our system can detect a hand motion is approximately under 15 cm. This distance may not be sufficient for commercial products; therefore, more accurate and sensitive sensing is required. The overall trend of the capacitive sensor signal differs as the surroundings change, and since we cannot predict such heterogeneous circumstances, this variability is inevitable.
This problem can be partially solved through two basic approaches: collecting diverse datasets, and computing a physical model of the given environment in order to anticipate how the signal pattern would occur. Our future work lies in the design of approaches to resolve the above-mentioned limitations: analyzing the EF disturbance pattern when dielectric objects are near, and collecting more data. Moreover, we will focus on increasing the utility of the proposed system, adding further hand gesture motion data with an increasing number of hand gesture types (at least 100 gestures). Successfully classifying these labels and connecting more sensors will contribute to constructing a comprehensive system with higher multiplicity that utilizes capacitive sensing.
\section{Introduction}\label{Intro} The two body leptonic decay mode of the charged kaon decay-at-rest (KDAR), i.e. $K^+ \to \mu^+ \nu_\mu$, B.R. 63.55$\pm$1.1$\%$~\cite{Olive:2016xmw}, provides a unique and important source of monoenergetic muon neutrinos of energy 236 MeV. These neutrinos may be used to make high precision measurements of neutrino-nucleus cross sections for the charged current (CC) induced weak quasielastic (QE) production of muons from various nuclear targets. The high precision neutrino-nucleus cross section measured with the well defined monoenergetic beam of muon neutrinos may serve as a benchmark for validating many theoretical models currently being used to describe the nuclear medium effects in QE reactions~\cite{Katori:2016yel,Alvarez-Ruso:2014bla} relevant for the analysis of present day neutrino experiments in the low energy region of a few hundred MeV~\cite{Abazajian:2012ys,Abe:2011sj,Antonello:2015lea,Katori:2011uq,Chen:2007ae,Dracos:pro61,Baussan:2013zcy,Vinning:2017izb, Cao:2014bea,Adey:2015iha,Liu:2013zjn, Ajimura:2017fld,Harada:2016vlb,Harada:2015gla,Axani:2015dha,Spitz:2014hwa,Spitz:2012gp,grange}. These KDAR neutrinos have been proposed as a probe to study new neutrino oscillation modes to sterile neutrinos, i.e. $\nu_\mu \to \nu_s$, by performing oscillation experiments in the $\nu_\mu \to \nu_\mu$ disappearance mode and studying the CC interactions of $\nu_\mu$ with nuclei, and/or by performing oscillation experiments in the $\nu_\mu \to \nu_e$ appearance mode and studying the CC interactions of $\nu_e$ with nuclei~\cite{Ajimura:2017fld,Harada:2016vlb,Harada:2015gla,Axani:2015dha,Spitz:2014hwa,Spitz:2012gp,grange}. In the $\nu_\mu \to \nu_\mu$ disappearance mode, $\nu_\mu$ from the three body $K \mu_3$ decays of charged kaons i.e.
$K^+ \to \mu^+ \pi^0 \nu_\mu$, having a continuous energy spectrum with an end point energy of 215 MeV, constitute the major source of background, while in the $\nu_\mu \to \nu_e$ appearance mode, $\nu_e$ from the $K e_3$ decay mode of charged kaons, i.e. $K^+ \to e^+ \pi^0 \nu_e$, having a continuous energy spectrum with an end point energy of 228 MeV, constitute the major source of background. The background in both channels from the decay in flight (DIF) neutrinos from pions, kaons and other mesons corresponds to higher energies. With sufficiently improved energy resolution for the detection of the final muon and electron produced respectively in the CC weak interaction of $\nu_\mu$ and $\nu_e$ with matter, the background events can be well separated in energy from the signal events for the oscillation experiments corresponding to $E_{\nu_{\mu}(\nu_e)}$ = 236 MeV. Moreover, it has been recently suggested~\cite{Rott:2016mzs} that the observation of CC induced QE events with the monoenergetic neutrinos can also provide information about dark matter, which annihilates at the center of the Sun in its interaction with the solar matter into quark-antiquark pairs and produces charged kaons through the hadronization process. The monoenergetic muon neutrinos produced by these charged kaons through the $K \mu_2$ decays can be identified by comparing the on-source and off-source event rates in terrestrial detectors, provided the background events for $E_\nu \sim$ 236 MeV are well under control in the $\nu$-oscillation experiments proposed with the KDAR neutrinos. The feasibility of such experiments with high intensity KDAR neutrinos requires an accelerator facility capable of producing $K^+$ mesons with a very high yield.
The 3 GeV proton accelerator facility at the J-PARC MLF facility in Tokai, Japan~\cite{Ajimura:2017fld,Harada:2016vlb,Harada:2015gla,Axani:2015dha,Spitz:2014hwa} and the 8 GeV proton accelerator facility at the BNB source facility at Fermilab, USA~\cite{Spitz:2014hwa,Spitz:2012gp,grange} have sufficient energy and power to produce high intensity charged kaons through the primary and/or secondary interactions of protons with nuclear targets; the kaons would be stopped in the surrounding material and their decay would give an intense beam of $\nu_\mu$. At the J-PARC facility, neutrino oscillation experiments in the appearance mode, i.e. $\nu_\mu \to \nu_e$, as well as in the disappearance mode, i.e. $\nu_\mu \to \nu_\mu$, have been proposed, respectively, through the JSNS experiment by the Japanese group~\cite{Ajimura:2017fld,Harada:2016vlb,Harada:2015gla}, and the KPipe experiment by the MIT-Columbia group~\cite{Axani:2015dha,Spitz:2014hwa,Spitz:2012gp}, using liquid scintillator detectors with active detector masses of 17 tons and 684 tons, respectively. At the Fermilab facility, a neutrino oscillation experiment in the appearance mode, i.e. $\nu_\mu \to \nu_e$, has been proposed with a 2 kton LArTPC detector~\cite{Spitz:2012gp,grange}. One of the major sources of systematic error in these experiments is the $\nu_\mu$ flux uncertainty, arising from the uncertainty in the $K^+$ production yields in proton-nucleus interactions predicted by the hadronic models for kaon production, which could be as large as 75$\%$~\cite{Axani:2015dha,Spitz:2014hwa,Agostinelli:2002hh,Mokhov:2004aa}. The other source of systematic error is the uncertainty in the $\nu_\mu(\nu_e)-$nucleus cross sections for $E_{\nu_{\mu}(\nu_e)}$ = 236 MeV arising due to the nuclear medium effects~\cite{Katori:2016yel,Alvarez-Ruso:2014bla}, and is the subject of the present work.
The present simulation studies~\cite{Axani:2015dha,Spitz:2014hwa,Spitz:2012gp}, for estimating the neutrino oscillation parameters, use the neutrino-nucleus cross sections for the KDAR neutrinos on $^{12}$C and $^{40}$Ar as predicted by the NuWro generator~\cite{Juszczak:2009qa}, which are reported to be about 25$\%$ smaller than the predictions of the GENIE Monte Carlo generator~\cite{Andreopoulos:2009rq} and the results of Martini et al.~\cite{Martini:2011wp,Martini:2009uj}. In the low energy region, the short range correlations and the meson exchange currents (MEC) are not expected to play an important role~\cite{Nieves:2012yz,Benhar:2005dj,Martini:2012uc}, but the effects of Pauli blocking, Fermi motion and the long range RPA correlations are found to be quite important. This has been shown by many theoretical attempts~\cite{Kosmas:1996fh,Engel:1996zt,Auerbach:1997ay,Singh:1998md,Volpe:2000zn,Hayes:1999ew,Kolbe:1999au,Kolbe:2003ys,Nieves:2004wx,Paar:2007fi,Auerbach:2002tw,Paar:2008zza,Nieves:2017lij} made to explain the $\nu_\mu-^{12}$C cross section measured in the LSND experiment~\cite{Albert:1994xs,Athanassopoulos:1997rm,Auerbach:2002iy} with the pion decay in flight (DIF) muon neutrinos in the energy region of $E_{\nu_\mu} <$ 320 MeV with $<E_{\nu_\mu}> =$ 150 MeV. These effects could therefore be very important in the energy region of KDAR neutrinos. In view of the recent interest in the proposed neutrino oscillation experiments in the $\nu_\mu \to \nu_\mu$ and $\nu_\mu \to \nu_e$ modes with liquid scintillator (LS) and LArTPC detectors, and the search for sterile neutrinos through the $\nu_\mu \to \nu_s$ mode, it is topical to study the uncertainties in the $\nu_\mu(\nu_e)-$nucleus cross sections in the low energy region relevant for the monoenergetic KDAR neutrinos.
In this paper, we have studied the uncertainties in the neutrino-nucleus cross sections for the QE processes induced by the weak charged current interaction in $\nu_\mu(\nu_e)$ scattering from $^{12}C$ and $^{40}Ar$ nuclei, relevant for the KDAR neutrinos with $E_{\nu_\mu} \le$ 300 MeV, in a nuclear model using the local density approximation which takes into account the effects of the nuclear medium arising due to Pauli blocking, Fermi motion and the long range RPA correlations. The model has been used by us earlier to calculate quite satisfactorily the low energy neutrino cross sections relevant for the supernova, Michel and pion decay in flight (DIF) neutrino spectra~\cite{SajjadAthar:2004yf,Athar:2005ca,SajjadAthar:2005ke,Chauhan:2017tgf}. We report the results on the energy dependence of the total cross section $\sigma(E_\nu)$ for $E_\nu <$ 300 MeV, and the angular distributions ($\frac{d\sigma}{d\cos\theta_l}$) and the kinetic energy distributions ($\frac{d\sigma}{dT_l}$) of the electron and the muon produced in the CCQE reactions induced by $\nu_e$ and $\nu_\mu$ at $E_{\nu}=$ 236 MeV in $^{12}$C and $^{40}$Ar, and compare these results with the other theoretical calculations available in the literature. \begin{figure} \begin{center} \includegraphics[height=.2\textheight,width=0.25\textwidth]{Fig2_noz.eps} \caption{Diagrammatic representation of the particle-hole (p-h) excitation induced by the $W$ boson in the large mass limit of the intermediate vector boson ($M_W \rightarrow \infty$).} \label{fig:neutrinoselfenergy} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[height=.2\textheight,width=0.60\textwidth]{rpa_noz.eps} \caption{RPA effects in the 1p1h contribution to the $W$ self energy, where particle-hole, $\Delta$-hole, $\Delta$-$\Delta$, etc.
excitations contribute.} \label{fg:fig2} \end{center} \end{figure} \section{Formalism}\label{Formalism} The reaction for the CC neutrino interaction with a nucleus is given by \begin{equation} \nu_l + ^{A}_{Z}\!X \to l^- + _{Z+1}^{A}\!Y~~~~~(l=e,\mu) \end{equation} for which the basic process is \begin{equation}\label{rxn1} \nu_l(k) + n(p) \to l^-(k') + p(p'). \end{equation} $^{A}_{Z}\!X(_{Z+1}^{A}\!Y)$ is the initial (final) nucleus, $k,~k'$ are the four momenta of the incoming and outgoing lepton, and $p,~p^\prime$ are the four momenta of the initial and final nucleon, respectively. The invariant matrix element for the process given in Eq.(\ref{rxn1}) is written as \begin{eqnarray}\label{qe_lep_matrix} {\cal M}=\frac{G_F}{\sqrt{2}}\cos\theta_c~l_\mu~J^\mu \end{eqnarray} where $G_F$ is the Fermi coupling constant (=1.16639$\times 10^{-5}~GeV^{-2}$) and $\theta_c(=13.1^\circ)$ is the Cabibbo angle. The leptonic weak current is given by \begin{eqnarray}\label{lep_curr} l_\mu&=&\bar{u}(k^\prime)\gamma_\mu(1 - \gamma_5)u(k), \end{eqnarray} and $J^\mu$ is the hadronic current given by \begin{eqnarray}\label{had_curr} J^\mu&=&\bar{u}(p')\Gamma^\mu u(p), \end{eqnarray} with \begin{eqnarray}\label{eq:had_int} \Gamma^\mu=F_1^V(Q^2)\gamma^\mu+F_2^V(Q^2)i\sigma^{\mu\nu}\frac{q_\nu}{2M} + F_A(Q^2)\gamma^\mu\gamma^5 + F_P(Q^2) \frac{q^\mu}{M}\gamma^5 , \end{eqnarray} where $Q^2(=-q^2)~\geq 0$ is the four momentum transfer squared and $M$ is the nucleon mass. $F_{1,2}^V(Q^2)$ are the isovector vector form factors and $F_A(Q^2)$, $F_P(Q^2)$ are the axial and pseudoscalar form factors, respectively. We have not considered the contribution from the second class currents. \begin{figure} \includegraphics[width=8cm,height=8cm] {sigma_C_combined_Gp_band.eps} \includegraphics[width=8cm,height=8cm]{sigma_Ar_combined_Gp_band.eps} \caption{$\sigma$ vs $E_{\nu_l}$, for $\nu_l$($l$ = $e^-,~\mu^-$) induced scattering on $^{12}C$ (left panel) and $^{40}Ar$ (right panel) nuclear targets.
The dashed line (line with circles) represents the $\nu_e$ ($\nu_\mu$) cross section obtained in the LFGM without RPA effects, while the upper (lower) band represents the $\nu_e$ ($\nu_\mu$) cross section with RPA. The bands correspond to the variation of $g^\prime$ in the range of 0.6-0.7.} \label{sigmacband} \end{figure} \begin{figure} \includegraphics[width=8cm,height=8cm]{sigma_C_e-1.eps} \includegraphics[width=8cm,height=8cm]{sigma_C_mu.eps}\\ \caption{$\sigma$ vs $E_{\nu_l}$, for $\nu_e$ (left panel) and $\nu_\mu$ (right panel) CCQE scattering cross sections on $^{12}C$ in the LFGM with RPA effects (solid line), the results of the NuWro event generator taken from Ref.~\cite{Spitz:2012gp} (dashed double-dotted line), Volpe et al.~\cite{Volpe:2000zn} in RPA (triangle right), Kolbe et al.~\cite{Kolbe:1999au} in CRPA~(cross), Paar et al.~\cite{Paar:2007fi} in RQRPA~(dotted line), Samana et al.~\cite{Samana:2011zz} in PQRPA (circle), and Smith and Moniz~\cite{Smith:1972xh} in RFGM (dashed line).} \label{sigmac} \end{figure} \begin{figure} \includegraphics[width=8cm,height=8cm]{sigma_Ar_e.eps} \includegraphics[width=8cm,height=8cm]{sigma_Ar_mu.eps}\\ \caption{$\sigma$ vs $E_{\nu_l}$, for $\nu_e$ (left panel) and $\nu_\mu$ (right panel) on $^{40}Ar$ in the LFGM with RPA effects (solid line), the results of the NuWro generator taken from Ref.~\cite{Spitz:2012gp} (dashed-double dotted line), and Smith and Moniz in RFGM~\cite{Smith:1972xh,Andreopoulos:2009rq} (dashed line).}
\label{sigmaar} \end{figure} The hadronic current contains the isovector vector form factors $F_{1,2}^V(Q^2)$ of the nucleons, which are given as \begin{equation}\label{f1v_f2v} F_{1,2}^V(Q^2)=F_{1,2}^p(Q^2)- F_{1,2}^n(Q^2) \end{equation} where $F_{1}^{p(n)}(Q^2)$ and $F_{2}^{p(n)}(Q^2)$ are the Dirac and Pauli form factors of the proton (neutron), which in turn are expressed in terms of the experimentally determined Sachs electric $G_E^{p,n}(Q^2)$ and magnetic $G_M^{p,n}(Q^2)$ form factors as \begin{eqnarray}\label{f1pn_f2pn} F_1^{p,n}(Q^2)&=&\left(1+\frac{Q^2}{4M^2}\right)^{-1}~\left[G_E^{p,n}(Q^2)+\frac{Q^2}{4M^2}~G_M^{p,n}(Q^2)\right]\\ F_2^{p,n}(Q^2)&=&\left(1+\frac{Q^2}{4M^2}\right)^{-1}~\left[G_M^{p,n}(Q^2)-G_E^{p,n}(Q^2)\right] \end{eqnarray} For the numerical calculations of $G_E^{p,n}(Q^2)$ and $G_M^{p,n}(Q^2)$ we have used the parameterization of Bradford et al.~\cite{Bradford:2006yz}. The isovector axial form factor is obtained from quasielastic neutrino and antineutrino scattering as well as from pion electroproduction data, and is parameterized as \begin{equation}\label{fa} F_A(Q^2)=F_A(0)~\left[1+\frac{Q^2}{M_A^2}\right]^{-2};~~F_A(0)=-1.267. \end{equation} The pseudoscalar form factor $F_P(Q^2)$ is dominated by the pion pole and is given in terms of the axial vector form factor $F_A(Q^2)$ using the Goldberger-Treiman (GT) relation~\cite{LlewellynSmith:1971zm}: \begin{equation}\label{fp} F_P(Q^2)=\frac{2M^2F_A(Q^2)}{m_\pi^2+Q^2}. \end{equation} The differential cross section corresponding to Eq.~\ref{rxn1} is given by \begin{equation}\label{sig0} \sigma_{0}({\bf q^2, k^\prime, p}) = \frac{1}{4\pi} \frac{k^2}{E_{\nu} E_{l}} \frac{M^2}{E_{n} E_{p}} \bar\Sigma\Sigma |{\cal M}^{2}| \delta(q_0 + E_{n} - E_{p}), \end{equation} where $q_0=E_{\nu_l}-E_l$, $E_n=\sqrt{{|\bf p|}^2 + M_n^2}$ and $E_p=\sqrt{{|{\bf p + q}|}^2 + M_p^2}$.
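As a quick numerical sketch, the dipole axial form factor of Eq.~(\ref{fa}) and the GT pseudoscalar form factor of Eq.~(\ref{fp}) can be evaluated directly; note that the axial mass $M_A$ and the hadron masses below are values we insert for illustration only, since the excerpt does not fix $M_A$:

```python
# Numerical sketch of the dipole axial form factor F_A(Q^2), Eq. (fa), and the
# pseudoscalar form factor F_P(Q^2) from the Goldberger-Treiman relation,
# Eq. (fp). M_A = 1.0 GeV and the masses below are illustrative assumptions;
# the text does not quote the M_A value used in the fits.

M_N = 0.93827      # nucleon (proton) mass in GeV
M_PI = 0.13957     # charged pion mass in GeV
M_A = 1.0          # assumed axial dipole mass in GeV
FA0 = -1.267       # F_A(0), as given in Eq. (fa)

def F_A(Q2):
    """Dipole axial form factor, Q2 in GeV^2."""
    return FA0 / (1.0 + Q2 / M_A**2) ** 2

def F_P(Q2):
    """Pion-pole-dominated pseudoscalar form factor via the GT relation."""
    return 2.0 * M_N**2 * F_A(Q2) / (M_PI**2 + Q2)
```

At $Q^2 = M_A^2$ the dipole gives $F_A = F_A(0)/4$, and the pion pole makes $F_P$ much larger than $F_A$ at small $Q^2$.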
The squared matrix element is obtained using Eq.(\ref{qe_lep_matrix}) and is given by \begin{equation}\label{mat_quasi} {|{\cal M}|^2}=\frac{G_F^2}{2}\cos^2\theta_c~{ L}_{\mu\nu} {J}^{\mu\nu}. \end{equation} In Eq.(\ref{mat_quasi}), ${ L}_{\mu\nu}$ is the leptonic tensor calculated to be \begin{eqnarray}\label{lep_tens} {L}_{\mu\nu}&=&{\bar\Sigma}\Sigma{l_\mu}^\dagger l_\nu=L_{\mu\nu}^{S} - i L_{\mu\nu}^{A},~~~~\mbox{with}\\ L_{\mu\nu}^{S}&=&8~\left[k_\mu k_\nu^\prime+k_\mu^\prime k_\nu-g_{\mu\nu}~k\cdot k^\prime\right]~~~~\mbox{and}\nonumber\\ L_{\mu\nu}^{A}&=&8~\epsilon_{\mu\nu\alpha\beta}~k^{\prime \alpha} k^{\beta}. \end{eqnarray} The hadronic tensor ${J}^{\mu\nu}$ is given by \begin{eqnarray}\label{had_tens} J^{\mu\nu}&=&\bar{\Sigma}\Sigma J^{\mu\dagger} J^\nu, \end{eqnarray} where $J^{\mu}$, defined in Eq.(\ref{had_curr}) with Eqs.(\ref{f1v_f2v}), (\ref{fa}) and (\ref{fp}), has been used for the numerical calculations. The detailed expression for the hadronic tensor $J^{\mu\nu}$ is given in Ref.~\cite{Akbar:2015yda}. When the process given by Eq.~(\ref{rxn1}) takes place in a nucleus, various nuclear medium effects like Pauli blocking, Fermi motion, binding energy corrections and nucleon correlations, etc., come into play. Moreover, the charged lepton produced in the final state moves in the Coulomb field of the residual nucleus, which affects its energy and momentum. We have taken these effects into account; they are briefly discussed below: \begin{enumerate} \item In the standard treatment of the Fermi gas model applied to neutrino reactions, the quantum states of the nucleons inside the nucleus are filled up to a Fermi momentum $p_{F}$, given by $p_{F}= \left[3 \pi^2 \rho\right]^{\frac{1}{3}}$, where $\rho$ is the density of the nucleus.
In a nuclear reaction, the momentum of the initial nucleon $p$ is therefore constrained to be $p < p_{F}$ and $p^\prime (= |{\bf p} + {\bf q}|) > p_{F}^\prime$, where $p_{F}$ is the Fermi momentum of the initial nucleon target in the Fermi sea, and $p_{F}^\prime$ is the Fermi momentum of the outgoing nucleon. The total energies of the initial($i$) and final($f$) nucleons are $E_i=\sqrt{{|\bf p|}^2+M_i^2}$ and $E_f=\sqrt{|{\bf p} + {\bf q}|^2+M_f^2}$. In this model the Fermi momentum and energy are determined by the nuclear density, which is constant. In the local Fermi gas model (LFGM), the Fermi momenta of the initial and final nucleons are not constant but depend on the interaction point $\vec r$, and are given by $p_{F_n}(r)$ and $p_{F_p}(r)$ for the neutron and proton, respectively, where $p_{F_n}(r)= \left[3 \pi^2 \rho_n (r)\right]^{\frac{1}{3}}$ and $p_{F_p}(r)= \left[3\pi^2 \rho_p (r)\right]^{\frac{1}{3}}$, $\rho_n (r)$ and $\rho_p (r)$ being the neutron and proton nuclear densities, respectively. We use the proton density $\rho_p (r) = \frac{Z}{A}\rho (r)$ and the neutron density $\rho_n (r) = \frac{A-Z}{A}\rho (r)$, where $\rho (r)$ is determined experimentally from electron-nucleus scattering experiments~\cite{Vries:1974}. We use the modified harmonic oscillator (MHO) density \begin{eqnarray}\label{MHO} \rho(r) = \rho(0) \left[ 1 + a \left( \frac{r}{R} \right)^{2} \right] \exp\left[ -\left( \frac{r}{R} \right)^{2} \right] \end{eqnarray} for $^{12}C$ and the 2-parameter Fermi density (2pF) \begin{eqnarray}\label{2pF} \rho(r) = \frac{\rho(0)} {\left[ 1 + \exp\left( \frac{r-R}{a} \right)\right]} \end{eqnarray} for $^{40}Ar$, with $R$ and $a$ as the density parameters, taken from Refs.~\cite{Vries:1974, GarciaRecio:1991wk}. In Table-\ref{tab:Q-value_BE}, we show the nuclear density and other parameters needed for the numerical calculations in this paper.
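The local Fermi momenta that drive the LFGM follow directly from the densities above; a small numerical sketch (the 2pF parameters $\rho(0)$, $R$, $a$ below are placeholders, not the Table-\ref{tab:Q-value_BE} values):

```python
import math

# Sketch of the local-Fermi-gas ingredients: proton/neutron densities from a
# 2-parameter Fermi profile and the local Fermi momenta
# p_F(r) = (3 pi^2 rho(r))^(1/3). rho0, R, a are placeholder values, not the
# Table I parameters for 40Ar.

def rho_2pf(r, rho0, R, a):
    """2-parameter Fermi density profile, Eq. (2pF)."""
    return rho0 / (1.0 + math.exp((r - R) / a))

def fermi_momentum(rho):
    """Local Fermi momentum p_F = (3 pi^2 rho)^(1/3)."""
    return (3.0 * math.pi**2 * rho) ** (1.0 / 3.0)

Z, A = 18, 40                      # 40Ar
rho0, R, a = 0.17, 3.53, 0.54      # placeholder parameters (fm^-3, fm, fm)
r = 1.0                            # interaction point in fm
rho_p = (Z / A) * rho_2pf(r, rho0, R, a)
rho_n = ((A - Z) / A) * rho_2pf(r, rho0, R, a)
p_Fp = fermi_momentum(rho_p)       # proton Fermi momentum in fm^-1
p_Fn = fermi_momentum(rho_n)       # neutron Fermi momentum in fm^-1
```

Because $^{40}$Ar has more neutrons than protons, $p_{F_n}(r) > p_{F_p}(r)$ at every $r$, which is what makes the neutron and proton Fermi levels differ in the $Q_F(r)$ correction discussed below.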
In the local density approximation (LDA), the cross section $\sigma$ for $\nu_l$ scattering from a nucleon moving in the nucleus with momentum ${\bf p}$ is given by~\cite{Singh:1993rg}: \begin{equation}\label{sigma_lda} \sigma(q^2, k^\prime) = \int 2 d{\bf r} d{\bf p} \frac{1}{(2\pi)^3} n_n({\bf p(r)}) [1-n_{p}({\bf p(r)} + {\bf q(r)})] \sigma_{0}({\bf q^2, k^\prime, p}), \end{equation} where $\sigma_{0}$ is given by Eq.~(\ref{sig0}). In the above expression, $n_n({\bf p(r)})$ and $n_{p}({\bf p(r)} + {\bf q(r)})$ represent the occupation numbers for the neutron and proton, respectively, i.e., at a given position ${\bf r}$, $n_n({\bf p(r)})=1$ for $p \le p_{F_n}(r)$, and 0 otherwise, and $n_{p}({\bf p(r)} + {\bf q(r)})=1$ for $|{\bf p(r)} + {\bf q(r)}| \le p_{F_p}(r)$, and 0 otherwise, so that $[1-n_p]$ enforces Pauli blocking of the final proton. Instead of using Eqs.~(\ref{sig0}) and (\ref{sigma_lda}), we use the methods of many-body field theory~\cite{Fetter}, where the reaction cross section for the process $\nu_l + n \to l^- + p$ in a nuclear medium is written in terms of the imaginary part of the Lindhard function $U_N(q_0,\vec q)$ corresponding to the particle-hole (p-h) excitation diagram shown in Fig.~\ref{fig:neutrinoselfenergy}~\cite{Singh:1993rg}. The imaginary part of $U_N(q_0,\vec q)$ is obtained by cutting the $W$ self-energy diagram along the horizontal line (Fig.~\ref{fig:neutrinoselfenergy}) and applying the Cutkosky rules~\cite{Itzykson}. This is equivalent to replacing the expression \begin{equation} \int \frac{d\bf{p}}{(2\pi)^3}{n_n(\bf{p})} [1-n_{p}({\bf p} + {\bf q})] \frac{M_n M_p}{E_{n}({\bf p}) E_{p}(\bf p+\bf q)}\delta[q_0+E_n-E_p] \end{equation} occurring in Eq.~(\ref{sigma_lda}) by $-(1/\pi)\,{\rm Im}\, U_N(q_0,\vec{q})$, where \begin{equation} U_N(q_0,{\bf q}) = \int \frac{d{\bf p}}{(2\pi)^3} \frac{M_{n}M_{p}}{E_{n}({\bf p})E_{p}(\bf p + \bf q)} \frac{n_{n}({\bf p}) [1-n_{p}({\bf p} + {\bf q})]}{q_{0}+E_{n}({\bf p})-E_{p}(\bf p+\bf q)+i\epsilon }.
\end{equation} The imaginary part of the Lindhard function is calculated to be~\cite{Singh:1993rg}: \begin{equation}\label{lind} Im ~ U_N (q_{0}, {\bf q}) = -\frac{1}{2\pi} \frac{M_{p} M_{n}}{|{\bf q}|} [E_{F_{1}} - A] \end{equation} for $q^2 < 0$, $E_{F_2} - q_{0} < E_{F_{1}}$ and $\frac{-q_0 + |{\bf q}|\sqrt{1 - \frac{4 M^2}{q^2} }}{2} < E_{F_{1}}$; otherwise $Im ~U_N = 0$. In the above expression $E_{F_{1(2)}} = \sqrt{p_{F_{n(p)}}^2 + M_{n(p)}^{2}}~$, and $A = Max \left[ M_{n}, E_{F_{2}} - q_{0}, \frac{-q_0 + |{\bf q}|\sqrt{1 - \frac{4 M^2}{q^2} }}{2}\right]$. \item When the reaction $\nu_l + n \rightarrow l^- + p$ takes place in the nucleus, the first consideration is the $Q$-value, which inhibits the reaction in the nucleus. The experimental $Q$-values corresponding to the g.s. $\rightarrow$ g.s. transition are given in Table-\ref{tab:Q-value_BE} for the two nuclei. We also introduce $Q_F(r)= E_{F_{2}}(r) - E_{F_{1}}(r)$ to take into account the difference in the Fermi levels of the initial and final nuclei, which results in an effective $Q$-value, $Q - Q_F(r)$, to be used in the local Fermi gas model. These considerations imply that $q_0$ should be modified to $q_0^{\text{eff}}(r) = q_0-(Q-Q_F(r))$ in the calculation of the Lindhard function in Eq.~(\ref{lind}). \item In the charged current reaction, the energy and momentum of the outgoing charged lepton are modified due to the Coulomb interaction with the final nucleus. The Coulomb distortion effect on the outgoing lepton has been taken into account in an effective momentum approximation (EMA)~\cite{Engel:1997fy,Kim:1996ua,Traini:2001kz,Aste:2005wc}, in which the lepton momentum and energy are modified by replacing $E_l$ by $E_l+V_c(r)$.
The form of the Coulomb potential $V_{c}(r)$ considered here is: \begin{equation}\label{cool} V_{c}(r) =-\alpha~ 4\pi\left(\frac{1}{r}\int_{0}^{r}\frac{\rho_{p}(r^\prime)}{Z}r^{\prime 2}dr^{\prime} + \int_{r}^{\infty}\frac{\rho_{p} (r^\prime)}{Z}r^{\prime}dr^{\prime}\right), \end{equation} where $\alpha$ is the fine-structure constant and $\rho_{p}(r)$ is the proton density of the final nucleus. Incorporation of these considerations results in the modification of the argument of the Lindhard function (Eq.~(\ref{lind})), i.e. \[Im U_N (q_0, {\bf q})~\longrightarrow~Im U_N (q_0^{\text{eff}}(r)~-~V_c(r), {\bf q}).\] With the inclusion of these nuclear effects, the cross section $\sigma(E_\nu)$ is written as {\footnotesize \begin{eqnarray}\label{xsection_medeffect} \sigma(E_\nu)=-2{G_F}^2\cos^2{\theta_c}\int^{r_{max}}_{r_{min}} r^2 dr \int^{{k^\prime}_{max}}_{{k^\prime}_{min}}k^\prime dk^\prime \int_{Q^{2}_{min}}^{Q^{2}_{max}}dQ^{2}\frac{1}{E_{\nu_l}^{2} E_l} L_{\mu\nu}J^{\mu\nu}Im U_N (q_0^{\text{eff}}(r) - V_c(r), {\bf q}).~~~~ \end{eqnarray}} We must point out that in the above expression the outgoing lepton momentum and energy are $r$-dependent, i.e. $k^\prime=k^\prime(r)$ and $E_l=E_l(r)$, and only in the asymptotic limit ($r \rightarrow \infty$) do they become independent of $r$. With the incorporation of the Coulomb effect, $E_l(r)$ is modified to $E_l(r) + V_{c}(r)$, and $|\vec k^\prime(r)|=\sqrt{(E_l(r) + V_{c}(r))^2 - m_l^2}$. Accordingly, the energy transfer is modified to $q_0^{\text{eff}}(r) - V_c(r)$, and the three-momentum transfer $\vec q$ is modified to $\vec q(r)= \vec k - \vec k^\prime(r)$.
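The Coulomb potential of Eq.~(\ref{cool}) is easy to evaluate numerically for a given proton density. The sketch below is an illustrative example, not the calculation of this paper: it assumes a uniform-sphere proton density with hypothetical values $Z_f = 7$ and radius $3$ fm, normalized to the total charge $Z_f$ so that the potential reduces to the point-charge form $-\alpha Z_f/r$ outside the nucleus, and checks that $V_c$ is attractive (negative) for the negatively charged lepton.

```python
import numpy as np

ALPHA = 1.0 / 137.036     # fine-structure constant
HBARC = 197.33            # MeV fm
Z_F, R_CH = 7, 3.0        # hypothetical final nucleus: charge 7, uniform radius 3 fm

r_grid = np.linspace(1e-4, 12.0, 4000)   # radial grid in fm
# uniform-sphere proton density normalized to Z_F: 4*pi*int rho_p r'^2 dr' = Z_F
rho_p = np.where(r_grid < R_CH, 3.0 * Z_F / (4.0 * np.pi * R_CH**3), 0.0)

def v_coulomb(r):
    """V_c(r) = -4*pi*alpha [ (1/r) int_0^r rho_p r'^2 dr' + int_r^inf rho_p r' dr' ], in MeV."""
    inner = np.trapz(np.where(r_grid <= r, rho_p * r_grid**2, 0.0), r_grid)
    outer = np.trapz(np.where(r_grid > r, rho_p * r_grid, 0.0), r_grid)
    return -4.0 * np.pi * ALPHA * (inner / r + outer) * HBARC

v0 = v_coulomb(1e-3)      # central value; analytically -3*alpha*Z_F/(2*R_CH)
v_far = v_coulomb(10.0)   # outside the nucleus; analytically -alpha*Z_F/r
```

For these assumed parameters the central value is about $-5$ MeV, a few-MeV shift of the outgoing lepton energy, which is significant for the low-energy kinematics considered here.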
\begin{table} \begin{tabular}{|c|c|c|c|c|}\hline\hline Nucleus & Q-Value($\nu$)& $R_p$ & $R_n$ & a \\ & (MeV) & (fm)&(fm)&$(fm)^\ast$\\ \hline $^{12}C$& 17.84 &1.69 &1.692 &1.082(MHO)\\ $^{40}Ar$& 3.64 & 3.47& 3.64 & 0.569(2pF)\\ \hline \end{tabular} \caption{Q-value of the reaction for the $^{12}C$ and $^{40}Ar$ nuclear targets. The last three columns give the parameters of the MHO and 2pF densities~\cite{Nieves:2004wx, Vries:1974, GarciaRecio:1991wk}. $^{\ast}$For the MHO density, $a$ is dimensionless.}\label{tab:Q-value_BE} \end{table} \item In the nucleus, the strength of the electroweak couplings may change from their free-nucleon values due to the presence of strongly interacting nucleons. Conservation of the vector current (CVC) forbids any change in the charge coupling, while the magnetic and axial-vector couplings are likely to change from their free-nucleon values. There exists considerable work on understanding the quenching of the magnetic moment and the axial charge in nuclei due to nucleon-nucleon correlations. In our approach these are reflected in the modification of the nuclear response in the longitudinal and transverse channels, leading to some reduction. We calculate this reduction in the vector-axial (VA) and axial-axial (AA) response functions due to the long range nucleon-nucleon correlations treated in the random phase approximation (RPA), which is shown diagrammatically in Fig.~(\ref{fg:fig2}). The weak nucleon current described by Eq.~(\ref{had_curr}) gives, in the non-relativistic limit, terms like $F_A \vec{\sigma}\tau_+$ and $i F_2^V \frac{\vec{\sigma}\times \vec{q}}{2M}\tau_+$ which generate spin-isospin transitions in nuclei. While the term $i F_2^V \frac{\vec{\sigma}\times \vec{q}}{2M}\tau_+$ couples only to the transverse excitations, the term $F_A \vec{\sigma}\tau_+$ couples to the transverse as well as the longitudinal channels.
These channels produce different RPA responses in the longitudinal and transverse channels due to the different NN potential in these channels when the diagrams of Fig.~(\ref{fg:fig2}) are summed up. As a consequence, a term proportional to $F^2_A \delta_{ij}$ in $J^{ij}$ is replaced by $J^{ij}_{RPA}$ as \cite{Akbar:2015yda}: \begin{equation}\label{f2a_rpa} J^{ij}\rightarrow J^{ij}_{RPA}= F^2_A{Im U_N}\left[\frac{{\bf{\hat{q_i}}{\hat{q_j}}}}{1-U_NV_l}+\frac{\delta_{ij}-{\bf{\hat{q_i}}{\hat{q_j}}}} {1-U_NV_t}\right], \end{equation} where the first and second terms give the modification of $J^{ij}$ in the longitudinal and transverse channels, respectively. In Eq.~(\ref{f2a_rpa}), $V_l$ and $V_t$ are the longitudinal and transverse parts of the nucleon-nucleon potential calculated using $\pi$ and $\rho$ exchanges, and are given by \begin{eqnarray}\label{longi_part} V_l(q) = \frac{f^2}{m_\pi^2}\left[\frac{q^2}{-q^2+m_\pi^2}{\left(\frac{\Lambda_\pi^2-m_\pi^2}{\Lambda_\pi^2-q^2}\right)^2}+g^\prime\right],\nonumber\\ V_t(q) = \frac{f^2}{m_\pi^2}\left[\frac{q^2}{-q^2+m^2_\rho}{C_\rho}{\left(\frac{{\Lambda_\rho}^2-m^2_\rho}{{\Lambda_\rho}^2-q^2}\right)^2}+g^\prime \right],\end{eqnarray} where $\frac{f^{2}}{4\pi}$ = 0.8, $\Lambda_\pi$ = 1.3 GeV, $C_\rho$ = 2, $\Lambda_\rho$ = 2.5 GeV, $m_\pi$ and $m_\rho$ are the pion and rho meson masses, and $g^\prime$ is the Landau-Migdal parameter, taken to be $0.7$, which has been used quite successfully to explain weak processes in nuclei~\cite{Singh:1993rg,Singh:1998md,SajjadAthar:2005ke,Chauhan:2017tgf}. However, in some recent works, the Valencia group~\cite{Nieves:2004wx,Nieves:2017lij,Valverde:2006zn} has used the value $g'=$ 0.63. We have, therefore, studied the dependence of the total cross section, as well as of the lepton energy and angular distributions, on $g'$ by varying $g'$ in the range 0.6 to 0.7.
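The potentials of Eq.~(\ref{longi_part}) can be tabulated directly from the quoted parameter values. The sketch below (a minimal numerical illustration, not the authors' code) evaluates $V_l$ and $V_t$ at spacelike four-momentum transfers; note that at $q^2 = 0$ both reduce to the common contact piece $(f^2/m_\pi^2)\,g'$, while at quasielastic $q^2 < 0$ the attractive pion-exchange term drives $V_l$ negative.

```python
import numpy as np

# Parameter values quoted in the text (GeV units)
F2 = 4.0 * np.pi * 0.8        # f^2, from f^2/(4 pi) = 0.8
M_PI, M_RHO = 0.139, 0.776    # pion and rho masses, GeV
LAM_PI, LAM_RHO = 1.3, 2.5    # form-factor cutoffs, GeV
C_RHO, G_PRIME = 2.0, 0.7     # C_rho and the Landau-Migdal parameter

def v_long(q2, gp=G_PRIME):
    """Longitudinal ph potential V_l(q): pi exchange plus contact term, GeV^-2."""
    ff = ((LAM_PI**2 - M_PI**2) / (LAM_PI**2 - q2)) ** 2
    return (F2 / M_PI**2) * (q2 / (-q2 + M_PI**2) * ff + gp)

def v_trans(q2, gp=G_PRIME):
    """Transverse ph potential V_t(q): rho exchange plus contact term, GeV^-2."""
    ff = ((LAM_RHO**2 - M_RHO**2) / (LAM_RHO**2 - q2)) ** 2
    return (F2 / M_PI**2) * (q2 / (-q2 + M_RHO**2) * C_RHO * ff + gp)

# Example: spacelike q^2 = -0.1 GeV^2, typical of quasielastic kinematics
vl, vt = v_long(-0.1), v_trans(-0.1)
```

Since the RPA sums in Eq.~(\ref{f2a_rpa}) divide by $1-U_N V_{l,t}$, the opposite signs of $V_l$ and $V_t$ at quasielastic kinematics are what produce the different quenching of the longitudinal and transverse responses.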
The effect of the $\Delta$ degrees of freedom in the nuclear medium is included in the calculation of the RPA response by considering the effect of ph-$\Delta$h and $\Delta$h-$\Delta$h excitations. This is done by replacing $U_N \rightarrow U_N^{\prime}=U_N+U_\Delta$, where $U_\Delta$ is the Lindhard function for the $\Delta$h excitation in the nuclear medium. The expressions for $U_N$ and $U_\Delta$ are taken from Ref.~\cite{Oset1}. The different couplings of $N$ and $\Delta$ are incorporated in $U_N$ and $U_\Delta$, and then the same interaction strengths ($V_l$ and $V_t$) are used to calculate the RPA response. With the incorporation of these nuclear medium effects, the expression for the total scattering cross section $\sigma(E_\nu)$ is given by Eq.~(\ref{xsection_medeffect}) with $J^{\mu \nu}$ replaced by $J^{\mu \nu}_{RPA}$ (defined in Eq.~(\ref{f2a_rpa})), i.e. \begin{eqnarray}\label{xsection_final} \sigma(E_\nu)&=&-2{G_F}^2 \cos^{2}\theta_c \int^{r_{max}}_{r_{min}} r^2 dr \int^{{k^\prime}_{max}}_{{k^\prime}_{min}}k^\prime dk^\prime \int_{Q^{2}_{min}}^{Q^{2}_{max}}dQ^{2}\frac{1}{E_{\nu_l}^{2} E_l} L_{\mu\nu}J^{\mu \nu}_{RPA} Im{U_N}(q_0^{\text{eff}}(r)~-~V_c(r), {\bf q}),~~~~~~ \end{eqnarray} where $J^{\mu \nu}_{RPA}$ is the hadronic tensor with its various components modified due to the long range correlation effects treated in RPA, as shown in Eq.~(\ref{f2a_rpa}) for the leading term proportional to $F_A^2$. The explicit expressions for $J^{\mu \nu}_{RPA}$ are given in Ref.~\cite{Akbar:2015yda}. \end{enumerate} \begin{figure} \includegraphics[width=8cm,height=8cm]{ratio-sigma-nue.eps} \includegraphics[width=8cm,height=8cm]{ratio-sigma-numu.eps} \caption{Ratio $R=\frac{\sigma^{^{40}Ar}_{\nu_{_l}}}{\sigma^{^{12}C}_{\nu_{_l}}}$ vs $E_{\nu}$ for $\nu_e$ (left panel) and $\nu_\mu$ (right panel). Solid (dashed) line represents the results obtained using the LFGM with (without) RPA.
In the inset of the $\nu_e$ case (left panel), the results of the ratio are obtained without (dashed line) and with (solid line) RPA. In the inset of the $\nu_\mu$ case (right panel), the results are compared with the results of Van Dessel et al.~\cite{VanDessel:2017ery} in CRPA (dotted line).}\label{ratio_sigma} \end{figure} \begin{figure} \includegraphics[width=12cm,height=8cm]{ratio_sigma_wRPA.eps} \caption{Ratio of $\nu_\mu$ to $\nu_e$ scattering cross sections $\frac{\sigma_{\nu_\mu}}{\sigma_{\nu_e}}$ vs $E_\nu$ for $^{12}C$ (solid line) and $^{40}Ar$ (dashed line) in the LFGM with RPA.} \label{ratioemu} \end{figure} \begin{figure} \includegraphics[height=8cm,width=8cm]{dsdT_C_wRPA_Gp_band.eps} \includegraphics[height=8cm,width=8cm]{dsdT_Ar_wRPA_Gp_0.6.eps} \caption{$\frac{d\sigma}{dT_l}$ vs $T_l$ for $\nu_l$ induced processes on $^{12}C$ (left panel) and $^{40}Ar$ (right panel) nuclear targets at $E_\nu = 236$ MeV. The results are obtained using the LFGM with RPA. The variation of $g'$ from 0.6 to 0.7 is represented by the band. The curves on the left (right) side of each panel represent the results for the $\mu^-(e^-)$ kinetic energy distribution induced by $\nu_\mu(\nu_e)$ scattering.} \label{dTmugp} \end{figure} \begin{figure} \includegraphics[height=8cm,width=8cm]{lepton_mass_dep_dTmu_C12.eps} \includegraphics[height=8cm,width=8cm]{dsdT_Ar_wRPA_Ma.eps} \caption{$\frac{d\sigma}{dT_\mu}$ vs $T_\mu$ for $\nu_\mu$ induced processes on $^{12}C$ (left panel) and $^{40}Ar$ (right panel) nuclear targets at $E_\nu = 236$ MeV. The results are obtained using the LFGM with RPA for different values of $M_A$, viz. $M_A =$ 1.0 GeV (dotted line), 1.1 GeV (dash double-dotted line) and 1.2 GeV (circles).}\label{dTmu} \end{figure} \begin{figure} \includegraphics[height=8cm,width=12cm]{v-pandey.eps} \caption{Kinetic energy distribution $\frac{d\sigma}{dT_\mu}$ vs $T_\mu$ for the $\nu_\mu$ induced process on the $^{12}C$ nuclear target.
The results of the LFGM with RPA are represented by the solid line at $E_\nu = 200$ MeV and by the dash double-dotted line at $E_\nu = 300$ MeV. The results obtained from Ref.~\cite{Pandey:2014tza} at $E_\nu = 200$ MeV (circles) and $E_\nu = 300$ MeV (squares) are also presented in the figure.}\label{pandey} \end{figure} \begin{figure} \includegraphics[width=8cm,height=8cm]{dsdcos_C_wRPA_Gp_0.6.eps} \includegraphics[width=8cm,height=8cm]{dsdcos_Ar_wRPA_Gp_0.6.eps} \caption{$\frac{d\sigma}{d\cos\theta_l}$ vs $\cos\theta_l$ for $\nu_l$ induced processes on $^{12}C$ (left panel) and $^{40}Ar$ (right panel) at $E_{\nu} = 236$ MeV, obtained by using the LFGM with RPA. The variation of $g'$ from 0.6 to 0.7 is represented by the band. Solid and dashed lines present the results of $\nu_e$ scattering, and dotted and dash double-dotted lines represent the results of $\nu_\mu$ scattering.}\label{dcos} \end{figure} \begin{figure} \includegraphics[height=8cm,width=8cm]{lepton_mass_dep_dcosth_C12.eps} \includegraphics[height=8cm,width=8cm]{lepton_mass_dep_dcosth_Ar40.eps} \caption{$\frac{d\sigma}{d\cos\theta_l}$ vs $\cos\theta_l$ for $\nu_e$ (top curves) and $\nu_\mu$ (bottom curves) induced processes on $^{12}C$ (left panel) and $^{40}Ar$ (right panel) nuclear targets at $E_\nu = 236$ MeV, obtained by using the LFGM with RPA for different values of $M_A$, viz. $M_A =$ 1.0 GeV (circles), 1.1 GeV (dash double-dotted line) and 1.2 GeV (dotted line).} \label{dcos_Ma} \end{figure} \section{Results and discussion}\label{Results} For the numerical calculations, we have used Eq.~(\ref{xsection_medeffect}) to obtain the results for the charged current $\nu_e$ and $\nu_\mu$ scattering cross sections on the nuclear targets in the local Fermi gas model (LFGM) with the inclusion of Fermi motion and Pauli blocking, and Eq.~(\ref{xsection_final}) when RPA effects are also included.
Furthermore, we have taken into account the Coulomb distortion effect on the outgoing charged lepton in both cases, using the EMA with the Coulomb potential given in Eq.~(\ref{cool}). In Fig.~\ref{sigmacband}, we present the results of the $\nu_l$ ($l=e,\mu$) induced charged lepton production cross sections $\sigma$ vs $E_{\nu_l}$ in $^{12}C$ and $^{40}Ar$. We find a large reduction in the cross section due to the nuclear medium effects. For example, in the case of $\nu_e$ scattering on the $^{12}C$ ($^{40}Ar$) nuclear target, when the cross section is obtained using the LFGM without RPA effects, the reduction in the cross section from the free nucleon case (not shown here) is $\sim$50\% (35\%) at $E_{\nu_e}=$ 150 MeV, $\sim$38$\%$ (20$\%$) at $E_{\nu_e}=$ 200 MeV and $\sim$30$\%$ (15$\%$) at $E_{\nu_e}=$ 236 MeV. When the RPA effects are also taken into account, there is a further reduction in the cross section, which is $\sim$48\% (53\%) at $E_{\nu_e}=$ 150 MeV, $\sim$45$\%$ (50$\%$) at $E_{\nu_e}=$ 200 MeV and $\sim$42$\%$ (47$\%$) at $E_{\nu_e}=$ 236 MeV. In the case of $\nu_\mu$ scattering, this reduction is $\sim$85$\%$ (65$\%$) at $E_{\nu_\mu}=$ 150 MeV, $\sim$60$\%$ (43$\%$) at $E_{\nu_\mu}=$ 200 MeV and $\sim$47$\%$ (30$\%$) at $E_{\nu_\mu}=$ 236 MeV without the RPA correlations, and there is a further reduction of $\sim$55$\%$ (60$\%$) at $E_{\nu_\mu}=$ 150 MeV, $\sim$50$\%$ (55$\%$) at $E_{\nu_\mu}=$ 200 MeV and $\sim$45$\%$ (50$\%$) at $E_{\nu_\mu}=$ 236 MeV when RPA effects are included. We have also shown in these figures the dependence of the cross section on the Landau-Migdal parameter $g'$ used in Eq.~(\ref{longi_part}) by varying the value of $g'$ in the range 0.6--0.7. The bands shown in the figures correspond to the change in the cross section due to the variation of $g'$ in this range. We find that with $g'=0.7$ the cross section in $^{12}C$/$^{40}Ar$ decreases by about 10\% for $\nu_l$ ($l=e,\mu$) scattering at 236 MeV from the results obtained with $g'=0.6$.
In Figs.~\ref{sigmac} and \ref{sigmaar}, we have compared the present results for $\nu_e - ^{12}C$ with the results of the NuWro generator~\cite{Juszczak:2009qa}, Volpe et al.~\cite{Volpe:2000zn} in the random phase approximation, Kolbe et al.~\cite{Kolbe:1999au} in the continuum random phase approximation, Paar et al.~\cite{Paar:2007fi} in the relativistic quasiparticle random phase approximation, Samana et al.~\cite{Samana:2011zz} in the projected quasiparticle random phase approximation-S6, and Smith and Moniz~\cite{Smith:1972xh} in the relativistic Fermi gas model. In the case of the $\nu_\mu - ^{12}C$, $\nu_e - ^{40}Ar$ and $\nu_\mu - ^{40}Ar$ induced processes, the cross section results are compared with the results of Smith and Moniz~\cite{Smith:1972xh} and of the NuWro generator~\cite{Spitz:2012gp,Juszczak:2009qa}, in which the nucleon spectral function of Benhar et al.~\cite{Benhar:2005dj} has been used. In the preliminary simulation studies for determining the neutrino oscillation parameters, Spitz et al.~\cite{Spitz:2014hwa,Spitz:2012gp} have used the NuWro~\cite{Juszczak:2009qa} prediction of 1.3$\times$10$^{-38}$ cm$^2$ per neutron for the total cross section for $\nu_\mu-^{12}C$ and $\nu_\mu-^{40}Ar$ scattering at $E_\nu=$ 236 MeV, and the same value has also been used by Axani et al.~\cite{Axani:2015dha} for the $\nu_\mu-^{12}C$ cross section. In view of this, we have studied the ratio R = $\frac{\sigma^{^{40}Ar}}{\sigma^{^{12}C}}$ as a function of $E_\nu$. In Fig.~\ref{ratio_sigma}, we have shown the results for R obtained using the present model with and without RPA effects for the $\nu_e$ and $\nu_\mu$ induced processes, and have also made a comparison with the recent results reported by Van Dessel et al.~\cite{VanDessel:2017ery} in CRPA for the $\nu_\mu$ induced process.
The measurement of the neutrino-nucleus cross sections induced by $\nu_\mu$ and $\nu_e$, and of the $\nu_\mu/\nu_e$ cross section ratio, is an important input in the analysis of $\nu_\mu \to \nu_e$ oscillations in the appearance channel. This ratio also provides an experimental validation of the theoretical calculations of the various effects arising due to the lepton mass dependent terms in the standard model, especially the pseudoscalar form factors and the second class currents~\cite{Akbar:2015yda,Day:2012gb}, and could provide possible evidence of muon-electron non-universality. Moreover, this ratio is also a key parameter in improving the sensitivity of measuring the CP violation phase $\delta_{CP}$ in the future experiments on neutrino oscillations~\cite{Huber:2007em}. We have plotted in Fig.~\ref{ratioemu} the ratio of the $\nu_\mu$ to $\nu_e$ cross sections as a function of energy in the low energy region relevant for the experiments with KDAR neutrinos. In Fig.~\ref{dTmugp}, we have presented the results of $\frac{d\sigma}{dT_l}$ vs $T_l$ ($l=e,\mu$) for $\nu_e$ and $\nu_\mu$ induced processes in $^{12}C$ (left panel) and $^{40}Ar$ (right panel) nuclear targets at $E_\nu = 236$ MeV. The results are shown by varying $g'$ in the range 0.6 to 0.7 using $M_A = 1.05$ GeV. In Fig.~\ref{dTmu}, we have presented the results for $\frac{d\sigma}{dT_\mu}$ vs $T_\mu$ in $^{12}C$ and $^{40}Ar$ nuclear targets at $E_\nu = 236$ MeV by varying $M_A$ in the range 1.0--1.2 GeV. We observe that the kinetic energy distribution shows very little sensitivity to $M_A$: it changes by around 5\% when we vary the value of $M_A$ by 20\%. In Fig.~\ref{pandey}, we show the energy distribution $\frac{d\sigma}{dT_\mu}$ vs $T_\mu$ at two different neutrino energies, viz. $E_{\nu_\mu}=$ 200 MeV and $E_{\nu_\mu}=$ 300 MeV, on $^{12}C$ calculated in the present model, and compare them with the results of Pandey et al.~\cite{Pandey:2014tza}.
We see that there is a wide variation in the energy distribution of the muons predicted in these two models. In Fig.~\ref{dcos}, we have presented the results of the angular distribution $\frac{d\sigma}{d\cos\theta_l}$ vs $\cos\theta_l$ for $\nu_e$ and $\nu_\mu$ induced processes in $^{12}C$ (left panel) and $^{40}Ar$ (right panel) nuclear targets at $E_\nu = 236$ MeV. The results are obtained by varying $g'$ in the range 0.6 to 0.7 using $M_A = 1.05$ GeV. In Fig.~\ref{dcos_Ma}, we have presented the results for $\nu_e$ and $\nu_\mu$ induced processes in $^{12}C$ and $^{40}Ar$ nuclear targets at $E_\nu = 236$ MeV by varying $M_A$ in the range 1.0--1.2 GeV. We observe that there is some sensitivity to $M_A$ in the case of the angular distribution, especially at backward angles corresponding to higher $Q^2$. \section{Summary and Conclusions} We have presented a theoretical description of inclusive quasielastic $\nu_e$ and $\nu_\mu$ scattering induced by the weak charged current on $^{12}C$ and $^{40}Ar$, relevant for the future experiments planned with KDAR neutrinos. These KDAR neutrinos are monoenergetic, with $E_{\nu_\mu} = 236$ MeV. The neutrino oscillation experiments in the $\nu_\mu \rightarrow \nu_\mu$ and $\nu_\mu \rightarrow \nu_e$ channels have a background from the $K_{\mu_3}$ and $K_{e_3}$ decay modes, with a continuous energy spectrum of muon neutrinos with $E_{\nu_\mu} \le 215$ MeV and electron neutrinos with $E_{\nu_e} \le 235$ MeV. We have therefore studied the nuclear medium effects in the neutrino-nucleus cross sections in the energy region $E_{\nu_l} \le 300$ MeV for QE scattering of $\nu_e$ and $\nu_\mu$ from the $^{12}C$ and $^{40}Ar$ nuclear targets proposed to be used in the future experiments planned at the JPARC and Fermilab facilities with liquid scintillator (LS) and LArTPC detectors.
The calculations have been done in a microscopic model of the nucleus which takes into account the effects of Fermi motion, binding energy and long range nucleon-nucleon correlations through the RPA. The method has earlier been applied successfully to reproduce the low energy neutrino-nucleus cross sections observed in the LSND, KARMEN and LAMPF experiments. The effect of Pauli blocking and RPA correlations is to drastically reduce the cross sections relative to the free nucleon case. In the energy region $E_\nu = 150$--$250$ MeV, the overall reduction due to Pauli blocking and RPA correlations in $^{12}C$ ($^{40}Ar$) varies from 70$\%$ (75$\%$) to 55$\%$ (53$\%$) in the case of $\nu_e$-nucleus scattering and from 95$\%$ (85$\%$) to 68$\%$ (63$\%$) in the case of $\nu_\mu$-nucleus scattering. There is an uncertainty of about 10$\%$ due to the Landau-Migdal parameter used in the treatment of the RPA correlations. The results have been compared with the results of other calculations in the literature. The cross section obtained using the present model with RPA effects is around 50$\%$ smaller than the results of the relativistic Fermi gas model (RFGM) of Smith and Moniz~\cite{Smith:1972xh}. The different treatments of nucleon-nucleon correlations in the various approaches discussed in Section-\ref{Results} result in an uncertainty of about $25\%$ in the cross sections at $E_{\nu_\mu} = 236$ MeV. We have also presented the results for the energy and angular distributions of the $e^-$ and $\mu^-$ produced in these reactions for a fixed neutrino energy of $E_\nu = 236$ MeV. A comparison with the recent results of Pandey et al.~\cite{Pandey:2014tza} shows that the differences in the predictions of these two models are significant in the case of the energy distributions of the muons.
Experimental measurements of the total and differential cross sections in the forthcoming experiments will make it possible to discriminate among the various treatments of nuclear medium effects in neutrino-nucleus scattering at low energies.
{ "redpajama_set_name": "RedPajamaArXiv" }
2,517
{"url":"https:\/\/tex.stackexchange.com\/questions\/302142\/add-some-more-in-table-of-contents\/302178","text":"i want to add the contents eg Abstract in table of contents which should have a roman letter to it.. i have shown below\n\n\u2022 Welcome to SE. Your question is quit common here. Do you \"google\" in SE and general for similar problem? \u2013\u00a0Zarko Apr 2 '16 at 16:53\n\nNote the effect of \\frontmatter and \\mainmatter in this MWE. With these commands, the abstract could be a normal chapter of the front matter part.\n\n\\documentclass[oneside]{book}\n\\begin{document}\n\\frontmatter\n\\chapter{Abstract}\n\\chapter{Acknowledgements}\n\\tableofcontents\n\\mainmatter\n\\chapter{Introduction}\n\\section{Background}\n\\begin{figure} ... \\caption{A figure} \\end{figure}\n\\begin{table} ... \\caption{A table} \\end{table}\n\\end{document}\n\n\u2022 That adds the number of the last page of the list to the toc, which isn't a problem if the list is only one page long. \u2013\u00a0Johannes_B Apr 4 '16 at 8:57\n\u2022 @Johannes_B Your are right, but I hoped that one can guess that the number also can be added starting a page, before of the list command. \u2013\u00a0Fran Apr 4 '16 at 20:41\n\nThe usage of tocbibind prevents explicit \\addcontentsline for the LoF and LoT\n\n\\documentclass[oneside]{book}\n\\usepackage[nottoc]{tocbibind}\n\\begin{document}\n\n\\frontmatter\n\\chapter{Abstract}\n\\chapter{Acknowledgements}\n\\tableofcontents\n\\listoffigures\n\\listoftables\n\\mainmatter\n\n\\chapter{First chapter}\n\\section{Some about the theory on Brontosaurs}\n\n\\begin{figure}\n\\caption{A dummy figure}\n\\end{figure}\n\n\\begin{table}\n\\caption{A dummy table}\n\\end{table}\n\\end{document}\n\n\nI may have misread the question- are you wanting to list the contents as an entry in the table of contents? If so, you could try this:\n\n\\tableofcontents\n\nThis will create an entry for the contents page in your table of contents. 
The position of the command \\addcontentsline (i.e. before or after \\tableofcontents) will determine the page number that LaTeX assigns the entry.\nProvided you include this command after \\pagenumbering{roman} and before \\pagenumbering{arabic}, the page number will be Roman numerals as you desire (unless you are using the book document class, which I believe does this automatically if you declare \\frontmatter, \\mainmatter etc).","date":"2019-08-22 17:51:53","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7233353853225708, \"perplexity\": 2199.4176344929656}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-35\/segments\/1566027317339.12\/warc\/CC-MAIN-20190822172901-20190822194901-00189.warc.gz\"}"}
null
null
package obj import ( "cmd/internal/objabi" "fmt" "log" "math" ) func Linknew(arch *LinkArch) *Link { ctxt := new(Link) ctxt.hash = make(map[string]*LSym) ctxt.statichash = make(map[string]*LSym) ctxt.Arch = arch ctxt.Pathname = objabi.WorkingDir() ctxt.Headtype.Set(objabi.GOOS) if ctxt.Headtype < 0 { log.Fatalf("unknown goos %s", objabi.GOOS) } ctxt.Flag_optimize = true ctxt.Framepointer_enabled = objabi.Framepointer_enabled(objabi.GOOS, arch.Name) return ctxt } // LookupDerived looks up or creates the symbol with name name derived from symbol s. // The resulting symbol will be static iff s is. func (ctxt *Link) LookupDerived(s *LSym, name string) *LSym { if s.Static() { return ctxt.LookupStatic(name) } return ctxt.Lookup(name) } // LookupStatic looks up the static symbol with name name. // If it does not exist, it creates it. func (ctxt *Link) LookupStatic(name string) *LSym { s := ctxt.statichash[name] if s == nil { s = &LSym{Name: name, Attribute: AttrStatic} ctxt.statichash[name] = s } return s } // Lookup looks up the symbol with name name. // If it does not exist, it creates it. func (ctxt *Link) Lookup(name string) *LSym { return ctxt.LookupInit(name, nil) } // LookupInit looks up the symbol with name name. // If it does not exist, it creates it and // passes it to init for one-time initialization. 
func (ctxt *Link) LookupInit(name string, init func(s *LSym)) *LSym { ctxt.hashmu.Lock() s := ctxt.hash[name] if s == nil { s = &LSym{Name: name} ctxt.hash[name] = s if init != nil { init(s) } } ctxt.hashmu.Unlock() return s } func (ctxt *Link) Float32Sym(f float32) *LSym { i := math.Float32bits(f) name := fmt.Sprintf("$f32.%08x", i) return ctxt.LookupInit(name, func(s *LSym) { s.Size = 4 s.Set(AttrLocal, true) }) } func (ctxt *Link) Float64Sym(f float64) *LSym { i := math.Float64bits(f) name := fmt.Sprintf("$f64.%016x", i) return ctxt.LookupInit(name, func(s *LSym) { s.Size = 8 s.Set(AttrLocal, true) }) } func (ctxt *Link) Int64Sym(i int64) *LSym { name := fmt.Sprintf("$i64.%016x", uint64(i)) return ctxt.LookupInit(name, func(s *LSym) { s.Size = 8 s.Set(AttrLocal, true) }) }
{ "redpajama_set_name": "RedPajamaGithub" }
9,969
Walmart, Sam's Club and the Walmart Foundation Announce Relief and Recovery Efforts To Help Those Affected by Hurricane Laura Up to $2.5 million in cash and product committed to support local response organizations in southwest Louisiana and southeast Texas Submitted by Walmart With the catastrophic storm, flooding and power outages affecting southwest Louisiana and southeast Texas, Walmart, Sam's Club and the Walmart Foundation are committing up to $2.5 million in cash and in-kind product donations to organizations leading response and recovery efforts. The funds and product donations will support relief efforts in the area through four non-profits already active in the response, including the Community Foundation of Southwest Louisiana, SBP USA (Saint Bernard Project), American Red Cross and the Salvation Army. In addition to aid for these organizations, the company is donating more than 600,000 bottles of water to the Louisiana Governor's Office on Homeland Security and local nonprofits to support impacted communities. "Hurricane Laura has significantly impacted our associates, customers and the communities we call home, leaving many without power and basic services that may not return for weeks," said Dan Bartlett, executive vice president of corporate affairs for Walmart Inc. "Our hearts go out to those affected, and we gladly leverage our resources to assist where needed." In addition, Walmart and Sam's Club are also supporting evacuees in Texas, providing phone charging stations in the Lake Charles area and hosting food relief organizations in a store parking lot in Louisiana. As people were evacuated from southeast Texas late last week, many were housed in San Antonio where Walmart supplied a shelter with water, snacks and towels. For those without power in southwest Louisiana, Sam's Club has set up charging stations at three area clubs so that people may charge their mobile phones. 
To help those that need hot meals, food relief organizations are handing out food and meal kits to first responders, associates and those impacted by the storm at the Walmart store on Gerstner Memorial Blvd in Lake Charles. At present, only one Walmart store remains closed. All other stores and Sam's Clubs in the area are back up and running, some with limited hours. For store and club operating hours, customers may check local store Facebook pages or call clubs directly. During times of disasters, the company's priority is the safety of their associates. Walmart and Sam's Club work to take care of associates and their families by communicating with store and club location management teams, reminding associates of emergency procedures and what to do during and after the storm. The company is providing disaster assistance to displaced and affected associates, setting up support centers in the impacted areas and calling associates to conduct wellness checks. Walmart has a long history of providing aid in times of disasters, helping communities prepare and recover by donating emergency supplies, such as food and water, home, and personal products. Since FY2017, Walmart and the Walmart Foundation have donated more than $58 million in cash and in-kind donations to support disaster preparedness and relief efforts. Walmart Inc. (NYSE: WMT) helps people around the world save money and live better - anytime and anywhere – in retail stores, online, and through their mobile devices. Each week, over 265 million customers and members visit approximately 11,500 stores under 56 banners in 27 countries and eCommerce websites. With fiscal year 2020 revenue of $524 billion, Walmart employs over 2.2 million associates worldwide. Walmart continues to be a leader in sustainability, corporate philanthropy and employment opportunity. 
Additional information about Walmart can be found by visiting http://corporate.walmart.com, on Facebook at http://facebook.com/walmart and on Twitter at http://twitter.com/walmart.

About Philanthropy at Walmart Walmart.org represents the philanthropic efforts of Walmart and the Walmart Foundation. By leaning in where the business has unique strengths, Walmart.org works to tackle key social issues and collaborate with others to spark long-lasting systemic change. Walmart has stores in 27 countries, employing more than 2 million associates and doing business with thousands of suppliers who, in turn, employ millions of people. Walmart.org is helping people live better by supporting programs that work to accelerate upward job mobility for frontline workers, address hunger and make healthier, more sustainably grown food a reality, and build strong communities where Walmart operates. To learn more, visit www.walmart.org or find us on Twitter @walmartorg.
\section{Introduction} Tunneling is one of the hallmarks of quantum systems, and physical effects associated with quantum tunneling are important in many branches of science~\cite{razavy_03}. While the probability for quantum tunneling can be readily calculated for a variety of systems and has been measured experimentally to great accuracy, the time it takes a quantum system to complete a tunneling event is a much less well-defined notion. It is often hard to measure in experiment and in many cases still the subject of intense debate~\cite{schulman_08}. In this paper we address the problem of measuring the timescale associated with one of the conceptually simplest tunneling phenomena, Landau-Zener (LZ) tunneling. Landau-Zener tunneling arises when two energy levels of a quantum system cross as a function of some parameter that varies in time. There is a possibility of a transition if the degeneracy at the level crossing is lifted by a coupling and the system is forced across the resulting avoided crossing by varying the parameter that determines the level separation. LZ tunneling was first studied theoretically in the early 1930s in the context of atomic scattering processes and spin dynamics in time-dependent fields~\cite{Landau,Zener,Stueckelberg,Majorana32}. In its basic form the LZ problem can be described by a simple two-state model with a Hamiltonian given by \begin{equation} H_{\rm LZ}=\left( \begin{array}{cc} \alpha t & \Delta E/2 \\ \Delta E/2 & -\alpha t\\ \end{array} \right) \, \label{eqno1} \end{equation} \begin{figure}[htc] \begin{center} \includegraphics[width=0.5\linewidth,angle=0]{1.eps} \caption{ \small \label{fig:1} Energy levels as a function of time. The dashed lines show the so-called diabatic levels, i.e., the energy position of the states in the absence of the interaction. 
The solid lines show the so-called adiabatic levels, i.e., the eigenstates of the system corresponding to the instantaneous Hamiltonian.} \end{center} \end{figure} \noindent where the off-diagonal term, $\Delta E/2$, is the coupling between the two states and $\alpha$ is the rate of change of the energy levels in time. The dynamics of the system can be described either in the {\it diabatic} or in the {\it adiabatic} basis. The diabatic basis is the basis of the bare states of Eq. (\ref{eqno1}) when there is no coupling (i.e., no off-diagonal entries in the matrix). The adiabatic basis, on the other hand, is the basis of a system with a finite coupling $\Delta E/2$ between the two states. The Hamiltonian has two adiabatic energy levels $E_{\pm}=\pm\frac{1}{2}\sqrt{\left(2\alpha t\right)^2 + \Delta E ^{2}}$.\\ Assuming that the system is initially, at $t\rightarrow -\infty$, in the ground energy level $E_{\rm -}$, and if the sweeping rate is small enough, it is exponentially likely that the system remains in its adiabatic ground state $E_{\rm -}$ at $t\rightarrow +\infty$. The limiting value of the adiabatic LZ survival probability (for $t$ going from $-\infty$ to $+\infty$) is \cite{Holthaus00}, \begin{equation} P_{\rm a}(\infty)=1-\exp\left(-\frac{\pi}{\gamma}\right), \label{eqno2} \end{equation} where we have introduced the dimensionless adiabaticity parameter $\gamma=4\hbar\alpha/\Delta E^{2}$. While the above analysis can predict the probability for LZ tunneling very accurately, it contains no reference to the dynamics around the avoided crossing and hence to the time it takes the system to complete the tunneling event, which eventually results in the populations measured in the ground and excited bands (or in the corresponding diabatic states). 
Also, in the case of LZ tunneling, which occurs in an abstract space spanned by the energy levels of the system as a function of a parameter, the concept of tunneling time is less intuitive than in the case of cross-barrier tunneling in real space, to name just one example. Nevertheless, the tunneling time (or transition time or jump time, as it is called in certain contexts) associated with LZ is a meaningful concept referring to the timescale on which the system evolves around the avoided crossing. Analytical estimates for the LZ transition times have been derived in~\cite{vitanov_96,vitanov_99} using the two-state model of Eq.~(\ref{eqno1}): some exact and approximate results for the transition probability were obtained in~\cite{vitanov_96}, and Vitanov~\cite{vitanov_99} calculated the time-dependent diabatic/adiabatic survival probability at finite times. In a given basis, e.g., adiabatic or diabatic, different transition times are obtained. In general, the LZ jump time in a given basis can be defined as the time after which the transition probability reaches its asymptotic value. From this definition one can expect to observe a step-like structure, with a finite width, in the time-resolved tunneling probability. Because the step is not very sharp, it is not straightforward to define the initial and final times for the transition. It is even less obvious how to define the jump time both for small and for large coupling. Some possible choices have been used by Lim and Berry~\cite{Berry_90} and Vitanov~\cite{vitanov_96, vitanov_99}. The problem is even more complicated when the survival probability shows an oscillatory behavior on top of the step structure. The oscillations give rise to an additional time scale in the system, namely an oscillation time and a damping time of the oscillations appearing in the transition probability after the crossing. 
Therefore, a measurement of the tunneling time depends very much on how these times are defined and also on which basis is considered. In~\cite{vitanov_99} the jump time in the diabatic/adiabatic bases is defined as \begin{equation} \tau^{jump}_{\rm d/a}=\frac{P_{\rm d/a}(\infty)}{P'_{\rm d/a}(0)}, \label{eqno10} \end{equation} where $P_\mathrm{d/a}$ is the transition probability between the two diabatic/adiabatic states, respectively, and $P'_{\rm d/a}(0)$ denotes the time derivative of the tunneling probability evaluated at the crossing point. From this definition, the {\em diabatic} jump time is calculated as $\tau^{jump}_{\rm d} \approx \sqrt{2\pi\hbar/\alpha}$, which is almost constant for large values of the adiabaticity parameter $\gamma$. For $\gamma\ll1$, on the other hand, it decreases with $\gamma$: $\tau^{jump}_{\rm d} \approx 2\sqrt{\hbar(\gamma\alpha)^{-1}}$. In the {\em adiabatic} basis, when $\gamma$ is large, the transition probability is similar to that in the diabatic basis. For a small adiabaticity parameter, because of the oscillations appearing on top of the transition probability step structure, it is not straightforward to define the initial and the final time for the transition. Vitanov defines the initial jump time as the time $t<0$ at which the transition probability is very small (i.e., $P_{\rm a}(\tau)=\varepsilon P_{\rm a}(\infty)$, where $\varepsilon$ is a proper small number). The final time of the transition $t>0$ is defined as the time at which the non-oscillatory part of $P_{\rm a}(\tau)$ is equal to $(1+\varepsilon)P_{\rm a}(\infty)$. Using these definitions, Vitanov derived that the transition time in the adiabatic basis depends exponentially on the adiabaticity parameter, $\tau^{jump}_{\rm a} \approx \left(4/\varepsilon\right)^{1/6}\gamma^{-1/3}\mathrm{exp}\left(\pi/(6\gamma)\right)\sqrt{\hbar/\alpha}$. 
\section{Experimental results} \label{results} The Landau-Zener model is realized in our experiments using Bose-condensed rubidium atoms inside an optical lattice \cite{JonaLasinio03,Zenesini09}. Initially, we created Bose-Einstein condensates of $5\times 10^4$ rubidium-87 atoms inside an optical dipole trap (mean trap frequency around $80\,\mathrm{Hz}$). A one-dimensional optical lattice created by two counter-propagating, linearly polarized Gaussian beams was then superposed on the BEC by ramping up the power in the lattice beams in $100\,\mathrm{ms}$. The wavelength of the lattice beams was $\lambda=842\,\mathrm{nm}$, leading to a sinusoidal potential with lattice constant $d_{\rm L}=\lambda/2=421\,\mathrm{nm}$. A small frequency offset $\Delta \nu(t)$ between the two beams could be introduced through the acousto-optic modulators in the setup, which allowed us to accelerate the lattice in a controlled fashion and hence, in the rest frame of the lattice, to subject the atoms to a force $F_{\rm LZ}=Ma_\mathrm{LZ}$ with $a_\mathrm{LZ}=d_\mathrm{L}\frac{d\Delta \nu(t)}{dt}$. The energy level structure of Bose condensates in optical lattices can be represented by energy bands in the Brillouin zone picture. At the edge of the Brillouin zone successive bands are separated by gaps, and in the vicinity of the zone edge our system approximates the LZ model very well. We can make time-resolved measurements of a single tunneling event in the following way: First, the Bose condensate is loaded adiabatically into a lattice, after which the lattice is accelerated to some finite quasimomentum. Thereafter, the instantaneous populations of the eigenstates of the system are measured, the exact protocol depending on the basis chosen. In detail, the protocols are as follows: \begin{itemize} \item For measurements in the {\em adiabatic} basis, after loading the BEC into the optical lattice the lattice was accelerated with acceleration $a_\mathrm{LZ}$ for a time $t_\mathrm{LZ}$. 
The lattice thus acquired a final velocity $v=a_\mathrm{LZ} t_\mathrm{LZ}$. At time $t=t_{\mathrm LZ}$ the acceleration was abruptly reduced to a smaller value $a_\mathrm{sep}$ and the lattice depth was increased to $V_\mathrm{sep}$ in a time $t_\mathrm{ramp}\ll T_\mathrm{B}$. These values were chosen in such a way that at time $t=t_\mathrm{LZ}$ the probability for LZ tunneling from the lowest to the first excited energy band dropped from between $\approx 0.1-0.9$ (depending on the initial parameters chosen) to less than $\approx 0.01$, while the tunneling probability from the first excited to the second excited band remained high at about $0.95$. This meant that at $t=t_\mathrm{LZ}$ the tunneling process was effectively interrupted and for $t>t_\mathrm{LZ}$ the measured survival probability $P(t)=N_0/N_\mathrm{tot}$ (calculated from the number of atoms $N_0$ in the lowest band and the total number of atoms in the condensate $N_\mathrm{tot}$) reflected the instantaneous value $P(t=t_\mathrm{LZ})$. The lattice was then further accelerated for a time $t_\mathrm{sep}$ such that $a_\mathrm{sep}t_\mathrm{sep}\approx 2n p_\mathrm{rec}/M$ (where typically $n=2$ or $3$). In this way, atoms in the lowest band were accelerated to a final velocity $v\approx 2n p_\mathrm{rec}/M$, while atoms that had undergone tunneling to the first excited band before $t=t_\mathrm{LZ}$ underwent further tunneling to higher bands with a probability $>0.95$ and were, therefore, no longer accelerated. At time $t_\mathrm{sep}$ the lattice and dipole trap beams were suddenly switched off and the expanded atomic cloud was imaged after $23\,\mathrm{ms}$. In these time-of-flight images the two velocity classes $0$ and $2n p_\mathrm{rec}/M$ were well separated and the atom numbers $N_0$ and $N_\mathrm{tot}$ could be measured directly. 
Since the populations were effectively ``frozen'' inside the energy bands of the lattice, which represent the adiabatic eigenstates of the total Hamiltonian of the system, this experiment measured the time dependence of the LZ survival probability $P_{\rm a}$ in the {\it adiabatic} basis. \item For measurements in the {\em diabatic} basis, after the initial loading phase the lattice was accelerated with acceleration $a_\mathrm{LZ}$ for a time $t_\mathrm{LZ}$ as in the adiabatic case. At that point, however, the atomic sample was projected onto the free-particle diabatic basis by instantaneously (within less than $1\,\mathrm{\mu s}$) switching off the optical lattice. After a time-of-flight the number of atoms in the $v=0$ and $v=2p_\mathrm{rec}/M$ momentum classes was measured and from these the survival probability (corresponding to the atoms remaining in the $v=0$ velocity class relative to the total atom number) was calculated. \end{itemize} \begin{figure}[htc] \begin{center} \includegraphics[width=0.7\linewidth,angle=0]{LZ_ad_diab_exp.eps} \caption{ \small \label{fig:experiment} Time-resolved measurements of LZ tunneling in the adiabatic (filled circles) and diabatic (open circles) bases for $F_0=1.197$ and $V/E_\mathrm{rec}=1.8$. The solid line is a sigmoid fit to the adiabatic survival probability, and the vertical dashed lines indicate the position of the zone edge at $t=0.5\,T_B$ and the tunneling time $\tau_a$.} \end{center} \end{figure} The results of typical measurements in the adiabatic and diabatic bases are shown in Fig. \ref{fig:experiment}. The step-like behaviour of the survival probability around $t=0.5T_B$ is clearly visible, which demonstrates that our experimental protocol does, indeed, allow us to access the timescale of the LZ transition. 
It is also obvious from the figure that while in the adiabatic basis the transition is smooth and can be well fitted with a sigmoid function, in the diabatic basis there are strong oscillations for times $t>0.5\,T_B$. As a consequence, the tunneling time in the adiabatic basis can be easily identified as the width of the transition curve (indicated in the figure), while in the diabatic basis it is less obvious when the tunneling event is completed. \section{Conclusions} We have demonstrated that experiments with Bose-condensates in accelerated optical lattices allow access to the full dynamics of LZ tunneling and hence to the timescales involved in the tunneling process, both in the adiabatic and diabatic bases of the problem. Our experiments pave the way towards a thorough investigation of tunneling times and the quantum speed limit. The latter has recently been discussed theoretically in the context of optimal quantum control \cite{QSL} and more generalized Landau-Zener protocols involving non-linear sweep functions that are predicted to lead to shorter minimum times for completing a tunneling event. As our experimental setup allows us to realize arbitrary protocols for the lattice acceleration, such experiments should be relatively straightforward to realize. \ack The authors would like to thank the PhD students and post-docs participating in the experiments described in this article: A. Zenesini, H. Lignier and J. Radogostowicz. We acknowledge funding by the EU Project ``NAMEQUAM'' (EC FP7-225187) and the CNISM ``Progetto Innesco 2007''. \section*{References}
Tag Archives: prescription drug abuse

Detox, Drug Information, Opioid Abuse, Prescription Drugs, Substance Abuse
By Serenity House Detox & Recovery Houston, May 22, 2022
Pharmaceutical companies make a lot of painkillers. Two examples of these are oxycodone and OxyContin. Although these drugs have very similar chemical structures, they aren't exactly the same. There are several differences between oxycodone vs. OxyContin. If you are suffering from an addiction to opioids such as oxycodone or OxyContin, then it's time to seek…

Am I Addicted to Prescription Drugs?
Drug Addiction, Prescription Drugs, Recovery, Rehab Therapy Services
By Serenity House Detox & Recovery Houston, May 2, 2022
Prescription drugs introduce a unique challenge. On the one hand, they can help people struggling with overwhelming physical or mental symptoms function throughout the day. But on the other hand, they can quickly lead to dependence and addiction. Even with the best intentions, you can develop an addiction to your prescription and require the assistance…

Getting Help for Xanax Abuse in Texas
Addiction, Controlled Substances, Drug Addiction, Dual Diagnosis, Prescription Drugs, Substance Abuse
By Serenity House Detox & Recovery Houston, March 16, 2022
While benzodiazepines serve a meaningful purpose, they are highly addictive and subject to abuse. Millions of Americans struggle with anxiety disorders and take benzodiazepines like Xanax to help manage anxiety and panic attacks. Once they experience the quick, calming relief, they may use the drug anytime they experience stress or discomfort. Xanax can be challenging…

Symptoms of Oxycodone Abuse
Addiction, Controlled Substances, Drug Addiction, Opioid Abuse, Prescription Drugs, Substance Abuse
By Serenity House Detox & Recovery Houston, February 24, 2022
The wide availability of oxycodone and other prescription painkillers has led to the opioid addiction epidemic in the United States. The bulk of oxycodone use occurs in this country, with millions of residents struggling with prescription drug abuse. Even those who take their prescription opioids as their doctors directed may risk developing dependence or addiction…

What Are Study Drugs?
Addiction, Controlled Substances, Drug Addiction, Prescription Drugs, Substance Abuse
By Serenity House Detox & Recovery Houston, January 14, 2022
The fast pace of today's modern lifestyles can make it challenging to keep up with all of your obligations. Many people express anxiety over keeping up with work responsibilities, schoolwork, raising children, maintaining their home, and finding time for their loved ones. This is the case regardless of which life stage you are in, with…

How Long Does Methadone Withdrawal Last?
Addiction, Detox, Drug Addiction, Rehabilitation
By Serenity House Detox & Recovery Houston, December 22, 2021
People used to think that methadone was the answer to opiate addiction, but few realized that it could become an addictive substance itself. Even beyond that, there is the question of detoxing from the drug once you start it. What are the symptoms of methadone withdrawal, how long does withdrawal last, and how can you…

Signs of a Xanax Overdose
Controlled Substances, Detox, Drug Addiction, Drug Information, Prescription Drugs
By Serenity House Detox & Recovery Houston, May 9, 2021
Xanax is a prescription medication used to treat conditions such as panic disorders, anxiety, insomnia, and seizures. Used properly, it is safe. However, it is a highly addictive substance and, if misused, it can also lead to life-threatening complications, including Xanax overdose. If you believe your loved one is overdosing, call 911 immediately. If your…

Dangers of Mixing Adderall and Alcohol
Alcohol Addiction, Drug Addiction, Dual Diagnosis, Mental Health Treatment, Prescription Drugs, Teen/Young Adult, Therapy
By Serenity House Detox & Recovery Houston, April 25, 2021
Mixing Adderall and alcohol is a risky habit. Adderall, often used to treat Attention Deficit-Hyperactivity Disorder (ADHD), is a stimulant. Alcohol is a depressant. When taken together, each drug cancels out the effects of the other. When this happens, the person who's mixing the two may accidentally overdose on one or both. If you're currently…

What is a Prescription Take-Back?
Drug Addiction, Drug Information, Prescription Drugs, Recovery
By Serenity House Detox & Recovery Houston, April 24, 2021
April 24th is National Prescription Drug Take-Back Day, but what does that mean? For many, it is about going to their medicine cabinets and clearing out all the expired and unused medication. The goal of the day, though, is to address an increasingly troubling public health problem, prescription drug abuse. If you're suffering from an…

Signs of Opiate Abuse
Drug Addiction, Drug Information, Dual Diagnosis, Prescription Drugs
By Serenity House Detox & Recovery Houston, April 14, 2021
Opiate and opioid abuse continue to be a growing problem in Houston and the surrounding area. In 2018, there was a spike in drug poisoning deaths. Opiates like opioids are highly addictive. Knowing the signs of opiate abuse can be the first step in getting you or someone you love the help they need, like…
Walgreens API: Live from the API Strategy Conference The first-ever API Strategy Conference took place in New York City on Thursday February 21 and Friday February 22. This event was attended by leading API technologists, strategists and technology providers, as well as others who are just starting the API journey. If you were not able to attend the event in person, don't worry: included below is the presentation that our product manager, Joe Rago, gave at the event. Presentation at Health 2.0 Chicago Meetup on February 4, 2013 by Sr. Product Manager Joe Rago regarding the launch of the Walgreens Prescription API. Join Walgreens at the Sprint Hackathon Join Walgreens, Facebook, Samsung, Evernote and more at the Sprint Developer Program Hackathon! When: October 23-24, 2012 Where: San Jose Convention Center - 150 W. San Carlos St - San Jose, CA 95113 How: Register at sprinthackathon.eventbrite.com Learn More: Win Big With the Walgreens QuickPrints SDK at the Sprint Hackathon! Join the Sprint Developer Program and WIP on Tuesday, October 23 in San Jose for the official hackathon of the Sprint Open Solutions Conference. Sprint wants to see your best apps, and has stacks of cash and devices for prizes -- and the top app will get special recognition during Thursday evening's keynote from Sprint CEO Dan Hesse. You'll get a chance to meet and get direct support from the Sprint Developer Program, as well as a host of great API and developer tool providers. All attendees will also get a code to register for FREE for the Sprint Open Solutions Conference ($350 value) when they attend the hackathon (these codes will be distributed onsite)! Whatever your platform of choice or your area of expertise, you're welcome to join the fun (and if you don't have a team, we'll help you find one). [Walgreens & the Sprint Hackathon] Walgreens will be featuring its QuickPrints SDK for iOS and Android at the Sprint Hackathon for developer attendees to utilize within their application hacks. 
Walgreens will feature multiple representatives on-hand at the event to discuss the QuickPrints SDK and Partner Program. If you want to reach us at the event, feel free to send a tweet to @WalgreensAPI or email us at dev-events@walgreens.com. Walgreens is excited to be a part of this event and look forward to seeing you there! The event is free to attend, and you can register at sprinthackathon.eventbrite.com. Walgreens Developer Program Sponsors Health 2.0 Chicago Conference The Walgreens Developer Program is pleased be the flagship sponsor for the first-ever Health 2.0 Chicago conference which will occur on Friday September 28 and Saturday 29. Chicago Health Tech and Health 2.0 is hosting the very first Health 2.0 Local conference focused on fostering the growth of the health tech ecosystem in Chicago and the greater Midwest. Health 2.0 is bringing together the entrepreneurial and innovator community with the goal of sharing lessons learned and identifying opportunities for growth and innovation. Health 2.0 believes that connecting health technology stakeholders, such as entrepreneurs, startups, providers, payers, investors and government policy makers in a vibrant environment for sharing are integral for this growth. Event Times: Fri, Sept 28, Day 1 - Main Conference at Spertus: 8:30 am – 6:00 pm / After Party at 1871 in Merchandise Mart : 6:30 pm – 9:30 pm Sat, Sept 29, Day 2 - Code-a-thons/Make-a-thons and Innovator Fuse at 1871 in Merchandise Mart : 8:30am – 5:30 pm · Anyone who wants the pulse on health technology in Chicago and the greater Midwest. · Entrepreneurs in health technology · Established healthcare stakeholders interested in collaborating with startups · Healthcare professionals with entrepreneurial interests · Technology innovators looking for opportunities in health and healthcare · Investors To learn more information about this great event, please visit http://www.health2con.com/chicago/conferences/2012fall. 
If you'd like to register for Health 2.0 event, you can do so at http://www.health2con.com/chicago/?ee=1. You can also follow the event on Twitter via #H20Chi. Walgreens Mobile Photo Hack Day Promotes Developer Innovation 30 developers attend Walgreens event to integrate QuickPrints technology into prototype photo applications in under 5 hours Hack day highlights company's continued efforts to foster mobile innovation through relationships with third-party developer community DEERFIELD, IL – Aug. 13, 2012 – This past Saturday, Walgreens hosted its first Mobile Photo Hack Day at its downtown Chicago office. The event gave developers the opportunity to meet, collaborate and showcase their creative implementations of the QuickPrints SDK in their applications. The successful, single-day hackathon attracted a total of 30 developers making up 14 photo hack application demonstrations. Developer demos were judged based upon the unique features and functionality of the application and the completed integration of the QuickPrints SDK, which was determined by the photo print output created at the on-site photo lab. The applications were reviewed by a panel of judges that included: Scott Regan (head of marketing, Apigee), Ari Fuchs (lead API engineer & developer evangelist, Aviary), Brett Goldstein (commissioner, Chicago Department of Innovation & Technology), Kevin McGinnis (VP of product platforms, Sprint), Terry Howerton (founder, TechNexus) and Jasbir Patel (senior director and general merchandise manager for photo, Walgreens). "I would like to thank all the talented programmers who found very creative ways to use QuickPrints SDK in third party apps," said Abhi Dhar, Walgreens chief technical officer for e-commerce. "Although 'appathons' are more common in places like Silicon Valley, it was our goal to bring the concept to Chicago and take advantage of the city's burgeoning tech industry and its growing pool of talented programmers." 
\section{Introduction} Let $M$ be an orientable hyperbolic manifold (or orbifold) with finite volume. The {\it length spectrum} of $M$ is defined to be the set of all lengths of closed geodesics in $M$. Further, two manifolds are said to be {\it commensurable} if they share an isometric finite-sheeted covering. Commensurability is an equivalence relation, and the {\it commensurability class} of $M$ is the equivalence class containing $M$. One of the earliest results concerning the relationship between the length spectrum of a hyperbolic manifold and its commensurability class is due to Reid \cite{R} and shows that if two arithmetic hyperbolic $2$-manifolds have the same length spectra then they are necessarily commensurable. This was later extended to arithmetic hyperbolic $3$-manifolds by Chinburg-Hamilton-Long-Reid \cite{CHLR}. It turns out that one does not need the entire length spectrum in order to force commensurability in these cases. In \cite{LMPT}, Linowitz, McReynolds, Pollack and Thompson showed that two arithmetic hyperbolic $3$-manifolds of volume at most $V$ whose length spectra coincide for all geodesic lengths less than $c\cdot \left(\exp(\log V^{\log V})\right)$ are commensurable, where $c>0$ is an absolute constant. A similar result was proven for arithmetic hyperbolic surfaces. Although a number of authors have addressed the relationship between the length spectrum of a hyperbolic manifold and its commensurability class in the arithmetic setting, to our knowledge the only papers that consider the non-arithmetic setting are those of Millichap \cite{Mi} and Futer and Millichap \cite{FM}, where families of non-commensurable $3$-manifolds having the same volume and the same $n$ shortest geodesic lengths were constructed. The past ten years have seen a number of papers considering this problem for more general locally symmetric spaces. 
Lubotzky, Samuels and Vishne \cite{LSV}, for instance, have constructed non-commensurable arithmetic manifolds with universal cover the symmetric space associated to $\PGL_n(\mathbb R)$ (for $n\geq 3$) having the same length spectra. More generally, Prasad and Rapinchuk \cite{PR} have considered locally symmetric spaces $\mathfrak X_\Gamma=\mathcal{K}\backslash \mathcal{G} / \Gamma$ where $\mathcal{G}=G(\mathbb R)$ is the Lie group associated to a connected semi-simple real algebraic subgroup $G$ of $\SL_n$, $\mathcal{K}$ is a maximal compact subgroup of $\mathcal{G}$ and $\Gamma$ is a discrete torsion-free subgroup of $\mathcal{G}$. In particular they showed that there exist pairs of non-commensurable locally symmetric spaces $\mathfrak X_{\Gamma_1}$ and $\mathfrak X_{\Gamma_2}$ with the same length spectra only if $G$ is of type $A_n (n>1)$, $D_{2n+1} (n\geq 1)$, $D_4$ or $E_6$. In this paper we focus on hyperbolic surfaces and prove a variety of results which quantify the extent to which two non-commensurable hyperbolic surfaces may contain many geodesic lengths in common. Because we will be considering arithmetic hyperbolic surfaces, we briefly recall what it means for a hyperbolic surface to be arithmetic. Given a discrete subgroup $\Gamma$ of $\PSL_2(\mathbb R)$, the {\it commensurator} of $\Gamma$ is the set \[\mathrm{Comm}(\Gamma)=\{g\in \PSL_2(\mathbb R) : \Gamma\text { and }g\Gamma g^{-1}\text{ are commensurable }\}.\] The celebrated Margulis dichotomy \cite{Ma} states that $\Gamma$ is arithmetic if and only if $\Gamma$ has infinite index in $\mathrm{Comm}(\Gamma)$. An alternative characterization of arithmeticity defines a hyperbolic surface to be arithmetic if and only if it is commensurable with a hyperbolic surface of the form ${\bf H}^2/\Gamma_\mathcal O$. 
Here ${\bf H}^2$ denotes the hyperbolic plane and $\Gamma_\mathcal O$ is a group constructed from a maximal order in a quaternion algebra defined over a totally real field (we will review the construction of $\Gamma_\mathcal O$ in Section \ref{section:arithmetic}). We note that an arithmetic hyperbolic surface is called {\it derived from a quaternion algebra} if its fundamental group is contained in a group of the form $\Gamma_\mathcal O$. We now define a counting function whose behavior will be studied throughout this paper. Given a set $S=\{\ell_1,\dots,\ell_r\}$ of nonnegative real numbers we define $\pi(V, S)$ to be the maximum cardinality of a collection of pairwise non-commensurable arithmetic hyperbolic surfaces derived from quaternion algebras, each of which has volume less than $V$ and length spectrum containing $S$. The function $\pi(V,S)$ was previously studied in \cite[Theorem 4.10]{LMPT}, where it was shown that if $\pi(V,S)\to\infty$ as $V\to\infty$ then there exist integers $1\leq a,b\leq |S|$ and constants $c_1,c_2>0$ such that \[c_1\frac{V}{\log V^{1-\frac{1}{2^{a}}}} \leq \pi(V,S) \leq c_2\frac{V}{\log V^{1-\frac{1}{2^{b}}}}\] for all sufficiently large $V$. The first result of this paper considers the asymptotic behavior of $\pi(V,S)$ in short intervals and provides a lower bound on the number of arithmetic hyperbolic surfaces which are pairwise non-commensurable, have length spectra containing $S$ and volume contained in an interval of the form $[V,V+W]$. \begin{theorem}\label{theorem:shortintervals} Fix a finite set $S$ of nonnegative real numbers for which $\pi(V,S)\to\infty$ as $V\to\infty$. Let $r$ be the cardinality of $S$ and define $\theta=\frac{8}{3}$ if $r=1$ and $\theta=\frac{1}{2^r}$ otherwise. 
If $\epsilon>0$ and $V^{1-\theta+\epsilon} < W < V$ then as $V\to\infty$ we have \[\pi(V+W,S)-\pi(V,S) \geq \frac{1}{2^r}\cdot \frac{W}{\log V}.\] \end{theorem} The assumption that $\pi(V,S)\to\infty$ as $V\to\infty$ is necessary in Theorem \ref{theorem:shortintervals} because of the existence of sets $S$ for which the function $\pi(V,S)$ is non-zero yet constant for all sufficiently large $V$. The remainder of this paper is devoted to a careful analysis of the situation in which $\pi(V,S)$ is eventually constant. In Lemma \ref{lemma:sameinvariants} we will show that if $S$ is such that $\pi(V,S)>0$ for sufficiently large $V$ then every arithmetic hyperbolic surface with length spectrum containing $S$ must have the same invariant trace field (see Section \ref{section:arithmetic} for a definition). In the following theorem we will denote this common invariant trace field by $k$. \begin{theorem}\label{theorem:finiteness} Suppose that for some fixed (finite) set $S$ of nonnegative real numbers the function $\pi(V,S)$ is eventually constant and greater than zero. There exist integers $\ell,m,n$ with $\ell\in \{0,1\}$, $m\in\{1,[k:\mathbb Q]\}$ and $n\geq0$ such that \[\lim_{V\to\infty} \pi(V,S)=m 2^{n}-\ell.\] Furthermore, $\ell=0$ whenever $k$ has narrow class number one. \end{theorem} The case in which $k=\mathbb Q$ is especially nice, as this field has narrow class number one and of course satisfies $[k:\mathbb Q]=1$. Theorem \ref{theorem:finiteness} therefore immediately implies: \begin{cor}\label{cor:2n} Suppose that for some fixed (finite) set $S$ of nonnegative real numbers the function $\pi(V,S)$ is eventually constant. If $\mathbb Q$ is the invariant trace field associated to $S$ then there is an integer $n\geq 0$ such that \[\lim_{V\to\infty} \pi(V,S)=2^{n}.\] \end{cor} As a complement to Corollary \ref{cor:2n} we prove the following theorem which shows that for every integer $n\geq 0$ one can find a set $S$ such that $\lim_{V\to\infty} \pi(V,S)=2^n$. 
\begin{theorem}\label{theorem:existence} For every integer $n\geq 0$ there exists a set $S$ of nonnegative real numbers such that \[\lim_{V\to\infty} \pi(V,S)=2^{n}.\] \end{theorem} Our proofs are for the most part number theoretic and make extensive use of the correspondence between lengths of closed geodesics on arithmetic hyperbolic surfaces and algebraic integers in quadratic subfields of certain quaternion algebras. Of particular importance are Borel's formula for the area of an arithmetic hyperbolic surface \cite{Borel}, a ``selectivity'' theorem for embeddings of commutative orders into quaternion orders due to Chinburg and Friedman \cite{CF-S}, as well as a version of the Chebotarev density theorem in short intervals due to Balog and Ono \cite{BO}. \section{Quaternion algebras and quaternion orders} Let $k$ be a number field with ring of integers $\mathcal O_k$. A quaternion algebra over $k$ is a central simple $k$-algebra of dimension $4$. Equivalently, a quaternion algebra over $k$ is a $4$-dimensional $k$-vector space with basis $\{1,i,j,ij\}$ such that $i^2,j^2\in k^*$, $ij=-ji$ and such that every element of $k$ commutes with $i$ and $j$. Suppose that $B$ is a quaternion algebra over $k$. Given a prime $\mathfrak{p}$ of $k$, we define the completion $B_\mathfrak{p}$ of $B$ at $\mathfrak{p}$ as $B_\mathfrak{p}=B\otimes_k k_\mathfrak{p}$. The classification of quaternion algebras over local fields shows that if $B_\mathfrak{p}$ is not a division algebra then $B_\mathfrak{p}\cong \M_2(k_\mathfrak{p})$. If $B_\mathfrak{p}$ is a division algebra we say that $\mathfrak{p}$ {\it ramifies} in $B$. Otherwise we say that $\mathfrak{p}$ is {\it unramified} or {\it split} in $B$. The set of primes of $k$ (finite or infinite) which ramify in $B$ is denoted $\Ram(B)$. We denote by $\Ram_f(B)$ (respectively $\Ram_\infty(B)$) the set of finite (respectively infinite) primes of $k$ which ramify in $B$. The set $\Ram(B)$ is known to be finite of even cardinality. 
Conversely, given any finite set $T$ of primes of $k$ (which are either finite or else real) having even cardinality there exists a unique (up to isomorphism) quaternion algebra $B$ over $k$ for which $\Ram(B)=T$. Note that $B$ is a division algebra if and only if $\Ram(B)\neq\emptyset$. Given a quaternion algebra $B$ over a number field $k$, we define a {\it quaternion order} to be a subring of $B$ which is also finitely generated as an $\mathcal O_k$-module and contains a basis for $B$ over $k$. A quaternion order is called a maximal order if it is not properly contained in any other quaternion order. \section{Arithmetic hyperbolic surfaces and their closed geodesics}\label{section:arithmetic} Let $k$ be a totally real field of degree $n_k$ with absolute value of discriminant $d_k$ and Dedekind zeta function $\zeta_k(s)$. Let $B$ be a quaternion algebra over $k$ which is unramified at a unique real place $\nu$ of $k$. This gives us an identification $B\otimes_k k_\nu\cong \M_2(\mathbb R)$. Let $\mathcal O$ be a maximal order of $B$ and $\mathcal O^1$ be the multiplicative subgroup of $\mathcal O^*$ consisting of those elements with reduced norm one. We denote by $\Gamma_\mathcal O$ the image of $\mathcal O^1$ in $\PSL_2(\mathbb R)$. It was shown by Borel \cite{Borel} that $\Gamma_\mathcal O$ is a discrete subgroup of $\PSL_2(\mathbb R)$ whose coarea is given by the formula: \begin{equation}\label{equation:volumeformula} \coarea(\Gamma_{\mathcal{O}})=\frac{8\pi d_k^{\frac{3}{2}}\zeta_k(2)}{(4\pi^2)^{n_k}}\prod_{\mathfrak{p}\in\Ram_f(B)}\left(N(\mathfrak{p})-1\right). \end{equation} We define an {\it arithmetic Fuchsian group} to be a discrete subgroup of $\PSL_2(\mathbb R)$ which is commensurable with a group of the form $\Gamma_\mathcal O$. An arithmetic Fuchsian group is {\it derived from a quaternion algebra} if it is contained in a group of the form $\Gamma_\mathcal O$. 
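As a quick sanity check of formula (\ref{equation:volumeformula}), consider the simplest case: $k=\mathbb Q$ and $B=\M_2(\mathbb Q)$ with maximal order $\mathcal O=\M_2(\mathbb Z)$, so that $\Gamma_\mathcal O=\PSL_2(\mathbb Z)$. Here $n_k=1$, $d_k=1$, $\Ram_f(B)=\emptyset$ and $\zeta_{\mathbb Q}(2)=\pi^2/6$, so the formula yields \[\coarea(\Gamma_{\mathcal O})=\frac{8\pi\cdot 1^{\frac{3}{2}}\cdot \frac{\pi^2}{6}}{4\pi^2}=\frac{\pi}{3},\] recovering the well-known coarea of the modular group.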
Although not every arithmetic Fuchsian group $\Gamma$ is derived from a quaternion algebra, it is known that the subgroup $\Gamma^2$ of $\Gamma$ generated by squares of elements of $\Gamma$ is always derived from a quaternion algebra \cite[Chapter 8]{MR}. An {\it arithmetic hyperbolic surface} is a hyperbolic surface of the form ${\bf H}^2/\Gamma$ where $\Gamma$ is an arithmetic Fuchsian group. We will say that an arithmetic hyperbolic surface is derived from a quaternion algebra if its fundamental group $\Gamma$ is derived from a quaternion algebra. Suppose that $\Gamma$ is an arithmetic Fuchsian group. The {\it trace field} of $\Gamma$ is the field $\mathbb Q(\tr\gamma : \gamma\in \Gamma)$. Because $\Gamma$ is arithmetic, this trace field is a number field. Although it turns out that the trace field of an arithmetic Fuchsian group is not an invariant of the commensurability class, it can be shown that the {\it invariant trace field} $\mathbb Q(\tr\gamma^2 : \gamma\in \Gamma)$ is a commensurability class invariant. We will denote the invariant trace field of $\Gamma$ by $k\Gamma$. We will now define a quaternion algebra over $k\Gamma$. Let \[B\Gamma = \left\{\sum b_i\gamma_i : b_i\in k\Gamma, \gamma_i\in \Gamma\right\}\] where only finitely many of the $b_i$ are non-zero. We may define multiplication in $B\Gamma$ in the obvious manner: $(b_1\gamma_1)\cdot (b_2\gamma_2)=(b_1b_2)(\gamma_1\gamma_2)$. The algebra $B\Gamma$ is a quaternion algebra over $k\Gamma$ which we call the {\it invariant quaternion algebra} of $\Gamma$. Suppose that $\Gamma_1, \Gamma_2$ are arithmetic Fuchsian groups. It was shown by Maclachlan and Reid \cite[Chapter 8.4]{MR} that the surfaces ${\bf H}^2/\Gamma_1$ and ${\bf H}^2/\Gamma_2$ are commensurable in the wide sense if and only if $k\Gamma_1\cong k\Gamma_2$ and $B\Gamma_1\cong B\Gamma_2$. Let $\Gamma$ be an arithmetic Fuchsian group and $\gamma\in\Gamma$ be a hyperbolic element. 
Let $\lambda=\lambda_\gamma$ be an eigenvalue of a preimage of $\gamma$ in $\SL_2(\mathbb R)$ for which $|\lambda|>1$. Then $\lambda$ is well-defined up to multiplication by $\pm 1$. The axis of $\gamma$ in $\mathbf H^2$ projects to a closed geodesic on $\mathbf H^2/\Gamma$ of length $\ell=\ell(\gamma)$ where $\cosh(\ell/2)=\pm\tr(\gamma)/2$. \section{Quaternion algebras with specified maximal subfields} In this section we prove a variety of results concerning quaternion algebras admitting embeddings of a fixed set of quadratic fields. These results will play an important role in the proofs of this paper's main theorems. \begin{example} Consider the three real quadratic fields $\mathbb Q(\sqrt{3}),\mathbb Q(\sqrt{17}), \mathbb Q(\sqrt{51})$. The only primes that do not split in any of these fields are $3$ and $17$. It follows that if $B$ is a quaternion division algebra over $\mathbb Q$ which admits embeddings of $\mathbb Q(\sqrt{3}),\mathbb Q(\sqrt{17})$ and $\mathbb Q(\sqrt{51})$ then $B$ is the unique quaternion division algebra over $\mathbb Q$ with $\Ram(B)=\{3,17\}$. The quaternion algebra $\M_2(\mathbb Q)$ also admits embeddings of these quadratic fields, hence there are (up to isomorphism) two quaternion algebras over $\mathbb Q$ which admit embeddings of $\mathbb Q(\sqrt{3}),\mathbb Q(\sqrt{17})$ and $\mathbb Q(\sqrt{51})$. \end{example} \begin{theorem}\label{theorem:csatheorem1} If $L_1,\dots, L_r$ is a collection of quadratic extensions of a number field $k$ with the property that only finitely many quaternion algebras over $k$ admit embeddings of the $L_i$ then the number of isomorphism classes of quaternion algebras over $k$ which admit embeddings of all of the $L_i$ is $2^n$ for some $n\geq 0$. \end{theorem} \begin{proof} Let $k$ be a number field and $L_1,\dots, L_r$ be a collection of quadratic extensions of $k$ such that there are only finitely many isomorphism classes of quaternion algebras over $k$ admitting embeddings of all of the $L_i$. 
We claim that all but finitely many primes of $k$ split in at least one of the $L_i$. Indeed, suppose to the contrary that $\mathfrak p_1, \mathfrak p_2,\dots$ are distinct primes of $k$ which do not split in any of the $L_i$. Because a quaternion algebra over $k$ admits an embedding of a quadratic extension $L/k$ if and only if no prime of $k$ which ramifies in the algebra splits in $L/k$, it follows that the (mutually non-isomorphic) quaternion algebras \[\{ B_i : \Ram(B_i)=\{\mathfrak p_i, \mathfrak p_{i+1}\}\}\] each admit embeddings of all of the $L_i$, giving us a contradiction which proves our claim. We have shown that all but finitely many primes (finite or infinite) of $k$ split in at least one of the $L_i$. Let $S=\{\mathfrak p_1,\dots, \mathfrak p_m\}$ be the primes of $k$ not splitting in any of the $L_i$. On the one hand there are precisely $2^{m-1}$ subsets of $S$ with an even number of elements, each of which corresponds to a unique quaternion algebra (the algebra which is ramified precisely at the primes in this subset). Of these algebras, $2^{m-1}-1$ are division algebras; the remaining algebra is $\M_2(k)$ and corresponds to the empty subset of $S$. On the other hand, if $B$ is a quaternion algebra over $k$ which admits embeddings of $L_1,\dots, L_r$ then the only primes which may ramify in $B$ are those lying in $S$. It follows that $\Ram(B)\subseteq S$. Because the set $\Ram(B)$ has even cardinality and determines the isomorphism class of $B$, the theorem follows. \end{proof} The following corollary to Theorem \ref{theorem:csatheorem1} considers a similar counting problem, though with the caveat that the quaternion algebras being considered are required to have a prescribed archimedean ramification behavior which will be necessary in our geometric applications. 
\begin{cor}\label{cor:csacor}Let $k$ be a number field of signature $(r_1,r_2)$ with $r_1>0$ and $L_1,\dots, L_r$ be a collection of quadratic extensions of $k$ such that only finitely many quaternion algebras over $k$ admit embeddings of the $L_i$. There is a nonnegative integer $n$ such that the number of quaternion algebras over $k$ which admit embeddings of all of the $L_i$ and are unramified at a unique real place of $k$, if nonzero, is equal to $m2^n$ for some integer $m\in\{1,r_1\}$. \end{cor} \begin{proof} We may assume that there exists at least one quaternion algebra $B$ over $k$ which admits embeddings of all of the $L_i$ and is split at a unique real place of $k$, as otherwise the total number of algebras we are counting is $0$. Suppose that the unique real place of $k$ at which $B$ is split is $\nu$. If $\omega\neq \nu$ is a real place of $k$ then $\omega$ ramifies in $B$, hence $\omega$ does not split in any of the extensions $L_i/k$ (since no place of $k$ which ramifies in a quaternion algebra over $k$ may split in a quadratic extension of $k$ which embeds into the quaternion algebra). We now have two cases to consider. The first case is that $\nu$ does not split in any of the extensions $L_i/k$. In this case no real place of $k$ splits in any of the extensions $L_i/k$. Fix a real place $\nu'$ of $k$. We will count the number of quaternion algebras over $k$ which admit embeddings of all of the $L_i$ and are split at $\nu'$ and no other real places of $k$. The proof of Theorem \ref{theorem:csatheorem1} shows that all but finitely many primes (finite or infinite) of $k$ split in at least one of $L_1,\dots, L_r$. Let $S=\{\mathfrak{p}_1,\dots,\mathfrak{p}_m\}$ be the set of all primes of $k$ which do not split in any of these extensions. Note that we have already shown that in the case we are considering $S$ contains all real places of $k$. 
A quaternion algebra $B$ over $k$ is ramified at all real places of $k$ not equal to $\nu'$, split at $\nu'$ and admits embeddings of $L_1,\dots, L_r$ if and only if \[\Ram(B)= \{\omega : \omega \text{ is a real place of $k$ not equal to $\nu'$}\} \bigcup S'\] for some subset $S'$ of $S$ containing only finite primes and whose cardinality ensures that $\Ram(B)$ has an even number of elements. The number of such subsets is $2^n$ for some integer $n\geq 0$, hence there are a total of $r_1 2^n$ quaternion algebras over $k$ which are split at a unique real place of $k$ and which admit embeddings of all of the $L_i$ (since there are $r_1$ choices for $\nu'$). Now consider the case in which $\nu$ splits in one of the extensions $L_i/k$. In this case a quaternion algebra over $k$ admits embeddings of all of the $L_i$ only if $\nu$ does not ramify in the quaternion algebra. Because we are counting quaternion algebras which are ramified at all but one real place of $k$, it must be the case that all of the quaternion algebras we are counting are split at $\nu$ and at no other real places of $k$. That there is a nonnegative integer $n$ such that there are $2^n$ quaternion algebras which are split at $\nu$ and no other real place of $k$ and which admit embeddings of all of the $L_i$ now follows from the argument that was used in the previous case. \end{proof} \begin{theorem}\label{theorem:csatheorem2} Let $n\in\mathbb Z$ with $n\geq 0$. For every number field $k$ there exist quadratic extensions $L_1,\dots, L_r$ of $k$ such that there are precisely $2^n-1$ isomorphism classes of quaternion division algebras over $k$ which admit embeddings of all of the $L_i$. \end{theorem} \begin{proof} We begin by considering the case in which $k=\mathbb Q$. Let $p_1$ be a prime satisfying $p_1\equiv 1 \pmod{8}$ and define $L_1=\mathbb Q(\sqrt{p_1})$. Let $p_2,\dots, p_{m}$ be primes which satisfy $p_i\equiv 1\pmod{8}$ and which are all inert in $L_1/\mathbb Q$. 
Define $L_2=\mathbb Q(\sqrt{p_1p_2\cdots p_{m}})$ and $L_3=\mathbb Q(\sqrt{p_2\cdots p_{m}})$. Let $d_1,d_2, d_3$ denote the discriminants of $L_1,L_2,L_3$. A prime $p$ splits in the extension $L_i/\mathbb Q$ if and only if the Kronecker symbol $\left(\frac{d_i}{p}\right)=1$, is inert in the extension if and only if $\left(\frac{d_i}{p}\right)=-1$ and ramifies if and only if $\left(\frac{d_i}{p}\right)=0$. Moreover, as $\left(\frac{ab}{p}\right)=\left(\frac{a}{p}\right)\left(\frac{b}{p}\right)$ for all positive integers $a,b$, we have the identity \[ \left(\frac{d_1}{p}\right)\left(\frac{d_2}{p}\right)\left(\frac{d_3}{p}\right)=\left(\frac{d_1d_2d_3}{p}\right)=\left(\frac{\left(p_1p_2\cdots p_m\right)^2}{p}\right)=1.\] This shows that every prime $p$ not lying in the set $\{p_1,\dots, p_m\}$ must split in one of the extensions $L_i/\mathbb Q$. While a prime $p_i$ with $i>1$ is inert in $L_1/\mathbb Q$ and ramifies in $L_2/\mathbb Q$ and $L_3/\mathbb Q$, quadratic reciprocity implies that the prime $p_1$ will split in $L_3/\mathbb Q$ if and only if $m$ is odd. Let $L_4$ be a real quadratic field in which the prime $p_1$ splits and in which $p_2,\dots, p_m$ are all inert. It now follows from the previous paragraph that every prime not in $\{p_2,\dots, p_m\}$ splits in at least one of the quadratic fields $\{L_1,\dots, L_4\}$. If $B$ is a quaternion division algebra over $\mathbb Q$ into which $L_1,\dots, L_4$ all embed then the set $\Ram(B)$ of primes at which $B$ is ramified is a nonempty set of even cardinality which satisfies $\Ram(B)\subseteq \{p_2,\dots, p_m\}$. Conversely, every nonempty subset of $\{p_2,\dots,p_m\}$ with even cardinality defines a unique quaternion division algebra over $\mathbb Q$ into which the quadratic fields $\{L_1,\dots, L_4\}$ all embed. As there are precisely $2^{m-2}-1$ such subsets, setting $m=n+2$ proves the theorem in the case that $k=\mathbb Q$. We now consider the general case in which $k$ is an arbitrary number field. 
Let $L_1,\dots, L_4$ be quadratic fields as above, though with the additional restrictions that $L_i\cap k=\mathbb Q$ for $i=1,\dots,4$ and that all of the primes in the set $\{p_2,\dots,p_m\}$ split completely in $k/\mathbb Q$. Let $p\not\in\{p_2,\dots,p_m\}$ be a rational prime and $\mathfrak p$ be a prime of $k$ lying above $p$. Then the prime $\mathfrak p$ splits in the quadratic extension $kL_i/k$ for at least one $i\in\{1,\dots,4\}$, where $kL_i$ is the compositum of $k$ and $L_i$. Also, if $q \in \{p_2,\dots,p_m\}$ and $\mathfrak q$ is a prime of $k$ lying above $q$ then $\mathfrak q$ does not split in $kL_i/k$ for $i=1,\dots,4$. Both of these assertions follow from standard properties of the Artin symbol \cite[Chapter X]{Lang-ANT} and the fact that $\Gal(kL_i/k)$ is isomorphic to $\Gal(L_i/\mathbb Q)$ via restriction to $L_i$. It follows that all but finitely many primes of $k$ split in at least one of the extensions $kL_1, \dots, kL_4$ and that there are at least $m-1$ primes of $k$ which do not split in any of the $kL_i$. By considering a fifth quadratic extension of $k$ in which $m-1$ of these primes are inert and the remainder of the primes split, we obtain five quadratic extensions of $k$ with the property that all but $m-1$ primes (finite or infinite) of $k$ split in at least one of these extensions. The theorem now follows, as it did in the $k=\mathbb Q$ case, from the correspondence between quaternion division algebras over $k$ admitting embeddings of these five quadratic extensions and even order subsets of these $m-1$ primes. \end{proof} \begin{rmk}\label{totallyreal} Because it will be important in the proof of Theorem \ref{theorem:existence}, we remark that in the case that $k=\mathbb Q$, the quadratic fields furnished by Theorem \ref{theorem:csatheorem2} may all be assumed to be totally real. This follows immediately from the proof of Theorem \ref{theorem:csatheorem2}.
\end{rmk} \section{Selectivity in quaternion algebras} Let $k$ be a number field, $B$ be a quaternion algebra over $k$ which admits embeddings of the quadratic extensions $L_1,\dots, L_r$ of $k$. For each $i=1,\dots, r$, fix a quadratic $\mathcal O_k$-order $\Omega_i\subset L_i$. We would like to determine which maximal orders of $B$ contain conjugates of {\it all} of the quadratic orders $\Omega_i$. In the case that $r=1$ this problem was solved by Chinburg and Friedman \cite[Theorem 3.3]{CF-S}. Because of our interest in arithmetic hyperbolic surfaces and their invariant quaternion algebras, we are primarily interested in the case that $k$ is totally real and $B$ is unramified at a unique real place of $k$. \begin{thm}[Chinburg and Friedman]\label{thm:CF} Let $B$ be a quaternion algebra over a number field $k$, $\Omega\subset B$ be a commutative $\mathcal O_k$-order and assume that $B$ is unramified at some real place of $k$. Then every maximal order of $B$ contains a conjugate (by $B^*$) of $\Omega$, except when the following three conditions hold: \begin{enumerate}[(1)] \item $\Omega$ is an integral domain and its quotient field $L\subset B$ is a quadratic extension of $k$. \item The extension $L/k$ and the algebra $B$ are unramified at all finite places and ramify at exactly the same (possibly empty) set of real places of $k$. \item All prime ideals of $k$ dividing the relative discriminant ideal $d_{\Omega/\mathcal O_k}$ of $\Omega$ are split in $L/k$. \end{enumerate} Suppose now that (1), (2) and (3) hold. Then $B$ has an even number of conjugacy classes of maximal orders and the maximal orders containing some conjugate of $\Omega$ make up exactly half of these conjugacy classes. 
\end{thm} \begin{rmk}We note that Chinburg and Friedman actually prove a stronger result which shows exactly which conjugacy classes of maximal orders have representatives admitting embeddings of $\Omega$.\end{rmk} \begin{thm}\label{thm:selectivity} Let $k$ be a totally real number field and $L_1,\dots, L_r$ be quadratic extensions of $k$. For each $i=1,\dots, r$ let $\Omega_i$ be a quadratic $\mathcal O_k$-order contained in $L_i$. Suppose that there exists a quaternion algebra over $k$ which is unramified at a unique real place of $k$ and into which all of the $L_i$ embed. Then with one possible exception, every quaternion algebra over $k$ which is unramified at a unique real place of $k$ and into which all of the $L_i$ embed has the property that every maximal order of the quaternion algebra contains conjugates of all of the $\Omega_i$. Furthermore, this exceptional quaternion algebra does not exist if the narrow class number of $k$ is equal to one. \end{thm} \begin{proof} Suppose first that $k$ has narrow class number one and that $B$ is a quaternion algebra over $k$ which is unramified at a unique real place of $k$ and in which all of the $L_i$ embed. It was shown in \cite[Proposition 5.4]{L-S} that if $\mathcal R$ is an order of $B$ then there is an extension $k(\mathcal R)$ of $k$ with the property that if $L_i\not\subset k(\mathcal R)$ then every order in the genus of $\mathcal R$ admits an embedding of $\Omega_i$. By the Skolem-Noether theorem, this is equivalent to the statement that $\mathcal R$ contains a conjugate of $\Omega_i$. Moreover, it was shown in \cite[Section 3]{L-S} that the conductor of $k(\mathcal R)$ is divisible only by primes which divide the level ideal of $\mathcal R$. In the case we are considering, $\mathcal R$ is a maximal order. Therefore its level ideal is trivial and the genus of $\mathcal R$ is simply the set of all maximal orders of $B$. It follows that $k(\mathcal R)$ is contained in the narrow class field of $k$. 
As $k$ has narrow class number one, this means that $k(\mathcal R)=k$, hence \cite[Section 3]{L-S} shows that every maximal order of $B$ contains conjugates of all of the $\Omega_i$. We now prove the first statement of the theorem. If $k=\mathbb Q$ then $k$ has narrow class number one and we are done by the previous paragraph. We may therefore assume that $k\neq \mathbb Q$. Note that because $k$ is totally real and not equal to $\mathbb Q$, it follows that $k$ has at least two real places. By hypothesis there exists a quaternion algebra $B$ over $k$ which is unramified at a unique real place of $k$ and into which all of the $L_i$ embed. Denote by $\nu$ the real place of $k$ which is unramified in $B$. If $\omega\neq \nu$ is another real place of $k$ then $\omega$ ramifies in $B$, hence ramifies in all of the extensions $L_i/k$, as otherwise the $L_i$ would not all embed into $B$. Let $B'$ be a quaternion algebra over $k$ which admits embeddings of all of the $L_i$ and which is unramified at a unique real place of $k$. Suppose that $B'$ and one of the extensions, say $L_i$, satisfy condition (2) in Theorem \ref{thm:CF}. We have already seen that every real place $\omega$ of $k$ not equal to $\nu$ ramifies in $L_i$. Because $B'$ and $L_i$ satisfy (2), it must be that $B'$ ramifies at $\omega$ as well. Because $B'$ is not ramified at all real places of $k$ we may deduce that $\Ram_\infty(B')=\{\omega : \omega \text{ is a real place of $k$ not equal to $\nu$}\}$. Also, because $B'$ satisfies (2) we see that $\Ram_f(B')=\emptyset$. This shows that if $B'$ and $L_i$ satisfy condition (2) of Theorem \ref{thm:CF} then $\Ram(B')=\{\omega : \omega \text{ is a real place of $k$ not equal to $\nu$}\}$. Because a quaternion algebra is completely determined by the primes that ramify in the algebra, we conclude that there is at most one quaternion algebra over $k$ for which the conditions in Theorem \ref{thm:CF} are satisfied for any of the $\Omega_i$ and $L_i$. 
The theorem now follows from Theorem \ref{thm:CF}. \end{proof} \section{A useful lemma} In this section we prove a lemma which will play an important role in the proofs of our main theorems. \begin{lemma}\label{lemma:sameinvariants} Let $\Gamma, \Gamma'$ be arithmetic Fuchsian groups for which the surfaces $\mathbf H^2/\Gamma, \mathbf H^2/\Gamma'$ have closed geodesics of length $\ell$. Let $\gamma\in\Gamma$ be the hyperbolic element associated to $\ell$ and $\lambda_\gamma$ the corresponding eigenvalue. Then the invariant trace fields of $\Gamma$ and $\Gamma'$ are equal, and the invariant quaternion algebras of $\Gamma$ and $\Gamma'$ both admit embeddings of the quadratic extension $\mathbb Q(\lambda_{\gamma^2})$ of this common invariant trace field. \end{lemma} \begin{proof} Because $\gamma^2$ is contained in the subgroup $\Gamma^2$ of $\Gamma$ generated by squares, which is derived from a quaternion algebra \cite[Chapter 8]{MR}, it follows from Lemma 2.3 of \cite{CHLR} that the invariant trace field of $\Gamma^2$, and hence of $\Gamma$, is $\mathbb Q(\lambda_{\gamma^2}+1/\lambda_{\gamma^2})=\mathbb Q(\tr(\gamma^2))$. Because $\mathbf H^2/\Gamma'$ also contains a geodesic of length $\ell$, the geodesic length formula shows that $\Gamma'$ contains an element $\gamma'$ such that $\tr(\gamma')=\tr(\gamma)$ (up to a sign). In particular this implies that \[\tr(\gamma'^2)=\tr^2(\gamma')-2=\tr^2(\gamma)-2=\tr(\gamma^2),\] from which we conclude that $\mathbb Q(\tr(\gamma^2))=\mathbb Q(\tr(\gamma'^2))$. Because $\mathbb Q(\tr(\gamma^2))$ is the invariant trace field of $\Gamma$ and $\mathbb Q(\tr(\gamma'^2))$ is the invariant trace field of $\Gamma'$, this proves the first part of the lemma. Let $k$ denote the invariant trace field of $\Gamma$ and $\Gamma'$. Let $B\Gamma$ denote the invariant quaternion algebra of $\Gamma$ and $B\Gamma'$ the invariant quaternion algebra of $\Gamma'$.
The fields $k(\lambda_{\gamma^2})$ and $k(\lambda_{\gamma'^2})$ embed into $B\Gamma$ and $B\Gamma'$ by \cite[Chapter 8]{MR}, hence the lemma follows from the fact that $k(\lambda_{\gamma^2})\cong \mathbb Q(\lambda_{\gamma^2})\cong k(\lambda_{\gamma'^2})$. \end{proof} The proof of Lemma \ref{lemma:sameinvariants} also shows the following. \begin{lemma}\label{lemma:embedlemma} Let $\Gamma, \Gamma'$ be arithmetic Fuchsian groups derived from quaternion algebras for which the surfaces $\mathbf H^2/\Gamma, \mathbf H^2/\Gamma'$ have closed geodesics of length $\ell$. Let $\gamma\in\Gamma$ be the hyperbolic element associated to $\ell$ and $\lambda_\gamma$ the corresponding eigenvalue. Let $k$ denote the common invariant trace field of $\Gamma$ and $\Gamma'$. Then the invariant quaternion algebras of $\Gamma$ and $\Gamma'$ both admit embeddings of the quadratic extension $k(\lambda_\gamma)$ of $k$. \end{lemma} \section{Proof of Theorem \ref{theorem:shortintervals}} Let $S=\{\ell_1,\dots,\ell_r\}$ be a set of nonnegative real numbers for which $\pi(V,S)\to\infty$ as $V\to\infty$. Let ${\bf H}^2/\Gamma_0$ be an arithmetic hyperbolic surface derived from a quaternion algebra whose length spectrum contains $S$. Let $k$ be the invariant trace field of $\Gamma_0$ and $B_0$ be the invariant quaternion algebra of $\Gamma_0$. For $i=1,\dots,r$ define $L_i=k(\lambda_i)$, where $\lambda_i$ is the eigenvalue associated to a hyperbolic element of $\Gamma_0$ of translation length $\ell_i$. Since $\pi(V,S)\to\infty$ as $V\to\infty$ there are infinitely many pairwise non-commensurable arithmetic hyperbolic surfaces derived from quaternion algebras with geodesics of lengths $\{\ell_1,\dots,\ell_r\}$. By Lemma \ref{lemma:embedlemma} the invariant quaternion algebras of these surfaces, which are pairwise non-isomorphic, all admit embeddings of $L_1,\dots,L_r$. This shows, in particular, that there are infinitely many primes of $k$ which are inert in all of the extensions $L_i/k$.
Suppose that $B$ is a quaternion algebra over $k$ which is unramified at a unique real place of $k$, admits embeddings of $L_1,\dots, L_r$ and satisfies $\Ram_f(B)\neq \emptyset$. For each $i=1,\dots, r$, fix a quadratic $\mathcal O_k$-order $\Omega_i\subset L_i$ which contains a preimage in $L_i$ of $\gamma_i$. It follows from Theorem \ref{thm:selectivity} that every maximal order of $B$ contains conjugates of all of the $\Omega_i$. If $\mathcal O$ is one such maximal order then the arithmetic hyperbolic surface ${\bf H}^2/\Gamma_{\mathcal O}$, which is by definition derived from a quaternion algebra, must have length spectrum containing $S$. Let $V_0$ denote the area of ${\bf H}^2/\Gamma_{\mathcal O}$. Let $\epsilon>0$ and define $\theta=\frac{3}{8}$ if $r=1$ and $\theta=\frac{1}{2^r}$ if $r>1$. Finally, let $V^{1-\theta+\epsilon} < W < V$. In light of the previous paragraph it suffices to show that for all sufficiently large $V$ one can construct at least $\frac{1}{2^r}\cdot \frac{W}{\log V}$ quaternion algebras $B$ which are ramified at a finite prime of $k$, a unique real place of $k$, admit embeddings of all of the $L_i$ and satisfy $\coarea(\Gamma_\mathcal O)\in (V,V+W)$ where $\mathcal O$ is a maximal order of $B$. Let $\mathfrak{p}_0$ be a prime of $k$ which is inert in all of the extensions $L_i/k$ (for $i=1,\dots, r$), is unramified in $B_0$ and which satisfies $N(\mathfrak{p}_0)>13$. Note that such a prime exists because we have already shown that there are infinitely many primes of $k$ which are inert in all of the extensions $L_i/k$. Before continuing we note that because the compact (respectively non-compact) hyperbolic $2$-orbifold of minimal area has area $\pi/42$ (respectively, $\pi/6$), the fact that $N(\mathfrak{p}_0)>13$ ensures that $V_0\cdot (N(\mathfrak{p}_0)-1)>1$ (see \cite{K}).
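The closing numerical claim is elementary to verify. The sketch below checks it using only the minimal orbifold areas quoted above and the fact that the smallest prime-power norm exceeding $13$ is $16$:

```python
import math

min_compact_area = math.pi / 42     # minimal area of a compact hyperbolic 2-orbifold
min_noncompact_area = math.pi / 6   # minimal area in the non-compact case
min_norm = 16                       # smallest prime-power norm exceeding 13

# N(p0) > 13 forces N(p0) >= 16, so V0 * (N(p0) - 1) >= (pi/42) * 15 > 1
# in the compact case, and a fortiori in the non-compact case.
assert min_compact_area * (min_norm - 1) > 1
assert min_noncompact_area * (min_norm - 1) > 1
```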
We will now construct our quaternion algebras $B$ by choosing primes $\mathfrak{p}$ of $k$ which are unramified in $B_0$ and inert in all of the extensions $L_i/k$, and then defining $B$ to be the quaternion algebra for which $\Ram(B)=\Ram(B_0)\cup \{\mathfrak{p}_0,\mathfrak{p}\}$. As all of the $L_i$ embed into $B_0$ it must be the case that no prime of $\Ram(B_0)$ splits in any of the extensions $L_i/k$. Further, because of the way that we chose $\mathfrak{p}_0$ and $\mathfrak{p}$, neither of these primes splits in any of the extensions $L_i/k$, hence $B$ admits embeddings of the $L_i$ as desired. If $\mathcal O$ is a maximal order of $B$ then the coarea of $\Gamma_\mathcal O$ is given by \[V_0(N(\mathfrak{p}_0)-1)(N(\mathfrak{p})-1)\] by (\ref{equation:volumeformula}). Let $L$ denote the compositum over $k$ of $L_1, L_2,\dots, L_r$. We will show that $[L:k]=2^r$. Suppose to the contrary that $[L:k]=2^s<2^r$. Relabelling the $L_i$ as necessary, we may assume that the compositum over $k$ of $L_1,\dots, L_s$ is $L$. Because $L_r$ is contained in $L$ and $\Gal(L/k)\cong (\mathbb Z/2\mathbb Z)^s$, the Galois correspondence implies that there exist $1\leq i<j\leq s$ such that $L_r$ is contained in the compositum of $L_i$ and $L_j$. Let $\mathfrak{q}$ be a prime of $k$ which is unramified in $L_i, L_j$ and $L_r$. We claim that $\mathfrak{q}$ splits in one of these three quadratic extensions of $k$. Indeed, were $\mathfrak{q}$ inert in all three extensions then the Galois group $\Gal(L_iL_j/k)$ of the compositum of $L_i$ and $L_j$ would have to be cyclic of prime power order \cite[p. 115]{M}, which is not the case since $\Gal(L_iL_j/k)\cong (\mathbb Z/2\mathbb Z)^2$. This shows that there are only finitely many primes of $k$ which do not split in any of $L_i, L_j, L_r$. The proof of Theorem \ref{theorem:csatheorem1} now implies that there are only finitely many quaternion algebras over $k$ which admit embeddings of $L_i, L_j$ and $L_r$, and hence of $L_1,\dots, L_r$.
This is a contradiction as we have already seen that there are infinitely many such quaternion algebras. Therefore $[L:k]=2^r$. We will now employ a version of the Chebotarev density theorem in short intervals due to Balog and Ono \cite{BO}. This theorem shows that the number of primes $\mathfrak P$ of $k$ which are unramified in $L/k$, have $(\mathfrak P,L/k)=(1,\dots,1)\in\Gal(L/k)$ and have $X\leq N(\mathfrak P)\leq X+Y$ is asymptotically \[\frac{1}{2^r}\cdot \frac{Y}{\log X}\] for all sufficiently large $X$ if $\epsilon'>0$ and $X^{1-\theta+\epsilon'}\leq Y\leq X$. Theorem \ref{theorem:shortintervals} now follows from the short intervals version of the Chebotarev density theorem upon setting $c=V_0\cdot (N(\mathfrak{p}_0)-1)$ and $X=\frac{1}{c}V$. \section{Proof of Theorem \ref{theorem:finiteness}} Let $S=\{\ell_1,\dots,\ell_r\}$ be a finite set of nonnegative numbers for which $\pi(V,S)$ is eventually constant and greater than zero. Let $\mathbf H^2/\Gamma$ be an arithmetic hyperbolic surface derived from a quaternion algebra whose length spectrum contains $S$. Let $k=k\Gamma$ be the invariant trace field of $\Gamma$ and $B=B\Gamma$ be the invariant quaternion algebra of $\Gamma$. For $i=1,\dots, r$, let $\gamma_i$ be the hyperbolic element associated to $\ell_i$ and $\lambda_{\gamma_i}$ be the eigenvalue of the preimage in $\SL_2(\mathbb R)$ of $\gamma_i$ for which $|\lambda_{\gamma_i}|>1$. Suppose that $\mathbf H^2/\Gamma'$ is an arithmetic hyperbolic surface derived from a quaternion algebra whose length spectrum contains $S$ and which is not commensurable with $\mathbf H^2/\Gamma$. By Lemma \ref{lemma:sameinvariants}, the invariant trace field of $\mathbf H^2/\Gamma'$ is also $k$ and the invariant quaternion algebra $B'$ of $\mathbf H^2/\Gamma'$ admits embeddings of the quadratic extensions $k(\lambda_{\gamma_1}), \dots, k(\lambda_{\gamma_r})$ of $k$.
Conversely, suppose that $B''$ is a quaternion algebra over $k$ which is unramified at a unique real place of $k$, admits embeddings of $k(\lambda_{\gamma_1}), \dots, k(\lambda_{\gamma_r})$ and is not isomorphic to $B$. For each $i=1,\dots, r$, fix a quadratic $\mathcal O_k$-order $\Omega_i\subset k(\lambda_{\gamma_i})$ which contains a preimage in $k(\lambda_{\gamma_i})$ of $\gamma_i$. It follows from Theorem \ref{thm:selectivity} that with one possible exception (which can occur only if the narrow class number of $k$ is greater than one), every maximal order of $B''$ contains conjugates of all of the $\Omega_i$ and hence gives rise to an arithmetic hyperbolic surface ${\bf H}^2/\Gamma_{\mathcal O}$ containing closed geodesics of lengths $\ell_1,\dots,\ell_r$. Moreover, such a surface is, by definition, derived from a quaternion algebra. From the above we deduce that with one possible exception, every isomorphism class of quaternion algebras over $k$ which split at a unique real place of $k$ and admit embeddings of all of the fields $k(\lambda_{\gamma_i})$ will give rise to an arithmetic hyperbolic surface derived from a quaternion algebra with length spectrum containing $S$. Moreover, because these algebras are pairwise non-isomorphic, the associated hyperbolic surfaces are pairwise non-commensurable. Theorem \ref{theorem:finiteness} now follows from Corollary \ref{cor:csacor}. \section{Proof of Theorem \ref{theorem:existence}} Fix an integer $n\geq 0$. By Theorem \ref{theorem:csatheorem2} there exist quadratic extensions $L_1,\dots, L_r$ of $\mathbb Q$ such that there are precisely $2^n-1$ quaternion division algebras over $\mathbb Q$ which admit embeddings of all of the $L_i$. Moreover, as was explained in Remark \ref{totallyreal}, we may take these quadratic fields to all be real quadratic fields.
The results of \cite[Chapter 12.2]{MR} (see for instance \cite[Theorem 12.2.6]{MR}, which also holds in the context of arithmetic hyperbolic surfaces) show that these real quadratic fields give rise to hyperbolic elements $\gamma_1,\dots,\gamma_r$ of $\PSL_2(\mathbb R)$ and that each of the $2^n-1$ quaternion division algebras gives rise to an arithmetic hyperbolic surface derived from a quaternion algebra containing closed geodesics of lengths $\ell(\gamma_1),\dots,\ell(\gamma_r)$. Here we have used the fact that by Theorem \ref{thm:CF}, every maximal order of these quaternion algebras contains a conjugate of each of the $\gamma_i$. Similarly, the quaternion algebra $\M_2(\mathbb Q)$ admits embeddings of all of these real quadratic fields and gives rise to the hyperbolic surface ${\bf H}^2/\PSL_2(\mathbb Z)$ (whose length spectrum must also contain $\ell(\gamma_1),\dots,\ell(\gamma_r)$). Let $S=\{\ell(\gamma_1),\dots,\ell(\gamma_r)\}$. We have just shown that for sufficiently large $V$ we have that $\pi(V,S)\geq 2^n$. Suppose now that ${\bf H}^2/\Gamma$ is an arithmetic hyperbolic surface derived from a quaternion algebra whose length spectrum contains $S$. Lemma \ref{lemma:sameinvariants} shows that the invariant trace field of this surface is $\mathbb Q$ and that its invariant quaternion algebra admits embeddings of the real quadratic fields $L_1,\dots, L_r$. Recall that two arithmetic hyperbolic surfaces are commensurable if and only if they have isomorphic invariant trace fields and invariant quaternion algebras \cite[Chapter 8.4]{MR}. If ${\bf H}^2/\Gamma$ is not compact then it is commensurable with ${\bf H}^2/\PSL_2(\mathbb Z)$, while if ${\bf H}^2/\Gamma$ is compact its invariant quaternion algebra must be one of our $2^n-1$ quaternion division algebras by Theorem \ref{theorem:csatheorem2}. This shows that ${\bf H}^2/\Gamma$ is commensurable to one of the $2^n$ hyperbolic surfaces constructed above. Theorem \ref{theorem:existence} follows.
\section{Introduction} \label{sec:1} Leptogenesis~\cite{Fukugita:1986hr} is an elegant framework for dynamically generating the observed matter-antimatter asymmetry in our Universe through out-of-equilibrium decays of heavy Majorana neutrinos, whilst simultaneously explaining the smallness of the light neutrino masses by the seesaw mechanism~\cite{seesaw}. Resonant Leptogenesis (RL)~\cite{Pilaftsis:1997dr, Pilaftsis:2003gt} offers the possibility of realizing this beautiful idea at energy scales accessible to laboratory experiments. In RL, the heavy Majorana neutrino self-energy effects on the leptonic $ C\!P$-asymmetry become dominant~\cite{Flanz:1994yx} and get resonantly enhanced, when at least two of the heavy neutrinos have a small mass difference comparable to their decay widths~\cite{Pilaftsis:1997dr}. Flavour effects in both heavy-neutrino and charged-lepton sectors, as well as the interplay between them, play an important role in determining the final lepton asymmetry in low-scale leptogenesis models~\cite{Abada:2006fw, Nardi:2006fx}. These intrinsically quantum effects can be consistently accounted for by extending the classical flavour-diagonal Boltzmann equations for the number densities of individual flavour species to a semi-classical evolution equation for a {\it matrix of number densities}~\cite{Sigl:1993}. Using this general technique, we present in Section~\ref{sec:2} a {\it fully} flavour-covariant formalism for transport phenomena in the Markovian regime. As an application of this general formalism, we derive a set of flavour-covariant transport equations for lepton and heavy-neutrino number densities with arbitrary flavour content in a quantum-statistical ensemble. We demonstrate the necessary appearance of rank-4 tensor rates in flavour space that properly account for the statistical evolution of off-diagonal flavour coherences. 
As shown in Section~\ref{sec:3}, this manifestly flavour-covariant formalism enables us to capture three important flavour effects pertinent to RL: (i) the resonant mixing of heavy neutrinos, (ii) the coherent oscillations between heavy neutrino flavours and (iii) quantum (de)coherence effects in the charged-lepton sector. In Section~\ref{sec:4}, we present a numerical example to illustrate the importance of these flavour off-diagonal effects on the final lepton asymmetry. Our conclusions are given in Section~\ref{sec:5}. For a detailed discussion of the topics presented here, we refer the reader to~\cite{Dev:2014laa}. \section{Flavour-Covariant Formalism} \label{sec:2} Let us begin with an arbitrary flavour content for the lepton doublet field operators $L_l$ (with $l=1,\: 2,\: \dots,\: \mathcal{N}_{L}$) and the right-handed Majorana neutrino field operators $N_{\rm R, \alpha} \equiv \mathrm{P_R} N_\alpha$ (with $\alpha=1,\: 2,\: \dots,\: \mathcal{N}_N$), where $\mathrm{P_R} = (\mat{1}_4 + \gamma_5)/2$ is the right-chiral projection operator. The field operators transform as follows in the fundamental representations of $U(\mathcal{N}_{L})$ and $U(\mathcal{N}_{N})$: \begin{subequations} \begin{gather} \hspace{-1.0em}L_l \to L'_l = V_l^{\phantom{l}m}L_m\;,\quad L^l \equiv (L_l)^{\dag} \to L'^l = V^l_{\phantom{l}m}L^m\;, \\ \hspace{-1.0em} N_{\mathrm{R},\, \alpha} \to N'_{\rm R, \alpha} = U_{\alpha}^{\phantom{\alpha}\beta}N_{\mathrm{R},\,\beta},\quad N_{\mathrm{R}}^{\alpha} \to N_{\rm R}'^{ \alpha} = U^{\alpha}_{\phantom{\alpha}\beta}N_{\mathrm{R}}^{\beta}, \end{gather} \end{subequations} where $V_l^{\phantom{l}m} \in U(\mathcal{N}_{L})$ and $U_{\alpha}^{\phantom{\alpha}\beta} \in U(\mathcal{N}_{N})$. 
In the flavour basis, the relevant neutrino Lagrangian is given by \begin{equation} -\mathcal{L}_N = \h{l}{\alpha} \overline L^{l} \widetilde{\Phi} N_{\rm R, \alpha} + \frac{1}{2} \overline{N}_{\rm R, \alpha}^C [M_N]^{\alpha \beta} N_{\rm R, \beta} + {\rm H.c.}\;, \label{eq:L} \end{equation} where $\widetilde{\Phi}=i\sigma_2\Phi^*$ is the isospin conjugate of the Higgs doublet $\Phi$. The Lagrangian~\eqref{eq:L} transforms covariantly under $U(\mathcal{N}_{L})\otimes U(\mathcal{N}_{N})$, provided the heavy-neutrino Yukawa and mass matrices transform as \begin{subequations} \begin{align} \h{l}{\alpha} \ &\rightarrow \ h_{l}'^{\ \alpha} \ = \ V_l^{\phantom l m} \; U^\alpha_{\phantom{\alpha} \beta} \; \h{m}{\beta} \; ,\\ [M_N]^{\alpha \beta} \ &\rightarrow \ [M'_N]^{\alpha \beta} \ = \ U^\alpha_{\phantom{\alpha} \gamma} \; U^\beta_{\phantom{\beta} \delta} \; [M_N]^{\gamma \delta} \;. \end{align} \end{subequations} The field operators in~\eqref{eq:L} can be expanded in flavour-covariant plane-wave decompositions, e.g. \begin{align} L_l(x) \ & = \ \sum_{s=+,-} \int_{\ve p} \Esdu{L}{p}{l}{i} \nonumber \\ & \quad \times \ \Big( \edu{-}{p}{i}{j} \su{s}{p}{j}{k} \, b_k(\ve p,s,0) \; \nonumber\\&\qquad +\: \edu{}{p}{i}{j} \, \sv{s}{p}{j}{k} \, d_k^{\dagger}(\ve p,s,0) \Big)\;, \label{eq:Lfield} \end{align} where we have suppressed the isospin indices. In \eqref{eq:Lfield}, $\int_{\mathbf{p}}\equiv\int\!\frac{\D{3}{\mathbf{p}}}{(2\pi)^3}$, $s$ is the helicity index and $[E_L^2(\mathbf{p})]_{l}^{\phantom{l}m}=\mathbf{p}^2\delta_{l}^{\phantom{l}m}+ [M_L^{\dag}M_L]_{l}^{\phantom{l}m}$. Notice that the Dirac four-spinors $\su{s}{p}{j}{k}$ and $\sv{s}{p}{j}{k}$ transform as rank-$2$ tensors in flavour space. 
The lepton creation and annihilation operators $b^k\equiv b_k^{\dag}$ and $b_k$, and the anti-lepton creation and annihilation operators $d^{\dagger,\, k}\equiv d_k$ and $d_k^{\dagger}$, satisfy the following equal-time anti-commutation relations \begin{align} \label{eq:b_d_anticomm} & \big\{ b_l(\ve p,s,\tilde{t}), \, b^{m}(\ve p',s',\tilde{t}) \big\} \ = \ \big\{d^{\dagger, m}(\ve p,s,\tilde{t}) , \, d_l^{\dagger}(\ve p',s',\tilde{t}) \big\} \nonumber\\ & \qquad \qquad \qquad \qquad \ = \ (2 \pi)^3 \delta^{(3)}{(\ve p - \ve p')} \, \delta_{s s'}\, \delta_l^{\phantom l m} . \end{align} Note that for the Dirac field, the lepton annihilation operator $b_k(\ve p,s,\tilde{t})$ and the anti-lepton creation operator $d_k^{\dagger}(\ve p,s,\tilde{t})$ transform under the {\em same} representation of $U(\mathcal{N}_L)$. For the heavy {\em Majorana} neutrino creation and annihilation operators $a^{\alpha}(\ve k,r,\tilde{t})$ and $a_{\alpha}(\ve k,r,\tilde{t})$, with helicities $r=\pm$, it is necessary to introduce the flavour-covariant Majorana constraint \begin{equation} d^{\dagger , \alpha}(\ve k,-\,r,\tilde{t})\ = \ G^{\alpha \beta} \, b_\beta(\ve k,r,\tilde{t}) \ \equiv\ G^{\alpha\beta}a_{\beta}(\ve k,r,\tilde{t}) \;, \label{eq:def_G} \end{equation} where $G^{\alpha \beta}\equiv[ U^* U^\dagger ]^{\alpha \beta}$ are the elements of a unitary matrix $\bm{G}$, which transforms as a contravariant rank-$2$ tensor under $U(\mathcal{N}_N)$. Similar flavour rotations are {\it forced} by the flavour-covariance of the formalism, when we derive the transformation properties of the discrete symmetries $C$, $P$ and $T$. 
This necessarily leads to the {\it generalized} discrete transformations \begin{subequations} \begin{align} b_l(\ve p,s,\tilde{t})^{\widetilde{C}} \ & \equiv \ \mathcal{G}^{lm} \, b_m(\ve p,s,\tilde{t})^{C} \ = \ -i \, d^{\dagger,l}(\ve p,s,\tilde{t}) \; ,\\ b_l(\ve p,s,\tilde{t})^P \ & = \ - s \, b_l(-\ve p,-s,\tilde{t})\;, \\ b_l(\ve p,s,\tilde{t})^{\widetilde{T}} \ &\equiv \ \mathcal{G}_{lm} \, b_m(\ve p,s,\tilde{t})^{T} \ = \ b_l(-\ve p,s,-\tilde{t}) \;, \end{align} \end{subequations} where $\mathcal{G}^{lm}\equiv [V^*V^\dag]^{lm}$ is the lepton analogue of the heavy-neutrino tensor $\bm{G}$. Using a flavour-covariant canonical quantization~\cite{Dev:2014laa}, we may define the matrix number densities of the leptons and heavy neutrinos, as follows: \begin{subequations} \begin{align} \hspace{-0.85em}\n{L}{s_1 s_2}{p}{l}{m}{t} & \equiv \mathcal V_3^{-1} \langle b^m(\ve p, s_2,\tilde{t}) b_l(\ve p, s_1,\tilde{t})\rangle_t \,, \label{eq:def_n_1}\\ \hspace{-0.85em}\nb{L}{s_1 s_2}{p}{l}{m}{t} & \equiv \mathcal V_3^{-1} \langle d_l^{\dagger}(\ve p,s_1,\tilde{t}) d^{\dagger,m}(\ve p,s_2,\tilde{t})\rangle_t \,, \label{eq:def_n_2}\\ \hspace{-0.85em}\n{N}{r_1 r_2}{k}{\alpha}{\beta}{t} & \equiv \mathcal V_3^{-1} \langle a^{\beta}(\ve k,r_2,\tilde{t}) a_\alpha(\ve k,r_1,\tilde{t})\rangle_t \,, \label{eq:def_n_3} \end{align} \end{subequations} where $\mathcal V_3 = (2 \pi)^3 \delta^{(3)}(\ve 0)$ is the coordinate three-volume and the macroscopic time $t=\tilde{t}-\tilde{t}_i$, equal to the interval of microscopic time between specification of initial conditions ($\tilde{t}_i$) and subsequent observation of the system ($\tilde{t}$)~\cite{Millington:2012pf}. Note the relative reversed ordering of indices in the lepton and anti-lepton number densities, which ensures that the two quantities transform in the same representation, so that they can be combined to form a flavour-covariant lepton asymmetry. 
For the Majorana neutrinos, $\bm{n}^N$ and $\overline{\bm{n}}^N$ are not independent quantities and are related by the generalized Majorana condition \begin{equation} \label{eq:Majorana_bar_G} \nb{N}{r_1 r_2}{k}{\alpha}{\beta}{t} \ = \ G_{\alpha \mu} \, \n{N}{r_2 r_1}{k}{\lambda}{\mu}{t} \, G^{\lambda \beta} \;. \end{equation} The number density matrices defined above have simple generalized-$C$ transformation properties: \begin{equation} [\bm{n}^X(\ve p,t)]^{\widetilde{C}} \ = \ [\overline{\bm{n}}^X(\ve p,t)]^{\mathsf{T}}, \label{LN} \end{equation} where $\mathsf{T}$ denotes the matrix transpose acting on both flavour and helicity indices. The total number densities $\bm{n}^X(t)$ are obtained by tracing over helicity and isospin indices and integrating over the three-momenta. Using the $\widetilde{C}$-transformation relations \eqref{LN}, we can define the generalized $\widetilde{C}P$-``odd'' lepton asymmetry \begin{equation} \bm{\delta n}^L\ = \ \bm{n}^L\:-\:\overline{\bm{n}}^L\;. \end{equation} In addition, for the heavy neutrinos, we may define the $\widetilde{C}P$-``even'' and -``odd'' quantities \begin{equation} \underline{\bm{n}}^N\ = \ \frac{1}{2}\Big(\bm{n}^N\:+\:\overline{\bm{n}}^N\Big)\;,\qquad \bm{\delta n}^N\ =\ \bm{n}^N\:-\:\overline{\bm{n}}^N\;. \end{equation} We will use these quantities, having definite $\widetilde{C}P$-transformation properties, to write down the flavour-covariant rate equations. First we derive a Markovian master equation governing the time evolution of the matrix number densities $\mat{n}^X(\ve p,t)$. 
These are defined in terms of the quantum-mechanical number-density operator $\mat{\check{n}}^X(\ve k,\tilde{t};\tilde{t}_i)$ and density operator $\rho(\tilde{t};\tilde{t}_i)$, as follows: \begin{equation} \label{eq:def_n_rho} \mat{n}^{X}(\ve k, t) \equiv \langle \mat{\check{n}}^{X}(\ve k,\tilde{t};\tilde{t}_i) \rangle_t = \Tr\left\{\, \rho(\tilde{t};\tilde{t}_i) \, \mat{\check{n}}^{X}(\ve k, \tilde{t};\tilde{t}_i) \right\}\,, \end{equation} where the trace is over the Fock space. Differentiating \eqref{eq:def_n_rho} with respect to the macroscopic time $t=\tilde{t}-\tilde{t}_i$, and using the Liouville-von Neumann and Heisenberg equations of motion, we proceed via a Wigner-Weisskopf approximation to obtain the leading order Markovian master equation~\cite{Dev:2014laa} \begin{align} \label{eq:master} &\frac{\D{}{}}{\D{}{t}} \mat{n}^X(\ve k, t) \ \simeq \ i \langle \, [H_0^X,\ \mat{\check{n}}^{X}(\ve k, t) ] \, \rangle_t \nonumber\\& \ -\:\frac{1}{2} \int_{-\infty}^{+\infty} \D{}{t'} \; \langle \, [H_{\rm int}(t'),\ [H_{\rm int}(t),\ \mat{\check{n}}^{X}(\ve k, t)]] \, \rangle_{t} \; , \end{align} where $H^X_0$ and $H_{\rm int}$ are the free and interaction Hamiltonians, respectively. The first term on the RHS of \eqref{eq:master}, involving the free Hamiltonian, generates flavour oscillations in vacuum, whereas the second term in \eqref{eq:master}, involving the interaction Hamiltonian, generates the collision terms in the generalized Boltzmann equations. 
For the system of lepton and Higgs doublets and heavy-neutrino singlets under consideration, we have \begin{subequations} \begin{align} H_0^L & \ = \ \sum_{s}\int_{\ve p} \, \Edu{L}{p}{m}{l} \Big(b^{\dagger,m}(\ve p,s,\tilde{t}) \, b_l(\ve p,s,\tilde{t}) \; \nonumber \\ & \qquad \qquad +\; d^{\dagger}_l(\ve p,s,\tilde{t}) \, d^{m}(\ve p,s,\tilde{t}) \Big)\; , \label{free_HamL}\\ H_0^N & \ = \ \sum_{r} \int_{\ve k} \, \Edu{N}{k}{\beta}{\alpha} \, a^{\dagger,\beta}(\ve k,r,\tilde{t}) \, a_\alpha(\ve k,r,\tilde{t}) \; , \label{free_HamN} \\ H_{\rm int} & \ = \ \int d^4x \h{l}{\alpha} \, \bar L^{l} \, \widetilde{\Phi} \, N_{\rm R, \alpha} \, + \, {\rm H.c.} \;. \label{Hint} \end{align} \end{subequations} Using these expressions in \eqref{eq:master}, we obtain the following evolution equations for the lepton and heavy-neutrino number densities~\cite{Dev:2014laa}: \begin{subequations} \begin{align} &\frac{\D{}{} }{\D{}{t}} \, \n{L}{s_1 s_2}{p}{l}{m}{t} \ = \ - \, i \, \Big[{E}_L(\ve p), \,{n}^{L}_{s_1 s_2}(\ve p,t) \Big]_{l}^{\phantom{l}m} \nonumber\\ &\qquad \qquad \qquad \qquad + \; [{C}^L_{s_1 s_2}(\ve p,t)]_l^{\phantom l m} \;, \label{eq:evol_lept}\\ &\frac{\D{}{} }{\D{}{t}} \,\n{N}{r_1 r_2}{k}{\alpha}{\beta}{t} \ = \ - \, i \, \Big[{E}_N(\ve k), \, {n}^{N}_{r_1 r_2}(\ve k,t)\Big]_{\alpha}^{\phantom{\alpha}\beta} \nonumber\\&\quad + \; [{C}^{N}_{r_1 r_2}(\ve k,t)]_\alpha^{\phantom \alpha \beta} + \; G_{\alpha \lambda} \, [\overline{{C}}^N_{r_2 r_1}(\ve k,t)]_{\mu}^{\phantom{\mu} \lambda} \, G^{\mu \beta} \;, \label{eq:evol_neu} \end{align} \end{subequations} where, for instance, the lepton collision terms may be written in the form \begin{align} \label{def_coll} [{C}^L_{s_1 s_2}(\ve p,t)]_l^{\phantom l m} \ & = \ - \: \frac{1}{2} \, \big[ \,{\mathcal{F}} \cdot {\Gamma} \, +\, {\Gamma^\dagger} \cdot {\mathcal{F}} \,\big]_{s_1 s_2, \,l}^{\phantom{s_1 s_2, \,l} m} \;. 
\end{align} Here, we have suppressed the overall momentum dependence and used a compact notation \begin{align} \label{compact1} \big[{\mathcal{F}} \cdot {\Gamma} \,\big]_{s_1 s_2, \,l}^{\phantom{s_1 s_2, \,l} m} \ & \equiv \ \sum_{s,r_1,r_2} \int_{\ve k, \, \ve q} \Tdu{[\mathcal{F}_{s_1 s \,r_1 r_2} (\ve p, \ve q, \ve k,t)]}{l}{n}{\alpha}{\beta} \nonumber\\&\qquad \times\: \Tdu{[\Gamma_{s\, s_2 r_2 r_1} (\ve p, \ve q, \ve k)]}{n}{m}{\beta}{\alpha} \;. \end{align} In \eqref{compact1}, there are two {\it new rank-4 tensors} in flavour space, as required by flavour-covariance: (i) the statistical number density tensors \begin{align} & \Tdu{\mat{\mathcal{F}}(\ve p, \ve q, \ve k,t)}{}{}{}{} \ = \ n^{\Phi}(\ve q, t) \, \mat{{n}}^L(\ve p, t) \otimes \left[{\mat 1} - \mat{{n}}^{N}(\ve k, t)\right] \nonumber \\ \ & \qquad - \: \left[1 + n^{\Phi}(\ve q, t)\right] \left[{\mat 1} - \mat{{n}}^L(\ve p, t)\right] \otimes \mat{{n}}^{N}(\ve k, t)\;, \label{Fstat} \end{align} and (ii) the absorptive rate tensors \begin{align} \Tdu{[\Gamma_{s_1 s_2 r_1 r_2} (\ve p, \ve q, \ve k)]}{l}{m}{\alpha}{\beta} & = \hs{k}{\nu}\; \h{i}{\lambda} \DiFud{k-p-q}{j}{p}{\mu}{\delta} \notag\\ &\hspace{-8em}\times \ \frac{1}{2 E_\Phi(\ve q)}\, \Esud{L}{p}{i}{j} \, \Esdu{L}{p}{k}{n} \,\nonumber \\ & \hspace{-8em}\times \ \Esdu{N}{k}{\lambda}{\mu} \, \Esud{N}{k}{\nu}{\gamma} \Tr\Big\{\su{r_2}{k}{\delta}{\beta} \, \notag\\ & \hspace{-8em}\times \ \sub{r_1}{k}{\gamma}{\alpha} \, {\rm {P_L}} \, \su{s_2}{p}{n}{m} \, \sub{s_1}{p}{p}{l} \, {\rm {P_R}} \Big\} \;. \label{eq:Gamma1} \end{align} The rate tensor \eqref{eq:Gamma1} describes heavy neutrino decays and inverse decays, and its off-diagonal components are responsible for the evolution of flavour-coherences in the system. The necessary emergence of these higher-rank tensors in flavour space may be understood in terms of the unitarity cuts of the partial self-energies~\cite{Dev:2014laa}. 
This is illustrated diagrammatically in Figure~\ref{fig:cuts2} for the in-medium heavy-neutrino production $L\Phi \to N$ (Figures~\ref{cuta} and \ref{cutb}) and $\Delta L=0$ scattering $L\Phi\to L \Phi$ (Figures~\ref{cutc} and \ref{cutd}) in a spatially-homogeneous statistical background of lepton and Higgs doublets. In Figures~\ref{cuta} and \ref{cutc}, the cut, across which positive energy flows from unshaded to shaded regions, is associated with production rates in the thermal plasma, as described by a generalization of the optical theorem~\cite{Dev:2014laa}. \begin{figure*}[t] \centering \subfloat[][Heavy-neutrino self-energy,\\ $N \to L\Phi \to N$.\label{cuta}]{\includegraphics[scale=0.55]{LPHItoNself}} \hspace{1.5em} \subfloat[][Heavy-neutrino production, $n^{\Phi}\protect{[}n^L\protect{]}_l^{\protect\phantom{l}k}\protect{[}\gamma(L\Phi \to N)\protect{]}_{k \protect\phantom{l} \alpha}^{\protect\phantom{k} l \protect\phantom{\alpha}\beta}$\label{cutb}]{\includegraphics[scale=0.65]{LPHItoNamp}}\\ \subfloat[][Charged-lepton self-energy, with $\Delta L = 0$ internally.\label{cutc}]{\includegraphics[scale=0.55]{DL0self}} \hspace{1.5em} \subfloat[][$\Delta L = 0$ scattering, $n^{\Phi}\protect{[}n^L\protect{]}_l^{\protect\phantom{l}k} \protect{[}\gamma(L\Phi \to L\Phi) \protect{]}_{k \protect\phantom{l} m} ^{\protect\phantom{k} l \protect\phantom{m}n}$.\label{cutd}]{\includegraphics[scale=0.65]{DL0amp}} \caption{Generalized unitarity cut of the partial heavy-neutrino and lepton self-energies, giving rise to the rank-4 tensor rates for heavy-neutrino production and $\Delta L = 0$ scattering processes. The explicit forms of the thermally-averaged rank-4 rates can be found in~\cite{Dev:2014laa}. 
} \label{fig:cuts2} \end{figure*} \section{Rate Equations for Resonant Leptogenesis} \label{sec:3} As already mentioned in Section~\ref{sec:1}, in the limit when two (or more) heavy Majorana neutrinos become degenerate, the $\varepsilon$-type $ C\!P$-violation due to the interference between the tree-level and absorptive part of the self-energy graphs in the heavy-neutrino decay can be resonantly enhanced, even up to order one~\cite{Pilaftsis:1997dr}. In this regime, finite-order perturbation theory breaks down and one needs a consistent field-theoretic resummation of the self-energy corrections. Neglecting thermal loop effects~\cite{Giudice:2003jh}, we perform such resummation along the lines of~\cite{Pilaftsis:2003gt} and replace the tree-level neutrino Yukawa couplings by their resummed counterparts in the transport equations given in Section~\ref{sec:2}. Specifically, for the processes $N \to L \Phi$ and $L^{\tilde{c}} \Phi^{\tilde{c}} \to N$, we have $\h{l}{\alpha} \to \hr{l}{\alpha}$ and, for $N \to L^{\tilde{c}} \Phi^{\tilde{c}}$ and $L \Phi \to N$, we have $\hs{l}{\alpha} \to \hrc{l}{\alpha}$, where $\tilde{c}$ denotes the $\widetilde{C}\!P$-conjugate. The algebraic form of the resummed neutrino Yukawa couplings in the heavy-neutrino mass eigenbasis can be found in~\cite{Pilaftsis:2003gt} and the corresponding form in a general flavour basis may be obtained by the appropriate flavour transformation, i.e.~$\hr{l}{\alpha} = V_{l}^{\ m}U^{\alpha}_{\ \beta}\widehat{\mathbf{h}}_{m}^{\ \ \beta}$, where $\widehat{\mathbf{h}}_{m}^{\ \ \beta}\equiv\widehat{\mathbf{h}}_{m\beta}$ in the mass eigenbasis~\cite{Dev:2014laa}. 
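Since the resummed couplings are obtained from the mass-eigenbasis ones by the unitary rotations $V$ and $U$, basis-independent combinations are preserved. A quick check with randomly generated (unphysical) matrices:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(dim, rng):
    # QR decomposition of a random complex matrix gives a unitary Q.
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, _ = np.linalg.qr(z)
    return q

h_hat = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # mass basis
V = random_unitary(3, rng)   # charged-lepton basis rotation (made up)
U = random_unitary(3, rng)   # heavy-neutrino basis rotation (made up)

# h_l^alpha = V_l^m U^alpha_beta h_hat_m^beta  <=>  h = V h_hat U^T
h = V @ h_hat @ U.T

# Basis-independent quantities, e.g. the singular values of h (and hence
# Tr h^dagger h), are unchanged by the flavour rotation.
assert np.allclose(np.linalg.svd(h, compute_uv=False),
                   np.linalg.svd(h_hat, compute_uv=False))
```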
In order to obtain the rate equations relevant for RL from the general transport equations \eqref{eq:evol_lept} and \eqref{eq:evol_neu}, we perform the following standard approximations: \begin{itemize} \item [(i)] assume kinetic equilibrium, since elastic scattering processes rapidly equilibrate the momentum distributions for all the relevant particle species on time-scales much smaller than their statistical evolution. \item [(ii)] neglect the mass splittings between different heavy-neutrino flavours inside thermal integrals, and use an average mass $m_N$ and energy $E_N(\ve k) = (|\ve k|^2 + m_N^2)^{1/2}$, since the average momentum scale $|\ve k|\sim T \gg |m_{N_\alpha}-m_{N_\beta}|$. \item [(iii)] take the classical statistical limit of \eqref{Fstat}. \item [(iv)] neglect thermal and chemical-potential effects~\cite{Pilaftsis:2005rv}. \end{itemize} With the above approximations, we integrate both sides of \eqref{eq:evol_lept} and \eqref{eq:evol_neu}, and their generalized $\widetilde{C}P$-conjugates, over the phase space and sum over the degenerate isospin and helicity degrees of freedom. The resulting rate equations account for the decay and inverse decay of the heavy neutrinos in a flavour-covariant way~\cite{Dev:2014laa}. However, in order to guarantee the correct equilibrium behaviour, we must include the washout terms induced by the $\Delta L=0$ and $\Delta L=2$ scattering processes, with proper real intermediate state (RIS) subtraction~\cite{Kolb:1980, Pilaftsis:2003gt, Dev:2014laa} (see e.g., Figure~\ref{cutd}). As illustrated in~\cite{Dev:2014laa}, it is necessary to account for thermal corrections in the RIS contributions, when considering off-diagonal flavour correlations. 
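Approximation (iii) above replaces the Bose-enhancement and Pauli-blocking factors in \eqref{Fstat} by unity. A toy check with small random occupancies confirms that the classical form agrees with the full quantum-statistical tensor up to terms quadratic in the occupancies:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-3                                  # small occupancies
I2 = np.eye(2)

def herm(a):
    return 0.5 * (a + a.conj().T)           # Hermitize a random matrix

nPhi = 0.7 * eps                            # Higgs occupancy (a scalar here)
nL = eps * herm(rng.random((2, 2)))         # lepton number-density matrix
nN = eps * herm(rng.random((2, 2)))         # heavy-neutrino number density

def outer(a, b):
    return np.einsum('lm,ab->lmab', a, b)   # rank-4 flavour tensor

# Full quantum-statistical tensor, cf. Eq. (Fstat):
F = nPhi * outer(nL, I2 - nN) - (1 + nPhi) * outer(I2 - nL, nN)
# Classical statistical limit: 1 - n -> 1 and 1 + n -> 1:
F_cl = nPhi * outer(nL, I2) - outer(I2, nN)

# The two agree up to terms quadratic in the occupancies.
assert np.max(np.abs(F - F_cl)) < 10 * eps**2
```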
In addition to the $2\leftrightarrow 2$ scatterings, it is also important to include the effect of the charged-lepton Yukawa couplings, which are responsible for the decoherence of the charged leptons towards their would-be mass eigenbasis, as opposed to the interactions with the heavy neutrinos [cf.~\eqref{eq:L}], which tend to create a coherence between the charged-lepton flavours. Note that, while calculating the reaction rates for the processes involving the charged-lepton Yukawa couplings, it is important to take into account their thermal masses, which control the phase-space suppression for the decay and inverse decay of the Higgs boson~\cite{Cline:1993bd}. Taking into account all of these contributions, as well as the expansion of the Universe, we derive the following {\it manifestly} flavour-covariant rate equations for the normalized $\widetilde{C}\!P$-``even" number density matrix $\mat{\underline{\eta}}^{N}$ and $\widetilde{C}\!P$-``odd" number density matrices $\mat{\delta \eta}^N$ and $\mat{\delta \eta}^L$ (where $\eta^X=n^X/n^{\gamma}$, $n^\gamma$ being the photon number density)~\cite{Dev:2014laa}: \begin{subequations} \boxalign[0.45\textwidth]{ \begin{align} \frac{H_{N} \, n^\gamma}{z}\, &\frac{\D{}{[\underline{\eta}^{N}]_{\alpha}^{\phantom{\alpha}\beta}}}{\D{}{z}} \ = \ - \, i \, \frac{n^\gamma}{2} \, \Big[\mathcal{E}_N,\, \delta \eta^{N}\Big]_\alpha^{\phantom \alpha \beta} + \, \Tdu{\big[\widetilde{\rm Re} (\gamma^{N}_{L \Phi})\big]}{}{}{\alpha}{\beta} \, \notag\\ & - \, \frac{1}{2 \, \eta^N_{\rm eq}} \, \Big\{\underline{\eta}^N, \, \widetilde{\rm Re}(\gamma^{N}_{L \Phi}) \Big\}_{\alpha}^{\phantom{\alpha}\beta} \;, \label{eq:evofinal2}\\[0.5em] \frac{H_{N} \, n^\gamma}{z}\, &\frac{\D{}{[\delta \eta^N]_\alpha^{\phantom \alpha \beta}}}{\D{}{z}} \ = \ - \, 2 \, i \, n^\gamma \, \Big[\mathcal{E}_N,\, \underline{\eta}^{N}\Big]_\alpha^{\phantom \alpha \beta} \notag\\ & + \, 2\, i\, \Tdu{\big[\widetilde{\rm Im} (\delta \gamma^{N}_{L 
\Phi})\big]}{}{}{\alpha}{\beta} \, - \, \frac{i}{\eta^N_{\rm eq}} \, \Big\{\underline{\eta}^N, \, \widetilde{\rm Im} (\delta\gamma^{N}_{L \Phi}) \Big\}_{\alpha}^{\phantom{\alpha}\beta} \notag\\ & - \, \frac{1}{2 \, \eta^N_{\rm eq}} \, \Big\{\delta \eta^N, \, \widetilde{\rm Re}(\gamma^{N}_{L \Phi}) \Big\}_{\alpha}^{\phantom{\alpha}\beta} \label{eq:evofinal3}\;, \\[0.5em] \frac{H_{N} \, n^\gamma}{z}\, & \frac{\D{}{[\delta \eta^L]_l^{\phantom l m}}} {\D{}{z}} \ = \ - \, \Tdu{[\delta \gamma^{N}_{L \Phi}]}{l}{m}{}{} \, +\, \frac{[\underline{\eta}^{N}]_{\beta}^{\phantom{\beta}\alpha}} {\eta^N_{\rm eq}} \, \Tdu{[\delta \gamma^{N}_{L \Phi}]}{l}{m}{\alpha}{\beta} \notag\\ & + \, \frac{[\delta \eta^N]_{\beta}^{\phantom\beta \alpha}}{2\,\eta^N_{\rm eq}} \, \Tdu{[\gamma^{N}_{L \Phi}]}{l}{m}{\alpha}{\beta} \, - \frac{1}{3} \, \Big\{ \delta {\eta}^{L} , \, {\gamma}^{L\Phi}_{L^{\tilde{c}} \Phi^{\tilde{c}}} + {\gamma}^{L\Phi}_{L \Phi}\Big\}_{l}^{\phantom l m} \notag\\ & \, - \frac{2}{3} \, \Tdu{[\delta {\eta}^L]}{k}{n}{}{} \, \Tdu{[{\gamma}^{L\Phi}_{L^{\tilde{c}} \Phi^{\tilde{c}}} - {\gamma}^{L\Phi}_{L \Phi}]}{n}{k}{l}{m} \notag\\[3pt] & - \frac{2}{3} \, \Big\{\delta \eta^L, \, \gamma_{\rm dec } \Big\}_{l}^{\phantom l m} \, +\, [\delta \gamma_{\rm dec}^{\rm back}]_{l}^{\phantom l m}\;. \label{eq:evofinal1} \end{align}} \end{subequations} \vspace{0.25em} Here, $z=m_N/T$, $H_N$ is the Hubble parameter at $z=1$ and $ \mat{\mathcal{E}}_N$ is the thermally-averaged effective heavy-neutrino energy matrix. $\mat\gamma^N_{L\Phi}$ and $\mat{\delta\gamma}^N_{L\Phi}$ are respectively the $\widetilde{C}\!P$-``even" and -``odd" thermally-averaged rate tensors governing the decay and inverse decay of the heavy neutrinos. The rates $\mat\gamma^{L\Phi}_{L \Phi}$ and $\mat\gamma^{L\Phi}_{L^{\tilde{c}} \Phi^{\tilde{c}}}$ in \eqref{eq:evofinal1} describe the washout due to $\Delta L = 0$ and $\Delta L = 2$ resonant scattering, respectively. 
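To expose the gross structure of these equations, one may collapse them to a single flavour and integrate numerically. The sketch below is only a caricature: $D$, $W$ and the equilibrium abundance are invented stand-ins for the thermally averaged rates of the text, and $\epsilon_{CP}$ mimics a small effective $C\!P$ asymmetry:

```python
import numpy as np

# Single-flavour caricature of the rate equations in z = m_N / T:
#   d eta_N / dz = -D(z) * (eta_N - eta_eq(z))
#   d dL / dz    =  eps_CP * D(z) * (eta_N - eta_eq(z)) - W(z) * dL
# D, W and eta_eq are schematic stand-ins, NOT the thermally averaged
# rates of the text.
eta_eq = lambda z: 0.5 * np.exp(-z) * (1.0 + z + 0.5 * z**2)
D = lambda z: 10.0 * z                 # strong-washout decay strength
W = lambda z: 5.0 * z * eta_eq(z)      # washout rate
eps_CP = 1e-3                          # effective CP asymmetry (invented)

z, dz = 0.01, 1e-4
eta_N, dL = 0.0, 0.0                   # vanishing initial abundances
while z < 20.0:
    dev = eta_N - eta_eq(z)
    eta_N += dz * (-D(z) * dev)
    dL += dz * (eps_CP * D(z) * dev - W(z) * dL)
    z += dz

assert abs(eta_N - eta_eq(20.0)) < 1e-6   # eta_N tracks equilibrium
assert dL > 0.0                           # a net asymmetry freezes out
```

In this strong-washout toy, the heavy-neutrino abundance tracks its equilibrium value, the asymmetry generated at early times is washed out, and only the contribution produced near washout freeze-out survives.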
On the other hand, $\mat\gamma_{\rm dec}$ and $\mat{\delta \gamma}_{\rm dec}^{\rm back}$ govern the charged-lepton decoherence. An accurate determination of the impact of the latter phenomenon, which tends to suppress the generated asymmetry in models of RL, relies upon the systematic inclusion of the rank-4 absorptive rate tensors. We emphasise, therefore, that this higher-rank flavour structure, which is manifest in this fully flavour-covariant formalism, can play an important physical role. In obtaining \eqref{eq:evofinal2} and \eqref{eq:evofinal3}, we have defined, for a given Hermitian matrix $\mat{A} = \mat{A}^\dagger$, the generalized real and imaginary parts, as follows: \begin{subequations} \begin{align} \big[\widetilde{\rm Re}(A)\big]_{\alpha}^{\phantom{\alpha}\beta} \ & \equiv \ \frac{1}{2} \, \Big( \Tdu{A}{\alpha}{\beta}{}{} \; + \; G_{\alpha \lambda} \,\Tdu{A}{\mu}{\lambda}{}{}\, G^{\mu \beta}\Big) \;, \label{4.26} \\ \big[\widetilde{\rm Im}(A)\big]_{\alpha}^{\phantom{\alpha}\beta} \ & \equiv \ \frac{1}{2 \, i} \, \Big( \Tdu{A}{\alpha}{\beta}{}{} \; - \; G_{\alpha \lambda} \,\Tdu{A}{\mu}{\lambda}{}{}\, G^{\mu \beta}\Big) \; . \label{4.27} \end{align} \end{subequations} In addition, we have used the relations \begin{subequations} \begin{align} \widetilde{\rm Re}(\underline{\mat{n}}^N) \ &= \ \underline{\mat{n}}^N \;,\\ i\, \widetilde{\rm Im}(\mat{\delta n}^N) \ &= \ \mat{\delta n}^N \;. \end{align} \end{subequations} The flavour-covariant rate equations \eqref{eq:evofinal2}--\eqref{eq:evofinal1} provide a complete and unified description of the RL phenomenon, consistently capturing the following {\it physically distinct} effects in a single framework, applicable for any temperature regime:\vspace{-0.5em} \begin{itemize} \item [(i)] Lepton asymmetry due to the {\it resonant mixing} between heavy neutrinos, as described by the resummed Yukawa couplings in $\mat{\delta\gamma}^N_{L\Phi}$, appearing in the first two terms on the RHS of~\eqref{eq:evofinal1}. 
This provides a flavour-covariant generalization of the mixing effects discussed earlier in~\cite{Pilaftsis:2003gt}.\vspace{-0.5em} \item [(ii)] Generation of the lepton asymmetry via coherent heavy-neutrino {\it oscillations}. Even starting with an incoherent diagonal heavy-neutrino number density matrix, off-diagonal $\widetilde{C}\!P$-``even'' number densities will be generated at ${\cal O}(h^2)$ due to the $C\!P$-conserving part of the coherent inverse decay rate $\mat\gamma^N_{L\Phi}$ in the last two terms on the RHS of~\eqref{eq:evofinal2}. Heavy-neutrino oscillations will transfer these coherences to the $\widetilde{C}\!P$-``odd'' number densities $[\delta \eta^N]_\alpha^{\phantom \alpha \beta}$ due to the commutator terms in~\eqref{eq:evofinal2} and \eqref{eq:evofinal3}. Finally, a lepton asymmetry is generated at ${\cal O}(h^4)$ by the $\widetilde{C}\!P$-``even'' coherent off-diagonal decay rates in the first term on the second line of~\eqref{eq:evofinal1}. Notice that the novel rank-4 rate tensor $\Tdu{[\gamma^{N}_{L \Phi}]}{l}{m}{\alpha}{\beta}$, required by flavour covariance, plays an important role in this mechanism, along with the $\widetilde{C}\!P$-``odd'' number density $[\delta \eta^N]_\alpha^{\phantom \alpha \beta}$, which is purely off-diagonal in the heavy-neutrino mass eigenbasis. We stress here that this phenomenon of coherent oscillations is an ${\cal O}(h^4)$ effect on the {\it total} lepton asymmetry, and so differs from the ${\cal O}(h^6)$ mechanism proposed in~\cite{Akhmedov:1998qx}. The difference is due to the fact that the latter typically takes place at temperatures much higher than the sterile neutrino masses in the model (see e.g.~\cite{Asaka:2005}), where the total lepton number is not violated at leading order. 
On the other hand, the ${\cal O}(h^4)$ effect identified here is enhanced in the same regime as the resonant $T=0$ $\varepsilon$-type $ C\!P$ violation, namely, for $z \approx 1$ and $\Delta m_N \sim \Gamma_{N_\alpha}$~\cite{Dev:2014laa}.\vspace{-0.5em} \item [(iii)] {\it Decoherence} effects due to charged-lepton Yukawa couplings, described by the last two terms on the RHS of~\eqref{eq:evofinal1}. Our description of these effects is similar to the one of~\cite{Abada:2006fw}, which has been generalized here to an arbitrary flavour basis. \end{itemize} \section{A Numerical Example} \label{sec:4} To illustrate the importance of the flavour effects captured {\it only} by the flavour-covariant rate equations \eqref{eq:evofinal2}--\eqref{eq:evofinal1}, we consider a scenario of {\it Resonant $\ell$-Genesis} (RL$_\ell$), in which the final lepton asymmetry is dominantly generated and stored in a {\it single} lepton flavour $\ell$~\cite{Pilaftsis:2004xx}. In this case, the heavy neutrino masses could be as low as the electroweak scale~\cite{Pilaftsis:2005rv}, still with sizable couplings to other charged-lepton flavours $\ell' \neq \ell$, whilst being consistent with all current experimental constraints~\cite{PDG}. This enables the modelling of RL$_\ell$ scenarios~\cite{Deppisch:2010fr} with electroweak-scale heavy Majorana neutrinos that could be {\it tested} during the run-II phase of the LHC~\cite{Dev:2013wba}. The basic assumption underlying RL$_\ell$ models is an approximate SO(3)-symmetric heavy-neutrino sector at some high scale $\mu_X$, with mass matrix $\bm{M}_N(\mu_X)=m_N\bm{1}_3+\bm{\Delta M}_N$~\cite{Pilaftsis:2005rv}. For our purposes, we assume that the SO(3)-breaking mass term $\bm{\Delta M}_N$ is of the minimal form $\bm{\Delta M}_N=\mathrm{diag}(\Delta M_1,\Delta M_2/2,-\Delta M_2/2)$. 
By virtue of the RG running, an additional mass splitting $\bm{\Delta M}_N^{\mathrm{RG}}$ is induced, such that $\bm{M}_N(m_N)=m_N\bm{1}_3+\bm{\Delta M}_N+\bm{\Delta M}_N^{\mathrm{RG}}$ at the scale relevant to RL. In addition, in order to ensure the smallness of the light-neutrino masses, we require the heavy-neutrino Yukawa sector to have an approximate leptonic U(1)$_l$ symmetry. As an explicit example of RL$_\ell$, we consider an RL$_\tau$ model, with the heavy-neutrino Yukawa structure \begin{eqnarray} \mat{h} \ = \ \left(\begin{array}{ccc} 0 & ae^{-i\pi/4} & ae^{i\pi/4}\\ 0 & be^{-i\pi/4} & be^{i\pi/4}\\ 0 & ce^{-i\pi/4} & ce^{i\pi/4} \end{array}\right) \: + \: \mat{\delta h} \; , \label{yuk} \end{eqnarray} where $a,b,c$ are arbitrary complex parameters and \begin{eqnarray} \mat{\delta h} \ = \ \left(\begin{array}{ccc} \epsilon_e & 0 & 0\\ \epsilon_\mu & 0 & 0\\ \epsilon_{\tau}& 0 & 0 \end{array}\right) \label{delta_h} \end{eqnarray} contains the U(1)$_l$-breaking parameters $\epsilon_{e,\mu,\tau}$. If the U(1)$_l$ symmetry were to be exact, i.e.~if $\bm{\delta h}=\bm{0}$, then the light neutrinos would remain massless to all orders in perturbation theory~\cite{Pilaftsis:1991ug}. In order to be consistent with the observed neutrino-oscillation data, we require $|a|,|b|\lesssim 10^{-2}$ for electroweak-scale heavy neutrinos. In addition, in order to protect the $\tau$ asymmetry from washout effects, we require $|c|\lesssim 10^{-5}\ll|a|,|b|$ and $|\epsilon_{e,\mu,\tau}|\lesssim 10^{-6}$. \begin{figure*}[t!] 
\centering \newsavebox{\tempbox} \hspace{-1.2cm} \sbox{\tempbox}{\includegraphics[scale=0.55]{mN400.pdf}} \subfloat[ \label{fig2a}]{\includegraphics[scale=0.55]{mN400.pdf}} \hspace{0.5cm} \subfloat[ \label{fig2b}]{\vbox to \ht\tempbox {\hsize=11em \vfil \small {\begin{tabular}[b]{c|c}\hline\hline Parameter & Value \\ \hline\hline $m_N$ & 400 GeV \\ $c$ & $2\times 10^{-7}$ \\ $\frac{\Delta M_1}{m_N}$ & $-3\times 10^{-5}$ \\ $\frac{\Delta M_2}{m_N}$ & $(-1.21+0.10\,i)\times 10^{-9}$ \\ \hline $a$ & $(4.93-2.32 \, i)\times 10^{-3}$ \\ $b$ & $(8.04 - 3.79 \, i)\times 10^{-3}$ \\ $\epsilon_e$ & $5.73\,i\times 10^{-8}$ \\ $\epsilon_\mu$ & $4.30\,i\times 10^{-7}$ \\ $\epsilon_\tau$ & $6.39\,i\times 10^{-7}$ \\ \hline\hline \end{tabular}}\vfil}} \caption{(a) Total lepton asymmetry as predicted by the RL$_\tau$ model with benchmark parameters given in (b). We show the comparison between the total asymmetry obtained using the fully flavour-covariant formalism (thick solid lines, with different initial conditions) with those obtained using the flavour-diagonal formalism (dashed lines). Also shown (thin solid line) is an approximate semi-analytic result discussed in~\cite{Dev:2014laa}.}\label{fig2} \end{figure*} A choice of benchmark values for these parameters, satisfying all the current experimental constraints, is given in Figure~\ref{fig2b}. The corresponding numerical solution for the total lepton asymmetry $\delta \eta^L \equiv {\rm Tr}(\mat{\delta \eta}^L)$ in our flavour-covariant formalism is shown in Figure~\ref{fig2a}. Here, the horizontal dotted line shows the value of $\delta\eta^L$ required to explain the observed baryon asymmetry in our Universe, whereas the vertical line shows the critical temperature $z_c=m_N/T_c$, beyond which the electroweak sphaleron processes become ineffective in converting lepton asymmetry to baryon asymmetry. 
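The role of the U(1)$_l$ structure in \eqref{yuk} is easy to verify numerically: for $\bm{\delta h}=\bm{0}$ and degenerate heavy neutrinos, every entry of $\mat{h}\,\mat{h}^{\mathsf T}$ cancels, so the tree-level seesaw mass matrix, schematically $\bm{m}_\nu\propto \mat{h}\,\bm{M}_N^{-1}\mat{h}^{\mathsf T}$, vanishes identically (parameter magnitudes below are illustrative only):

```python
import numpy as np

# Illustrative magnitudes, roughly of the size quoted in the benchmark.
a = 4.9e-3 - 2.3e-3j
b = 8.0e-3 - 3.8e-3j
c = 2.0e-7
p, m = np.exp(1j * np.pi / 4), np.exp(-1j * np.pi / 4)

h0 = np.array([[0, a * m, a * p],
               [0, b * m, b * p],
               [0, c * m, c * p]])       # U(1)_l-symmetric part of the Yukawa

# Each entry of h0 h0^T carries a factor e^{-i pi/2} + e^{+i pi/2} = 0:
assert np.allclose(h0 @ h0.T, 0)

# The U(1)_l-breaking column delta_h lifts this exact cancellation, so
# h h^T (and with it the seesaw mass matrix) is nonzero but suppressed.
dh = np.zeros((3, 3), dtype=complex)
dh[:, 0] = [5.7e-8j, 4.3e-7j, 6.4e-7j]
h = h0 + dh
assert not np.allclose(h @ h.T, 0, atol=1e-15)
```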
The thick solid lines show the evolution of $\delta\eta^L$ for three different initial conditions, to which the final lepton asymmetry $\delta \eta^L (z\gg 1)$ is shown to be insensitive. This is a general consequence of the RL mechanism in the strong washout regime~\cite{Pilaftsis:2005rv}. For comparison, we also show in Figure~\ref{fig2a} various partially flavour-dependent limits, i.e.~when either the heavy-neutrino (dashed line) or the lepton (dash-dotted line) number density or both (dotted line) are diagonal in flavour space. Also shown is the approximate semi-analytic solution discussed in~\cite{Dev:2014laa} for the case of a diagonal heavy-neutrino number density (thin solid line). The enhanced lepton asymmetry in the {\it fully} flavour-covariant formalism, as compared to when the heavy-neutrino number density is assumed to be diagonal (dashed line), is mainly due to the coherent oscillations between the heavy-neutrino flavours, which enhance the asymmetry by a factor of~2. \section{Conclusions} \label{sec:5} We have presented a {\it fully} flavour-covariant formalism for transport phenomena by deriving Markovian master equations that describe the time-evolution of particle number densities in a quantum-statistical ensemble with arbitrary flavour content. As an application, we have studied the flavour effects in RL and have obtained {\em manifestly} flavour-covariant rate equations for heavy-neutrino and lepton number densities. This provides a complete and unified description of RL, capturing three {\em distinct} physical phenomena: (i) resonant mixing between the heavy-neutrino states, (ii) coherent oscillations between different heavy-neutrino flavours and (iii) quantum decoherence effects in the charged-lepton sector. The quantitative importance of this formalism and the need to capture consistently all physically-relevant flavour effects have been illustrated for an RL$_\tau$ model. 
Therein, the predicted lepton asymmetry is observed to vary by as much as an order of magnitude between the two partially flavour off-diagonal treatments (dashed and dash-dotted lines in Figure~\ref{fig2a}). \vspace{-0.5em} \section*{Acknowledgments} The work of P.S.B.D., P.M. (in part) and A.P. is supported by the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics under STFC grant ST/J000418/1. P.M. is also supported in part by the IPPP through STFC grant ST/G000905/1. The work of D.T. has been supported by a fellowship of the EPS Faculty of the University of Manchester. \nocite{*} \bibliographystyle{elsarticle-num} \vspace{-0.5em}
\section{Introduction} \label{sec:intro} Symmetries play an important role in dynamical systems. Constants of motion associated with a symmetry govern the integrability of a given classical system. At the quantum level, symmetries provide quantum numbers for the classification of states, determine spectral degeneracies and selection rules, and facilitate the calculation of matrix elements. An exact symmetry occurs when the Hamiltonian of the system commutes with all the generators ($g_i$) of the symmetry-group $G$, $[\, \hat{H} \, , \, g_i\,] = 0$. In this case, all states have good symmetry and are labeled by the irreducible representations (irreps) of $G$. The Hamiltonian admits a block structure so that inequivalent irreps do not mix and all eigenstates in the same irrep are degenerate. In a dynamical symmetry the Hamiltonian commutes with the Casimir operator of $G$, $[\, \hat{H} \, , \, \hat{C}_{G}\,] = 0$, the block structure of $\hat{H}$ is retained, the states preserve the good symmetry but, in general, are no longer degenerate. When the symmetry is completely broken then $[\, \hat{H} \, , \, g_i\,] \neq 0$, and none of the states have good symmetry. In-between these limiting cases there may exist intermediate symmetry structures, called partial (dynamical) symmetries, for which the symmetry is neither exact nor completely broken. This novel concept of symmetry and its implications for dynamical systems, in particular nuclei, are the focus of the present review. Models based on spectrum generating algebras form a convenient framework to examine underlying symmetries in many-body systems and have been used extensively in diverse areas of physics~\cite{BNB}. 
Notable examples in nuclear physics are Wigner's spin-isospin SU(4) supermultiplets~\cite{WIG}, SU(2) single-$j$ pairing~\cite{Kerman61}, Elliott's SU(3) model~\cite{Elliott58}, symplectic model~\cite{Rowe85}, pseudo SU(3) model~\cite{ps_su3}, Ginocchio's monopole and quadrupole pairing models~\cite{GIN}, interacting boson models (IBM) for even-even nuclei~\cite{ibm} and boson-fermion models (IBFM) for odd-mass nuclei~\cite{ibfm}. Similar algebraic techniques have proven to be useful in the structure of molecules~\cite{vibron,Frank94} and of hadrons~\cite{BIL}. In such models the Hamiltonian is expanded in elements of a Lie algebra, ($G_0$), called the spectrum generating algebra. A dynamical symmetry occurs if the Hamiltonian can be written in terms of the Casimir operators of a chain of nested algebras, $G_0\supset G_1 \supset \ldots \supset G_n$~\cite{Iachello06}. The following properties are then observed. (i)~All states are solvable and analytic expressions are available for energies and other observables. (ii)~All states are classified by quantum numbers, $\vert\alpha_0,\alpha_1,\ldots,\alpha_n\rangle$, which are the labels of the irreps of the algebras in the chain. (iii)~The structure of wave functions is completely dictated by symmetry and is independent of the Hamiltonian's parameters. \begin{figure}[t] \begin{center} \includegraphics[height=15cm]{fig1symmetry.eps} \caption{ \protect\small Hierarchy of symmetries. \label{figSymmetry}} \end{center} \end{figure} A dynamical symmetry provides clarifying insights into complex dynamics and its merits are self-evident. However, in most applications to realistic systems, the predictions of an exact dynamical symmetry are rarely fulfilled and one is compelled to break it. The breaking of the symmetry is required for a number of reasons. First, one often finds that the assumed symmetry is not obeyed uniformly, {\it i.e.}, is fulfilled by only some of the states but not by others. 
Certain degeneracies implied by the assumed symmetry are not always realized, ({\it e.g.}, axially deformed nuclei rarely fulfill the IBM SU(3) requirement of degenerate $\beta$ and $\gamma$ bands~\cite{ibm}). Secondly, forcing the Hamiltonian to be invariant under a symmetry group may impose constraints which are too severe and incompatible with well-known features of nuclear dynamics ({\it e.g.}, the models of~\cite{GIN} require degenerate single-nucleon energies). Thirdly, in describing transitional nuclei in-between two different structural phases, {\it e.g.}, spherical and deformed, the Hamiltonian by necessity mixes terms with different symmetry character. In the models mentioned above, the required symmetry breaking is achieved by including in the Hamiltonian terms associated with (two or more) different sub-algebra chains of the parent spectrum generating algebra. In general, under such circumstances, solvability is lost, there are no remaining non-trivial conserved quantum numbers and all eigenstates are expected to be mixed. A partial dynamical symmetry (PDS) corresponds to a particular symmetry breaking for which some (but not all) of the virtues of a dynamical symmetry are retained. The essential idea is to relax the stringent conditions of {\em complete} solvability so that the properties (i)--(iii) are only partially satisfied. It is then possible to identify the following types of partial dynamical symmetries \begin{itemize} \item {\em PDS type I:} $\qquad\;\;\;$ {\bf part} of the states have {\bf all} the dynamical symmetry \item{\em PDS type II:} $\qquad\;\,$ {\bf all} the states have {\bf part} of the dynamical symmetry \item{\em PDS type III:} $\qquad$ {\bf part} of the states have {\bf part} of the dynamical symmetry. \end{itemize} In PDS of type~I, only part of the eigenspectrum is analytically solvable and retains all the dynamical symmetry (DS) quantum numbers. 
In PDS of type~II, the entire eigenspectrum retains some of the DS quantum numbers. PDS of type~III has a hybrid character, in the sense that some (solvable) eigenstates keep some of the quantum numbers. The notion of partial dynamical symmetry generalizes the concepts of exact and dynamical symmetries. In making the transition from an exact to a dynamical symmetry, states which are degenerate in the former scheme are split but not mixed in the latter, and the block structure of the Hamiltonian is retained. Proceeding further to partial symmetry, some blocks or selected states in a block remain pure, while other states mix and lose the symmetry character. A partial dynamical symmetry lifts the remaining degeneracies, but preserves the symmetry-purity of the selected states. The hierarchy of broken symmetries is depicted in Fig.~1. The existence of Hamiltonians with partial symmetry or partial dynamical symmetry is by no means obvious. A Hamiltonian with the above property is not invariant under the group $G$ nor does it commute with the Casimir invariants of $G$, so that various irreps are in general mixed in its eigenstates. However, it possesses a subset of solvable states, denoted by $\vert\Psi\rangle$ in Fig.~1, which respect the symmetry. The commutator $[\, \hat{H} \, , \, g_i\,]$ or $[\, \hat{H} \, , \, \hat{C}_{G}\,]$ vanishes only when it acts on these `special' states with good $G$-symmetry. In this review, we survey the various types of partial dynamical symmetries (PDS) and discuss algorithms for their realization in bosonic and fermionic systems. Hamiltonians with PDS are explicitly constructed, including higher-order terms. We present empirical examples of the PDS notion and demonstrate its relevance to nuclear spectroscopy, to quantum phase transitions and to mixed systems with coexisting regularity and chaos. 
\subsection{The interacting boson model} \label{subsec:ibm} In order to illustrate the various notions of symmetries and consider their implications, it is beneficial to have a framework that has a rich algebraic structure and allows tractable yet detailed calculations of observables. Such a framework is provided by the interacting boson model (IBM)~\cite{ibm,arimaiac76,arimaiac78,arimaiac79}, widely used in the description of low-lying quadrupole collective states in nuclei. The degrees of freedom of the model are one monopole boson ($s^{\dag}$) and five quadrupole bosons ($d^{\dag}_{\mu}$). The bilinear combinations $\{s^{\dag}s,\,s^{\dag}d_{\mu},\, d^{\dag}_{\mu}s,\, d^{\dag}_{\mu}d_{\mu'}\}$ span a U(6) algebra. These generators can be transcribed in spherical tensor form as \begin{eqnarray} \hat{n}_{s} &=& s^{\dag}s \;\; , \;\; U^{(L)}_{\mu} = (d^{\dag} \tilde{d})^{(L)}_{\mu} \quad L=0,1,2,3,4 \nonumber\\ \Pi^{(2)}_{\mu} &=& d^{\dag}_{\mu}s + s^{\dag}\tilde{d}_{\mu} \;\; , \;\; \bar{\Pi}^{(2)}_{\mu} = i(d^{\dag}_{\mu}s - s^{\dag}\tilde{d}_{\mu}) ~, \label{u6gen} \end{eqnarray} where $\tilde{d}_{\mu} = (-1)^{\mu}d_{-\mu}$, and standard notation of angular momentum coupling is used. U(6) serves as the spectrum generating algebra and the invariant (symmetry) algebra is O(3), with generators $L^{(1)}_{\mu} = \sqrt{10}\,U^{(1)}_{\mu}$. The IBM Hamiltonian is expanded in terms of the operators~(\ref{u6gen}) and consists of Hermitian, rotational-scalar interactions which conserve the total number of $s$- and $d$- bosons, $\hat N = \hat{n}_s + \hat{n}_d = s^{\dagger}s + \sum_{\mu}d^{\dagger}_{\mu}d_{\mu}$. Microscopic interpretation of the model suggests that for a given even-even nucleus the total number of bosons, $N$, is fixed and is taken as the sum of valence neutron and proton particle and hole pairs counted from the nearest closed shell~\cite{Talmi93}. 
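For instance, the boson number follows from counting valence pairs relative to the nearest magic number; a minimal sketch (standard shell closures assumed, subshell structure ignored):

```python
# IBM boson counting: valence nucleon pairs measured from the nearest
# closed shell (standard magic numbers).
MAGIC = (2, 8, 20, 28, 50, 82, 126)

def valence_pairs(nucleons):
    # Distance to the nearest magic number, in units of pairs.
    return min(abs(nucleons - magic) for magic in MAGIC) // 2

def boson_number(Z, N):
    return valence_pairs(Z) + valence_pairs(N)

# 154Sm (Z=62, N=92): 6 proton pairs above 50 plus 5 neutron pairs above 82.
assert boson_number(62, 92) == 11
# 196Pt (Z=78, N=118): 2 proton-hole pairs plus 4 neutron-hole pairs.
assert boson_number(78, 118) == 6
```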
The three dynamical symmetries of the IBM are \begin{eqnarray} \begin{array}{lllllll} {\rm U(6)} & \supset & {\rm U(5)} & \supset {\rm O(5)} & \supset & {\rm O(3)}\qquad & \;{\rm anharmonic\; spherical\; vibrator}\\ {\rm U(6)} & \supset & {\rm SU(3)} & \supset {\rm O(3)} & & & \;{\rm axially}{\rm -deformed \; rotovibrator} \\ {\rm U(6)} & \supset & {\rm O(6)} & \supset {\rm O(5)} & \supset & {\rm O(3)} & \;\gamma{\rm -unstable\; deformed\; rotovibrator} \\ \end{array} \label{IBMds} \end{eqnarray} The associated analytic solutions resemble known limits of the geometric model of nuclei~\cite{bohr75}, as indicated above. Each chain provides a complete basis, classified by the irreps of the corresponding algebras, which can be used for a numerical diagonalization of the Hamiltonian in the general case. In the Appendix we collect the relevant information concerning the generators and Casimir operators of the algebras in Eq.~(\ref{IBMds}). Electromagnetic moments and rates can be calculated in the IBM with transition operators of appropriate rank. For example, the most general one-body E2 operator reads \begin{eqnarray} T(E2) = e_{B}\left [ \, \Pi^{(2)} + \chi\,U^{(2)} \, \right ] ~. \label{Te2} \end{eqnarray} A geometric visualization of the model is obtained by an energy surface \begin{eqnarray} E_{N}(\beta,\gamma) &=& \langle \beta,\gamma; N\vert \hat{H} \vert \beta,\gamma ; N\rangle ~, \label{enesurf} \end{eqnarray} defined by the expectation value of the Hamiltonian in the coherent (intrinsic) state~\cite{gino80,diep80} \begin{subequations} \begin{eqnarray} \vert\beta,\gamma ; N \rangle &=& (N!)^{-1/2}(b^{\dagger}_{c})^N\,\vert 0\,\rangle ~,\\ b^{\dagger}_{c} &=& (1+\beta^2)^{-1/2}[\beta\cos\gamma d^{\dagger}_{0} + \beta\sin{\gamma} ( d^{\dagger}_{2} + d^{\dagger}_{-2})/\sqrt{2} + s^{\dagger}] ~. 
\end{eqnarray} \label{condgen} \end{subequations} Here $(\beta,\gamma)$ are quadrupole shape parameters whose values, $(\beta_0,\gamma_0)$, at the global minimum of $E_{N}(\beta,\gamma)$ define the equilibrium shape for a given Hamiltonian. The shape can be spherical $(\beta =0)$ or deformed $(\beta >0)$ with $\gamma =0$ (prolate), $\gamma =\pi/3$ (oblate), $\gamma$-independent, or triaxial $(0 < \gamma < \pi/3)$. The latter shape requires terms of order higher than two-body in the boson Hamiltonian~\cite{isachen81,levshao90}. The equilibrium deformations associated with the Casimir operators of the leading subalgebras in the dynamical symmetry chains~(\ref{IBMds}) are $\beta_0=0$ for U(5), $(\beta_0=\sqrt{2},\gamma_0=0)$ for SU(3) and $(\beta_0=1,\gamma_0\,{\rm arbitrary})$ for O(6). \section{PDS type I} \label{sec:PDStypeI} PDS of type I corresponds to a situation for which the defining properties of a dynamical symmetry (DS), namely, solvability, good quantum numbers, and symmetry-dictated structure, are fulfilled exactly, but by only a subset of states. An algorithm for constructing Hamiltonians with this property has been developed in~\cite{AL92} and further elaborated in~\cite{RamLevVan09}. The analysis starts from the chain of nested algebras \begin{equation} \begin{array}{ccccccc} G_{\rm dyn}&\supset&G&\supset&\cdots&\supset&G_{\rm sym}\\ \downarrow&&\downarrow&&&&\downarrow\\[0mm] [h]&&\langle\Sigma\rangle&&&&\Lambda \end{array} \label{chain} \end{equation} where, below each algebra, its associated labels of irreps are given. Eq.~(\ref{chain}) implies that $G_{\rm dyn}$ is the dynamical (spectrum generating) algebra of the system such that operators of all physical observables can be written in terms of its generators~\cite{Frank94,Iachello06}; a single irrep of $G_{\rm dyn}$ contains all states of relevance in the problem. In contrast, $G_{\rm sym}$ is the symmetry algebra, a single irrep of which contains states that are degenerate in energy.
A frequently encountered example is $G_{\rm sym}={\rm O}(3)$, the algebra of rotations in 3 dimensions, with its associated quantum number of total angular momentum $L$. Other examples of conserved quantum numbers can be the total spin $S$ in atoms or total isospin $T$ in atomic nuclei. The classification~(\ref{chain}) is generally valid and does not require conservation of particle number. Although the extension from DS to PDS can be formulated under such general conditions, let us for simplicity assume in the following that particle number is conserved. All states, and hence the representation $[h]$, can then be assigned a definite particle number~$N$. For $N$ identical particles the representation $[h]$ of the dynamical algebra $G_{\rm dyn}$ is either symmetric $[N]$ (bosons) or antisymmetric $[1^N]$ (fermions) and will be denoted, in both cases, as $[h_N]$. For particles that are non-identical under a given dynamical algebra $G_{\rm dyn}$, a larger algebra can be chosen such that they become identical under this larger algebra (generalized Pauli principle). The occurrence of a DS of the type~(\ref{chain}) signifies that the Hamiltonian is written in terms of the Casimir operators of the algebras in the chain \begin{eqnarray} \hat{H}_{DS} &=& \sum_{G} a_{G}\,\hat{C}_{G} \label{hDS} \end{eqnarray} and its eigenstates can be labeled as $|[h_N]\langle\Sigma\rangle\dots\Lambda\rangle$; additional labels (indicated by $\dots$) are suppressed in the following. Likewise, operators can be classified according to their tensor character under~(\ref{chain}) as $\hat T_{[h_n]\langle\sigma\rangle\lambda}$. 
Of specific interest in the construction of a PDS associated with the reduction~(\ref{chain}), are the $n$-particle annihilation operators $\hat T$ which satisfy the property \begin{equation} \hat T_{[h_n]\langle\sigma\rangle\lambda} |[h_N]\langle\Sigma_0\rangle\Lambda\rangle=0, \label{anni} \end{equation} for all possible values of $\Lambda$ contained in a given irrep~$\langle\Sigma_0\rangle$ of $G$. Any $n$-body, number-conserving normal-ordered interaction written in terms of these annihilation operators and their Hermitian conjugates (which transform as the corresponding conjugate irreps) \begin{eqnarray} \hat{H}' &=& \sum_{\alpha,\beta} A_{\alpha\beta}\, \hat{T}^{\dag}_{\alpha}\hat{T}_{\beta} \label{PS} \end{eqnarray} has a partial G-symmetry. This comes about since for arbitrary coefficients, $A_{\alpha\beta}$, $\hat{H}'$ is not a G-scalar, hence most of its eigenstates will be a mixture of irreps of G, yet relation~(\ref{anni}) ensures that a subset of its eigenstates, $\vert [h_N]\langle\Sigma_0\rangle\Lambda\rangle$, are solvable and have good quantum numbers under the chain~(\ref{chain}). A Hamiltonian with partial dynamical symmetry is obtained by adding to $\hat{H}'$ the dynamical symmetry Hamiltonian, $\hat{H}_{DS}$~(\ref{hDS}), still preserving the solvability of states with $\langle\Sigma\rangle=\langle\Sigma_0\rangle$ \begin{eqnarray} \hat{H}_{PDS} &=& \hat{H}_{DS} + \hat{H}' ~. \label{hPDS} \end{eqnarray} If the operators $\hat{T}_{\beta}\equiv \hat T_{[h_n]\langle\sigma\rangle\lambda}$ span the entire irrep $\langle\sigma\rangle$ of G, then the annihilation condition~(\ref{anni}) is satisfied for all $\Lambda$-states in $\langle\Sigma_0\rangle$, if none of the $G$ irreps $\langle\Sigma\rangle$ contained in the $G_{\rm dyn}$ irrep $[h_{N-n}]$ belongs to the $G$ Kronecker product $\langle\sigma\rangle\times\langle\Sigma_0\rangle$.
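Since this sufficient condition amounts to bookkeeping of irrep products, it can be sketched in a few lines. The following illustration (ours, with illustrative function names) specializes to $G={\rm O(3)}$, where the Kronecker product is ordinary angular momentum coupling:

```python
def o3_kronecker(L1, L2):
    """O(3) Kronecker product: L1 x L2 contains L = |L1-L2|, ..., L1+L2."""
    return set(range(abs(L1 - L2), L1 + L2 + 1))

def condition_satisfied(sigma, Sigma0, irreps_in_smaller_space):
    """Sufficient condition of the text, specialized to G = O(3): the
    annihilation condition holds for every state in <Sigma0> if no irrep
    contained in the (N-n)-particle space appears in sigma x Sigma0."""
    return not (set(irreps_in_smaller_space) & o3_kronecker(sigma, Sigma0))

print(condition_satisfied(2, 8, [0, 1, 2, 3]))   # True: {6,...,10} misses them all
print(condition_satisfied(2, 2, [0, 1, 2, 3]))   # False: {0,...,4} overlaps
```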
So the problem of finding interactions that preserve solvability for part of the states~(\ref{chain}) is reduced to carrying out a Kronecker product. In this case, although the generators $g_i$ of $G$ do not commute with $\hat{H}'$, their commutator does vanish when it acts on the solvable states~(\ref{anni}) \begin{subequations} \begin{eqnarray} \left [\, g_i\, , \hat{H}'\, \right ] &\neq& 0 ~,\\ \left [\, g_i\, , \hat{H}'\, \right ] |[h_N]\langle\Sigma_0\rangle\Lambda\rangle &=& 0 ~, \quad g_i \in G ~. \label{gicommub} \end{eqnarray} \end{subequations} Eq.~(\ref{gicommub}) follows from $\left [\, g_i\, , \, \hat{T}^{\dag}_{\alpha}\hat{T}_{\beta}\, \right ] = \hat{T}^{\dag}_{\alpha}\, \left [\, g_i \, , \hat{T}_{\beta}\,\right ] + \left [ \, g_i \, , \hat{T}^{\dag}_{\alpha}\, \right ] \hat{T}_{\beta}$ and the fact that $\left [\, g_i \, , \hat{T}_{\beta}\,\right ]$ involves a linear combination of $G$-tensor operators which satisfy Eq.~(\ref{anni}). The arguments for choosing the special irrep $\langle\Sigma\rangle=\langle\Sigma_0\rangle$ in Eq.~(\ref{anni}), which contains the solvable states, are based on physical grounds. A~frequently encountered choice is the irrep which contains the ground state of the system. The above algorithm for constructing Hamiltonians with PDS of type I is applicable to any semisimple group. It can also address more general scenarios, in which relation~(\ref{anni}) holds only for some states $\Lambda$ in the irrep $\langle\Sigma_0\rangle$ and/or some components $\lambda $ of the tensor $\hat T_{[h_n]\langle\sigma\rangle\lambda}$. In this case, the Kronecker product rule does not apply, yet the PDS Hamiltonian is still of the form as in Eqs.~(\ref{PS})-(\ref{hPDS}), but now the solvable states span only part of the corresponding $G$-irrep. This is not the case in the quasi-exactly solvable Hamiltonians, introduced in~\cite{Turbiner88}, where the solvable states form complete representations. 
The coexistence of solvable and unsolvable states, together with the availability of an algorithm, distinguishes the notion of PDS from the notion of accidental degeneracy~\cite{mosh83}, where all levels are arranged in degenerate multiplets. A Hamiltonian with PDS of type I does not have good symmetry but some of its eigenstates do. The symmetry of the latter states does not follow from invariance properties of the Hamiltonian. This situation is opposite to that encountered in spontaneous symmetry breaking, where the Hamiltonian respects the symmetry but its ground state breaks it. The notion of PDS differs also from the notion of quasi-dynamical symmetry~\cite{rowe0405}. The latter corresponds to a situation in which selected states in a system continue to exhibit characteristic properties ({\it e.g.}, energy and B(E2) ratios) of a dynamical symmetry, in the face of strong symmetry-breaking interactions. Such an ``apparent'' persistence of symmetry is due to the coherent nature of the mixing in the wave functions of these states. In contrast, in a PDS of type~I, although the symmetry is broken (even strongly) in most states, the subset of solvable states preserves it exactly. In this sense, the symmetry is partial but exact! In what follows we present concrete constructions of Hamiltonians with PDS associated with the three DS chains~(\ref{IBMds}) of the IBM. The partial symmetries in question involve continuous Lie groups. PDS can, however, be associated also with discrete groups which are relevant, {\it e.g.}, to molecular physics. An example of a partial symmetry which involves point groups was presented in~\cite{pinchen97}, employing a model of coupled anharmonic oscillators to describe the molecule $XY_6$. The partial symmetry of the Hamiltonian allowed a derivation of analytic expressions for the energies and eigenstates of a set of unique levels.
Furthermore, the numerical calculations required to obtain the energies of the remaining (non-unique) levels were greatly simplified since the Hamiltonian could be diagonalized in a much smaller space. \subsection{U(5) PDS (type I)} \label{subsec:u5PDStypeI} The U(5) DS chain of the IBM and related quantum numbers are given by~\cite{arimaiac76} \begin{eqnarray} \begin{array}{ccccccc} {\rm U}(6)&\supset&{\rm U}(5)&\supset&{\rm O}(5)& \supset&{\rm O}(3)\\ \downarrow&&\downarrow&&\downarrow&&\downarrow\\[0mm] [N]&&\langle n_d \rangle&&(\tau)&n_\Delta& L \end{array} ~, \label{chainu5} \end{eqnarray} where the generators of the above groups are defined in Table~\ref{TabIBMcas} of the Appendix. For a given U(6) irrep~$[N]$, the allowed U(5) and O(5) irreps are $n_d=0,1,2,\ldots, N$ and $\tau=n_d,\,n_d-2,\dots 0$ or $1$, respectively. The values of $L$ contained in the O(5) irrep $(\tau)$, are obtained by partitioning $\tau=3n_{\Delta}~+~\lambda$, with $n_{\Delta},\,\lambda\geq 0$ integers, and $L=2\lambda,\,2\lambda-2,2\lambda-3,\ldots, \lambda$. The multiplicity label $n_\Delta$ in the ${\rm O(5)}\supset {\rm O(3)}$ reduction, counts the maximum number of $d$-boson triplets coupled to $L=0$~\cite{Szpik}. The eigenstates $\vert[N]\langle n_d \rangle(\tau)n_\Delta LM\rangle$ are obtained with a Hamiltonian with U(5) DS which, for one- and two-body interactions, can be transcribed in the form \begin{eqnarray} \hat{H}_{\rm DS} &=& \epsilon\,\hat{n}_d + A\,\hat{n}_d(\hat{n}_d+4) + B\, \hat{C}_{{\rm O(5)}} + C\,\hat{C}_{{\rm O(3)}} ~. \label{hDSu5} \end{eqnarray} Here $\hat{n}_d$ and $\hat{n}_d(\hat{n}_d+4)$ are the linear and quadratic Casimir operators of U(5), respectively, and $\hat{C}_{G}$ denotes the quadratic Casimir operator of $G$, as defined in the Appendix. The Casimir operators of U(6) are omitted from Eq.~(\ref{hDSu5}) since they are functions of the total boson number operator, $\hat{N}=\hat{n}_s + \hat{n}_d$, which is a constant for all $N$-boson states. 
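The branching rules just quoted can be enumerated mechanically. The following sketch (our addition; the function name is illustrative) lists the U(5)-DS labels and checks completeness against the dimension of the $[N]$ space:

```python
from math import comb

def u5_basis(N):
    """Labels (n_d, tau, n_Delta, L) of the U(5)-DS basis for the U(6)
    irrep [N], following the branching rules quoted in the text."""
    states = []
    for n_d in range(N + 1):
        for tau in range(n_d, -1, -2):           # tau = n_d, n_d-2, ..., 1 or 0
            for n_delta in range(tau // 3 + 1):  # partition tau = 3*n_Delta + lambda
                lam = tau - 3 * n_delta
                for L in range(lam, 2 * lam + 1):
                    if L != 2 * lam - 1:         # L = 2lam, 2lam-2, 2lam-3, ..., lam
                        states.append((n_d, tau, n_delta, L))
    return states

# completeness check: summing the O(3) degeneracies 2L+1 recovers the
# dimension C(N+5, 5) of the symmetric U(6) irrep [N]
N = 6
assert sum(2 * L + 1 for (_, _, _, L) in u5_basis(N)) == comb(N + 5, 5)
```

For instance, `u5_basis(2)` reproduces the low multiplets listed below: $(n_d=2,\tau=0,L=0)$ and $(n_d=2,\tau=2,L=2,4)$.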
The spectrum of $\hat{H}_{\rm DS}$ is completely solvable with eigenenergies \begin{eqnarray} E_{\rm DS} &=& \epsilon\, n_d + A\, n_d ( n_d+4) + B\,\tau(\tau+3) +\, C\,L(L+1). \label{eDSu5} \end{eqnarray} The U(5)-DS spectrum of Eq.~(\ref{eDSu5}) resembles that of an anharmonic spherical vibrator, describing quadrupole excitations of a spherical shape. The splitting of states in a given U(5) multiplet, $\langle n_d \rangle$, is governed by the O(5) and O(3) terms in $\hat{H}_{\rm DS}$~(\ref{hDSu5}). The lowest U(5) multiplets involve states with quantum numbers $(n_d=0,\,\tau=0,\, L=0)$, $(n_d=1,\,\tau=1,\, L=2)$, and $(n_d=2,\,\tau=0,\,L=0)$, $(n_d=2,\,\tau=2,\,L=2,4)$. \begin{table} \begin{center} \caption{\label{Tabu5tens} \protect\small Normalized one- and two-boson U(5) tensors.} \vspace{1mm} \begin{tabular}{cccccl} \hline & & & & &\\[-3mm] $n$&$n_d$&$\tau$& $n_{\Delta}$ &$\ell$& $\hat B^\dag_{[n]\langle n_d\rangle(\tau)n_{\Delta}\ell m}$\\ & & & & &\\[-3mm] \hline & & & & &\\[-2mm] 1& 0& 0& 0& 0& $s^{\dag}$\\[2pt] 1& 1& 1& 0& 2& $d^{\dag}_{m}$\\[2pt] 2& 0& 0& 0& 0& $\sqrt{\frac{1}{2}}(s^{\dag})^2$\\[2pt] 2& 1& 1& 0& 2& $s^{\dag} d^{\dag}_{m}$\\[2pt] 2& 2& 0& 0& 0& $\sqrt{\frac{1}{2}}(d^{\dag} d^{\dag})^{(0)}_0$\\[2pt] 2& 2& 2& 0& 2& $\sqrt{\frac{1}{2}}(d^\dag d^\dag)^{(2)}_m$\\[2pt] 2& 2& 2& 0& 4& $\sqrt{\frac{1}{2}}(d^\dag d^\dag)^{(4)}_m$\\ & & & & &\\[-3mm] \hline \end{tabular} \end{center} \end{table} The construction of Hamiltonians with U(5)-PDS is based on identification of $n$-boson operators which annihilate all states in a given U(5) irrep~$\langle n_d\rangle$. A physically relevant choice is the irrep $n_d=0$ which consists of the ground state, with $\tau=L=0$, built of $N$ $s$-bosons \begin{eqnarray} \vert [N],n_d=\tau=L=0\rangle &=& (N!)^{-1/2}\,(s^{\dag})^N \vert 0 \rangle ~. 
\label{condu5} \end{eqnarray} Consider U(5) tensors, $\hat B^\dag_{[n]\langle n_d\rangle(\tau)n_{\Delta}\ell m}$, composed of $n$ bosons, of which $n_d$ are $d$-bosons. Clearly, the Hermitian conjugates of such operators with $n_d\neq 0$ annihilate the $n_d=0$ state of Eq.~(\ref{condu5}). Explicit expressions for $n$-boson U(5) tensors, with $n=1,2$, are shown in Table~\ref{Tabu5tens}. From them one can construct the following one- and two-body Hamiltonian with U(5)-PDS \begin{eqnarray} \hat{H}_{PDS} &=& \epsilon_{d}\,d^{\dag}\cdot \tilde{d} + u_{2}\,s^{\dag}d^{\dag}\cdot\tilde{d}s + v_{2}\,\left [\, s^{\dag}d^{\dag}\cdot (\tilde{d} \tilde{d})^{(2)} + H.c. \, \right ] \nonumber\\ && + \sum_{L=0,2,4}c_{L}\,(d^{\dag}d^{\dag})^{(L)}\cdot (\tilde{d}\tilde{d})^{(L)} ~, \label{u5hPDS} \end{eqnarray} where $H.c.$ means Hermitian conjugate. By construction, \begin{eqnarray} \hat{H}_{PDS}\vert [N],n_d=\tau=L=0\rangle &=& 0 ~. \label{nd0} \end{eqnarray} Using Eq.~(\ref{hIBMcas}), we can rewrite $\hat{H}_{PDS}$ in the form \begin{eqnarray} \hat{H}_{PDS} &=& \hat{H}_{DS} + \hat{V}_2 ~, \label{hPDSu5} \end{eqnarray} where $\hat{H}_{DS}$ is the U(5) dynamical symmetry Hamiltonian, Eq.~(\ref{hDSu5}), and $\hat{V}_2$ is given by \begin{eqnarray} \hat{V}_{2} &=& v_{2}\left [\, s^{\dag}d^{\dag}\cdot (\tilde{d} \tilde{d})^{(2)} + H.c. \, \right ] = v_{2}\,\Pi^{(2)}\cdot U^{(2)} = v_{2}\,U^{(2)}\cdot \Pi^{(2)} ~. \label{V2} \end{eqnarray} The operators $\Pi^{(2)}_{\mu}$ and $U^{(2)}_{\mu}$ are defined in Eq.~(\ref{u6gen}). The $\hat{V}_2$ term breaks the U(5) DS; however, it still has the U(5) basis states with $(n_d=\tau=L=0)$, Eq.~(\ref{nd0}), and $(n_d=\tau=L=3)$ as zero-energy eigenstates \begin{eqnarray} \hat{V}_{2}\,\vert [N],n_d=\tau=L=3\rangle &=& 0 ~.
\label{V2nd3} \end{eqnarray} The last property follows from the U(5) selection rules of $\hat{V}_2$, $\Delta n_d = \pm 1$, and the fact that the irreps $(n_d=4,\tau=0,2,4)$ and $(n_d=2,\tau=0,2)$ do not contain an $L=3$ state. Altogether, $\hat{H}_{PDS}$~(\ref{hPDSu5}) is not diagonal in the U(5) chain, but retains the following solvable U(5) basis states with known eigenvalues \begin{subequations} \begin{eqnarray} \vert [N], n_d=\tau=L=0\rangle \;\;\;\; &&E_{PDS} = 0 ~,\\ \vert [N], n_d=\tau=L=3\rangle \;\;\;\; &&E_{PDS} = 3\epsilon + 21A + 18B + 12C ~. \qquad\qquad \end{eqnarray} \label{ePDSu5} \end{subequations} As will be discussed in Section~\ref{sec:PDSQPT}, this class of Hamiltonians with U(5)-PDS of type I is relevant to first-order quantum shape-phase transitions in nuclei. A second class of Hamiltonians with U(5)-PDS can be obtained by considering the interaction \begin{eqnarray} \hat{V}_{0} &=& v_{0}\left [\, (s^{\dag})^2\tilde{d}\cdot \tilde{d} + H.c. \, \right ] ~. \label{V0} \end{eqnarray} This interaction breaks the U(5) DS; however, it still has selected U(5) basis states as zero-energy eigenstates \begin{subequations} \begin{eqnarray} \hat{V}_{0}\,\vert [N],n_d=\tau=N,L\,\rangle &=& 0 ~,\\ \hat{V}_{0}\,\vert [N],n_d=\tau=N-1,L\,\rangle &=& 0 ~, \end{eqnarray} \label{V0ndN} \end{subequations} where $L$ takes the values compatible with the ${\rm O(5)}\supset{\rm O(3)}$ reduction. These properties follow from the fact that $s^2$ annihilates states with $n_s=0,\,1$ ($n_d=N,\, N-1$) and $\tilde{d}\cdot\tilde{d}$ annihilates states with $n_d=\tau$~\cite{arimaiac76}. Adding the interaction $\hat{V}_0$ to the U(5) dynamical symmetry Hamiltonian $\hat{H}_{DS}$~(\ref{hDSu5}), we obtain the following Hamiltonian with U(5)-PDS \begin{eqnarray} \hat{H}_{PDS}' &=& \hat{H}_{DS} + \hat{V}_0 ~.
\label{hPDSu5b} \end{eqnarray} $\hat{H}_{PDS}'$ is not diagonal in the U(5) chain, but retains the following solvable U(5) basis states with known eigenvalues \begin{subequations} \begin{eqnarray} &&\vert [N], n_d=\tau=N,L\,\rangle \;\;\qquad\quad E_{PDS}' = N[\,\epsilon_{d} + A(N+4) + B(N+3)\,] + CL(L+1) ~,\qquad\qquad\\ &&\vert [N], n_d=\tau=N-1,L\,\rangle \;\;\;\quad E_{PDS}' = (N-1)[\,\epsilon_{d} + A(N+3) + B(N+2)\,] + CL(L+1) ~. \qquad\qquad \end{eqnarray} \label{ePDSu5b} \end{subequations} The Hamiltonian $\hat{H}_{PDS}'$~(\ref{hPDSu5b}) with U(5)-PDS of type I, contains terms from both the U(5) and O(6) chains~(\ref{IBMds}), hence preserves the common segment of subalgebras, ${\rm O(5)}\supset {\rm O(3)}$. As such, it exhibits also an O(5)-PDS of type II, to be discussed in Section~\ref{sec:PDStypeII}. As will be shown in Section~\ref{sec:PDSQPT}, this class of Hamiltonians is relevant to second-order quantum shape-phase transitions in nuclei. \subsection{SU(3) PDS (type I)} \label{subsec:su3PDStypeI} The SU(3) DS chain of the IBM and related quantum numbers are given by~\cite{arimaiac78} \begin{eqnarray} \begin{array}{ccccc} {\rm U}(6)&\supset&{\rm SU}(3)&\supset&{\rm O}(3)\\ \downarrow&&\downarrow&&\downarrow\\[0mm] [N]&&\left (\lambda,\mu\right )& K & L \end{array} ~, \label{chainsu3} \end{eqnarray} where the generators of the above groups $G$ are defined in Table~\ref{TabIBMcas} of the Appendix. For a given U(6) irrep $[N]$, the allowed SU(3) irreps are $(\lambda,\mu)=(2N-4k-6m,2k)$ with $k,m$ non-negative integers, such that, $\lambda,\mu\geq 0$. The multiplicity label $K$ is needed for complete classification and corresponds geometrically to the projection of the angular momentum on the symmetry axis. The values of $L$ contained in the above SU(3) irreps are $L=K,K+1,K+2,\ldots,K+{\rm max}\{\lambda,\mu\}$, where $K=0,\, 2,\ldots, {\rm min}\{\lambda,\mu\}$; with the exception of $K=0$ for which $L=0,\, 2,\ldots, {\rm max}\{\lambda,\mu\}$. 
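These reduction rules can likewise be enumerated directly. The sketch below (ours; the names are illustrative) lists the allowed $(\lambda,\mu)$ for a given $N$ and the $K$- and $L$-content of an irrep, per the rules just stated:

```python
def su3_irreps(N):
    """SU(3) irreps (lam, mu) = (2N-4k-6m, 2k), with lam, mu >= 0,
    contained in the U(6) irrep [N] (rules quoted in the text)."""
    irreps = set()
    for k in range(N // 2 + 1):
        for m in range(N + 1):
            lam = 2 * N - 4 * k - 6 * m
            if lam >= 0:
                irreps.add((lam, 2 * k))
    return sorted(irreps, reverse=True)

def su3_bands(lam, mu):
    """(K, L-content) of the irrep (lam, mu), following the rules above,
    including the special K=0 rule."""
    bands = []
    for K in range(0, min(lam, mu) + 1, 2):
        if K == 0:
            Ls = list(range(0, max(lam, mu) + 1, 2))
        else:
            Ls = list(range(K, K + max(lam, mu) + 1))
        bands.append((K, Ls))
    return bands

print(su3_irreps(16)[:3])   # [(32, 0), (28, 2), (26, 0)]
print(su3_bands(28, 2))     # K=0 and K=2 bands of the first excited irrep
```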
The states $\vert [N](\lambda,\mu)KLM\rangle$ form the (non-orthogonal) Elliott basis~\cite{Elliott58} and the Vergados basis $\vert [N](\lambda,\mu)\tilde{\chi}LM\rangle$~\cite{VER} is obtained from it by a standard orthogonalization procedure. The two bases coincide in the large-N limit and both are eigenstates of a Hamiltonian with SU(3) DS. The latter, for one- and two-body interactions, can be transcribed in the form \begin{eqnarray} \hat{H}_{DS} &=& h_{2}\left [-\hat C_{{\rm SU(3)}} + 2\hat N (2\hat N+3)\right ] + C\, \hat C_{{\rm O(3)}} \quad ~, \label{hDSsu3} \end{eqnarray} where $\hat{C}_{G}$ is the quadratic Casimir operator of $G$, as defined in the Appendix. The spectrum of $\hat{H}_{DS}$ is completely solvable with eigenenergies \begin{eqnarray} E_{\rm DS} &=& h_{2}\, \left [\, -f_{2}(\lambda,\mu) + 2N(2N+3) \, \right ] + C\, L(L+1) \nonumber\\ &=& h_{2}\, 6\left [2N(k+2m) - k(2k-1)-3m(2m-1) -6km\right ] + CL(L+1) ~,\qquad \label{eDSsu3} \end{eqnarray} where $f_{2}(\lambda,\mu) = \lambda^2 + \mu^2 + \lambda\mu + 3\lambda + 3\mu$ and $(\lambda,\mu)=(2N-4k-6m,2k)$. The spectrum resembles that of an axially-deformed rotovibrator and the corresponding eigenstates are arranged in SU(3) multiplets. In a given SU(3) irrep $(\lambda,\mu)$, each $K$-value is associated with a rotational band and states with the same L, in different $K$-bands, are degenerate. The lowest SU(3) irrep is $(2N,0)$, which describes the ground band $g(K=0)$ of a prolate deformed nucleus. The first excited SU(3) irrep $(2N-4,2)$ contains both the $\beta(K=0)$ and $\gamma(K=2)$ bands. States in these bands with the same angular momentum are degenerate. This $\beta$-$\gamma$ degeneracy is a characteristic feature of the SU(3) limit of the IBM which, however, is not commonly observed~\cite{CW}. In most deformed nuclei the $\beta$ band lies above the $\gamma$ band. 
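The eigenvalue formula and the $\beta$-$\gamma$ degeneracy can be checked numerically. A minimal sketch (our addition), with illustrative parameter values:

```python
def f2(lam, mu):
    """Eigenvalue of the quadratic SU(3) Casimir operator:
    f2 = lam^2 + mu^2 + lam*mu + 3*lam + 3*mu."""
    return lam**2 + mu**2 + lam * mu + 3 * lam + 3 * mu

def e_ds(lam, mu, L, N, h2, C):
    """SU(3)-DS eigenenergy of Eq. (eDSu3)."""
    return h2 * (-f2(lam, mu) + 2 * N * (2 * N + 3)) + C * L * (L + 1)

N, h2, C = 16, 4.0, 13.0   # illustrative values; cf. the 168Er fit quoted later
print(e_ds(2 * N, 0, 0, N, h2, C))       # 0.0: the (2N,0) ground-band head
# beta (K=0) and gamma (K=2) bands share the irrep (2N-4,2), so states with
# the same L are exactly degenerate in the DS limit
print(e_ds(2 * N - 4, 2, 2, N, h2, C))   # 822.0
```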
In the IBM framework, with at most two-body interactions, one is therefore compelled to break SU(3) in order to conform with the experimental data. To do so, the usual approach has been to include in the Hamiltonian terms from other chains so as to lift the undesired $\beta$-$\gamma$ degeneracy. Such an approach was taken by Warner, Casten and Davidson (WCD) in~\cite{WCD}, where an O(6) term was added to the SU(3) Hamiltonian. However, in this procedure, the SU(3) symmetry is completely broken, all eigenstates are mixed and no analytic solutions are retained. Similar statements apply to the description in the consistent Q formalism (CQF)~\cite{CQF}, where the Hamiltonian involves a non-SU(3) quadrupole operator. In contrast, partial SU(3) symmetry, to be discussed below, corresponds to breaking SU(3), but in a very particular way, so that {\it part} of the states (but not all) will still be solvable with good symmetry. As such, the virtues of a dynamical symmetry ({\it e.g.}, solvability) are fulfilled, but by only a subset of states. The construction of Hamiltonians with SU(3)-PDS is based on the identification of $n$-boson operators which annihilate all states in a given SU(3) irrep $(\lambda,\mu)$, chosen here to be the ground band irrep $(2N,0)$. For that purpose, we consider the following boson-pair operators with angular momentum $L =0,\,2$ \begin{subequations} \begin{eqnarray} P^{\dagger}_{0} &=& d^{\dagger}\cdot d^{\dagger} - 2(s^{\dagger})^2 ~,\\ P^{\dagger}_{2\mu} &=& 2d^{\dagger}_{\mu}s^{\dagger} + \sqrt{7}\, (d^{\dagger}\,d^{\dagger})^{(2)}_{\mu} ~. \end{eqnarray} \label{PL} \end{subequations} \begin{table} \begin{center} \caption{\label{Tabsu3tensII} \protect\small Normalized one- and two-boson SU(3) tensors.
For the indicated irreps the labels of the Vergados basis ($\tilde{\chi}$) and Elliott basis ($K$) are identical, $\tilde{\chi}=K$.} \vspace{1mm} \begin{tabular}{ccccl} \hline & & & & \\[-3mm] $n$&$(\lambda,\mu)$&$\tilde{\chi}$&$\ell$& $\qquad\hat B^\dag_{[n](\lambda,\mu)\tilde{\chi};\ell m}$\\ & & & & \\[-3mm] \hline & & & & \\[-2mm] 1& (2,0) & 0& 0& $s^{\dag}$ \\[2pt] 1& (2,0) & 0& 2& $d^{\dag}_{m}$ \\[2pt] 2& (4,0) & 0& 0& $\sqrt{\frac{5}{18}}(s^{\dag})^2 + \sqrt{\frac{2}{9}}(d^{\dag}d^{\dag})^{(0)}_{0} $ \\[2pt] 2& (4,0) & 0& 2& $\sqrt{\frac{7}{9}}s^{\dag}d^{\dag}_{m} - \frac{1}{3}(d^{\dag}d^{\dag})^{(2)}_{m} $ \\[2pt] 2& (4,0) & 0& 4& $\frac{1}{\sqrt{2}}(d^{\dag}d^{\dag})^{(4)}_{m} $ \\[2pt] 2& (0,2) & 0& 0& $-\sqrt{\frac{2}{9}}(s^{\dag})^2 + \sqrt{\frac{5}{18}}(d^{\dag}d^{\dag})^{(0)}_{0} \qquad$ \\[4pt] 2& (0,2) & 0& 2& $\sqrt{\frac{2}{9}}s^{\dag}d^{\dag}_{m} + \sqrt{\frac{7}{18}}(d^{\dag}d^{\dag})^{(2)}_{m} $ \\[2pt] & & & & \\[-3mm] \hline \end{tabular} \end{center} \end{table} As seen from Table~\ref{Tabsu3tensII}, these operators are proportional to two-boson SU(3) tensors, $B^{\dagger}_{[n](\lambda,\mu)\tilde{\chi};\ell m}$, with $n=2$ and $(\lambda,\mu)=(0,2)$ \begin{subequations} \begin{eqnarray} B^{\dagger}_{[2](0,2)0;00} &=& \frac{1}{3\sqrt{2}}\,P^{\dagger}_{0} ~,\\ B^{\dagger}_{[2](0,2)0;2\mu} &=& \frac{1}{3\sqrt{2}}\,P^{\dagger}_{2\mu} ~. \end{eqnarray} \end{subequations} The corresponding Hermitian conjugate boson-pair annihilation operators, $P_0$ and $P_{2\mu}$, transform as $(2,0)$ under SU(3), and satisfy \begin{eqnarray} P_{0}\,\vert [N](2N,0)K=0, LM\rangle &=& 0 ~, \nonumber\\ P_{2\mu}\,\vert [N](2N,0)K=0, LM\rangle &=& 0 ~. 
\label{P0P2} \end{eqnarray} Equivalently, these operators satisfy \begin{eqnarray} P_{0}\vert c;\, N\rangle &=& 0 \nonumber\\ P_{2\mu}\vert c;\, N\rangle &=& 0 \label{PLcond} \end{eqnarray} where \begin{eqnarray} \vert c; \, N\rangle &=& (N!)^{-1/2} (b^{\dag}_{c})^{N}\vert 0\rangle \;\;\; , \;\;\; b^{\dag}_{c}= (\sqrt{2}\,d^{\dag}_{0} + s^{\dag})/\sqrt{3} ~. \label{condsu3} \end{eqnarray} The state $\vert c; \, N\rangle$ is a condensate of $N$ bosons and is obtained by substituting the SU(3) equilibrium deformations in the coherent state of Eq.~(\ref{condgen}), $\vert c; \, N\rangle = \vert\beta=\sqrt{2},\gamma=0 ; N \rangle$. It is the lowest-weight state in the SU(3) irrep $(\lambda,\mu)=(2N,0)$ and serves as an intrinsic state for the SU(3) ground band~\cite{CA83}. The rotational members of the band $\vert [N](2N,0)K=0, LM\rangle$, Eq.~(\ref{P0P2}), are obtained by angular momentum projection from $\vert c;\, N\rangle$. The relations in Eqs.~(\ref{P0P2})-(\ref{PLcond}) follow from the fact that the action of the operators $P_{L\mu}$ leads to a state with $N-2$ bosons in the U(6) irrep $[N-2]$, which does not contain the SU(3) irreps obtained from the product $(2,0)\times (2N,0)= (2N+2,0)\oplus (2N,1)\oplus (2N-2,2)$. Following the general algorithm, a two-body Hamiltonian with partial SU(3) symmetry can now be constructed as~\cite{AL92,lev96} \begin{eqnarray} \hat{H}(h_0,h_2) &=& h_{0}\, P^{\dagger}_{0}P_{0} + h_{2}\,P^{\dagger}_{2}\cdot \tilde{P}_{2} ~, \label{HPSsu3} \end{eqnarray} where $\tilde P_{2,\mu} = (-)^{\mu}P_{2,-\mu}$. For $h_{2}=h_{0}$, the Hamiltonian is an SU(3) scalar, related to the quadratic Casimir operator of SU(3) \begin{eqnarray} \hat{H}(h_0=h_2) &=& h_{2}\,\left [P^{\dagger}_{0}P_{0} + P^{\dagger}_{2}\cdot \tilde{P}_{2}\right ] = h_{2}\left [-\hat C_{{\rm SU(3)}} + 2\hat N (2\hat N+3)\right ] ~.\qquad \label{Hsu3} \end{eqnarray} For $h_0=-5h_2$, the Hamiltonian transforms as a $(2,2)$ SU(3) tensor component. 
For arbitrary $h_{0}, h_{2}$ coefficients, $\hat{H}(h_0,h_2)$ is not an SU(3) scalar, nevertheless, the relations in Eqs.~(\ref{P0P2})-(\ref{PLcond}) ensure that it has a solvable zero-energy band with good SU(3) quantum numbers $(\lambda,\mu)=(2N,0)$. When the coefficients $h_{0},\,h_{2}$ are positive, $\hat{H}(h_0,h_2)$ becomes positive definite by construction, and the solvable states form its SU(3) ground band. $\hat{H}(h_0,h_2)$ of Eq.~(\ref{HPSsu3}) has additional solvable eigenstates with good SU(3) character. This comes about from Eq.~(\ref{PLcond}) and the following properties \begin{eqnarray} &&P_{L,\mu}\vert c;N\rangle \;=\; 0 \;\;\; , \;\;\; \left [P_{L,\mu}\, ,\, P^{\dagger}_{2,2}\right ]\vert c;N\rangle\;= \; \delta_{L,2}\delta_{\mu,2}\,6(2N+3)\vert c;N\rangle \;\;\; , \qquad \nonumber\\ &&\left [\left [P_{L,\mu}\, , \,P^{\dagger}_{2,2}\right ]\, , \, P^{\dagger}_{2,2}\right ]\; = \; \delta_{L,2}\delta_{\mu,2}\,24P^{\dagger}_{2,2}\;\;\; , \qquad\quad L=0,2 \quad ~. \label{PLprop} \end{eqnarray} These relations imply that the sequence of states \begin{eqnarray} \vert k\rangle \; \propto \; \left (P^{\dagger}_{2,2}\right )^{k} \vert c; N-2k\rangle ~, \label{k} \end{eqnarray} are eigenstates of $\hat{H}(h_0,h_2)$ with eigenvalues $E_{k}\;=\; 6h_{2}\left (2N+1 -2k \right )k $. A comparison with Eq.~(\ref{eDSsu3}) shows that these energies are the SU(3) eigenvalues of $\hat{H}(h_0=h_2)$, Eq.~(\ref{Hsu3}), and identify the states $\vert k\rangle$ to be in the SU(3) irreps $(2N-4k,2k)$. They can be further shown to be the lowest-weight states in these irreps. Furthermore, $P_0$ satisfies \begin{eqnarray} P_{0}\,\vert k\rangle &=& 0 ~, \label{P0k} \end{eqnarray} or equivalently, \begin{eqnarray} P_{0}\,\vert [N](2N-4k,2k)K=2k, LM \rangle &=& 0 ~. 
\label{P0} \end{eqnarray} The states $\vert k\rangle$~(\ref{k}) are deformed and serve as intrinsic states representing $\gamma^{k}$ bands with angular momentum projection ($K=2k$) along the symmetry axis~\cite{CA83}. In particular, as noted earlier, $\vert k=0\rangle = \vert c; N\rangle$ represents the ground band ($K=0$) and $\vert k=1\rangle$ is the $\gamma$-band ($K=2$). The rotational members of these bands, $\vert [N](2N-4k,2k)K=2k, LM \rangle \propto \hat{\cal P}_{LM}\vert k\rangle$, Eq.~(\ref{P0}), can be obtained by O(3) projection from the corresponding intrinsic states $\vert k\rangle$. Relations~(\ref{P0k}) and~(\ref{P0}) are equivalent, since the angular momentum projection operator, $\hat{\cal P}_{LM}$, is composed of O(3) generators, hence commutes with $P_0$. The intrinsic states $\vert k \rangle$ break the O(3) symmetry, but since the Hamiltonian in Eq.~(\ref{HPSsu3}) is O(3) invariant, the projected states with good~$L$ are also eigenstates of $\hat{H}(h_0,h_2)$ with energy $E_k$ and with good SU(3) symmetry. For the ground band $(k=0)$ the projected states span the entire SU(3) irrep $(2N,0)$. For excited bands $(k\neq 0)$, the projected states span only part of the corresponding SU(3) irreps. There are other states originally in these irreps (as well as in other irreps) which do not preserve the SU(3) symmetry and therefore get mixed. In particular, the ground $g(K=0)$ and $\gamma(K=2)$ bands retain their SU(3) character $(2N,0)$ and $(2N-4,2)$, respectively, but the first excited $K=0$ band is mixed. This situation corresponds precisely to that of partial SU(3) symmetry. A Hamiltonian $\hat{H}(h_0,h_2)$ which is not an SU(3) scalar has a subset of {\it solvable} eigenstates which continue to have good SU(3) symmetry.
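The identification of the states $\vert k\rangle$ with the irreps $(2N-4k,2k)$ rests on the equality of $E_k$ with the corresponding SU(3)-DS eigenvalue. This algebraic identity is easily verified numerically (a sketch we add, with arbitrary $N$ and $h_2$):

```python
def f2(lam, mu):
    """Eigenvalue of the quadratic SU(3) Casimir operator."""
    return lam**2 + mu**2 + lam * mu + 3 * lam + 3 * mu

def e_ds_bandhead(N, k, h2):
    """SU(3)-DS energy of the (2N-4k, 2k) irrep, from Eq. (eDSu3)
    with m=0 and the L(L+1) term omitted."""
    return h2 * (-f2(2 * N - 4 * k, 2 * k) + 2 * N * (2 * N + 3))

def e_k(N, k, h2):
    """Energy of the intrinsic state |k>: E_k = 6 h2 (2N+1-2k) k."""
    return 6 * h2 * (2 * N + 1 - 2 * k) * k

N, h2 = 10, 1.0
print(all(abs(e_ds_bandhead(N, k, h2) - e_k(N, k, h2)) < 1e-9
          for k in range(N // 2 + 1)))   # True
```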
All of the above discussion is applicable also to the case when we add to the Hamiltonian of Eq.~(\ref{HPSsu3}) the Casimir operator of O(3), and by doing so, convert the partial SU(3) symmetry into partial dynamical SU(3) symmetry. The additional rotational term contributes just an $L(L+1)$ splitting but does not affect the wave functions. The most general one- and two-body Hamiltonian with SU(3)-PDS can thus be written in the form \begin{eqnarray} \hat{H}_{PDS} &=& \hat{H}(h_0,h_2) + C\, \hat{C}_{{\rm O(3)}} = \hat{H}_{DS} + (h_0 - h_2)\, P^{\dagger}_{0}P_{0} ~. \label{hPDSsu3} \end{eqnarray} Here $\hat{H}_{DS}$ is the SU(3) dynamical symmetry Hamiltonian, Eq.~(\ref{hDSsu3}), with parameters $h_2$ and $C$. The $P^{\dag}_{0}P_0$ term in Eq.~(\ref{hPDSsu3}) is not diagonal in the SU(3) chain~(\ref{chainsu3}), but Eqs.~(\ref{P0P2}) and~(\ref{P0}) ensure that it annihilates a subset of states with good SU(3) quantum numbers. Consequently, $\hat{H}_{PDS}$ retains selected solvable bands with good SU(3) symmetry. Specifically, the solvable states are members of the ground $g(K=0)$ band \begin{subequations} \begin{eqnarray} &&\vert N,(2N,0)K=0,L\rangle \;\;\;\; L=0,2,4,\ldots, 2N\\ && E_{PDS}= CL(L+1) \end{eqnarray} \label{gband} \end{subequations} and $\gamma^{k}(K=2k)$ bands \begin{subequations} \begin{eqnarray} &&\vert N,(2N-4k,2k)K=2k,L\rangle \;\;\;\;\;\; L=K,K+1,\ldots, (2N-2k) \qquad\qquad\\ && E_{PDS} = h_{2}\,6k \left (2N - 2k+1 \right ) + CL(L+1) \qquad k>0 ~. \end{eqnarray} \label{gamband} \end{subequations} The solvable states~(\ref{gband})-(\ref{gamband}) are those projected from the intrinsic states $\vert k\rangle$ of Eq.~(\ref{k}), and are simply selected members of the Elliott basis $\phi_{E}((\lambda,\mu)KLM)$~\cite{Elliott58}. In particular, the states belonging to the ground and $\gamma$ bands are the Elliott states $\phi_{E}((2N,0)K=0,LM)$ and $\phi_{E}((2N-4,2)K=2,LM)$ respectively. 
Their wave functions can be expressed in terms of the orthonormal Vergados basis, $\Psi_{V}((\lambda,\mu)\tilde{\chi} LM)$~\cite{VER}. For the ground band and for members of the $\gamma$ band with $L$ odd, the Vergados and Elliott bases are identical. The Elliott states in the $\gamma (K=2)$ band with $L$ even are mixtures of Vergados states in the SU(3) irrep $(2N-4,2)$ \begin{eqnarray} &&\phi_{E}((2N-4,2)K=2,LM) = \nonumber\\ &&\qquad\quad \left [\, \Psi_{V}((2N-4,2)\tilde{\chi}=2,LM) - x^{(L)}_{20}\, \Psi_{V}((2N-4,2)\tilde{\chi}=0,LM) \,\right ]/ x^{(L)}_{22} ~,\qquad\quad \label{phigam} \end{eqnarray} where $x^{(L)}_{20},\, x^{(L)}_{22}$ are known coefficients which appear in the transformation between the two bases~\cite{VER}. Since the wave functions of the solvable states are known, it is possible to obtain {\it analytic} expressions for matrix elements of observables between them. For calculating E2 rates, it is convenient to rewrite the relevant E2 operator, Eq.~(\ref{Te2}), in the form \begin{eqnarray} T(E2) \; = \; \alpha\, Q^{(2)} + \theta\, \Pi^{(2)} ~, \label{Te2su3} \end{eqnarray} where $Q^{(2)}$ is the quadrupole SU(3) generator [$\,Q^{(2)} =\Pi^{(2)} -(\sqrt{7}/2)\,U^{(2)}\,$] and $\Pi^{(2)}$ is a $(2,2)$ tensor under SU(3). The B(E2) values for $g\to g$ and $g\to \gamma$ transitions \begin{subequations} \begin{eqnarray} B(E2;g,L\rightarrow g,L') = \qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad \nonumber\\ \frac{\vert\langle\phi_{E}((2N,0)K=0,L')|| \alpha\, Q^{(2)} + \theta\, \Pi^{(2)}|| \phi_{E}((2N,0)K=0,L)\rangle\vert^{2}}{(2L+1)} ~, \label{gtog} \\ B(E2;\gamma,L\rightarrow g,L') = \qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad \nonumber\\ \theta^2\, \frac{\vert\langle\phi_{E}((2N,0)K=0,L')||\Pi^{(2)}|| \phi_{E}((2N-4,2)K=2,L)\rangle\vert^{2}}{(2L+1)} ~. 
\label{gamtog} \end{eqnarray} \label{E2tran} \end{subequations} can be expressed in terms of E2 matrix elements in the Vergados basis, for which analytic expressions are available~\cite{arimaiac78,ISA}. The Hamiltonian of Eq.~(\ref{hPDSsu3}), with SU(3)-PDS, was used in~\cite{lev96} to describe spectroscopic data of $^{168}$Er. The experimental spectrum~\cite{WCD} of the ground $g(K=0^{+}_1)$, $\gamma(K=2^{+}_1)$ and $K=0^{+}_2$ bands in this nucleus is shown in Fig.~\ref{figEr168} and compared with exact DS, PDS, and broken-SU(3) calculations. \begin{figure}[t] \begin{center} \includegraphics[height=8cm]{fig2Er168.eps} \caption{ \small Spectra of $^{168}$Er ($N=16$). Experimental energies (EXP) are compared with IBM calculations in an exact SU(3) dynamical symmetry [SU(3)], in a broken SU(3) symmetry (WCD)~{\protect\cite{WCD}} and in a partial dynamical SU(3) symmetry (PDS). The latter employs the Hamiltonian of Eq.~(\ref{hPDSsu3}) with $h_0=8,\,h_2=4,\,C=13$ keV. Adapted from~\cite{lev96}. \label{figEr168}} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[height=14cm]{fig3su3decomp.eps} \caption{ \small SU(3) decomposition of wave functions of the ground ($K=0_1$), $\gamma$ ($K=2_1$), and $K=0_2$ bands of $^{168}$Er ($N=16$) in a SU(3)-PDS calculation and in broken-SU(3) calculations: CQF~{\protect\cite{CQF}} and WCD~{\protect\cite{WCD}}. Adapted from~{\protect\cite{LevSin99}}. \label{figsu3decomp}} \end{center} \end{figure} According to the previous discussion, the SU(3)-PDS spectrum of the solvable ground and $\gamma$ bands is given by \begin{eqnarray} E_{g}(L) &=& C L(L+1) ~, \nonumber\\ E_{\gamma}(L) &=& 6h_{2}(2N-1) + C L(L+1) ~. \end{eqnarray} $\hat{H}_{PDS}$~(\ref{hPDSsu3}) is specified by three parameters ($N=16$ for $^{168}$Er according to the usual boson counting).
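The solvable-band formulas above are simple enough to check numerically. The following sketch evaluates them with the parameter values quoted in Fig.~\ref{figEr168} ($h_2=4$, $C=13$ keV) and $N=16$; the function names are ours, introduced only for illustration.

```python
# Sketch: solvable SU(3)-PDS band energies of Eqs. (gband)-(gamband),
# evaluated with the ^168Er parameters quoted in the text: h2 = 4 keV, C = 13 keV, N = 16.
N, h2, C = 16, 4.0, 13.0

def E_ground(L):
    """Ground band (2N,0), K=0: E = C L(L+1); independent of h0 and h2."""
    return C * L * (L + 1)

def E_gamma_band(L, k=1):
    """gamma^k band (2N-4k,2k), K=2k, k>0: E = 6 h2 k (2N-2k+1) + C L(L+1)."""
    return 6 * h2 * k * (2 * N - 2 * k + 1) + C * L * (L + 1)

# The gamma-g bandhead splitting is the L-independent offset 6 h2 (2N-1).
offset = E_gamma_band(2) - E_ground(2)
print(offset)  # 744.0 keV for these parameters
```

Note that $h_0$ never enters these solvable energies; as discussed below, it only moves the non-solvable $K=0^{+}_2$ band.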
The values of $C$ and $h_{2}$ were extracted from the experimental energy differences $[E(2^{+}_{g})-E(0^{+}_{g})]$ and $[E(2^{+}_{\gamma})-E(2^{+}_{g})]$, respectively. The spectrum of an exact SU(3) DS, Eq.~(\ref{eDSsu3}), obtained for $h_0=h_2$, deviates considerably from the experimental data since empirically the $K=0^{+}_2$ and $\gamma(K=2^{+}_1)$ bands are not degenerate. In the SU(3)-PDS calculation, the parameter $h_{0}$ was varied so as to reproduce the bandhead energy of the $K=0^{+}_2$ band. The prediction for other rotational members of the $K=0^{+}_1,\,0^{+}_2,\,2^{+}_1$ bands is shown in Fig.~\ref{figEr168}. Clearly, the SU(3) PDS spectrum is an improvement over the schematic, exact SU(3) dynamical symmetry description, since the $\beta$-$\gamma$ degeneracy is lifted. The ground and $\gamma$ bands are solvable; still, the quality of the calculated PDS spectrum is similar to that obtained in the broken-SU(3) calculation (WCD)~\cite{WCD}. The resulting SU(3) decomposition of the lowest bands for the SU(3)-PDS calculation is shown in Fig.~\ref{figsu3decomp}, and compared~\cite{LevSin99} to the conventional broken-SU(3) calculations WCD~\cite{WCD} and CQF~\cite{CQF}. In the latter calculations all states are mixed with respect to SU(3). In contrast, in the PDS calculation, the good SU(3) character, $(32,0)$ for the ground band and $(28,2)$ for the $\gamma$ band, is retained, while the $K=0^{+}_2$ band contains about $13\%$ admixtures into the dominant $(28,2)$ irrep. Thus, the breaking of the SU(3) symmetry induced by $\hat{H}_{PDS}$~(\ref{hPDSsu3}) is not small, and can lead to an appreciable SU(3) mixing in the remaining non-solvable states. Nevertheless, the special solvable states carry good SU(3) labels. The SU(3) symmetry is therefore partial but exact. \begin{table}[t] \begin{center} \caption{\label{Tab3Er168} \protect\small B(E2) branching ratios from states in the $\gamma$ band in $^{168}$Er.
The column EXP lists the experimental ratios~\cite{WCD}, PDS is the SU(3) partial dynamical symmetry calculation~{\protect\cite{lev96}} and WCD is the broken SU(3) calculation~{\protect\cite{WCD}}. Adapted from~{\protect\cite{lev96}}.} \vspace{1mm} \begin{tabular}{lcccc|rlcccc} \hline & & & & & & & & & &\\[-3mm] $L^{\pi}_{i}$ & $L^{\pi}_{f}$ & EXP & PDS & WCD & & $L^{\pi}_{i}$ & $L^{\pi}_{f}$ & EXP & PDS & WCD \\ & & & & & & & & & &\\[-3mm] \hline & & & & & & & & & &\\[-2mm] $2^{+}_{\gamma}$ & $0^{+}_{g}$ & $54.0$ & $64.27$ & $66.0$ & & $6^{+}_{\gamma}$ & $4^{+}_{g}$ & $0.44$ & $0.89$ & $0.97$ \\[2pt] & $2^{+}_{g}$ & $100.0$ & $100.0$ & $100.0$ & & & $6^{+}_{g}$ & $3.8$ & $4.38$ & $4.3$ \\[2pt] & $4^{+}_{g}$ & $6.8$ & $6.26$ & $6.0$ & & & $8^{+}_{g}$ & $1.4$ & $0.79$ & $0.73$ \\[2pt] $3^{+}_{\gamma}$ & $2^{+}_{g}$ & $2.6$ & $2.70$ & $2.7$ & & & $4^{+}_{\gamma}$ & $100.0$ & $100.0$ & $100.0$ \\[2pt] & $4^{+}_{g}$ & $1.7$ & $1.33$ & $1.3$ & & & $5^{+}_{\gamma}$ & $69.0$ & $58.61$ & $59.0$ \\[2pt] & $2^{+}_{\gamma}$ & $100.0$ & $100.0$ & $100.0$ & & $7^{+}_{\gamma}$ & $6^{+}_{g}$ & $0.74$ & $2.62$ & $2.7$ \\[2pt] $4^{+}_{\gamma}$ & $2^{+}_{g}$ & $1.6$ & $2.39$ & $2.5$ & & & $5^{+}_{\gamma}$ & $100.0$ & $100.0$ & $100.0$ \\[2pt] & $4^{+}_{g}$ & $8.1$ & $8.52$ & $8.3$ & & & $6^{+}_{\gamma}$ & $59.0$ & $39.22$ & $39.0$ \\[2pt] & $6^{+}_{g}$ & $1.1$ & $1.07$ & $1.0$ & & $8^{+}_{\gamma}$ & $6^{+}_{g}$ & $1.8$ & $0.59$ & $0.67$ \\[2pt] & $2^{+}_{\gamma}$ & $100.0$ & $100.0$ & $100.0$ & & & $8^{+}_{g}$ & $5.1$ & $3.57$ & $3.5$ \\[2pt] $5^{+}_{\gamma}$ & $4^{+}_{g}$ & $2.91$ & $4.15$ & $4.3$ & & & $6^{+}_{\gamma}$ & $100.0$ & $100.0$ & $100.0$ \\[2pt] & $6^{+}_{g}$ & $3.6$ & $3.31$ & $3.1$ & & & $7^{+}_{\gamma}$ & $135.0$ & $28.64$ & $29.0$ \\[2pt] & $3^{+}_{\gamma}$ & $100.0$ & $100.0$ & $100.0$ & & & & & & \\[2pt] & $4^{+}_{\gamma}$ & $122.0$ & $98.22$ & $98.5$ & & & & & & \\[4pt] \hline \end{tabular} \end{center} \label{Tabbe2su3} \end{table} Electromagnetic transitions 
are a more sensitive probe of the structure of states, hence are an important indicator for testing the predictions of SU(3)-PDS. Based on the analytic expressions of Eq.~(\ref{E2tran}), the parameters $\alpha$ and $\theta$ of the E2 operator can be extracted from the experimental values of $B(E2;0^{+}_{g}\rightarrow 2^{+}_{g})$ and $B(E2;0^{+}_{g}\rightarrow 2^{+}_{\gamma})$. The corresponding ratio for $^{168}$Er is $\theta/\alpha=4.261$. As shown in Table~\ref{Tab3Er168}, the resulting SU(3) PDS E2 rates for transitions originating within the $\gamma$ band are found to be in excellent agreement with experiment and are similar to the WCD broken-SU(3) calculation~\cite{WCD}. In particular, the SU(3) PDS calculation correctly reproduces the $(\gamma\rightarrow\gamma)/(\gamma\rightarrow g)$ strengths. The only significant discrepancy is that for the $8^{+}_{\gamma}\rightarrow 7^{+}_{\gamma}$ transition, which is very weak experimentally, with an intensity error of $50\%$ and an unknown M1 component~\cite{WCD}. The expressions in Eq.~(\ref{E2tran}) imply that $\gamma\rightarrow g$ $B(E2)$ ratios are independent of both E2 parameters $\alpha$ and $\theta$. Furthermore, since the ground and $\gamma$ bands have pure SU(3) character, $(2N,0)$ and $(2N-4,2)$ respectively, the corresponding wave functions do not depend on parameters of the Hamiltonian and hence are determined solely by symmetry. Consequently, the B(E2) ratios for $\gamma\rightarrow g$ transitions quoted in Table~\ref{Tab3Er168} are parameter-free predictions of SU(3) PDS. The agreement between these predictions and the data confirms the relevance of SU(3)-PDS to the spectroscopy of $^{168}$Er. PDS has important implications not only for the structure of the pure (solvable) states, but also for the mixing of the remaining (non-solvable) states.
Of particular interest is the nature of the lowest $K=0^{+}_2$ excitation, for which the role of double-$\gamma$-phonon admixtures is still subject to controversy~\cite{garrett01}. A closer look at the SU(3) decomposition in Fig.~\ref{figsu3decomp} shows that in the SU(3)-PDS calculation, the $K=0_2$ band is mixed and has the structure~\cite{LevSin99} \begin{eqnarray} \vert L,K=0_2\rangle &=& A_1\,\tilde\phi_{E}((2N-4,2)\tilde K=0,L) + A_2\,\tilde\phi_{E}((2N-8,4)\tilde K=0,L)\nonumber\\ && + A_3\,\phi_{E}((2N-6,0)K=0,L) ~. \label{pdsk0wf} \end{eqnarray} Here $\tilde\phi_{E}$ denote states orthogonal to the solvable $\gamma^k_{K=2k}$ Elliott states. The notation $\tilde{K}=0$ signifies that $K=0$ is only the dominant component in these states. For example, \begin{eqnarray} &&\tilde\phi_{E}((2N-4,2)\tilde K=0,L) = \nonumber\\ && \qquad\qquad \Bigl[\phi_{V}((2N-4,2)\tilde{\chi}=0,L) +x^{(L)}_{20}\phi_{V}((2N-4,2)\tilde{\chi}=2,L)\Bigr ]/x^{(L)}_{22} ~, \qquad\quad \label{phitil} \end{eqnarray} is the state orthogonal to the Elliott state in Eq.~(\ref{phigam}). For $^{168}$Er, the $K=0_2$ band was found~\cite{LevSin99} to contain $9.6\%$ $(26,0)$ and $2.9\%$ $(24,4)$ admixtures into the dominant $(28,2)$ irrep. Using the geometric analogs of the SU(3) bands~\cite{WC82}, $(2N-4,2)K=0 \sim \beta$, $(2N-8,4)K=0 \sim (\sqrt{2}\beta^2 + \gamma^{2}_{K=0})$, $(2N-6,0)K=0 \sim (\beta^2 - \sqrt{2}\gamma^{2}_{K=0})$, the wave function of Eq.~(\ref{pdsk0wf}) can be expressed in terms of the probability amplitudes for single- and double-phonon $K=0$ excitations \begin{eqnarray} A_{\beta} &=& A_1 ~,\quad A_{\gamma^2} = (A_2 -\sqrt{2}A_3)/\sqrt{3} ~,\quad A_{\beta^2} = (\sqrt{2}A_2 + A_3)/\sqrt{3} ~. \label{Ai} \end{eqnarray} It follows that in the PDS calculation, the $K=0_2$ band in $^{168}$Er contains admixtures of $12.4\%$ $\gamma^{2}_{K=0}$ and $0.1\%$ $\beta^2$ into the $\beta$ mode, {\it i.e.} $12.5\%$ double-phonon admixtures into the dominant single-phonon component.
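The amplitude relations of Eq.~(\ref{Ai}) are easy to check numerically. The sketch below assumes the quoted probabilities ($87.5\%$ of $(28,2)$, $2.9\%$ of $(24,4)$, $9.6\%$ of $(26,0)$) and a relative sign between $A_2$ and $A_3$ (a phase convention not spelled out in the text); with these inputs it recovers the $\approx 12.5\%$ total double-phonon admixture.

```python
# Sketch: phonon-mode probabilities of the K=0_2 band in ^168Er from Eq. (Ai).
# Inputs are the quoted SU(3) probabilities; the relative sign A3 < 0 is an
# assumed phase convention (the text quotes probabilities only).
from math import sqrt

A1 = sqrt(0.875)    # (28,2) component  ->  beta mode
A2 = sqrt(0.029)    # (24,4) component
A3 = -sqrt(0.096)   # (26,0) component (sign assumed)

A_beta   = A1
A_gamma2 = (A2 - sqrt(2) * A3) / sqrt(3)
A_beta2  = (sqrt(2) * A2 + A3) / sqrt(3)

# The map (A2, A3) -> (A_gamma2, A_beta2) is orthogonal, so the total
# double-phonon probability equals A2**2 + A3**2 = 12.5% exactly.
print(round(100 * A_gamma2**2, 1), round(100 * A_beta2**2, 1))
```

With the assumed sign, the individual admixtures come out close to the $12.4\%$ $\gamma^{2}_{K=0}$ and $0.1\%$ $\beta^2$ values quoted above (small differences reflect rounding of the input probabilities).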
These findings support the conventional single-phonon interpretation for this band with small but significant double-$\gamma$-phonon admixture. General properties of the $K=0_2$ band have been studied~\cite{LevSin99} by examining the SU(3)-PDS Hamiltonian of Eq.~(\ref{HPSsu3}). The empirical value of the ratio of $K=0_2$ and $\gamma$ bandhead energies, $E(0^{+}_{2})/[E(2^{+}_{\gamma})-E(2^{+}_{g})] = 0.8-1.8$, in the rare-earth region~\cite{casten94,casten94b} constrains the parameters of $\hat{H}(h_0,h_2)$ to be in the range $0.7 \leq h_0/h_2 \leq 2.4$. In general, one finds that the $K=0_2$ wave function retains the form of Eq.~(\ref{pdsk0wf}) and, therefore, a three-band mixing calculation is sufficient to describe its structure. The SU(3) breaking and the double-phonon admixture are more pronounced when the $K=0_2$ band is above the $\gamma$ band. For most of the relevant range of $h_0/h_2$, corresponding to a bandhead ratio in the range $0.8-1.65$, the double-phonon admixture is at most $\sim 15\%$. Only for higher values of the bandhead ratio can one obtain larger admixtures and even dominance of the $\gamma^{2}_{K=0}$ component in the $K=0_2$ wave function. \begin{table} \begin{center} \caption{\label{Tabbe2K0} \protect\small Comparison of calculated (Calc) and experimental (Exp)~\cite{lehmann98} absolute B(E2) values [W.u.] for transitions from the $K=0_2$ band in $^{168}$Er. PDS is the SU(3) partial dynamical symmetry calculation~\cite{LevSin99}, while WCD~\cite{WCD} and CQF~\cite{CQF} are broken SU(3) calculations.
Adapted from~\cite{LevSin99}.} \vspace{1mm} \begin{tabular}{lcc|ccc} \hline & & & & &\\[-3mm] \multicolumn{3}{c|}{Exp} & \multicolumn{3}{c}{Calc}\\ Transition & B(E2) & Range & PDS & WCD & CQF \\ & & & & &\\[-3mm] \hline & & & & &\\[-2mm] $2^{+}_{K=0_2}\rightarrow 0^{+}_g$ & 0.4 & 0.06--0.94 & 0.65 & 0.15 & 0.03 \\[2pt] $2^{+}_{K=0_2}\rightarrow 2^{+}_g$ & 0.5 & 0.07--1.27 & 1.02 & 0.24 & 0.03 \\[2pt] $2^{+}_{K=0_2}\rightarrow 4^{+}_g$ & 2.2 & 0.4--5.1 & 2.27 & 0.50 & 0.10 \\[2pt] $2^{+}_{K=0_2}\rightarrow 2^{+}_{\gamma}$ $^{a)}$ & 6.2 (3.1) & 1--15 (0.5--7.5) & 4.08 & 4.2 & 4.53 \\[2pt] $2^{+}_{K=0_2}\rightarrow 3^{+}_{\gamma}$ $^{a)}$ & 7.2 (3.6) & 1--19 (0.5--9.5) & 7.52 & 7.9 & 12.64 \\[4pt] \hline \multicolumn{6}{l}{}\\[-3mm] \multicolumn{6}{l}{\small $^{a)}$ The two numbers in each entry correspond to an assumption of pure E2}\\ \multicolumn{6}{l}{\small and (in parenthesis) 50\% E2 multipolarity.}\\ \end{tabular} \end{center} \end{table} An important clue to the structure of $K=0_2$ collective excitations comes from E2 transitions connecting the $K=0_2$ and $K=0_1$ bands. If we recall that only the ground band has the SU(3) component $(\lambda,\mu)=(2N,0)$, that $Q^{(2)}$, as a generator, cannot connect different SU(3) irreps and that $\Pi^{(2)}$, as a $(2,2)$ tensor under SU(3), can connect the $(2N,0)$ irrep only with the $(2N-4,2)$ irrep, we obtain the following expression for the B(E2) values of $K=0_2\rightarrow g$ transitions \begin{eqnarray} B(E2;K=0_2,L\rightarrow g,L') = \qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad \nonumber\\ A_{\beta}^2\,\theta^2\, \frac{\vert\langle\phi_{E}((2N,0)K=0,L')||\Pi^{(2)}|| \tilde\phi_{E}((2N-4,2)\tilde K=0,L)\rangle\vert^{2}}{(2L+1)} ~. \qquad \label{be2K0} \end{eqnarray} Employing Eq.~(\ref{phitil}), this B(E2) value can be expressed in terms of known B(E2) values in the Vergados basis~\cite{arimaiac78,ISA}. 
Using Eq.~(\ref{gamtog}), the E2 parameter $\theta$ can be determined from the known $2^{+}_{\gamma}\rightarrow 0^{+}_{g}$ E2 rates, and for $^{168}$Er is found to be $\theta^2=2.175$ W.u. As seen from Eq.~(\ref{be2K0}), the B(E2) values for $K=0_2\rightarrow g$ transitions are proportional to $(A_{\beta})^2$, hence, they provide a direct way for extracting the amount of SU(3) breaking and the admixture of double-phonon excitations in the $K=0_2$ wave function. In Table~\ref{Tabbe2K0} we compare the predictions of the PDS and broken-SU(3) calculations with the B(E2) values deduced from a lifetime measurement of the $2^{+}_{K=0_2}$ level in $^{168}$Er~\cite{lehmann98} (the indicated range for the B(E2) values corresponds to different assumptions on the feeding of the level). The PDS and WCD calculations are seen to agree well with the empirical values but the CQF calculation underpredicts the measured $K=0_2\rightarrow g$ data. The SU(3)-PDS discussed above is relevant to rotational-vibrational states of a prolate deformed shape, with equilibrium deformations $(\beta=\sqrt{2},\gamma=0)$ and a symmetry $z$-axis. It is also possible to identify an $\overline{{\rm SU(3)}}$-PDS corresponding to an oblate shape with equilibrium deformations $(\beta=\sqrt{2},\gamma=\pi/3)$ and a symmetry $y$-axis. The Hamiltonian with $\overline{{\rm SU(3)}}$-PDS has the same form as in Eq.~(\ref{hPDSsu3}) but the $L=0,2$ boson-pairs are obtained from Eq.~(\ref{PL}) by a change of phase: $s^{\dag}\to -s^{\dag}$, $s\to-s$. The generators and quadratic Casimir operator of $\overline{{\rm SU(3)}}$ are listed in the Appendix. The relevant $\overline{{\rm SU(3)}}$ irreps, $(\bar{\lambda},\bar{\mu})=(2k,2N-4k-6m)$, are conjugate to the SU(3) irreps, $(\lambda,\mu)$, encountered in the SU(3) chain, Eq.~(\ref{chainsu3}).
\subsection{O(6) PDS (type I)} \label{subsec:o6PDStypeI} The O(6) DS chain of the IBM and related quantum numbers are given by~\cite{arimaiac79} \begin{eqnarray} \begin{array}{ccccccc} {\rm U}(6)&\supset&{\rm O}(6)&\supset&{\rm O}(5)& \supset&{\rm O}(3)\\ \downarrow&&\downarrow&&\downarrow&&\downarrow\\[0mm] [N]&&\langle\sigma\rangle&&(\tau)&n_\Delta& L \end{array} ~, \label{chaino6} \end{eqnarray} where the generators of the above groups are listed in Table~\ref{TabIBMcas} of the Appendix. For a given U(6) irrep $[N]$, the allowed O(6) and O(5) irreps are $\sigma=N,\,N-2,\dots 0$ or $1$, and $\tau=0,\,1,\,\ldots \sigma$, respectively. The ${\rm O(5)}\supset {\rm O(3)}$ reduction is the same as in the U(5) chain. The eigenstates $|[N]\langle\sigma\rangle(\tau)n_\Delta LM\rangle$ are obtained with a Hamiltonian with O(6) DS which, for one- and two-body interactions, can be transcribed in the form \begin{eqnarray} \hat{H}_{\rm DS} &=& h_{0}\left [-\hat C_{{\rm O(6)}} + \hat N (\hat{N} +4)\right ] + B\, \hat{C}_{{\rm O(5)}} + C\,\hat{C}_{{\rm O(3)}} ~. \label{hDSo6} \end{eqnarray} The quadratic Casimir operators, $\hat{C}_{G}$, are defined in the Appendix. The spectrum of $\hat{H}_{\rm DS}$ is completely solvable with eigenenergies \begin{eqnarray} E_{\rm DS} &=& h_{0}\,(N-\sigma)(N+\sigma + 4) + B\,\tau(\tau+3) +\, C\,L(L+1) ~. \label{eDSo6} \end{eqnarray} The spectrum resembles that of a $\gamma$-unstable deformed rotovibrator, where states are arranged in O(6) multiplets with quantum number $\sigma$. The ground band corresponds to the O(6) irrep with $\sigma=N$. The splitting of states in a given O(6) multiplet is governed by the O(5) and O(3) terms in $\hat{H}_{\rm DS}$~(\ref{hDSo6}). The lowest members in each band have quantum numbers $(\tau=0,\, L=0)$, $(\tau=1,\, L=2)$ and $(\tau=2,\, L=2,4)$. 
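The completely solvable O(6) spectrum of Eq.~(\ref{eDSo6}) is simple enough to tabulate directly. The following sketch (with illustrative parameter values, not taken from any fit) enumerates the allowed $\sigma$ labels and evaluates $E_{\rm DS}$:

```python
# Sketch: O(6) dynamical-symmetry eigenvalues, Eq. (eDSo6).
# Parameter values below are illustrative, not fitted to a nucleus.
def allowed_sigma(N):
    """O(6) labels for the U(6) irrep [N]: sigma = N, N-2, ..., 0 or 1."""
    return list(range(N, -1, -2))

def E_DS(N, sigma, tau, L, h0=1.0, B=0.1, C=0.01):
    """E = h0 (N-sigma)(N+sigma+4) + B tau(tau+3) + C L(L+1)."""
    return h0 * (N - sigma) * (N + sigma + 4) + B * tau * (tau + 3) + C * L * (L + 1)

# Ground band: sigma = N, tau = L = 0 lies at zero energy; the sigma = N-2
# multiplet is shifted as a whole by h0 * 2 * (2N+2).
print(allowed_sigma(6), E_DS(6, 4, 0, 0))
```

With $B,C>0$ this reproduces the ordering $(\tau=0,L=0)$, $(\tau=1,L=2)$, $(\tau=2,L=2,4)$ of the lowest members in each $\sigma$-multiplet.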
\begin{table} \begin{center} \caption{\label{Tabo6tensII} \protect\small Normalized one- and two-boson O(6) tensors.} \vspace{1mm} \begin{tabular}{cccccl} \hline & & & & &\\[-3mm] $n$&$\sigma$&$\tau$&$n_{\Delta}$&$\ell$& $\hat B^\dag_{[n]\langle\sigma\rangle(\tau)n_{\Delta};\ell m}$\\ & & & & &\\[-3mm] \hline & & & & &\\[-2mm] 1& 1& 0& 0 & 0& $s^{\dag}$\\[2pt] 1& 1& 1& 0 & 2& $d^{\dag}_{m}$\\[2pt] 2& 2& 0& 0 & 0& $\sqrt{\frac{1}{12}}(d^\dag d^\dag)^{(0)}_0 +\sqrt{\frac{5}{12}}(s^\dag )^2$\\[2pt] 2& 2& 1& 0 & 2& $s^\dag d^{\dag}_m$\\[2pt] 2& 2& 2& 0 & 2& $\sqrt{\frac{1}{2}}(d^\dag d^\dag)^{(2)}_m$\\[2pt] 2& 2& 2& 0 & 4& $\sqrt{\frac{1}{2}}(d^\dag d^\dag)^{(4)}_m$\\[2pt] 2& 0& 0& 0 & 0& $\sqrt{\frac{5}{12}}(d^\dag d^\dag)^{(0)}_0 -\sqrt{\frac{1}{12}}(s^\dag)^2$\\ & & & & &\\[-3mm] \hline \end{tabular} \end{center} \end{table} The construction of Hamiltonians with O(6)-PDS is based on identification of $n$-boson operators which annihilate all states in a given O(6) irrep, $\langle\sigma\rangle$, chosen here to be the ground band irrep $\langle\sigma\rangle=\langle N\rangle$. For that purpose, a relevant operator to consider is \begin{eqnarray} P^{\dagger}_{0} &=& d^{\dagger}\cdot d^{\dagger} - (s^{\dagger})^2 ~. \label{Pdag0o6} \end{eqnarray} As seen from Table~\ref{Tabo6tensII}, the above operator is proportional to a two-boson O(6) tensor, $\hat B^\dag_{[n]\langle\sigma\rangle(\tau)n_{\Delta}\ell m}$, with $n=2$ and $\sigma=\tau=L=0$ \begin{eqnarray} \hat B^\dag_{[2]\langle0\rangle(0)0;00} &=& \frac{1}{2\sqrt{3}}\,P^{\dagger}_{0} ~. \label{Pdag0} \end{eqnarray} The corresponding Hermitian conjugate boson-pair annihilation operator, $P_0$, transforms also as $\langle\sigma\rangle=\langle 0\rangle$ under O(6) and satisfies \begin{eqnarray} P_{0}\,\vert [N]\langle N \rangle (\tau)n_{\Delta}LM\rangle = 0 ~. 
\label{P0o6} \end{eqnarray} Equivalently, this operator satisfies \begin{eqnarray} P_{0}\vert c;\, N\rangle &=& 0 \label{P0cond} \end{eqnarray} where \begin{eqnarray} \vert c; \, N\rangle &=& (N!)^{-1/2} (b^{\dag}_{c})^{N}\vert 0\rangle \nonumber\\ b^{\dag}_{c} &=& [\,\cos\gamma\,d^{\dag}_{0} + \sin\gamma\, (d^{\dag}_{2}+d^{\dag}_{-2})/\sqrt{2} + s^{\dag}\,]/\sqrt{2} ~. \label{condso6} \end{eqnarray} The state $\vert c; \, N\rangle$ is obtained by substituting the O(6) equilibrium deformations in the coherent state of Eq.~(\ref{condgen}), $\vert c; \, N\rangle = \vert\beta=1,\gamma ; N \rangle$. It is the lowest-weight state in the O(6) irrep $\langle\sigma\rangle = \langle N\rangle$ and serves as an intrinsic state for the O(6) ground band~\cite{lev84}. The rotational members of the band, $\vert [N]\langle N\rangle (\tau)n_{\Delta}LM\rangle$, Eq.~(\ref{P0o6}), are obtained by O(5) projection from $\vert c;\, N\rangle$ and span the entire O(6) irrep $\langle\sigma\rangle=\langle N\rangle$. The relations in Eqs.~(\ref{P0o6})-(\ref{P0cond}) follow from the fact that the action of the operator $P_{0}$ leads to a state with $N-2$ bosons in the U(6) irrep $[N-2]$, which does not contain the O(6) irrep $\langle N\rangle$ obtained from the product of $\langle 0\rangle\times \langle N\rangle$. Since both $P^{\dag}_{0}$ and $P_0$~(\ref{Pdag0o6}) are O(6) scalars, they give rise to the following O(6)-invariant interaction \begin{eqnarray} P^{\dag}_{0}P_{0} &=& \left [-\hat C_{{\rm O(6)}} + \hat N (\hat{N} +4)\right ] ~, \label{HPSo6} \end{eqnarray} which is simply the O(6) term in $\hat{H}_{\rm DS}$, Eq.~(\ref{hDSo6}), with an exact O(6) symmetry. Thus, in this case, unlike the situation encountered with SU(3)-PDS, the algorithm does not yield an O(6)-PDS of type I with two-body interactions. In the IBM framework, a Hamiltonian with a genuine O(6)-PDS of this class requires higher-order terms.
A construction of three-body Hamiltonians with this property will be presented in Subsection~\ref{subsec:o6PDS3bod}. \section{PDS type II} \label{sec:PDStypeII} PDS of type II corresponds to a situation for which {\it all} the states of the system preserve {\it part} of the dynamical symmetry, \begin{eqnarray} G_0 \supset G_1 \supset G_2 \supset \ldots \supset G_{n} ~. \label{DSchain} \end{eqnarray} In this case, there are no analytic solutions, yet selected quantum numbers (of the conserved symmetries) are retained. This occurs, for example, when the Hamiltonian contains interaction terms from two different chains with a common symmetry subalgebra, {\it e.g.}, \begin{eqnarray} G_0 \supset \left \{ \begin{array}{c} G_1 \\ G_{1}' \end{array} \right \}\supset G_2 \supset \ldots \supset G_n \label{G0chains} \end{eqnarray} If $G_{1}$ and $G_{1}'$ are incompatible, {\it i.e.}, do not commute, then their irreps are mixed in the eigenstates of the Hamiltonian. On the other hand, since $G_2$ and its subalgebras are common to both chains, the labels of their irreps remain good quantum numbers. A Hamiltonian based on a spectrum generating algebra $G_0$ and a symmetry algebra $G_n$ common to all chains of $G_0$ has, by definition, a $G_n$-PDS of type II, albeit a trivial one [{\it e.g.}, $G_n={\rm O(3)}$]. Therefore, the notion of PDS type II is physically relevant when the common segment of the two DS chains contains subalgebras which are different from the symmetry algebra, {\it i.e.}, $G_{2}\neq G_n$ in Eq.~(\ref{G0chains}). An alternative situation where PDS of type II occurs is when the Hamiltonian preserves only some of the symmetries $G_i$ in the chain~(\ref{DSchain}) and only their irreps are unmixed. A~systematic procedure for identifying interactions with this property was proposed in~\cite{isa99}.
Let $G_1\supset G_2\supset G_3$ be a set of nested algebras which may occur anywhere in the reduction~(\ref{DSchain}), in-between the spectrum generating algebra $G_0$ and the invariant symmetry algebra $G_n$. The procedure is based on writing the Hamiltonian in terms of generators, $g_i$, of $G_1$, which do not belong to its subalgebra $G_2$. By construction, such a Hamiltonian preserves the $G_1$ symmetry but, in general, not the $G_2$ symmetry, and hence will have the $G_1$ labels as good quantum numbers but will mix different irreps of $G_2$. The Hamiltonian can still conserve the $G_3$ labels, {\it e.g.}, by choosing it to be a scalar of $G_3$. The procedure for constructing Hamiltonians with the above properties involves the identification of the tensor character under $G_2$ and $G_3$ of the operators $g_i$ and their products, $g_i g_{j}\ldots g_{k}$. The Hamiltonians obtained in this manner belong to the integrity basis of $G_3$-scalar operators in the enveloping algebra of $G_1$ and, hence, their existence is correlated with their order. In the discussion below we exemplify the two scenarios for constructing Hamiltonians with PDS of type II within the IBM framework. \subsection{O(5) PDS (type II)} \label{subsec:o5PDStypeII} An example of mixing two incompatible chains with a common symmetry subalgebra is provided by the U(5) and O(6) chains in the IBM~\cite{levnov86} \begin{eqnarray} {\rm U(6)} \supset \left \{ \begin{array}{c} {\rm U(5)} \\ {\rm O(6)} \end{array} \right \}\supset {\rm O(5)} \supset {\rm O(3)} ~. \label{u5o6} \end{eqnarray} The corresponding quantum numbers were discussed in Subsections~\ref{subsec:u5PDStypeI} and~\ref{subsec:o6PDStypeI}.
The most general Hamiltonian which conserves the O(5) symmetry involves a combination of terms from these two chains, and for one- and two-body interactions reads \begin{eqnarray} \hat{H}_{O(5)} &=& \epsilon\,\hat{n}_d + \alpha\,\hat{n}_d(\hat{n}_d+4) + A\,P^{\dag}_{0}P_0 + B\, \hat{C}_{{\rm O(5)}} + C\,\hat{C}_{{\rm O(3)}} ~. \label{hPDSo5} \end{eqnarray} The $d$-boson number operator, $\hat{n}_d$, is the linear Casimir of U(5), the O(6)-pairing term, $P^{\dag}_{0}P_0$, is related to the Casimir operator of O(6), Eq.~(\ref{HPSo6}), and the Casimir operators of O(5) and O(3) can be replaced by their eigenvalues $B\,\tau(\tau+3) +C\,L(L+1)$. In this case, all eigenstates of $\hat{H}_{{\rm O(5)}}$ have good O(5) symmetry but, with a few exceptions, none of them have good U(5) symmetry nor good O(6) symmetry, and hence only part of the dynamical symmetry of each chain in Eq.~(\ref{u5o6}) is observed. These are precisely the defining features of O(5)-PDS of type II. In general, $\hat{H}_{{\rm O(5)}}$~(\ref{hPDSo5}) mixes U(5) irreps, characterized by $n_d$, as well as mixes irreps of O(6) characterized by $\sigma$, but retains the ${\rm O(5)}\supset {\rm O(3)}$ labels, $(\tau,L)$ as good quantum numbers. The conserved O(5) symmetry has important consequences which will be discussed briefly below~\cite{levnov86}. The first four terms in~(\ref{hPDSo5}) are O(5) scalars, hence level-spacings within an O(5) irrep ($\tau$-multiplet) are the same throughout the U(5)-O(6) transition region. They are given only by $CL(L+1)$ and are independent of the values of the other parameters in $\hat{H}_{{\rm O(5)}}$. The various multiplets with the same value of $\tau$ are distinguished by another label, $\nu=1,\ldots$, which indicates their relative position in the spectrum. The actual positions of the various $\tau$-multiplets, as well as the wave functions of their states, are determined by diagonalization of the Hamiltonian~(\ref{hPDSo5}).
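Since $(\tau,L)$ are conserved while $n_d$ (equivalently $\sigma$) is mixed, the diagonalization decomposes into blocks labelled by $\tau$. A small sketch, assuming the reductions quoted for the two chains ($n_d=\tau,\tau+2,\ldots$ up to $N$, and $\sigma=N,N-2,\ldots$ down to $\tau$ or $\tau+1$), counts the basis states mixed in each block and checks that the U(5) and O(6) expansions of the same $\tau$-block involve the same number of states:

```python
# Sketch: size of the tau-blocks mixed by the O(5)-conserving Hamiltonian.
# Assumes the reductions quoted in the text: U(5) > O(5) gives n_d = tau, tau+2, ..., N;
# O(6) > O(5) gives sigma = N, N-2, ..., down to tau or tau+1.
def u5_content(N, tau):
    return list(range(tau, N + 1, 2))

def o6_content(N, tau):
    return [sigma for sigma in range(N, -1, -2) if sigma >= tau]

# Both bases span the same tau-block, so the counts must agree.
for N in (5, 6, 10):
    for tau in range(N + 1):
        assert len(u5_content(N, tau)) == len(o6_content(N, tau))
```

The functions here are only bookkeeping aids; the amplitudes $\xi_{n_d}^{(\nu,\tau)}$ and $\eta_{\sigma}^{(\nu,\tau)}$ introduced below still require an actual diagonalization.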
The wave function of any state in the $\nu$th $\tau$-multiplet can be expressed in the U(5) basis as \begin{eqnarray} \vert \nu; N,\tau, n_\Delta, L,M\rangle &=& \sum_{n_d}\, \xi_{n_d}^{(\nu,\tau)}\, \vert [N]\langle n_d\rangle (\tau)n_\Delta L M\rangle \label{wfu5basis} \end{eqnarray} with $n_d=\tau,\tau+2,\ldots, N-1$ or $N$. Similarly, the same wave functions can be expressed in the O(6) basis as \begin{eqnarray} \vert \nu; N,\tau, n_\Delta, L,M\rangle &=& \sum_{\sigma}\, \eta_{\sigma}^{(\nu,\tau)}\, \vert [N]\langle \sigma\rangle (\tau)n_\Delta L M\rangle \label{wfo6basis} \end{eqnarray} with $\sigma=N, N-2,\ldots, (\tau+1)$ or $\tau$. The amplitudes in Eqs.~(\ref{wfu5basis})-(\ref{wfo6basis}) depend, in general, on the actual values of $\epsilon,\,\alpha,\,A$ as well as on $N$. The O(5) symmetry of~(\ref{hPDSo5}) can be further used to derive special properties of electromagnetic transitions. The $\Pi^{(2)}$ part of the E2 operator, Eq.~(\ref{Te2}), is a $\tau=1$ tensor under O(5) and connects states with $\Delta\tau=\pm 1$. The $U^{(2)}$ part is a $\tau=2$ tensor under O(5) and connects states with $\Delta\tau=0, \pm2$. The B(E2) values of transitions from a state with $\tau',n_{\Delta}',L'$ in the $\nu'$ multiplet to a state with $\tau,n_{\Delta},L$ in the $\nu$ multiplet can be written in the form \begin{subequations} \begin{eqnarray} &&B(E2; \nu',\tau',n_{\Delta}',L'\to \nu,\tau,n_{\Delta},L) \nonumber\\ &&\qquad\qquad = e_{B}^2\,{\cal F}_{N}(\nu,\nu',\tau) \left \langle \begin{array}{cc|c} \tau' & 1 & \tau\\ n_{\Delta}'L' & 2 & n_{\Delta}L \end{array} \right \rangle^2 \quad\quad\;\;\; \tau'=\tau\pm 1 ~,\qquad\quad \label{be2o5a}\\ &&\qquad\qquad = (e_{B}\chi)^2\,{\cal G}_{N}(\nu,\nu',\tau) \left \langle \begin{array}{cc|c} \tau' & 2 & \tau \\ n_{\Delta}'L' & 2 & n_{\Delta}L \end{array} \right \rangle^2 \quad \tau'=\tau\, , \,\tau\pm 2~.\qquad\quad \label{be2o5b} \end{eqnarray} \label{be2o5} \end{subequations} Each B(E2) is a product of two factors. 
The first factor, which is determined by the Hamiltonian, depends on $N$ and on $\nu,\tau$ of the initial and final states. The second factor is the ${\rm O(5)}\supset {\rm O(3)}$ isoscalar factor (ISF). It is the same for every Hamiltonian~(\ref{hPDSo5}), is completely determined by O(5) symmetry and depends only on the $\tau,\, n_{\Delta},\, L$ of the initial and final states. Analytic expressions for some of these ISFs are available~\cite{ibm,ibfm,piet87}, as well as a computer code for their evaluation~\cite{caprio09}. The factorization observed in Eq.~(\ref{be2o5}) is a manifestation of the Wigner-Eckart theorem for the O(5) group and has important implications with respect to E2 rates. First, consider transitions from states in any $\nu,\tau$-multiplet to states in any $\nu',\tau'$-multiplet (including $\nu=\nu'$). A direct consequence of~(\ref{be2o5}) is that B(E2) ratios of such transitions are equal to the ratios of ISFs, hence, are independent of $\nu,\,\nu'$ throughout the O(6)-U(5) region. Such ratios are also independent of $N$ and the actual parameters in~(\ref{hPDSo5}), {\it i.e.}, they should be the same in all O(5) nuclei considered. Second, consider E2 transitions between a state with given $\tau,\,n_{\Delta},\,L$ and a state with given $\tau',\,n_{\Delta}',\,L'$ either with $\nu=\nu'$ or $\nu\neq\nu'$, and even in different nuclei. In B(E2) ratios of such transitions the ISFs in Eq.~(\ref{be2o5}) cancel, and the ratio of the other factors [{\it e.g.}, $e_{B}^2{\cal F}_{N}(\nu,\nu',\tau)/e_{B}'^{2} {\cal F}_{N'}(\nu'',\nu''',\tau'')$] is independent of $n_{\Delta},L,n_{\Delta}',L'$. Thus, such ratios should be the same for all transitions between the states of the $\tau,\,\tau'$-multiplets considered. The very definite statements made above about E2 transitions follow directly from O(5) symmetry. They hold exactly for any values of $N$ and of parameters in the Hamiltonian~(\ref{hPDSo5}) and, in particular, at the O(6) and U(5) limits.
They played an instrumental role in identifying empirical signatures of O(5) symmetry which are common to all IBM Hamiltonians (\ref{hPDSo5}) in the O(6)-U(5) region and which should be clearly distinguished from features, {\it e.g.}, absolute B(E2) values, that can yield crucial evidence for O(6) or U(5) symmetries or for deviations from these limits~\cite{casten87,brentano88,rain10}. Hamiltonians with O(5)-PDS of type II have been used extensively for studying transitional nuclei in the Ru-Pd~\cite{Stachel82} and Xe-Ba~\cite{Casten85} regions, whose structure varies from spherical [U(5)] to $\gamma$-unstable deformed [O(6)]. Such O(5)-PDS is also relevant to the coexistence of normal and intruder levels in $^{112}$Cd~\cite{Jolie95}. The particular Hamiltonian $\hat{H}_{{\rm O(5)}}$ of Eq.~(\ref{hPDSo5}), with O(5)-PDS of type II, also has selected U(5) basis states as eigenstates and hence also exhibits U(5)-PDS of type I. This follows from the fact that $AP^{\dag}_{0}P_0$, which is the only term in $\hat{H}_{\rm{O(5)}}$ that breaks the U(5) DS, involves the operator $P_0 = \tilde{d}\cdot \tilde{d} -s^2$. The latter operator annihilates the U(5) basis states, $\vert [N],n_d=\tau=N,L\,\rangle$ and $\vert [N],n_d=\tau=N-1,L\,\rangle$, for reasons given after Eq.~(\ref{V0ndN}). \subsection{O(6) PDS (type II)} \label{subsec:o6PDStypeII} An alternative situation where PDS of type II can occur is when the entire eigenspectrum retains only some of the symmetries $G_i$ in a given dynamical symmetry chain~(\ref{DSchain}). Such a scenario was considered in~\cite{isa99} in relation to the O(6) chain \begin{equation} \begin{array}{ccccccc} {\rm U(6)} &\supset& {\rm O(6)} &\supset& {\rm O(5)} & \supset& {\rm O(3)} \\ \downarrow&&\downarrow&&\downarrow&&\downarrow\\[0mm] \left [N\right ] && \langle0,\sigma,0\rangle && (\tau,0) & n_{\Delta}& L \end{array}.
\label{DSo6} \end{equation} The following Hamiltonian, with O(6)-PDS of type II, has been proposed \begin{eqnarray} \hat{H}_1 \;=\; \kappa_{0}P^{\dagger}_{0}P_{0} + \kappa_2 \Bigl (\Pi^{(2)}\times \Pi^{(2)}\Bigr )^{(2)}\cdot\Pi^{(2)} ~. \label{h1} \end{eqnarray} The $\kappa_0$ term is the O(6) pairing term of Eq.~(\ref{HPSo6}). It is diagonal in the dynamical-symmetry basis $\vert [N]\langle\sigma\rangle (\tau)n_{\Delta}LM\rangle$ of Eq.~(\ref{DSo6}) with eigenvalues $\kappa_0(N - \sigma)(N +\sigma +4)$. The $\kappa_2$ term is constructed only from the O(6) generator, $\Pi^{(2)}=d^{\dagger}s+s^{\dagger}\tilde{d}$, which is not a generator of O(5). Therefore, it cannot connect states in different O(6) irreps but can induce O(5) mixing subject to $\Delta\tau=\pm 1,\pm 3$. Consequently, all eigenstates of $\hat{H}_1$ have good O(6) quantum number $\sigma$ but do not possess O(5) symmetry $\tau$. These are the necessary ingredients of an O(6) PDS of type II associated with the chain of Eq.~(\ref{DSo6}). As shown in Fig.~\ref{figo6energy}, a typical spectrum of $\hat{H}_1$ displays rotational bands of an axially-deformed nucleus. All bands of $\hat{H}_1$ are pure with respect to O(6). This is demonstrated in the left panel of Fig.~\ref{figo6decomp} for the $K=0_1,2_1,2_3$ bands which have $\sigma=N$, and for the $K=0_2$ band which has $\sigma=N-2$. In this case, the diagonal $\kappa_0$-term in Eq.~(\ref{h1}) simply shifts each band as a whole in accord with its $\sigma$ assignment. On the other hand, the $\kappa_2$-term in Eq.~(\ref{h1}) is an O(5) tensor with $(\tau,0)=(3,0)$ and, therefore, all eigenstates of $\hat{H}_1$ are mixed with respect to O(5). This mixing is demonstrated in the right panel of Fig.~\ref{figo6decomp} for the $L=0,2$ members of the ground band. A key element in the above procedure for constructing Hamiltonians with O(6)-PDS of type II is the tensorial character of the generators contained in O(6) but not in O(5)~\cite{isa99}.
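Since the $\kappa_0$ term is diagonal, the shift of an entire band follows from its eigenvalue formula alone. A minimal numerical sketch (the values $\kappa_0=8$ keV and $N=15$ are the $^{162}$Dy parameters quoted in the caption of Fig.~\ref{figo6energy}):

```python
def o6_pairing_energy(kappa0, N, sigma):
    # Eigenvalue of the kappa0 * P0^dag P0 term in the basis of Eq. (DSo6):
    # kappa0 * (N - sigma) * (N + sigma + 4)
    return kappa0 * (N - sigma) * (N + sigma + 4)

kappa0, N = 8.0, 15                          # keV, boson number
print(o6_pairing_energy(kappa0, N, N))       # sigma = N   band: 0.0 keV
print(o6_pairing_energy(kappa0, N, N - 2))   # sigma = N-2 band: 512.0 keV
```

The $\sigma=N$ bands are unshifted, while the $\sigma=N-2$ band is raised as a whole by $2\kappa_0(2N+2)$, in accord with the discussion of Fig.~\ref{figo6decomp}.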
In the present case, the tensor character of the operator $\Pi^{(2)}$ under O(5) is $(\tau,0)=(1,0)$ and under O(3), $L=2$. A quadratic interaction $\Pi^{(2)}\cdot \Pi^{(2)}$ corresponds to the O(5) multiplication $(1,0)\times (1,0) = (2,0)\oplus (1,1)\oplus (0,0)$. Since only the irrep $(0,0)$ contains an $L=0$ state, it follows that the quadratic term must be an O(5) scalar. Indeed, from Table~\ref{TabIBMcas} of the Appendix, we find $\Pi^{(2)}\cdot \Pi^{(2)}= \hat{C}_{{\rm O(6)}} - \hat{C}_{{\rm O(5)}}$. At the next (cubic) order, the interaction $(\Pi^{(2)}\times \Pi^{(2)})^{(2)}\cdot\Pi^{(2)}$ corresponds to $(1,0)\times (1,0)\times (1,0)$; O(5) multiplication shows that there is only one O(3) scalar and it has O(5) character $(3,0)$. Consequently, $(\Pi^{(2)}\times \Pi^{(2)})^{(2)}\cdot\Pi^{(2)}$ is an example of a $\sigma$-conserving, $\tau$-violating interaction; it mixes $(\tau,0)$ with $(\tau\pm 1,0)$ and $(\tau\pm 3,0)$. This discussion highlights the fact that the existence of Hamiltonians with PDS of type~II, constructed in this manner, may necessitate higher-order terms. A similar procedure can be applied to the ${\rm U(6)}\supset {\rm U(5)}\supset {\rm O(5)}\supset {\rm O(3)}$ chain of the IBM~\cite{isa99}. The generators contained in U(5) but not in O(5) are $U^{(L)}_{\mu}\equiv (d^{\dag}\tilde{d})^{(L)}_{\mu}$ with $L=0,2,4$. $U^{(0)}$ is a scalar in O(5) and hence does not mix O(5) irreps. The operators $U^{(2)}_{\mu}$ and $U^{(4)}_{\mu}$, on the other hand, have O(5) character $(2,0)$. O(3)-scalar interactions obtained from quadratic combinations of such tensors involve terms of the U(5) DS Hamiltonian, Eq.~(\ref{hDSu5}), hence do not induce O(5) mixing among symmetric, $(\tau,0)$ irreps. On the other hand, cubic O(3)-scalar combinations of $U^{(2)}_{\mu}$ and $U^{(4)}_{\mu}$ can lead to two independent $d$-boson interaction terms that can induce O(5) mixing but conserve the U(5) quantum number, $n_d$.
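The quadratic O(5) multiplication quoted above can be checked at the level of dimensions. In the sketch below the SO(5) dimension formula is a standard representation-theory result, assumed here rather than taken from the text:

```python
def dim_so5(a, b):
    # Dimension of the SO(5) irrep (a, b), a >= b >= 0 (standard formula,
    # assumed here); e.g. dim(1,0) = 5, dim(1,1) = 10 for the adjoint.
    return (2*a + 3) * (2*b + 1) * (a + b + 2) * (a - b + 1) // 6

# (1,0) x (1,0) = (2,0) + (1,1) + (0,0):  5 * 5 = 14 + 10 + 1
assert dim_so5(1, 0) ** 2 == dim_so5(2, 0) + dim_so5(1, 1) + dim_so5(0, 0)
```

The dimension count $25 = 14 + 10 + 1$ confirms the decomposition used in the argument for the O(5)-scalar nature of $\Pi^{(2)}\cdot\Pi^{(2)}$.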
By definition, such $n_d$-conserving but $\tau$-violating cubic terms exemplify a U(5) PDS of type II. \section{PDS type III} \label{sec:PDStypeIII} PDS of type III combines properties of both PDS of type I and II. Such a generalized PDS~\cite{levisa02} has a hybrid character, for which {\it part} of the states of the system under study preserve {\it part} of the dynamical symmetry. In relation to the dynamical symmetry chain of Eq.~(\ref{chain}), $G_{\rm dyn}\supset G\supset\cdots\supset G_{\rm sym}$, with associated basis, $\vert [h_N]\langle\Sigma\rangle\Lambda\rangle$, this can be accomplished by relaxing the condition of Eq.~(\ref{anni}), $\hat T_{[h_n]\langle\sigma\rangle\lambda} |[h_N]\langle\Sigma_0\rangle\Lambda\rangle=0$, so that it holds only for {\it selected} states $\Lambda$ contained in a given irrep $\langle\Sigma_0\rangle$ of $G$ and/or selected (combinations of) components $\lambda$ of the tensor $\hat T_{[h_n]\langle\sigma\rangle\lambda}$. Under such circumstances, let $G'\neq G_{\rm sym}$ be a subalgebra of $G$ in the aforementioned chain, $G\supset G'$. In general, the Hamiltonians, constructed from these tensors in the manner shown in Eq.~(\ref{PS}), are invariant under neither $G$ nor $G'$. Nevertheless, they do possess the subset of solvable states, $|[h_N]\langle\Sigma_0\rangle\Lambda\rangle$, with good $G$-symmetry, $\langle\Sigma_0\rangle$, while other states are mixed. At the same time, the symmetry associated with the subalgebra $G'$ is broken in all states (including the solvable ones). Thus, part of the eigenstates preserve part of the symmetry. These are precisely the requirements of PDS of type III. In what follows we explicitly construct Hamiltonians with such properties within the IBM framework.
\subsection{O(6) PDS (type III)} \label{subsec:o6PDStypeIII} PDS of type III associated with the O(6) chain, Eq.~(\ref{DSo6}), can be realized in terms of Hamiltonians which have a subset of solvable states with good O(6) symmetry but broken O(5) symmetry. Hamiltonians with such property can be constructed~\cite{levisa02} by means of the following boson-pair operators with angular momentum $L =0,\,2$ \begin{subequations} \begin{eqnarray} P^{\dagger}_{0} &=& d^{\dagger}\cdot d^{\dagger} - (s^{\dagger})^2 ~,\\ P^{\dagger}_{2\mu} &=& \sqrt{2}d^{\dagger}_{\mu}s^{\dagger} + \sqrt{7}\, (d^{\dagger}\,d^{\dagger})^{(2)}_{\mu} ~. \end{eqnarray} \label{PLo6} \end{subequations} \begin{figure}[t] \begin{center} \includegraphics[height=4in,angle=-90]{fig4Dy162.eps} \end{center} \caption{ \small Experimental spectra (EXP) of $^{162}$Dy compared with calculated spectra of $\hat{H}_1+ C_1\,\hat{C}_{{\rm O(3)}}$, Eq.~(\ref{h1}), with O(6)-PDS type II and of $\hat{H}_2+ C_2\,\hat{C}_{{\rm O(3)}}$, Eq.~(\ref{h2}), with O(6)-PDS of type III. The parameters (in keV) are $\kappa_0=8$, $\kappa_2=1.364$, $C_1=8$ and $h_0=28.5$, $h_2=6.3$, $C_2=13.45$, and boson number $N=15$. Adapted from~\cite{levisa02}. \label{figo6energy}} \end{figure} \begin{figure}[t] \begin{center} \includegraphics*[width=2.49in,angle=-90]{fig5Lo6decomp.eps} \hspace*{0.2in} \includegraphics*[width=2.49in,angle=-90]{fig5Ro5decomp.eps} \end{center} \caption{ \small Left: ~O(6) decomposition of wave functions of states in the bands $K=0_1,\,2_1,\,0_2,\,(L=K^{+})$, and $K=2_3,\,(L=3^{+})$, for $\hat{H}_1$~(\ref{h1}) with O(6)-PDS type II (upper portion) and $\hat{H}_2$~(\ref{h2}) with O(6)-PDS type III (lower portion). Right: O(5) decomposition of wave functions of $L=0,\,2$ states in the $\sigma=N$ ground bands ($K=0_1$) of $\hat{H}_1$ (upper portion) and $\hat{H}_2$ (lower portion). Adapted from~\protect\cite{levisa02}. 
\label{figo6decomp}} \end{figure} From Table~\ref{Tabo6tensII} one sees that the $P^{\dag}_{0}$ pair is an O(6) tensor with $(\sigma=0,\tau=0,L=0)$, while $P^{\dag}_{2\mu}$ involves a combination of tensors with $(\sigma=2,\tau=1,L=2)$ and $(\sigma=2,\tau=2,L=2)$ \begin{subequations} \begin{eqnarray} P^{\dagger}_{0} &=& 2\sqrt{3}\, \hat B^\dag_{[2]\langle0\rangle(0)0;00} ~,\\ P^{\dagger}_{2\mu} &=& \sqrt{2}\, \hat B^\dag_{[2]\langle2\rangle(1)0;2\mu} + \sqrt{14}\,\hat B^\dag_{[2]\langle2\rangle(2)0;2\mu} ~. \label{Pdag2} \end{eqnarray} \label{Pdag02} \end{subequations} These operators satisfy \begin{subequations} \begin{eqnarray} P_{0}\vert c;\, N\rangle &=& 0 \label{P0condo6}\\ P_{2\mu}\vert c;\, N\rangle &=& 0 \label{P2condo6} \end{eqnarray} \label{PLcondo6} \end{subequations} where \begin{eqnarray} \vert c; \, N\rangle &=& (N!)^{-1/2} (b^{\dag}_{c})^{N}\vert 0\rangle \;\;\; , \;\;\; b^{\dag}_{c}= (\,d^{\dag}_{0} + s^{\dag})/\sqrt{2} ~. \label{condo6} \end{eqnarray} The state $\vert c; \, N\rangle$ is obtained by substituting the O(6) deformation, $\beta=1$, as well as $\gamma=0$ in the coherent state of Eq.~(\ref{condgen}), $\vert c; \, N\rangle = \vert\beta=1,\gamma=0 ; N \rangle$. It has good O(6) character, $\langle\sigma\rangle=\langle N \rangle$, and serves as an intrinsic state for a prolate-deformed ground band. Rotational members of the band with $\sigma=N$ and even values of $L$, are obtained by angular momentum projection, $\vert [N]\langle N\rangle LM\rangle \propto \hat{\cal P}_{LM}\vert c; \, N\rangle$. The projection operator, $\hat{\cal P}_{LM}$, involves an O(3) rotation which commutes with $P_{0}$ and transforms $\tilde{P}_{2\mu}$ among its various components. Consequently, $P_{0}$ and $P_{2\mu}$ annihilate also the projected states \begin{subequations} \begin{eqnarray} P_{0}\,\vert [N]\langle N\rangle LM\rangle &=& 0 ~, \label{P0Lo6}\\ P_{2\mu}\,\vert [N]\langle N\rangle LM\rangle &=& 0 ~, \qquad L=0,2,4,\ldots, 2N ~. 
\label{P2Lo6} \end{eqnarray} \label{P0P2o6} \end{subequations} It should be noted that $P^{\dag}_{2\mu}$ and $P_{2\mu}$, Eq.~(\ref{Pdag2}), span only part of the $\sigma=2$ irrep. Consequently, the projected states $\vert [N]\langle N\rangle LM\rangle$ of Eq.~(\ref{P0P2o6}) span only part of the O(6) irrep, $\langle \sigma\rangle =\langle N\rangle$. The corresponding wave functions contain a mixture of components with different O(5) symmetry $\tau$, and their expansion in the O(6) basis $\vert [N]\langle\sigma\rangle (\tau)n_{\Delta} LM\rangle$ reads \begin{subequations} \begin{eqnarray} \vert [N]\langle N\rangle LM\rangle &=& {\cal N}_{N}^{(L)} \sum_{\tau,n_{\Delta}}\,a_{\tau,n_{\Delta}}^{(N,L)}\, \vert [N]\langle N\rangle (\tau) n_{\Delta} L M\rangle ~,\\ a_{\tau, n_{\Delta}}^{(N,L)} &=& \left [(N-\tau)!(N+\tau+3)!\right ]^{-1/2}\,f_{\tau, n_{\Delta}}^{(L)} ~. \end{eqnarray} \label{solo6pds} \end{subequations} Here ${\cal N}_{N}^{(L)}$ is a normalization coefficient and explicit expressions of the factors $f_{\tau, n_{\Delta}}^{(L)}$ for $L=0,2,4$ are given in Table~\ref{Tabftau}. \noindent Following the general algorithm, a two-body Hamiltonian with O(6) partial symmetry can now be constructed~\cite{levisa02} from the boson-pair operators of Eq.~(\ref{PLo6}) as \begin{equation} \hat{H}_2 = h_{0}\, P^{\dagger}_{0}P_{0} + h_{2}\, P^{\dagger}_{2} \cdot\tilde P_{2} ~. \label{h2} \end{equation} The $h_0$ term is the O(6)-scalar interaction of Eq.~(\ref{HPSo6}). The multipole form of the $h_2$ term involves the Casimir operators of O(5) and O(3), which are diagonal in $\sigma$ and $\tau$; terms involving $\hat{n}_d$, which is a scalar under O(5) but can connect states differing by $\Delta\sigma=0,\pm 2$; and a $\Pi^{(2)}\cdot U^{(2)}$ term, which induces both O(6) and O(5) mixing subject to $\Delta\sigma=0,\pm 2$ and $\Delta\tau=\pm 1,\pm 3$.
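For the $L=0$ member of the projected ground band, the O(5) content follows directly from Eq.~(\ref{solo6pds}) with $f^{(0)}_{\tau}=(-1)^{\tau}\sqrt{2\tau+3}$ and $\tau=0,3,6,\ldots$ (Table~\ref{Tabftau}). A minimal sketch in plain Python:

```python
import math

def o5_probabilities_L0(N):
    # |a_tau|^2 proportional to (2 tau + 3) / [(N - tau)! (N + tau + 3)!],
    # with tau = 0, 3, 6, ... <= N, per Eq. (solo6pds) and Table (Tabftau).
    w = {tau: (2*tau + 3) / (math.factorial(N - tau)
                             * math.factorial(N + tau + 3))
         for tau in range(0, N + 1, 3)}
    norm = sum(w.values())
    return {tau: v / norm for tau, v in w.items()}

probs = o5_probabilities_L0(15)          # boson number N = 15, as in the figures
assert abs(sum(probs.values()) - 1.0) < 1e-12
assert len(probs) > 1                    # good sigma = N, but genuinely tau-mixed
```

The state carries a single O(6) label $\sigma=N$ yet spreads over several $\tau$ values, which is exactly the mixing displayed in the right panel of Fig.~\ref{figo6decomp}.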
Although $\hat{H}_2$ is not an O(6)-scalar, relations~(\ref{PLcondo6}) and~(\ref{P0P2o6}) ensure that it has an exactly solvable ground band with good O(6) symmetry, $\langle\sigma\rangle =\langle N\rangle$, but broken O(5) symmetry. The Casimir operator of O(3) can be added to $\hat{H}_2$ to obtain \begin{eqnarray} \hat{H}_{PDS} = \hat{H}_2 + C\, \hat{C}_{{\rm O(3)}} ~. \label{h2PDS} \end{eqnarray} The solvable states of $\hat{H}_{PDS}$ form an axially-deformed ground band \begin{subequations} \begin{eqnarray} &&\vert [N]\langle N\rangle LM\rangle\qquad L=0,2,4,\ldots, 2N\\ && E_{PDS} = C\,L(L+1) ~. \end{eqnarray} \label{ePDSo6III} \end{subequations} Thus, $\hat{H}_{PDS}$~(\ref{h2PDS}) has a subset of solvable states with good O(6) symmetry, which is not preserved by other states. All eigenstates of $\hat{H}_{PDS}$ break the O(5) symmetry but preserve the O(3) symmetry. These are precisely the required features of O(6)-PDS of type III. \begin{table}[t] \begin{center} \caption{\label{Tabbe2Dy162} \protect\small Calculated and observed (Exp) B(E2) values (in $10^{-2}\,e^2b^2$) for $g\to g$ and $\gamma\to g$ transitions in $^{162}$Dy. The parameters of the E2 operator, Eq.~(\ref{Te2}), are $e_B=0.138$ $[0.127]$ $eb$ and $\chi=-0.235$ $[-0.557]$ for $\hat{H}_1$~(\ref{h1}) [$\hat{H}_2$~(\ref{h2})]. The Hamiltonian $\hat{H}_1$ ($\hat{H}_2$) has O(6)-PDS of type II (type III).
Adapted from ~\protect\cite{levisa02}.} \vspace{1mm} \begin{tabular}{llll|llll} \hline & & & & & & &\\[-3mm] Transition & $H_{1}$ & $H_{2}$ & Exp & Transition & $H_{1}$ & $H_{2}$ & Exp \\ \hline & & & & & & &\\[-2mm] $2^{+}_{K=0_1}\rightarrow 0^{+}_{K=0_1}$ & 107 & 107 & 107(2) & $2^{+}_{K=2_1}\rightarrow 0^{+}_{K=0_1}$ & 2.4 & 2.4 & 2.4(1) \\[2pt] $4^{+}_{K=0_1}\rightarrow 2^{+}_{K=0_1}$ & 151 & 152 & 151(6) & $2^{+}_{K=2_1}\rightarrow 2^{+}_{K=0_1}$ & 3.8 & 4.0 & 4.2(2) \\[2pt] $6^{+}_{K=0_1}\rightarrow 4^{+}_{K=0_1}$ & 163 & 165 & 157(9) & $2^{+}_{K=2_1}\rightarrow 4^{+}_{K=0_1}$ & 0.24 & 0.26 & 0.30(2) \\[2pt] $8^{+}_{K=0_1}\rightarrow 6^{+}_{K=0_1}$ & 166 & 168 & 182(9) & $3^{+}_{K=2_1}\rightarrow 2^{+}_{K=0_1}$ & 4.2 & 4.3 & \\[2pt] $10^{+}_{K=0_1}\rightarrow 8^{+}_{K=0_1}$ & 164 & 167 & 183(12) & $3^{+}_{K=2_1}\rightarrow 4^{+}_{K=0_1}$ & 2.2 & 2.3 & \\[2pt] $12^{+}_{K=0_1}\rightarrow 10^{+}_{K=0_1}$& 159 & 163 & 168(21) & $4^{+}_{K=2_1}\rightarrow 2^{+}_{K=0_1}$ & 1.21 & 1.14 & 0.91(5) \\[2pt] & & & & $4^{+}_{K=2_1}\rightarrow 4^{+}_{K=0_1}$ & 4.5 & 4.7 & 4.4(3) \\[2pt] & & & & $4^{+}_{K=2_1}\rightarrow 6^{+}_{K=0_1}$ & 0.59 & 0.61 & 0.63(4) \\[2pt] & & & & $5^{+}_{K=2_1}\rightarrow 4^{+}_{K=0_1}$ & 3.4 & 3.3 & 3.3(2) \\[2pt] & & & & $5^{+}_{K=2_1}\rightarrow 6^{+}_{K=0_1}$ & 2.9 & 3.1 & 4.0(2) \\[2pt] & & & & $6^{+}_{K=2_1}\rightarrow 4^{+}_{K=0_1}$ & 0.84 & 0.72 & 0.63(4) \\[2pt] & & & & $6^{+}_{K=2_1}\rightarrow 6^{+}_{K=0_1}$ & 4.5 & 4.7 & 5.0(4) \\[4pt] \hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{\label{Tab2be2Dy162} \protect\small Calculated and observed (Exp)~\cite{apra06} B(E2) values (in $e^2b^2$) for transitions from the $K=0_2$ band in $^{162}$Dy. The calculations involve the Hamiltonian $\hat{H}_2$~(\ref{h2}) with O(6)-PDS of type III and the CQF Hamiltonian with broken O(6) symmetry. 
Adapted from~\cite{apra06}.} \vspace{1mm} \begin{tabular}{llll|llll} \hline & & & & & & &\\[-3mm] Transition & $H_{2}$ & CQF & Exp & $\;$Transition & $H_{2}$ & CQF & Exp \\ & & & & & & &\\[-3mm] \hline & & & & & & &\\[-2mm] $0^{+}_{K=0_2}\rightarrow 2^{+}_{K=0_1}$ & 0.0023 & 0.0011 & & $4^{+}_{K=0_2}\rightarrow 2^{+}_{K=0_1}$ & 0.0005 & 0.0002 & \\[2pt] $0^{+}_{K=0_2}\rightarrow 2^{+}_{K=2_1}$ & 0.1723 & 0.151 & & $4^{+}_{K=0_2}\rightarrow 4^{+}_{K=0_1}$ & 0.0004 & 0.0001 & \\[2pt] $2^{+}_{K=0_2}\rightarrow 0^{+}_{K=0_1}$ & 0.0004 & 0.0002 & & $4^{+}_{K=0_2}\rightarrow 6^{+}_{K=0_1}$ & 0.0015 & 0.0006 & 0.0034(7) \\[2pt] $2^{+}_{K=0_2}\rightarrow 2^{+}_{K=0_1}$ & 0.0005 & 0.0002 & & $4^{+}_{K=0_2}\rightarrow 2^{+}_{K=2_1}$ & 0.0005 & 0.0001 & 0.0015(5) \\[2pt] $2^{+}_{K=0_2}\rightarrow 4^{+}_{K=0_1}$ & 0.0014 & 0.0006 & 0.013(2) & $4^{+}_{K=0_2}\rightarrow 3^{+}_{K=2_1}$ & 0.0085 & 0.0030 & 0.0011(3) \\[2pt] $2^{+}_{K=0_2}\rightarrow 2^{+}_{K=2_1}$ & 0.0369 & 0.0242 & 0.016(3) & $4^{+}_{K=0_2}\rightarrow 4^{+}_{K=2_1}$ & 0.0446 & 0.0283 & 0.011(2) \\[2pt] $2^{+}_{K=0_2}\rightarrow 3^{+}_{K=2_1}$ & 0.0849 $\;\;\;$ & 0.0716 & 0.052(5) & $4^{+}_{K=0_2}\rightarrow 5^{+}_{K=2_1}$ & 0.0737 & 0.0631 & 0.018(4) \\[2pt] $2^{+}_{K=0_2}\rightarrow 4^{+}_{K=2_1}$ & 0.0481 & 0.0474 & $\equiv$0.048 & $4^{+}_{K=0_2}\rightarrow 6^{+}_{K=2_1}$ & 0.0373 & 0.0361 & \\[4pt] \hline \end{tabular} \end{center} \end{table} The calculated spectra of $\hat{H}_2$~(\ref{h2}) and $\hat{H}_1$~(\ref{h1}), supplemented with an O(3) term, are compared with the experimental spectrum of $^{162}$Dy in Fig.~\ref{figo6energy}. The spectra display rotational bands of an axially-deformed nucleus, in particular, a ground band $(K=0_1)$ and excited $K=2_1$ and $K=0_2$ bands. The O(6) and O(5) decompositions of selected bands are shown in Fig.~\ref{figo6decomp}. For $\hat{H}_1$, characteristic features of the results were discussed in Subsection~\ref{subsec:o6PDStypeII}.
For $\hat{H}_2$, the solvable $K=0_1$ ground band has $\sigma=N$ and all eigenstates are mixed with respect to O(5). However, in contrast to $\hat{H}_1$, excited bands of $\hat{H}_2$ can have components with different O(6) character. For example, the $K=0_2$ band of $\hat{H}_2$ has components with $\sigma=N$ $(85.50\%)$, $\sigma=N-2$ $(14.45\%)$, and $\sigma=N-4$ $(0.05\%)$. These $\sigma$-admixtures can, in turn, be interpreted in terms of multi-phonon excitations. Specifically, the $K=0_2$ band is composed of $36.29\%$ $\beta$, $63.68\%$ $\gamma^2_{K=0}$, and $0.03\%$ $\beta^2$ modes, {\it i.e.}, it is dominantly a double-gamma phonon excitation with significant single-$\beta$ phonon admixture. The $K=2_1$ band has only a small O(6) impurity and is an almost pure single-gamma phonon band. The results of Fig.~\ref{figo6decomp} illustrate that $\hat{H}_2$~(\ref{h2}) possesses O(6)-PDS of type III which is distinct from the O(6)-PDS of type II exhibited by $\hat{H}_1$~(\ref{h1}). In Table~\ref{Tabbe2Dy162} the experimental B(E2) values for E2 transitions in $^{162}$Dy, are compared with PDS calculations. The B(E2) values predicted by $\hat{H}_1$~(\ref{h1}) and $\hat{H}_2$~(\ref{h2}) for $K=0_1\rightarrow K=0_1$ and $K=2_1\rightarrow K=0_1$ transitions are very similar and agree well with the measured values. On the other hand, their predictions for interband transitions from the $K=0_2$ band are very different~\cite{levisa02}. For $\hat{H}_1$, the $K=0_2\rightarrow K=0_1$ and $K=0_2\rightarrow K=2_1$ transitions are comparable and weaker than $K=2_1\rightarrow K=0_1$. In contrast, for $\hat{H}_2$, $K=0_2\rightarrow K=2_1$ and $K=2_1\rightarrow K=0_1$ transitions are comparable and stronger than $K=0_2\rightarrow K=0_1$. 
The results of a recent detailed measurement~\cite{apra06} of $^{162}$Dy, shown in Table~\ref{Tab2be2Dy162}, indicate that characteristic features of the $K=0_2$ band in this nucleus are reproduced by both $\hat{H}_2$ with O(6)-PDS of type III and the CQF Hamiltonian with broken O(6) symmetry, but refinements are necessary. \section{Partial Solvability} \label{sec:PartialSolv} The PDS of type I and III, discussed so far, involve subsets of solvable states with good symmetry character, with respect to algebras in a given dynamical symmetry chain. A~further extension of this concept is possible, for which the selected solvable states are not associated with any underlying symmetry. Such a situation can be referred to as partial solvability. In the PDS examples considered within the IBM framework, the solvable states were obtained by choosing specific deformations and projecting from an intrinsic state, Eq.~(\ref{condgen}), representing the ground band \begin{eqnarray} \vert\,\beta,\gamma ; N \rangle &\propto& \left [\,\beta\cos\gamma\, d^{\dagger}_{0} + \beta\sin{\gamma}\, ( d^{\dagger}_{2} + d^{\dagger}_{-2})/\sqrt{2} + s^{\dagger}\, \right ]^N\vert 0\rangle ~. \label{condbet} \end{eqnarray} Specifically, for SU(3)-PDS of type I, the solvable ground band was associated with deformations $(\beta=\sqrt{2},\gamma=0)$, while for O(6)-PDS of type III, it was associated with $(\beta=1,\gamma=0)$. More generally, a natural candidate for a solvable ground band would be the set of states of good O(3) symmetry $L$, projected from the prolate-deformed intrinsic state, $\vert\,\beta,\gamma=0 ; N \rangle$, with arbitrary deformation $\beta>0$~\cite{lev06} \begin{eqnarray} \vert\, \beta; N, L M\rangle &\propto& \left [\Gamma_{N}^{(L)}(\beta)\right ]^{-1/2} \hat{\cal{P}}_{LM}\vert \beta,\gamma=0; N\rangle \qquad L=0,2,4,\ldots, 2N \nonumber\\ \Gamma_{N}^{(L)}(\beta) &=& \frac{1}{N!}\int_{0}^{1}dx \left [ 1 + \beta^2\,P_{2}(x)\right ]^N P_{L}(x) ~.
\qquad \label{wfqpt1} \end{eqnarray} Here $P_{L}(x)$ is a Legendre polynomial with $L$ even and $\Gamma_{N}^{(L)}(\beta)$ is a normalization factor. In general, these $L$-projected states do not have good symmetry properties with respect to any of the IBM dynamical symmetry chains~(\ref{IBMds}). Their wave functions have the following expansion in the U(5) basis \begin{eqnarray} \vert\, \beta; N, L M\rangle &=& \sum_{n_d,\tau,n_{\Delta}}\frac{1}{2} \left [1 + (-1)^{n_d-\tau}\right ]\, \xi_{n_d,\tau,n_{\Delta}}^{(N,L)} \vert\, [N]\langle n_d\rangle (\tau) n_{\Delta} L M\rangle ~, \label{projndt} \end{eqnarray} where $(\tau,n_{\Delta})$ take the values compatible with the ${\rm O(5)}\supset {\rm O(3)}$ reduction and the $n_d$ summation covers the range $\tau\leq n_d\leq N$. The coefficients $\xi_{n_d,\tau,n_{\Delta}}^{(N,L)}$ are of the form~\cite{lev05} \begin{eqnarray} \xi_{n_d,\tau,n_{\Delta}}^{(N,L)} &=& \left [\Gamma_{N}^{(L)}(\beta)\right ]^{-1/2} \frac{\beta^{n_d}} {\left [ (N-n_d)!(n_d-\tau)!!(n_d +\tau +3)!!\right ]^{1/2}}\, f_{\tau,n_{\Delta}}^{(L)} ~. \label{xindt} \end{eqnarray} Explicit expressions~\cite{lev05} for some of the factors $f_{\tau,n_{\Delta}}^{(L)}$ are given in Table~\ref{Tabftau}. \begin{table}[t] \begin{center} \caption{\label{Tabftau} \protect\small The factors $f_{\tau,n_{\Delta}}^{(L)}$, Eq.~(\ref{xindt}), for the states $\vert\beta; N,LM\rangle$, Eq.~(\ref{projndt}), with $L=0,2,4$. 
The label $n_{\Delta}$ is not required, since these $L$-states are multiplicity-free.} \vspace{1mm} \begin{tabular}{llll} \hline & & &\\[-3mm] & $f_{\tau=0,3,6,\ldots}^{(L)}$ & $f_{\tau=1,4,7,\ldots}^{(L)}$ & $f_{\tau=2,5,8,\ldots}^{(L)}$\\[4pt] \hline & & &\\[-2mm] $L=0$ & $(-)^{\tau}\sqrt{2\tau+3}$ & & \\[4pt] $L=2$ & & $(-)^{\tau+1}\sqrt{\tau+2}$ & $(-)^{\tau+1}\sqrt{\tau+1}$ \\[6pt] $L=4\quad$ & $(-1)^{\tau}\sqrt{\frac{7(2\tau+3)\tau(\tau+3)}{3(2\tau+5)(2\tau+1)}}\quad$ & $(-1)^{\tau}\sqrt{\frac{5(\tau+2)(\tau-1)}{6(2\tau+5)}}\quad$ & $(-1)^{\tau}\sqrt{\frac{5(\tau+1)(\tau+4)}{6(2\tau+1)}}$\\[6pt] & & &\\[-3mm] \hline \end{tabular} \end{center} \end{table} The construction of partially-solvable Hamiltonians which have the set of states~(\ref{wfqpt1}) as eigenstates can be accomplished~\cite{lev85} by means of the following boson-pair operators with angular momentum $L=0,\,2$ \begin{subequations} \begin{eqnarray} P^{\dagger}_{0}(\beta_0) &=& d^{\dagger}\cdot d^{\dagger} - \beta_{0}^2(s^{\dagger})^2 ~, \label{P0b0}\\ P^{\dagger}_{2\mu}(\beta_0) &=& \beta_{0}\sqrt{2}d^{\dagger}_{\mu}s^{\dagger} + \sqrt{7}\, (d^{\dagger}\,d^{\dagger})^{(2)}_{\mu} ~. \label{P2b0} \end{eqnarray} \label{PLb0} \end{subequations} These operators satisfy \begin{subequations} \begin{eqnarray} P_{0}(\beta_0)\,\vert\,\beta_0,\gamma=0 ; N \rangle &=& 0 ~,\\ P_{2\mu}(\beta_0)\,\vert\,\beta_0,\gamma=0 ; N \rangle &=& 0 ~, \label{PLcondb0} \end{eqnarray} \end{subequations} or equivalently, \begin{eqnarray} P_{0}(\beta_0)\,\vert\, \beta_0; N, L M\rangle &=& 0 ~, \nonumber\\ P_{2\mu}(\beta_0)\,\vert\, \beta_0 ; N, L M\rangle &=& 0 ~.
\label{P0P2b0} \end{eqnarray} The following Hamiltonian~\cite{lev85,kirlev85,lev87} \begin{eqnarray} \hat{H}(h_0,h_2,\beta_0) &=& h_{0}\, P^{\dagger}_{0}(\beta_{0})P_{0}(\beta_0) + h_{2}\,P^{\dagger}_{2}(\beta_0)\cdot \tilde{P}_{2}(\beta_0) ~, \label{hPS} \end{eqnarray} has a solvable zero-energy prolate-deformed ground band, composed of the states in Eq.~(\ref{wfqpt1}). The Casimir operator of O(3) can be added to it to form a partially solvable (PSolv) Hamiltonian \begin{eqnarray} \hat{H}_{PSolv} &=& \hat{H}(h_0,h_2,\beta_0) + C\,\hat{C}_{{\rm O(3)}} ~. \label{hPSolv} \end{eqnarray} The solvable states and energies are \begin{subequations} \begin{eqnarray} && \vert\, \beta_0; N, L M\rangle \;\;\;\; L=0,2,4,\ldots, 2N\\ && E_{PSolv}= CL(L+1) ~. \end{eqnarray} \label{ePSolv} \end{subequations} Since the wave functions of these states are known, it is possible to obtain closed form expressions for related observables. For example, for the E2 operator of Eq.~(\ref{Te2}), the B(E2) values for transitions between members of the solvable ground band read~\cite{lev06,lev07} \begin{eqnarray} &&B(E2; L+2\to L) = e_{B}^2\,(L+2,0;2,0\vert L,0)^2\,\beta^2\, \frac{ [\,a_1\,\Gamma_{N-1}^{(L)}(\beta) + a_2\, \Gamma_{N-1}^{(L+2)}(\beta)\,]^2} {\Gamma_{N}^{(L)}(\beta)\,\Gamma_{N}^{(L+2)}(\beta)} ~, \qquad\quad \nonumber\\ && a_1= 1-\beta\sqrt{\frac{2}{7}}\chi L/(2L+3) \;\;\; , \;\;\; a_2= 1-\beta\sqrt{\frac{2}{7}}\chi (L+3)/(2L+3) ~, \label{be2beta} \end{eqnarray} where the symbol $(...\vert ..)$ denotes an O(3) Clebsch-Gordan coefficient. The Hamiltonian $\hat{H}_{PSolv}$ of Eq.~(\ref{hPSolv}) is partially solvable for any value of $\beta_0>0$. For $\beta_0=\sqrt{2}$, it reduces to the Hamiltonian of Eq.~(\ref{hPDSsu3}) with SU(3)-PDS of type I. In this case, the solvable states span the SU(3) irrep $(2N,0)$ and the normalization factor in Eq.~(\ref{wfqpt1}) is given by \begin{eqnarray} \Gamma_{N}^{(L)}(\beta=\sqrt{2}) &=& \frac{3^{N}(2N)!}{(2N-L)!!(2N+L+1)!!N!} ~.
\end{eqnarray} Relation~(\ref{projndt}) then provides transformation brackets between these SU(3) states and the U(5) basis and Eq.~(\ref{be2beta}) reduces to a well-known expression for E2 transitions among states in the SU(3) ground band~\cite{arimaiac78,ISA}. When $\beta_0 =1$, the Hamiltonian $\hat{H}_{PSolv}$~(\ref{hPSolv}) coincides with the Hamiltonian of Eq.~(\ref{h2PDS}) with O(6)-PDS of type III. When $h_2=0$, the Hamiltonian of Eq.~(\ref{hPS}) takes the form \begin{eqnarray} \hat{H}(h_0,\beta_0) &=& h_{0}\, P^{\dagger}_{0}(\beta_{0})P_{0}(\beta_0) ~. \label{hPS0} \end{eqnarray} Both $P^{\dag}_{0}(\beta_0)$ and $P_0(\beta_0)$, Eq.~(\ref{P0b0}), are O(5)-scalars. Furthermore, $P_0(\beta_0)$ annihilates the intrinsic state, Eq.~(\ref{condbet}), with $\beta=\beta_0$ and arbitrary $\gamma$ \begin{eqnarray} P_{0}(\beta_0)\,\vert\,\beta_0,\gamma ; N \rangle &=& 0 ~. \label{P0condb0} \end{eqnarray} Equivalently, \begin{eqnarray} P_{0}(\beta_0)\,\vert\beta_0 ; N,\tau, n_{\Delta},LM\rangle &=& 0 ~, \label{P0b0L} \end{eqnarray} where the indicated states, with good $\tau$ and $L$ quantum numbers, are obtained by O(5) projection from the deformed intrinsic state $\vert\,\beta_0,\gamma ; N \rangle$~(\ref{condbet}) \begin{eqnarray} \vert\beta; N,\tau, n_{\Delta},LM\rangle &\propto& \left [F_{N}^{(\tau)}(\beta)\right ]^{-1/2} \hat{\cal{P}}_{\tau,n_{\Delta},LM}\vert \beta,\gamma; N\rangle \nonumber\\ F_{N}^{(\tau)}(\beta) &=& \sum_{n_d}\frac{1}{2}\left [1 + (-1)^{n_d-\tau}\right ] \frac{\beta^{2n_d}}{(N-n_d)!\,(n_d-\tau)!!\,(n_d+\tau+3)!!} ~. \qquad \label{wfqpt2} \end{eqnarray} Here $F_{N}^{(\tau)}(\beta)$ is a normalization factor and the $n_d$ summation covers the range $\tau\leq n_d \leq N$.
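Returning to the normalization factor $\Gamma_N^{(L)}(\beta)$ of Eq.~(\ref{wfqpt1}): its integrand is a polynomial in $x$, so the integral can be evaluated exactly term by term and checked against the $\beta=\sqrt{2}$ closed form quoted above. A minimal sketch in plain Python (stdlib only):

```python
import math

def poly_mul(p, q):
    # product of two polynomials given as coefficient lists (ascending powers)
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def legendre(L):
    # coefficient list of P_L(x) via Bonnet's recurrence
    p0, p1 = [1.0], [0.0, 1.0]
    if L == 0:
        return p0
    for n in range(1, L):
        nxt = [0.0] + [(2*n + 1) / (n + 1) * c for c in p1]
        for i, c in enumerate(p0):
            nxt[i] -= n / (n + 1) * c
        p0, p1 = p1, nxt
    return p1

def gamma_NL(N, L, beta):
    # Gamma_N^(L)(beta) = (1/N!) * int_0^1 [1 + beta^2 P_2(x)]^N P_L(x) dx,
    # integrated exactly using int_0^1 x^k dx = 1/(k+1)
    base = [1.0 - beta**2 / 2, 0.0, 1.5 * beta**2]   # 1 + beta^2 P_2(x)
    p = [1.0]
    for _ in range(N):
        p = poly_mul(p, base)
    p = poly_mul(p, legendre(L))
    return sum(c / (k + 1) for k, c in enumerate(p)) / math.factorial(N)

def dfact(n):
    # double factorial, with 0!! = 1
    r = 1
    while n > 1:
        r, n = r * n, n - 2
    return r

def gamma_su3(N, L):
    # closed form at beta = sqrt(2)
    return 3**N * math.factorial(2*N) / (dfact(2*N - L) * dfact(2*N + L + 1)
                                         * math.factorial(N))

for N, L in [(2, 0), (2, 2), (6, 4)]:
    assert abs(gamma_NL(N, L, math.sqrt(2)) - gamma_su3(N, L)) < 1e-9 * gamma_su3(N, L)
```

The same routine evaluates $\Gamma_N^{(L)}(\beta)$ for arbitrary $\beta>0$, which is all that is needed to evaluate the B(E2) expression above numerically.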
The corresponding wave functions have the following expansion in the U(5) basis~\cite{levgin03} \begin{subequations} \begin{eqnarray} \vert\, \beta; N, \tau, n_{\Delta},L M\rangle &=& \sum_{n_d}\frac{1}{2} \left [1 + (-1)^{n_d-\tau}\right ]\, \theta_{n_d}^{(N,\tau)} \vert\, [N]\langle n_d\rangle(\tau)n_{\Delta} L M\rangle ~,\\ \theta_{n_d}^{(N,\tau)} &=& \left [ F_{N}^{(\tau)}(\beta)\right ]^{-1/2} \frac{\beta^{n_d}}{[(N-n_d)!(n_d -\tau)!!(n_d+\tau+3)!!]^{1/2}}~. \qquad \end{eqnarray} \label{projnd} \end{subequations} The Hamiltonian~(\ref{hPS0}) mixes the U(5) and O(6) chains but preserves the common O(5) subalgebra. This is explicitly seen from its multipole form \begin{eqnarray} \hat{H}(h_0,\beta_0) &=& h_{0}\beta_{0}^2\left [\, -\hat{C}_{{\rm O(6)}} + 5\hat{N} +\beta_{0}^2\hat{N}(\hat{N}-1) \,\right ] \nonumber\\ && +h_{0}(1-\beta_{0}^2)\left [ 4\hat{n}_d + 2\beta_{0}^2(\hat{N}-1)\hat{n}_d + (1-\beta_{0}^2)\hat{n}_{d}(\hat{n}_d-1) -\hat{C}_{{\rm O(5)}} \right ] ~. \qquad\; \end{eqnarray} It has a solvable zero-energy $\gamma$-unstable deformed ground band, composed of the states in Eq.~(\ref{wfqpt2}). The Casimir operators of O(5) and O(3) can be added to it to form a partially solvable (PSolv) Hamiltonian \begin{eqnarray} \hat{H}_{PSolv} &=& \hat{H}(h_0,\beta_0) + B\, \hat{C}_{{\rm O(5)}} + C\,\hat{C}_{{\rm O(3)}} ~. \label{hPSolvt} \end{eqnarray} $\hat{H}_{PSolv}$~(\ref{hPSolvt}) has also an O(5)-PDS of type II in the sense discussed in Subsection~\ref{subsec:o5PDStypeII}. The solvable states and energies are \begin{subequations} \begin{eqnarray} && \vert\, \beta_0; N, \tau, n_{\Delta}, L M\rangle ~,\\ && E_{PSolv}= B\, \tau(\tau+3) + C\, L(L+1) ~, \end{eqnarray} \label{ePSolvt} \end{subequations} where the $(\tau,n_{\Delta},L)$ assignments are the same as for states in the O(6) irrep with $\sigma=N$. Closed form expressions can be derived for observables in these states. 
For example, for the general E2 operator of Eq.~(\ref{Te2}), we find~\cite{levgin03} \begin{eqnarray} &&B(E2; \tau+1,n_{\Delta}',L'\to \tau,n_{\Delta},L) \nonumber\\ &&\qquad\qquad = e_{B}^2 \frac{\tau+1}{2\tau+5} \beta^2\, \frac{[\,F^{(\tau)}_{N-1}(\beta) + F^{(\tau+1)}_{N-1}(\beta)\,]^2} {F^{(\tau)}_{N}(\beta)F^{(\tau+1)}_{N}(\beta)} \left \langle \begin{array}{cc|c} \tau+1 & 1 & \tau\\ n_{\Delta}'L' & 2 & n_{\Delta}L \end{array} \right \rangle^2 ~. \label{be2beta0} \end{eqnarray} This expression is similar in form to that encountered in Eq.~(\ref{be2o5a}), but now the factor in front of the O(5) isoscalar factor is explicitly known. The Hamiltonian of Eq.~(\ref{hPSolvt}) is partially solvable for any value of $\beta_0>0$. For $\beta_0=1$, it reduces to the Hamiltonian of Eq.~(\ref{hDSo6}) with O(6) dynamical symmetry. The solvable states~(\ref{ePSolvt}) then span the O(6) irrep $\langle\sigma\rangle = \langle N\rangle$ and the normalization factor~(\ref{wfqpt2}) becomes \begin{eqnarray} F_{N}^{(\tau)}(\beta=1) &=& \frac{2^{N+1}(N+1)!}{(N-\tau)!(N+\tau+3)!} ~. \label{Fo6} \end{eqnarray} In this case, relation~(\ref{projnd}) corresponds to known transformation brackets between these O(6) states and the U(5) basis, and one recovers from Eq.~(\ref{be2beta0}) a familiar expression for the indicated $B(E2)$ in the O(6) limit of the IBM~\cite{arimaiac79}. In addition to the states shown in Eq.~(\ref{P0b0L}), the operator $P_{0}(\beta_0)$ (\ref{P0b0}) annihilates also the following U(5) basis states \begin{subequations} \begin{eqnarray} P_{0}\,\vert [N],n_d=\tau=N,n_{\Delta},L M\,\rangle &=& 0 ~,\\ P_{0}\,\vert [N],n_d=\tau=N-1,n_{\Delta},L M\,\rangle &=& 0 ~, \label{P0ndN} \end{eqnarray} \end{subequations} for reasons explained after Eq.~(\ref{V0ndN}). Consequently, $\hat{H}_{PSolv}$ of Eq.~(\ref{hPSolvt}) has also a U(5)-PDS of type I, in the sense discussed in Subsection~\ref{subsec:u5PDStypeI}. 
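The finite sum defining $F_N^{(\tau)}(\beta)$ in Eq.~(\ref{wfqpt2}) is easily evaluated and compared with the $\beta=1$ closed form of Eq.~(\ref{Fo6}). A minimal sketch in plain Python:

```python
import math

def dfact(n):
    # double factorial, with 0!! = (-1)!! = 1
    r = 1
    while n > 1:
        r, n = r * n, n - 2
    return r

def F_Ntau(N, tau, beta):
    # Eq. (wfqpt2): n_d = tau, tau + 2, ..., up to N (the projector
    # [1 + (-1)^(n_d - tau)]/2 selects the even-step terms)
    return sum(beta**(2*nd) / (math.factorial(N - nd) * dfact(nd - tau)
                               * dfact(nd + tau + 3))
               for nd in range(tau, N + 1, 2))

def F_o6(N, tau):
    # closed form at beta = 1, Eq. (Fo6)
    return 2**(N + 1) * math.factorial(N + 1) / (math.factorial(N - tau)
                                                 * math.factorial(N + tau + 3))

for N in (2, 5, 10):
    for tau in range(N + 1):
        assert abs(F_Ntau(N, tau, 1.0) - F_o6(N, tau)) < 1e-9 * F_o6(N, tau)
```

For general $\beta$, the same sum supplies the factors entering the B(E2) expression of Eq.~(\ref{be2beta0}).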
The additional solvable eigenstates and energies are \begin{subequations} \begin{eqnarray} &&\vert [N], n_d=\tau=N,n_{\Delta},L M\,\rangle \;\;\qquad\quad E_{PSolv} = B\,N(N+3) + C\,L(L+1) ~,\qquad\qquad\\ &&\vert [N], n_d=\tau=N-1,n_{\Delta},L M\,\rangle \;\;\;\quad E_{PSolv} = B\,(N-1)(N+2) + C\,L(L+1) ~, \qquad\qquad \end{eqnarray} \label{ePSolvndt} \end{subequations} where the $(\tau,n_{\Delta},L)$ assignments are those of the ${\rm O(5)}\supset{\rm O(3)}$ reduction. The Hamiltonian $\hat{H}(h_0,h_2,\beta_0)$ of Eq.~(\ref{hPS}) is a prototype of an intrinsic Hamiltonian which generates band structure~\cite{lev85,kirlev85,lev87}. Its energy surface, defined as in Eq.~(\ref{enesurf}), \begin{eqnarray} E_{N}(\beta,\gamma) &=& N(N-1)(1+\beta^2)^{-2} \left [ h_{0}(\beta^2-\beta_{0}^2)^2 + 2h_{2}\beta^2(\beta^2 - 2\beta_{0}\beta\cos 3\gamma + \beta_{0}^2)\right ] \qquad\; \label{esurfint} \end{eqnarray} has a global minimum at $(\beta_0>0,\gamma_0=0)$, corresponding to a prolate-deformed shape. $\hat{H}(h_0,h_2,\beta_0)$ is O(3)-invariant, but has the deformed equilibrium intrinsic state, $\vert\beta_0,\gamma=0; N\rangle$~(\ref{condbet}), as a zero-energy eigenstate. The O(3) symmetry is thus spontaneously broken. The two Goldstone modes are associated with rotations about directions perpendicular to the symmetry axis. The intrinsic modes involve the one-dimensional $\beta$ mode and two-dimensional $\gamma$ modes of vibrations. For large $N$, the spectrum of $\hat{H}(h_0,h_2,\beta_0)$~(\ref{hPS}) is harmonic, involving $\beta$ and $\gamma$ vibrations about the deformed minimum with frequencies given by~\cite{lev85,lev87} \begin{eqnarray} \epsilon_{\beta} &=& 2N \beta_{0}^2(2h_{0}+h_{2}) \;\; , \;\; \epsilon_{\gamma}= 18N \beta_{0}^2(1+\beta_{0}^2)^{-1}h_{2} ~.
\label{emodes} \end{eqnarray} The importance of $\hat{H}(h_0,h_2,\beta_0)$ lies in the fact that the most general one- and two-body IBM Hamiltonian with equilibrium deformations $(\beta_0>0,\gamma_0=0)$, can be resolved into intrinsic and collective parts~\cite{kirlev85,lev87} \begin{eqnarray} \hat{H}_{IBM} &=& \hat{H}(h_0,h_2,\beta_0) + \hat{H}_c ~. \label{resol} \end{eqnarray} The intrinsic part is the partially-solvable Hamiltonian of Eq.~(\ref{hPS}). The collective part, $\hat{H}_c$, involves kinetic rotational terms which do not affect the shape of the energy surface \begin{eqnarray} \hat{H}_{c} &=& c_3 \left [\, \hat{C}_{{\rm O(3)}} - 6\hat{n}_d \,\right ] + c_5 \left [\, \hat{C}_{{\rm O(5)}} - 4\hat{n}_d \,\right ] +\, c_6 \left [\, \hat{C}_{\overline{{\rm O(6)}}} - 5\hat{N}\,\right ] + E_0~. \label{hcol} \end{eqnarray} The various Casimir operators in Eq.~(\ref{hcol}) are defined in the Appendix. The $L$-projected states, $\vert\, \beta; N, L M\rangle$, of Eq.~(\ref{wfqpt1}) can now be used to construct an $L$-projected energy surface, $E^{(N)}_{L}(\beta) = \langle\beta;N,LM\vert \hat{H}_{IBM} \vert\beta;N,LM\rangle$, for the IBM Hamiltonian~(\ref{resol}) \begin{eqnarray} E^{(N)}_{L}(\beta) &=& h_0\,(\beta^2-\beta^{2}_{0})^2\, S_{2,L}^{(N)} + 2h_2\,(\beta-\beta_0)^2\,\Sigma_{2,L}^{(N)} + c_3\left [ L(L+1) - 6D_{1,L}^{(N)}\right ] \nonumber\\ && +\, c_5\left [ D_{2,L}^{(N)} - \beta^4\,S_{2,L}^{(N)}\right ] +\, c_6\left [ N(N-1) -(1+\beta^2)^2\,S_{2,L}^{(N)}\right ] + E_0 ~. \qquad \label{eneL} \end{eqnarray} Here $D_{1,L}^{(N)}$, $S_{2,L}^{(N)}$, $D_{2,L}^{(N)}$ and $\Sigma_{2,L}^{(N)}$ denote the expectation values in the states $\vert \beta;N,L M\rangle$ of $\hat{n}_d$, $\hat{n}_s(\hat{n}_s-1)$, $\hat{n}_d(\hat{n}_d-1)$ and $\hat{n}_s\hat{n}_d$ respectively. All these quantities are expressed in terms of the expectation value of $\hat{n}_s$, denoted by $S^{(N)}_{1,L}$. 
Specifically, $D_{1,L}^{(N)}= N - S_{1,L}^{(N)}$, $S^{(N)}_{2,L} = S^{(N)}_{1,L}S^{(N-1)}_{1,L}$, $\Sigma^{(N)}_{2,L} = (N-1)S^{(N)}_{1,L} - S^{(N)}_{2,L}$, $D_{2,L}^{(N)} = N(N-1) - 2(N-1)S^{(N)}_{1,L} + S^{(N)}_{2,L}$. The quantity $S^{(N)}_{1,L}$ itself is determined by the normalization factors of Eq.~(\ref{wfqpt1}) \begin{eqnarray} S^{(N)}_{1,L} = \langle\beta;N,LM\vert\hat{n}_s\vert\beta;N,LM\rangle = \Gamma^{(L)}_{N-1}(\beta)/\Gamma^{(L)}_{N}(\beta) ~. \label{S1L} \end{eqnarray} It also satisfies the following recursion relation~\cite{lev06} \begin{eqnarray} S^{(N)}_{1,L} = \frac{(N-L/2)(2N+L+1)} {(\beta^2+4)(N-1)+3 + (\beta^2-2)(1+\beta^2)S^{(N-1)}_{1,L}} ~. \label{S1Lrecur} \end{eqnarray} For $h_2=0$, the intrinsic part of the IBM Hamiltonian in Eq.~(\ref{resol}) reduces to the Hamiltonian $\hat{H}(h_0,\beta_0)$ of Eq.~(\ref{hPS0}), which is O(5)-invariant. The unprojected energy surface~(\ref{esurfint}) is now independent of $\gamma$ and the equilibrium shape is deformed $(\beta_0>0)$ and $\gamma$-unstable. The O(5) symmetry is spontaneously broken in the intrinsic state, $\vert\beta_0,\gamma;N\rangle$~(\ref{condbet}), which is a zero-energy eigenstate of $\hat{H}(h_0,\beta_0)$. As a result, the $\gamma$ and three rotational modes are Goldstone modes, and only the $\beta$ vibration in Eq.~(\ref{emodes}) survives as a genuine mode~\cite{lev85,lev87}. 
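Numerically, the recursion relation~(\ref{S1Lrecur}), and its O(5) analog~(\ref{S1trecur}) given below, can be iterated upward in $N$ from the seed $S=0$ at the minimal boson number ($N_0=L/2$ and $N_0=\tau$, respectively, where the projected state contains no $s$ bosons). The following sketch is our own illustration; as consistency checks, at $\beta=0$ the $L=0$ projected state is the pure $s$-boson condensate, so $S^{(N)}_{1,L=0}=N$, and at $\beta=1$ the ratio $F^{(\tau)}_{N-1}/F^{(\tau)}_{N}$ obtained from the closed form~(\ref{Fo6}) must reproduce the recursion:

```python
from math import factorial

def s1_L(N, L, beta):
    """Iterate Eq. (S1Lrecur) upward from N0 = L/2, where S = 0 (no s bosons)."""
    S = 0.0
    for n in range(L // 2 + 1, N + 1):
        num = (n - L / 2) * (2 * n + L + 1)
        den = (beta**2 + 4) * (n - 1) + 3 + (beta**2 - 2) * (1 + beta**2) * S
        S = num / den
    return S

def s1_tau(N, tau, beta):
    """Iterate Eq. (S1trecur) upward from N0 = tau, where S = 0."""
    S = 0.0
    for n in range(tau + 1, N + 1):
        S = (n - tau) * (n + tau + 3) / (2 * (n + 1) + (beta**4 - 1) * S)
    return S

def F_o6(N, tau):
    """Normalization F_N^(tau)(beta = 1) of Eq. (Fo6)."""
    return 2**(N + 1) * factorial(N + 1) / (factorial(N - tau) * factorial(N + tau + 3))
```

Both checks hold to machine precision; note that at $\beta=1$ the $(\beta^4-1)$ term drops out and the recursion terminates in closed form, $(N-\tau)(N+\tau+3)/[2(N+1)]$.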
In this case, $\hat{H}_{IBM}(h_2=0)$ in Eq.~(\ref{resol}) preserves the O(5) symmetry, $\tau$, and the O(5)-projected states of Eq.~(\ref{wfqpt2}) can be used to construct its $\tau$-projected energy surface, $E^{(N)}_{\tau,L}(\beta) = \langle\beta;N,\tau,n_{\Delta},LM \vert \hat{H}_{IBM}(h_2=0) \vert\beta;N,\tau,n_{\Delta},LM\rangle$, \begin{eqnarray} E_{\tau,L}^{(N)}(\beta) &=& h_0\,(\beta^2-\beta^{2}_{0})^2\, S_{2,\tau}^{(N)} + c_3\left [ L(L+1) - 6D_{1,\tau}^{(N)}\right ] \nonumber\\ && +\, c_5\left [ \tau(\tau+3) -4D_{1,\tau}^{(N)}\right ] +\, c_6\left [ N(N-1) -(1+\beta^2)^2\,S_{2,\tau}^{(N)}\right ] + E_0 ~. \qquad \label{enetau} \end{eqnarray} Here $D_{1,\tau}^{(N)}$ and $S_{2,\tau}^{(N)}$ denote the expectation values in the states $\vert \beta;N,\tau,n_{\Delta},L M\rangle$ of $\hat{n}_d$ and $\hat{n}_s(\hat{n}_s-1)$ respectively. All these quantities are expressed in terms of the expectation value of $\hat{n}_s$, denoted by $S^{(N)}_{1,\tau}$. Specifically, $D_{1,\tau}^{(N)}= N - S_{1,\tau}^{(N)}$, $S^{(N)}_{2,\tau} = S^{(N)}_{1,\tau}S^{(N-1)}_{1,\tau}$. The quantity $S^{(N)}_{1,\tau}$ itself is determined by the normalization factors of Eq.~(\ref{wfqpt2}) \begin{eqnarray} S^{(N)}_{1,\tau} = \langle\beta;N,\tau,n_{\Delta},L M\vert\hat{n}_{s} \vert\beta;N,\tau,n_{\Delta},L M\rangle = F^{(\tau)}_{N-1}(\beta)/F^{(\tau)}_{N}(\beta) ~. \label{S1t} \end{eqnarray} It also satisfies the following recursion relation \begin{eqnarray} S^{(N)}_{1,\tau} = \frac{(N-\tau)(N+\tau+3)} {2(N+1) + (\beta^4-1)S^{(N-1)}_{1,\tau}} ~. \label{S1trecur} \end{eqnarray} \section{PDS and Quantum Phase Transitions} \label{sec:PDSQPT} Symmetry plays a profound role in quantum phase transitions (QPT). The latter occur at zero temperature as a function of a coupling constant in the Hamiltonian. 
Such ground-state energy phase transitions~\cite{gilmore79} are a pervasive phenomenon observed in many branches of physics, and are realized empirically in nuclei as transitions between different shapes. QPTs occur as a result of a competition between terms in the Hamiltonian with different symmetry character, which leads to considerable mixing in the eigenfunctions, especially at the critical point, where the structure changes most rapidly. An interesting question to address is whether there are any symmetries (or traces thereof) still present at the critical points of QPT. As shown below, unexpectedly, partial dynamical symmetries (PDS) can survive at the critical point in spite of the strong mixing~\cite{lev07}. The feasibility of such persisting symmetries gains support from the recently proposed~\cite{iac0001} and empirically confirmed~\cite{caszam0001} analytic descriptions of critical-point nuclei, and the emergence of quasi-dynamical symmetries~\cite{rowe0405} in the vicinity of such critical points. A convenient framework for studying symmetry aspects of QPT in nuclei is the IBM~\cite{ibm}, whose dynamical symmetries~(\ref{IBMds}) correspond to possible phases of the system. The starting point is the energy surface of the Hamiltonian, Eq.~(\ref{enesurf}), which for one- and two-body interactions has the form \begin{eqnarray} E_{N}(\beta,\gamma) &=& E_0 + N(N-1)f(\beta,\gamma) ~, \nonumber\\ f(\beta,\gamma) &=& (1+\beta^2)^{-2}\beta^2\left [ a - b\beta\cos 3\gamma + c\beta^2 \right ] ~. \label{eLan} \end{eqnarray} The coefficients $E_0,a,b,c$ involve particular linear combinations of the Hamiltonian's parameters~\cite{lev87}. \begin{figure}[t] \begin{center} \includegraphics[height=0.3\textheight,width=7cm,angle=270]{fig6Lcrisurf1.eps} \hspace{0.5cm} \includegraphics[height=0.3\textheight,width=7cm,angle=270]{fig6Rcrisurf2.eps} \caption{ \small Energy surfaces at the critical points, Eq.~(\ref{1st2nd}). (a)~First-order transition.
The position and height of the barrier are $\beta= \beta_{+} = (-1 + \sqrt{1+\beta_{0}^2}\;)/\beta_0\,$ and $h = f(\beta_{+},\gamma=0) = ( -1 + \sqrt{1+\beta_{0}^2}\,)^2/4 \,$ respectively. (b)~Second-order transition. In this case $f(\beta,\gamma)$ is independent of $\gamma$. Asymptotically, $f(\beta\to\infty,\gamma)=1$. \label{figcrisurf}} \end{center} \end{figure} Phase transitions can be studied by IBM Hamiltonians of the form $\hat{H}(\alpha) = (1-\alpha)\,\hat{H}_{1} + \alpha\, \hat{H}_{2}$, involving terms from different dynamical symmetry chains~\cite{diep80}. The nature of the phase transition is governed by the topology of the corresponding surface~(\ref{eLan}), which serves as a Landau potential with the equilibrium deformations as order parameters. The conditions on the parameters and the resulting surfaces at the critical points of first- and second-order transitions are given by \begin{subequations} \begin{eqnarray} 1^{st}\, {\rm order}\quad b^{2}=4ac,\;a>0,\; b >0 && f(\beta,\gamma=0) = c(1+\beta^2)^{-2}\beta^2\left ( \beta-\beta_0\right )^2 ~, \qquad\;\; \label{1st} \\ 2^{nd}\, {\rm order} \quad a=0,\; b=0,\; c>0 \;\;\;\; && f(\beta,\gamma) = c(1+\beta^2)^{-2}\beta^4 ~. \label{2nd} \end{eqnarray} \label{1st2nd} \end{subequations} As shown in Fig.~\ref{figcrisurf}, the first-order critical surface has degenerate spherical and deformed minima at $\beta=0$ and $(\beta=\beta_0>0,\gamma=0)$, where $\beta_0 =2a/b$. The position ($\beta_{+}$) and height ($h$) of the barrier are indicated in the caption. The second-order critical surface is independent of $\gamma$ and is flat-bottomed $(\sim \beta^4)$ for small $\beta$. The conditions on $a,\,b,\,c$ in Eq.~(\ref{1st2nd}) fix the critical value of the control parameter $(\alpha=\alpha_c)$ which, in turn, determines the critical-point Hamiltonian, $\hat{H}_{cri}=\hat{H}(\alpha=\alpha_c)$.
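The barrier formulas quoted in the caption of Fig.~\ref{figcrisurf} can be verified directly: for the first-order critical surface~(\ref{1st}) a simple grid search locates the maximum between the two degenerate minima. The sketch below (our own, with the normalization $c=1$ and an illustrative $\beta_0$) confirms the quoted $\beta_{+}$ and $h$:

```python
import numpy as np

def f_crit1(beta, beta0, c=1.0):
    """First-order critical surface along gamma = 0, Eq. (1st)."""
    return c * beta**2 * (beta - beta0)**2 / (1 + beta**2)**2

beta0 = 1.3                                          # illustrative deformation
beta_plus = (-1 + np.sqrt(1 + beta0**2)) / beta0     # barrier position (caption formula)
h = (-1 + np.sqrt(1 + beta0**2))**2 / 4              # barrier height for c = 1

# degenerate minima at beta = 0 and beta = beta0; the maximum in between is the barrier
grid = np.linspace(0.0, beta0, 200001)
vals = f_crit1(grid, beta0)
```

The numerical maximum agrees with the analytic $\beta_{+}$ and $h$ to the grid resolution.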
IBM Hamiltonians of this type have been used extensively for studying shape-phase transitions in nuclei~\cite{diep80,iaczam04,rowe0405,lev07,levgin03,gilmore79, iac0001,caszam0001,lev05,lev06,QPTrev}. We now show that a large class of such critical-point Hamiltonians exhibit PDS~\cite{lev07}. The spherical to deformed $\gamma$-unstable shape-phase transition is modeled in the IBM by the Hamiltonian \begin{eqnarray} \hat{H}_{cri} &=& \epsilon\,\hat{n}_d + A \left[\, d^{\dagger}\cdot d^{\dagger} - (s^{\dagger})^2\,\right ] \left[\, H.c.\,\right] \nonumber\\ \epsilon &=& 4(N-1)A ~. \label{hcri2nd} \end{eqnarray} The $A$-term is the O(6) pairing term of Eq.~(\ref{HPSo6}). $\hat{H}_{cri}$ satisfies condition~(\ref{2nd}) with $c=4A$, hence qualifies as a second-order critical Hamiltonian. It involves a particular combination of the U(5) and O(6) Casimir operators, hence is recognized to be a special case of the Hamiltonian $\hat{H}_{{\rm O(5)}}$ of Eq.~(\ref{hPDSo5}) with O(5)-PDS of type II. In fact, since O(5) is a good symmetry common to both the U(5) and O(6) chains~(\ref{u5o6}), the O(5) PDS is valid throughout the U(5)-O(6) transition region. As mentioned at the end of Subsection~\ref{subsec:o5PDStypeII}, $\hat{H}_{{\rm O(5)}}$ and, therefore, $\hat{H}_{cri}$~(\ref{hcri2nd}), has also U(5)-PDS of type I, with the following solvable U(5) basis states \begin{subequations} \begin{eqnarray} &&\vert [N], n_d=\tau=N,L\,\rangle \;\;\qquad\quad E = \epsilon\,N ~,\qquad\qquad\\ &&\vert [N], n_d=\tau=N-1,L\,\rangle \;\;\;\quad E = \epsilon\,\,(N-1) ~, \qquad\qquad \end{eqnarray} \label{ecriu5o6} \end{subequations} where $L$ takes the values compatible with the ${\rm O(5)}\supset{\rm O(3)}$ reduction. 
The dynamics at the critical point of a spherical to prolate-deformed shape-phase transition can be modeled in the IBM by the following Hamiltonian~\cite{lev06} \begin{figure}[t] \begin{center} \rotatebox{270}{\includegraphics[width=2.6in,clip=]{fig7Lhcri1st.eps}} \hspace{0.1cm} \rotatebox{270}{\includegraphics[width=2.6in,clip=]{fig7Rhcri1st.eps}} \hspace{0.1cm} \end{center} \vspace{-0.5cm} \caption{ \small Left: spectrum of a first-order critical Hamiltonian $\hat{H}_{cri}(\beta_0)$, Eq.~(\ref{hcri1st}), with $h_2=0.05$, $\beta_0 =1.3$ and $N=10$. The solvable eigenstates are the deformed states, Eq.~(\ref{deform}), forming a zero-energy $K=0_1$ ground band and the spherical states, $L=0_2,\,3_1$, Eq.~(\ref{spher}), with good U(5) symmetry. Right: U(5) ($n_d$) decomposition for selected eigenstates of $\hat{H}_{cri}(\beta_0)$. Adapted from~\cite{lev06}. \label{fighcri1stspec}} \end{figure} \begin{eqnarray} \hat{H}_{cri}(\beta_0) &=& h_{2}\, P^{\dagger}_{2}(\beta_0)\cdot\tilde{P}_{2}(\beta_0) ~, \label{hcri1st} \end{eqnarray} where $P^{\dagger}_{2\mu}(\beta_0)$ is the $L=2$ boson-pair of Eq.~(\ref{P2b0}) and $h_2,\,\beta_0>0$. The corresponding surface in Eq.~(\ref{eLan}) has coefficients $a=2h_2\beta_{0}^2, b=4h_{2}\beta_{0},c=2h_2$, which satisfy condition~(\ref{1st}). This qualifies $\hat{H}_{cri}(\beta_0)$ as a first-order critical Hamiltonian whose potential accommodates two degenerate minima at $\beta=0$ and $(\beta,\gamma)=(\beta_0,0)$. $\hat{H}_{cri}(\beta_0)$ is recognized to be a special case of the partially-solvable Hamiltonian, $\hat{H}_{PSolv}$ of Eq.~(\ref{hPSolv}). As such, it has a solvable prolate-deformed ground band, composed of the states of Eq.~(\ref{ePSolv}) \begin{eqnarray} \vert\beta_0;N,L\rangle \quad E=0\qquad\;\; L=0,2,4,\ldots, 2N~. 
\qquad\quad \label{deform} \end{eqnarray} On the other hand, the following multipole form of $\hat{H}_{cri}(\beta_0)$ \begin{eqnarray} &&\hat{H}_{cri}(\beta_0) = \nonumber\\ &&\qquad\;\;\; h_{2}\left [2(\beta_{0}^2\hat{N} -2)\hat{n}_d + 2(1-\beta_{0}^2)\hat{n}_{d}^2 + 2 \hat{C}_{{\rm O(5)}} - \hat{C}_{{\rm O(3)}} + \sqrt{14}\,\beta_{0}\,\Pi^{(2)}\cdot U^{(2)}\right ] \qquad\quad \label{hcri1stmult} \end{eqnarray} identifies it as the Hamiltonian of Eq.~(\ref{hPDSu5}) with U(5)-PDS of type I. As such, it also has the solvable spherical eigenstates of Eq.~(\ref{ePDSu5}), with good U(5) symmetry \begin{subequations} \begin{eqnarray} \vert N,n_d=\tau=L=0 \rangle \;\; &&E = 0 \label{nd0b0}\\ \vert N,n_d=\tau=L=3 \rangle \;\; &&E = 6 h_2[\beta_{0}^2 (N-3) + 5]~. \qquad\quad \label{nd3b0} \end{eqnarray} \label{spher} \end{subequations} The spectrum of $\hat{H}_{cri}(\beta_0)$~(\ref{hcri1st}) and the U(5) ($n_d$) decomposition of selected eigenstates are shown in Fig.~\ref{fighcri1stspec}. The spectrum displays a coexistence of spherical states (some of which are solvable with good U(5) symmetry) and deformed states (some of which are solvable), signaling a first-order transition. The remaining non-solvable states in the spectrum are either predominantly spherical (with a characteristic dominance of single $n_d$ components) or deformed (with a broad $n_d$ distribution), arranged in several excited bands~\cite{lev06}. The critical Hamiltonian of Eq.~(\ref{hcri1st}) with $\beta_0=\sqrt{2}$ is a special case of the Hamiltonian of Eq.~(\ref{hPDSsu3}), shown to have SU(3)-PDS of type I.
As such, it has a subset of solvable states, Eqs.~(\ref{gband})-(\ref{gamband}), which are members of the ground $g(K=0)$ and $\gamma^{k}(K=2k)$ bands, with good SU(3) symmetry, $(\lambda,\mu)=(2N-4k,2k)$ \begin{subequations} \begin{eqnarray} &&\vert N,(2N,0)K=0,L\rangle \;\;\;\; E = 0 \qquad L=0,2,4,\ldots, 2N \label{solsu3g} \\ &&\vert N,(2N-4k,2k)K=2k,L\rangle \;\;\;\;\;\;\;\; E = h_{2}\,6k \left (2N - 2k+1 \right ) \qquad\qquad \nonumber\\ && L=K,K+1,\ldots, (2N-2k) \qquad\; k>0 ~. \label{solsu3gam} \end{eqnarray} \label{solsu3} \end{subequations} In addition, $\hat{H}_{cri}(\beta_0=\sqrt{2})$ has the spherical states of Eq.~(\ref{spher}), with good U(5) symmetry, as eigenstates. The spherical $L=0$ state, Eq.~(\ref{nd0b0}), is exactly degenerate with the SU(3) ground band, Eq.~(\ref{solsu3g}), and the spherical $L=3$ state, Eq.~(\ref{nd3b0}), is degenerate with the SU(3) $\gamma$-band, Eq.~(\ref{solsu3gam}) with $k=1$. The remaining levels of $\hat{H}_{cri}(\beta_0=\sqrt{2})$, shown in Fig.~\ref{fighcrisu3}, are calculated numerically and their wave functions are spread over many U(5) and SU(3) irreps. This situation, where some states are solvable with good U(5) symmetry, some are solvable with good SU(3) symmetry and all other states are mixed with respect to both U(5) and SU(3), defines a U(5) PDS of type I coexisting with an SU(3) PDS of type I. \begin{figure}[t] \begin{center} \rotatebox{270}{\includegraphics[width=2.5in,clip=] {fig8Lhcrisu3.eps}}\hspace{0.2cm} \hspace{0.1cm} \rotatebox{270}{\includegraphics[width=2.5in,clip=] {fig8Rhcrisu3.eps}}\hspace{0.2cm} \caption{ \small Left: spectrum of $\hat{H}_{cri}(\beta_0=\sqrt{2})$, Eq.~(\ref{hcri1st}), with $h_2=0.05$ and $N=10$. $L(K=0_1)$ and $L(K=2_1)$ are the solvable SU(3) states of Eq.~(\ref{solsu3g}) and Eq.~(\ref{solsu3gam}) with $k=1$, respectively. $L=0_2,3_1$ are the solvable U(5) states of Eq.~(\ref{spher}).
Right: U(5) ($n_d$) and SU(3) $[(\lambda,\mu)]$ decomposition for selected eigenstates of $\hat{H}_{cri}(\beta_0=\sqrt{2})$. Adapted from~\cite{lev07}. \label{fighcrisu3}} \end{center} \end{figure} The critical Hamiltonian of Eq.~(\ref{hcri1st}) with $\beta_0=1$ is a special case of the Hamiltonian of Eq.~(\ref{h2PDS}), shown to have O(6)-PDS of type III. As such, it has a subset of solvable states Eq.~(\ref{ePDSo6III}), which are members of a prolate-deformed ground band, with good O(6) symmetry, $\langle\sigma\rangle = \langle N \rangle$, but broken O(5) symmetry \begin{eqnarray} \vert N,\sigma=N,L\rangle \qquad E=0 \qquad L=0,2,4,\ldots, 2N ~. \quad \label{solo6} \end{eqnarray} In addition, $\hat{H}_{cri}(\beta_0=1)$ has the spherical states of Eq.~(\ref{spher}), with good U(5) symmetry, as eigenstates. The remaining eigenstates of $\hat{H}_{cri}(\beta_0=1)$ shown in Fig.~\ref{fighcrio6} are mixed with respect to both U(5) and O(6). Apart from the solvable U(5) states of Eq.~(\ref{spher}), all eigenstates of $\hat{H}_{cri}(\beta_0=1)$ are mixed with respect to O(5) [including the solvable O(6) states of Eq.~(\ref{solo6}), as shown in the bottom right panel of Fig.~\ref{fighcrio6}]. It follows that the Hamiltonian has a subset of states with good U(5) symmetry and a subset of states with good O(6) but broken O(5) symmetry, and all other states are mixed with respect to both U(5) and O(6). These are precisely the required features of U(5) PDS of type I coexisting with O(6) PDS of type III. \begin{figure}[t] \begin{center} \rotatebox{270}{\includegraphics[width=2.5in,clip=] {fig9Uhcrio6.eps}}\hspace{0.2cm} \end{center} \begin{center} \rotatebox{270}{\includegraphics[width=2.5in,clip=] {fig9Lhcrio6.eps}} \hspace*{0.1in} \rotatebox{270}{\includegraphics[width=2.5in,clip=] {fig9Rhcrio6.eps}} \caption{ \footnotesize Upper panel: spectrum of $\hat{H}_{cri}(\beta_0=1)$, Eq.~(\ref{hcri1st}), with $h_2=0.05$ and $N=10$. 
$L(K=0_1)$ are the solvable states of Eq.~(\ref{solo6}) with good O(6) but broken O(5) symmetry. $L=0_2,3_1$ are the solvable U(5) states of Eq.~(\ref{spher}). Bottom left panel: U(5) ($n_d$) and O(6) $(\sigma)$ decomposition for selected spherical and deformed eigenstates of $\hat{H}_{cri}(\beta_0=1)$. Bottom right panel: O(5) ($\tau$) decomposition for the $L=0,\,2$ states, Eq.~(\ref{solo6}), members of the ground band ($K=0_1$) of $\hat{H}_{cri}(\beta_0=1)$. Both states have O(6) symmetry $\sigma=N$. Adapted from~\cite{lev07}. \label{fighcrio6}} \end{center} \end{figure} In conclusion, the above results demonstrate the relevance of the PDS notion to critical-points of QPT, with phases characterized by Lie-algebraic symmetries. In the example considered, second-order critical Hamiltonians mix incompatible symmetries but preserve a common lower symmetry, resulting in a single PDS with selected quantum numbers conserved. First-order critical Hamiltonians exhibit distinct subsets of solvable states with good symmetries, giving rise to a coexistence of different PDS. The ingredients of an algebraic description of QPT are a spectrum generating algebra and an associated geometric space, formulated in terms of coherent (intrinsic) states. The same ingredients are used in the construction of Hamiltonians with PDS. These, in accord with the present discussion, can be used as tools to explore the role of partial symmetries in governing the critical behaviour of dynamical systems undergoing QPT. \section{PDS and Mixed Regular and Chaotic Dynamics} \label{sec:PDSChaos} Partial dynamical symmetries can play a role not only in discrete spectroscopy but also in analyzing statistical aspects of nonintegrable systems~\cite{walev93,levwhe96}. Hamiltonians with dynamical symmetry are always completely integrable~\cite{zhang88}. The Casimir invariants of the algebras in the chain provide a set of constants of the motion in involution. The classical motion is purely regular.
Dynamical symmetry breaking is connected to nonintegrability and may give rise to chaotic motion~\cite{zhang88,zhang89,zhang90}. Hamiltonians with PDS are not completely integrable, hence can exhibit stochastic behavior; nor are they completely chaotic, since some eigenstates preserve the symmetry exactly. Consequently, such Hamiltonians are optimally suited to the study of mixed systems with coexisting regularity and chaos. The dynamics of a generic classical Hamiltonian system is mixed~\cite{bohig93}; KAM islands of regular motion and chaotic regions coexist in phase space. In the associated quantum system, if no separation between regular and irregular states is made, the statistical properties of the spectrum are usually intermediate between the Poisson and the Gaussian orthogonal ensemble (GOE) statistics. In a PDS of type I, the symmetry of the subset of solvable states is exact, yet does not arise from invariance properties of the Hamiltonian. This offers an important opportunity to study how the existence of partial (but exact) symmetries affects the dynamics of the system. If the fraction of solvable states remains finite in the classical limit, one might expect that a corresponding fraction of the phase space would consist of KAM tori and exhibit regular motion. It turns out that PDS has an even greater effect on the dynamics. It is strongly correlated with suppression ({\it i.e.}, reduction) of chaos even though the fraction of solvable states approaches zero in the classical limit~\cite{walev93,levwhe96}. We consider the IBM Hamiltonian of Eq.~(\ref{hPS}) \begin{eqnarray} \hat{H}(\beta_0) &=& h_{0}\, P^{\dagger}_{0}(\beta_{0})P_{0}(\beta_0) + h_{2}\,P^{\dagger}_{2}(\beta_0)\cdot \tilde{P}_{2}(\beta_0) ~. \label{Hchaos} \end{eqnarray} As discussed in Section~\ref{sec:PartialSolv}, when $\beta_0=\sqrt{2}$, the Hamiltonian~(\ref{Hchaos}) has an SU(3)-PDS of type I. In this case, the solvable states are those of Eqs.~(\ref{gband})-(\ref{gamband}).
At a given spin per boson $l=L/N$, and to leading order in $1/N$, the fraction $f$ of solvable states decreases like $1/N^2$ with boson number. However, at a given boson number $N$, this fraction increases with $l$, a feature which is valid also for finite $N$~\cite{walev93}. The classical limit of~(\ref{Hchaos}) is obtained~\cite{hatch82,alnovo90,alw91} through the use of coherent states parametrized by the six complex numbers $\{\alpha_s,\alpha_{\mu};\mu=-2,\ldots,2\}$ and taking $N\to\infty$. The classical Hamiltonian is then obtained from~(\ref{Hchaos}) by the substitution $s^{\dag},d^{\dag}_{\mu}\to\alpha_{s}^{*},\alpha_{\mu}^{*}$ and $s,d_{\mu}\to\alpha_{s},\alpha_{\mu}$ and rescaling the parameters $h_{i}\to Nh_{i}$ $(i=0,2)$. Here $1/N$ plays the role of $\hbar$. \begin{figure}[t] \begin{center} \includegraphics[height=5in]{fig10chaossu3.eps} \caption{ \small Classical $(\sigma,\bar{\lambda})$ and quantal $(\omega,\nu)$ measures of chaos versus $\beta_0$ for the Hamiltonian~(\ref{Hchaos}) with $h_2/h_0=7.5$. Shown are three cases with classical spins $l=0.08,\, 0.4$, and $1$. The quantal calculations $(\omega,\nu)$ are done for $N=25$ bosons and spins $L=2,10$, and $25$, respectively. Notice that with increasing spin the minimum gets deeper and closer to $\beta_0=\sqrt{2}$. The suppression of chaos near $\beta_0=\sqrt{2}$ is seen both for finite $N$ through the measures $\omega,\,\nu$ and in the classical limit $N\to\infty$ through the measures $\sigma,\,\bar{\lambda}$. Adapted from~\cite{walev93}. \label{figchaossu3}} \end{center} \end{figure} To study the effect of the SU(3) PDS on the dynamics, we fix the ratio $h_{2}/h_{0}$ at a value far from the exact SU(3) symmetry (for which $h_0/h_2 =1)$. We then change $\beta_0$ in the range $1\leq\beta_0\leq 2$. Classically, we determine the fraction $\sigma$ of chaotic volume and the average largest Lyapunov exponent $\bar{\lambda}$. 
To analyze the quantum Hamiltonian, we study spectral and transition intensity distributions. The nearest neighbors level spacing distribution is fitted by a Brody distribution, $P_{\omega}(S) = AS^{\omega}\exp(-\alpha S^{1+\omega})$, where $A$ and $\alpha$ are determined by the conditions that $P_{\omega}(S)$ is normalized to $1$ and $\langle S\rangle =1$. For the Poisson statistics $\omega=0$ and for GOE $\omega=1$, corresponding to integrable and fully chaotic classical motion~\cite{berry77, bohig84}, respectively. The intensity distribution of the SU(3) E2 operator, $Q^{(2)}$ of Eq.~(\ref{Te2su3}), is fitted by a $\chi^2$ distribution in $\nu$ degrees of freedom~\cite{levine86}, $P_{\nu}(y) = [(\nu/2\langle y\rangle)^{\nu/2}/\Gamma(\nu/2)]y^{\nu/2-1} \exp(-\nu y/2\langle y\rangle)$. For the GOE, $\nu=1$ and $\nu$ decreases as the dynamics become more regular. Fig.~\ref{figchaossu3} shows the two classical measures $\sigma$, $\bar{\lambda}$ and the two quantum measures $\omega$, $\nu$ for the Hamiltonian~(\ref{Hchaos}) as a function of $\beta_0$. The parameters of the Hamiltonian are taken to be $h_2/h_0=7.5$ and the number of bosons is $N=25$. Shown are three classical spins $l=0.08,\, 0.4$ and $1$, which correspond in the quantum case to $L=2,\, 10$ and $25$. All measures show a pronounced minimum which gets deeper and closer to $\beta_0=\sqrt{2}$ [where the partial SU(3) symmetry occurs] as the classical spin increases. This behaviour is correlated with the fraction of solvable states (at a constant $N$) being larger at higher $l$. We remark that the classical measures show a clear enhancement of the regular motion near $\beta_0=\sqrt{2}$ even though the fraction of solvable states vanishes as $1/N^2$ in the classical limit $N\to\infty$. 
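For completeness, the two conditions on the Brody distribution fix its constants in closed form: $\alpha = \left[\Gamma\!\left(\tfrac{\omega+2}{\omega+1}\right)\right]^{\omega+1}$ and $A=(1+\omega)\alpha$, so that $\omega=0$ gives the Poisson form $e^{-S}$ and $\omega=1$ the GOE Wigner surmise $\tfrac{\pi}{2}S\,e^{-\pi S^2/4}$. A short sketch of our own making this explicit:

```python
import math

def brody_constants(omega):
    """A and alpha fixed by normalization of P_omega(S) and by <S> = 1."""
    alpha = math.gamma((omega + 2) / (omega + 1)) ** (omega + 1)
    return (1 + omega) * alpha, alpha

def brody(S, omega):
    """Brody level-spacing distribution P_omega(S) = A S^omega exp(-alpha S^(1+omega))."""
    A, alpha = brody_constants(omega)
    return A * S**omega * math.exp(-alpha * S**(1 + omega))
```

For intermediate $\omega$ the normalization and unit-mean conditions can be confirmed by direct quadrature.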
\begin{figure}[t] \begin{center} \includegraphics[height=5in]{fig11entropysu3n.eps} \caption{ \small The average SU(3) entropy of the eigenstates of the Hamiltonian~(\ref{Hchaos}) (for $h_2/h_0=7.5$) versus $\beta_0$, for three values of the spin (per boson), $l=0.08,\,0.4$, and $1$. Left: $N=15$ bosons; right: $N=25$ bosons. Adapted from~\cite{walev93}. \label{figentropysu3}} \end{center} \end{figure} To confirm that the observed suppression of chaos is related to the SU(3) PDS, we employ the concept of an entropy~\cite{iaclevine87,izra89} associated with a given symmetry. To determine the SU(3) entropy, we expand any eigenstate $\vert\alpha LM\rangle$ in an SU(3) basis, $\vert\alpha LM\rangle = \sum_{(\lambda\mu),K} c^{(\alpha)}_{(\lambda,\mu)K} \vert(\lambda,\mu)KLM\rangle$. Denoting by $p^{(\alpha)}_{\lambda\mu}$ the probability to be in the SU(3) irrep $(\lambda,\mu)$, $p^{(\alpha)}_{\lambda\mu} = \sum_{K}\vert c^{(\alpha)}_{(\lambda,\mu)K}\vert^2$, the SU(3) entropy of the state $\vert\alpha L M\rangle $ is defined as $S^{(\alpha)}_{SU(3)} = - \sum_{\lambda,\mu} p^{(\alpha)}_{\lambda\mu} \ln p^{(\alpha)}_{\lambda\mu}$. The entropy vanishes when the state has a good SU(3) symmetry. The averaged entropy $\langle S_{{\rm SU(3)}}\rangle$ over all eigenstates is then a measure of the global SU(3) symmetry. This quantity is plotted in Fig.~\ref{figentropysu3}, versus $\beta_0$ for $N=15$ and 25 and for the same spin values (per boson) $l$ as in Fig.~\ref{figchaossu3}. We observe a minimum which is well correlated with the minimum in Fig.~\ref{figchaossu3}. The maximum SU(3) entropy is the logarithm of the number of allowed SU(3) irreps for the given $N$ and $l$. The average SU(3) entropy therefore increases with $N$. The depth of the minimum increases with $N$ and $l$ though the fraction of solvable states is smaller at $N=25$ than at $N=15$ by a factor of about 3. 
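Operationally, the SU(3) entropy of an eigenstate is computed directly from its expansion amplitudes. A minimal sketch (the irrep labels and amplitudes below are illustrative, not taken from the actual IBM calculation):

```python
import math
from collections import defaultdict

def su3_entropy(amplitudes):
    """Shannon entropy of the SU(3) irrep distribution of a normalized eigenstate.
    `amplitudes` maps ((lam, mu), K) -> c; probabilities p_{lam,mu} sum |c|^2 over K."""
    p = defaultdict(float)
    for ((lam, mu), K), c in amplitudes.items():
        p[(lam, mu)] += abs(c)**2
    return -sum(q * math.log(q) for q in p.values() if q > 0)
```

A state confined to a single irrep has zero entropy; equal weight over $m$ irreps gives the maximal value $\ln m$, which is why the average entropy grows with the number of allowed irreps, i.e. with $N$.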
The existence of an SU(3) PDS seems to have the effect of increasing the SU(3) symmetry of all states, not just those with an exact SU(3) symmetry~\cite{walev93}. In order to better understand the strong suppression of classical chaos induced by PDS, we consider a simpler model and use its PDS to infer relationships between the classical and quantum dynamics of a Hamiltonian in a mixed KAM r\'egime~\cite{levwhe96}. The model is based on a U(3) spectrum generating algebra and its building blocks are three types of bosons $a^{\dagger}$, $b^{\dagger}$, $c^{\dagger}$ satisfying the usual commutation relations. The nine number-conserving bilinear products of creation and destruction operators comprise the U(3) algebra. The conservation of the total boson number $\hat{N}=\hat{n}_a+\hat{n}_{b}+\hat{n}_c$ ($\hat{n}_a =a^{\dagger}a$ with eigenvalue $n_a$ etc.) ensures that the model describes a system with only two independent degrees of freedom. All states of the model are assigned to the totally symmetric representation $[N]$ of U(3). One of the dynamical symmetries of the model is associated with the following chain of algebras \begin{equation} \label{chainu3} {\rm U(3)} \supset {\rm U(2)} \supset {\rm U(1)} ~. \end{equation} Here ${\rm U}(2)\equiv {\rm SU}(2)\times {\rm U}_{ab}(1)$ with a linear Casimir $\hat{n}_{ab}=\hat{n}_a+\hat{n}_b$ [which is also the generator of ${\rm U}_{ab}(1)$]. The generators of SU(2) are $\hat{J}_+ = b^{\dagger}a$, $\hat{J}_-= a^{\dagger}b$, $\hat{J}_z=(\hat{n}_b-\hat{n}_a)/2$ and its Casimir $\mbox{\boldmath $J^2$}=\hat{n}_{ab}(\hat{n}_{ab}+2)/4$. The subalgebra U(1) in Eq.~(\ref{chainu3}) is composed of the operator $\hat{J}_z$. A choice of Hamiltonian with a U(2) dynamical symmetry is \begin{eqnarray} \hat{H}_0 & = & \omega_a a^{\dagger}a + \omega_b b^{\dagger}b \;\; = \;\; \hat{n}_{ab} - 2A\hat{J}_z ~, \label{h0} \end{eqnarray} where $\omega_{a,b} = 1 \pm A$, and $A$ is introduced to break degeneracies.
Diagonalization of this Hamiltonian is trivial and leads to eigenenergies $E_{n_a,n_b}= \omega_a n_a + \omega_b n_b$ and eigenstates $\vert n_a,n_b,n_c\rangle$ or equivalently $\vert N,J,J_z\rangle$, where the label $J=n_{ab}/2$ identifies the SU(2) irrep. These are states with well-defined $n_a$, $n_b$ and $n_c=N-n_a-n_b$. To create a PDS we add the term \begin{equation} \hat{H}_1 = b^{\dagger}(b^\dagger a + b^\dagger c + a^\dagger b + c^\dagger b)b ~, \label{h1chaos} \end{equation} which preserves the total boson number but not the individual boson numbers, so it breaks the dynamical symmetry. However, states of the form $\vert n_a,n_b=0,n_c\rangle$ (or equivalently $\vert N,J=n_a/2,J_z=-J\rangle$) with $n_a=0,1,2,\ldots, N$ are annihilated by $\hat{H}_1$ and therefore remain eigenstates of $\hat{H}_0+B \hat{H}_1$. The latter Hamiltonian is not an SU(2) scalar, yet has a subset of $(N+1)$ ``special'' solvable states with SU(2) symmetry, and therefore has PDS. There is one special state per SU(2) irrep $J=n_{a}/2$ (the lowest weight state in each case) with energy $\omega_an_a$ independent of the parameter $B$. Other eigenstates are mixed. Although $\hat{H}_0$ and $\hat{H}_1$ do not commute, when acting on the ``special'' states they satisfy \begin{equation} \label{comm} \Bigl [\hat{H}_0\, ,\, \hat{H}_1\Bigr ]\vert n_a,n_b=0,n_c\rangle\; = 0 ~. \end{equation} To break the PDS we introduce a third interaction \begin{equation} \hat{H}_2 = a^{\dagger}c + c^{\dagger}a + b^{\dagger}c + c^{\dagger}b ~. \label{h2chaos} \end{equation} The complete Hamiltonian is then \begin{equation} \hat{H} = \hat{H}_0 + B\,\hat{H}_1 + C\,\hat{H}_2 ~. \label{htot} \end{equation} For $B=C=0$ we have the full dynamical symmetry; for $B\neq 0,\,C=0$ we have partial dynamical symmetry, and for $C\neq 0$ we have neither.
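The partial solvability just described is easy to verify numerically. The sketch below (our own construction; parameter values illustrative) applies $\hat{H}=\hat{H}_0+B\hat{H}_1+C\hat{H}_2$ to Fock states $\vert n_a,n_b,n_c\rangle$ and confirms that the $n_b=0$ states are eigenstates with energy $\omega_a n_a=(1+A)n_a$ for any $B$, and cease to be eigenstates once $C\neq 0$:

```python
import math

MODE = {'a': 0, 'b': 1, 'c': 2}

def ann(state, m):
    """Annihilation operator of mode m on a dict {(na, nb, nc): amplitude}."""
    out = {}
    for occ, amp in state.items():
        if occ[m] > 0:
            new = list(occ); new[m] -= 1
            key = tuple(new)
            out[key] = out.get(key, 0.0) + math.sqrt(occ[m]) * amp
    return out

def cre(state, m):
    """Creation operator of mode m."""
    out = {}
    for occ, amp in state.items():
        new = list(occ); new[m] += 1
        key = tuple(new)
        out[key] = out.get(key, 0.0) + math.sqrt(occ[m] + 1) * amp
    return out

def word(ops, state):
    """Apply a product like '+b +b -a -b' (rightmost operator acts first)."""
    for tok in reversed(ops.split()):
        state = (cre if tok[0] == '+' else ann)(state, MODE[tok[1]])
    return state

def H_apply(state, A, B, C):
    # H0 = (1+A) n_a + (1-A) n_b, diagonal in the Fock basis
    out = {occ: ((1 + A) * occ[0] + (1 - A) * occ[1]) * amp
           for occ, amp in state.items()}
    H1 = ['+b +b -a -b', '+b +b -c -b', '+b +a -b -b', '+b +c -b -b']  # Eq. (h1chaos)
    H2 = ['+a -c', '+c -a', '+b -c', '+c -b']                          # Eq. (h2chaos)
    for coef, words in ((B, H1), (C, H2)):
        for w in words:
            for occ, amp in word(w, state).items():
                out[occ] = out.get(occ, 0.0) + coef * amp
    return out

def residual(na, N, A, B, C):
    """Norm of (H - (1+A) n_a) acting on |na, nb=0, nc=N-na>."""
    psi = {(na, 0, N - na): 1.0}
    out = H_apply(psi, A, B, C)
    out[(na, 0, N - na)] = out.get((na, 0, N - na), 0.0) - (1 + A) * na
    return math.sqrt(sum(v * v for v in out.values()))
```

For $C=0$ the residual vanishes identically for every $n_a$, independent of $B$, as implied by Eq.~(\ref{comm}); for $C\neq 0$ it does not.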
The classical Hamiltonian ${\cal H}_{cl}$ is obtained from~(\ref{htot}) by replacing $(a^\dagger,b^\dagger,c^\dagger)$ by complex c-numbers $(\alpha^*,\beta^*,\gamma^*)$ and taking $N\rightarrow\infty$. The latter limit is obtained by rescaling ${\bar B}=NB$, $\alpha\rightarrow \alpha/\sqrt{N}$ etc., and considering the classical Hamiltonian per boson ${\cal H} = {\cal H}_{cl}/N$. In the present model the latter has the form \begin{eqnarray} {\cal H} &=& {\cal H}_0 + \bar{B}\,{\cal H}_1 + C\,{\cal H}_2 ~. \label{cham} \end{eqnarray} Number conservation imposes a constraint $\alpha^*\alpha+\beta^*\beta+\gamma^*\gamma=1$, so that the phase space is compact and four-dimensional with a volume $2\pi^2$. The total number of quantum states is $(N+1)(N+2)/2$. Assigning, to leading order in $N$, one state per $(2\pi\hbar)^2$ volume of phase space, we identify $\hbar=1/N$, so that the classical limit is $N\rightarrow\infty$. \begin{figure}[t] \begin{center} \includegraphics[height=3in]{fig12chaoswl.eps} \caption{ \small Classical ($\mu$) and quantum ($\omega$) measures of chaos [denoted by ($\bullet$) and ($\times$), respectively] versus $C$ for the Hamiltonian~(\ref{htot}) with ${\bar B}=0.5$. Adapted from~\cite{levwhe96}. \label{figchaoswl}} \end{center} \end{figure} In all calculations reported below~\cite{levwhe96} we take $A=0.8642$ and $N=60$. As a first step, we fix ${\bar B}=0.5$ and vary $C$. As previously done, for the classical analysis we randomly sample the phase space and determine the fraction $\mu$ of chaotic volume (same as $\sigma$ in Fig.~\ref{figchaossu3}). For the quantum analysis we evaluate the energy levels, calculate the nearest neighbors level spacing distribution of the unfolded spectrum, and fit it to a Brody distribution. The latter is specified by the fit parameter $\omega$, mentioned above. As shown in Fig.~\ref{figchaoswl}, both of these measures indicate a suppression of chaos near $C=0$ similar to the results of Fig.~\ref{figchaossu3}.
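The identification $\hbar=1/N$ can be made quantitative by counting: the exact number of states, $(N+1)(N+2)/2$, should approach the semiclassical estimate, phase-space volume over $(2\pi\hbar)^2$, i.e. $2\pi^2/(2\pi/N)^2=N^2/2$. A consistency check we add for illustration:

```python
import math

def n_states(N):
    """Exact dimension of the symmetric U(3) irrep [N]."""
    return (N + 1) * (N + 2) // 2

def n_semiclassical(N):
    """Phase-space volume 2*pi^2 divided by one cell (2*pi*hbar)^2, hbar = 1/N."""
    return 2 * math.pi**2 / (2 * math.pi / N)**2   # = N^2 / 2
```

The ratio $n_{\rm states}/n_{\rm semiclassical} = 1 + 3/N + 2/N^2$ indeed tends to one as $N\to\infty$.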
To appreciate the strong effect of the PDS (at $C=0$) on the underlying dynamics, it should be noted that the fraction of the solvable states $\vert n_a,n_b=0,n_c\rangle$ is $2/(N+2)$, which approaches zero in the classical limit. To measure the extent to which each eigenstate $|\Psi\rangle$ has SU(2) symmetry, we define variances $\sigma_{i}^2 = \langle \Psi|\hat{n}_{i}^2| \Psi\rangle - \langle \Psi|\hat{n}_{i}|\Psi\rangle^2$ $(i=a,b)$. A state which belongs to just one irrep of SU(2) (with well-defined $J,J_z$) has zero variances, while a mixed state has large variances. These variances have the same physical content as the entropies considered before. It is instructive to display the average $\langle\hat{n}_a\rangle$ and variance of each state, as done in Fig.~\ref{figvariance}. SU(2) PDS is present in Fig.~\ref{figvariance}(a) ($B\neq0$, $C=0$), Fig.~\ref{figvariance}(b) is a blow-up of Fig.~\ref{figvariance}(a), and in Fig.~\ref{figvariance}(c) the symmetry is completely broken ($C\neq 0$). In Figs.~\ref{figvariance}(a) and \ref{figvariance}(b) we see states with zero variance. These are just the $N+1$ special states ($n_b=0$) discussed before, which preserve the SU(2) symmetry. In addition, we see families of states with small variance and small $\langle n_b\rangle$, which suggests that the presence of PDS increases the purity of states other than the special ones. By contrast, in Fig.~\ref{figvariance}(c) we see no particular structure because of the destruction of the PDS for $C\neq 0$. \begin{figure}[t] \begin{center} \includegraphics[height=12cm]{fig13variance.eps} \hspace{1cm} \includegraphics[height=12cm]{fig14variance.eps} \caption{\footnotesize (Left panel). The values of $\langle n_a\rangle$ and of the variance $\sigma_b$ (denoted by $\diamond$) of each eigenstate of the Hamiltonian~(\ref{htot}). (a)~${\bar B}=0.5$, $C=0$ (partial dynamical symmetry). (b)~a blow-up of (a) with superimposed results [denoted by ($+$)] of quantum perturbation theory. 
The families of states with low $\sigma_b$ have small values of $\langle n_b\rangle$. (c)~${\bar B}=0.3$, $C=0.5$ (broken symmetry). Adapted from~\cite{levwhe96}. \vspace{2pt} \hfil\break {\normalsize {\rm Figure 14:}} (Right panel). Poincar\'{e} sections $J_b$ versus $\theta_b$ at energy $E=1.0$. (a)~${\bar B}=0.5$, $C=0$ (partial dynamical symmetry). (b)~a blow-up of (a) with superimposed results (dashed curves) of classical perturbation theory. (c)~${\bar B}=0.3$, $C=0.5$ (broken symmetry). Adapted from~\cite{levwhe96}. \label{figvariance}} \end{center} \end{figure} Considerable insight is gained by examining the classical phase space structure in terms of action-angle variables $\alpha =\sqrt{J_a}\exp(-i\theta_a)$, $\beta =\sqrt{J_b}\exp(-i\theta_b)$ and $\gamma = \sqrt{J_c} = \sqrt{1-J_a-J_b}$. The $\theta_a=-\pi/2$ Poincar\'e section is shown in Fig.~14 for energy $E=1.0$. When SU(2) PDS is present (${\bar B}\neq 0,\, C=0$), we see in Figs.~14(a) and 14(b) a torus with $J_b=0$, and additional perturbed tori in its neighborhood (small $J_b$). This structure is absent when the symmetry is completely broken ($C\neq 0$), as shown in Fig.~14(c). The features in Fig.~14 persist also at other energies. To understand them, we recall that for ${\bar B}=C=0$, the Hamiltonian~(\ref{cham}) is integrable and all trajectories wind around invariant tori. By standard torus quantization (without turning points) the actions are quantized as $J_{i}=n_{i}\hbar=n_{i}/N$ $(i=a,b)$. In the integrable limit, quantum states are associated with toroidal manifolds in phase space. In the case of a partial symmetry (${\bar B}\neq 0,\,C=0$) we are led by analogy with Eq.~(\ref{comm}) to seek manifolds ${\cal M}$ in phase space on which \begin{equation} \label{poiss} \Bigl.\Bigl \{{\cal H}_0\, ,\,{\cal H}_1\Bigr \}\Bigr|_{\cal M}\; =0 \end{equation} even though the Poisson bracket is not zero everywhere. 
In addition, we demand that $\{\{{\cal H}_0\,,\,{\cal H}_1\},{\cal H}_0+{\cal H}_1\}|_{\cal M}=0$ (in analogy to the quantum relation $[[\,H_0,H_1]\,,\,H_0+H_1]|n_a,n_b=0,n_c\rangle =0$) so that a trajectory starting on ${\cal M}$ remains on ${\cal M}$. The solution to these conditions is the manifold $J_b=\beta^*\beta=0$, which may be interpreted as a (degenerate) torus of the ${\cal H}_0$ Hamiltonian. It is also a stable isolated periodic orbit of ${\cal H}_0+\bar{B}{\cal H}_1$. Quantization of the torus with $J_b=0$ proceeds exactly as before, so we correctly predict no change in the quantum energies associated with it. The manifold ${\cal M}$ ($J_b=0$) is the direct classical analogue of the special quantum states $\vert n_b=0\rangle$. It refers, however, to a region of phase space of measure zero, and so cannot by itself explain the observed (global) suppression of chaos. However, as suggested by Fig.~14, the presence of PDS induces a quasi-regular region foliated by tori in the vicinity of the special torus. The dynamics on a finite measure of phase space can be understood by performing a perturbative calculation in the neighbourhood of ${\cal M}$~\cite{levwhe96}. For the classical perturbation calculation we set $C= 0$ in Eq.~(\ref{cham}) and treat $\bar{B}$ as an expansion parameter, assuming $\bar{B}{\cal H}_1$ in Eq.~(\ref{cham}) is small in the neighbourhood of the special periodic orbit. The second order correction reproduces well the perturbed tori on the Poincar\'e sections as shown in Fig.~14(b). The variances can be calculated in quantum perturbation theory. In Fig.~\ref{figvariance}(b) we show the results [denoted by ($+$)] of the quantum perturbation theory (to order $B^5$). We see that the first few families of states are reproduced. It is these states which we can recover from perturbation theory and whose approximate symmetry is induced by the symmetry of the special states. The following physical picture emerges from the foregoing analysis. 
Near the special orbit, there are KAM tori, some of which are quantized. The quantum eigenstates lie on these tori, so, in the semiclassical limit, the classical variances of the actions of the tori determine the variances of the states themselves. Large variances indicate the extent to which the corresponding states fail to respect the symmetry. This provides a measure for a separation of regular and irregular levels, as conceived in~\cite{perc}. In the present model, the quantum states can be grouped into three classes: (i) the special states, which observe the symmetry; (ii) the ``almost special'' states, which are accessible by perturbation theory; (iii) the rest of the states, which are mixed. As in~\cite{bohig93}, the frontier between regular states (sets (i) and (ii)) and irregular states (set (iii)) is not sharp. The above discussion illustrates the effect of PDS on the quantum and classical dynamics of a mixed system. At the quantum level, PDS by definition implies the existence of a ``special'' subset of states, which observe the symmetry. The PDS affects the purity of other states in the system; in particular, neighboring states, accessible by perturbation theory, possess approximately good symmetry. Analogously, at the classical level, the region of phase space near the ``special'' torus also has toroidal structure. As a consequence of having PDS, a finite region of phase space is regular and a finite fraction of states is approximately ``special''. This clarifies the observed suppression of chaos. Based on these arguments and the above results, it is anticipated that the suppression of chaos will persist in higher-dimensional systems with PDS. \section{PDS and Higher-Order Terms} \label{PDS3bod} In applications of algebraic modeling to dynamical systems, there is occasionally a need, based on phenomenological and/or microscopic grounds, to include higher-order terms in the Hamiltonian. 
For example, in the IBM, accommodating rigid triaxial shapes and describing large anharmonicities in excited bands requires at least cubic terms in the boson Hamiltonian. From a microscopic point of view, many-body boson interactions in the IBM are generated by the mapping of fermion pairs into bosons~\cite{ariyoshgin81,klein91} and the truncation to only monopole and quadrupole bosons, with the associated renormalization of the effective interaction in the truncated space~\cite{duval83,levkir84,otsuka85}. From this perspective, confining the boson-space Hamiltonian to two-body interactions is only a convenient lowest-order approximation. The advantages of using higher-order terms with PDS are twofold. First, the algorithms for realizing such symmetry structures provide a systematic procedure for identifying and selecting interactions of a given order. Having at hand a selection criterion is highly desirable, since, if higher-order terms or new degrees of freedom are added, one is immediately faced with the problem of many possible interactions and a proliferation of free parameters. Second, Hamiltonians with PDS break the dynamical symmetry but retain selected subsets of solvable eigenstates with good symmetry. Such qualities are a virtue, since interactions with a PDS can be introduced, in a controlled manner, {\it without destroying} results previously obtained with a dynamical symmetry for a segment of the spectrum. In general, the existence of quantum Hamiltonians with PDS is closely related to the order of the interaction among the constituents. IBM Hamiltonians with higher-order terms exhibiting PDS of type~II were already encountered in Subsection~\ref{subsec:o6PDStypeII}. In what follows, we present examples of three-body IBM Hamiltonians with U(5) and O(6) PDS of type~I. Work on three-body IBM Hamiltonians with SU(3)-PDS of type~I is currently in progress and will be reported elsewhere~\cite{levramisa10}. 
\subsection{U(5) PDS (type I) with three-body terms} \label{subsec:u5PDS3bod} The U(5) dynamical symmetry (DS) chain, ${\rm U(6)}\supset {\rm U(5)} \supset {\rm O(5)}\supset {\rm O(3)}$, and its related basis states, $\vert[N]\langle n_d\rangle (\tau) n_\Delta LM\rangle$, were discussed in Section~\ref{subsec:u5PDStypeI}. In this case, new terms show up in the DS Hamiltonian at the level of three-body interactions. The DS Hamiltonian and related spectrum now read \begin{subequations} \begin{eqnarray} \hat{H}_{DS} &=& t_1\,\hat{n}_d + t_2\,\hat{n}_{d}^2 + t_3\,\hat{n}_{d}^3 + t_4\,\hat{C}_{{\rm O(5)}} + t_5\,\hat{n}_d\hat{C}_{{\rm O(5)}} + t_6\,\hat{C}_{{\rm O(3)}} + t_7\,\hat{n}_d\hat{C}_{{\rm O(3)}} ~,\qquad \label{hDS3u5}\\ E_{DS} &=& t_1\, n_d + t_2\, n_{d}^2 + t_3\, n_{d}^3 + t_4\, \tau(\tau+3) + t_5\,n_{d}\tau(\tau+3) + t_6\, L(L+1) \nonumber\\ && + t_7\, n_{d}L(L+1) ~. \label{eDS3u5} \end{eqnarray} \label{ehDS3u5} \end{subequations} Terms of the form $\hat{N}\hat{C}_{G}$, with $\hat{C}_{G}$ a quadratic Casimir operator of $G={\rm U(5)},\,{\rm O(5)},\, {\rm O(3)}$, are included in $\hat{H}_{DS}$ by allowing the parameters $t_i$ to depend on $N$. 
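The spectrum~(\ref{eDS3u5}) is a closed-form function of $(n_d,\tau,L)$; the following sketch (ours) simply evaluates it, with illustrative parameter values that are not fitted to any nucleus:

```python
def e_ds_u5(nd, tau, L, t):
    """Closed-form U(5)-DS eigenvalue with three-body terms, Eq. (eDS3u5):
    E = t1*nd + t2*nd^2 + t3*nd^3 + t4*tau(tau+3) + t5*nd*tau(tau+3)
        + t6*L(L+1) + t7*nd*L(L+1)."""
    t1, t2, t3, t4, t5, t6, t7 = t
    return (t1 * nd + t2 * nd ** 2 + t3 * nd ** 3
            + t4 * tau * (tau + 3) + t5 * nd * tau * (tau + 3)
            + t6 * L * (L + 1) + t7 * nd * L * (L + 1))

# Illustrative parameter values only (not a fit to data); units of keV.
t = (700.0, 20.0, 1.5, 10.0, 0.5, 8.0, 0.2)
# Keeping only t1 recovers the harmonic (equidistant in nd) limit:
t_harm = (700.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
print(e_ds_u5(2, 2, 4, t_harm) - e_ds_u5(1, 1, 2, t_harm))   # 700.0
```

The cubic term $t_3\,n_d^3$ and the cross terms $t_5$, $t_7$ are the genuinely new three-body contributions relative to the two-body DS Hamiltonian.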
\begin{table}[t] \begin{center} \caption{\label{Tabu5tens3} \protect\small Normalized three-boson U(5) tensors.} \vspace{1mm} \begin{tabular}{cccccl} \hline & & & & &\\[-3mm] $n$&$n_d$&$\tau$& $n_{\Delta}$ &$\ell$& $\hat B^\dag_{[n]\langle n_d\rangle(\tau)n_{\Delta}\ell m}$\\ & & & & &\\[-3mm] \hline & & & & & \\[-2mm] 3& 0& 0& 0& 0& $\sqrt{\frac{1}{6}}(s^{\dag})^3$\\[2pt] 3& 1& 1& 0& 2& $\sqrt{\frac{1}{2}}(s^{\dag})^{2}d^{\dag}_{m}$\\[2pt] 3& 2& 0& 0& 0& $\sqrt{\frac{1}{2}}s^{\dag}(d^{\dag} d^{\dag})^{(0)}_0$\\[2pt] 3& 2& 2& 0& 2& $\sqrt{\frac{1}{2}}s^{\dag}(d^\dag d^\dag)^{(2)}_m$\\[2pt] 3& 2& 2& 0& 4& $\sqrt{\frac{1}{2}}s^{\dag}(d^\dag d^\dag)^{(4)}_m$\\[2pt] 3& 3& 1& 0& 2& $\sqrt{\frac{5}{14}} ((d^\dag d^\dag)^{(0)} d^\dag)^{(2)}_m$\\[2pt] 3& 3& 3& 1& 0& $\sqrt{\frac{1}{6}} ((d^\dag d^\dag)^{(2)} d^\dag)^{(0)}_0$\\[2pt] 3& 3& 3& 0& 3& $\sqrt{\frac{7}{30}} ((d^\dag d^\dag)^{(2)} d^\dag)^{(3)}_m$\\[2pt] 3& 3& 3& 0& 4& $\sqrt{\frac{7}{22}} ((d^\dag d^\dag)^{(2)} d^\dag)^{(4)}_m$\\[2pt] 3& 3& 3& 0& 6& $\sqrt{\frac{1}{6}} ((d^\dag d^\dag)^{(4)} d^\dag)^{(6)}_m$\\[4pt] & & & & & \\[-3mm] \hline \end{tabular} \end{center} \end{table} The construction of U(5)-PDS Hamiltonians with three-body terms follows the general algorithm by considering operators which annihilate, for example, the U(5) ground state, $\vert [N],n_d=\tau=L=0\rangle$. This can be accomplished by means of those U(5) tensors in Table~\ref{Tabu5tens3} with $n_d\neq 0$. Several families of U(5)-PDS Hamiltonians can be defined by identifying specific three-body terms which annihilate additional U(5) basis states. 
One such family involves the interaction \begin{subequations} \begin{eqnarray} &&\hat{V}_{0} = r_0\,G^{\dag}_{0}G_{0} + e_{0}\left (G^{\dag}_0 K_0 + K^{\dag}_{0}G_0 \right ) \label{V0a}\\ &&\hat{V}_{0}\vert [N], n_d=\tau, \tau, n_{\Delta}=0, L M\rangle = 0 \qquad L=\tau,\tau+1,\ldots,2\tau-2,2\tau \qquad\qquad \label{V0b} \end{eqnarray} \label{V03bod} \end{subequations} where $G^{\dag}_{L,\mu} = [(d^\dag d^\dag)^{(\rho)} d^\dag]^{(L)}_{\mu}$ with $(\rho,L)=(2,0),\, (0,2),\, (2,3),\, (2,4),\, (4,6)$ and $K^{\dag}_{L,\mu} = s^{\dag}(d^{\dag} d^{\dag})^{(L)}_{\mu}$ with $L=0,\, 2,\, 4$. As shown in~\cite{talmi03}, the states of Eq.~(\ref{V0b}) may be projected from states created by acting on the vacuum by a product of $\tau$ operators $d^{\dag}_{m_i}$ with $m_i\geq 1$ for $i=1,\dots,\tau$. Hence, such states are guaranteed to contain no triplets of $d$ bosons coupled to $L=0$ and, therefore, are annihilated by $G_0$. The same set of states is annihilated also by $K_0$, since all states with $n_d=\tau$ vanish under the action of $\tilde{d}\cdot\tilde{d}$~\cite{arimaiac76}. The remaining eigenstates of $\hat{V}_0$~(\ref{V03bod}) are mixed with respect to both U(5) and O(5). Clearly, $\hat{H}_{DS} + \hat{V}_0$ exhibits a U(5)-PDS of type I. A second family of PDS Hamiltonians involves the interaction \begin{subequations} \begin{eqnarray} &&\hat{V}_{2} = a_{2}\, \Pi^{(2)}\cdot U^{(2)} + \sum_{L=0,2,4}e_{L}\left (\,G^{\dag}_L\cdot \tilde{K}_L + H.c.\,\right ) \qquad\qquad \label{V2a}\\ &&\hat{V}_{2}\vert [N], n_d=\tau=L=3\rangle = 0 ~, \label{V2b} \end{eqnarray} \label{V23bod} \end{subequations} where $\tilde{G}_{L,\mu} = (-1)^{\mu}G_{L,-\mu}$ and $\tilde{K}_{L,\mu} = (-1)^{\mu}K_{L,-\mu}$. The relation in Eq.~(\ref{V2b}) follows from arguments similar to those given after Eq.~(\ref{V2nd3}). Other eigenstates of $\hat{V}_2$~(\ref{V23bod}) are mixed in the U(5) basis. Clearly, $\hat{H}_{DS} + \hat{V}_2$ exhibits a U(5)-PDS of type I. 
A third family of PDS Hamiltonians involves the interaction \begin{eqnarray} \hat{V}_{3} = r_3\,G^{\dag}_{3}\cdot{G}_{3} + r_0\,G^{\dag}_{0}G_{0} ~. \label{V3} \end{eqnarray} Both terms in Eq.~(\ref{V3}) conserve the U(5) quantum number $n_d$, but are not O(5) scalars. They can induce O(5) mixing with $\Delta\tau=2,4,6$, and their multipole form involves U(5) generators, some of which are not contained in the O(5) subalgebra. As such, and in accord with the discussion at the end of Subsection~\ref{subsec:o6PDStypeII}, $\hat{H}_{DS} + \hat{V}_3$ exhibits U(5)-PDS of type II. Since both terms in $\hat{V}_3$ are rotational scalars and are diagonal in $n_d$, it follows that in a given $n_d$ multiplet, those $L$-states which have a unique $\tau$-assignment remain pure with respect to O(5), and hence are good U(5) eigenstates. For example, the states with $n_d=\tau, n_{\Delta}=0,\, L =2n_d,\, 2n_d-2, 2n_d-3, 2n_d-5$, or states with $L=0$ and $n_d\leq 5$, or states with $L=3$ and $n_d\leq 8$, are all eigenstates of $\hat{V}_3$, diagonal in the U(5) basis. In this sense, $\hat{H}_{DS} + \hat{V}_3$ also exhibits O(5)-PDS of type~I. \subsection{O(6) PDS (type I) with three-body terms} \label{subsec:o6PDS3bod} The O(6) dynamical symmetry (DS) chain, ${\rm U(6)}\supset {\rm O(6)}\supset {\rm O(5)}\supset {\rm O(3)}$, and its related basis states, $|[N]\langle\Sigma\rangle(\tau)n_\Delta LM\rangle$, were discussed in Subsection~\ref{subsec:o6PDStypeI}. The DS Hamiltonian is given in Eq.~(\ref{hDSo6}) and no new terms are added to it at the level of three-body interactions. 
According to the general algorithm, the construction of interactions with O(6)-PDS of type I requires $n$-boson creation and annihilation operators with definite tensor character in the O(6) basis: \begin{equation} \hat B^\dag_{[n]\langle\sigma\rangle(\tau)n_{\Delta}\ell m}, \;\; \tilde{B}_{[n^5]\langle\sigma\rangle(\tau)n_{\Delta}\ell m} \equiv (-1)^{\ell-m} \left(\hat B^\dag_{[n]\langle\sigma\rangle(\tau)n_{\Delta}\ell,-m} \right)^\dag. \label{tenso6} \end{equation} Of particular interest are tensor operators with $\sigma<n$. They have the property \begin{equation} \tilde{B}_{[n^5]\langle\sigma\rangle(\tau)n_{\Delta}\ell m} |[N]\langle N\rangle(\tau)n_{\Delta} LM\rangle=0, \qquad \sigma<n, \label{anniso6} \end{equation} for all possible values of $\tau,n_{\Delta},L$ contained in the O(6) irrep $\langle N\rangle$. This is so because the action of $\tilde{B}_{[n^5]\langle\sigma\rangle(\tau)n_{\Delta}\ell m}$ leads to an $(N-n)$-boson state that contains the O(6) irreps $\langle\Sigma\rangle=\langle N-n-2i\rangle,\,i=0,1,\dots$ which cannot be coupled with $\langle\sigma\rangle$ to yield $\langle\Sigma\rangle=\langle N\rangle$, since $\sigma<n$. Number-conserving normal-ordered interactions that are constructed out of such tensors with $\sigma<n$ (and their Hermitian conjugates) thus have $|[N]\langle N\rangle(\tau)n_\Delta LM\rangle$ as eigenstates with zero eigenvalue~\cite{RamLevVan09}. 
\begin{table}[t] \begin{center} \caption{\label{Tabo6tens3} \protect\small Normalized three-boson O(6) tensors.} \vspace{1mm} \begin{tabular}{cccccl} \hline & & & & &\\[-3mm] $n$&$\sigma$&$\tau$&$n_{\Delta}$&$\ell$& $\hat B^\dag_{[n]\langle\sigma\rangle(\tau)n_{\Delta}\ell m}$\\[4pt] & & & & &\\[-3mm] \hline & & & & & \\[-2mm] 3& 3& 0& 0& 0& $\sqrt{\frac{3}{16}}s^\dag (d^\dag d^\dag)^{(0)}_0 +\sqrt{\frac{5}{48}}(s^{\dag})^{3}$\\[2pt] 3& 3& 1& 0& 2& $\sqrt{\frac{5}{112}}((d^\dag d^\dag)^{(0)} d^\dag)^{(2)}_m +\sqrt{\frac{7}{16}}(s^{\dag})^{2}d^{\dag}_m$\\[2pt] 3& 3& 2& 0& 2& $\sqrt{\frac{1}{2}}s^\dag (d^\dag d^\dag)^{(2)}_m$\\[2pt] 3& 3& 2& 0& 4& $\sqrt{\frac{1}{2}}s^\dag (d^\dag d^\dag)^{(4)}_m$\\[2pt] 3& 3& 3& 1& 0& $\sqrt{\frac{1}{6}}((d^\dag d^\dag)^{(2)} d^\dag)^{(0)}_0$\\[2pt] 3& 3& 3& 0& 3& $\sqrt{\frac{7}{30}}((d^\dag d^\dag)^{(2)} d^\dag)^{(3)}_m$\\[2pt] 3& 3& 3& 0& 4& $\sqrt{\frac{7}{22}}((d^\dag d^\dag)^{(2)} d^\dag)^{(4)}_m$\\[2pt] 3& 3& 3& 0& 6& $\sqrt{\frac{1}{6}}((d^\dag d^\dag)^{(4)} d^\dag)^{(6)}_m$\\[2pt] 3& 1& 0& 0& 0& $\sqrt{\frac{5}{16}}s^\dag (d^\dag d^\dag)^{(0)}_0 -\sqrt{\frac{1}{16}}(s^{\dag})^{3}$\\[2pt] 3& 1& 1& 0& 2& $\sqrt{\frac{5}{16}}((d^\dag d^\dag)^{(0)} d^\dag)^{(2)}_m -\sqrt{\frac{1}{16}}(s^{\dag})^{2}d^{\dag}_m$\\[4pt] & & & & &\\[-3mm] \hline \end{tabular} \end{center} \end{table} As shown in Subsection~\ref{subsec:o6PDStypeI}, there is one two-boson operator $P^{\dagger}_{0}= d^{\dagger}\cdot d^{\dagger} - (s^{\dagger})^2$, Eq.~(\ref{Pdag0o6}), with $\sigma<n=2$, which gives rise to an O(6)-invariant interaction, $P^{\dag}_{0}P_{0}$, related to the completely solvable Casimir operator of O(6), Eq.~(\ref{HPSo6}). 
On the other hand, from Table~\ref{Tabo6tens3}, one recognizes two three-boson O(6) tensors with $\sigma<n=3$ \begin{equation} \hat B^\dag_{[3]\langle1\rangle(1)0;2m}= \textstyle{\frac{1}{4}} P^{\dag}_{0}d^\dag_m, \quad \hat B^\dag_{[3]\langle1\rangle(0)0;00}= \textstyle{\frac{1}{4}} P^{\dag}_{0}s^\dag, \label{three} \end{equation} and from these one can construct the interactions with an O(6) PDS. The only three-body interactions that are partially solvable in O(6) are thus $P^{\dag}_{0}\hat n_s P_{0}$ and $P^{\dag}_{0}\hat n_d P_{0}$. Since the combination $P^{\dag}_{0}(\hat n_s+\hat n_d)P_{0} = (\hat{N} -2)P^{\dag}_{0}P_{0}$ is completely solvable in O(6), there is only one genuine partially solvable three-body interaction which can be chosen as $P^{\dag}_{0}\hat n_s P_{0}$, with tensorial components $\sigma=0,\,2$. The O(6)-DS spectrum \setcounter{figure}{14} \begin{figure*}[t] \begin{center} \leavevmode \includegraphics[width=0.90\linewidth]{fig15pt196.eps} \caption{ \small Observed spectrum of $^{196}$Pt compared with the calculated spectra of $\hat H_{\rm DS}$~(\ref{hDSo6}), with O(6) dynamical symmetry (DS), and of $\hat H_{\rm PDS}$~(\ref{hPDSo63bod}) with O(6) partial dynamical symmetry (PDS). The parameters in $\hat H_{\rm DS}$ $(\hat H_{\rm PDS})$ are $h_0=43.6\, (30.7)$, $B=44.0\, (44.0)$, $C=17.9\, (17.9)$, and $\eta=0\, (8.7)$ keV. The boson number is $N=6$ and $\Sigma$ is an O(6) label. Adapted from~\cite{RamLevVan09}.} \label{figpt196} \end{center} \end{figure*} \begin{eqnarray} E_{\rm DS} &=& 4h_{0}\,(N-v +2)v + B\,\tau(\tau+3) +\, C\,L(L+1) ~, \label{eDSo6v} \end{eqnarray} resembles that of a $\gamma$-unstable deformed rotovibrator, where states are arranged in bands with O(6) quantum number $\Sigma=N-2v$, $(v=0,1,2,\ldots)$. The O(5) and O(3) terms in the dynamical symmetry Hamiltonian, $\hat{H}_{\rm DS}$~(\ref{hDSo6}), govern the in-band rotational splitting. 
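As a consistency check (ours, using the standard identity $d^{\dag}\cdot d^{\dag}=\sqrt{5}\,(d^{\dag}d^{\dag})^{(0)}_{0}$), the first identification in Eq.~(\ref{three}) reproduces the normalized $\sigma=1,\,\tau=1$ tensor of Table~\ref{Tabo6tens3}:

```latex
\begin{eqnarray}
\textstyle{\frac{1}{4}}\,P^{\dag}_{0}d^{\dag}_{m}
&=& \textstyle{\frac{1}{4}}
    \left(d^{\dag}\cdot d^{\dag} - (s^{\dag})^{2}\right) d^{\dag}_{m}
\nonumber\\
&=& \sqrt{\textstyle{\frac{5}{16}}}\,
    \left((d^{\dag} d^{\dag})^{(0)} d^{\dag}\right)^{(2)}_{m}
  - \sqrt{\textstyle{\frac{1}{16}}}\,(s^{\dag})^{2}d^{\dag}_{m} ~,
\end{eqnarray}
```

and, in the same way, $\frac{1}{4}P^{\dag}_{0}s^{\dag}$ reproduces the $\sigma=1,\,\tau=0$ entry, confirming that both tensors carry $\sigma=1<n=3$.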
A comparison with the experimental spectrum and E2 rates of $^{196}$Pt is shown in Fig.~\ref{figpt196} and Table~\ref{Tabbe2pt196}. \begin{table}[t] \begin{center} \caption{\label{Tabbe2pt196} \protect\small Observed (EXP) and calculated B(E2) values (in $e^2{\rm b}^2$) for $^{196}$Pt. For both the exact (DS) and partial (PDS) O(6) dynamical symmetry calculations, the E2 operator is that of Eq.~(\ref{Te2}) with $e_{B}=0.151$ $e$b and $\chi=0.29$. Only the state $0^{+}_3$ has a mixed O(6) character. Adapted from~\cite{RamLevVan09}.} \vspace{1mm} \begin{tabular}{clll|clll} \hline & & & & & & &\\[-3mm] Transition& EXP & DS &PDS & Transition& EXP & DS &PDS\\ & & & & & & &\\[-3mm] \hline & & & & & & &\\[-2mm] $2^+_1\rightarrow0^+_1$& 0.274~(1) & 0.274& 0.274 & $2^+_3\rightarrow0^+_2$& 0.034~(34) & 0.119& 0.119\\[2pt] $2^+_2\rightarrow2^+_1$& 0.368~(9) & 0.358& 0.358 & $2^+_3\rightarrow4^+_1$& 0.0009~(8) & 0.0004& 0.0004\\[2pt] $2^+_2\rightarrow0^+_1$& 3.10$^{-8}$(3) & 0.0018& 0.0018 & $2^+_3\rightarrow2^+_2$& 0.0018~(16)& 0.0013& 0.0013\\[2pt] $4^+_1\rightarrow2^+_1$& 0.405~(6) & 0.358& 0.358 & $2^+_3\rightarrow0^+_1$& 0.00002~(2)& 0 & 0 \\[2pt] $0^+_2\rightarrow2^+_2$& 0.121~(67) & 0.365& 0.365 & $6^+_2\rightarrow6^+_1$& 0.108~(34) & 0.103& 0.103\\[2pt] $0^+_2\rightarrow2^+_1$& 0.019~(10) & 0.003& 0.003 & $6^+_2\rightarrow4^+_2$& 0.331~(88) & 0.221& 0.221\\[2pt] $4^+_2\rightarrow4^+_1$& 0.115~(40) & 0.174& 0.174 & $6^+_2\rightarrow4^+_1$& 0.0032~(9) & 0.0008& 0.0008\\[2pt] $4^+_2\rightarrow2^+_2$& 0.196~(42) & 0.191& 0.191 & $0^+_3\rightarrow2^+_2$&$<0.0028$ & 0.0037& 0.0028\\[2pt] $4^+_2\rightarrow2^+_1$& 0.004~(1) & 0.001& 0.001 & $0^+_3\rightarrow2^+_1$&$<0.034$ & 0 & 0 \\[2pt] $6^+_1\rightarrow4^+_1$& 0.493~(32) & 0.365& 0.365 & & & & \\[4pt] \hline \end{tabular} \end{center} \end{table} The O(6)-DS limit is seen to provide a good description for properties of states in the ground band $(\Sigma=N)$. 
This observation was the basis of the claim~\cite{Cizewski78} that the O(6)-DS is manifested empirically in $^{196}$Pt. However, the resulting fit to energies of excited bands is quite poor. The $0^+_1$, $0^+_3$, and $0^+_4$ levels of $^{196}$Pt at excitation energies 0, 1403, 1823 keV, respectively, are identified as the bandhead states of the ground $(v=0)$, first- $(v=1)$ and second- $(v=2)$ excited vibrational bands~\cite{Cizewski78}. Their empirical anharmonicity, defined by the ratio $R=E(v=2)/E(v=1)-2$, is found to be $R=-0.70$. In the O(6)-DS limit these bandhead states have $\tau=L=0$ and $\Sigma=N,N-2,N-4$, respectively. The anharmonicity $R=-2/(N+1)$, as calculated from Eq.~(\ref{eDSo6v}), is fixed by $N$. For $N=6$, which is the appropriate boson number for $^{196}$Pt, the O(6)-DS value is $R=-0.29$, which is in marked disagreement with the empirical value. A detailed study of double-phonon excitations within the IBM has concluded that large anharmonicities can be incorporated only by the inclusion of at least cubic terms in the Hamiltonian~\cite{ramos00b}. In the IBM there are 17 possible three-body interactions~\cite{ibm}. One is thus confronted with the need to select suitable higher-order terms that can break the DS in excited bands but preserve it in the ground band. On the basis of the preceding discussion this can be accomplished by the following Hamiltonian with O(6)-PDS~\cite{RamLevVan09} \begin{equation} \hat{H}_{\rm PDS}=\hat{H}_{\rm DS}+ \eta\,P^{\dag}_{0}\hat{n}_s P_{0}, \label{hPDSo63bod} \end{equation} where the terms are defined in Eqs.~(\ref{hDSo6}) and~(\ref{three}). The spectrum of $\hat{H}_{\rm PDS}$ is shown in Fig.~\ref{figpt196}. The states belonging to the $\Sigma=N=6$ multiplet remain solvable with energies given by the same DS expression, Eq.~(\ref{eDSo6v}). States with $\Sigma < 6$ are generally admixed but agree better with the data than in the DS calculation. 
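The quoted DS anharmonicity follows from elementary arithmetic on the bandhead energies of Eq.~(\ref{eDSo6v}); a minimal check (ours):

```python
def bandhead_energy(v, N, h0=1.0):
    """Bandhead (tau = L = 0) energy in the O(6)-DS limit, Eq. (eDSo6v):
    E = 4*h0*(N - v + 2)*v for the v-th vibrational band (Sigma = N - 2v)."""
    return 4.0 * h0 * (N - v + 2) * v

def anharmonicity(N):
    """R = E(v=2)/E(v=1) - 2; analytically R = -2/(N+1) in the DS limit."""
    return bandhead_energy(2, N) / bandhead_energy(1, N) - 2.0

# For N = 6 (the boson number of 196Pt) the DS value is -2/7 ~ -0.29,
# far from the empirical R = -0.70, motivating the cubic term in
# Eq. (hPDSo63bod).
print(round(anharmonicity(6), 2))   # -0.29
```

Note that $R$ depends only on $N$, so within the DS limit the anharmonicity cannot be adjusted independently of the boson number.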
For example, the bandhead states of the first- (second-) excited bands have the O(6) decomposition $\Sigma=4$: $76.5\%\,(19.6\%)$, $\Sigma=2$: $16.1\%\,(18.4\%)$, and $\Sigma=0$: $7.4\%\,(62.0\%)$. Thus, although the ground band is pure, the excited bands exhibit strong O(6) breaking. The calculated O(6)-PDS anharmonicity for these bands is $R=-0.63$, much closer to the empirical value, $R=-0.70$. It should be emphasized that not only the energies but also the wave functions of the $\Sigma=N$ states remain unchanged when the Hamiltonian is generalized from DS to PDS. Consequently, the E2 rates for transitions among this class of states are the same in the DS and PDS calculations. Thus, the additional three-body term in the Hamiltonian~(\ref{hPDSo63bod}) does not spoil the good O(6)-DS description for this segment of the spectrum. This is evident in Table~\ref{Tabbe2pt196}, where most of the E2 data concern transitions between $\Sigma=N=6$ states. Only transitions involving states from excited bands ({\it e.g.}, the $0^+_3$ state in Table~\ref{Tabbe2pt196}) can distinguish between DS and PDS. Unfortunately, such interband E2 rates are presently poorly known experimentally. Their measurement is highly desirable for further testing the O(6)-PDS wave functions. \section{PDS and Coupled Systems} \label{PDSibm2} So far, the notion of partial dynamical symmetries has been presented in the framework of algebraic models involving only one species of constituent particle. It is of great interest to extend this notion to the case of coupled systems involving two (or more) species of particles. In this case, the appropriate spectrum generating algebra, $G_1\times G_2$, contains the direct product of the two algebraic structures for systems 1 and 2. An example of such a coupled system is the proton-neutron version of the interacting boson model (IBM-2)~\cite{ibm,arima77,otsuka78}. 
The building blocks of the model are monopole and quadrupole bosons, $\{s^{\dag}_{\rho},\,d^{\dag}_{\rho\mu}\}$, of proton type ($\rho=\pi$) and of neutron type ($\rho=\nu$), representing pairs of identical valence nucleons. Number-conserving bilinear combinations of operators in each set comprise the ${\rm U}_{\rho}(6)$ algebra as in the IBM-1, Eq.~(\ref{u6gen}), and bosons of different types commute. Since the separate proton- and neutron-boson numbers, $\hat{N}_{\pi}$ and $\hat{N}_{\nu}$, are conserved, the appropriate spectrum generating algebra of the model is ${\rm U}_{\pi}(6)\times {\rm U}_{\nu}(6)$. Subalgebras can be constructed with the aid of the individual subalgebras, ${\rm U}_{\rho}(5),\, {\rm SU}_{\rho}(3),\, {\rm O}_{\rho}(6),\, {\rm O}_{\rho}(5),\,{\rm O}_{\rho}(3)$. For instance, for a given algebra $G_{\rho}$, with generators $g_{\rho}$, there is a combined algebra $G_{\pi+\nu}$, with generators $g_{\pi}+ g_{\nu}$. The dynamical symmetries of the IBM-2 are obtained by identifying the lattices of embedded algebras starting with ${\rm U}_{\pi}(6)\times {\rm U}_{\nu}(6)$ and ending with the symmetry algebra ${\rm O}_{\pi+\nu}(3)$. A new aspect in coupled systems is the occurrence of states which are not symmetric with respect to interchange of the two constituents. This is clearly seen in the reduction \begin{eqnarray} \begin{array}{ccc} {\rm U}_{\pi}(6)\times {\rm U}_{\nu}(6) & \supset & {\rm U}_{\pi+\nu}(6)\\ \downarrow&&\downarrow\\ \, [N_{\pi}]\times [N_{\nu}] && [N_1, N_2] \end{array} ~. \label{u6pinu} \end{eqnarray} For a given irrep of ${\rm U}_{\pi}(6)\times {\rm U}_{\nu}(6)$, characterized by $N_{\pi}$ and $N_{\nu}$, the allowed irreps of ${\rm U}_{\pi+\nu}(6)$ are $[N_1,N_2] = [N_{\pi} + N_{\nu} -k, k]$, where $k=0,1,\ldots, {\rm min}\{N_{\pi},\, N_{\nu}\}$. States in the irreps with $N_2\neq 0$ ($k\neq 0$) are not symmetric with respect to $\pi$ and $\nu$ bosons. 
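The ${\rm U}_{\pi+\nu}(6)$ branching just quoted is a one-line enumeration; a minimal sketch (ours):

```python
def u6_pinu_irreps(N_pi, N_nu):
    """Allowed U_{pi+nu}(6) irreps [N1, N2] in the reduction of
    [N_pi] x [N_nu]:  [N1, N2] = [N_pi + N_nu - k, k],
    k = 0, 1, ..., min(N_pi, N_nu).  Irreps with k > 0 contain the
    states that are not symmetric under pi <-> nu interchange."""
    return [(N_pi + N_nu - k, k) for k in range(min(N_pi, N_nu) + 1)]

print(u6_pinu_irreps(3, 2))   # [(5, 0), (4, 1), (3, 2)]
```

The number of allowed irreps is thus ${\rm min}\{N_{\pi},N_{\nu}\}+1$, with the fully symmetric irrep $[N_{\pi}+N_{\nu},0]$ always present.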
One of the successes of the IBM-2 has been the empirical discovery of such low-lying mixed-symmetry states in nuclei, in which valence protons and neutrons move out of phase. A complete listing of all possible partial dynamical symmetries (PDS) of the IBM-2 is outside the scope of the present review. In what follows, we present a sample of such symmetry structures, illuminating new features of PDS in coupled systems. A coupled algebraic structure, $G_1\times G_2$, can also involve fermionic algebras, as well as Bose-Fermi algebras. An example of the latter is the interacting boson-fermion model (IBFM)~\cite{ibfm}, used for describing odd-mass nuclei and broken fermion pairs in even-even nuclei. The model incorporates collective (bosonic) and quasi-particle (fermionic) degrees of freedom, and the associated spectrum generating algebra is ${\rm U}_{B}(6)\times {\rm U}_{F}(m)$. Here ${\rm U}_{B}(6)$ and ${\rm U}_{F}(m)$ are the boson and fermion algebras, respectively, and $m=\sum_{i}(2j_i+1)$ is the dimension of the single-particle space ($j_i$ are the angular momenta of the occupied shell-model orbits). Bose-Fermi symmetries correspond to dynamical symmetries of ${\rm U}_{B}(6)\times {\rm U}_{F}(m)$. Supersymmetry corresponds to a further embedding of the Bose-Fermi symmetry into a graded Lie algebra ${\rm U}(6/m)\supset {\rm U_{B}}(6)\times {\rm U_{F}}(m)$. Partial Bose-Fermi symmetries and partial supersymmetries have not been considered in detail so far. There are initial hints that such a structure can occur in the IBFM~\cite{jolos00}; however, an in-depth systematic study is called for. 
\subsection{F-spin and selected PDS in the IBM-2} \label{subsec:FspinPDS} The proton-neutron degrees of freedom are naturally reflected in the IBM-2 via an ${\rm SU}_{F}(2)$ F-spin algebra~\cite{arima77} with generators \begin{eqnarray} \hat{F}_{+} = s^{\dagger}_{\pi} s_{\nu} + d^{\dagger}_{\pi}\cdot \tilde d_{\nu} \;\; , \;\; \hat{F}_{-} = (\hat{F}_{+})^{\dagger}\;\; , \;\; \hat{F}_{0} = (\hat{N}_{\pi} - \hat{N}_{\nu})/2 ~. \label{su2F} \end{eqnarray} These generators commute with the total boson number operator, $\hat{N}=\hat{N}_{\pi}+\hat{N}_{\nu}$, which is a ${\rm U}_{N}(1)$ generator. The basic F-spin doublets are $(s^{\dagger}_{\pi},s^{\dagger}_{\nu})$, and $(d^{\dagger}_{\pi \mu},d^{\dagger}_{\nu \mu})$, with F-spin projection +1/2 ($-$1/2) for proton (neutron) bosons. The algebras ${\rm SU}_{F}(2)\times {\rm U}_{N}(1)$~(\ref{su2F}) and ${\rm U}_{\pi+\nu}(6)$~(\ref{u6pinu}) commute and obey a duality relationship, in the sense that their irreps are related by $F = (N_1-N_2)/2 = (N_{\pi} + N_{\nu})/2 -k$ and $N=N_1+N_2=N_{\pi}+ N_{\nu}$. In a given nucleus, with fixed $N_{\pi}$, $N_{\nu}$, all states have the same value of $F_{0}= (N_{\pi} - N_{\nu})/2$, while the allowed values of the F-spin quantum number F range from $|F_{0}|$ to $F_{max} \equiv (N_{\pi}+N_{\nu})/2 \equiv N/2$ in unit steps. F-spin characterizes the $\pi$-$\nu$ symmetry properties of IBM-2 states. States with maximal F-spin, $F \equiv F_{max}$, are fully symmetric and correspond to the IBM-1 states with only one type of bosons~\cite{ibm}. There are several arguments, {\it e.g.}, the empirical success of IBM-1, the identification of F-spin multiplets~\cite{harter85,brentano85,gupta89,zamfir92} (series of nuclei with constant $F$ and varying $F_{0}$ with nearly constant excitation energies), and weakness of M1 transitions, which lead to the belief that low lying collective states have predominantly $F=F_{max}$~\cite{lipas90}. 
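As a sketch (ours) of why the operators in Eq.~(\ref{su2F}) close an SU(2) algebra, note that with the conventions $\tilde{d}_{\mu}=(-1)^{\mu}d_{-\mu}$ and $A\cdot B=\sum_{\mu}(-1)^{\mu}A_{\mu}B_{-\mu}$ one has $d^{\dagger}_{\pi}\cdot\tilde{d}_{\nu}=\sum_{\mu}d^{\dagger}_{\pi\mu}d_{\nu\mu}$, and hence

```latex
\begin{eqnarray}
\Bigl[\hat{F}_{+}\,,\,\hat{F}_{-}\Bigr]
&=& \bigl(\hat{n}_{s_{\pi}}-\hat{n}_{s_{\nu}}\bigr)
  + \sum_{\mu}\bigl(\hat{n}_{d_{\pi\mu}}-\hat{n}_{d_{\nu\mu}}\bigr)
 \;=\; \hat{N}_{\pi}-\hat{N}_{\nu} \;=\; 2\hat{F}_{0} ~,
\nonumber\\
\Bigl[\hat{F}_{0}\,,\,\hat{F}_{\pm}\Bigr] &=& \pm\,\hat{F}_{\pm} ~,
\end{eqnarray}
```

so $(\hat{F}_{+},\hat{F}_{-},\hat{F}_{0})$ indeed generate ${\rm SU}_{F}(2)$, with $\hat{F}^{2}$ and $\hat{F}_{0}$ supplying the labels $F$ and $F_{0}$ used below.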
States with $F<F_{max}$ correspond to `mixed-symmetry' states~\cite{iac84}; most notably, the orbital magnetic dipole scissors mode~\cite{boh84} has by now been established experimentally as a general phenomenon in deformed even-even nuclei~\cite{rich95}. \begin{table}[t] \begin{center} \caption[]{\label{TabFspinmult} \protect\small Energies (in MeV) of $2^{+}$ levels of the ground ($g$), $\gamma$ and $\beta$ bands in F-spin multiplets. The mass numbers are $A= 132 + 4F$. Adapted from~\cite{levgin00}.} \vspace{1mm} \begin{tabular}{lccccccc} \hline & & & & & & &\\[-3mm] F & Energy & $^{A}$Dy & $^{A+4}$Er & $^{A+8}$Yb & $^{A+12}$Hf & $^{A+16}$W & $^{A+20}$Os \\ & & & & & & &\\[-3mm] \hline & & & & & & &\\[-2mm] 6 & $E(2^{+}_{g})$ & 0.14 & 0.13 & 0.12 & 0.12 & 0.12 & 0.14 \\[2pt] & $E(2^{+}_{\gamma})$ & 0.89 & 0.85 & 0.86 & 0.88 & & 0.86 \\[2pt] & $E(2^{+}_{\beta})$ & 0.83 & 1.01 & 1.07 & 1.06 & & 0.74 \\ & & & & & & &\\[-3mm] \hline & & & & & & &\\[-2mm] 13/2 & $E(2^{+}_{g})$ & 0.10 & 0.10 & 0.10 & 0.10 & 0.11 & 0.13 \\[2pt] & $E(2^{+}_{\gamma})$ & 0.95 & 0.90 & 0.93 & 0.96 & & \\[2pt] & $E(2^{+}_{\beta})$ & 1.09 & 1.17 & 1.14 & 0.99 & & \\ & & & & & & &\\[-3mm] \hline & & & & & & &\\[-2mm] 7 & $E(2^{+}_{g})$ & 0.09 & 0.09 & 0.09 & 0.10 & 0.11 & 0.13 \\[2pt] & $E(2^{+}_{\gamma})$ & 0.97 & 0.86 & 0.98 & 1.08 & & 0.87 \\[2pt] & $E(2^{+}_{\beta})$ & 1.35 & 1.31 & 1.23 & 0.95 & & 0.83 \\ & & & & & & &\\[-3mm] \hline & & & & & & &\\[-2mm] 15/2 & $E(2^{+}_{g})$ & 0.08 & 0.08 & 0.08 & 0.09 & 0.11 & \\[2pt] & $E(2^{+}_{\gamma})$ & 0.89 & 0.79 & 1.15 & 1.23 & 1.11 & \\[2pt] & $E(2^{+}_{\beta})$ & 1.45 & 1.53 & 1.14 & 0.90 & 1.08 & \\ & & & & & & &\\[-3mm] \hline & & & & & & &\\[-2mm] 8 & $E(2^{+}_{g})$ & 0.07 & 0.08 & 0.08 & 0.09 & & \\[2pt] & $E(2^{+}_{\gamma})$ & 0.76 & 0.82 & 1.47 & 1.34 & & \\[2pt] & $E(2^{+}_{\beta})$ & & 1.28 & 1.12 & 1.23 & & \\ & & & & & & &\\[-3mm] \hline & & & & & & &\\[-2mm] 17/2 & $E(2^{+}_{g})$ & 0.08 & 0.08 & 0.08 & & & \\[2pt] &
$E(2^{+}_{\gamma})$ & 0.86 & 0.93 & 1.63 & & & \\[2pt] & $E(2^{+}_{\beta})$ & 1.21 & 0.96 & 1.56 & & & \\ & & & & & & &\\[-3mm] \hline \end{tabular} \end{center} \end{table} Various procedures have been proposed to estimate the F-spin purity of low lying states~\cite{lipas90}. In the majority of analyses, based on M1 transitions (which should vanish between pure $F=F_{max}$ states), magnetic moments and energy systematics of mixed-symmetry states, the F-spin admixtures in low lying states are found to be of a few percent ($<10\%$), typically $2\%-4\%$~\cite{lipas90}. In spite of its appeal, however, F-spin cannot be an exact symmetry of the Hamiltonian. The assumption of F-spin scalar Hamiltonians is at variance with the microscopic interpretation of the IBM-2, which necessitates different effective interactions between like and unlike nucleons~\cite{Talmi93}. Furthermore, if F-spin were a symmetry of the Hamiltonian, then {\it all} states would have good F-spin and would be arranged in F-spin multiplets. Experimentally, this is not the case. As noted in an analysis~\cite{brentano85,gupta89} of rare earth nuclei, the ground bands are in F-spin multiplets, whereas the vibrational $\beta$ bands and some $\gamma$ bands do not form good F-spin multiplets. The empirical situation in the deformed Dy-Os region is portrayed in Table~\ref{TabFspinmult} and Fig.~\ref{figFspinmult}. From Table~\ref{TabFspinmult} it is seen that, for $F>13/2$, the energies of the $L=2^{+}$ members of the $\gamma$ bands vary rapidly across the multiplet and not always monotonically. The variation in the energies of the $\beta$ bands is large and irregular. Thus both microscopic and empirical arguments rule out F-spin invariance of the Hamiltonian. F-spin can at best be an approximate quantum number which is good only for a selected set of states while other states are mixed.
We are thus confronted with a situation of having `special states' endowed with a good symmetry which does not arise from invariance of the Hamiltonian. These are precisely the characteristics of a partial symmetry, for which a non-scalar Hamiltonian produces a subset of special (at times solvable) states with good symmetry. In what follows we present IBM-2 Hamiltonians with F-spin as a partial symmetry~\cite{levgin00}. The construction process is similar to that employed in Section~\ref{sec:PartialSolv} for obtaining partially-solvable IBM-1 Hamiltonians. Predictions of F-spin PDS are then confronted with empirical data. \begin{figure}[t] \begin{center} \includegraphics[height=11cm,angle=-90]{fig16Fspin.eps} \caption{ \small Experimental levels of the ground, $\gamma$ and $\beta$ bands in an F-spin multiplet $F=6$ of rare earth nuclei. Levels shown are up to $L=8^{+}_{g}$ for the ground band, $L=2^{+}_{\gamma},3^{+}_{\gamma}$ for the $\gamma$ band (diamonds connected by dashed lines) and $L=0^{+}_{\beta},2^{+}_{\beta}$ for the $\beta$ band (squares connected by dotted lines). Adapted from~\cite{levgin00}. \label{figFspinmult}} \end{center} \end{figure} The ground band in the IBM-2 is represented by an intrinsic state which is a product of a proton condensate and a rotated neutron condensate with $N_{\pi}$ and $N_{\nu}$ bosons, respectively~\cite{levkir90}. It depends on the quadrupole deformations, $\beta_{\rho},\gamma_{\rho}$ ($\rho=\pi,\nu$), of the proton-neutron equilibrium shapes and on the relative orientation angles $\Omega$ between them. For $\beta_{\rho}>0$, the intrinsic state is deformed and members of the rotational ground-state band are obtained from it by projection. It has been shown in~\cite{lev90} that the intrinsic state will have a well-defined F-spin, $F=F_{max}$, when the proton-neutron shapes are aligned and have equal deformations.
The conditions ($\beta_{\pi}= \beta_{\nu}$, $\gamma_{\pi}=\gamma_{\nu}$, $\Omega=0$) are weaker than the conditions for F-spin invariance, which makes it possible for a non-F-scalar IBM-2 Hamiltonian to have an equilibrium intrinsic state with pure F-spin. Focusing on the most likely situation, namely, aligned axially symmetric (prolate) deformed shapes ($\beta_{\rho}=\beta$, $\gamma_{\rho}=\Omega=0$), the equilibrium deformed intrinsic state for the ground band with $F=F_{max}$ has the form \begin{eqnarray} \vert c; K=0 \rangle &\equiv& \vert N_{\pi},N_{\nu} \rangle = (N_{\pi}!N_{\nu}!)^{- 1/2}(b^{\dagger}_{c,\pi})^{N_{\pi}} \, (b^{\dagger}_{c,\nu})^{N_{\nu}} \vert 0 \rangle ~, \nonumber\\ b^{\dagger}_{c,\rho} &=& (1 + \beta^{2}\,)^{-1/2} (\,s^{\dagger}_{\rho} + \beta\, d^{\dagger}_{\rho, 0} \, ) ~, \label{condibm2} \end{eqnarray} where $K$ denotes the angular momentum projection on the symmetry axis. The construction of a partially-solvable IBM-2 Hamiltonian with F-spin partial symmetry can be accomplished by means of the following boson-pair operators~\cite{levgin00} \begin{eqnarray} \begin{array}{ll} R^{\dagger}_{\rho,0} = d^{\dagger}_{\rho} \cdot d^{\dagger}_{\rho} - \beta ^{2}(s^{\dagger}_{\rho})^2,\;\;\; & R^{\dagger}_{(\pi\nu),0} = \sqrt{2}\,(\,d^{\dagger}_{\pi}\cdot d^{\dagger}_{\nu} - \beta^{2}s^{\dagger}_{\pi}s^{\dagger}_{\nu}\,) \\[1mm] R^{\dagger}_{\rho,2} = \sqrt{2}\,\beta\, s^{\dagger}_{\rho}d^{\dagger}_{\rho} + \sqrt{7}(d^{\dagger}_{\rho}d^{\dagger}_{\rho})^{(2)}, \;\;\;& R^{\dagger}_{(\pi\nu),2} = \beta (\,s^{\dagger}_{\pi}d^{\dagger}_{\nu} + s^{\dagger}_{\nu}d^{\dagger}_{\pi}\,) + \sqrt{14}(d^{\dagger}_{\pi} d^{\dagger}_{\nu})^{(2)} \\[1mm] W^{\dagger}_{L} = (d^{\dagger}_{\pi} d^{\dagger}_{\nu})^{(L)} \;\; (L=1,3),\;\;\; & W^{\dagger}_{2} = s^{\dagger}_{\pi}d^{\dagger}_{\nu} - s^{\dagger}_{\nu}d^{\dagger}_{\pi} \label{pairs} \end{array}\qquad \end{eqnarray} The $R^{\dagger}_{\rho,L}$ pairs ($\rho=\pi,\nu$) are the same $L=0,2$ pairs of
Eq.~(\ref{PLb0}) and the $\pi$-$\nu$ pair, $R^{\dagger}_{(\pi\nu),L}$, completes the set to form an $F$-spin vector. Altogether, the $R^{\dagger}_{i,L}$ ($L=0,2$) are boson pairs with $F=1$ and $(F_0=1,0,-1)\leftrightarrow [i=\pi,(\pi\nu),\nu]$. The $W^{\dagger}_{L}$ $(L=1,2,3)$ are F-spin scalar ($F=0$) $\pi$-$\nu$ boson pairs. All these operators satisfy \begin{subequations} \begin{eqnarray} R_{i,L'\mu}\vert c; K=0 \rangle &=& 0 ~,\\ W_{L'\mu}\vert c; K=0 \rangle &=& 0 ~, \end{eqnarray} \label{pairscond} \end{subequations} or equivalently, \begin{subequations} \begin{eqnarray} R_{i,L'\mu}\vert c;\, [N_{\pi}],[N_{\nu}],\, F=F_{max}, LM\rangle &=& 0 ~,\\ W_{L'\mu}\vert c;\, [N_{\pi}],[N_{\nu}],\, F=F_{max}, LM\rangle &=& 0 ~. \end{eqnarray} \label{pairscondL} \end{subequations} The states in Eq.~(\ref{pairscondL}) are those obtained by ${\rm O}_{\pi+\nu}(3)$ projection from the intrinsic state~(\ref{condibm2}). Since the angular momentum projection operator is an F-spin scalar, the projected states of good L will also have good $F=F_{max}$. Following the general algorithm, an IBM-2 Hamiltonian with partial F-spin symmetry can be constructed as \begin{eqnarray} \hat{H} &=& \sum_{i}\sum_{L=0,2} A^{(i)}_{L}R^{\dagger}_{i,L}\cdot\tilde{R}_{i,L} + \sum_{L=1,2,3}B_{L}W^{\dagger}_{L}\cdot\tilde W_{L} +C_{2}\Bigl [ R^{\dagger}_{(\pi\nu),2}\cdot\tilde W_{2} + H.c. \Bigr ] ~, \qquad\;\;\; \label{hamilt} \end{eqnarray} where $\tilde{R}_{i,L,\mu} = (-1)^{\mu}R_{i,L,-\mu}$, $\tilde{W}_{L,\mu} = (-1)^{\mu}W_{L,-\mu}$. The above Hamiltonian is an F-spin scalar only when $A^{(\pi)}_{L}=A^{(\nu)}_{L}=A^{(\pi\nu)}_{L}\, (L=0,2)$ and $C_{2}=0$. Nevertheless, the relations of Eqs.~(\ref{pairscond})-(\ref{pairscondL}) ensure that it has a solvable zero-energy band with good F-spin for {\it any} choice of parameters $A^{(i)}_{L},\,B_{L},\, C_{2}$ and {\it any} $N_{\pi},N_{\nu}$. 
When $A^{(i)}_{L},\,B_{L},\,A^{(\pi\nu)}_{2}B_{2}-(C_{2})^{2}\geq 0$, $\hat{H}$~(\ref{hamilt}) becomes positive-definite and the solvable states form its ground band. We thus have a non-F-spin scalar Hamiltonian with a solvable (degenerate) ground band with $F=F_{max}$. The degeneracy can be lifted by adding to the Hamiltonian ${\rm O}_{\pi+\nu}(3)$ rotation terms which produce $L(L+1)$ type splitting but do not affect the wave functions. States in other bands can be mixed with respect to F-spin, hence the F-spin symmetry of $\hat{H}$ is partial. $\hat{H}$ trivially commutes with $\hat{F}_{0}$ but not with $\hat{F}_{\pm}$. However, $[\,\hat{H},\hat{F}_{\pm}\,]\vert c; K=0\rangle =0$ does hold and, therefore, $\hat{H}$ will yield F-spin multiplets for members of ground bands. On the other hand, states in other bands can have F-spin admixtures and are not compelled to form F-spin multiplets. These features which arise from the partial F-spin symmetry of the Hamiltonian are in line with the empirical situation as discussed above and as depicted in Table~\ref{TabFspinmult} and Fig.~\ref{figFspinmult}. It should be noted that the partial F-spin symmetry of $\hat{H}$ holds for any choice of parameters in Eq.~(\ref{hamilt}). In particular, one can incorporate realistic shell-model based constraints, by choosing the $A^{(\rho)}_{2}$ ($\rho=\pi,\nu$) terms (representing seniority-changing interactions between like nucleons), to be small. For the special choice $A^{(i)}_{2}=C_{2}=0$ and $B_{1}=B_{3}$, $\hat{H}$ of Eq.~(\ref{hamilt}) becomes ${\rm O}_{\pi+\nu}(5)$-scalar which commutes, therefore, with the ${\rm O}_{\pi+\nu}(5)$ projection operator and hence produces F-spin multiplets with good ${\rm O}_{\pi+\nu}(5)$ symmetry. Such multiplets were reported in the Yb-Os region of $\gamma$-soft nuclei~\cite{zamfir92}. 
The same conditions ($\beta_{\rho}=\beta$, $\gamma_{\rho}=\Omega=0$) which resulted in $F=F_{max}$ for the condensate of Eq.~(\ref{condibm2}) also ensure $F=F_{max}-1$ for the intrinsic state representing the scissors band \begin{eqnarray} \vert sc ; K=1 \rangle &=& \Gamma^{\dagger}_{sc}\vert N_{\pi}-1,N_{\nu}-1 \rangle ~, \nonumber\\ \Gamma^{\dagger}_{sc} &=& b^{\dagger}_{c,\pi}d^{\dagger}_{\nu,1} - d^{\dagger}_{\pi,1}b^{\dagger}_{c,\nu} ~. \label{scissors} \end{eqnarray} Here $\Gamma^{\dagger}_{sc}$ is an $F=0$ deformed boson pair whose action on the condensate with $(N-2)$ bosons produces the scissors mode excitation. The states of good $L$ projected from this intrinsic state, $\vert sc;\, [N_{\pi}],[N_{\nu}],\, F=F_{max}-1, LM\rangle$, retain the F-spin quantum number, $F=F_{max}-1$. Furthermore, it can be shown that the operators $R^{\dag}_{i,L\mu}$ of Eq.~(\ref{pairs}) satisfy \begin{subequations} \begin{eqnarray} &&R_{i,L'\mu}\vert sc; K=1 \rangle = 0 ~,\\ &&R_{i,L'\mu}\vert sc;\, [N_{\pi}],[N_{\nu}],\, F=F_{max}-1, LM\rangle = 0 ~. \end{eqnarray} \label{Risc} \end{subequations} Consequently, the scissors intrinsic state~(\ref{scissors}) and corresponding $L$-projected states are exact eigenstates of the following Hamiltonian, obtained from Eq.~(\ref{hamilt}) for the special choice $C_{2} = 0$ and $B_{1}= B_{3} = 2B_{2} \equiv 2B$ \begin{eqnarray} \hat{H}' &=& \sum_{i}\sum_{L=0,2} A^{(i)}_{L}R^{\dagger}_{i,L}\cdot\tilde{R}_{i,L} + B\hat{{\cal M}}_{\pi\nu} ~. \label{hprime} \end{eqnarray} The last term in Eq.~(\ref{hprime}) is the Majorana operator~\cite{ibm}, with eigenvalues $k(N-k+1)$ for states with $F=F_{max}-k$. It is related to the total F-spin operator and the quadratic Casimir operator of ${\rm U}_{\pi+\nu}(6)$ by ${\hat{\cal M}}_{\pi\nu} = [\,\hat {N}(\hat{N}+2)/4 - \mbox{\boldmath $F^2$}\,]= [\,\hat{N}(\hat{N}+5) - \hat{C}_{2}[{\rm U}_{\pi+\nu}(6)]\,]/2$.
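As a consistency check on the quoted Majorana eigenvalues (a verification sketch with my own function name, using only the relations stated above), one can confirm that $\hat{N}(\hat{N}+2)/4 - \mbox{\boldmath $F^2$}$ evaluated with $F=N/2-k$ reproduces $k(N-k+1)$:

```python
from fractions import Fraction

def majorana_eigenvalue(n, k):
    """Eigenvalue of M = N(N+2)/4 - F(F+1) on a state with F = F_max - k,
    i.e. F = N/2 - k (exact rational arithmetic)."""
    f = Fraction(n, 2) - k
    return Fraction(n * (n + 2), 4) - f * (f + 1)

# The quoted closed form k(N - k + 1) holds identically; since both sides
# are quadratic polynomials in (N, k), checking a grid of values suffices.
assert all(majorana_eigenvalue(n, k) == k * (n - k + 1)
           for n in range(21) for k in range(n // 2 + 1))
```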
Adding an ${\rm O}_{\pi+\nu}(3)$ rotation term we obtain the following Hamiltonian with F-spin PDS \begin{eqnarray} \hat{H}_{PDS} = \hat{H}' + C\,\hat{C}_{2}[{\rm O}_{\pi+\nu}(3)] = \hat{H}_{DS} + \sum_{i}\sum_{L=0,2} A^{(i)}_{L}R^{\dagger}_{i,L}\cdot\tilde{R}_{i,L} ~. \label{hPDSFspin} \end{eqnarray} Here $\hat{H}_{DS}$ contains the Majorana and ${\rm O}_{\pi+\nu}(3)$ terms, associated with the dynamical symmetry chain ${\rm U}_{\pi+\nu}(6)\supset {\rm O}_{\pi+\nu}(3)$. $\hat{H}_{PDS}$~(\ref{hPDSFspin}) has subsets of solvable states which form the $K=0$ ground band with $F=F_{max}$, \begin{subequations} \begin{eqnarray} &&\vert c;\, [N_{\pi}],[N_{\nu}],\, F=F_{max}, LM\rangle \;\;\;\;\; L=0,2,4,\ldots, 2N\\ && E_{g}(L) = C\,L(L+1) ~, \label{gbandibm2} \end{eqnarray} \end{subequations} and the $K=1$ scissors band with $F=F_{max}-1$ \begin{subequations} \begin{eqnarray} &&\vert sc;\, [N_{\pi}],[N_{\nu}],\, F=F_{max}-1, LM\rangle \;\;\;\;\; L=1,2,3,\ldots, 2N-1\\ && E_{sc}(L) = B\,N + C\,L(L+1) ~. \label{scband} \end{eqnarray} \end{subequations} It follows that for such Hamiltonians, both the ground and scissors bands have good F-spin and the same moment of inertia. The latter property agrees with a comprehensive analysis of the scissors mode in heavy even-even nuclei~\cite{enders99}, which concluded that, within the experimental precision ($\sim$ 10\%), the moment of inertia of the scissors mode is the same as that of the ground band. It is the partial F-spin symmetry of the Hamiltonian~(\ref{hPDSFspin}) which is responsible for the common signatures of collectivity in these two bands. \begin{table}[t] \begin{center} \caption[]{\label{TabM1Fspin} \protect\small The ratio $R=\sum B(M1)\uparrow/(C_{F,F_0})^2$ for members of F-spin~multiplets. Here $\sum B(M1)\uparrow$ denotes the experimental summed M1 strength to the scissors mode~\cite{pietralla95,maser96} and $C_{F,F_0} = (F,F_0;1,0\vert F-1,F_0)$.
Adapted from~\cite{levgin00}.} \vspace{1mm} \begin{tabular}{lccccc} \hline & & & & &\\[-3mm] Nucleus & $F$ & $F_0$ & $\sum B(M1)\uparrow$ $[\mu_{N}^2]$ & $(C_{F,F_0})^2$ & $R$ \\ & & & & &\\[-3mm] \hline & & & & &\\[-2mm] $^{148}$Nd & 4 & 1 & 0.78 (0.07) & 5/12 & 1.87 (0.17) \\ $^{148}$Sm & & 2 & 0.43 (0.12) & 1/3 & 1.29 (0.36) \\ & & & & &\\[-3mm] \hline & & & & &\\[-2mm] $^{150}$Nd & 9/2 & 1/2 & 1.61 (0.09) & 4/9 & 3.62 (0.20) \\[2pt] $^{150}$Sm & & 3/2 & 0.92 (0.06) & 2/5 & 2.30 (0.15) \\ & & & & &\\[-3mm] \hline & & & & &\\[-2mm] $^{154}$Sm & 11/2 & 1/2 & 2.18 (0.12) & 5/11 & 4.80 (0.26) \\[2pt] $^{154}$Gd & & 3/2 & 2.60 (0.50) & 14/33 & 6.13 (1.18) \\ & & & & &\\[-3mm] \hline & & & & &\\[-2mm] $^{160}$Gd & 7 & 0 & 2.97 (0.12) & 7/15 & 6.36 (0.26) \\[2pt] $^{160}$Dy & & 1 & 2.42 (0.18) & 16/35 & 5.29 (0.39) \\ & & & & &\\[-3mm] \hline & & & & &\\[-2mm] $^{162}$Dy & 15/2 & 1/2 & 2.49 (0.13) & 7/15 & 5.34 (0.28) \\[2pt] $^{166}$Er & & $-1/2$ & 2.67 (0.19) & 7/15 & 5.72 (0.41) \\ & & & & &\\[-3mm] \hline & & & & &\\[-2mm] $^{164}$Dy & 8 & 0 & 3.18 (0.15) & 8/17 & 6.76 (0.32) \\[2pt] $^{168}$Er & & $-1$ & 3.30 (0.12) & 63/136 & 7.12 (0.26) \\[2pt] $^{172}$Yb & & $-2$ & 1.94 (0.22)$^{a)}$ & 15/34 & 4.40 (0.50) \\ & & & & &\\[-3mm] \hline & & & & &\\[-2mm] $^{170}$Er & 17/2 & $-3/2$ & 2.63 (0.16) & 70/153 & 5.75 (0.35) \\[2pt] $^{174}$Yb & & $-5/2$ & 2.70 (0.31) & 66/153 & 6.26 (0.72) \\ & & & & &\\[-3mm] \hline \end{tabular}\\[6pt] {\small $^{a)}$ The low value of $\sum B(M1)\uparrow $ for $^{172}$Yb has been attributed to experimental deficiencies \cite{rich95}. $\qquad\qquad$}\\ \end{center} \end{table} The Hamiltonian $\hat{H}'$ of Eq.~(\ref{hprime}) is not F-spin invariant; however, the following relations are satisfied: $[\,\hat{H}' , \vec{F} \,]\,\vert c; K=0 \rangle = [\,\hat{H}' , \vec{F} \,]\,\vert sc; K=1 \rangle = 0$. This implies that members of both the ground and scissors bands are expected to form F-spin multiplets.
For ground bands such structures have been empirically established~\cite{harter85,brentano85,gupta89,zamfir92}. The prediction of F-spin multiplets of scissors states requires further elaboration. Although the mean energy of the scissors mode is at about 3 MeV~\cite{pietralla98}, the observed fragmentation of the M1 strength among several $1^{+}$ states prohibits, unlike for ground bands, the use of nearly constant excitation energies as a criterion to identify F-spin multiplets of scissors states. Instead, a more sensitive test of this suggestion comes from the summed ground-to-scissors B(M1) strength. The IBM-2 M1 operator $(\hat{L}_{\pi}- \hat{L}_{\nu})$ is an F-spin vector ($F=1,F_{0}=0$). Its matrix element between the ground state [$L=0^{+}_{g},\,(F = F_{max},F_{0})$] and scissors state [$L=1^{+}_{sc},\,(F' =F-1,F_{0}$)] is proportional to an F-spin Clebsch-Gordan coefficient $C_{F,F_0} = (F,F_0;1,0\vert F-1,F_0)$ times a reduced matrix element. It follows that the ratio $B(M1;0^{+}_{g}\rightarrow 1^{+}_{sc})/(C_{F,F_0})^2$ does not depend on $F_{0}$ and should be a constant in a given F-spin multiplet. In Table~\ref{TabM1Fspin} we list {\it all} F-spin partners for which the summed B(M1) strength to the scissors mode has been measured~\cite{pietralla95,maser96}. It is seen that, within the experimental errors, the above ratio is fairly constant. The most noticeable discrepancy, for $^{172}$Yb (F=8), arises from its measured low value of summed B(M1) strength. The latter should be regarded as a lower limit due to experimental deficiencies (large background and strong fragmentation~\cite{rich95}). These observations strengthen the contention of high F-spin purity and formation of F-spin multiplets of scissors states. The Hamiltonian $\hat{H}'$~(\ref{hprime}) depends on $\beta$ through the operators $R^{\dag}_{i,L}$~(\ref{pairs}). It exhibits additional partial symmetries for specific choices of the deformation and/or parameters.
Specifically, for $\beta=\sqrt{2}$, $\hat{H}'$ has both F-spin and ${\rm SU}_{\pi+\nu}(3)$ PDS of type I. In such circumstances, the ground (K=0) and scissors (K=1) bands have good F-spin and ${\rm SU}_{\pi+\nu}(3)$ symmetries: $[(\lambda,\mu),F] = [(2N,0),F_{max}]$ and $[(2N-2,1),F=F_{max}-1]$, respectively. If, in addition, $A^{(\pi)}_{2}=A^{(\nu)}_{2}=A^{(\pi\nu)}_{2}$, then also the symmetric-$\gamma$ ($K=2$) and antisymmetric-$\gamma$ ($K=2$) bands are solvable and have good SU(3) and F-spin symmetries: $[(2N-4,2),F=F_{max}]$ and $[(2N-4,2),F=F_{max}-1]$, respectively. In this case, states of the $\gamma$ bands will also be arranged in F-spin multiplets. At the same time, since the Hamiltonian is not an F-spin scalar, the $\beta$ bands can have F-spin admixtures and need not form F-spin multiplets. As noted in~\cite{brentano85,gupta89} and shown in Table~\ref{TabFspinmult} and Fig.~\ref{figFspinmult}, such a behaviour is observed for nuclei with $F=6,\, 13/2$. For $\beta=1$, the ground (K=0) and scissors (K=1) bands have good F-spin and ${\rm O}_{\pi+\nu}(6)$ symmetries: $[\langle\sigma_1,\sigma_2\rangle,F] = [\langle N,0\rangle,F_{max}]$ and $[\langle N-1,1\rangle,F_{max}-1]$, respectively, but the projected states are mixed with respect to ${\rm O}_{\pi+\nu}(5)$. Consequently, in this case, $\hat{H}'$~(\ref{hprime}) has ${\rm O}_{\pi+\nu}(6)$ PDS of type III. For $A_{2}^{(\pi)}=A_{2}^{(\nu)}=A_{2}^{(\pi\nu)}=0$, $\hat{H}'$ is ${\rm O}_{\pi+\nu}(5)$-invariant. It contains a mixture of terms from several chains: ${\rm U}_{\pi}(5)\times {\rm U}_{\nu}(5)$, ${\rm O}_{\pi}(6)\times {\rm O}_{\nu}(6)$ and ${\rm U}_{\pi+\nu}(6)$, all sharing a common ${\rm O}_{\pi+\nu}(5)\supset {\rm O}_{\pi+\nu}(3)$ segment. In such circumstances, $\hat{H}'$ has a partially-solvable ${\rm O}_{\pi+\nu}(5)$ PDS of type II. Such a PDS was used in~\cite{smir02} to obtain an extended M1 sum rule for excited symmetric and mixed-symmetry states, and to apply it to $^{94}$Mo.
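Returning to the M1 systematics of Table~\ref{TabM1Fspin}: the $F_0$-independence of the ratio $R$ rests on the Clebsch-Gordan factor $(C_{F,F_0})^2$. The sketch below reproduces the tabulated values, assuming the standard closed form $\vert (F,F_0;1,0\vert F-1,F_0)\vert^{2} = (F-F_0)(F+F_0)/[F(2F+1)]$; that closed form is my insertion and is not quoted in the text.

```python
from fractions import Fraction

def cg_squared(F, F0):
    """(C_{F,F0})^2 = |(F,F0; 1,0 | F-1,F0)|^2, using the standard
    closed form (F - F0)(F + F0) / [F(2F + 1)] (assumed, not from text)."""
    F, F0 = Fraction(F), Fraction(F0)
    return (F - F0) * (F + F0) / (F * (2 * F + 1))

# Cross-check against the (C_{F,F0})^2 column of Table TabM1Fspin:
assert cg_squared(4, 1) == Fraction(5, 12)                             # 148Nd
assert cg_squared(4, 2) == Fraction(1, 3)                              # 148Sm
assert cg_squared(Fraction(9, 2), Fraction(1, 2)) == Fraction(4, 9)    # 150Nd
assert cg_squared(8, -1) == Fraction(63, 136)                          # 168Er
assert cg_squared(Fraction(15, 2), Fraction(1, 2)) == Fraction(7, 15)  # 162Dy
```

Dividing each measured $\sum B(M1)\uparrow$ by the corresponding factor yields the $R$ column of the table.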
A new aspect that can occur in an algebraic description of coupled systems is the situation in which the set of Casimir operators in a given chain of subalgebras of $G$ may not be sufficient to express the most general Hamiltonian constructed from the generators of $G$. Such a scenario was considered in~\cite{Talmi97} in connection with the ${\rm U}_{\pi+\nu}(5)$ chain of the IBM-2, and shown to be associated with a partial dynamical symmetry. The ${\rm U}_{\pi+\nu}(5)$ limit of the IBM-2 corresponds to the chain \begin{eqnarray} \begin{array}{ccccccccc} {\rm U}_{\pi}(6)\times {\rm U}_{\nu}(6) &\supset &{\rm U}_{\pi+\nu}(6) &\supset &{\rm U}_{\pi+\nu}(5)& \supset&{\rm O}_{\pi+\nu}(5) &\supset &{\rm O}_{\pi+\nu}(3)\\ \downarrow&&\downarrow&&\downarrow&&\downarrow&&\downarrow\\[0mm] [N_{\pi}]\times [N_{\nu}] && [N-k,k]&&\{n_1,n_2\} &&(\tau_1,\tau_2) &\alpha&L \end{array} ~. \label{chainu5ibm2} \end{eqnarray} The total number of bosons is $N=N_{\pi}+N_{\nu}$ and their F-spin is $F= N/2 -k$. The states also conserve the total number of $d$-bosons, $n_d=n_1+n_2$, and their separate F-spin, $F_d = (n_1-n_2)/2$. Apart from $\hat{N}_{\rho}$- and $\hat{N}$-dependent terms, the most general one- and two-body Hamiltonian which has a ${\rm U}_{\pi+\nu}(5)$ dynamical symmetry (DS) is given by \begin{eqnarray} \hat{H}_{DS} &=& \epsilon\,\hat{C}_{1}[{\rm U}_{\pi+\nu}(5)] + \eta\, (\hat{C}_{1}[{\rm U}_{\pi+\nu}(5)])^2 + A\,\hat{C}_{2}[{\rm U}_{\pi+\nu}(5)] \nonumber\\ && + B\,\hat{C}_{2}[{\rm O}_{\pi+\nu}(5)] + C\,\hat{C}_{2}[{\rm O}_{\pi+\nu}(3)] + a\hat{{\cal M}}_{\pi\nu} ~, \qquad\quad \label{hDSu5ibm2} \end{eqnarray} where $\hat{C}_{p}[G]$ denotes the $p$-th order Casimir operator of $G$. However, this is not the most general Hamiltonian constructed from the generators of ${\rm U}_{\pi+\nu}(5)$. To obtain the latter, another independent operator, which is not a Casimir operator of a subalgebra, must be added to $\hat{H}_{DS}$.
A simple choice of such an operator can be~\cite{Talmi97} \begin{eqnarray} \hat{V}_1 &=& \xi_1\,W^{\dag}_{1}\cdot \tilde{W}_{1} ~, \label{V1} \end{eqnarray} where $W^{\dag}_{1\mu}$ is the $(F=0,\,L=1)$ boson-pair defined in Eq.~(\ref{pairs}). The latter transforms as a $(\tau_1,\tau_2)=(1,1)$ tensor under ${\rm O}_{\pi+\nu}(5)$. Consequently, $\hat{V}_1$ has components with $(\tau_1,\tau_2) = (2,2)\oplus (0,0)$, hence breaks the ${\rm O}_{\pi+\nu}(5)$ symmetry. The ${\rm U}_{\pi+\nu}(6)$ irrep $[N-k,k]$ with $k\neq 0$ contains ${\rm O}_{\pi+\nu}(5)$ irreps $(\tau_1,\tau_2)$ with $\tau_2\neq 0$, which can be admixed by this term. Nevertheless, $\hat{V}_1$ has a subset of zero-energy solvable states with good ${\rm O}_{\pi+\nu}(5)$ symmetry. These are the ${\rm U}_{\pi+\nu}(5)$ basis states with $F_d=n_d/2$, which are annihilated by $W_{1\mu}$~\cite{Talmi97}, \begin{eqnarray} W_{1\mu}\vert [N_{\pi}]\times [N_{\nu}];\,[N-k,k], \{n_d,0\}, (\tau,0), n_{\Delta}, LM\rangle &=& 0 ~. \label{W1mu} \end{eqnarray} The interaction $\hat{V}_1$ can be added to $\hat{H}_{DS}$~(\ref{hDSu5ibm2}) to obtain the following Hamiltonian with ${\rm O}_{\pi+\nu}(5)$ PDS of type I \begin{eqnarray} \hat{H}_{PDS} &=& \hat{H}_{DS} + \xi_1\, W^{\dag}_{1}\cdot \tilde{W}_{1} ~. \label{hPDSu5ibm2} \end{eqnarray} $\hat{H}_{PDS}$ breaks the ${\rm U}_{\pi+\nu}(5)$ DS but retains a subset of solvable ${\rm U}_{\pi+\nu}(5)$ basis states with known eigenvalues \begin{eqnarray} \begin{array}{l} \vert [N_{\pi}]\times [N_{\nu}];\,[N-k,k], \{n_d,0\}, (\tau,0), n_{\Delta}, LM\rangle\\ E_{PDS} = \epsilon n_d +\eta n_{d}^2 + An_d(n_d+4) + B\tau(\tau+3) + CL(L+1) + a k(N-k+1) ~. 
\end{array} \label{ePDSu5ibm2} \end{eqnarray} $\hat{H}_{PDS}$~(\ref{hPDSu5ibm2}) also exhibits a ${\rm U}_{\pi+\nu}(6)\supset {\rm U}_{\pi+\nu}(5)$ PDS of type II, since the remaining eigenstates preserve the quantum numbers of ${\rm U}_{\pi+\nu}(6),\,{\rm U}_{\pi+\nu}(5)$ and ${\rm O}_{\pi+\nu}(3)$ but not of ${\rm O}_{\pi+\nu}(5)$ in the chain~(\ref{chainu5ibm2}). $\hat{V}_1$~(\ref{V1}) has additional zero-energy eigenstates with $F_d< n_d/2$ which break, however, the ${\rm O}_{\pi+\nu}(5)$ symmetry~\cite{Talmi97}. These solvable states lead to additional PDS of the Hamiltonian~(\ref{hPDSu5ibm2}), provided $B=0$ in Eq.~(\ref{hDSu5ibm2}). In general, PDS associated with vanishing eigenvalues of $\hat{V}_1$ can explain the simple regularities in the spectra of the generalized Majorana operator, observed in an IBM-2 analysis of Pd and Ru nuclei~\cite{Gianatiempo98}. \section{PDS in Fermion Systems} \label{PDSfermion} Partial symmetries are not confined to bosonic systems. The proposed algorithms for constructing Hamiltonians with PDS do not rely on the statistics of the constituents, hence can be implemented for both bosons and fermions. Identifying partial symmetries in fermion systems can proceed in two ways. The first approach relies on a mapping of a bosonic Hamiltonian, which possesses a partial symmetry, into its fermionic counterpart. If the bosonic generators of the spectrum generating algebra can be mapped into fermionic generators of the same algebra, then both Hamiltonians will exhibit the same type of partial symmetry. This approach was demonstrated in~\cite{Manana99} for schematic ${\rm U(2)}\times {\rm U(2)}$ Lipkin-type models. A second approach relies on a direct construction of fermion Hamiltonians with partial symmetries.
In what follows, we demonstrate this approach by identifying fermionic PDS related to properties of the quadrupole-quadrupole interaction in the framework of the symplectic shell-model~\cite{Escher00,Escher02} and to properties of seniority-conserving and non-conserving interactions in a single $j$ shell~\cite{rowerosen01,rosenrowe03, escuder06,zamick07,isahein08,zamisa08,Talmi10}. Such findings constitute a first step towards understanding the microscopic origin of PDS in nuclei. \subsection{PDS in the symplectic shell model} \label{subsec:SymplecPDS} The symplectic shell model (SSM)~\cite{Rowe85,SymplM} is an algebraic, fermionic, shell-model scheme which includes multiple $2 \hbar \omega$ one-particle one-hole excitations. The scheme is based on the symplectic algebra Sp(6,R), whose generators $\hat{A}^{(20)}_{\ell m},\,\hat{B}^{(02)}_{\ell m},\, \hat{C}^{(11)}_{\ell m}$ and $\hat{H}_0$ have good SU(3) [superscript $(\lambda,\mu)$] and O(3) [subscript $\ell,m$] tensorial properties. The $\hat{A}^{(20)}_{\ell m}$ [$\hat{B}^{(02)}_{\ell m} = (-1)^{\ell-m} (\hat{A}^{(20)}_{\ell,-m})^{\dagger}$], $\ell$ = 0 or 2, create (annihilate) $2 \hbar \omega$ excitations in the system. The $\hat{C}^{(11)}_{\ell m}$, $\ell$ = 1 or 2, generate an SU(3) subgroup and act only within one harmonic oscillator (h.o.\/) shell ($\sqrt{3} \hat{C}^{(11)}_{2m}= Q^E_{2m}$, the symmetrized quadrupole operator of Elliott, which does not couple different h.o.\/ shells~\cite{Elliott58}, and $\hat{C}^{(11)}_{1m}=\hat{L}_m$, the orbital angular momentum operator). The harmonic oscillator Hamiltonian, $\hat{H}_0$, is an SU(3) scalar and generates U(1) in U(3) $=$ SU(3) $\times$ U(1). A fermion realization of these generators is given in~\cite{Escher98b}.
The model fully accommodates the action of the collective quadrupole operator, $Q_{2m}=\sqrt{\frac{16\pi}{5}} \sum_s r^2_s Y_{2m} (\hat{r}_s)$, which takes the form, $Q_{2m} = \sqrt{3} ( \hat{C}^{(11)}_{2m} + \hat{A}^{(20)}_{2m} + \hat{B}^{(02)}_{2m} )$. A basis for the symplectic model is generated by applying symmetrically coupled products of the 2$\hbar \omega$ raising operator $\hat{A}^{(20)}$ with itself, to the usual $0 \hbar \omega$ many-particle shell-model states. Each $0 \hbar \omega$ starting configuration is characterized by the distribution of oscillator quanta into the three cartesian directions, $\{ \sigma_1,\sigma_2,\sigma_3 \}$ ($\sigma_1 \geq \sigma_2 \geq \sigma_3$), or, equivalently, by its U(1)$\times$SU(3) quantum numbers $N_{\sigma} \, (\lambda_{\sigma},\mu_{\sigma})$. Here $\lambda_{\sigma} = \sigma_1 - \sigma_2$, $\mu_{\sigma} = \sigma_2 - \sigma_3$ are the Elliott SU(3) labels, and $N_{\sigma} = \sigma_1 +\sigma_2 +\sigma_3$ is related to the eigenvalue of the oscillator number operator. \begin{figure}[t] \begin{center} \includegraphics[height=5cm]{fig17SSMIrrep.eps} \caption{ \small Basis construction in the symplectic model. SU(3)-coupled products of the raising operator $\hat{A}^{(20)}$ with itself act on an Elliott starting state with $(\lambda_{\sigma},\mu_{\sigma}) = (\lambda,0)$ ($\{\sigma_1,\sigma_2,\sigma_3=\sigma_2\}$) to generate symplectic $2\hbar \omega$, $4\hbar \omega$, $\ldots$ excitations. Also shown are the SU(3) labels $(\lambda,\mu)$ and quanta distributions $\{\omega_1,\omega_2,\omega_3\}$ for some excited states. Adapted from~\cite{Escher00}. \label{figSSMIrrep}} \end{center} \end{figure} Each such set of U(3) quantum numbers uniquely determines an irrep of the symplectic group, since it characterizes a Sp(6,R) lowest weight state. 
The product of $N/2$, $N=0,2,4,\ldots$, raising operators $\hat{A}^{(20)}$ is multiplicity-free and generates $N\hbar \omega$ excitations for each starting irrep $N_{\sigma} \, (\lambda_{\sigma},\mu_{\sigma})$. Each such product operator ${\cal P}^{N (\lambda_n,\mu_n)}$, labeled according to its SU(3) content, $(\lambda_n,\mu_n)$, is coupled with $| N_{\sigma} \, (\lambda_{\sigma},\mu_{\sigma}) \rangle$ to good SU(3) symmetry $\rho (\lambda,\mu)$, with $\rho$ denoting the multiplicity of the coupling $(\lambda_n,\mu_n) \times (\lambda_{\sigma},\mu_{\sigma})$. The quanta distribution in the resulting state is given by $\{ \omega_1,\omega_2,\omega_3 \} $, with $N_{\sigma} + N = \omega_1 + \omega_2 + \omega_3$, $\omega_1 \geq \omega_2 \geq \omega_3$, and $\lambda = \omega_1 - \omega_2$, $\mu = \omega_2 - \omega_3$. The basis state construction is schematically illustrated in Fig.~\ref{figSSMIrrep} for a typical Elliott starting state with $(\lambda_{\sigma},\mu_{\sigma}) = (\lambda, 0)$. To complete the basis state labeling, additional quantum numbers $\alpha = \kappa L M$ are required, where $\kappa$ is a multiplicity index, which enumerates multiple occurrences of a particular $L$ value in the SU(3) irrep $(\lambda,\mu)$ from 1 to $\kappa^{max}_L (\lambda,\mu) = [ (\lambda+\mu+2-L)/2 ]$ - $[ (\lambda+1-L)/2 ]$ - $[ (\mu+1-L)/2 ]$, where [$\ldots$] is the greatest non-negative integer function~\cite{Lopez90}. The orthonormal SU(3) basis employed is that of Vergados~\cite{VER}; however, for convenience, the running index $\kappa = 1, 2, \ldots, \kappa^{max}_L$ is used instead of the usual Vergados label, $\tilde{\chi}$.
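The bracketed multiplicity formula is easy to misread, so here is a direct transcription (function names are mine); it reproduces the familiar $L$ content of small SU(3) irreps:

```python
def kappa_max(lam, mu, L):
    """Multiplicity of angular momentum L in the SU(3) irrep (lam, mu):
    kappa_max = [(lam+mu+2-L)/2] - [(lam+1-L)/2] - [(mu+1-L)/2],
    with [x] the greatest non-negative integer <= x (0 for negative x)."""
    def nni(p):
        # greatest non-negative integer <= p/2
        return max(0, p // 2)
    return nni(lam + mu + 2 - L) - nni(lam + 1 - L) - nni(mu + 1 - L)

def l_content(lam, mu):
    """All L values with nonzero multiplicity in the irrep (lam, mu)."""
    return {L: kappa_max(lam, mu, L) for L in range(lam + mu + 1)
            if kappa_max(lam, mu, L) > 0}

# (2,0) contains L = 0, 2; (1,1) contains L = 1, 2;
# (2,2) contains L = 0, 2 (twice), 3, 4
```

A useful sanity check is the dimension sum rule $\sum_L (2L+1)\,\kappa^{max}_L(\lambda,\mu) = (\lambda+1)(\mu+1)(\lambda+\mu+2)/2$.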
The dynamical symmetry chain and the associated quantum labels of the above scheme are given by~\cite{SymplM}: \begin{eqnarray} \begin{array}{ccccc} {\rm Sp(6,R)} &\supset& {\rm U(3)} &\supset& {\rm SO(3)} \\ \downarrow & & \downarrow && \\ N_{\sigma}(\lambda_{\sigma},\mu_{\sigma}) & N(\lambda_n,\mu_n) \rho & N_{\omega}(\lambda_{\omega},\mu_{\omega})& \kappa & L \end{array} ~. \label{eq:DSBasis} \end{eqnarray} The following SSM Hamiltonian, which has SU(3) partial symmetry, has been proposed~\cite{Escher00,Escher02} \begin{eqnarray} \hat{H}(\beta_0,\beta_2) &=& \beta_0 \hat{A}_0 \hat{B}_0 + \beta_2 \hat{A}_2 \cdot \hat{B}_2 \nonumber\\ &=& \frac{\beta_2}{18} ( 9\hat{C}_{{\rm SU(3)}} - 9\hat{C}_{{\rm Sp(6)}} + 3\hat{H}_0^2 - 36\hat{H}_0 ) + ( \beta_0 - \beta_2) \hat{A}_0 \hat{B}_0 ~. \label{Eq:Hpds} \end{eqnarray} Here $\hat{A}_{\ell m}\equiv \hat{A}_{\ell m}^{(2,0)}$, $\hat{B}_{\ell m}\equiv \hat{B}_{\ell m}^{(0,2)}$ and the Casimir operators, $\hat{C}_{G}$, conform with the conventions used in~\cite{Escher00,Escher02}. For $\beta_0=\beta_2$, the Hamiltonian is an SU(3) scalar which is diagonal in the dynamical symmetry basis~(\ref{eq:DSBasis}). For $\beta_0=-5\beta_2$, the Hamiltonian transforms as a $(2,2)$ tensor under SU(3). Thus, in general, $\hat{H}(\beta_0,\beta_2)$ is not SU(3) invariant; however, it exhibits partial SU(3) symmetry. Specifically, among the eigenstates of $\hat{H}(\beta_0,\beta_2)$, there exists a subset of solvable pure-SU(3) states, $\vert\phi_{LM}(N)\rangle$, whose SU(3)$\supset$O(3) classification depends on both the Elliott labels $(\lambda_{\sigma},\mu_{\sigma})$ of the starting state and the symplectic excitation $N$. In general, it is found that all L-states in the starting configuration ($N=0$) are solvable with good SU(3) symmetry~$(\lambda_{\sigma},\mu_{\sigma})$.
For excited configurations, with $N>0$ ($N$ even), one can distinguish between two possible cases: \begin{figure} \begin{center} \includegraphics[height=5cm]{fig18C12.eps} \caption{\footnotesize Energy spectra for $^{12}$C. Comparison between experimental values (left), results from a symplectic $8\hbar \omega$ calculation (center) and a PDS calculation (right). K=$0_1$ indicates the ground band in all three parts of the figure. In addition, resonance bands dominated by 2$\hbar \omega$ excitations (K=$2_1,0_2,1_1,0_3$), 4$\hbar \omega$ excitations (K=$4_1$), and 6$\hbar \omega$ excitations (K=$6_1$) are shown for the Sp(6,R) and PDS calculations. Additional mixed resonance bands (not shown), dominated by 4$\hbar \omega$ and 6$\hbar \omega$ excitations, exist for this nucleus. The angular momenta of the positive parity states in the rotational bands are $L$=0,2,4,$\ldots$ for K=0 and $L$=K,K+1,K+2, $\ldots$ otherwise. Bands which consist of pure-SU(3) eigenstates of the PDS Hamiltonian~(\ref{hPDSsymp}) are indicated. Adapted from~\cite{Escher02}. \label{figEnergies_C12}} \end{center} \end{figure} \begin{table} \begin{center} \caption{\label{TabBE2_C12} \protect\small B(E2) values (in Weisskopf units) for ground band transitions in $^{12}$C. Compared are several symplectic calculations, PDS results, and experimental data. Q denotes the static quadrupole moment of the $L^{\pi}=2^+_1$ state in units of $eb$. PDS results are rescaled by an effective charge e$^*$=1.33 and the symplectic calculations employ bare charges. Adapted from~\cite{Escher02}.} \vspace{1mm} \begin{tabular}{ccccccccc} \hline & & & & & & & &\\[-3mm] \multicolumn{1}{c}{Transition} & \multicolumn{5}{c}{Model B(E2) [W.u.]} & B(E2) [W.u.] 
\\ \cline{2-6} $J_i \rightarrow J_f$ & \multicolumn{1}{c}{$2\hbar\omega$} & \multicolumn{1}{c}{$4\hbar\omega$} & \multicolumn{1}{c}{$6\hbar\omega$} & \multicolumn{1}{c}{$8\hbar\omega$} & \multicolumn{1}{c}{PDS} & Exp \\ & & & & & & & &\\[-3mm] \hline & & & & & & & &\\[-2mm] 2 $\rightarrow$ 0 & 4.65 & 4.65 & 4.65 & 4.65 & 4.65 & 4.65 $\pm$ 0.26\\[2pt] 4 $\rightarrow$ 2 & 4.35 & 4.27 & 4.24 & 4.23 & 4.28 & n/a \\ & & & & & & & &\\[-3mm] \hline & & & & & & & &\\[-2mm] \multicolumn{1}{c}{ $Q$ [eb]} & 0.059& 0.060& 0.060& 0.060& 0.058& 0.06$\pm$ 0.03\\ & & & & & & & &\\[-3mm] \hline \end{tabular} \end{center} \end{table} \begin{figure}[t] \begin{center} \includegraphics[height=5cm]{fig19Ne20.eps} \caption{ \small Energy spectra for $^{20}$Ne. Experimental ground band (K=$0_1$) energies (left), compared to theoretical results for both the ground band and 2$\hbar \omega$ resonances (K=$0_2,1_1,2_1,0_3$) for a symplectic $8\hbar \omega$ calculation (center) and a PDS calculation (right). Rotational bands which consist of pure eigenstates of the PDS Hamiltonian are indicated. Adapted from~\cite{Escher02}. \label{figNe20Energies}} \end{center} \end{figure} \begin{figure}[t] \hspace{3.5cm}$^{20}$Ne \hspace{6.5cm}$^{12}$C \begin{center} \hspace{-0.5cm} \includegraphics[height=12cm]{fig20LdecompNe20.eps} \hspace*{-2cm} \includegraphics[height=12cm]{fig20RdecompC12.eps} \caption{\footnotesize Decompositions for calculated $2^+$ states of $^{20}$Ne (left) and $^{12}$C (right). Individual contributions from the relevant SU(3) irreps at the 0$\hbar \omega$ and 2$\hbar \omega$ levels are shown for both a symplectic $8\hbar \omega$ calculation (denoted $Q_{2}\cdot Q_{2}$) and a PDS calculation. In addition, the total strengths contributed by the $N\hbar \omega$ excitations for $N>2$ are given for the symplectic case. 
Adapted from~\cite{Escher02}.} \label{figDecompC12Ne20} \end{center} \end{figure} \begin{table} \begin{center} \caption{\label{Tab:Ne20Be2} \protect\small B(E2) values (in Weisskopf units) for ground band transitions in $^{20}$Ne. Compared are several symplectic calculations, PDS results, and experimental data. The static quadrupole moment of the $2^+_1$ state is given in the last row. PDS results are rescaled by an effective charge e$^*$=1.95 and the symplectic calculations employ bare charges. Adapted from~\cite{Escher02}.} \vspace{1mm} \begin{tabular}{ccccccccc} \hline & & & & & & & &\\[-3mm] \multicolumn{1}{c}{Transition} & \multicolumn{5}{c}{Model B(E2) [W.u.]} & B(E2) [W.u.] \\ \cline{2-6} $J_i \rightarrow J_f$ & \multicolumn{1}{c}{$2\hbar\omega$} & \multicolumn{1}{c}{$4\hbar\omega$} & \multicolumn{1}{c}{$6\hbar\omega$} & \multicolumn{1}{c}{$8\hbar\omega$} & \multicolumn{1}{c}{PDS} & Exp \\ & & & & & & & &\\[-3mm] \hline & & & & & & & &\\[-2mm] 2 $\rightarrow$ 0 & 14.0 & 18.7 & 19.1 & 19.3 & 20.3 & 20.3 $\pm$ 1.0 \\[2pt] 4 $\rightarrow$ 2 & 18.4 & 24.5 & 24.6 & 24.5 & 25.7 & 22.0 $\pm$ 2.0 \\[2pt] 6 $\rightarrow$ 4 & 17.1 & 22.3 & 21.5 & 20.9 & 21.8 & 20.0 $\pm$ 3.0 \\[2pt] 8 $\rightarrow$ 6 & 12.4 & 15.2 & 13.3 & 12.4 & 12.9 & 9.0 $\pm$ 1.3 \\ & & & & & & & &\\[-3mm] \hline & & & & & & & &\\[-2mm] \multicolumn{1}{c}{$Q$ [eb]} & -0.14 & -0.16 & -0.16 & -0.16 & -0.17 & -0.23 $\pm$ 0.03 \\ & & & & & & & &\\[-3mm] \hline \end{tabular} \end{center} \end{table} \begin{figure}[t] \begin{center} \vspace{-1cm} \includegraphics[height=5in]{fig21Mg24.eps} \vspace{-2.5cm} \caption{ \small Energy spectra for $^{24}$Mg. Energies from a PDS calculation (bottom) are compared to symplectic results (top). Both 0$\hbar \omega$ dominated bands (K=$0_1,2_1,4_1$) and some 2$\hbar \omega$ resonance bands (K=$0_2,0_3,0_4,2_2,2_3,4_2,4_3,6_1$) are shown. The K=$0_1,2_1,4_1$ ($6_1$) states are pure (approximately pure) in the PDS scheme. 
Experimental energies of the ground and $\gamma$ bands are shown on the left. Adapted from~\cite{Escher02}. \label{figEnergies_Mg24}} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[height=13cm]{fig22LdecompMg24.eps} \hspace*{-3cm} \includegraphics[height=13cm]{fig22RdecompMg24.eps} \caption{\footnotesize Decompositions for calculated $L^{\pi}=6^+$ states of $^{24}$Mg. Eigenstates resulting from the symplectic 6$\hbar \omega$ calculation (denoted $Q_{2}\cdot Q_{2}$) are decomposed into their 0$\hbar \omega$, 2$\hbar \omega$, 4$\hbar \omega$, and 6$\hbar \omega$ components. At the 0$\hbar \omega$ and 2$\hbar \omega$ levels, contributions from the individual SU(3) irreps are shown, for higher excitations ($N>2$) only the summed strengths are given. Eigenstates of the PDS Hamiltonian belong entirely to one $N\hbar \omega$ level of excitation, here 0$\hbar \omega$ or 2$\hbar \omega$. Contributions from the individual SU(3) irreps at these levels are shown. Members of the K=$0_1,2_1,4_1$ bands are pure in the PDS scheme, and K=$6_1$ states are very nearly ($>99\%$) pure. Adapted from~\cite{Escher02}. \label{figDecomp_Mg24}} \end{center} \end{figure} \begin{itemize} \item[(a)] $\lambda_{\sigma} > \mu_{\sigma}$: the pure states belong to $(\lambda,\mu) = (\lambda_{\sigma}-N,\mu_{\sigma}+N)$ at the $N \hbar\omega$ level and have $L = \mu_{\sigma}+N, \mu_{\sigma}+N+1, \ldots , \lambda_{\sigma}-N+1$ with $N=2,4, \ldots$ subject to $2N \leq (\lambda_{\sigma} - \mu_{\sigma} + 1)$. \item[(b)] $\lambda_{\sigma} \leq \mu_{\sigma}$: the special states belong to $(\lambda,\mu) =(\lambda_{\sigma}+N,\mu_{\sigma})$ at the $N \hbar\omega$ level and have $L = \lambda_{\sigma}+N, \lambda_{\sigma}+N+1, \ldots , \lambda_{\sigma}+N+\mu_{\sigma}$ with $N=2,4, \ldots$. \end{itemize} To prove the claim, it is sufficient to show that $\hat{B}_0$ annihilates the states in question \begin{eqnarray} \hat{B}_0\,\vert \phi_{LM}(N)\rangle &=& 0 ~. 
\label{B0phiL} \end{eqnarray} For $N=0$ this follows immediately from the fact that the $0 \hbar \omega$ starting configuration is a Sp(6,R) lowest weight which, by definition, is annihilated by the lowering operators of the Sp(6,R) algebra. The latter include the generators $\hat{B}^{(02)}_{lm}$. For $N>0$, let $\{ \sigma_1,\sigma_2,\sigma_3 \}$ be the quanta distribution for a 0$\hbar \omega$ state with $\lambda_{\sigma} > \mu_{\sigma}$. Adding $N$ quanta to the 2-direction yields an $N \hbar \omega$ state with quanta distribution $\{ \sigma_1, \sigma_2+N, \sigma_3 \}$, that is, $(\lambda,\mu) = (\lambda_{\sigma}-N,\mu_{\sigma}+N)$. Acting with the rotational invariant $\hat{B}_0$ on such a state does not affect the angular momentum, but removes two quanta from the 2-direction, giving an $(N-2) \hbar \omega$ state with $(\lambda',\mu') = (\lambda_{\sigma}-N+2,\mu_{\sigma}+N-2)$. (The symplectic generator $\hat{B}_0$ cannot remove quanta from the other two directions of this particular state, since this would yield a state belonging to a different symplectic irrep.) Comparing the number of $L$ occurrences in $(\lambda,\mu)$ and $(\lambda',\mu')$, one finds that as long as $\lambda_{\sigma} - N + 1 \geq \mu_{\sigma}+N$, $\Delta_L(N) \equiv \kappa_L^{max}(\lambda,\mu) - \kappa_L^{max}(\lambda',\mu') = 1$ for $L = \mu_{\sigma}+N, \mu_{\sigma}+N+1, \ldots, \lambda_{\sigma}-N+1$, and $\Delta_L(N)=0$ otherwise. Therefore, when $\Delta_L(N)=1$, a linear combination $|\phi_{L}(N)\rangle = \sum_{\kappa} c_{\kappa} | N\hbar \omega (\lambda_{\sigma}-N,\mu_{\sigma}+N) \kappa L M \rangle$ exists such that $\hat{B}_0 |\phi_L(N)\rangle = 0$, and thus the claim for family (a) holds. The proof for family (b) can be carried out analogously. Here the special irrep $(\lambda,\mu) =(\lambda_{\sigma}+N,\mu_{\sigma})$ is obtained by adding $N$ quanta to the 1-direction of the starting configuration. In this case there is no restriction on $N$; hence family (b) is infinite. 
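The two families can be enumerated mechanically for any starting irrep. The sketch below (Python; naming is ours, for illustration only) lists the solvable $(\lambda,\mu)$ irreps and $L$ values, and reproduces the band structure quoted in the text for starting irreps of the $^{12}$C, $^{20}$Ne and $^{24}$Mg type.

```python
def solvable_bands(lam_s, mu_s, n_max):
    """Solvable pure-SU(3) states of H(beta0, beta2) built on the starting
    irrep (lam_s, mu_s).  Returns {N: ((lam, mu), [L values])} for even N > 0;
    at N=0 all L states of (lam_s, mu_s) are solvable."""
    bands = {}
    for N in range(2, n_max + 1, 2):
        if lam_s > mu_s:
            # family (a): finite, requires 2N <= lam_s - mu_s + 1
            if 2 * N > lam_s - mu_s + 1:
                break
            lm = (lam_s - N, mu_s + N)
            Ls = list(range(mu_s + N, lam_s - N + 2))
        else:
            # family (b): infinite
            lm = (lam_s + N, mu_s)
            Ls = list(range(lam_s + N, lam_s + N + mu_s + 1))
        bands[N] = (lm, Ls)
    return bands
```

For the $(8,0)$ starting irrep this yields the $(6,2)$ band with $L=2,\ldots,7$ at 2$\hbar \omega$ and the $(4,4)$ $L=4,5$ band at 4$\hbar \omega$, with nothing beyond, in line with the 20Ne discussion below.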
The special states have well-defined symmetry with respect to the chain~(\ref{eq:DSBasis}) and the condition of Eq.~(\ref{B0phiL}) ensures that they are solvable eigenstates of $\hat{H}(\beta_0,\beta_2)$~(\ref{Eq:Hpds}) with eigenvalues $E(N=0) = 0$ for the 0$\hbar \omega$ level, and \begin{subequations} \begin{eqnarray} E(N) &=& \beta_2 \frac{N}{3} ( N_{\sigma} - \lambda_{\sigma} + \mu_{\sigma} - 6 + \frac{3}{2} N ) \;\;\;\; (\lambda_{\sigma} > \mu_{\sigma}) \label{familya}\\ E(N) &=& \beta_2 \frac{N}{3} ( N_{\sigma} + 2 \lambda_{\sigma} + \mu_{\sigma} - 3 + \frac{3}{2} N ) \;\;\;\; (\lambda_{\sigma} \leq \mu_{\sigma}) \label{familyb} \end{eqnarray} \label{Eq:PDSEnergies} \end{subequations} for $N > 0$. All 0$\hbar \omega$ states are unmixed and span the entire $(\lambda_{\sigma},\mu_{\sigma})$ irrep. In contrast, for the excited levels ($N > 0$), the pure states span only part of the corresponding SU(3) irreps. There are other states at each excited level which do not preserve the SU(3) symmetry and therefore contain a mixture of SU(3) irreps. The partial SU(3) symmetry of $\hat{H}(\beta_0,\beta_2)$ is converted into SU(3)-PDS of type I by adding to it O(3) rotation terms which lead to $L(L+1)$-type splitting but do not affect the wave functions. The solvable states then form rotational bands and, since condition~(\ref{B0phiL}) determines their wave functions, one can obtain analytic expressions for the E2 rates between them~\cite{Escher02}. The Hamiltonian of Eq.~(\ref{Eq:Hpds}), with SU(3) partial symmetry, has a close relationship to the collective quadrupole-quadrupole interaction, \begin{eqnarray} \lefteqn{ Q_2 \cdot Q_2 = 3(\hat{C}_2 + \hat{A}_2 + \hat{B}_2 ) \cdot (\hat{C}_2 + \hat{A}_2 + \hat{B}_2) } \nonumber\\ &&= \hat{H}(\beta_0=12,\beta_2=18)+ const - 3 \hat{L}^2 + \{ \mbox{terms coupling different h.o. shells} \} ~. 
\qquad\;\;\, \label{q2q2} \end{eqnarray} Here $const = 6\hat{C}_{{\rm Sp(6)}} - 2\hat{H}_0^2 + 34\hat{H}_0$ is fixed for a given symplectic irrep $N_{\sigma}(\lambda_{\sigma},\mu_{\sigma})$ and $N\hbar \omega$ excitation. Although $\hat{H}(\beta_0,\beta_2)$ does not couple different harmonic oscillator shells, it breaks the SU(3) symmetry and, due to the above relation, exhibits in-shell behavior similar to that of $Q_2 \cdot Q_2$. To study this connection further, and to illustrate that the PDS Hamiltonians of Eq.~(\ref{Eq:Hpds}) are physically relevant, the formalism was applied to oblate, prolate, and triaxially deformed nuclei, by comparing the properties of the following SU(3)-PDS Hamiltonian \begin{eqnarray} \hat{H}_{PDS} &=& h(N) + \xi \hat{H}(\beta_{0}=12,\beta_{2}=18) + \gamma_2 \hat{L}^2 + \gamma_4 \hat{L}^4 \label{hPDSsymp} \end{eqnarray} to those of the symplectic Hamiltonian \begin{eqnarray} \hat{H}_{{\rm Sp(6)}} &=& \hat{H}_0 - \eta Q_2 \cdot Q_2 + d_2 \hat{L}^2 + d_4 \hat{L}^4 \; . \label{hSp6} \end{eqnarray} Here the function $h(N)$, which contains the harmonic oscillator term $\hat{H}_0$, is simply a constant for a given $N\hbar \omega$ excitation. Details of the calculations can be found in~\cite{Escher02}. The leading Sp(6,R) irrep for the oblate nucleus $^{12}$C is $N_{\sigma} (\lambda_{\sigma},\mu_{\sigma})$ $=24.5(0,4)$. In this case, $\hat{H}_{PDS}$~(\ref{hPDSsymp}) has the pure-SU(3) states of family~(b) as solvable eigenstates. In particular, all 0$\hbar \omega$ states are pure $(\lambda_{\sigma},\mu_{\sigma}) = (0,4)$ states, and at 2$\hbar \omega$ a rotational band with good SU(3) symmetry $(\lambda,\mu)=(2,4)$ and $L=2,3,4,5,6$ exists. Similarly, at 4$\hbar \omega$ there are pure-SU(3) bands with $(\lambda,\mu)=(4,4)$ and $L=4,5,6,7,8$, at 6$\hbar \omega$ with $(\lambda,\mu)=(6,4)$ and $L=6,7,8,9,10$, etc. 
On the other hand, the leading Sp(6,R) irrep for the prolate nucleus $^{20}$Ne is $N_{\sigma}(\lambda_{\sigma},\mu_{\sigma})$ $=48.5(8,0)$. In this case, the solvable pure-SU(3) eigenstates of $\hat{H}_{PDS}$ are those of family~(a). They form a K=$0_1$ $L=0,2,4,6,8$ rotational band with $(\lambda,\mu)=$ (8,0) at 0$\hbar \omega$, a K=$2_1$ $L=2,3,4,5,6,7$ band with $(\lambda,\mu)=$ (6,2) at 2$\hbar \omega$, and a K=$4_1$ $L=4,5$ `band' with $(\lambda,\mu)=$ (4,4) at 4$\hbar \omega$. There are no other pure PDS states at higher levels of excitation. As shown in Fig.~\ref{figEnergies_C12} and Table~\ref{TabBE2_C12} for $^{12}$C, and in Fig.~\ref{figNe20Energies} and Table~\ref{Tab:Ne20Be2} for $^{20}$Ne, the resulting energies and transition rates in the PDS and symplectic approaches are similar and converge to values which agree with the data. The PDS calculations of B(E2) values, however, require an effective charge. A better measure for the level of agreement between the PDS and symplectic results is given by a comparison of the eigenstates. From Fig.~\ref{figDecompC12Ne20} we observe that, although $\hat{H}_{{\rm Sp(6)}}$~(\ref{hSp6}) [$\hat{H}_{PDS}$~(\ref{hPDSsymp})] does (does not) mix different major oscillator shells, the $N\hbar \omega$ level to which a particular PDS band belongs is also dominant in the corresponding symplectic band. Furthermore, within this dominant excitation, eigenstates of $\hat{H}_{{\rm Sp(6)}}$ and $\hat{H}_{PDS}$ have very similar SU(3) structure, that is, the relative strengths of the various SU(3) irreps in the symplectic states are approximately reproduced in the PDS case. This holds for the ground and excited bands in both nuclei, with the exception of the K=$0_2$ resonance band in $^{20}$Ne, where significant differences appear in the structure of the wave functions. Thus the SU(3)-PDS Hamiltonian captures much of the physics of the $Q_{2}\cdot Q_{2}$ interaction. 
Calculations were also performed~\cite{Escher02} for the triaxially-deformed nucleus $^{24}$Mg, whose leading Sp(6,R) irrep is $N_{\sigma}(\lambda_{\sigma},\mu_{\sigma})$ $=62.5(8,4)$. Since now both $\lambda_{\sigma} \neq 0$ and $\mu_{\sigma} \neq 0$, the symplectic Hilbert space has a very rich structure. In this case, the solvable pure-SU(3) states are those of family~(a) and include three rotational K=0,2,4 bands at 0$\hbar \omega$ with $(\lambda,\mu)=(8,4)$, and at 2$\hbar \omega$ a (short) rotational K=6 band with $L$=6,7, which belongs to the irrep $(\lambda,\mu)=(6,6)$. Figure~\ref{figEnergies_Mg24} compares the experimental spectrum of $^{24}$Mg with energies obtained from 6$\hbar \omega$ symplectic and PDS calculations. Both the PDS and symplectic Hamiltonians were supplemented with small cubic and quartic SU(3)-conserving interactions to account for $K$-band splitting. These extra terms break the partial symmetry slightly ($<1\%$) in the $K=6_1$ band, as can be inferred from the eigenstate decompositions plotted in Fig.~\ref{figDecomp_Mg24}. As in the previous examples, the eigenstates of the PDS and symplectic Hamiltonians are found to have very similar structures. The structural differences that do exist are reflected in the very sensitive interband transition rates~\cite{Escher02}. The boson Hamiltonian, $\hat{H}(h_0,h_2)$ of Eq.~(\ref{HPSsu3}), and the fermion Hamiltonian, $\hat{H}(\beta_0,\beta_2)$ of Eq.~(\ref{Eq:Hpds}), have several features in common. Both display partial SU(3) symmetry, both are constructed as rotationally invariant functions of $(\lambda,\mu)=(2,0)$ and $(\lambda,\mu)=(0,2)$ SU(3) tensor operators, and both contain components with $(\lambda,\mu)=(0,0)$ and $(2,2)$. Both Hamiltonians have solvable pure-SU(3) eigenstates, which can be organized into rotational bands. The ground bands are pure in both cases, and higher-energy pure bands coexist with mixed-symmetry states. 
There are, however, several significant differences between the bosonic and fermionic PDS Hamiltonians. For example, the ground band of the bosonic Hamiltonian $\hat{H}(h_0,h_2)$, Eq.~(\ref{HPSsu3}), is characterized by $(\lambda,\mu) = (2N,0)$, i.e., it describes an axially-symmetric prolate nucleus. As mentioned at the end of Subsection~\ref{subsec:su3PDStypeI}, it is also possible to find an IBM Hamiltonian with partial SU(3) symmetry for an oblate nucleus. It can be shown that these two cases exhaust all possibilities for partial SU(3) symmetry with a two-body Hamiltonian in the IBM with one type of monopole and quadrupole bosons. In contrast, the fermionic Hamiltonian $\hat{H}(\beta_0,\beta_2)$~(\ref{Eq:Hpds}) can accommodate ground bands of prolate [$(\lambda_{\sigma},0)$], oblate [$(0,\mu_{\sigma})$], and triaxial [$(\lambda_{\sigma},\mu_{\sigma})$ with $\lambda_{\sigma} \neq 0$, $\mu_{\sigma} \neq 0$] shapes. Another difference lies in the physical interpretation of the excited solvable bands. While these bands represent $\gamma^k$ excitations in the IBM, they correspond to giant monopole and quadrupole resonances in the fermion case. Furthermore, whereas the pure eigenstates of the IBM Hamiltonian $\hat{H}(h_0,h_2)$ can be generated by angular momentum projection from intrinsic states~(\ref{k}), a similar straightforward construction process for the special eigenstates of the symplectic Hamiltonian $\hat{H}(\beta_0,\beta_2)$ has not been identified yet. The situation seems to be more complicated in the fermion case, which is also reflected in the fact that $\hat{H}(\beta_0,\beta_2)$ has two possible families of pure eigenstates, one finite, the other infinite. The association of the special states to one or the other family depends on the 0$\hbar \omega$ symplectic starting configuration. Thus, in spite of similar algebraic structures, the bosonic and fermionic PDS Hamiltonians involve different mechanisms for generating the partial symmetries in question. 
\subsection{PDS and seniority} \label{subsec:PDSseniority} The notions of pairing and seniority for $n$ identical nucleons occupying a single $j$ shell are conveniently encoded in the quasi-spin formalism~\cite{Kerman61}. The latter is based on an ${\rm SU}_{S}(2)$ algebra whose generators are \begin{eqnarray} \hat{S}^{+}_{j} = \sqrt{\frac{\Omega}{2}}\, (a^{\dag}_{j}\, a^{\dag}_{j})^{(0)}_{0} \;\;\; , \;\;\; \hat{S}^{-}_{j} = (\hat{S}^{+}_{j})^{\dag} \;\;\; , \;\;\; \hat{S}^{0}_{j} = \frac{1}{2}(\hat{n} - \Omega) ~. \label{SU2Q} \end{eqnarray} Here $\hat{S}^{+}_{j}$ creates a pair of fermions coupled to angular momentum $J=0$, $\hat{n}=\sum_{m}a^{\dag}_{jm}a_{jm}$ is the number operator and $\Omega = (2j+1)/2$. The basic quasi-spin doublet is $(a^{\dag}_{jm},\,-\tilde{a}_{jm})$, with $\tilde{a}_{jm} = (-1)^{j+m}a_{j,-m}$. The seniority quantum number, $v$, refers to the number of nucleons which are not in zero-coupled $J=0$ pairs. The ${\rm SU}_{S}(2)$ quantum numbers $(S,S_0)$ are related to $n$ and $v$ by \begin{eqnarray} S=(\Omega-v)/2\;\;\; , \;\;\; S_0 = (n-\Omega)/2 ~. \label{Sv} \end{eqnarray} The operator $\hat{S}^{-}_{j}$ annihilates states with $(S,S_0=-S)$, corresponding to states in the $j^v$ configuration with seniority $v$ and $n=v$. The relevant chain of nested algebras for $n$ fermions in a single $j$ shell is~\cite{flowers52,helmers61} \begin{eqnarray} \begin{array}{ccccc} {\rm U(2j+1)} & \supset & {\rm Sp(2j+1)} & \supset & {\rm SU_{J}(2)}\\ \downarrow && \downarrow && \downarrow\\ \left [ 1^n \right ] & & v & \rho & J \end{array} ~, \label{Senchain} \end{eqnarray} where $\rho$ is a multiplicity index. 
The generators of the indicated algebras are \begin{eqnarray} &&{\rm U(2j+1)}=\{\hat{T}^{(L)}_{m}\}\;\; , \;\; {\rm Sp(2j+1)}=\{\hat{T}^{(L)}_{m}\; L\, {\rm odd}\} ~, \nonumber\\ &&{\rm SU_{J}(2)}=\{\hat{J}_{m} = c_j\,\hat{T}^{(1)}_{m}\}\;\; , \;\; c_j= \sqrt{j(j+1)(2j+1)/3}~,\qquad \label{Unalgeb} \end{eqnarray} where \begin{eqnarray} \hat{T}^{(L)}_{m} = \left( a^{\dag}_{j}\,\tilde{a}_{j}\right )^{(L)}_{m} \qquad L=0,1,2,\ldots, 2j~. \label{TLmS} \end{eqnarray} The unitary symplectic algebra, Sp(2j+1), and quasi-spin algebra, ${\rm SU}_{S}(2)$, commute. The duality relationship between their respective irreps, $v$ and $S$, is given in Eq.~(\ref{Sv}) and their quadratic Casimir operators are related by $\hat{C}_{2}[{\rm Sp(2j+1)}] = \Omega(\Omega+2) - 4\mbox{\boldmath $\hat{S}^2$}$. The dynamical symmetry Hamiltonian associated with the chain~(\ref{Senchain}) is given by \begin{subequations} \begin{eqnarray} \hat{H}_{DS} &=& \beta\,\hat{n} + \alpha \, \hat{n}(\hat{n}-1) + a\,\hat{C}_{2}[{\rm Sp(2j+1)}] + b\,\hat{C}_{2}[{\rm SU_{J}(2)}] ~, \label{hDSsen}\\ E_{DS} &=& \beta\, n + \alpha\, n(n-1) + a\,v(2\Omega+2 -v) + b\,J(J+1) ~, \label{eDSsen} \end{eqnarray} \label{heDSsen} \end{subequations} and $E_{DS}$ are the corresponding eigenvalues. The $\hat{T}^{(L)}_{m}$ operators of Eq.~(\ref{TLmS}), with $L$ odd, are quasi-spin scalars. Accordingly, the most general number-conserving rotational invariant Hamiltonian with seniority-conserving one- and two-body interactions, acting in a single-$j$ shell, can be transcribed in the form~\cite{Talmi93} \begin{subequations} \begin{eqnarray} \hat{H} &=& \beta\,\hat{n} + \alpha \, \hat{n}(\hat{n}-1) + \sum_{L\, {\rm odd}}\lambda_{L}\,\hat{T}^{(L)}\cdot \hat{T}^{(L)} \label{hSenCons}\\ E_{n,v,\rho, J} &=& \beta\,n +\alpha\, n(n-1) + Z_{v,\rho,J} ~. 
\label{eSenCons} \end{eqnarray} \label{heSenCons} \end{subequations} For the special choice, $\lambda_{L}=2a + b\,c_{j}^2\,\delta_{L,1}$, the above Hamiltonian reduces to the dynamical symmetry Hamiltonian of Eq.~(\ref{hDSsen}). In general, eigenstates of $\hat{H}$~(\ref{heSenCons}) are basis states, $\vert n, v, \rho, J M \rangle$, of the chain~(\ref{Senchain}) with eigenvalues $E_{n,v,\rho,J}$ and wave functions of the form \begin{eqnarray} \vert n, v, \rho, J M \rangle \propto \left (\hat{S}_{j}^{+}\right )^{(n-v)/2}\vert n=v, j, v,\rho, J M\rangle ~. \label{nvwf} \end{eqnarray} Since the last term in Eq.~(\ref{hSenCons}) is a quasi-spin scalar, its eigenvalues, $Z_{v,\rho,J}$, are independent of $n$. Consequently, energy differences $E_{n, j, v, \rho, J}- E_{n, j, v',\rho',J'}$, taken at fixed $n$, are independent of $n$. Conservation of seniority does not, however, imply solvability. In general, eigenstates and eigenvalues of $\hat{H}$, Eq.~(\ref{heSenCons}), must be obtained from a numerical calculation. Nevertheless, it has been shown that some multiplicity-free eigenstates of $\hat{H}$, {\it i.e.}, those with unique $n,v,J$ assignments, are completely solvable, and closed algebraic expressions for their energies have been derived~\cite{rowerosen01,rosenrowe03}. These include states with $(v=2,J)$, $(v,J_{max})$ and $(v,J_{max}-2)$, where $J_{max}= v(2j+1-v)/2$. As an example, for $n$ even, the ground state of $\hat{H}$~(\ref{hSenCons}), with $v=J=0$, is analytically solvable with energy \begin{eqnarray} E_{n, j, v=0, J=0} &=& \beta\, n +\alpha\, n(n-1) ~. \label{ev0J0} \end{eqnarray} The above expression is identical to the ground-state energy of the dynamical symmetry Hamiltonian, Eq.~(\ref{heDSsen}). Since all eigenstates carry the seniority quantum number $v$, the Hamiltonian $\hat{H}$ of Eq.~(\ref{heSenCons}) exhibits PDS of type II, with the added feature that some multiplicity-free states are analytically solvable. 
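The quasi-spin bookkeeping can be verified with exact arithmetic. The sketch below (Python; function names are ours) encodes Eqs.~(\ref{Sv}) and (\ref{eDSsen}) and checks the algebraic identity $v(2\Omega+2-v)=\Omega(\Omega+2)-4S(S+1)$ with $S=(\Omega-v)/2$, which links the pairing term in the dynamical symmetry eigenvalues to the Sp(2j+1) and quasi-spin Casimir relation quoted above.

```python
from fractions import Fraction as F

def quasi_spin(n, v, two_j):
    """Quasi-spin labels (S, S0) of Eq. (Sv) for n particles with seniority v
    in a single j = two_j/2 shell, where Omega = (2j+1)/2."""
    omega = F(two_j + 1, 2)
    return (omega - v) / 2, (n - omega) / 2

def e_ds(n, v, J, two_j, beta, alpha, a, b):
    """Dynamical symmetry eigenvalues of Eq. (eDSsen)."""
    omega = F(two_j + 1, 2)
    return (beta * n + alpha * n * (n - 1)
            + a * v * (2 * omega + 2 - v) + b * J * (J + 1))

# Casimir duality: v(2*Omega+2-v) = Omega(Omega+2) - 4S(S+1), S = (Omega-v)/2.
for two_j in (1, 3, 5, 7, 9):
    omega = F(two_j + 1, 2)
    for v in range(two_j + 2):
        S = (omega - v) / 2
        assert v * (2 * omega + 2 - v) == omega * (omega + 2) - 4 * S * (S + 1)
```

Note that the spacing $E_{DS}(n,v,J)-E_{DS}(n,v',J')$ at fixed $n$ is independent of $n$, as the text states for the general seniority-conserving Hamiltonian.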
As is well known~\cite{Talmi93}, seniority remains a good quantum number for any two-body interaction acting within a $j$ shell when $j\leq 7/2$, but it need not be conserved for $j\geq 9/2$. Recently, it has been shown that it is possible to construct interactions that, in general, do not conserve seniority but still have {\it some} solvable eigenstates with good seniority~\cite{escuder06,isahein08}. Specifically, for $j=9/2,\,n=4$, there are two independent states ($\rho=1,2$) with seniority $v=4$ and $J=4,\,6$. For each such $J$ value, there exists one particular combination which is completely solvable with good seniority, $v=4$, for {\it any} two-body interaction in the $j=9/2$ shell. The energies of the solvable states are given by~\cite{isahein08} \begin{subequations} \begin{eqnarray} E[(9/2)^4, v=4, J=4] &=& \frac{68}{33}\,\nu_2 +\nu_4 + \frac{13}{15}\,\nu_6 + \frac{114}{55}\,\nu_8 ~,\\[4mm] E[(9/2)^4, v=4, J=6] &=& \frac{19}{11}\,\nu_2 + \frac{12}{13}\,\nu_4 +\nu_6 + \frac{336}{143}\,\nu_8~, \end{eqnarray} \end{subequations} where $\nu_{\lambda} \equiv \langle (9/2)^2;\lambda\vert \hat{V}\vert (9/2)^2;\lambda\rangle$ are the matrix elements of an arbitrary rotational-invariant two-body interaction, $\hat{V}$. The indicated states retain their structure and are completely solvable, independent of whether the interaction conserves seniority or not. It follows that the most general one- and two-body rotational-invariant Hamiltonian in the $j=9/2$ shell exhibits PDS of type I. The E2 matrix element between the two solvable states is interaction-independent, and the corresponding B(E2) value is given by~\cite{isahein08} \begin{eqnarray} &&B(E2; (9/2)^4, v=4, J=6 \to (9/2)^4, v=4, J=4 ) = \nonumber\\ && \qquad\qquad\qquad\qquad\qquad\qquad \frac{209475}{176468}\, B(E2; (9/2)^2, J=2 \to (9/2)^2, J=0 ) ~. \label{be2sen} \end{eqnarray} This again defines a parameter-independent relation between properties of the two- and four-particle systems. 
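A quick consistency check on the solvable energies above: a four-particle state contains $\binom{4}{2}=6$ interacting pairs, so for a diagonal two-body energy the coefficients of $\nu_2,\nu_4,\nu_6,\nu_8$ in each expression must sum to 6, and they do. The sketch below (Python; names are ours) encodes the two energies for an arbitrary interaction.

```python
from fractions import Fraction as F

# Coefficients of (nu_2, nu_4, nu_6, nu_8) in the solvable (9/2)^4, v=4 energies
COEFF = {
    4: (F(68, 33), F(1), F(13, 15), F(114, 55)),
    6: (F(19, 11), F(12, 13), F(1), F(336, 143)),
}

def solvable_energy(J, nu):
    """E[(9/2)^4, v=4, J] for two-body matrix elements nu = (nu2, nu4, nu6, nu8)."""
    return sum(c * x for c, x in zip(COEFF[J], nu))

# Each coefficient set sums to the number of pairs, C(4,2) = 6:
assert sum(COEFF[4]) == 6 and sum(COEFF[6]) == 6
```

In particular, for a pairing-blind interaction with all $\nu_\lambda$ equal, both solvable states are degenerate at six times the common pair energy, as the sum rule requires.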
Some properties of these solvable states can be attributed to properties of certain coefficients of fractional parentage~\cite{zamick07,isahein08,zamisa08,Talmi10}. It has been suggested that this partial seniority conservation may shed some new light on the existence of isomers in nuclei with valence neutrons or protons predominantly confined to the $g_{9/2}$ or $h_{9/2}$ shell~\cite{isahein08}. In most medium-mass and heavy nuclei, the valence nucleons are distributed over several non-degenerate $j$ orbits in a major shell. This situation can be treated within the generalized seniority scheme~\cite{talmi71}, based on more general pair-operators with $J=0,2$ \begin{subequations} \begin{eqnarray} \hat{S}^{\dag} &=& \sum_{j}\alpha_{j}\,(a^{\dag}_{j}\, a^{\dag}_{j})^{(0)}_{0} ~, \label{Sgen} \\ D^{\dag}_{\mu} &=& \sum_{j,j'}\beta_{jj'}\, (a^{\dag}_{j}\, a^{\dag}_{j'})^{(2)}_{\mu} ~. \end{eqnarray} \label{vg0vg2} \end{subequations} The generalized seniority conditions~\cite{talmi71,shlomtalmi72,talmi73} are \begin{subequations} \begin{eqnarray} \hat{H}\vert 0\rangle &=& 0 ~,\\ \left [ \, \hat{H}\, , \,S^{\dag}\,\right ] \vert 0 \rangle &=& V_{0}\,S^{\dag}\vert 0 \rangle \;\; , \;\; \left [ \left [\, \hat{H}\, , \,S^{\dag}\,\right ]\, , \,S^{\dag}\,\right ] = \Delta\,(S^{\dag})^2 ~,\\ \left [ \, \hat{H}\, , \,D^{\dag}_{\mu}\,\right ] \vert 0 \rangle &=& V_{2}\,D^{\dag}_{\mu}\vert 0 \rangle \;\; , \;\; \left [ \left [\, \hat{H}\, , \,S^{\dag}\,\right ]\, , \,D^{\dag}_{\mu}\,\right ] = \Delta\,S^{\dag}D^{\dag}_{\mu} ~, \end{eqnarray} \label{genSen} \end{subequations} where $\vert 0\rangle$ is the doubly-magic core state. 
These relations imply that the Hamiltonian has a solvable ground state with $J=0$ and generalized seniority $v_g=0$ \begin{subequations} \begin{eqnarray} \vert n=2N, v_g =0, J=0\rangle &\propto& \left (S^{\dag}\right )^N\vert 0\rangle ~,\\ E_{N,v_g=0,J=0} &=& N\,V_0 + \frac{1}{2}N(N-1)\,\Delta ~, \end{eqnarray} \label{vg0} \end{subequations} and a solvable excited state with $J=2$ and generalized seniority $v_g=2$ \begin{subequations} \begin{eqnarray} \vert n=2N, v_g =2, J=2\rangle &\propto& \left (S^{\dag}\right )^{N-1}D^{\dag}_{\mu}\vert 0\rangle ~,\\ E_{N,v_g=2,J=2} &=& E_{N,v_g=0,J=0} + V_2-V_0 ~. \end{eqnarray} \label{vg2} \end{subequations} From the point of view of symmetry, the monopole pair-operators $S^{\dag},\, S$, and $\left [S^{\dag}\, , \,S\right ]/2$, of Eq.~(\ref{Sgen}), with unequal $\alpha_j$, do not form an SU(2) algebra. Nevertheless, a Hamiltonian $\hat{H}$ satisfying relations~(\ref{genSen}) has selected solvable states, {\it i.e.}, it is partially solvable. It should be noted that the energy and wave function of the $v_g=0$ ground state~(\ref{vg0}) have the same form as in the single-$j$ dynamical symmetry expressions, Eqs.~(\ref{nvwf})-(\ref{ev0J0}), and, by construction, both spectra exhibit linear two-nucleon separation energies and constant $2^{+}$-$0^{+}$ spacings. Furthermore, the Sp(2j+1) generators~(\ref{Unalgeb}), $\hat{T}^{(L)}_{m}$ with $L$ odd and {\it any}~$j$, annihilate the state of Eq.~(\ref{vg0}). Therefore, the $v_g=0$ ground state is invariant under $\prod_{j}\otimes {\rm Sp(2j+1)}$, even though the latter is not a symmetry of the Hamiltonian. \section{Concluding Remarks} \label{conclusion} The notion of partial dynamical symmetry (PDS) extends and complements the fundamental concepts of exact and dynamical symmetries. It addresses situations in which a prescribed symmetry is neither exact nor completely broken. 
When found, such intermediate symmetry structure can provide analytic solutions and quantum numbers for a portion of the spectrum, thus offering considerable insight into complex dynamics. In other circumstances a PDS can serve as a convenient starting point for further improvements. As discussed in the present review, PDSs of various types have concrete applications to nuclear and molecular spectroscopy. Their empirical manifestations illustrate their ability to serve as a practical tool for the calculation of observables in realistic systems. Hamiltonians with partial symmetries are neither completely integrable nor fully chaotic. As such, they are relevant to the study of mixed systems with coexisting regularity and chaos, which are the most generic. Quantum phase transitions are driven by competing symmetries in the Hamiltonian. They provide a natural arena for PDSs, which incorporate such incompatible symmetries and manage to survive at the critical points, in spite of the strong mixing. PDSs appear to be a common feature in algebraic descriptions of dynamical systems. They are not restricted to a specific model but can be applied to any quantal system of interacting particles, bosons and fermions. The existence of a PDS is closely related to the order of the interaction among the particles. The partial symmetry in question can be continuous (Lie-type) or discrete (point-group), and the associated dynamical algebra can involve a single or a coupled algebraic structure. The examples of partial dynamical symmetries surveyed in the present review involved purely bosonic or purely fermionic algebras. It is clearly desirable to extend the PDS notion to mixed systems of bosons and fermions, and to explore the possible role of partial Bose-Fermi symmetries and partial supersymmetries. On phenomenological grounds, having at hand concrete algorithms for identifying and constructing Hamiltonians with PDS is a valuable asset. 
It provides selection criteria for the a priori huge number of possible symmetry-breaking terms, accompanied by a rapid proliferation of free parameters. This is particularly important in complicated environments, when many degrees of freedom take part in the dynamics and higher-order terms are included in the Hamiltonian. Furthermore, Hamiltonians with PDS break the dynamical symmetry (DS) but retain selected solvable eigenstates with good symmetry. The advantage of using interactions with a PDS is that they can be introduced, in a controlled manner, without destroying results previously obtained with a DS for a segment of the spectrum. These virtues greatly enhance the scope of applications of algebraic modeling of quantum many-body systems. On a more fundamental level, it is important to recall that dynamical systems often display simple patterns amidst a complicated environment. A representative example is the occurrence of both collective and quasi-particle types of states in the same nucleus. It is natural to associate the ``simple'' states with a symmetry that protects their purity and special character. This symmetry, however, is shared by only a subset of states, and is broken in the remaining eigenstates of the same Hamiltonian. It thus appears that realistic quantum many-body Hamiltonians can accommodate simultaneously eigenstates with different symmetry character. These are precisely the defining ingredients of a partial symmetry. In this context, PDS can offer a possible clue to the deep question of how simple features emerge from complicated dynamics. Underlying the PDS notion is the recognition that it is possible for a non-invariant Hamiltonian to have selected eigenstates with good symmetry and good quantum numbers. In such a case, the symmetry in question is preserved in some states but is broken in the Hamiltonian (the opposite of the situation encountered in a spontaneously-broken symmetry). 
The PDS concept is, therefore, relevant to situations where selected states fulfill the predictions of a symmetry, which is otherwise known to be broken. Familiar examples of such a scenario include flavor symmetry in hadrons and chiral symmetry in nuclei. PDS may thus shed new light on the related question of why, occasionally, symmetries seem to be obeyed beyond their domain of validity. The realistic attributes and fundamental aspects of partial symmetries, as portrayed in the present review, illuminate their potentially useful role in dynamical systems and motivate their ongoing and future in-depth study. Much has been learnt, but much more remains to be explored and understood. \section*{Acknowledgments} \label{acknow} The author is pleased to acknowledge a fruitful collaboration with Y.~Alhassid, J.~Escher, J.E.~Garc\'\i a-Ramos, J.N.~Ginocchio, I.~Sinai, M.~Mamistvalov, P.~Van~Isacker and N.D.~Whelan. Valuable discussions with F.~Iachello, D.J.~Rowe and I.~Talmi on fundamental aspects of partial symmetries, and with R.F.~Casten and N.~Pietralla on their empirical manifestations, are much appreciated. To them and to many other colleagues who have shown interest in this avenue of research, my warm thanks. This work was supported in part by the Israel Science Foundation and in part by the U.S.-Israel Binational Science Foundation. \section*{Appendix} \label{Appendix} \begin{table} \begin{center} \caption{\label{TabIBMcas} \protect\small Generators and Casimir operators, $\hat{C}_{G}$, of algebras $G$ in the IBM. Here $\hat{n}_d = \sqrt{5}\,U^{(0)}$, $L^{(1)} = \sqrt{10}\,U^{(1)}$, $Q^{(2)} = \Pi^{(2)} -{\textstyle\frac{\sqrt{7}}{2}}\,U^{(2)}$, $\bar{Q}^{(2)} = \Pi^{(2)} +{\textstyle\frac{\sqrt{7}}{2}}\,U^{(2)}$. The operators $U^{(L)},\, \Pi^{(2)},\, \bar{\Pi}^{(2)},\,\hat{n}_s$, are defined in Eq.~(\ref{u6gen}).} \vspace{1mm} \begin{tabular}{ccccc} \hline & & & &\\[-3mm] Algebra & Generators & Irrep. 
& Casimir operator & Eigenvalues\\[4pt] & & & &\\[-3mm] \hline & & & &\\[-2mm] {\rm O(3)} & $U^{(1)}$ & L & $L^{(1)}\cdot L^{(1)}$ & L(L+1) \\[2pt] {\rm O(5)} & $U^{(1)},U^{(3)}$ & $\tau$ & $2(U^{(1)}\cdot U^{(1)} +U^{(3)}\cdot U^{(3)})$ & $\tau(\tau+3)$ \\[2pt] {\rm O(6)} & $U^{(1)},U^{(3)},\Pi^{(2)}$ & $\sigma$ & $\hat{C}_{{\rm O(5)}} + \Pi^{(2)}\cdot\Pi^{(2)}$ & $\sigma(\sigma+4)$ \\[2pt] $\overline{{\rm O(6)}}$ & $U^{(1)},U^{(3)},\bar{\Pi}^{(2)}$ & $\bar{\sigma}$ & $\hat{C}_{{\rm O(5)}} + \bar{\Pi}^{(2)}\cdot\bar{\Pi}^{(2)}$ & $\bar{\sigma}(\bar{\sigma}+4)$ \\[2pt] {\rm SU(3)} & $U^{(1)},Q^{(2)}$ & $(\lambda,\mu)$ & $2Q^{(2)}\cdot Q^{(2)} + {\textstyle\frac{3}{4}}L^{(1)}\cdot L^{(1)}$ & $\lambda^2 + (\lambda+\mu)(\mu+3)$\\[2pt] $\overline{{\rm SU(3)}}$ & $U^{(1)},\bar{Q}^{(2)}$ & $(\bar{\lambda},\bar{\mu})$ & $2\bar{Q}^{(2)}\cdot \bar{Q}^{(2)} + {\textstyle\frac{3}{4}}L^{(1)}\cdot L^{(1)}$ & $\bar{\lambda}^2 + (\bar{\lambda}+\bar{\mu})(\bar{\mu}+3)$\\[2pt] {\rm U(5)} & $U^{(L)}$ $L=0,\ldots,4$ & $n_d$ & $\hat{n}_d,\,\hat{n}_d(\hat{n}_d+4)$ & $n_d,\, n_d(n_d+4)$ \\[2pt] {\rm U(6)} & $U^{(L)}$ $L=0,\ldots,4$ & $N$ & $\hat{N},\,\hat{N}(\hat{N}+5)$ & $N,\, N(N+5)$ \\[2pt] & $\Pi^{(2)},\, \bar{\Pi}^{(2)},\, \hat{n}_s$ & & & \\[2pt] & & & &\\[-3mm] \hline \end{tabular} \end{center} \end{table} The normal-order form of the most general IBM Hamiltonian with one- and two-body interactions is given by \begin{eqnarray} \hat{H}_{IBM} &=& \epsilon_s\,s^{\dag}s + \epsilon_{d}\,d^{\dag}\cdot \tilde{d} + u_{0}\,(s^{\dag})^2s^2 + u_{2}\,s^{\dag}d^{\dag}\cdot\tilde{d}s + v_{0}\,\left [\, (s^{\dag})^2\tilde{d}\cdot \tilde{d} + H.c. \, \right ] \nonumber\\ && + v_{2}\,\left [\, s^{\dag}d^{\dag}\cdot (\tilde{d} \tilde{d})^{(2)} + H.c. \, \right ] + \sum_{L=0,2,4}c_{L}\,(d^{\dag}d^{\dag})^{(L)}\cdot (\tilde{d}\tilde{d})^{(L)} ~. 
\label{hIBMnormal} \end{eqnarray} The corresponding multipole form, written in terms of the Casimir operators listed in Table~\ref{TabIBMcas}, is given by \begin{eqnarray} \hat{H}_{IBM} &=& \epsilon_s\,\hat{N} + u_{0}\,\hat{N}(\hat{N}-1) + (\epsilon_{d}-\epsilon_{s})\,\hat{n}_d - (2u_0 -u_2 + 2v_0)\,(\hat{N}-1)\hat{n}_d \nonumber\\ && +(u_0 - u_2 +2v_0 + {\textstyle\frac{1}{2\sqrt{7}}}v_2 + {\textstyle\frac{1}{5}}c_0 + {\textstyle\frac{2}{7}}c_2 + {\textstyle\frac{18}{35}}c_4) \hat{n}_{d}(\hat{n}_d-1) \nonumber\\ && -(v_0 + {\textstyle\frac{3}{2\sqrt{7}}}v_2 + {\textstyle\frac{1}{5}}c_0 - {\textstyle\frac{2}{7}}c_2 + {\textstyle\frac{3}{35}}c_4) \left [\, \hat{C}_{{\rm O(5)}} - 4\hat{n}_d\,\right ] -{\textstyle\frac{1}{2\sqrt{7}}}\,v_2\, \left [\, \hat{C}_{{\rm SU(3)}} - 10\hat{N}\,\right ] \nonumber\\ && + (v_0 + {\textstyle\frac{1}{\sqrt{7}}}v_2) \left [\, \hat{C}_{{\rm O(6)}} - 5\hat{N}\,\right ] +({\textstyle\frac{1}{2\sqrt{7}}}v_2 + {\textstyle\frac{1}{7}}c_4 - {\textstyle\frac{1}{7}}c_2) \left [\, \hat{C}_{{\rm O(3)}} - 6\hat{n}_d\,\right ] ~. \label{hIBMcas} \end{eqnarray} \def\Journal#1#2#3#4{{#1} {#2} (#4) #3} \def\em Ann. Phys. (N.Y.){\em Ann. Phys. (N.Y.)} \def{\em J. Math. Phys.}{{\em J. Math. Phys.}} \def{\em J. Chem. Phys.}{{\em J. Chem. Phys.}} \def{\em Int. J. Mod. Phys.} A{{\em Int. J. Mod. Phys.} A} \def\JPA{{\em J. Phys.} A} \def\JPAold{{\em J. Phys. A: Math. Gen.}} \def\JPB{{\em J. Phys.} B} \def\JPG{{\em J. Phys.} G} \def{\em Nuovo Cimento} A {{\em Nuovo Cimento} A } \def\em Rivista Nuovo Cimento{\em Rivista Nuovo Cimento} \def{\em Nucl. Phys.}{{\em Nucl. Phys.}} \def{\em Nucl. Phys.} A{{\em Nucl. Phys.} A} \def{\em Nucl. Phys.} B\ {{\em Nucl. Phys.} B\ } \def{\em Physica}{{\em Physica}} \def{\em Prog. Theor. Phys.}{{\em Prog. Theor. Phys.}} \def{\em Phys. Lett.} A\ {{\em Phys. Lett.} A\ } \def{\em Phys. Lett.} B{{\em Phys. Lett.} B} \def{\em Phys. Lett.}{{\em Phys. Lett.}} \def{\em Phys. Rev. Lett.\/}\ {{\em Phys. Rev. 
Lett.\/}\ } \def\em Phys. Rev.{\em Phys. Rev.} \def\em Phys. Rep.{\em Phys. Rep.} \def{\em Phys. Rev.} A{{\em Phys. Rev.} A} \def{\em Phys. Rev.} D{{\em Phys. Rev.} D} \def{\em Phys. Rev.} C{{\em Phys. Rev.} C} \def{\em Phys. Rev.} E{{\em Phys. Rev.} E} \def{\em Phys. Rev.} B{{\em Phys. Rev.} B} \def\em Phys. Scr.{\em Phys. Scr.} \def{\em Phil. Mag. Lett.}{{\em Phil. Mag. Lett.}} \def{\em Rev. Mex. Fis.}{{\em Rev. Mex. Fis.}} \def{\em Rev. Mod. Phys.}{{\em Rev. Mod. Phys.}} \def{\em Rep. Prog. Phys.}{{\em Rep. Prog. Phys.}} \def{\em Z. Phys.} A{{\em Z. Phys.} A} \def{\em Proc. Roy. Soc. Lond.} A{{\em Proc. Roy. Soc. Lond.} A} \def\em Prog. Part. Nucl. Phys.{\em Prog. Part. Nucl. Phys.}
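As a simple numerical sanity check of the first row of Table~\ref{TabIBMcas}, the following sketch (our own illustration, using only the standard angular momentum matrix elements, not anything specific to the IBM) verifies that the O(3) Casimir operator $L^{(1)}\cdot L^{(1)}$ acts as $L(L+1)$ on the whole $(2L+1)$-dimensional multiplet:

```python
import numpy as np

def angular_momentum_ops(L):
    """Matrices of Lz, L+, L- in the |L, m> basis, ordered m = L, ..., -L."""
    m = np.arange(L, -L - 1, -1, dtype=float)
    Lz = np.diag(m)
    # matrix element <L, m+1| L+ |L, m> = sqrt(L(L+1) - m(m+1))
    Lp = np.diag(np.sqrt(L * (L + 1) - m[1:] * (m[1:] + 1)), 1)
    return Lz, Lp, Lp.T

def casimir_O3(L):
    """L.L = Lz^2 + (L+ L- + L- L+)/2."""
    Lz, Lp, Lm = angular_momentum_ops(L)
    return Lz @ Lz + 0.5 * (Lp @ Lm + Lm @ Lp)

for L in range(5):
    # the Casimir is L(L+1) times the identity on the spin-L irrep
    assert np.allclose(casimir_O3(L), L * (L + 1) * np.eye(2 * L + 1))
```

The same pattern extends to the other Casimir operators of the table once their generators are represented as matrices.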
We are good listeners. We are inspired by our clients' ideas and needs. Delivering design that motivates your customers to buy your product thrills our team. This produces some of the BEST graphic design available at any cost. We specialize in Package and Marketing Materials Graphic Design that sells customers on buying your product and gives you a competitive advantage over your competitors. Time and money are important too, so we deliver our projects on time and on budget. We do all this and never forget to have a little fun along the way. If you need the most persuasive marketing message possible for your products, we use the Science of Persuasion powered by Merwyn Technology. If you need the best possible image, target audience, marketing platforms, and much, much more for your product/s, our founder and head coach has the best marketing skills of any graphic design company anywhere. Richard's marketing success: Procter & Gamble executive (sales, brand management/advertising) for 16 years (P&G voted "Marketing Company of the Century" in 2000 by its peers), Vice President of Marketing at the Gallo Winery for 10 years, and Teaching Executive in Residence, Arizona State University's School of Management for 6 years, teaching upper level marketing, innovation, and leadership courses. Design that works: simplicity and quality that jumps off the shelf. Strong branding and product imagery create a stunning print ad featuring our designs. Display depicting the line-up of premium coffees for discerning coffee drinkers. Created outside visual impact to direct attendees to a trade show booth. Package Graphic Design. Marketing Graphic Design. Event Graphic Design. At Blue Sage Creative, you receive fresh graphic design thinking producing inspired designs that exceed your expectations. All of our designers, project managers, and creative director are passionate about what they do. 
They love creating designs that make you look like a winner, especially when it helps you to win in a competitive marketplace. For many companies, package design is critical to their success. It is what customers see at the point-of-purchase. The design needs to grab their attention. Having grabbed their attention, the package design needs to give them a compelling reason to buy your product instead of competitive products. Knowing this, the design solutions we develop for you are intended to do all of this better than any of your competition. Your marketing materials require graphic design that is compelling and persuasive with customers and consumers. We understand the importance of marketing design to your success. The fresh and inspired thinking we bring to every project creates marketing designs that help you become a winner against tough competitors. For many companies, events like trade shows require graphic design solutions that powerfully grab attention and invite people into your space to learn more. We know the challenge and we are up to it. Our design solutions bring fresh and inspired thinking that helps you stand out versus your competition, helping you to be a winner. Companies typically have many graphic design needs. Your design needs can include corporate communication, innovation, HR, delivery trucks, and many other needs. Our design solutions are always intended to exceed your expectations, be better than your competition when appropriate, and be graphic design that everyone is proud of. For innovation best practices that can double innovation success check out our sister company Innovate2Grow Experts.
5 Tech Growth Capital Trends to Watch in 2021
Catherine Daly

The year has just begun, and it's already proving tumultuous with civil unrest and rioting in the U.S., a seemingly endless pandemic, and an uncertain global economic outlook. Despite the gloomy start to 2021, most SaaS entrepreneurs are cautiously optimistic about the future. The resilience the SaaS sector demonstrated in 2020 gives us good reason to look forward to brighter days ahead. TIMIA Capital is tracking several growth capital trends that are contributing to the positive outlook:

1. SaaS Industry Growth
The COVID-19 pandemic accelerated the digital transformation roadmap of many businesses, significantly increasing the demand for B2B SaaS solutions. According to Gartner, this trend will continue in 2021 and lead to a SaaS market worth a whopping $120.9 billion by the end of the year, up from $104.7 billion in 2020. For new SaaS businesses coming to market, the greatest opportunity is in vertically-specific areas, delivering more business value and better outcomes to a narrower targeted user base. However, this means that the likelihood of a future IPO is decreasing. The more likely outcome for SaaS companies that emerge today is to sell for a modest price (e.g. $50M to $100M), which is a good outcome for founders with low dilution, but not a worthwhile outcome for a venture capitalist. We predict newer SaaS companies will seek alternatives to venture capital to fuel their growth in 2021 and beyond.

2. Continued M&A Activity
In the first half of 2020, SaaS merger and acquisition (M&A) deal volume dipped below 600 for the first time since 2016. However, the resilience of the SaaS industry during COVID-19 meant that we saw a surprising number of exits and buyouts in the latter half of the year, as growth equity players resumed activity with vigor. 
TIMIA Capital noticed a significant increase in M&A and growth-stage equity funding activity in our own portfolio, with nine companies exiting successfully (six in the second half of the year). The sample size is small but reports from the Software Equity Group appear to reflect our experience. A recent outlook from Ernst & Young also found that 70% of investors expect M&A activity to improve over the next 12 months. Much of this investment is private capital. If public markets continue to soar, there may be an impact on private valuations with some predicting valuation multiples to hit the double digits.

3. Increasing Relevance of Venture Debt
The revenue predictability and relative capital efficiency of today's B2B SaaS companies mean that the debt discussions are happening earlier in a company's lifecycle, around the $1–10 million ARR stage. A recent study found that by the time companies surpass $5 million in ARR, over three-quarters are using debt capital. That's because non-dilutive capital makes sense to these SaaS companies. It can help them scale, clean up their cap tables, or build their valuations to be in a better negotiating position with equity investors down the line.

4. Alternatives to VC
The growth capital industry has changed in the past year, leaving a bigger gap than ever for SaaS companies in the $1-10 million ARR stage. Today, debt fund lenders and venture banks like Silicon Valley Bank are only focused on VC-backed companies—and usually only those that are scaled beyond $10M in ARR. Some revenue-based debt lenders dabbled at the lower end of the scale in the past, but even players like Espresso Capital have pulled back to serve only the larger end of this market. TIMIA Capital remains dedicated to serving SaaS companies in the $1-10 million ARR stage in 2021 as its tech-enabled lending platform continues to scale.

5. 
Increasing Investments as Economy Stabilizes
While SaaS growth capital deal volume stalled in the first half of 2020, it picked back up in the latter half and is expected to increase further in 2021 as vaccines roll out and the economy stabilizes and recovers. TIMIA Capital has capital ready to deploy and is looking forward to serving entrepreneurs with the product at the right stage of their growth journey. If you're a B2B SaaS business with $1-10 million in ARR and would like some advice about non-dilutive capital options, please don't hesitate to get in touch.

Catherine joined TIMIA Capital as a freelance copywriter and social media manager in January 2019. She has 15 years experience in marketing and held senior positions at a number of technology companies including Hootsuite, Absolute, and Avnet Technology Solutions. Catherine is an expert writer and marketer and holds an executive Masters in Marketing, a Bachelor of Science in Communications and Journalism, and a Diploma in Digital Marketing.
\section{Introduction} The separability problem in quantum theory \cite{Guhne} asks for a criterion to distinguish the separable states from the entangled states. It is known that separable states in $\mathcal{M}_k\otimes \mathcal{M}_m$ form a subset of the positive under partial transpose states (PPT states) and in low dimensions, $km\leq 6$, these two sets coincide \cite{peres, horodeckifamily} solving the problem. However, in larger dimensions, $km> 6$, there are entangled PPT states. In addition, for $k,m$ arbitrary, this problem is known to be a hard problem \cite{gurvits2004}, thus any reduction of this problem to a subset of PPT states of $\mathcal{M}_k\otimes \mathcal{M}_m$ is certainly important. \vspace{0,2cm} In \cite{cariello, carielloIEEE}, a procedure to reduce the separability problem to a proper subset of PPT states was presented. The idea behind this reduction can be summarized as follows. Let $A=\sum_{i=1}^nA_i\otimes B_i\in \mathcal{M}_k\otimes \mathcal{M}_m$ be a quantum state and consider $G_A:\mathcal{M}_k\rightarrow \mathcal{M}_m$ and $F_A:\mathcal{M}_m\rightarrow \mathcal{M}_k$ as $ G_A(X)=\sum_{i=1}^n tr(A_iX)B_i$\ and\ $ F_A(X)=\sum_{i=1}^n tr(B_iX)A_i$, where $tr(X)$ stands for the trace of $X$. \vspace{0,2cm} Now, if $A$ is PPT and $X$ is a positive semidefinite Hermitian eigenvector of $F_A\circ G_A:\mathcal{M}_k\rightarrow \mathcal{M}_k$ then $A$ decomposes as a sum of states with orthogonal local supports \cite[Lemma 8]{carielloIEEE}, i.e., $$A=(V\otimes W)A(V\otimes W)+(V^{\perp}\otimes W^{\perp})A(V^{\perp}\otimes W^{\perp}),$$ \noindent where $V,W, V^{\perp},W^{\perp}$ are orthogonal projections onto $\operatorname{Im}(X)$, $\operatorname{Im}(G_A(X))$, $\ker(X)$ and $\ker(G_A(X))$, respectively. Then the algorithm proceeds to decompose $(V\otimes W)A(V\otimes W)$ and $(V^{\perp}\otimes W^{\perp})A(V^{\perp}\otimes W^{\perp})$, since they are also PPT, whenever such $X$ is found. 
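The maps $G_A$ and $F_A$ are straightforward to realize numerically. The sketch below (our own illustration; the dimensions, the random ensemble and all names are ours) builds them from a decomposition $A=\sum_i A_i\otimes B_i$ with Hermitian factors and checks the adjointness relation $tr(G_A(X)Y^*)=tr(XF_A(Y)^*)$, which the paper establishes in the preliminaries for Hermitian $A$:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n):
    """A random Hermitian matrix (illustrative ensemble)."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

# A = sum_i A_i (x) B_i with Hermitian factors, so A itself is Hermitian
k, m = 3, 2
As = [rand_herm(k) for _ in range(4)]
Bs = [rand_herm(m) for _ in range(4)]

def G_A(X):
    """G_A(X) = sum_i tr(A_i X) B_i."""
    return sum(np.trace(Ai @ X) * Bi for Ai, Bi in zip(As, Bs))

def F_A(Y):
    """F_A(Y) = sum_i tr(B_i Y) A_i."""
    return sum(np.trace(Bi @ Y) * Ai for Ai, Bi in zip(As, Bs))

# G_A and F_A are adjoint with respect to the trace inner product <P,Q> = tr(P Q*)
X, Y = rand_herm(k), rand_herm(m)
lhs = np.trace(G_A(X) @ Y.conj().T)
rhs = np.trace(X @ F_A(Y).conj().T)
assert np.isclose(lhs, rhs)
```

Composing the two functions gives the map $F_A\circ G_A$ whose positive semidefinite eigenvectors drive the decomposition algorithm described above.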
Eventually this process stops with \begin{center} $A=\sum_{i=1}^n(V_i\otimes W_i)A(V_i\otimes W_i)$, \end{center} where $(V_i\otimes W_i)A(V_i\otimes W_i)$ cannot be further decomposed for $1\leq i\leq n$. These states are named weakly irreducible. Finally, $A$ is separable if and only if each $(V_i\otimes W_i)A(V_i\otimes W_i)$ is separable; therefore, this algorithm reduces the separability problem to the weakly irreducible PPT case. \vspace{0,2cm} Positive under partial transpose states are not the only type of states on which this procedure works, because the key feature of this procedure, namely that $A\in \mathcal{M}_k\otimes \mathcal{M}_m$ breaks whenever a certain positive semidefinite Hermitian eigenvector is found, also holds for two other types of quantum states. From now on we say that $A\in \mathcal{M}_k\otimes \mathcal{M}_m$ has the \textbf{complete reducibility property} if for every positive semidefinite Hermitian eigenvector $X$ of $F_A\circ G_A:\mathcal{M}_k\rightarrow \mathcal{M}_k$, we have $$A=(V\otimes W)A(V\otimes W)+(V^{\perp}\otimes W^{\perp})A(V^{\perp}\otimes W^{\perp}),$$ \noindent where $V,W, V^{\perp},W^{\perp}$ are orthogonal projections onto $\operatorname{Im}(X)$, $\operatorname{Im}(G_A(X))$, $\ker(X)$ and $\ker(G_A(X))$, respectively. In \cite{carielloIEEE}, a search for other types of quantum states satisfying this property was conducted, finding only three types of quantum states: positive under partial transpose states (PPT states), symmetric with positive coefficients states (SPC states) and invariant under realignment states. So far this property has only been verified for this triad of quantum states. \vspace{0,2cm} Now, the number of times that $A$ breaks into weakly irreducible pieces is maximized when the non-null eigenvalues of $F_A\circ G_A:\mathcal{M}_k\rightarrow \mathcal{M}_k$ are equal, because in this case we are able to produce many positive semidefinite Hermitian eigenvectors. 
In this situation, we have the following theorem: \vspace{0,1cm} \begin{quote} If the non-null eigenvalues of $F_A\circ G_A$ are equal then $A$ is separable, when $A$ is PPT or SPC or invariant under realignment \cite[Proposition 15]{carielloIEEE}. Notice that the eigenvalues of $F_A\circ G_A$ are the squares of the Schmidt coefficients of $A$. \end{quote} \vspace{0,1cm} In \cite{carielloSPC, carielloIEEE}, it was also noticed that every SPC state and every invariant under realignment state is PPT in $\mathcal{M}_2\otimes \mathcal{M}_2$. Before that, in \cite{guhnetothsym}, it was proved that a state supported on the symmetric subspace of $\mathbb{C}^k\otimes\mathbb{C}^k$ is PPT if and only if it is SPC. This is plenty of evidence of how linked this triad of quantum states is. \vspace{0,2cm} In this work we prove new results for this triad of quantum states and a new consequence for their complete reducibility property. One of these results concerns the separability of these states. The aforementioned connections along with our new results lead us to notice a certain triality pattern: \vspace{0,1cm} \begin{quote} For each proven theorem for one of these three types of states, there are corresponding counterparts for the other two. \end{quote} \vspace{0,1cm} We believe that a solution to the separability problem for SPC states or invariant under realignment states would shed light on bound entanglement. This is our reason for studying these types of states. \vspace{0,2cm} Next, we would like to point out the origin of these connections, which is also the source of the main tools used in this work. For this, we must consider the group of linear contractions and some of its properties. 
For each permutation $\sigma\in S_4$, the linear transformation $L_{\sigma}:\mathcal{M}_k\otimes \mathcal{M}_k\rightarrow \mathcal{M}_k \otimes \mathcal{M}_k$ satisfying $$L_{\sigma}(v_1v_2^t\otimes v_3v_4^t)=v_{\sigma(1)}v_{\sigma(2)}^t\otimes v_{\sigma(3)}v_{\sigma(4)}^t,$$ where $v_i\in \mathbb{C}^{k}$, is called a linear contraction. \vspace{0,2cm} The term contraction comes from the fact that $\|L_{\sigma}(\gamma)\|_1\leq \|\gamma\|_1,$ for every $\sigma\in S_4$, whenever $\gamma\in \mathcal{M}_k\otimes \mathcal{M}_m$ is a separable state and $\|\cdot\|_1$ is the trace norm of a matrix (i.e., the sum of its singular values). Hence, if $\gamma\in \mathcal{M}_k\otimes \mathcal{M}_k$ is a state and $\|L_{\sigma}(\gamma)\|_1> \|\gamma\|_1$ for some $\sigma\in S_4$ then $\gamma\in \mathcal{M}_k\otimes \mathcal{M}_m$ is entangled. This observation provides two useful criteria for entanglement detection. In cycle notation, they are: \vspace{0,2cm} \begin{itemize} \item the PPT criterion \cite{peres, horodeckifamily}, when $\sigma=(34)$, and \item the CCNR criterion \cite{rudolph,rudolph2}, when $\sigma=(23)$ . \end{itemize} \vspace{0,2cm} Despite the name - contraction - these linear maps are in fact isometries, i.e., they preserve the Frobenius norm of $\gamma \in \mathcal{M}_k \otimes \mathcal{M}_k$ $($denoted here by $\|\gamma\|_2)$. This norm is an invariant exploited several times in this work. \vspace{0,2cm} In addition, the set of linear contractions is a group under composition generated by the partial transposes ($L_{(34)}(\gamma)=\gamma^{\Gamma}$ and $L_{(12)}$) and the realignment map $(L_{(23)}(\gamma)=\mathcal{R}(\gamma))$. The relations among the elements of this group are extremely useful as we shall see in the proofs of our novel results. 
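To make the two criteria concrete, here is a small numerical sketch of our own (the Kronecker index conventions are ours, chosen so that the maps below realize $L_{(23)}$ and $L_{(34)}$); it checks both criteria on a Bell state and on a product state:

```python
import numpy as np

def realign(g, k):
    """L_{(23)}: R(g)[(i,j),(a,b)] = g[(i,a),(j,b)] for g in M_k (x) M_k."""
    return g.reshape(k, k, k, k).transpose(0, 2, 1, 3).reshape(k * k, k * k)

def partial_transpose(g, k):
    """L_{(34)}: transpose on the second tensor factor."""
    return g.reshape(k, k, k, k).transpose(0, 3, 2, 1).reshape(k * k, k * k)

trace_norm = lambda M: np.linalg.norm(M, 'nuc')  # sum of singular values

v = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
bell = np.outer(v, v)

# PPT criterion: the Bell state has a negative partial transpose
assert np.linalg.eigvalsh(partial_transpose(bell, 2)).min() < 0
# CCNR criterion: ||R(gamma)||_1 > 1 certifies entanglement
assert trace_norm(realign(bell, 2)) > 1

# a product state passes both tests
prod = np.diag([1.0, 0.0, 0.0, 0.0])             # |00><00|
assert np.linalg.eigvalsh(partial_transpose(prod, 2)).min() >= -1e-12
assert trace_norm(realign(prod, 2)) <= 1 + 1e-12
```

Both tests flag the Bell state as entangled while the product state passes them, consistent with the contraction property $\|L_{\sigma}(\gamma)\|_1\leq \|\gamma\|_1$ for separable $\gamma$.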
\vspace{0,2cm} Finally, these maps allow connecting this triad of quantum states from their origins, that is, their definitions: \vspace{0,1cm} \begin{itemize} \item PPT states are the states that remain positive under partial transpose $(\gamma\geq 0, \gamma^{\Gamma}\geq 0)$ \cite{peres,horodeckifamily}, \vspace{0,1cm} \item SPC states are the states that remain positive under partial transpose composed with realignment $(\gamma\geq 0,\ \mathcal{R}(\gamma^{\Gamma})\geq 0)$ (\cite[corollary 25]{carielloIEEE} and \cite[definition 17]{carielloIEEE}), \vspace{0,1cm} \item invariant under realignment states are the states that remain the same under realignment $(\gamma\geq 0,\ \mathcal{R}(\gamma)=\gamma)$. \end{itemize} \vspace{0,2cm} These observations on the group of linear contractions are used throughout this paper; they are the keys to obtaining our novel results. Now, let us describe these results. \vspace{0,2cm} Our first result is an upper bound on the spectral radius for this triad of states. We show that if $\gamma=\sum_{i=1}^m C_i\otimes D_i\in \mathcal{M}_{k}\otimes \mathcal{M}_k$ is PPT or SPC or invariant under realignment then $$\|\gamma\|_{\infty}\leq \min\{\|\gamma_A\|_{\infty}, \|\gamma_B\|_{\infty}, \|\mathcal{R}(\gamma)\|_{\infty} \},$$ where $\|.\|_{\infty}$ is the operator norm, $\gamma_A=\sum_{i=1}^m C_i tr(D_i)$ and $\gamma_B=\sum_{i=1}^m D_i tr(C_i)$ are the reduced states. Let us say that the ranks of $\gamma_A$ and $\gamma_B$ are the reduced ranks of $\gamma$. \vspace{0,2cm} Our second result regards the filter normal form of SPC states and invariant under realignment states. This normal form has been used in entanglement theory, for example, to provide a different solution for the separability problem in $\mathcal{M}_2\otimes \mathcal{M}_2$ \cite{leinaas} or to prove the equivalence of some criteria for entanglement detection \cite{Git}. 
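The spectral-radius bound stated above can be checked numerically. The sketch below (our own illustration, with an arbitrary seed) generates a random separable, hence PPT, state, computes the reduced states by partial traces, the realignment, and the three operator norms:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 3

def rand_pure(n):
    """A random rank-one density matrix."""
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return np.outer(v, v.conj()) / np.vdot(v, v)

# a random separable (hence PPT) state: a convex mixture of product states
p = rng.dirichlet(np.ones(5))
gamma = sum(pi * np.kron(rand_pure(k), rand_pure(k)) for pi in p)

t = gamma.reshape(k, k, k, k)
gamma_A = np.einsum('iaja->ij', t)                 # partial trace over the second factor
gamma_B = np.einsum('iaib->ab', t)                 # partial trace over the first factor
R = t.transpose(0, 2, 1, 3).reshape(k * k, k * k)  # realignment R(gamma)

op = lambda M: np.linalg.norm(M, 2)                # operator (spectral) norm
assert op(gamma) <= min(op(gamma_A), op(gamma_B), op(R)) + 1e-9
```

For a pure product state all four norms equal one, so the bound is tight.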
Here we show that states which are SPC or invariant under realignment can be put in the filter normal form and their normal forms can still be chosen to be SPC and invariant under realignment, respectively. In other words, if $A,B\in\mathcal{M}_{k}\otimes \mathcal{M}_k$ are states such that \vspace{0,2cm} \begin{enumerate} \item $A$ is SPC then there is an invertible matrix $R\in \mathcal{M}_k$ such that $(R\otimes R)A(R\otimes R)^*= \sum_{i=1}^n\lambda_i\delta_i\otimes\delta_i,$ where $\lambda_1=\frac{1}{k}$, $\delta_1=\frac{Id}{\sqrt{k}}$, $\lambda_i>0$ and $\delta_i$ is Hermitian for every $i$, and $tr(\delta_i\delta_j)=0$ for $i\neq j$. \vspace{0,2cm} \item $B$ is invariant under realignment then there is an invertible matrix $R\in \mathcal{M}_k$ such that $(R\otimes \overline{R})B(R\otimes \overline{R})^*= \sum_{i=1}^n\lambda_i\delta_i\otimes\overline{\delta_i},$ where $\lambda_1=\frac{1}{k}$, $\delta_1=\frac{Id}{\sqrt{k}}$, $\lambda_i>0$ and $\delta_i$ is Hermitian for every $i$, and $tr(\delta_i\delta_j)=0$ for $i\neq j$. \end{enumerate} \vspace{0,2cm} The PPT counterpart of this result is an algorithm that determines whether a PPT state can be put in the filter normal form or not, already found in \cite{CarielloLMP}. This algorithm is based on the complete reducibility property. \vspace{0,2cm} Our final result is a lower bound for the ranks of these three types of states. We show that the rank of PPT states, SPC states and invariant under realignment states cannot be inferior to their reduced ranks (when they are equal) and whenever this minimum is attained the states are separable. \vspace{0,2cm} In \cite{PawelMxN}, it was proved that a state $\gamma$ such that $\operatorname{rank}(\gamma)\leq \max\{\operatorname{rank}(\gamma_A),\operatorname{rank}(\gamma_B)\}$ is separable by just being positive under partial transpose. In \cite{smolin}, it was shown that the rank of a separable state is greater than or equal to its reduced ranks. 
So if $\gamma$ is PPT and $\operatorname{rank}(\gamma)\leq \max\{\operatorname{rank}(\gamma_A),\operatorname{rank}(\gamma_B)\}$ then it is separable and $\operatorname{rank}(\gamma)= \max\{\operatorname{rank}(\gamma_A),\operatorname{rank}(\gamma_B)\}$. \vspace{0,2cm} Hence our last result is known for PPT states, but it is original for SPC states and invariant under realignment states. The approach presented here to obtain these facts is completely original, as we show that this is another consequence of the complete reducibility property. \vspace{0,2cm} The results described above show how fundamental this property is to entanglement theory; it acts as a unifying approach for many results. In fact, even outside entanglement theory, we can find consequences of that property, for example, a new proof of Weiner's theorem \cite{weiner} on mutually unbiased bases found in \cite{carielloIEEE}. \vspace{0,2cm} The triality pattern described above, together with the complete reducibility property, forms a potential source of information for entanglement theory. \vspace{0,2cm} This paper is organized as follows. In section 2, we present some preliminary results, which are mainly facts about the group of linear contractions. In section 3, we obtain an upper bound for the spectral radius of our special triad of quantum states. In section 4, we show that SPC states and invariant under realignment states can be put in the filter normal form and that their normal forms retain their shape. In section 5, we prove that the ranks of our triad of quantum states cannot be inferior to their reduced ranks and that, whenever this minimum is attained, the states are separable. \section{Preliminary results} In this section we present some preliminary results. We begin by noticing that $G_A:\mathcal{M}_k\rightarrow \mathcal{M}_m$ and $F_A:\mathcal{M}_m\rightarrow \mathcal{M}_k$ defined in the introduction are adjoint maps with respect to the trace inner product, when $A$ is Hermitian. 
The reason behind this is quite simple: If $A$ is Hermitian then $F_A(X)^*=F_{A^*}(X^*)=F_A(X^*)$, for every $X\in \mathcal{M}_m$, hence $$tr(G_A(X)Y^*)=tr(A(X\otimes Y^*))=tr(XF_A(Y^*))=tr(XF_A(Y)^*).$$ Notice that for positive semidefinite Hermitian matrices $X\in\mathcal{M}_k,Y\in\mathcal{M}_m$ and $A\in \mathcal{M}_k\otimes\mathcal{M}_m$, we have $tr(A(X\otimes Y^*))\geq 0$. Thus, the equality above also shows that $G_A$ and $F_A$ are positive maps (Definition \ref{definitionpositivemaps}) when $A$ is positive semidefinite.\vspace{0,3cm} These maps are connected to the following generalization of the Hadamard product extensively used in \cite{cariello} and required here a few times. \begin{definition}[Generalization of the Hadamard product]\label{definitionproduct} Let $\gamma=\sum_{i=1}^nA_i\otimes B_i\in \mathcal{M}_m\otimes M_k$, $\delta=\sum_{j=1}^lC_j\otimes D_j\in \mathcal{M}_k\otimes M_s$. Define the product $\gamma*\delta\in \mathcal{M}_m\otimes \mathcal{M}_s$ as $$\gamma*\delta=\sum_{i=1}^n\sum_{j=1}^lA_i\otimes D_j tr(B_iC_j^t).$$ \end{definition} \vspace{0,1cm} Let us recall some facts regarding this product. \vspace{0,1cm} \begin{remark}\label{remarkproduct} Let $u=\sum_{i=1}^ke_i\otimes e_i\in\mathbb{C}^k\otimes \mathbb{C}^k$, where $e_1,\ldots,e_k$ is the canonical basis of $\mathbb{C}^k$. \begin{itemize} \item[$a)$] Notice that $u^t(B_i\otimes C_j) u=tr((B_i\otimes C_j) uu^ t)=tr(B_iC_j^t)$, where $B_i,C_j\in\mathcal{M}_k$. Therefore, $$\gamma*\delta=(Id_{m\times m}\otimes u^t\otimes Id_{s\times s})(\gamma\otimes \delta)(Id_{m\times m}\otimes u\otimes Id_{s\times s}),$$ which implies that $\gamma*\delta$ is positive semidefinite, whenever $\gamma,\delta$ are positive semidefinite. 
In addition, $tr(\gamma*\delta)=tr(\gamma\otimes \delta\ (Id\otimes uu^t\otimes Id))=tr(\gamma_B\otimes \delta_A\ uu^t)=tr(\gamma_B\delta_A^t).$ \vspace{0,3cm} \item[$b)$] By \cite[Proposition 8]{cariello}, $\gamma*\delta=(F_{\gamma}((\cdot)^t)\otimes Id)(\delta)=(Id\otimes G_{\delta}((\cdot)^t))(\gamma)$.\vspace{0,3cm} \item[$c)$] Let $F=(uu^t)^{\Gamma}\in \mathcal{M}_k\otimes \mathcal{M}_k$ and notice that if $\gamma=\sum_{i=1}^nA_i\otimes B_i\in \mathcal{M}_k\otimes \mathcal{M}_k$ then $F\gamma F=\sum_{i=1}^n B_i\otimes A_i.$ This real symmetric matrix $F$ is an isometry and it is usually called the flip operator.\vspace{0,3cm} \item[] Now, notice that $\gamma*(F\gamma^t F)=\sum_j\sum_{i}A_i\otimes A_j^t tr(B_iB_j)$. Hence \begin{center} $tr(\gamma*(F\gamma^t F))=tr((\sum_{i}tr(A_i) B_i)(\sum_{j}tr(A_j)B_j))=tr(\gamma_B^2)$ and \vspace{0,2cm} $G_{\gamma*(F\gamma^t F)}(X)=\sum_j\sum_{i}tr(A_iX)tr(B_iB_j)A_j^t=\sum_jtr((\sum_{i}tr(A_iX)B_i)B_j)A_j^t=F_{\gamma}(G_{\gamma}(X))^t.$ \end{center} \end{itemize} \end{remark} \vspace{0,5cm} Next, we discuss some facts about the group of linear contractions. Actually, we need to focus only on three of these maps. The linear contractions important to us are \vspace{0,2cm} \begin{center} $L_{(34)}(\gamma)=\gamma^{\Gamma}$, $L_{(24)}(\gamma)=\gamma F$ and $L_{(23)}(\gamma)=\mathcal{R}(\gamma).$ \end{center} \vspace{0,2cm} Below we discuss several properties of these linear contractions such as relations among these elements and how they behave with respect to the product defined in \ref{definitionproduct}. \vspace{0,3cm} \begin{lemma}\label{propertiesofrealignment} Let $\gamma,\delta \in \mathcal{M}_k\otimes \mathcal{M}_k$ and $v=\sum_{i}a_i\otimes b_i,\ w=\sum_{j} c_j\otimes d_j \in \mathbb{C}^k\otimes \mathbb{C}^k$. \begin{enumerate} \item $\mathcal{R}(vw^t)=V\otimes W$, where $V=\sum_{i}a_ib_i^t,\ W=\sum_{j} c_jd_j^t$. 
\item $\mathcal{R}(\mathcal{R}(\gamma))=\gamma$ \item $\mathcal{R}((V\otimes W)\gamma (M\otimes N))=(V\otimes M^t)\mathcal{R}(\gamma)(W^t\otimes N)$ \item $\mathcal{R}(\gamma F)F=\gamma^{\Gamma}$ \item $\mathcal{R}(\gamma^{\Gamma})=\mathcal{R}(\gamma)F$ \item $\mathcal{R}(\gamma F)=\mathcal{R}(\gamma)^{\Gamma}$ \item $\mathcal{R}(\gamma^{\Gamma})^{\Gamma}=\gamma F$ \item $\mathcal{R}(\gamma*\delta)=\mathcal{R}(\gamma)\mathcal{R}(\delta)$ $($i.e., $\mathcal{R}$ is a homomorphism$)$. \item $\mathcal{R}(F\overline{\gamma}F)=\mathcal{R}(\gamma)^*$ \end{enumerate} \end{lemma} \begin{proof} Items (1--6) were proved in items (2--7) of \cite[Lemma 23]{carielloIEEE}. For the other three items, we just need to prove them when $\gamma=ab^t\otimes cd^t$ and $\delta=ef^t\otimes gh^t$, where $a,b,c,d,e,f,g,h\in \mathbb{C}^k$.\\\\ \underline{Item (7):} $\mathcal{R}(ab^t\otimes cd^t)^{\Gamma})^{\Gamma}=\mathcal{R}(ab^t\otimes dc^t)^{\Gamma}=(ad^t\otimes bc^t)^{\Gamma}=ad^t\otimes cb^t$. Now, $(ab^t\otimes cd^t)F=ad^t\otimes cb^t$. So $\mathcal{R}(\gamma^{\Gamma})^{\Gamma}=\gamma F$.\\\\ \underline{Item (8):} $\mathcal{R}(ab^t\otimes cd^t* ef^t\otimes gh^t)=\mathcal{R}(ab^t\otimes gh^t)(d^tf)(c^te)=(ag^t\otimes bh^t)(d^tf)(c^te)$. Now, $\mathcal{R}(ab^t\otimes cd^t)\mathcal{R}(ef^t\otimes gh^t)=(ac^t\otimes bd^t)(eg^t\otimes fh^t)=(ag^t\otimes bh^t)(c^te)(d^tf)$. So $\mathcal{R}(\gamma*\delta)=\mathcal{R}(\gamma)\mathcal{R}(\delta)$\\\\ \underline{Item (9):} $\mathcal{R}(F\overline{a}\overline{b}^t\otimes \overline{c}\overline{d}^t F)=\mathcal{R}(\overline{c}\overline{d}^t\otimes \overline{a}\overline{b}^t)=\overline{c} \overline{a}^t\otimes \overline{d}\overline{b}^t$ Now, $\mathcal{R}(ab^t\otimes cd^t)^*=(ac^t\otimes bd^t)^*=\overline{c}\overline{a}^t\otimes \overline{d}\overline{b}^t.$ So $\mathcal{R}(F\overline{\gamma} F)=\mathcal{R}(\gamma)^*$. 
\end{proof} \vspace{0,3cm} The next lemma is important for our final result and establishes an interesting property of PPT states whose realignments are also PPT: they must be invariant under realignment. \vspace{0,3cm} \begin{lemma} \label{lemmarealigmentPPTareequal}Let $\gamma \in \mathcal{M}_k\otimes \mathcal{M}_k$ be a positive semidefinite Hermitian matrix. If $\gamma$ and $\mathcal{R}(\gamma)$ are PPT then $\gamma=\mathcal{R}(\gamma)$. \end{lemma} \begin{proof} By item (4) of lemma \ref{propertiesofrealignment}, $\gamma^{\Gamma} =\mathcal{R}(\gamma F)F$. \vspace{0,2cm} Now, since $F^2=Id$, $\gamma^{\Gamma} F =\mathcal{R}(\gamma F)$. \vspace{0,2cm} Next, by item $(6)$ of lemma \ref{propertiesofrealignment}, $\mathcal{R}(\gamma F)=\mathcal{R}(\gamma)^{\Gamma}$. So $\gamma^{\Gamma}F=\mathcal{R}(\gamma)^{\Gamma}$ is a positive semidefinite Hermitian matrix by hypothesis.\vspace{0,2cm} Since $F$, $\gamma^{\Gamma}$ and $\gamma^{\Gamma}F$ are Hermitian matrices, $\gamma^{\Gamma}F=F\gamma^{\Gamma}$. So there is an orthonormal basis of $\mathbb{C}^{k}\otimes\mathbb{C}^k$ formed by symmetric and anti-symmetric eigenvectors of $\gamma^{\Gamma}$. \vspace{0,2cm} Recall that $\gamma^{\Gamma}$ and $\gamma^{\Gamma}F$ are positive semidefinite. If $v$ is an anti-symmetric eigenvector of $\gamma^{\Gamma}$, say $\gamma^{\Gamma}v=\lambda v$, then $\gamma^{\Gamma}Fv=-\lambda v$; positivity of $\gamma^{\Gamma}$ and $\gamma^{\Gamma}F$ forces $\lambda\geq 0$ and $-\lambda\geq 0$, hence $\lambda=0$. Since $\gamma^{\Gamma}$ and $\gamma^{\Gamma}F$ coincide on the symmetric eigenvectors and vanish on the anti-symmetric ones, $\gamma^{\Gamma}=\gamma^{\Gamma}F$.\vspace{0,2cm} Finally, we have noticed that $\gamma^{\Gamma}F=\mathcal{R}(\gamma)^{\Gamma}$. So $\gamma^{\Gamma}=\mathcal{R}(\gamma)^{\Gamma}$, which implies $\gamma=\mathcal{R}(\gamma).$ \end{proof} \vspace{0,3cm} The next lemma is used in this work a few times. Although simple, we state it here in order to better organize our arguments. \vspace{0,3cm} \begin{lemma}\label{lemmaoperatornormrealignment} Let $\gamma\in \mathcal{M}_k\otimes \mathcal{M}_k$.
The largest singular value of the map $G_{\gamma}:\mathcal{M}_k\rightarrow \mathcal{M}_k$ equals $$\|\mathcal{R}(\gamma)\|_{\infty}=\max\{|tr(\gamma (X\otimes Y))|,\ \|X\|_2=\|Y\|_2=1 \}.$$ In addition, if $\gamma$ is Hermitian then there are Hermitian matrices $\gamma_1,\delta_1\in \mathcal{M}_k$ such that $\|\mathcal{R}(\gamma)\|_{\infty}=tr(\gamma (\gamma_1\otimes\delta_1))$, where $\|\gamma_1\|_2=\|\delta_1\|_2=1$. \end{lemma} \begin{proof}First of all, recall that $\|\mathcal{R}(\gamma)\|_{\infty}=\max\{|tr(\mathcal{R}(\gamma)vw^t)|,\ v,w\in \mathbb{C}^k\otimes \mathbb{C}^k\text{ are unit vectors}\}.$ \vspace{0.2cm} Now, since $\mathcal{R}$ is an isometry, $tr(\mathcal{R}(\gamma)vw^t)=tr(\mathcal{R}(\gamma)(\overline{w}\overline{v}^t)^*)=tr(\gamma \mathcal{R}(\overline{w}\overline{v}^t)^*).$ \vspace{0.2cm} By item (1) of lemma \ref{propertiesofrealignment}, $\mathcal{R}(\overline{w}\overline{v}^t)^*=W^t\otimes V^t$, where $W,V\in \mathcal{M}_k$ and $\|W\|_2= \|V\|_2=1$. \vspace{0.2cm} Therefore, $\|\mathcal{R}(\gamma)\|_{\infty}=\max\{|tr(\gamma (W^t\otimes V^t))|,\ W,V\in \mathcal{M}_k\text{ and }\|W\|_2= \|V\|_2=1\},$\vspace{0,2cm} \hspace{3.65cm} $=\max\{|tr(G_{\gamma}(W^t)V^t)|,\ W,V\in \mathcal{M}_k\text{ and }\|W\|_2= \|V\|_2=1\},$\vspace{0,2cm} \hspace{3.65cm} $=\max\{\|G_{\gamma}(W^t)\|_2,\ W\in \mathcal{M}_k\text{ and }\|W\|_2=1\},$\vspace{0,2cm} \hspace{3.65cm} $=$ the largest singular value of $G_{\gamma}:\mathcal{M}_k\rightarrow \mathcal{M}_k$. \vspace{0,2cm} This proves the first part of the lemma. Now for the second part, if $\gamma$ is Hermitian then the set of Hermitian matrices is left invariant by $G_{\gamma}:\mathcal{M}_k\rightarrow \mathcal{M}_k$.
Therefore, there is a Hermitian matrix $\gamma_1\in \mathcal{M}_k$ such that $\|\gamma_1\|_2=1$ and $\|G_{\gamma}(\gamma_1)\|_2=$ the largest singular value of $G_{\gamma}$.\vspace{0.2cm} Notice that $\|G_{\gamma}(\gamma_1)\|_2=tr(G_{\gamma}(\gamma_1)\delta_1)=tr(\gamma(\gamma_1\otimes\delta_1))$, where $\delta_1=G_{\gamma}(\gamma_1)/\|G_{\gamma}(\gamma_1)\|_2$. \vspace{0.2cm} So $\|\mathcal{R}(\gamma)\|_{\infty}=tr(\gamma(\gamma_1\otimes\delta_1))$, where $\|\gamma_1\|_2=\|\delta_1\|_2=1$ and $\gamma_1$, $\delta_1$ are Hermitian matrices. \end{proof} \vspace{0,2cm} Now, we have all the preliminary results required to discuss our new results. \vspace{0,2cm} \section{An upper bound for the spectral radius of the special triad} In this section we obtain an upper bound for the spectral radius of PPT states, SPC states and invariant under realignment states (theorem \ref{specialspectralradius}). In order to prove this theorem, two lemmas are required. \vspace{0,2cm} \begin{lemma}\label{generalspectralradius} Let $\gamma\in \mathcal{M}_k\otimes \mathcal{M}_k$ be any positive semidefinite Hermitian matrix. Then $$\|\gamma^{\Gamma}\|_{\infty}\leq \min\left\{\|\gamma_A\|_{\infty}, \|\gamma_B\|_{\infty}, \|\mathcal{R}(\gamma)\|_{\infty} \right\}.$$ \end{lemma} \begin{proof} Let $v\in \mathbb{C}^k\otimes\mathbb{C}^k$ be a unit vector such that $|tr(\gamma^{\Gamma}vv^*)|=\|\gamma^{\Gamma}\|_{\infty}$. Let $n$ be its Schmidt rank. Denote by $\{g_1,\ldots, g_k\}$ and $\{e_1,\ldots,e_n\}$ the canonical bases of $\mathbb{C}^k$ and $\mathbb{C}^n$, respectively.
\vspace{0,2cm} Next, there are matrices $D\in \mathcal{M}_{k\times k}$, $E\in \mathcal{M}_{k\times k}$, $R\in \mathcal{M}_{k\times n}$ and $S\in \mathcal{M}_{k\times n}$ such that \vspace{0,1cm} \begin{enumerate} \item $v=(D\otimes Id) w$, where $tr(DD^*)=1$, $w=\sum_{i=1}^k g_i\otimes g_i\in \mathbb{C}^k\otimes\mathbb{C}^k $, \vspace{0,1cm} \item $v=(Id\otimes E) w$, where $tr(\overline{E}E^t)=1$, $w=\sum_{i=1}^k g_i\otimes g_i\in \mathbb{C}^k\otimes\mathbb{C}^k$, \vspace{0,1cm} \item $v=(R\otimes S)u$, where $u=\sum_{i=1}^ne_i\otimes e_i\in \mathbb{C}^n\otimes \mathbb{C}^n$ and $tr((RR^*)^2)=tr((\overline{S}S^t)^2 )=1$. \end{enumerate} \vspace{0,3cm} Now, \begin{enumerate} \item $\|\gamma^{\Gamma}\|_{\infty}=|tr(\gamma^{\Gamma}vv^*)|=|tr((D^*\otimes Id)\gamma(D\otimes Id)(ww^*)^{\Gamma})|\leq tr((D^*\otimes Id)\gamma(D\otimes Id)), $\\ since $Id\otimes Id \pm (ww^*)^{\Gamma}$ and $\gamma$ are positive semidefinite. Hence $$\|\gamma^{\Gamma}\|_{\infty}\leq tr(\gamma(DD^*\otimes Id))=tr(\gamma_ADD^*)\leq \|\gamma_A\|_{\infty}tr(DD^*)=\|\gamma_A\|_{\infty}.$$ \vspace{0,2cm} \item $\|\gamma^{\Gamma}\|_{\infty}=|tr(\gamma^{\Gamma}vv^*)|=|tr((Id\otimes E^t)\gamma(Id\otimes \overline{E})(ww^*)^{\Gamma})|\leq tr((Id\otimes E^t)\gamma(Id\otimes \overline{E})), $ \\ since $Id\otimes Id \pm (ww^*)^{\Gamma}$ and $\gamma$ are positive semidefinite. Hence $$\|\gamma^{\Gamma}\|_{\infty}\leq tr(\gamma(Id\otimes \overline{E}E^t))=tr(\gamma_B\overline{E}E^t)\leq \|\gamma_B\|_{\infty}tr(\overline{E}E^t)=\|\gamma_B\|_{\infty}.$$ \vspace{0,2cm} \item $\|\gamma^{\Gamma}\|_{\infty}=|tr(\gamma^{\Gamma}vv^*)|=|tr((R^*\otimes S^t)\gamma(R\otimes \overline{S})(uu^*)^{\Gamma})|\leq tr((R^*\otimes S^t)\gamma(R\otimes \overline{S}))$, \\ since $Id\otimes Id \pm (uu^*)^{\Gamma}$ and $\gamma$ are positive semidefinite. 
Hence $$\|\gamma^{\Gamma}\|_{\infty}\leq tr(\gamma(RR^*\otimes \overline{S}S^t))\leq \|\mathcal{R}(\gamma)\|_{\infty},\text{ since }\|RR^*\|_2=\|\overline{S}S^t \|_2=1, \text{ by lemma } \ref{lemmaoperatornormrealignment}.$$ \end{enumerate} \end{proof} \vspace{0,5cm} \begin{lemma}\label{generalspectralradiusrealignment} Let $\gamma\in \mathcal{M}_k\otimes \mathcal{M}_k$ be a positive semidefinite Hermitian matrix. Then $$\|\mathcal{R}(\gamma)\|_{\infty}^2\leq \|\gamma_A\|_{\infty}\|\gamma_B\|_{\infty}.$$ \end{lemma} \begin{proof} By lemma \ref{lemmaoperatornormrealignment}, there are Hermitian matrices $\gamma_1\in \mathcal{M}_k$ and $\delta_1\in \mathcal{M}_k$ such that \\ \begin{enumerate} \item $tr(\gamma_1^2)=tr(\delta_1^2)=1$\\ \item $tr(\gamma(\gamma_1\otimes \delta_1))=\|\mathcal{R}(\gamma)\|_{\infty}.$\\ \end{enumerate} Consider the following positive semidefinite Hermitian matrix $$\begin{pmatrix} \gamma^{\frac{1}{2}} & 0\\ 0 & \gamma^{\frac{1}{2}} \end{pmatrix}\begin{pmatrix} Id\otimes \delta_1\\ \gamma_1\otimes Id \end{pmatrix} \begin{pmatrix} Id\otimes \delta_1 & \gamma_1\otimes Id \end{pmatrix}\begin{pmatrix} \gamma^{\frac{1}{2}} & 0\\ 0 & \gamma^{\frac{1}{2}} \end{pmatrix}$$ \vspace{0,5cm} $$= \begin{pmatrix} \gamma^{\frac{1}{2}}(Id\otimes \delta_1^2)\gamma^{\frac{1}{2}} & \gamma^{\frac{1}{2}}(\gamma_1\otimes\delta_1)\gamma^{\frac{1}{2}} \\ \gamma^{\frac{1}{2}}(\gamma_1\otimes \delta_1)\gamma^{\frac{1}{2}} & \gamma^{\frac{1}{2}}(\gamma_1^2\otimes Id)\gamma^{\frac{1}{2}} \end{pmatrix}.$$ \vspace{0,5cm} Its partial trace, \ \ \ $D=\begin{pmatrix} tr(\gamma (Id_k\otimes \delta_1^2) )& tr(\gamma(\gamma_1\otimes\delta_1)) \\ tr(\gamma(\gamma_1\otimes \delta_1)) & tr(\gamma(\gamma_1^2\otimes Id_k)) \end{pmatrix}_{2\times 2}$ is also positive semidefinite. 
\vspace{0,5cm} Thus \ \ \ $0\leq \det(D)=tr(\gamma (Id_k\otimes \delta_1^2) )tr(\gamma(\gamma_1^2\otimes Id_k))-tr(\gamma(\gamma_1\otimes \delta_1))^2.$\\ Notice that \vspace{0,2cm} \begin{itemize} \item $tr(\gamma (Id_k\otimes \delta_1^2) )=tr(\gamma_B\delta_1^2)\leq \|\gamma_B\|_{\infty}tr(\delta_1^2)= \|\gamma_B\|_{\infty}$, \item $tr(\gamma(\gamma_1^2\otimes Id_k))=tr(\gamma_A\gamma_1^2)\leq \|\gamma_A\|_{\infty}tr(\gamma_1^2)= \|\gamma_A\|_{\infty}$, \item $tr(\gamma(\gamma_1\otimes \delta_1))^2=\|\mathcal{R}(\gamma)\|_{\infty}^2$.\\ \end{itemize} Hence $\|\mathcal{R}(\gamma)\|_{\infty}^2\leq \|\gamma_A\|_{\infty}\|\gamma_B\|_{\infty}$. \end{proof} \vspace{0,3cm} These lemmas imply the first new connection for our special triad of quantum states. \vspace{0,3cm} \begin{theorem}\label{specialspectralradius} Let $\gamma\in \mathcal{M}_k\otimes \mathcal{M}_k$ be a positive semidefinite Hermitian matrix. If $\gamma$ is PPT or SPC or invariant under realignment then $$\|\gamma\|_{\infty}\leq \min\{\|\gamma_A\|_{\infty}, \|\gamma_B\|_{\infty}, \|\mathcal{R}(\gamma)\|_{\infty} \}.$$ \end{theorem} \begin{proof} First, let $\gamma$ be a PPT state. Hence $\gamma^{\Gamma}$ is also a state. \\ Notice that $(\gamma^{\Gamma})_A=\gamma_A$, $(\gamma^{\Gamma})_B=\gamma_B^t$ and, by lemma \ref{lemmaoperatornormrealignment}, $\|\mathcal{R}(\gamma^{\Gamma})\|_{\infty}=\|\mathcal{R}(\gamma)\|_{\infty}$.\\ By applying lemma \ref{generalspectralradius} to $\gamma^{\Gamma}$, we obtain $$\|\gamma\|_{\infty}\leq \min\{\|(\gamma^{\Gamma})_A\|_{\infty}, \|(\gamma^{\Gamma})_B\|_{\infty}, \|\mathcal{R}(\gamma^{\Gamma})\|_{\infty} \}=\min\{\|\gamma_A\|_{\infty}, \|\gamma_B\|_{\infty}, \|\mathcal{R}(\gamma)\|_{\infty} \}.$$ \vspace{0,3cm} So the proof of the PPT case is complete.\vspace{0,2cm} Next, if $\gamma$ is SPC or invariant under realignment then $\gamma_A=\gamma_B$ or $\gamma_A=\gamma_B^t$ by \cite[Corollary 25]{carielloIEEE}.
Hence, by lemma \ref{generalspectralradiusrealignment}, $\|\mathcal{R}(\gamma)\|_{\infty}\leq \|\gamma_A\|_{\infty}.$\vspace{0,2cm} It remains to prove that $\|\gamma\|_{\infty}\leq \|\mathcal{R}(\gamma)\|_{\infty}$, whenever $\gamma$ is SPC or invariant under realignment. Notice that this inequality is trivial for matrices invariant under realignment. Thus, let $\gamma$ be a SPC state.\vspace{0,2cm} Since $\gamma$ is SPC, $\mathcal{R}(\gamma^{\Gamma})$ is positive semidefinite, as defined in the introduction. Applying lemma \ref{generalspectralradius} to $\mathcal{R}(\gamma^{\Gamma})$, we obtain $$\|\mathcal{R}(\gamma^{\Gamma})^{\Gamma}\|_{\infty}\leq \|\mathcal{R}(\mathcal{R}(\gamma^{\Gamma}))\|_{\infty}.$$ \vspace{0,2cm} Now, by items (7) and (2) of lemma \ref{propertiesofrealignment}, $\mathcal{R}(\gamma^{\Gamma})^{\Gamma}=\gamma F$ and $\mathcal{R}(\mathcal{R}(\gamma^{\Gamma}))=\gamma^{\Gamma}$, where $F$ is the flip operator. Therefore $$\|\gamma F\|_{\infty}\leq \|\gamma^{\Gamma}\|_{\infty}.$$ \vspace{0,2cm} Finally, $\|\gamma^{\Gamma}\|_{\infty}\leq \|\mathcal{R}(\gamma)\|_{\infty}$ by lemma \ref{generalspectralradius}, and $\|\gamma \|_{\infty}=\|\gamma F\|_{\infty}$, since $F$ is an isometry. \end{proof} \vspace{0,5cm} \section{Filter normal form for SPC states and invariant under realignment states} \vspace{0,5cm} In this section we show that every SPC state and every invariant under realignment state can be put in the filter normal form. In addition, their filter normal forms can still be chosen to be SPC and invariant under realignment, respectively (corollary \ref{corollaryfilternormalformSPCandINV}). \vspace{0,2cm} As described in the introduction, there are applications of this normal form in entanglement theory. Moreover, it has been noticed that this normal form is connected to an extension of the Sinkhorn-Knopp theorem for positive maps \cite{CarielloLAMA, gurvits2004}.
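\vspace{0,2cm} To fix ideas, here is a simple instance of such a scaling (included only as an illustration; it is not needed in the sequel). Let $T(X)=PXP^*$, where $P\in\mathcal{M}_k$ is positive definite. Taking $R=S=P^{-\frac{1}{2}}$ and using that all powers of $P$ commute, $$R^*T(SXS^*)R=P^{-\frac{1}{2}}\left(P\, P^{-\frac{1}{2}}XP^{-\frac{1}{2}}\, P\right)P^{-\frac{1}{2}}=X,$$ and the identity map is obviously doubly stochastic. \vspace{0,2cm}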
This theorem concerns the existence of invertible matrices $R,S$ such that $R^*T(SXS^*)R$ is doubly stochastic for a positive map $T$ satisfying suitable conditions. So we start this section with some definitions and lemmas related to this theorem. \vspace{0,2cm} Let $V\in \mathcal{M}_k$ be an orthogonal projection and consider the sub-algebra $V\mathcal{M}_kV=\{VXV,\ X\in \mathcal{M}_k\}$ of $\mathcal{M}_k$. Let $P_k$ denote the set of positive semidefinite Hermitian matrices of $\mathcal{M}_k$. \vspace{0,2cm} \begin{definition}\label{definitionpositivemaps}Let us say that $T:V\mathcal{M}_kV\rightarrow V\mathcal{M}_kV$ is a positive map if $T(X)\in P_k\cap V\mathcal{M}_kV$ for every $X\in P_k\cap V\mathcal{M}_kV$. In addition, we say that a positive map $T:V\mathcal{M}_kV\rightarrow V\mathcal{M}_kV$ is doubly stochastic if the following equivalent conditions hold: \begin{enumerate} \item the matrix $A_{m\times m}$, defined as $A_{ij}=tr(T(v_iv_i^*)w_jw_j^*)$, is doubly stochastic for every choice of orthonormal bases $v_1,\ldots,v_m$ and $w_1,\ldots,w_m$ of $\operatorname{Im}(V)$, \item $T(V)=T^*(V)=V$, where $T^*:V\mathcal{M}_kV\rightarrow V\mathcal{M}_kV$ is the adjoint of $T:V\mathcal{M}_kV\rightarrow V\mathcal{M}_kV$ with respect to the trace inner product.
\end{enumerate} \end{definition} \vspace{0,3cm} \begin{definition}\label{deffullyindecomposabel}A positive map $T:V\mathcal{M}_kV\rightarrow V\mathcal{M}_kV$ is said to be fully indecomposable if the following equivalent conditions hold: \begin{enumerate} \item the matrix $A_{m\times m}$, defined as $A_{ij}=tr(T(v_iv_i^*)w_jw_j^*)$, is fully indecomposable \cite{marcus} for every choice of orthonormal bases $v_1,\ldots,v_m$ and $w_1,\ldots,w_m$ of $\operatorname{Im}(V)$, \item $\operatorname{rank}(X)+\operatorname{rank}(Y)<\operatorname{rank}(V)$, whenever $X,Y\in (V\mathcal{M}_kV\cap P_k)\setminus\{0\}$ and $tr(T(X)Y)=0$, \item $\operatorname{rank}(T(X))>\operatorname{rank}(X)$, $\forall X\in V\mathcal{M}_kV\cap P_k$ such that $0<\operatorname{rank}(X)<\operatorname{rank}(V).$\\ \end{enumerate} \end{definition} Below we prove two lemmas concerning self-adjoint maps with respect to the trace inner product. \vspace{0,1cm} \begin{lemma}\label{lemmaselfadjoint} Let $T:V\mathcal{M}_kV\rightarrow V\mathcal{M}_kV$ be a fully indecomposable self-adjoint map. There is $R\in V\mathcal{M}_kV$ such that $R^*T(R(\cdot)R^*)R: V\mathcal{M}_kV\rightarrow V\mathcal{M}_kV$ is doubly stochastic. \end{lemma} \begin{proof} Since $T:V\mathcal{M}_kV\rightarrow V\mathcal{M}_kV$ is fully indecomposable, it has total support \cite[Lemma 2.3]{CarielloLAMA} or it has a positive achievable capacity \cite{gurvits2004}. So there are matrices $A,B\in V\mathcal{M}_kV$ such that $\operatorname{rank}(A)=\operatorname{rank}(B)=\operatorname{rank}(V)$ and $T_1(X)=B^*T(AXA^*)B$ is doubly stochastic \cite[Theorem 3.7]{CarielloLAMA}. Notice that $T_1$ is still fully indecomposable. \vspace{0,1cm} Now, $T_2(X)=T_1^*(X)=A^*T(BXB^*)A$ is also doubly stochastic and \begin{equation}\label{eqrelacaoT2T1} T_2(X)=C^*T_1(DXD^*)C, \end{equation} where $C=B^+A$, $D=A^+B$ and $Y^+$ is the pseudo-inverse of $Y$.
\vspace{0,3cm} Let $C=EFG^*$ and $D=HLJ^*$ be the SVD decompositions of $C$ and $D$, where \begin{enumerate} \item $E=(e_1,\ldots,e_m), G=(g_1,\ldots,g_m), H=(h_1,\ldots,h_m), J=(j_1,\ldots,j_m)\in\mathcal{M}_{k\times m}$ and the columns of each of these matrices form an orthonormal basis of $\operatorname{Im}(V)$. \vspace{0,2cm} \item $F=diagonal(f_1,\ldots,f_m)$, $L=diagonal(l_1,\ldots,l_m)$ and $f_i>0$, $l_i>0$ for every $i$.\vspace{0,3cm} \end{enumerate} Next, define $R,S\in\mathcal{M}_{m\times m}$ as \begin{center} $R_{ik}=tr(T_2(j_ij_i^*)g_kg_k^*)$\ \ and\ \ $S_{ik}=tr(T_1(h_ih_i^*)e_ke_k^*)$. \end{center} By equation \ref{eqrelacaoT2T1}, $R_{ik}=l_i^2f_k^2\ S_{ik}$, i.e., $R=L^2SF^2$.\vspace{0,2cm} Thus, $L^2$, $F^2$ are positive diagonal matrices such that $L^2SF^2$ is doubly stochastic by definition \ref{definitionpositivemaps}. Recall that $S$ is a fully indecomposable matrix by definition \ref{deffullyindecomposabel}. \vspace{0,2cm} Since $S$ is fully indecomposable, by a theorem proved in \cite{Sinkhorn}, the positive diagonal matrices $L^2$ and $F^2$ such that $L^2SF^2$ is doubly stochastic are unique up to multiplication by positive numbers. Moreover, $S=Id\, S\, Id$ is itself doubly stochastic, since $T_1$ is. Thus, $L=a^{-2} Id$ and $F=a^{2} Id$ for some $a>0$. \vspace{0,2cm} Therefore, $B^{+}A=C=a^2U$, where $U=EG^*$. Notice that $UVU^*=V$.\vspace{0,2cm} In addition, $BB^{+}A=a^2BU$. Since $BB^{+}=V$ and $VA=A$, we obtain $A=a^2BU$.\vspace{0,2cm} Thus, \begin{center} $B^*T(A(\cdot)A^*)B=B^*T((a^2B)U(\cdot)U^*(a^2B)^*)B=(aB)^*T((aB)U(\cdot)U^*(aB)^*)(aB)$. \end{center} Finally, $(aB)^*T((aB)(\cdot)(aB)^*)(aB)$ is doubly stochastic too, since $V=UVU^*$. \end{proof} \vspace{0,1cm} \begin{lemma} \label{keylemmanormalform} Let $T:V\mathcal{M}_kV\rightarrow V\mathcal{M}_kV$ be a self-adjoint positive map such that $v\notin \ker(T(vv^*))$ for every $v\in\operatorname{Im}(V)\setminus\{\vec{0}\}$.
Then there is $R\in V\mathcal{M}_kV$ such that $R^*T(R(\cdot)R^*)R: V\mathcal{M}_kV\rightarrow V\mathcal{M}_kV$ is doubly stochastic. \end{lemma} \begin{proof} The proof is by induction on $\operatorname{rank}(V)$. \vspace{0,2cm} If $\operatorname{rank}(V)=1$ then $V\mathcal{M}_kV=\{\lambda vv^*,\ \lambda\in \mathbb{C}\}$, where $v$ is a unit vector spanning $\operatorname{Im}(V)$. Thus, $T(vv^*)=\mu vv^*$, where $\mu> 0$ by hypothesis.\vspace{0,2cm} Define $R=\frac{1}{\sqrt[4]{\mu}}vv^*$. So $R^*T(Rvv^*R^*)R=vv^*$. Thus, $R^*T(R(\cdot)R^*)R: V\mathcal{M}_kV\rightarrow V\mathcal{M}_kV$ is a self-adjoint doubly stochastic map.\vspace{0,2cm} Let $\operatorname{rank}(V)=n>1$ and assume the validity of the lemma whenever the rank of the orthogonal projection is less than $n$.\vspace{0,2cm} Consider all pairs of orthogonal projections $(V_1,W_1)$ such that \begin{center} $V_1,W_1\in V\mathcal{M}_kV\setminus\{0\}$ and $0=tr(T(V_1)W_1)$. \end{center} Since there is no $v\in\operatorname{Im}(V)\setminus\{\vec{0}\}$ such that $tr(T(vv^*)vv^*)=0$ (indeed, $T(vv^*)$ is positive semidefinite and $v\notin \ker(T(vv^*))$ by hypothesis), $\operatorname{Im}(V_1)\cap \operatorname{Im}(W_1)=\{\vec{0}\}$. So $$\operatorname{rank}(V_1)+\operatorname{rank}(W_1)\leq\operatorname{rank}(V).$$ If for every aforementioned pair $(V_1,W_1)$, we have $\operatorname{rank}(V_1)+\operatorname{rank}(W_1)<\operatorname{rank}(V)$, then $T$ is fully indecomposable by definition \ref{deffullyindecomposabel}. So the result follows by lemma \ref{lemmaselfadjoint}.\vspace{0,2cm} Let us assume that there is such a pair $(V_1,W_1)$ satisfying $\operatorname{rank}(V_1)+\operatorname{rank}(W_1)=\operatorname{rank}(V)$.\vspace{0,2cm} Since $\operatorname{Im}(V_1)\cap \operatorname{Im}(W_1)=\{\vec{0}\}$, there is $S\in V\mathcal{M}_kV$ such that $SV_1S^*=V_1$ and $S(V-V_1)S^*=W_1$. Define $T'(X)=S^*T(SXS^*)S$.
Note that $tr(T'(V_1)(V-V_1))=0$.\vspace{0,2cm} Next, since $T$ is self-adjoint so is $T'$, hence $tr(T'(V-V_1)V_1)=0$.\vspace{0,2cm} These last two equalities imply that \begin{equation}\label{equationsubinv} T'(V_1\mathcal{M}_kV_1)\subset V_1\mathcal{M}_kV_1\ \ \text{and}\ \ T'((V-V_1)\mathcal{M}_k(V-V_1))\subset (V-V_1)\mathcal{M}_k(V-V_1). \end{equation} Of course the restrictions $T'|_{V_1\mathcal{M}_kV_1}$ and $T'|_{(V-V_1)\mathcal{M}_k(V-V_1)}$ are self-adjoint and there is no $v\in\operatorname{Im}(V_1)\setminus\{\vec{0}\}$ or $v\in\operatorname{Im}(V-V_1)\setminus\{\vec{0}\}$ such that $tr(T'(vv^*)vv^*)=0$.\vspace{0,2cm} By the induction hypothesis, there are $R_1\in V_1\mathcal{M}_kV_1$ and $R_2\in (V-V_1)\mathcal{M}_k(V-V_1)$ such that \vspace{0,2cm} \begin{itemize} \item $R_1^*T'(R_1(\cdot)R_1^*)R_1: V_1\mathcal{M}_kV_1\rightarrow V_1\mathcal{M}_kV_1$ is doubly stochastic, i.e., \begin{equation}\label{equationdoublysub1} R_1^*T'(R_1(V_1)R_1^*)R_1=V_1 \end{equation} \item $R_2^*T'(R_2(\cdot)R_2^*)R_2: (V-V_1)\mathcal{M}_k(V-V_1)\rightarrow (V-V_1)\mathcal{M}_k(V-V_1)$ is doubly stochastic, i.e., \begin{equation}\label{equationdoublysub2} R_2^*T'(R_2(V-V_1)R_2^*)R_2=V-V_1 \end{equation} \end{itemize} \vspace{0,2cm} Set $R=R_1+R_2\in V\mathcal{M}_kV$. Note that $T''(X)=R^*T'(RXR^*)R$ is self-adjoint and $$T''(V)=T''(V_1+V-V_1)=T''(V_1)+T''(V-V_1)$$ $$\hspace{1 cm}=R^*T'(RV_1R^*)R+R^*T'(R(V-V_1)R^*)R$$ $$\hspace{1,3 cm} =R^*T'(R_1V_1R_1^*)R+R^*T'(R_2(V-V_1)R_2^*)R$$ \begin{center} $\hspace{4,8 cm} =R_1^*T'(R_1V_1R_1^*)R_1+R_2^*T'(R_2(V-V_1)R_2^*)R_2$, by equation \ref{equationsubinv},\vspace{0,2cm} $\hspace{2.2 cm}=V_1+(V-V_1)=V$, by equations \ref{equationdoublysub1} and \ref{equationdoublysub2}. \end{center} Hence, $T'':V\mathcal{M}_kV\rightarrow V\mathcal{M}_kV$ is doubly stochastic.
\end{proof} \vspace{0,5cm} \begin{corollary}\label{corollaryimportant} Let $A\in \mathcal{M}_k\otimes \mathcal{M}_k$ be a Hermitian matrix such that $G_A:\mathcal{M}_k\rightarrow \mathcal{M}_k$ is a self-adjoint positive map and $tr(A(vv^*\otimes vv^*))> 0$ for every $v\in\mathbb{C}^k\setminus\{0\}$. There is an invertible matrix $R\in \mathcal{M}_k$ such that $(R^*\otimes R^*)A(R\otimes R)=\sum_{i=1}^n\lambda_i \gamma_i\otimes \gamma_i$, where \begin{enumerate} \item $\lambda_1=1$ and $\gamma_1=\frac{Id}{\sqrt{k}}$, \item $\lambda_i\in\mathbb{R}$ and $\gamma_i=\gamma_i^*$ for every $i$, \item $|\lambda_i|\leq 1$ for every $i$, \item $tr(\gamma_i\gamma_j)=0$ for every $i\neq j$ and $tr(\gamma_i^2)=1$ for every $i$. \end{enumerate} \end{corollary} \begin{proof} By the definition of $G_A:\mathcal{M}_k\rightarrow \mathcal{M}_k$ (given in the introduction), notice that\begin{center} $0<tr(A(vv^*\otimes vv^*))=tr(G_A(vv^*)vv^*)$. \end{center} Hence $v\notin \ker G_A(vv^*)$ for every $v\in\mathbb{C}^k\setminus\{0\}$. By lemma \ref{keylemmanormalform} (applied with $V=Id$), there is an invertible matrix $R\in \mathcal{M}_k$ such that $R^*G_A(RXR^*)R$ is doubly stochastic. Define $B=(R^*\otimes R^*)A(R\otimes R)$ and notice that $G_B(X)=R^*G_A(RXR^*)R$. Therefore, $G_B$ is a self-adjoint doubly stochastic map, i.e., $G_B(\frac{Id}{\sqrt{k}})=\frac{Id}{\sqrt{k}}$. Let $\frac{Id}{\sqrt{k}}, \gamma_2,\ldots,\gamma_{k^2}$ be an orthonormal basis of $\mathcal{M}_k$ formed by Hermitian eigenvectors of the self-adjoint positive map $G_B:\mathcal{M}_k\rightarrow \mathcal{M}_k$ such that \begin{itemize} \item $G_B(\gamma_i)=\lambda_i\gamma_i$, where $|\lambda_i|>0$ for $1\leq i\leq n$ \item $G_B(\gamma_i)=0$, for $i>n$. \end{itemize} Since $G_B$ is a positive map satisfying $G_B(Id)=Id$, its spectral radius is 1 \cite[Theorem 2.3.7]{Bhatia1}.
So $|\lambda_i|\leq 1$ for every $i$.\vspace{0.2cm} Finally, by the definition of $G_B$, $$B=\frac{Id}{\sqrt{k}}\otimes G_B\left(\frac{Id}{\sqrt{k}}\right)+\gamma_2\otimes G_B(\gamma_2)+\ldots+\gamma_{k^2}\otimes G_B(\gamma_{k^2})=\sum_{i=1}^n\lambda_i \gamma_i\otimes \gamma_i.$$ \end{proof} \vspace{0,5cm} \begin{corollary}\label{corollaryfilternormalformSPCandINV} Let $\gamma\in \mathcal{M}_k\otimes \mathcal{M}_k$ be a positive semidefinite Hermitian matrix such that $\operatorname{rank}(\gamma_A)=k$. There is an invertible matrix $R\in \mathcal{M}_k$ such that \begin{enumerate} \item $(R^*\otimes R^*)\gamma(R\otimes R)=\sum_{i=1}^n\lambda_i \gamma_i\otimes \gamma_i$, if $\mathcal{R}(\gamma^{\Gamma})$ is positive semidefinite;\vspace{0,2cm} \item $(R^*\otimes R^t)\gamma(R\otimes \overline{R})=\sum_{i=1}^n\lambda_i \gamma_i\otimes \overline{\gamma_i}$, if $\mathcal{R}(\gamma)$ is positive semidefinite,\vspace{0,2cm} \end{enumerate} where \begin{itemize} \item[$a)$] $\lambda_1=\frac{1}{k}$ and $\gamma_1=\frac{Id}{\sqrt{k}}$, \item[$b)$] $\frac{1}{k}\geq \lambda_i>0$ and $\gamma_i=\gamma_i^*$ for every $i$, \item[$c)$] $tr(\gamma_i\gamma_j)=0$ for every $i\neq j$ and $tr(\gamma_i^2)=1$ for every $i$. \end{itemize} \end{corollary} \begin{proof} $(1)$ If $\gamma$ is a state such that $\mathcal{R}(\gamma^{\Gamma})$ is positive semidefinite then, by \cite[Corollary 25]{carielloIEEE}, $\gamma$ can be written as $\gamma=\sum_{i=1}^n a_iB_i\otimes B_i$, where $a_i>0$, $B_i=B_i^*$ and $tr(B_i^2)=1$ for every $i$, and $tr(B_iB_j)=0$ for $i\neq j$. \vspace{0,3cm} Hence $G_{\gamma}(X)=\sum_{i=1}^na_iB_i\, tr(B_iX)$ is a self-adjoint map with positive eigenvalues $a_1,\ldots,a_n$ and possibly some null eigenvalues. In addition, since $\gamma$ is positive semidefinite, $G_{\gamma}$ is a positive map. Now, let $v\in\mathbb{C}^k$ be such that \begin{center} $0=tr(\gamma(vv^*\otimes vv^*))=\sum_{i=1}^na_itr(B_ivv^*)^2$.
\end{center} Since $a_i>0$ and $tr(B_ivv^*)\in\mathbb{R}$ for every $i$, $tr(B_ivv^*)=0$ for every $i$. Therefore, \begin{center} $tr(\gamma_Avv^*)=\sum_{i=1}^n a_itr(B_i)tr(B_ivv^*)=0$. \end{center} By hypothesis $\gamma_A$ is positive definite, hence $v=0$. So, by corollary \ref{corollaryimportant}, there is an invertible matrix $R$ such that \begin{equation}\label{eqSPC} (R^*\otimes R^*)\gamma(R\otimes R)=\sum_{i=1}^n\lambda_i \gamma_i\otimes \gamma_i \end{equation} satisfies the four conditions of that corollary. It remains to show that $\lambda_i>0$, and then we multiply equation (\ref{eqSPC}) by $\frac{1}{k}$ (replacing $R$ by $k^{-\frac{1}{4}}R$) to obtain our desired result.\vspace{0,2cm} Finally, since $G_{\gamma}$ has only non-negative eigenvalues and $\lambda_1,\ldots,\lambda_n$ are non-null eigenvalues of $R^*G_{\gamma}(RXR^*)R$ (as seen in the proof of corollary \ref{corollaryimportant}), $\lambda_1,\ldots,\lambda_n$ are positive.\\\\ $(2)$ If $\gamma$ is a state such that $\mathcal{R}(\gamma)$ is positive semidefinite then, by \cite[Corollary 25]{carielloIEEE}, $\gamma$ can be written as $\gamma=\sum_{i=1}^n a_iB_i\otimes \overline{B_i}$, where $a_i>0$, $B_i=B_i^*$ and $tr(B_i^2)=1$ for every $i$, and $tr(B_iB_j)=0$ for $i\neq j$. \vspace{0,3cm} Consider $\gamma^{\Gamma}=\sum_{i=1}^n a_iB_i\otimes B_i$ and notice that $G_{\gamma^{\Gamma}}(X)=G_{\gamma}(X)^t$ is also a positive map. Now repeat the proof of item $(1)$ for $\gamma^{\Gamma}$. Hence there is an invertible matrix $R$ such that \begin{equation}\label{eqInvReal} (R^*\otimes R^*)\gamma^{\Gamma}(R\otimes R)=\sum_{i=1}^n\lambda_i \gamma_i\otimes \gamma_i, \end{equation} where $\gamma_i$ and $\lambda_i$ satisfy all the required conditions. Finally, $$(R^*\otimes R^t)\gamma(R\otimes \overline{R})=\sum_{i=1}^n\lambda_i \gamma_i\otimes \overline{\gamma_i}.$$ \end{proof} \begin{corollary}\label{corollaryleftfilter} Let $\gamma\in \mathcal{M}_k\otimes \mathcal{M}_k$ be a state.
There is an invertible matrix $R\in \mathcal{M}_k$ such that $(R^*\otimes Id)\gamma(R\otimes Id)=\sum_{i=1}^na_i \gamma_i\otimes \delta_i,$ where \begin{itemize} \item[$a)$] $a_1\geq a_i>0$, for every $1\leq i\leq n$, and $\gamma_1=\frac{Id}{\sqrt{k}}$, \item[$b)$] $\gamma_i=\gamma_i^*$, $\delta_i=\delta_i^*$ for every $i$, \item[$c)$] $tr(\gamma_i\gamma_j)=tr(\delta_i\delta_j)=0$ for every $i\neq j$ and $tr(\gamma_i^2)=tr(\delta_i^2)=1$. \end{itemize} \end{corollary} \begin{proof} First, since $\gamma$ is a state, so is $F\overline{\gamma}F$. Therefore, $\gamma*F\overline{\gamma}F$ is positive semidefinite by item $a)$ of remark \ref{remarkproduct}. Now, by items $(8)$ and $(9)$ of lemma \ref{propertiesofrealignment}, $$\mathcal{R}(\gamma*F\overline{\gamma}F)=\mathcal{R}(\gamma)\mathcal{R}(\gamma)^*.$$ By item $(2)$ of corollary \ref{corollaryfilternormalformSPCandINV}, there is an invertible matrix $R\in\mathcal{M}_k$ such that \begin{center} $(R^*\otimes R^t)(\gamma*F\overline{\gamma}F)(R \otimes \overline{R})=\sum_{i=1}^n\lambda_i\gamma_i\otimes \gamma_i$, \end{center} where \begin{itemize} \item[$a)$] $\lambda_1=\frac{1}{k}$ and $\gamma_1=\frac{Id}{\sqrt{k}}$ \item[$b)$] $\frac{1}{k}\geq \lambda_i>0$ and $\gamma_i=\gamma_i^*$ for every $i$, \item[$c)$] $tr(\gamma_i\gamma_j)=0$ for every $i\neq j$ and $tr(\gamma_i^2)=1$ for every $i$. \end{itemize} \vspace{0,3cm} Define $\delta=(R^*\otimes Id)\gamma(R\otimes Id)$ and notice that $$\delta*F\delta^tF=\delta*F\overline{\delta}F=(R^*\otimes R^t)(\gamma*F\overline{\gamma}F)(R \otimes \overline{R}).$$ Thus, $G_{\delta*F\delta^tF}(\frac{Id}{\sqrt{k}})=\lambda_1\frac{Id}{\sqrt{k}}$. \vspace{0,3cm} By item $c)$ of remark \ref{remarkproduct}, $F_{\delta}(G_{\delta}(\frac{Id}{\sqrt{k}}))=G_{\delta*F\delta^tF}(\frac{Id}{\sqrt{k}})^t=\lambda_1\frac{Id}{\sqrt{k}}$. So $F_{\delta}(G_{\delta}(Id))=\lambda_1Id$.
\vspace{0,3cm} By \cite[Theorem 2.3.7]{Bhatia1}, $\lambda_1$ is the largest eigenvalue of the positive map $F_{\delta}\circ G_{\delta}$. So $\sqrt{\lambda_1}$ is the largest singular value of $G_{\delta}$ and $F_{\delta}$, since they are adjoints. \vspace{0,3cm} Next, let $\delta_1=\frac{Id}{\sqrt{k}},\delta_2,\ldots,\delta_{k^2}$ be an orthonormal basis of $\mathcal{M}_k$ formed by Hermitian eigenvectors of $F_{\delta}\circ G_{\delta}:\mathcal{M}_k\rightarrow \mathcal{M}_k$. \vspace{0,3cm} Notice that $(R^*\otimes Id)\gamma(R\otimes Id)=\delta=\sum_{i=1}^{k^2}\delta_i\otimes G_{\delta}(\delta_i)$. \vspace{0,3cm} Reordering $\delta_2,\ldots,\delta_{k^2}$ if necessary, we may assume that $G_{\delta}(\delta_i)\neq 0_{k\times k}$ exactly for $1\leq i\leq n$. Define $a_i=\|G_{\delta}(\delta_i)\|_2>0$. Thus, $\delta=\sum_{i=1}^{n}a_i\delta_i\otimes \frac{1}{a_i}G_{\delta}(\delta_i)$. \vspace{0,3cm} Notice that $G_{\delta}(\delta_i)^*=G_{\delta^*}(\delta_i^*)=G_{\delta}(\delta_i)$, since $\delta$ and $\delta_i$ are Hermitian matrices. Moreover, by the definition of $a_i$, \begin{center} $tr(\frac{1}{a_i}G_{\delta}(\delta_i)\frac{1}{a_i}G_{\delta}(\delta_i))=1,\ \ \ 1\leq i\leq n$. \end{center} In addition, since $F_{\delta}(G_{\delta}(\delta_j))$ is a multiple of $\delta_j$, which is orthogonal to $\delta_i$ for $i\neq j$, \begin{center} $tr(\frac{1}{a_i}G_{\delta}(\delta_i)\frac{1}{a_j}G_{\delta}(\delta_j))=\frac{1}{a_ia_j}tr(\delta_i F_{\delta}(G_{\delta}(\delta_j)))=0$. \end{center} Finally, $a_1^2=tr(G_{\delta}(\delta_1)^2)=tr(\delta_1F_{\delta}(G_{\delta}(\delta_1)))=\lambda_1tr(\delta_1^2)=\lambda_1$, so $a_1=\sqrt{\lambda_1}$. Notice also that, for every $i$, $a_i\leq$ the largest singular value of $G_{\delta}$, which is $\sqrt{\lambda_1}=a_1$.
\end{proof} \vspace{0,5cm} \section{Lower bound for the rank of the special triad} \vspace{0,5cm} In this section, we prove that $\operatorname{rank}(\gamma)\geq k$, whenever a state $\gamma$ is PPT or SPC or invariant under realignment and $\operatorname{rank}(\gamma_A)=\operatorname{rank}(\gamma_B)=k$. Then we show that if $\operatorname{rank}(\gamma)= k$ then $\gamma$ is separable in each of these cases. \vspace{0,2cm} We start by proving the SPC and the invariant under realignment cases. In their proofs we use the result that guarantees their separability whenever their Schmidt coefficients are equal, which follows from the complete reducibility property \cite[Proposition 15]{carielloIEEE}. \vspace{0,2cm} \subsection*{First Cases:} SPC states and invariant under realignment states \vspace{0,3cm} Before proving the inequality, notice that by the symmetry of their Schmidt decompositions \cite[Corollary 25]{carielloIEEE}, $\gamma_B=\gamma_A$ or $\gamma_B=\overline{\gamma_A}$, when $\gamma$ is SPC or invariant under realignment, respectively. Hence $\operatorname{rank}(\gamma_A)=\operatorname{rank}(\gamma_B)$. In addition, we can assume without loss of generality that $\operatorname{rank}(\gamma_A)=k$, otherwise we would be able to embed $\gamma$ in $\mathcal{M}_s\otimes \mathcal{M}_s$, where $s=\operatorname{rank}(\gamma_A)$, and obtain the same result. \begin{theorem} If $\gamma\in \mathcal{M}_k\otimes \mathcal{M}_k$ is a SPC state such that $\operatorname{rank}(\gamma_A)=k$ then $\operatorname{rank}(\gamma)\geq k$. In addition, if the equality holds then $\gamma$ is separable. \end{theorem} \begin{proof} By corollary \ref{corollaryfilternormalformSPCandINV}, there is an invertible matrix $R$ such that \begin{center} $\delta=(R^*\otimes R^*)\gamma(R\otimes R)=\sum_{i=1}^n\lambda_i \gamma_i\otimes \gamma_i$, \end{center} where $\lambda_1=\dfrac{1}{k}$, $\gamma_1=\dfrac{Id}{\sqrt{k}}$, $tr(\gamma_i\gamma_1)=\dfrac{tr(\gamma_i)}{\sqrt{k}}=0$ for $i>1$.
So $\displaystyle \delta_A=\sum_{i=1}^n\lambda_i\gamma_itr(\gamma_i)=\frac{Id}{k}$. Now, notice that $\delta$ is still SPC by \cite[Corollary 25]{carielloIEEE}. Hence $\|\delta\|_{\infty}\leq \|\delta_A\|_{\infty}=\frac{1}{k}$, by theorem \ref{specialspectralradius}. So $\dfrac{1}{k}\geq \|\delta\|_{\infty}\geq \dfrac{tr(\delta)}{\operatorname{rank}(\delta)}=\dfrac{1}{\operatorname{rank}(\delta)}$. Hence $ \operatorname{rank}(\delta)\geq k$. \vspace{0,2cm} Since $R$ is invertible, $\operatorname{rank}(\gamma)=\operatorname{rank}(\delta)\geq k$. \vspace{0,3cm} For the next part, assume $\operatorname{rank}(\gamma)=k$. Then $\operatorname{rank}(\delta)=k$. Therefore, \begin{center} $1=tr(\delta)\leq \|\delta\|_{\infty}\operatorname{rank}(\delta)=\dfrac{1}{k}\cdot k=1$. \end{center} Since the equality $tr(\delta)=\|\delta\|_{\infty}\operatorname{rank}(\delta)$ holds, the non-null eigenvalues of $\delta$ are equal to $\|\delta\|_{\infty}$. So $tr(\delta)=k\|\delta\|_{\infty}=1$. Hence $\|\delta\|_{\infty}=\frac{1}{k}$ and $tr(\delta^2)=\frac{1}{k}$. \vspace{0,3cm} Next, since the map $\mathcal{R}((\cdot)^{\Gamma})$ preserves the Frobenius norm of $\delta$, \begin{equation}\label{eqeigenvalueequal} \dfrac{1}{k}=tr(\delta^2)=tr(\mathcal{R}(\delta^{\Gamma})\mathcal{R}(\delta^{\Gamma})^*)\leq \|\mathcal{R}(\delta^{\Gamma})\|_{\infty} \|\mathcal{R}(\delta^{\Gamma})\|_{1}. \end{equation} \vspace{0,3cm} By item (5) of lemma \ref{propertiesofrealignment}, $\mathcal{R}(\delta^{\Gamma})=\mathcal{R}(\delta)F$. Since $F$ is an isometry, $\|\mathcal{R}(\delta)F\|_{\infty}=\|\mathcal{R}(\delta)\|_{\infty}$. Therefore, $$\|\mathcal{R}(\delta^{\Gamma})\|_{\infty}=\|\mathcal{R}(\delta)F\|_{\infty}=\|\mathcal{R}(\delta)\|_{\infty}=\lambda_1=\dfrac{1}{k}.$$ Now, since $\delta$ is SPC, by its definition, $\mathcal{R}(\delta^{\Gamma})$ is positive semidefinite. 
Hence $$\|\mathcal{R}(\delta^{\Gamma})\|_{1}=tr(\mathcal{R}(\delta^{\Gamma}))=tr(\mathcal{R}(\delta^{\Gamma})^{\Gamma})\leq \|\mathcal{R}(\delta^{\Gamma})^{\Gamma}\|_{1},$$ since the partial transpose preserves the trace. \vspace{0,3cm} By item (7) of lemma \ref{propertiesofrealignment}, $\mathcal{R}(\delta^{\Gamma})^{\Gamma}=\delta F$. Thus, $\|\mathcal{R}(\delta^{\Gamma})\|_{1}\leq \|\mathcal{R}(\delta^{\Gamma})^{\Gamma}\|_{1}=\|\delta F\|_{1}=\|\delta\|_{1}=1.$ \vspace{0,3cm} Using these pieces of information in equation (\ref{eqeigenvalueequal}) we obtain $$\dfrac{1}{k}=tr(\mathcal{R}(\delta^{\Gamma})\mathcal{R}(\delta^{\Gamma})^*)\leq \|\mathcal{R}(\delta^{\Gamma})\|_{\infty} \|\mathcal{R}(\delta^{\Gamma})\|_{1}\leq \dfrac{1}{k}\cdot 1.$$ Again, $tr(\mathcal{R}(\delta^{\Gamma})^2)= \|\mathcal{R}(\delta^{\Gamma})\|_{\infty} \|\mathcal{R}(\delta^{\Gamma})\|_{1}$ only holds if the non-null eigenvalues of the positive semidefinite Hermitian matrix $\mathcal{R}(\delta^{\Gamma})$ are equal to $\|\mathcal{R}(\delta^{\Gamma})\|_{\infty}=\lambda_1=\dfrac{1}{k}$. Finally, the non-null eigenvalues of $\mathcal{R}(\delta^{\Gamma})$ are the non-null Schmidt coefficients of $\delta$. Since $\delta$ is SPC and its non-null Schmidt coefficients are equal, $\delta$ is separable by \cite[Proposition 15]{carielloIEEE}. Since $R$ is invertible, $\gamma$ is separable too. \end{proof} \vspace{0,5cm} The invariant under realignment counterpart is proved next in a similar way with minor modifications. \vspace{0,5cm} \begin{theorem} If $\gamma\in \mathcal{M}_k\otimes \mathcal{M}_k$ is an invariant under realignment state such that $\operatorname{rank}(\gamma_A)=k$ then $\operatorname{rank}(\gamma)\geq k$. In addition, if the equality holds then $\gamma$ is separable. 
\end{theorem} \begin{proof} By corollary \ref{corollaryfilternormalformSPCandINV}, there is an invertible matrix $R$ such that \begin{center} $\delta=(R^*\otimes R^t)\gamma(R\otimes \overline{R})=\sum_{i=1}^n\lambda_i \gamma_i\otimes \overline{\gamma_i}$, \end{center} where $\lambda_1=\dfrac{1}{k}$, $\gamma_1=\dfrac{Id}{\sqrt{k}}$, $tr(\gamma_i\gamma_1)=\dfrac{tr(\gamma_i)}{\sqrt{k}}=0$ for $i>1$. So $\displaystyle \delta_A=\sum_{i=1}^n\lambda_i\gamma_itr(\gamma_i)=\frac{Id}{k}$. Now, by item (3) of lemma \ref{propertiesofrealignment}, $$\mathcal{R}(\delta)=\mathcal{R}((R^*\otimes R^t)\gamma(R\otimes \overline{R}))=(R^*\otimes R^t)\mathcal{R}(\gamma)(R\otimes \overline{R})=(R^*\otimes R^t)\gamma(R\otimes \overline{R})=\delta.$$ Thus, $\delta$ is invariant under realignment. Hence $\|\delta\|_{\infty}\leq \|\delta_A\|_{\infty}=\frac{1}{k}$, by theorem \ref{specialspectralradius}. So $\dfrac{1}{k}\geq \|\delta\|_{\infty}\geq \dfrac{tr(\delta)}{\operatorname{rank}(\delta)}=\dfrac{1}{\operatorname{rank}(\delta)}$. Hence $ \operatorname{rank}(\delta)\geq k$. Since $R$ is invertible, $\operatorname{rank}(\gamma)=\operatorname{rank}(\delta)\geq k$. \vspace{0,3cm} For the next part, assume $\operatorname{rank}(\gamma)=k$. Then $\operatorname{rank}(\delta)=k$. Therefore, \begin{center} $1=tr(\delta)\leq \|\delta\|_{\infty}\operatorname{rank}(\delta)=\dfrac{1}{k}\cdot k=1$. \end{center} Since the equality $tr(\delta)=\|\delta\|_{\infty}\operatorname{rank}(\delta)$ holds, the non-null eigenvalues of $\delta$ are equal to $\|\delta\|_{\infty}$. Moreover, $tr(\delta)=k\|\delta\|_{\infty}=1$. Hence $\|\delta\|_{\infty}=\frac{1}{k}$. \vspace{0,3cm} Since $\delta$ is invariant under realignment, the non-null Schmidt coefficients of the Schmidt decomposition of $\delta$ are the non-null eigenvalues of $\delta$, which are equal. We know that every invariant under realignment state with equal non-null Schmidt coefficients is separable by \cite[Proposition 15]{carielloIEEE}. 
So $\delta$ is separable and so is $\gamma$, since $R$ is invertible. \end{proof} \vspace{0,5cm} \subsection*{Third case:} The PPT counterpart. \vspace{0,5cm} For our final results, we need some tools developed in sections 2 and 3 together with the complete reducibility property. First, we show that the rank of a PPT state is greater than or equal to its reduced ranks in the next lemma. \vspace{0,5cm} \begin{lemma}\label{lemmaineqPPT} Let $\gamma\in\mathcal{M}_k\otimes \mathcal{M}_m$ be a PPT state. Then $\operatorname{rank}(\gamma)\geq\max\{\operatorname{rank}(\gamma_A), \operatorname{rank}(\gamma_B)\}.$ \end{lemma} \begin{proof} Let us assume without loss of generality that $\max\{\operatorname{rank}(\gamma_A), \operatorname{rank}(\gamma_B)\}=\operatorname{rank}(\gamma_A)=k$. So there is an invertible matrix $R\in\mathcal{M}_k$ such that $R\gamma_AR^*=\frac{1}{k}Id$. \vspace{0,3cm} Define $\delta=(R\otimes Id)\gamma(R^*\otimes Id)$ and notice that $\delta_A=\frac{1}{k}Id$. \vspace{0,3cm} Since $\delta$ is PPT, by theorem \ref{specialspectralradius}, $\|\delta\|_{\infty}\leq \|\delta_A\|_{\infty}=\frac{1}{k}$. Hence $$1=tr(\delta_A)=tr(\delta)\leq \|\delta\|_{\infty}\operatorname{rank}(\delta)\leq\frac{1}{k} \operatorname{rank}(\delta).$$ Thus, $\operatorname{rank}(\gamma)=\operatorname{rank}(\delta)\geq \max\{\operatorname{rank}(\gamma_A), \operatorname{rank}(\gamma_B)\}$. \end{proof} \vspace{0,5cm} Now, we prove that a PPT state in the filter normal form with minimal rank must be separable, which is the key ingredient of the proof of the final theorem. \vspace{0,5cm} \begin{lemma}\label{lemmaseparabilityPPT}Let $\gamma\in\mathcal{M}_k\otimes \mathcal{M}_k$ be a PPT state such that \begin{enumerate} \item $\gamma_A=\gamma_B=\frac{1}{k}Id$, \item $\gamma$ has $k$ eigenvalues equal to $\frac{1}{k}$ and the others zero. \end{enumerate} Then $\gamma$ is separable. 
\end{lemma} \begin{proof} This result is trivial in $\mathcal{M}_2\otimes \mathcal{M}_2$, since every PPT state there is separable. Assume the result is true in $\mathcal{M}_i\otimes \mathcal{M}_i$ for $i<k$. Let us prove the result in $\mathcal{M}_k\otimes \mathcal{M}_k$. \vspace{0,2cm} Now, since $\gamma$ is a positive semidefinite Hermitian matrix, the linear transformations $F_{\gamma}$ and $G_{\gamma}$ are positive maps and adjoints with respect to the trace inner product. \vspace{0,2cm} In addition, notice that $\frac{Id}{k}=\gamma_A=F_{\gamma}(Id)$ and $\frac{Id}{k}=\gamma_B=G_{\gamma}(Id)$. Hence $F_{\gamma}(G_{\gamma}(Id))=\frac{1}{k^2}Id$. \vspace{0,2cm} By \cite[Theorem 2.3.7]{Bhatia1}, the spectral radius of the positive operator $F_{\gamma}\circ G_{\gamma}$ is $\frac{1}{k^2}$. Hence the largest singular value of $G_{\gamma}$ and $F_{\gamma}$ is $\frac{1}{k}$, since they are adjoints. Thus, \begin{center} $\|G_{\gamma}(X)\|_2\leq \frac{1}{k}$ and $\|F_{\gamma}(X)\|_2\leq \frac{1}{k}$, whenever $\|X\|_2=1$. \end{center} Next, since $\gamma$ has $k$ linearly independent eigenvectors associated to $\frac{1}{k}$, by combining two of them we can find an eigenvector $v\in\mathbb{C}^k\otimes\mathbb{C}^k$ associated to $\frac{1}{k}$ such that $\|v\|_2=1$ and $\operatorname{rank}(v)=m<k$. \vspace{0,2cm} Notice that there are $R,S\in \mathcal{M}_k$ with rank $m$ such that\begin{center} $v=(R\otimes S)u$ $(u$ as defined in remark \ref{remarkproduct}) and $\|RR^*\|_2=\|SS^*\|_2=1$. 
\end{center} Therefore $\frac{1}{k}=tr(\gamma vv^*)$\vspace{0,1cm} $\hspace{2,25cm}=tr((R^*\otimes S^*)\gamma (R\otimes S)uu^t)$\vspace{0,1cm} $\hspace{2,25cm}=tr((R^*\otimes S^t)\gamma^{\Gamma} (R\otimes \overline{S})F)$\ $(F$ as defined in remark $\ref{remarkproduct})$\vspace{0,1cm} $\hspace{2,25cm}\leq tr((R^*\otimes S^t)\gamma^{\Gamma} (R\otimes \overline{S}))$\ $($since $\gamma^{\Gamma}$ and $Id-F$ are positive semidefinite$)$\vspace{0,1cm} $\hspace{2,25cm}= tr(\gamma (RR^*\otimes SS^*))$\vspace{0,1cm} $\hspace{2,25cm}= tr(G_{\gamma}(RR^*)SS^*)$\vspace{0,1cm} $\hspace{2,25cm} \leq \|G_{\gamma}(RR^*)\|_2\|SS^*\|_2\leq \frac{1}{k}\cdot 1$\ $($since the largest singular value of $G_{\gamma}$ is $\frac{1}{k})$. \vspace{0,5cm} Therefore, all the inequalities above are equalities, which implies \begin{enumerate} \item $G_{\gamma}(RR^*)=\lambda SS^*$ for some $\lambda>0$, since $G_{\gamma}$ is a positive map, and \vspace{0,1cm} \item $\frac{1}{k}=\|G_{\gamma}(RR^*)\|_2=\lambda\|SS^*\|_2=\lambda$. \end{enumerate} \vspace{0,3cm} Hence $G_{\gamma}(RR^*)=\frac{1}{k} SS^*$. Analogously, we get $F_{\gamma}(SS^*)=\frac{1}{k}RR^*$, since $tr(\gamma (RR^*\otimes SS^*))=tr(RR^*F_{\gamma}(SS^*))$. \vspace{0,3cm} Therefore, $F_{\gamma}(G_{\gamma}(RR^*))=\frac{1}{k^2}RR^*$ and $\operatorname{rank}(RR^*)=m<k$. \vspace{0,3cm} Since $\gamma$ is PPT, by the complete reducibility property, \begin{equation}\label{eqcomplredutproperty} \gamma=(V\otimes W)\gamma(V\otimes W)+(V^{\perp}\otimes W^{\perp})\gamma(V^{\perp}\otimes W^{\perp}), \end{equation} where $V,W, V^{\perp},W^{\perp}$ are orthogonal projections onto $\operatorname{Im}(RR^*)$, $\operatorname{Im}(SS^*)$, $\ker(RR^*)$ and $\ker(SS^*)$, respectively. 
\vspace{0,3cm} By equation (\ref{eqcomplredutproperty}) and the definition of $G_{\gamma}$, \begin{center} $\operatorname{Im}(G_{\gamma}(V))\subset \operatorname{Im}(W)$, $\operatorname{Im}(G_{\gamma}(V^{\perp})) \subset \operatorname{Im}(W^{\perp})$ and\vspace{0.2cm} \hspace{1.2cm}$\operatorname{Im}(F_{\gamma}(W))\subset \operatorname{Im}(V)$, $\operatorname{Im}(F_{\gamma}(W^{\perp}))\subset \operatorname{Im}(V^{\perp}).\hspace{2cm}$ \end{center} \vspace{0,3cm} Next, recall that $V+V^{\perp}=W+W^{\perp}=Id$, $VV^{\perp}=WW^{\perp}=0$ and \begin{center} $G_{\gamma}(V)+G_{\gamma}(V^{\perp})=G_{\gamma}(Id)=\frac{1}{k}Id=\frac{1}{k}W+\frac{1}{k}W^{\perp},$ \end{center} \begin{center} $F_{\gamma}(W)+F_{\gamma}(W^{\perp})=F_{\gamma}(Id)=\frac{1}{k}Id=\frac{1}{k}V+\frac{1}{k}V^{\perp}.$ \end{center} \vspace{0,3cm} Therefore \begin{center} $G_{\gamma}(V)=\frac{1}{k}W$, $F_{\gamma}(W)=\frac{1}{k}V$ and $G_{\gamma}(V^{\perp})=\frac{1}{k}W^{\perp}$, $F_{\gamma}(W^{\perp})=\frac{1}{k}V^{\perp}$. \end{center} Now, define \begin{center} $\gamma_1=\frac{k}{m}(V\otimes W)\gamma(V\otimes W)$ and $\gamma_2=\frac{k}{k-m}(V^{\perp}\otimes W^{\perp})\gamma(V^{\perp}\otimes W^{\perp})$. \end{center} \vspace{0,3cm} Notice that \begin{center} $(\gamma_1)_A=F_{\gamma_1}(Id)=\frac{k}{m}F_{\gamma}(W)=\frac{1}{m}V$\ \ and\ \ $(\gamma_1)_B=G_{\gamma_1}(Id)=\frac{k}{m}G_{\gamma}(V)=\frac{1}{m}W$. \end{center} \vspace{0,3cm} Thus, $\max\{\operatorname{rank}((\gamma_1)_A),\operatorname{rank}((\gamma_1)_B)\}=$ \begin{center} \hspace{6cm}$=\max\{\operatorname{rank}(V),\operatorname{rank}(W)\}=\max\{\operatorname{rank}(R),\operatorname{rank}(S)\}=m$. \end{center} Moreover, notice that \begin{center} $(\gamma_2)_A=F_{\gamma_2}(Id)=\frac{k}{k-m}F_{\gamma}(W^{\perp})=\frac{1}{k-m}V^{\perp}$\ \ and\ \ $(\gamma_2)_B=G_{\gamma_2}(Id)=\frac{k}{k-m}G_{\gamma}(V^{\perp})=\frac{1}{k-m}W^{\perp}$. 
\end{center} \vspace{0,3cm} Therefore $\max\{\operatorname{rank}((\gamma_2)_A),\operatorname{rank}((\gamma_2)_B)\}=\max\{\operatorname{rank}(V^{\perp}),\operatorname{rank}(W^{\perp})\}=k-m$. \vspace{0,3cm} By their definitions, $\gamma_1$ and $\gamma_2$ are PPT. So, by lemma \ref{lemmaineqPPT}, $\operatorname{rank}(\gamma_1)\geq m$ and $\operatorname{rank}(\gamma_2)\geq k-m$. \vspace{0,3cm} Recall that $k=\operatorname{rank}(\gamma)=\operatorname{rank}(\gamma_1)+\operatorname{rank}(\gamma_2)\geq m+(k-m)$. Thus $\operatorname{rank}(\gamma_1)= m$ and $\operatorname{rank}(\gamma_2)= k-m$. \vspace{0,3cm} Since $\gamma=\frac{m}{k}\gamma_1+\frac{k-m}{k}\gamma_2$, $\gamma$ has $k$ eigenvalues equal to $\frac{1}{k}$ and $\gamma_1\gamma_2=0$, \begin{itemize} \item $\gamma_1$ has $m$ eigenvalues equal to $\frac{1}{m}$ and the others $0$, \item $\gamma_2$ has $k-m$ eigenvalues equal to $\frac{1}{k-m}$ and the others $0$. \end{itemize} \vspace{0,3cm} Hence, \begin{itemize} \item $\gamma_1$ has $m$ eigenvalues equal to $\frac{1}{m}$, $(\gamma_1)_A=\frac{1}{m}V$, $(\gamma_1)_B=\frac{1}{m}W$ and $\operatorname{rank}(V)=\operatorname{rank}(W)=m$. \item $\gamma_2$ has $k-m$ eigenvalues equal to $\frac{1}{k-m}$, $(\gamma_2)_A=\frac{1}{k-m}V^{\perp}$, $(\gamma_2)_B=\frac{1}{k-m}W^{\perp}$ and $\operatorname{rank}(V^{\perp})=\operatorname{rank}(W^{\perp})=k-m$. \end{itemize} \vspace{0,3cm} By the induction hypothesis, $\gamma_1$ and $\gamma_2$ are separable and so is $\gamma$.\end{proof} \vspace{0,5cm} Finally, we prove the PPT counterpart of our last result. \vspace{0,5cm} \begin{theorem} If $\gamma\in \mathcal{M}_k\otimes \mathcal{M}_k$ is a PPT state such that $\operatorname{rank}(\gamma_A)=\operatorname{rank}(\gamma_B)=k$ then $\operatorname{rank}(\gamma)\geq k$. In addition, if the equality holds then $\gamma$ is separable. \end{theorem} \begin{proof} We already know that $\operatorname{rank}(\gamma)\geq k$ by lemma \ref{lemmaineqPPT}. 
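As a purely illustrative numerical sanity check (not part of the proof), the chain of inequalities behind lemma \ref{lemmaineqPPT} can be observed on a random separable (hence PPT) state: after a one-sided filter making $\delta_A=\frac{Id}{k}$, the largest eigenvalue of $\delta$ is at most $\frac{1}{k}$, which forces $\operatorname{rank}(\delta)\geq k$. The random construction and all names below are assumptions of this sketch.

```python
import numpy as np

# Illustrative check of the rank bound on a random separable state:
# filter side A so that delta_A = Id/k, then verify the spectral bound
# lambda_max(delta) <= 1/k and the resulting rank bound rank(delta) >= k.
k, n_terms = 3, 12
rng = np.random.default_rng(1)

def rand_pure(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

# Random separable state gamma = sum_i p_i x_i x_i^* (x) y_i y_i^*.
p = rng.dirichlet(np.ones(n_terms))
gamma = np.zeros((k * k, k * k), dtype=complex)
for i in range(n_terms):
    x, y = rand_pure(k), rand_pure(k)
    gamma += p[i] * np.kron(np.outer(x, x.conj()), np.outer(y, y.conj()))

gamma_A = np.einsum('iaja->ij', gamma.reshape(k, k, k, k))  # Tr_B(gamma)
# Local filter Rf = (k * gamma_A)^{-1/2}, so that delta_A = Id/k.
w, U = np.linalg.eigh(k * gamma_A)
Rf = U @ np.diag(w ** -0.5) @ U.conj().T
delta = np.kron(Rf, np.eye(k)) @ gamma @ np.kron(Rf.conj().T, np.eye(k))

delta_A = np.einsum('iaja->ij', delta.reshape(k, k, k, k))
lam_max = np.max(np.linalg.eigvalsh(delta))
rank_delta = np.linalg.matrix_rank(delta)
```

For separable states the bound $\lambda_{\max}(\delta)\leq\frac{1}{k}$ follows directly, since each product term $P_j\otimes Q_j$ satisfies $Q_j\leq Id$; the lemma extends it to all PPT states.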
Let us assume that $\operatorname{rank}(\gamma)= k$.\vspace{0,2cm} Define $\gamma_1=(Id\otimes S)\gamma(Id\otimes S^*)$ such that $(\gamma_1)_B=\frac{1}{k}Id$, where $S$ is invertible.\vspace{0,2cm} Since $\gamma_1$ is also PPT, $\|\gamma_1\|_{\infty}\leq \|(\gamma_1)_B\|_{\infty}=\frac{1}{k}$, by theorem \ref{specialspectralradius}.\vspace{0,2cm} Hence $ 1=tr((\gamma_1)_B)=tr(\gamma_1)\leq \|\gamma_1\|_{\infty}\operatorname{rank}(\gamma_1)=\frac{1}{k}\cdot k=1$. So $\gamma_1$ has $k$ eigenvalues equal to $\frac{1}{k}$ and the others $0$.\vspace{0,2cm} Notice that $(F\overline{\gamma_1}F)^{\Gamma}$ is positive semidefinite and so is $(\gamma_1*F\overline{\gamma_1}F)^{\Gamma}=\gamma_1*(F\overline{\gamma_1}F)^{\Gamma}$ as a $*$-product of two positive semidefinite Hermitian matrices by item $a)$ of remark \ref{remarkproduct}. So the positive semidefinite Hermitian matrix $\gamma_1*F\overline{\gamma_1}F$ is PPT. \vspace{0,2cm} Now, by items $(8)$ and $(9)$ of lemma \ref{propertiesofrealignment}, $\mathcal{R}(\gamma_1*F\overline{\gamma_1}F)=\mathcal{R}(\gamma_1)\mathcal{R}(\gamma_1)^*$, which is positive semidefinite. 
\vspace{0,2cm} Next, on one hand $tr(\mathcal{R}(\gamma_1*F\overline{\gamma_1}F))=tr(\mathcal{R}(\gamma_1)\mathcal{R}(\gamma_1)^*)=tr(\gamma_1\gamma_1^*)=\frac{1}{k}$, since $\mathcal{R}$ is an isometry.\vspace{0,2cm} On the other hand $\|\mathcal{R}(\gamma_1*F\overline{\gamma_1}F)^{\Gamma}\|_1=\|\mathcal{R}(\gamma_1*(F\overline{\gamma_1}F)F)\|_1$ (by item (6) of lemma \ref{propertiesofrealignment}) \vspace{0,2cm} \hspace{6.8cm} $=\|\mathcal{R}(\gamma_1*(F\overline{\gamma_1}F)F)F\|_1$ (since $F$ is an isometry) \vspace{0,2cm} \hspace{6.8cm} $=\|(\gamma_1*(F\overline{\gamma_1}F))^{\Gamma}\|_1$ (by item (4) of lemma \ref{propertiesofrealignment}) \vspace{0,2cm} \hspace{6.8cm} $=tr(\gamma_1*(F\overline{\gamma_1}F))$ (since $\gamma_1*(F\overline{\gamma_1}F)$ is PPT) \vspace{0,2cm} \hspace{6.8cm} $=tr((\gamma_1)_B(\gamma_1)_B^*)=\frac{1}{k}$ (by item $c)$ of remark \ref{remarkproduct}). \vspace{0,8cm} Therefore, $tr(\mathcal{R}(\gamma_1*F\overline{\gamma_1}F))= \|\mathcal{R}(\gamma_1*F\overline{\gamma_1}F)^{\Gamma}\|_1$. 
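The equality criterion used at this point can be illustrated numerically: for a positive semidefinite $X$, $tr(X)\leq \|X^{\Gamma}\|_1$ always holds, because $tr(X^{\Gamma})=tr(X)$ and the trace norm dominates the trace, with equality exactly when $X^{\Gamma}$ is positive semidefinite. The partial-transpose convention and the random matrix below are assumptions of this sketch, which is not part of the proof.

```python
import numpy as np

# Illustrative check: for a random positive semidefinite Y on C^k (x) C^k,
# tr(Y) = tr(Y^Gamma) <= ||Y^Gamma||_1, since the partial transpose
# preserves the trace and the trace norm dominates the trace.
k = 3
rng = np.random.default_rng(3)
M = rng.normal(size=(k * k, k * k)) + 1j * rng.normal(size=(k * k, k * k))
Y = M @ M.conj().T                      # random positive semidefinite matrix

def partial_transpose(X, k):
    # Transpose the second tensor factor: X[(i,a),(j,b)] -> X[(i,b),(j,a)].
    X4 = X.reshape(k, k, k, k)
    return X4.transpose(0, 3, 2, 1).reshape(k * k, k * k)

YG = partial_transpose(Y, k)
mu = np.linalg.eigvalsh(YG)             # Y^Gamma is Hermitian
trace_Y = np.trace(Y).real
trace_norm = np.abs(mu).sum()           # ||Y^Gamma||_1
```

The gap between `trace_norm` and `trace_Y` is twice the absolute sum of the negative eigenvalues of $Y^{\Gamma}$, so it vanishes precisely for PPT matrices.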
\vspace{0,2cm} Since $\mathcal{R}(\gamma_1*F\overline{\gamma_1}F)$ is positive semidefinite, this last equality means that $\mathcal{R}(\gamma_1*F\overline{\gamma_1}F)^{\Gamma}$ is positive semidefinite, i.e., $\mathcal{R}(\gamma_1*F\overline{\gamma_1}F)$ is PPT.\vspace{0,2cm} We have just discovered that $\mathcal{R}(\gamma_1*F\overline{\gamma_1}F)$ and $\gamma_1*F\overline{\gamma_1}F$ are PPT, but in this situation lemma \ref{lemmarealigmentPPTareequal} says that $\gamma_1*F\overline{\gamma_1}F=\mathcal{R}(\gamma_1*F\overline{\gamma_1}F).$\vspace{0,2cm} Now, by corollary \ref{corollaryleftfilter}, there is an invertible matrix $R\in \mathcal{M}_k$ such that $$\gamma_2=(R^*\otimes Id)\gamma_1(R\otimes Id)=\sum_{i=1}^n\lambda_i A_i\otimes B_i,$$ where \begin{itemize} \item[$a)$] $\lambda_1\geq \lambda_i>0$ for every $i$ and $A_1=\frac{Id}{\sqrt{k}}$, \item[$b)$] $A_i=A_i^*$, $B_i=B_i^*$ for every $i$, \item[$c)$] $tr(A_iA_j)=tr(B_iB_j)=0$ for every $i\neq j$ and $tr(A_i^2)=tr(B_i^2)=1$.\vspace{0,2cm} \end{itemize} In addition, we can normalize its trace, so assume that $tr(\gamma_2)=1$. 
\vspace{0,2cm} Hence, $\gamma_2*(F\overline{\gamma_2} F)=\sum_{i=1}^n\lambda_i^2 A_i\otimes \overline{A_i}=(R^*\otimes R^t)(\gamma_1*(F\overline{\gamma_1} F))(R\otimes \overline{R}).$\vspace{0,2cm} Like $\gamma_1*(F\overline{\gamma_1} F)$, the positive semidefinite Hermitian matrix $\gamma_2*(F\overline{\gamma_2} F)$ is also invariant under realignment because \vspace{0,2cm} $\mathcal{R}(\gamma_2*(F\overline{\gamma_2} F))=\mathcal{R}((R^*\otimes R^t)(\gamma_1*(F\overline{\gamma_1} F))(R\otimes \overline{R}))$ $\hspace{2.95cm}=(R^*\otimes R^t)\mathcal{R}(\gamma_1*(F\overline{\gamma_1} F))(R\otimes \overline{R})$, by item $(3)$ of lemma \ref{propertiesofrealignment},\vspace{0,2cm} $\hspace{2.95cm}=(R^*\otimes R^t)(\gamma_1*(F\overline{\gamma_1} F))(R\otimes \overline{R})$, since $\gamma_1*F\overline{\gamma_1}F=\mathcal{R}(\gamma_1*F\overline{\gamma_1}F).$\vspace{0,2cm} $\hspace{2.95cm}=\gamma_2*(F\overline{\gamma_2} F)$.\vspace{0,5cm} Now, notice that $k\lambda_1^2=tr\left(\gamma_2*(F\overline{\gamma_2} F)\ (A_1\otimes \overline{A_1})\right)k$ \hspace{3,8cm} $=tr\left(\gamma_2*(F\overline{\gamma_2} F)\ (\frac{Id}{\sqrt{k}}\otimes \frac{Id}{\sqrt{k}})\right)k$\vspace{0,2cm} \hspace{3,8cm} $=tr(\gamma_2*(F\overline{\gamma_2} F))$\vspace{0,2cm} \hspace{3,8cm} $=tr(\mathcal{R}(\gamma_2*(F\overline{\gamma_2} F)))$, since $\gamma_2*F\overline{\gamma_2}F=\mathcal{R}(\gamma_2*F\overline{\gamma_2}F)$\vspace{0,2cm} \hspace{3,8cm} $=tr(\mathcal{R}(\gamma_2)\mathcal{R}(\gamma_2)^*)$, by items (8--9) of lemma \ref{propertiesofrealignment} \vspace{0,2cm} \hspace{3,8cm} $=tr(\gamma_2\gamma_2^*)$, since $\mathcal{R}$ is an isometry. \vspace{0,5cm} Next, $\|\gamma_2\|^2_{\infty}\leq \|\mathcal{R}(\gamma_2)\|_{\infty}^2$, since $\gamma_2$ is PPT, by theorem \ref{specialspectralradius}.\vspace{0,2cm} Notice that the largest singular value of $G_{\gamma_2}$ is $\lambda_1$ by item $a)$ above and the definition of $G_{\gamma_2}$. 
Hence, by lemma \ref{lemmaoperatornormrealignment}, $\|\mathcal{R}(\gamma_2)\|_{\infty}=\lambda_1$. Moreover, recall that $\operatorname{rank}(\gamma_2)=\operatorname{rank}(\gamma_1)=\operatorname{rank}(\gamma)=k$. Therefore, \begin{center} $k\lambda_1^2=tr(\gamma_2^2)\leq \|\gamma_2\|^2_{\infty}\operatorname{rank}(\gamma_2)\leq\lambda_1^2\cdot k.$ \end{center} \vspace{0,2cm} The inequalities above are, in fact, equalities, which only happens when all the $k$ non-null eigenvalues of $\gamma_2$ are equal to $\lambda_1$. Therefore, $1=tr(\gamma_2)=k\lambda_1$. So $\lambda_1=\frac{1}{k}$. In addition, $$1=tr(\gamma_2)=\lambda_1tr(A_1)tr(B_1)=\frac{1}{k}\sqrt{k}\ tr(B_1).$$ So $tr(B_1)=\sqrt{k}$ and $tr(B_1^2)=1$. Recall that $G_{\gamma_2}(\frac{1}{\lambda_1}A_1)=B_1$ is a positive semidefinite Hermitian matrix, since $G_{\gamma_2}$ is a positive map and $\frac{1}{\lambda_1}A_1=\sqrt{k}\,Id$. Under these conditions the only possibility for $B_1$ is $B_1=\frac{Id}{\sqrt{k}}$, since $k=tr(B_1)^2\leq \operatorname{rank}(B_1)tr(B_1^2)\leq k$ forces equality in the Cauchy--Schwarz inequality, so $B_1$ is a positive multiple of the identity. \vspace{0,2cm} Finally, $\gamma_2$ has $k$ eigenvalues equal to $\frac{1}{k}$ and the others $0$, and \begin{center} $(\gamma_2)_B=G_{\gamma_2}(Id)=G_{\gamma_2}(\sqrt{k}A_1)=\frac{Id}{k}$, \ \ \ $(\gamma_2)_A=F_{\gamma_2}(Id)=F_{\gamma_2}(\sqrt{k}B_1)=\frac{Id}{k}$. \end{center} \vspace{0,1cm} By lemma \ref{lemmaseparabilityPPT}, $\gamma_2$ is separable and so are $\gamma_1$ and $\gamma$. \end{proof} \vspace{0,2cm} \section{Summary and Conclusion} \vspace{0,2cm} In this article we proved new results for a triad of types of quantum states, which includes the positive under partial transpose type. We obtained the same upper bound for the spectral radius of these types of quantum states. Then we showed that two of these types can be put in the filter normal form while retaining their shapes. Finally, we proved that there is a lower bound for their ranks, and that whenever this lower bound is attained these states are separable. This last result is another consequence of their complete reducibility property. 
This provides ample evidence that these types of states are deeply connected. In addition, their complete reducibility property is a unifying principle, connecting these types and yielding many results in entanglement theory. \vspace{0,2cm} \section{Disclosure Statement} \vspace{0,2cm} No potential conflict of interest was reported by the author. \vspace{0,2cm} \begin{bibdiv} \begin{biblist} \bib{Bhatia1}{book}{ title={Positive definite matrices}, author={Bhatia, Rajendra}, year={2009}, publisher={Princeton University Press} } \bib{cariello}{article}{ title={Separability for weakly irreducible matrices}, author={Cariello, Daniel}, journal={Quantum Inf. Comput.}, volume={14}, number={15-16}, pages={1308--1337}, year={2014} } \bib{carielloSPC}{article}{ author={Cariello, D.}, title={Does symmetry imply PPT property?}, journal={Quantum Inf. Comput.}, volume={15}, date={2015}, number={9-10}, pages={812--824}, } \bib{carielloIEEE}{article}{ author={Cariello, Daniel}, title={Completely Reducible Maps in Quantum Information Theory}, journal={IEEE Transactions on Information Theory}, volume={62}, date={2016}, number={4}, pages={1721--1732}, } \bib{Cariello_LAA}{article}{ title={A gap for PPT entanglement}, author={Cariello, D.}, journal={Linear Algebra and its Applications}, volume={529}, pages={89--114}, year={2017} } \bib{CarielloLAMA}{article}{ title={Sinkhorn-Knopp theorem for rectangular positive maps}, author={Cariello, Daniel}, journal={Linear and Multilinear Algebra}, volume={67}, pages={2345--2365}, year={2019} } \bib{CarielloLMP}{article}{ title={Sinkhorn-Knopp theorem for PPT states}, author={Cariello, D.}, journal={Lett. Math. Phys.}, volume={109}, pages={2013--2034}, year={2019} } \bib{Git}{article}{ author={Gittsovich, O.}, author={G\"uhne, O.}, author={Hyllus, P.}, author={Eisert, J.}, title={Unifying several separability conditions using the covariance matrix criterion}, journal={Phys. Rev. 
A}, volume={78}, year={2008}, pages={052319}, } \bib{Guhne}{article}{ title={Entanglement detection}, author={G\"uhne, O.}, author={T\'oth, G.}, journal={Physics Reports}, volume={474}, number={1-6}, year={2009}, pages={1--75}, } \bib{gurvits2004}{article}{ title={Classical complexity and quantum entanglement}, author={Gurvits, Leonid}, journal={Journal of Computer and System Sciences}, volume={69}, number={3}, pages={448--484}, year={2004}, publisher={Elsevier} } \bib{horodeckifamily}{article}{ title={Separability of mixed states: necessary and sufficient conditions}, author={Horodecki, M.}, author={Horodecki, P.}, author={Horodecki, R.}, journal={Phys. Lett. A}, volume={223}, pages={1--8}, year={1996}, publisher={Elsevier} } \bib{PawelMxN}{article}{ author={Horodecki, Pawe{\l}}, author={Lewenstein, Maciej}, author={Vidal, Guifr{\'e}}, author={Cirac, Ignacio}, title={Operational criterion and constructive checks for the separability of low-rank density matrices}, journal={Physical Review A}, volume={62}, number={3}, pages={032310}, year={2000}, publisher={APS} } \bib{smolin}{article}{ author={Horodecki, Pawel}, author={Smolin, John A.}, author={Terhal, B.M.}, author={Thapliyal, Ashish V.}, title={Rank two bipartite bound entangled states do not exist}, journal={Theoretical Computer Science}, volume={292}, number={3}, pages={589--596}, year={2003}, publisher={Elsevier} } \bib{leinaas}{article}{ author={Leinaas, J.M.}, author={Myrheim, J.}, author={Ovrum, E.}, title={Geometrical aspects of entanglement}, journal={Phys. Rev. 
A}, volume={74}, issue={3}, year={2006}, pages={012313}, } \bib{marcus}{book}{ author={Marcus, M.}, author={Minc, H.}, title={A survey of matrix theory and matrix inequalities}, volume={14}, publisher={Courier Corporation}, year={1992} } \bib{peres}{article}{ title={Separability criterion for density matrices}, author={Peres, Asher}, journal={Physical Review Letters}, volume={77}, number={8}, pages={1413}, year={1996}, publisher={APS} } \bib{rudolph}{article}{ author={Rudolph, O.}, title={Computable Cross-norm Criterion for Separability}, journal={Lett. Math. Phys.}, volume={70}, date={2005}, pages={57--64} } \bib{rudolph2}{article}{ author={Rudolph, Oliver}, title={Further results on the cross norm criterion for separability}, journal={Quantum Inf. Proc.}, volume={4}, date={2005}, pages={219--239} } \bib{Sinkhorn}{article}{ title={Concerning nonnegative matrices and doubly stochastic matrices}, author={Sinkhorn, Richard}, author={Knopp, Paul}, journal={Pacific Journal of Mathematics}, volume={21}, number={2}, pages={343--348}, year={1967}, publisher={Oxford University Press} } \bib{guhnetothsym}{article}{ author={T\'oth, G.}, author={G\"uhne, O.}, title={Separability criteria and entanglement witnesses for symmetric quantum states}, journal={Applied Physics B}, volume={98}, date={2010}, number={4}, pages={617--622}, } \bib{weiner}{article}{ author={Weiner, M.}, title={A gap for the maximum number of mutually unbiased bases}, journal={Proceedings of the American Mathematical Society}, volume={141}, date={2013}, number={6}, pages={1963--1969}, } \end{biblist} \end{bibdiv} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
9,538
{"url":"https:\/\/www.vedantu.com\/question-answer\/the-complex-number-z-x-+-iy-which-satisfy-the-class-11-maths-jee-main-5edcbcaee5b56371c59317ca","text":"Question\n\n# The complex number $z = x + iy$ which satisfy the equation $\\left| {\\dfrac{{z - 5i}}{{z + 5i}}} \\right| = 1$, lie on${\\text{A}}{\\text{.}}$ The x-axis${\\text{B}}{\\text{.}}$ The straight line $y = 5$${\\text{C}}{\\text{.}}$ A circle passing through the origin${\\text{D}}{\\text{.}}$ None of these.\n\nHint \u2013 In this question use the property of modulus of a complex number which is $\\left| {A + iB} \\right| = \\sqrt {{A^2} + {B^2}}$ to reach the answer.\n\nGiven equation is\n$\\left| {\\dfrac{{z - 5i}}{{z + 5i}}} \\right| = 1$, where $z = x + iy$\nNow as we know $\\left| {\\dfrac{A}{B}} \\right| = \\dfrac{{\\left| A \\right|}}{{\\left| B \\right|}}$\n$\\Rightarrow \\left| {\\dfrac{{z - 5i}}{{z + 5i}}} \\right| = \\dfrac{{\\left| {z - 5i} \\right|}}{{\\left| {z + 5i} \\right|}} = 1$\n$\\Rightarrow \\left| {z - 5i} \\right| = \\left| {z + 5i} \\right|$\nNow substitute $z = x + iy$\n$\\Rightarrow \\left| {x + iy - 5i} \\right| = \\left| {x + iy + 5i} \\right| \\\\ \\Rightarrow \\left| {x + i\\left( {y - 5} \\right)} \\right| = \\left| {x + i\\left( {y + 5} \\right)} \\right| \\\\$\nNow as we know that $\\left| {A + iB} \\right| = \\sqrt {{A^2} + {B^2}}$, so use this property we have\n$\\sqrt {{x^2} + {{\\left( {y - 5} \\right)}^2}} = \\sqrt {{x^2} + {{\\left( {y + 5} \\right)}^2}}$\nNow squaring on both sides we have\n${x^2} + {\\left( {y - 5} \\right)^2} = {x^2} + {\\left( {y + 5} \\right)^2} \\\\ \\Rightarrow {\\left( {y - 5} \\right)^2} = {\\left( {y + 5} \\right)^2} \\\\$\nNow opening the square we have\n${y^2} + 25 - 10y = {y^2} + 25 + 10y \\\\ \\Rightarrow 20y = 0 \\\\ \\Rightarrow y = 0 \\\\$\nAnd we all know y = 0 is nothing but a x-axis\nHence option (a) is correct.\n\nNote \u2013 In such types of questions the key concept we have to remember is that always recall all the properties 
of modulus which is stated above, then according to these properties simplify the given equation we will get the required answer.","date":"2021-04-14 23:51:48","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 2, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9905712008476257, \"perplexity\": 1436.2508520713}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-17\/segments\/1618038078900.34\/warc\/CC-MAIN-20210414215842-20210415005842-00381.warc.gz\"}"}
null
null
Тухович () — село в Польщі, у гміні Станін Луківського повіту Люблінського воєводства. Населення — (2011). Історія Станом на 1921 рік село та фільварок Тухович належали до гміни Тухович Луківського повіту Люблінського воєводства міжвоєнної Польщі. За офіційним переписом населення Польщі 10 вересня 1921 року в селі Тухович налічувалося 44 доми і 343 мешканці (майже усі поляки-римо-католики). На однойменному фільварку було 7 будинків та 132 мешканці, з них: 116 римо-католиків, 16 православних; 115 поляків, 12 українців, 5 осіб іншої національності. У 1975—1998 роках село належало до Седлецького воєводства. Демографія Демографічна структура станом на 31 березня 2011 року: Примітки Села Луківського повіту
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,541
\section{\label{sec:md-intro}Introduction} The calculation of the dielectric tensor and the Born effective charge tensor in a finite electric field is very important in the study of bulk ferroelectrics, ferroelectric films, superlattices, lattice vibrations in polar crystals, and so on [1,2,3]. Recently, the response properties to an external electric field have attracted growing interest, both theoretically and practically. In particular, the dielectric tensor and the Born effective charge tensor in a finite electric field are important physical quantities for analyzing and modeling the response of a material to the field. In the case of zero electric field, these response properties have already been studied using DFPT (density functional perturbation theory), and excellent results have been obtained [1]. DFPT [4] provides a powerful tool for calculating the $2^{nd}$-order derivatives of the total energy of a periodic solid with respect to external perturbations, such as strains, atomic sublattice displacements, a homogeneous electric field, etc. In contrast to strains and sublattice displacements, for which the perturbing potential remains periodic, the treatment of homogeneous electric fields is subtle, because the corresponding potential contains a term that is linear in real space, thereby breaking the translational symmetry and violating the conditions of Bloch's theorem. Electric field perturbations have therefore been studied using the long-wave method, in which the linear potential caused by the applied electric field is obtained by considering a sinusoidal potential in the limit that its wave vector goes to zero [5]. In this approach, however, the response tensors can be evaluated only at zero electric field. In a nonzero electric field, the response properties cannot be investigated with methods based on Bloch's theorem, because of the nonperiodicity of the potential.
Several methods have therefore been developed to overcome this difficulty [2,3]. Ref. [2] introduces an electric-field-dependent energy functional through Berry's phase and proposes a finite-difference scheme for its evaluation. Ref. [3] calculates the response properties from the discretized form of the Berry's phase term together with response theory with respect to a finite-electric-field perturbation. In these methods, however, the nonperiodicity of the potential due to the electric field is resolved by introducing field-polarized WFs (Wannier functions). This requires a large computational cost for the inverse matrix appearing in the perturbation expansion of Berry's phase and leads to numerical instability. In this paper, we develop a new discrete method for calculating the dielectric tensor and the Born effective charge tensor in a finite electric field by using Berry's phase and gauge invariance. We present a way to overcome the nonperiodicity of the potential in a finite electric field through gauge invariance, and we calculate the dielectric tensor and the Born effective charge tensor in a discrete manner that differs from previous approaches. This paper is organized as follows. In Sec. 2, instead of the preceding treatment in which the total field-dependent energy functional is divided into the Kohn-Sham energy, the Berry's phase term, and a Lagrange multiplier term, we discuss a new discrete method for studying the response properties by using the polarization written in terms of Berry's phase together with the field-polarized cell-periodic functions. In Sec. 3, we calculate the dielectric tensor and the Born effective charge tensor in a finite electric field by constructing the field-polarized Bloch wave functions and evaluating the linear response of the wave functions with the Sternheimer equation. We also calculate the $2^{nd}$-order nonlinear dielectric tensor, which characterizes the nonlinear response with respect to the electric field.
To demonstrate the correctness of the method, we also perform calculations for the semiconductors AlAs and GaAs under a finite electric field. In Sec. 4, a summary and conclusions are presented. \section{\label{sec:md-method}New discrete method using Berry's phase and gauge invariance} In a finite electric field, the response tensors with respect to the field are given by the $2^{nd}$-order derivatives of the field-dependent total energy functional with respect to the atomic sublattice displacements and the homogeneous electric field. The field-dependent energy functional [6] is \begin{equation} \label{eq:md-br1} E[\{ u^{(\vec \varepsilon )} \} ,\vec \varepsilon ] = E_{KS} [\{ u^{(\vec \varepsilon )} \} ] - \Omega \vec \varepsilon \cdot {\bf{P}}[\{ u^{(\vec \varepsilon )} \} ] \end{equation} where $E_{KS}$, $\vec \varepsilon$, and $\Omega$ are the Kohn-Sham energy functional, the finite electric field, and the cell volume, respectively. Here $\{ u^{(\vec \varepsilon )} \}$ is the set of field-polarized cell-periodic functions, and the polarization ${\bf{P}}$, written in terms of Berry's phase, is \begin{equation} \label{eq:md-br2} {\bf{P}} = - {{ife} \over {(2\pi )^3 }}\sum\limits_{n = 1}^M {\int\limits_{BZ} {d^3 k\left\langle {u_{n{\bf{k}}}^{(\vec \varepsilon )} } \right|} } \nabla _{\bf{k}} \left| {u_{n{\bf{k}}}^{(\vec \varepsilon )} } \right\rangle \end{equation} where $f$ is the spin degeneracy, $f = 2$. In practice, the polarization is calculated from the discretized form suggested by King-Smith and Vanderbilt [7]: \begin{equation} \label{eq:md-br3} {\bf{P}} = {{ef} \over {2\pi \Omega }}\sum\limits_{i = 1}^3 {{{{\bf{a}}_i } \over {N_ \bot ^{(i)} }}} \sum\limits_{l = 1}^{N_ \bot ^{(i)} } {{\mathop{\rm Im}\nolimits} \ln \prod\limits_{j = 1}^{N_i } {\det S_{{\bf{k}}_{lj} ,{\bf{k}}_{lj + 1} } } }, \end{equation} (Refs. [6] and [7] explain the meaning of each parameter in Eq.~\ref{eq:md-br2} and Eq.~\ref{eq:md-br3}.)
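As an illustration of how the discretized Berry's phase in Eq.~\ref{eq:md-br3} is evaluated in practice, the sketch below accumulates the phase ${\rm Im}\ln\prod_j \det S_{{\bf k}_j,{\bf k}_{j+1}}$ along one string of $k$-points. This is a minimal sketch, not tied to any particular code base; the function name and the list of occupied-band overlap matrices are illustrative assumptions.

```python
import numpy as np

def berry_phase_string(overlaps):
    """Accumulate Im ln prod_j det S_{k_j, k_{j+1}} for one k-point
    string, as in the discretized polarization formula, Eq. (3).

    `overlaps` is a list of M x M overlap matrices between the occupied
    field-polarized cell-periodic functions at neighbouring k-points.
    Summing the phase determinant by determinant avoids the overflow or
    underflow that multiplying all determinants first could cause.
    """
    phase = 0.0
    for s in overlaps:
        phase += np.angle(np.linalg.det(s))
    # fold the accumulated phase back onto the principal branch [-pi, pi)
    return (phase + np.pi) % (2.0 * np.pi) - np.pi
```

The branch fold at the end reflects the multivalued nature of ${\rm Im}\ln$; a real implementation must also track the phase consistently between neighbouring field strengths, as discussed in Ref. [7].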
Next, imposing the orthonormality constraints on the field-polarized cell-periodic functions, \begin{equation} \label{eq:md-br4} \left\langle {{u_{m{\bf{k}}}^{(\vec \varepsilon )} }} \mathrel{\left | {\vphantom {{u_{m{\bf{k}}}^{(\vec \varepsilon )} } {u_{n{\bf{k}}}^{(\vec \varepsilon )} }}} \right. \kern-\nulldelimiterspace} {{u_{n{\bf{k}}}^{(\vec \varepsilon )} }} \right\rangle = \delta _{mn} \end{equation} the total energy functional is divided into three parts: \begin{equation} \label{eq:md-br5} F = F_{KS} + F_{BP} + F_{LM} \end{equation} where $F_{KS} = E_{KS}$ is the Kohn-Sham energy, $F_{BP} = - \Omega \vec \varepsilon \cdot {\bf{P}}$ is the coupling between the electric field and the Berry's phase polarization, and the constraints are enforced by the Lagrange multiplier term $F_{LM}$. The set of field-polarized cell-periodic functions $\{ u^{(\vec \varepsilon )} \}$ is then determined variationally; it differs from the set of cell-periodic functions at zero field. Although, strictly speaking, the calculated ground state is not the exact ground state, this procedure is one way to overcome the nonperiodicity of the potential caused by the electric field [3]. However, it does not explicitly incorporate gauge invariance, and it requires a large computational cost for the inverse matrix in the perturbation expansion of Berry's phase, which leads to numerical instability. We instead apply the perturbation expansion using DFPT and investigate the response properties in a new discrete way based on Eq.~\ref{eq:md-br2} and the field-polarized cell-periodic functions. Since the general perturbation expansion methods are described in Refs. [1,2,3], we consider here the response tensors, namely the dielectric tensor and the Born effective charge tensor, for perturbations with respect to the atomic sublattice displacements and the homogeneous electric field.
In Gaussian units the dielectric tensor is \begin{equation} \label{eq:md-br6} \in _{\alpha \beta } = \delta _{\alpha \beta } + 4\pi \chi _{\alpha \beta } \end{equation} and the electric susceptibility tensor is obtained from the perturbation expansion: \begin{equation} \label{eq:md-br7} \begin{split} \chi _{\alpha \beta } &= - {1 \over \Omega }{{\partial ^2 F} \over {\partial \varepsilon _\alpha \partial \varepsilon _\beta }} = - {f \over {2(2\pi )^3 }}\int\limits_{BZ} {d^3 k\sum\limits_{n = 1}^M {[\left\langle {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right|T + v_{ext} \left| {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right\rangle + } } \left\langle {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right|T + v_{ext} \left| {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right\rangle ]\\ &+ \sum\limits_{n = 1}^M {[\left\langle {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right|(ie{\partial \over {\partial k_\beta }})\left| {u_{n{\bf{k}}}^{(0)} } \right\rangle + \left\langle {u_{n{\bf{k}}}^{(0)} } \right|(ie{\partial \over {\partial k_\beta }})\left| {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right\rangle + \left\langle {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right|(ie{\partial \over {\partial k_\alpha }})\left| {u_{n{\bf{k}}}^{(0)} } \right\rangle }\\ &+ \left\langle {u_{n{\bf{k}}}^{(0)} } \right|(ie{\partial \over {\partial k_\alpha }})\left| {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right\rangle ] + {f \over {2(2\pi )^3 }}\int\limits_{BZ} {d^3 k\sum\limits_{m,n = 1}^M {\Lambda _{mn}^{(0)} ({\bf{k}})[\left\langle {{u_{n{\bf{k}}}^{\varepsilon _\alpha } }} \mathrel{\left | {\vphantom {{u_{n{\bf{k}}}^{\varepsilon _\alpha } } {u_{m{\bf{k}}}^{\varepsilon _\beta } }}} \right. \kern-\nulldelimiterspace} {{u_{m{\bf{k}}}^{\varepsilon _\beta } }} \right\rangle + \left\langle {{u_{n{\bf{k}}}^{\varepsilon _\beta } }} \mathrel{\left | {\vphantom {{u_{n{\bf{k}}}^{\varepsilon _\beta } } {u_{m{\bf{k}}}^{\varepsilon _\alpha } }}} \right. \kern-\nulldelimiterspace} {{u_{m{\bf{k}}}^{\varepsilon _\alpha } }} \right\rangle ]} } - {1 \over {2\Omega }}{{\partial ^2 E_{XC} } \over {\partial \varepsilon _\alpha \partial \varepsilon _\beta }} \end{split} \end{equation} Although Eq.~\ref{eq:md-br7} successfully reflects the response properties under a finite-field perturbation, it does not adequately describe the periodicity of the crystal, because the operator $ie\nabla _{\bf{k}}$ hidden in Berry's phase must act on a gauge-invariant quantity in order to overcome the nonperiodicity of the potential caused by the field [7]. Therefore, using the gauge-invariant form $ie{\partial \over {\partial k_\alpha }}\sum\limits_{m = 1}^M {\left| {u_{m{\bf{k}}}^{(0)} } \right\rangle } \left\langle {u_{m{\bf{k}}}^{(0)} } \right|$ and the zeroth-order relation $\Lambda _{mn}^{(0)} ({\bf{k}}) = \varepsilon _{n{\bf{k}}}^{(0)} \delta _{mn}$, the susceptibility tensor becomes \begin{equation} \label{eq:md-br8} \begin{split} \chi _{\alpha \beta } &= \left. { - {1 \over \Omega }{{\partial ^2 F} \over {\partial \varepsilon _\alpha \partial \varepsilon _\beta }}} \right|_{\varepsilon = \varepsilon ^{(0)} }\\ & = - {f \over {2(2\pi )^3 }}\int\limits_{BZ} {d^3 k\sum\limits_{n = 1}^M {[\left\langle {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right|T + v_{ext} - \varepsilon _{n{\bf{k}}}^{(0)} \left| {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right\rangle + } } \left\langle {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right|T + v_{ext} - \varepsilon _{n{\bf{k}}}^{(0)} \left| {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right\rangle\\ & + \left\langle {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right|(ie{\partial \over {\partial k_\beta }}\sum\limits_{m = 1}^M {\left| {u_{m{\bf{k}}}^{(0)} } \right\rangle } \left\langle {u_{m{\bf{k}}}^{(0)} } \right|)\left| {u_{n{\bf{k}}}^{(0)} } \right\rangle + \left\langle {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right|(ie{\partial \over {\partial k_\alpha }}\sum\limits_{m = 1}^M {\left| {u_{m{\bf{k}}}^{(0)} } \right\rangle } \left\langle {u_{m{\bf{k}}}^{(0)} } \right|)\left| {u_{n{\bf{k}}}^{(0)} } \right\rangle\\ &- \left\langle {u_{n{\bf{k}}}^{(0)} } \right|(ie{\partial \over {\partial k_\beta }}\sum\limits_{m = 1}^M {\left| {u_{m{\bf{k}}}^{(0)} } \right\rangle } \left\langle {u_{m{\bf{k}}}^{(0)} } \right|)\left| {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right\rangle - \left\langle {u_{n{\bf{k}}}^{(0)} } \right|(ie{\partial \over {\partial k_\alpha }}\sum\limits_{m = 1}^M {\left| {u_{m{\bf{k}}}^{(0)} } \right\rangle } \left\langle {u_{m{\bf{k}}}^{(0)} } \right|)\left| {u_{n{\bf{k}}}^{\varepsilon _\beta } } \right\rangle ] - \left. {{1 \over {2\Omega }}{{\partial ^2 E_{XC} } \over {\partial \varepsilon _\alpha \partial \varepsilon _\beta }}} \right|_{\varepsilon = \varepsilon ^{(0)} } \end{split} \end{equation} where the BZ (Brillouin zone) integration is performed with the Monkhorst-Pack special-point method. The partial derivatives are calculated with the following discretized expressions.
\begin{equation} \label{eq:md-br9} {\partial \over {\partial k_x }}\left| {u_{m,i,j,k}^{} } \right\rangle \left\langle {u_{m,i,j,k}^{} } \right| = {1 \over {2\Delta k_x }}(\left| {u_{m,i + 1,j,k}^{} } \right\rangle \left\langle {u_{m,i + 1,j,k}^{} } \right| - \left| {u_{m,i - 1,j,k}^{} } \right\rangle \left\langle {u_{m,i - 1,j,k}^{} } \right|) \end{equation} \begin{equation} \label{eq:md-br10} {\partial \over {\partial k_y }}\left| {u_{m,i,j,k}^{} } \right\rangle \left\langle {u_{m,i,j,k}^{} } \right| = {1 \over {2\Delta k_y }}(\left| {u_{m,i,j + 1,k}^{} } \right\rangle \left\langle {u_{m,i,j + 1,k}^{} } \right| - \left| {u_{m,i,j - 1,k}^{} } \right\rangle \left\langle {u_{m,i,j - 1,k}^{} } \right|) \end{equation} \begin{equation} \label{eq:md-br11} {\partial \over {\partial k_z }}\left| {u_{m,i,j,k}^{} } \right\rangle \left\langle {u_{m,i,j,k}^{} } \right| = {1 \over {2\Delta k_z }}(\left| {u_{m,i,j,k + 1}^{} } \right\rangle \left\langle {u_{m,i,j,k + 1}^{} } \right| - \left| {u_{m,i,j,k - 1}^{} } \right\rangle \left\langle {u_{m,i,j,k - 1}^{} } \right|) \end{equation} Additionally, the $1^{st}$-order wave function response to the finite electric field is calculated with the following Sternheimer equation: \begin{equation} \label{eq:md-br12} P_{c{\bf{k}}} (T + v_{ext} - \varepsilon _{n{\bf{k}}}^{(0)} )P_{c{\bf{k}}} \left| {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right\rangle = - P_{c{\bf{k}}} (ie{\partial \over {\partial k_\alpha }}\sum\limits_{m = 1}^M {\left| {u_{m{\bf{k}}}^{(0)} } \right\rangle } \left\langle {u_{m{\bf{k}}}^{(0)} } \right|)\left| {u_{n{\bf{k}}}^{(0)} } \right\rangle \end{equation} In general, by the ``2n+1'' theorem, the $2^{nd}$-order energy response requires the wave function response only up to $1^{st}$ order in the perturbation. Therefore, every result can be calculated from only the $1^{st}$-order wave function response to the finite electric field.
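The central differences of the gauge-invariant projector in Eqs.~\ref{eq:md-br9}--\ref{eq:md-br11} can be sketched numerically as follows. This is a minimal illustration assuming the occupied cell-periodic functions are stored as rows of dense arrays; the names and shapes are assumptions for illustration, not part of any particular code.

```python
import numpy as np

def projector_derivative(u_plus, u_minus, dk):
    """Central difference of the occupied-band projector
    P(k) = sum_m |u_mk><u_mk|  along one direction, as in Eqs. (9)-(11).

    `u_plus` and `u_minus` are (M, N) arrays holding the M occupied
    cell-periodic functions at k + dk and k - dk, one function per row.
    The projector is gauge invariant: multiplying any row by a phase
    (or mixing the occupied rows by a unitary matrix) leaves it
    unchanged, which is exactly why the operator is applied in this
    projected form.
    """
    p_plus = u_plus.T @ u_plus.conj()     # P(k + dk), an N x N matrix
    p_minus = u_minus.T @ u_minus.conj()  # P(k - dk)
    return (p_plus - p_minus) / (2.0 * dk)
```

Because only the projectors enter, arbitrary phase choices at the neighbouring $k$-points drop out of the difference, in contrast to a naive finite difference of the wave functions themselves.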
In this way, the Born effective charge tensor is \begin{equation} \label{eq:md-br13} \begin{split} Z_{\kappa ,\alpha \beta }^* &= \left. { - {{\partial ^2 F} \over {\partial \varepsilon _\alpha \partial \tau _{\kappa , \beta }}}} \right|_{\varepsilon = \varepsilon ^{(0)} }\\ & = {{f\Omega } \over {(2\pi )^3 }}\int\limits_{BZ} {d^3 k\sum\limits_{n = 1}^M {[\left\langle {u_{n{\bf{k}}}^{(0)} } \right|(T + v_{ext} )^{\tau _{\kappa ,\beta } } \left| {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right\rangle + \left\langle {u_{n{\bf{k}}}^{\varepsilon _\alpha } } \right|(T + v_{ext} )^{\tau _{\kappa ,\beta } } \left| {u_{n{\bf{k}}}^{(0)} } \right\rangle + \left. {{{\partial ^2 E_{XC} } \over {\partial \tau _{\kappa ,\beta} \partial \varepsilon _\alpha }}} \right|_{\varepsilon = \varepsilon ^{(0)} } ]} } \end{split} \end{equation} Eq.~\ref{eq:md-br13} is also evaluated with DFPT and the field-polarized wave functions. \section{\label{sec:md-result}Results and Analysis} The calculation of the dielectric permittivity tensor and the Born effective charge tensor is performed in three steps. First, a ground-state calculation in a finite electric field is performed using the Berry-phase method implemented in the ABINIT code, and the field-polarized Bloch functions are stored for the later linear-response calculation. Second, the linear-response calculation is performed to obtain the $1^{st}$-order response of the Bloch functions. Third, the matrix elements of the dielectric and Born effective charge tensors are computed using these $1^{st}$-order responses. To verify the correctness of our method, we have performed test calculations on two prototypical semiconductors, AlAs and GaAs. In these calculations, we used the HSC norm-conserving pseudopotential method based on Density Functional Theory with the LDA (Local Density Approximation). A cutoff energy $E_{cut} = 20\,$Ry and a $6 \times 6 \times 6$ Monkhorst-Pack mesh for $k$-point sampling were used.
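The $6 \times 6 \times 6$ Monkhorst-Pack mesh mentioned above can be generated from the standard one-dimensional formula $u_r = (2r - q - 1)/(2q)$, $r = 1, \dots, q$. A minimal sketch (unsymmetrized, with equal weights; production codes additionally apply symmetry reduction, which is omitted here for simplicity):

```python
import itertools
import numpy as np

def monkhorst_pack(q):
    """Fractional k-point coordinates of a q x q x q Monkhorst-Pack mesh.

    Each 1-D coordinate follows u_r = (2r - q - 1) / (2q), r = 1..q
    (Monkhorst & Pack, 1976). Equal weights 1/q^3 are used for the BZ
    average; symmetry reduction is deliberately omitted in this sketch.
    """
    u = np.array([(2 * r - q - 1) / (2 * q) for r in range(1, q + 1)])
    kpts = np.array(list(itertools.product(u, u, u)))
    weights = np.full(len(kpts), 1.0 / q**3)
    return kpts, weights

# The 6 x 6 x 6 mesh used for the k-point sampling in the text
kpts, w = monkhorst_pack(6)
print(len(kpts))   # → 216
```

For even $q$ the mesh avoids the $\Gamma$ point, and the weights sum to one so that the BZ integral reduces to a simple weighted average over the mesh points.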
In Table~\ref{tab:md-die}, we present the calculated values of the dielectric tensor and the Born effective charge tensor of AlAs and GaAs, when a finite electric field of the same magnitude as in Ref.~[3] is applied along the [100] direction. To compare our method with the preceding one, we list the values calculated with our method, those calculated with the preceding method (Ref.~[3]), and the experimental values. As seen in Table~\ref{tab:md-die}, the dielectric tensor calculated with our method is closer to the experimental value [2] than that of the preceding method (Ref.~[3]). However, for the Born effective charge tensor, there is almost no difference between our method and the preceding one. This shows that the calculation of the Born effective charge tensor involves both the $1^{st}$-order response of the potential with respect to the atomic sublattice displacements and that of the wave function with respect to the finite electric field, with the latter playing the essential role. \begin{table*}[!h] \begin{center} \caption{\label{tab:md-die}Calculated and experimental values of the dielectric tensor and the Born effective charge tensor in a finite electric field} \begin{tabular}{|c|c|c|c|} \hline Material & Method & $\epsilon$ & $Z^*$ \\ \hline & Our Method & 9.48 & 2.05 \\ AlAs & Preceding Method [3] & 9.72 & 2.03 \\ & Experiment [2] & 8.2 & 2.18 \\ \hline & Our Method & 12.56 & 2.20 \\ GaAs & Preceding Method [3] & 13.32 & 2.18 \\ & Experiment [2] & 10.9 & 2.07 \\ \hline \end{tabular} \end{center} \end{table*} We also calculated the $2^{nd}$-order nonlinear dielectric tensor, a nonlinear response property with respect to the electric field. The $2^{nd}$-order nonlinear dielectric tensor is \begin{equation} \label{eq:md-br14} \chi _{123}^{(2)} = {1 \over 2}{{\partial ^2 P_2 } \over {\partial \varepsilon _1 \partial \varepsilon _3 }} = {1 \over 2}{{\partial \chi _{23} } \over {\partial \varepsilon _1 }} \end{equation} Table~\ref{tab:md-non} shows the calculated value of the $2^{nd}$-order nonlinear dielectric tensor of AlAs.
\begin{table*}[!h] \begin{center} \caption{\label{tab:md-non}Calculated value of the $2^{nd}$-order nonlinear dielectric tensor of AlAs} \begin{tabular}{|c|c|} \hline Method & $\chi _{123}^{(2)}$ (pm/V) \\ \hline Our Method & 67.32 \\ Preceding Method [3] & 60.05 \\ Experiment [8] & 78 $\pm$ 20\\ \hline \end{tabular} \end{center} \end{table*} As shown in Table~\ref{tab:md-non}, the calculated value of the $2^{nd}$-order nonlinear dielectric tensor in our method agrees more closely with the experimental value [8] than that of the preceding method (Ref.~[3]). \section{\label{sec:sum}Summary} We have suggested a new method for calculating the dielectric tensor and the Born effective charge tensor in a finite electric field. In particular, in order to overcome the nonperiodicity of the potential caused by the electric field, a new transformation conserving the gauge-invariance property is introduced. In the future, this methodology can be extended beyond perturbations with respect to the electric field and atomic displacements to other cases, such as strain and the chemical composition of solid solutions. \section*{\label{ack}Acknowledgments} It is a pleasure to thank Jin-U Kang, Chol-Jun Yu, Kum-Song Song, Kuk-Chol Ri and Song-Bok Kim for useful discussions. This work was supported by the Faculty of Physics at {\bf Kim Il Sung} University of the Democratic People's Republic of Korea. \section*{\label{ref}References} [1] C.-J. Yu and H. Emmerich. J. Phys.: Condens. Matter, 19:306203, 2007. [2] I. Souza, J. Iniguez, and D. Vanderbilt. Phys. Rev. Lett., 89:117602, 2002. [3] X. Wang and D. Vanderbilt. Phys. Rev. B, 75:115116, 2007. [4] S. Baroni, S. de Gironcoli, and A. Dal Corso. Rev. Mod. Phys., 73:515, 2001. [5] X. Gonze and C. Lee. Phys. Rev. B, 55:10355, 1997. [6] R. W. Nunes and X. Gonze. Phys. Rev. B, 63:155107, 2001. [7] R. D. King-Smith and D. Vanderbilt. Phys. Rev. B, 48:4442, 1993. [8] I. Shoji, T. Kondo, and R. Ito. Opt. Quantum Electron., 34:797, 2002. \end{document}
\section{Introduction} \label{sec:intro} Actively accreting black holes have three fundamental properties -- mass ($M_{BH}$), accretion rate ($\dot{M}$), and angular momentum. Measuring $M_{BH}$ for active galactic nuclei at all redshifts has become possible due to reverberation mapping of low-redshift AGN and the extrapolation of those results to high redshifts, via relations between $M_{BH}$ and the widths of broad emission lines and the AGN continuum luminosity. Accretion rate estimates have also been achieved for many AGN, usually via the Eddington ratio, $L/L_{Edd}$. The angular momentum, or spin ($a_*$), of active BHs is more elusive, as it requires probing the region near the inner edge of the accretion disk (AD). Yet measurements of spin and spin evolution would provide valuable clues to the accretion history of active BHs and perhaps the evolution of the AGN and host galaxies themselves. At present, there are two primary methods for constraining the spin parameters of actively accreting BHs: (1) measuring the Fe K$\alpha$ emission line and/or a soft X-ray excess that some attribute to relativistic reflection \citep[e.g.][]{Brenneman13,Reynolds14b}, and (2) fitting the AGN continuum emission (CF) \citep[e.g.][]{Done13,Capellupo16}. There are significant advantages and drawbacks to each method. The Fe K$\alpha$ method is based on relativistic X-ray reflection. It does not require prior knowledge of $M_{BH}$, the distance to the source, or the inclination of the disk, whereas these are all necessary ingredients for the CF method. The main drawback, however, is that a very high-quality X-ray spectrum is required to properly model the continuum emission and the Fe-K$\alpha$ emission line, severely limiting the number of sources for which current technology allows a spin measurement. As a result, most AGN with reflection measurements are at a redshift less than 0.1. 
Furthermore, the Fe K$\alpha$ emission line is present in just $\sim$40\% of bright, nearby type I AGN \citep{deLaCallePerez10}, so some spin estimates are based on modeling just a soft X-ray excess \citep[e.g.][hereafter, R14]{Reynolds14a}. The CF method, on the other hand, can be applied to any AGN where the continuum emission can be measured. This vastly increases the number of AGN for which a spin measurement can be made and has already been applied out to a redshift of $\sim$1.5 \citep{Capellupo15,Capellupo16}. The primary drawback is that wide wavelength coverage, sometimes exceeding the capabilities of a single observatory, is required to properly measure the shape of the SED. Furthermore, this method cannot be applied effectively if the peak of the AD spectrum occurs in a wavelength regime inaccessible to current observatories, e.g., the extreme UV (where many AGN spectra do indeed peak). This method generally assumes a thin AD model, based on \citet{Shakura73}. Recent work has directly cast doubt on the X-ray reflection method. \citet{Boissay16} find that the soft X-ray excess that some attribute to relativistic reflection is more likely due to warm Comptonization. Similarly, \citet{Yaqoob16} is able to fit the Fe K$\alpha$ emission line for one of the AGN with an X-ray reflection spin measurement without invoking relativistic reflection. For the CF method, while the standard thin AD model has been successful in fitting the UV-optical SEDs of many AGN \citep[see e.g.][]{Capellupo15,Capellupo16}, other work has found that the AGN SED can be fit with the combination of a thermal disk component and a warm Comptonization component \citep[][]{Mehdipour11}, indicating the possibility of greater complexity in the continuum emission. With these two methods now available and actively in use for the estimation of $a_*$ in AGN, it is time to investigate whether these two methods give consistent results when applied to the same AGN. 
This is especially important given the uncertainties in both techniques and because neither method can probe the full AGN population. In this work, we compare the X-ray reflection and CF methods for two nearby AGN -- H1821+643 and NGC 3783. Ours is among the first attempts to make this comparison \citep[see also][]{Done16}. Both targets have a published spin estimate from the reflection method. We perform the CF analysis and compare the results in detail. In \S2, we describe how we selected sources for this study and our search for appropriate archival data. In \S3, we describe the models and CF procedure (based on \citealt{Capellupo15,Capellupo16}). \S4 and \S5 describe our application of the CF method to the two AGN, and we conclude in \S6 with a discussion of our results and how the reflection and CF methods compare for these two case studies. We assume a $\Lambda$CDM model with $\Omega_{\Lambda}=0.7$, $\Omega_{m}=0.3$, and $H_{0}=70\, {\rm km\, s^{-1}} \, {\rm Mpc}^{-1}$. \section{Sample Selection and Data Sources} \label{sec:data} According to \citet{Vasudevan16}, there are currently 25 AGN with spin estimates from the X-ray reflection method. We use this list as a starting point to search for archival data to which the CF method can be applied. The CF method is most effective when the ``turnover'' in the AD spectrum is probed. This turnover occurs at shorter wavelengths for smaller black hole masses. We therefore look first for existing high-quality UV spectroscopic observations of these AGN. Via the MAST web portal\footnote{\url{https://archive.stsci.edu/}}, we identify four AGN with high-level data products for Hubble Space Telescope (HST) Faint Object Spectrograph (FOS) observations \citep{Evans04}: Fairall 9, NGC 3783, NGC 4151, and H1821+643. While the FOS spectrum is sufficient for applying the CF method for H1821+643, data at even shorter wavelengths are required for the lower--$M_{BH}$ Seyfert galaxies.
We seek quasi-simultaneous data, and, for NGC 3783, we identify observations from ROSAT -- taken on 1992 July 23, just four days prior to the FOS observation on 1992 July 27 -- that probe the appropriate wavelengths \citep{Alloin95}. Hence we proceed with two objects, H1821+643 and NGC 3783, for our detailed spin comparison. The FOS spectra for H1821+643 and NGC 3783 are focused on the nucleus of the galaxy. For H1821+643, we verify that the FOS spectrum (shown in Fig.~\ref{fig:Hspec}) is dominated by AGN emission based on the broad-band star formation SED fit in \citet{Farrah02}. Similarly for NGC 3783, the spectrum is at short enough wavelengths that the host galaxy contribution should be negligible \citep{Reichert94,Alloin95}. Therefore, we do not correct the FOS spectra for stellar emission. The only correction we make to the HST data is to divide out the Galactic extinction, using the \citet{Cardelli89} extinction law and the \citet{Schlafly11} recalibration of the \citet{Schlegel98} maps. The ROSAT data have been analyzed \citep[][hereafter, T93]{Turner93}, and we make no further corrections in this work. \section{Accretion Disk Models and Bayesian Routine} \label{sec:model} To apply the CF method, a model is required that can make specific predictions for the emitted radiation at each wavelength. Standard thin AD theory \citep{Shakura73} has been used for several decades to describe AGN continuum emission. Newer models use this framework, but incorporate general relativistic corrections, comptonization in the disk atmosphere, and even disk winds \citep[e.g.][]{Hubeny01,Davis11,Slone12}. Here we adopt the numerical code described in \citet{Slone12}, assuming a viscosity parameter ($\alpha$) of 0.1. The shape and luminosity of the thin AD spectrum is mainly set by $M_{BH}$, $\dot{M}$, $a_*$, and the inclination of the disk to our line-of-sight. 
If we want to constrain $a_*$, prior knowledge of the other parameters is necessary as the observed SED is not enough to break the parameter degeneracy of the models, where different combinations of these parameters can yield similar SED shapes. Additionally, any intrinsic reddening in the AGN host galaxy will affect the observed SED shape. We therefore adopt a Bayesian approach that takes a large grid of models -- with varying values of $M_{BH}$, $\dot{M}$, $a_*$, inclination, and reddening -- and maximizes the probability that any given model is a good representation of the data, while penalizing those models that are not consistent with the priors, which we establish for $M_{BH}$ and $\dot{M}$ \citep{Capellupo15,Capellupo16}. This routine calculates a $\chi^2$ value for each model, using continuum windows along the observed SED. For the prior on $M_{BH}$, the reverberation mapping technique has been used to obtain $M_{BH}$ for nearby AGN \citep[e.g.,][]{Peterson04}, and these results have been extended to other AGN, using the width of the broad emission lines and the continuum luminosity \citep[the `single-epoch method'; e.g.,][]{Bentz09,Mejia16}. A prior on $\dot{M}$ can be estimated using $M_{BH}$ and a measurement of the continuum luminosity at longer (i.e., optical or near-infrared) wavelengths, assuming the canonical power law, $L_{\nu} \propto \nu^{1/3}$ \citep{Collin02,Davis11,Netzer14}. For the disk inclination, the only constraint we have is that our sample contains type-1 AGN, so we can consider only inclinations where cos $\theta > 0.5$. For intrinsic reddening, to limit the number of free parameters, we use a simple power-law curve, where $A(\lambda)=A_{o}\lambda^{-1}$ mag. We consider values of $A_V$ ranging from 0.0 to 0.50 mag. \section{H1821+643} \label{sec:h1821} \begin{figure*}[] \plotone{fig1.pdf} \caption{The HST FOS spectrum (black curve) of H1821+643, with no intrinsic reddening correction. The best-fit CF model is overplotted. 
\label{fig:Hspec}} \end{figure*} H1821+643 is a brightest cluster galaxy (BCG) hosting a luminous AGN at $z\sim0.297$. There are no direct reverberation mapping measurements for H1821+643, but there have been several attempts to obtain $M_{BH}$ via other methods. These estimates range from $\sim$1.2 to $6 \times 10^9 \ifmmode M_{\odot} \else $M_{\odot}$\fi$ (\citealt{Decarli08,Dasyra11}; R14), and there are theoretical arguments that the mass could be as high as $3 \times 10^{10}$ \ifmmode M_{\odot} \else $M_{\odot}$\fi\ \citep{Walker14}. We adopt the most recent `single-epoch' measurement using the H$\beta$ emission line, $M_{BH} = 2.5 \times 10^9 \ifmmode M_{\odot} \else $M_{\odot}$\fi$, from \citet{Decarli08}, and we use their measurement of log $\lambda L_{\lambda}(5100\mathrm{\AA}) =$ 46.1 ergs s$^{-1}$ for calculating $\dot{M}$. We adopt errors of 0.3 and 0.2 dex, respectively, for $M_{BH}$ and $\dot{M}$ \citep{Capellupo15}. \begin{figure*}[] \gridline{\fig{fig2a.pdf}{0.40\textwidth}{} \fig{fig2b.pdf}{0.40\textwidth}{} \fig{fig2c.pdf}{0.045\textwidth}{} } \caption{Posterior probability contour plots for H1821+643 for $a_*$ versus $M_{BH}$ and $a_*$ versus $A_V$. The vertical lines identify the observed $M_{BH}$ and the 0.3 dex error. The horizontal line identifies the lower limit on $a_*$ from R14. 
\label{fig:Hcont}} \end{figure*} \begin{deluxetable*}{c|cc|cccc|c} \tablecaption{Model Parameters and Results \label{tab:table}} \tablehead{ \colhead{Object} & \colhead{log $M_{BH}^{obs}$} & \colhead{log $\dot{M}^{obs}$} & \colhead{$L/L_{Edd}$} & \colhead{cos $\theta$} & \colhead{$A_V$} & \colhead{$a_{*}^{CF}$} & \colhead{$a_{*}^{ref}$} \\ \colhead{} & \colhead{(\ifmmode M_{\odot} \else $M_{\odot}$\fi)} & \colhead{(\ifmmode M_{\odot}\, {\rm yr}^{-1} \else $M_{\odot}\, {\rm yr}^{-1}$)} & & & \colhead{(mag)} & \colhead{} & \colhead{} } \startdata H1821+643 & 9.4 & 0.48 & $0.14^{+1.8}_{-0.11}$ & $0.85^{+0.15}_{-0.09}$ & $0.12^{+0.15}_{-0.12}$ & $0.5^{+0.5}_{-0.4}$ & $\ge0.40$\tablenotemark{a} \\ NGC 3783 & 7.47 & -1.9 & $0.020^{+0.096}_{-0.014}$ & $0.89^{+0.11}_{-0.09}$ & $0.17^{+0.11}_{-0.09}$ & $0.2^{+0.7}_{-0.9}$ & $\ge0.88$\tablenotemark{b} \\ NGC 3783\tablenotemark{c} & 7.47 & -1.9 & $0.032^{+0.15}_{-0.018}$ & $0.90^{+0.10}_{-0.09}$ & $0.09^{+0.09}_{-0.06}$ & $0.5^{+0.5}_{-0.4}$ & \\ \enddata \tablenotetext{a}{R14} \tablenotetext{b}{B11} \tablenotetext{c}{CF with disc wind} \end{deluxetable*} The best-fit (i.e., the most probable) model is presented in Fig.~\ref{fig:Hspec}, and the full results are shown as probability contours in Fig.~\ref{fig:Hcont}. From Fig.~\ref{fig:Hcont}, it is clear that there is a strong preference for a large, positive spin parameter. In their analysis of the X-ray spectrum of H1821+643, R14 obtain both a constraint on the spin parameter and a constraint on $L/L_{Edd}$ and the inclination. Applying these constraints to our CF routine, we obtain a similar probability distribution along the spin parameter axis as we did originally without these constraints. 
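The grid-based Bayesian selection described above can be sketched in a few lines. The model grid, continuum-window fluxes, and parameter values below are hypothetical placeholders (the real grid spans $M_{BH}$, $\dot{M}$, $a_*$, inclination, and reddening), but the scoring follows the description in the text: a $\chi^2$ over continuum windows, penalized by Gaussian priors in log space with the 0.3 and 0.2 dex widths adopted for H1821+643.

```python
import numpy as np

# Hypothetical observed continuum-window fluxes and errors (arbitrary units).
obs = np.array([1.00, 0.80, 0.55])
err = np.array([0.05, 0.04, 0.03])

# Hypothetical model grid: each row is one parameter combination, with its
# predicted flux in the same continuum windows.
model_fluxes = np.array([
    [1.02, 0.78, 0.50],
    [0.90, 0.85, 0.60],
    [1.10, 0.70, 0.40],
])
log_m    = np.array([9.4, 9.7, 9.1])    # log M_BH of each grid model
log_mdot = np.array([0.50, 0.80, 0.20]) # log Mdot of each grid model

# Priors: observed values with 0.3 dex (M_BH) and 0.2 dex (Mdot) widths.
log_m_obs, sig_m = 9.4, 0.3
log_mdot_obs, sig_mdot = 0.48, 0.2

# Chi-squared of each model over the continuum windows.
chi2 = np.sum(((obs - model_fluxes) / err) ** 2, axis=1)

# Log-posterior: chi^2 likelihood penalized by Gaussian priors in log space.
log_post = (-0.5 * chi2
            - 0.5 * ((log_m - log_m_obs) / sig_m) ** 2
            - 0.5 * ((log_mdot - log_mdot_obs) / sig_mdot) ** 2)

prob = np.exp(log_post - log_post.max())
prob /= prob.sum()              # normalized relative probabilities
best = int(np.argmax(prob))     # most probable ("best-fit") model
```

The most probable model maximizes the prior-weighted likelihood; models that fit the SED well but conflict with the $M_{BH}$ or $\dot{M}$ priors are down-weighted rather than excluded.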
\section{NGC 3783} \label{sec:ngc3783} \begin{figure*}[] \plotone{fig3.pdf} \caption{The FOS spectrum (black curve) of NGC 3783, corrected for intrinsic reddening (gray curve), and the ROSAT (black) and EXOSAT (red) power laws from T93, with shaded regions denoting the 1$\sigma$ error intervals. The solid and dashed blue curves are the best-fit models without and with a disk wind, respectively. \label{fig:Nspec}} \end{figure*} NGC 3783 is a well-studied Seyfert 1, SBa galaxy at $z\sim0.009$. The reverberation mapping technique has been applied to NGC 3783, giving $M_{BH}$ = $2.98 \pm 0.54 \times 10^7$ \ifmmode M_{\odot} \else $M_{\odot}$\fi\ \citep{Peterson04}, with a corresponding continuum luminosity, log $\lambda L_{\lambda}(5100\mathrm{\AA}) = 43.26 \pm 0.04$ ergs s$^{-1}$, which we use to estimate $\dot{M}$. Because NGC 3783 is in a lower $M_{BH}$ regime than H1821+643, the peak of the AD emission is in the extreme UV, a regime where we generally lack observations. We can discriminate between different spin parameters only in the soft X-ray, where models with the highest spin parameters peak for lower-mass BHs. NGC 3783 has a complex X-ray spectrum, with warm absorbers and a soft excess that appears and disappears \citep{Netzer03}. We use ROSAT X-ray data (see \S\ref{sec:data}), in addition to the FOS data, to apply the CF method to NGC 3783. We use the 1992 July 23 ROSAT observation, in particular, because it is nearly contemporaneous with the FOS observation, and it extends to slightly lower energy (down to 0.1 keV) than more recent X-ray observations with Chandra or XMM-Newton. T93 fit the ROSAT data with several different power laws based on different absorption models, ranging from a simple power-law model with $\Gamma$ = 2.22 to a warm absorber model with $\Gamma = 2.77^{+0.45}_{-0.31}$ (which is similar to the value found by \citealt{Schartel97} of $\Gamma = 2.7 \pm 0.7$). 
T93 also present a model with $\Gamma \sim$ 4.7, which is much higher than other values in the literature, so we do not include it in our analysis. \subsection{Applying the CF Method with an X-ray Upper Limit} A difficulty with using the X-ray spectrum of an AGN for the CF method is that there is a well-known power-law component at X-ray energies, of uncertain origin, in addition to any emission from the AD. Hence, the X-ray data provide only an upper limit on the AD emission. We therefore first alter our CF method to search through our model parameter space for the models with the highest spin parameter that both give a satisfactory fit to the FOS spectrum ($\chi^2 \le 3$) and do not exceed the X-ray flux at 0.1 keV from the power-law fits to the ROSAT data. To be conservative in our upper limit, we adopt the warm absorber model power law ($\Gamma = 2.77^{+0.45}_{-0.31}$) from T93. We find models spanning the full range in spin parameter, including maximum spin, that fit within the upper limit from the T93 warm absorber model power law for the ROSAT data, as long as $M_{BH}$ is at least as high as the \citet{Peterson04} $M_{BH}$ estimate (see, for example, the purple curve in Fig.~\ref{fig:Nspec}). \subsection{Applying the CF Method with a Modified X-ray Flux} \begin{figure*} \gridline{\leftfig{fig4a.pdf}{0.45\textwidth}{} \fig{fig4b.pdf}{0.45\textwidth}{} \fig{fig2c.pdf}{0.05\textwidth}{} } \gridline{\leftfig{fig4c.pdf}{0.45\textwidth}{} \fig{fig4d.pdf}{0.45\textwidth}{} \fig{fig2c.pdf}{0.05\textwidth}{} } \caption{Probability contour plots for NGC 3783 for $a_*$ versus $M_{BH}$ (left panels) and $a_*$ versus $A_V$ (right panels) for two different iterations of the Bayesian CF routine: without a disk wind (top panels) and with a disk wind (bottom panels). The observed value of $M_{BH}$ and associated error is indicated by vertical solid and dashed lines, respectively. 
The horizontal dotted and solid lines represent the 90\% and 99\% confidence lower limits on $a_*$ from B11. \label{fig:Ncont}} \end{figure*} Even for a maximally spinning black hole, the thin AD emission does not directly contribute to the hard X-ray band (i.e., above 2 keV; see Fig.~\ref{fig:Nspec}). Hard X-ray observations of NGC 3783 give a less steep power law than in the soft X-ray band. For example, T93 find $\Gamma = 2.14^{+0.24}_{-0.26}$ when applying their warm absorber model to data from EXOSAT. If we assume that the excess emission indicated by a steeper power law in the soft X-ray band is due to AD emission, we can subtract the hard X-ray power law from the soft power law at 0.1 keV to determine a continuum point for our regular Bayesian CF procedure. We use a value of 0.1 dex for the error on $M_{BH}$ from \citet{Peterson04}. The results of the CF routine are presented in the top panels of Fig.~\ref{fig:Ncont}, and we find a median $a_* \simeq 0.2^{+0.7}_{-0.9}$. Using the X-ray reflection method, \citet[][hereafter B11]{Brenneman11} determine a spin parameter $a_* \ge 0.98$ at 90\% confidence and $a_* \ge 0.88$ at 99\% confidence (indicated by horizontal dotted and solid lines in Fig.~\ref{fig:Ncont}). \subsection{Applying the CF Method with an AD Wind} NGC 3783 is known to have a warm absorber in its X-ray spectrum (T93), i.e., an outflow often presumed to originate from the AD of the AGN \citep[e.g.,][]{Tombesi13}. If this is the case for NGC 3783, then the thin AD model must be modified, as the accretion rate would be reduced throughout the disk as material is ejected. The \citet{Slone12} thin AD code provides the option of adding a disk wind to the model. We therefore rerun the CF routine using a model with a self-similar disk wind, where the mass outflow rate per decade of radius is constant. 
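For reference, ``constant mass outflow per decade of radius'' corresponds to a local accretion rate that grows logarithmically with radius; a schematic parameterization (illustrative only, and not necessarily the exact form implemented by \citealt{Slone12}) is
\begin{equation}
\dot{M}(r) \, = \, \dot{M}_{\rm ISCO} + \dot{m}_{w} \, \log_{10}\!\left(\frac{r}{r_{\rm ISCO}}\right),
\end{equation}
where $\dot{m}_{w}$ is the constant mass outflow rate per decade of radius, so the wind removes progressively more of the supplied accretion flow before it reaches the ISCO.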
The mass outflow rate for NGC 3783 has been estimated to be $\gtrsim160$ times the accretion rate; however, much of this outflowing gas may come from beyond the accretion disk \citep[T93;][]{Crenshaw12}. In the absence of an empirical estimate of the mass outflow rate from the disk itself, we illustrate the effect of a massive disk wind by choosing a mass accretion rate at the outer part of the disk equal to three times the accretion rate at the innermost stable circular orbit (ISCO). The results are presented in the bottom panels of Fig.~\ref{fig:Ncont}. The main difference between these results and the results without the disk wind is that lower spin parameters ($a_* < 0$) are much less probable in the disk wind scenario. This arises because the disk wind reduces the accretion rate in the inner part of the disk and thus suppresses the luminosity at short wavelengths. Furthermore, while there is a high probability of $a_* \ge 0.88$ both with and without a disk wind, there is clearly a lower probability of having $a_* \ge 0.98$ if there is no disk wind (there is a factor of $\sim$1.6 difference in radiative efficiency between these two spin parameters). There is also a positive correlation between the amount of intrinsic reddening and $a_*$, with $a_* \ge 0.88$ ruled out if there is close to zero reddening. \section{Discussion} \label{sec:discuss} Our aim in this work is to compare the derived spin parameters for the X-ray reflection and CF techniques for two ``case study'' AGN. Table~\ref{tab:table} summarizes the results of the two methods, including values for $L/L_{Edd}$, the disk inclination ($\theta$), and intrinsic reddening, as derived from the CF method. For H1821+643, a bright AGN with $M_{BH} \sim 2.5 \times 10^9 \ifmmode M_{\odot} \else $M_{\odot}$\fi$, R14 found $a_* \gtrsim 0.4$ using the reflection method. 
For the CF analysis, the HST FOS spectrum alone is sufficient, and while we do not obtain a very precise constraint on $a_*$, we find a strong probability of a spin parameter that exceeds the lower limit from R14, giving consistent results between the reflection and the CF methods. We emphasize here that R14 do not clearly detect an Fe line, but instead fit excess continuum emission in the soft X-ray. For some AGN, physical processes other than relativistic reflection are the more likely cause of this soft excess \citep[][and references therein]{Boissay16}, making this reflection spin measurement a tentative one (see also \S1). For the CF method, the posterior probability distribution makes clear that if $M_{BH}$ were higher, $a_*$ would be constrained to the highest allowed values, whereas if $M_{BH}$ were any lower, we would be unable to obtain a meaningful constraint on $a_*$. For NGC 3783, the FOS spectrum lies along the power-law portion of the thin AD model spectrum, and only in the soft X-ray regime can models with different $a_*$ be distinguished. Fortunately, there is nearly contemporaneous FOS and ROSAT data for NGC 3783. However, the X-ray data include the known X-ray power-law emission that likely originates from above the AD (often called the ``corona''). Using the X-ray flux as an upper limit, we find that as long as $M_{BH}$ is at least as high as the observed $M_{BH}$, any spin parameter can fit the data. Since the X-ray band contains components other than the AD emission, we can instead assume that only the excess emission indicated by the steeper power-law slope in the soft X-ray, relative to the hard X-ray, is due to the AD itself; under this assumption, the CF method gives a high probability for a high spin parameter in NGC 3783, consistent with the 99\% confidence lower limit from relativistic reflection (B11). However, there is a low probability of $a_*$ exceeding the 90\% confidence lower limit from B11, unless we include a disk wind in the AD model. 
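The soft-excess subtraction recapped above can be illustrated numerically. A photon index $\Gamma$ corresponds to an energy flux density $F_E \propto E^{-(\Gamma-1)}$; the normalizations below are hypothetical (both power laws are set equal at 1 keV purely for illustration, whereas the actual normalizations come from the T93 fits):

```python
def flux_density(energy_kev, norm_at_1kev, gamma):
    """Energy flux density (arbitrary units): photon index `gamma`
    implies F_E proportional to E**-(gamma - 1)."""
    return norm_at_1kev * energy_kev ** -(gamma - 1.0)

gamma_soft = 2.77  # T93 warm-absorber fit in the soft band (ROSAT)
gamma_hard = 2.14  # T93 warm-absorber fit in the hard band (EXOSAT)

# Hypothetical normalizations: both power laws set equal at 1 keV.
n_soft = n_hard = 1.0

# Because the soft-band slope is steeper, the soft power law lies above
# the hard one at 0.1 keV; the difference is the candidate AD flux.
excess = (flux_density(0.1, n_soft, gamma_soft)
          - flux_density(0.1, n_hard, gamma_hard))
```

With these illustrative normalizations the excess at 0.1 keV is positive, as required for it to serve as a continuum point in the CF fit; with the real T93 normalizations the same subtraction yields the value used in the text.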
The results of the CF method are, in general, consistent with the results of the reflection method for the two AGN studied here. In particular, the agreement is improved for NGC 3783 if we assume a disk wind, which we include based on the existence of a warm absorber in the X-ray spectrum. The disk wind analysis, however, is tentative because it is unknown how much, if any, of the outflow originates from the inner part of the disk \citep[see e.g.][]{Netzer03}. If the outflow originates further out and therefore does not suppress the short-wavelength thin AD emission, there is a slight tension between the two methods, as the reflection method suggests a slightly higher spin parameter than the CF method without a disk wind. We also find that, without a disk wind, the highest spins are most probable for $A_V$ between 0.2 and 0.3 mag. While these reddening values are generally consistent with the constraints from broad emission line measurements for NGC 3783 \citep[$A_V$=0.1$\pm$0.2;][]{SchnorrM16}, if the reddening is actually closer to 0.1 mag, then there is even greater tension between the reflection and CF results for $a_*$. \citet{Done13} and \citet{Done16} similarly find that the CF method suggests lower spin parameters for narrow-line Seyfert 1s than the nearly maximal spin typically found for this AGN subclass via X-ray reflection. Our study highlights one particular strength of the X-ray reflection method for nearby Seyfert galaxies. For NGC 3783, with a BH mass of $\sim$$10^{7}$ \ifmmode M_{\odot} \else $M_{\odot}$\fi, the inability to probe the extreme UV prevents us from obtaining a very precise estimate of $a_*$. However, we point out that recent work by \citet{Yaqoob16} casts doubt on whether the Fe K$\alpha$ line gives any information on $a_*$ for one of the AGN in the reflection sample (see also \S1). Nearly half (12) of the 25 AGN with spin measurements from the reflection method have $a_* > 0.9$ \citep{Vasudevan16}. 
Given that the CF method suggests lower spin for two cases with near-maximal reflection spin estimates (1H 0707$-$495 in \citealt{Done16} and NGC 3783 presented here), these high-spin cases would be good candidates for further comparisons between the reflection and CF methods, especially those with even lower $M_{BH}$ than NGC 3783, whose AD SEDs would peak further into the soft X-ray. There is also a new method proposed by \citet{Chartas16}, based on microlensing, that could be included in future comparisons of spin estimation techniques. As more and better X-ray measurements allow the reflection sample to grow and as better constraints on $M_{BH}$ \citep[see e.g.,][]{Shen15,Mejia16} allow the CF method to more precisely determine $a_*$ for larger samples of AGN, there will be a larger population where both methods can be properly applied and compared. If such comparisons yield good agreement, then each method can be more confidently applied to the samples they are best suited for -- nearby Seyferts for the X-ray reflection method and higher redshift quasars for the CF method. If instead, these comparisons bring further tensions to light, then the assumptions underlying these methods may need to be revisited. \acknowledgments We thank the referee for helpful feedback. We thank Paulina Lira, Julie Hlavacek-Larrondo, and Helen Russell for useful discussion. DMC and DH acknowledge support from a Natural Sciences and Engineering Research Council of Canada Discovery Grant and a Fonds de recherche du Qu\'{e}bec — Nature et Technologies Nouveaux Chercheurs Grant. GWF acknowledges support from Universit\'{e} Paris-Saclay's IDEX program and l'Office Franco-Qu\'{e}b\'{e}cois pour la Jeunesse.
Brenner}, {\\em Two--level additive {S}chwarz preconditioners for\nnonconforming finite elements}, in Domain Decomposition Methods in Scientific\nand Engineering Computing: Proceedings of the Seventh International\nConference on Domain Decomposition, vol.~180 of Contemporary Mathematics,\nProvidence, Rhode Island, 1994, American Mathematical Society, pp.~9--14.\n\n\\bibitem{FBrezzi_LDMarini_1994a}\n{\\sc F.~Brezzi and L.~D. Marini}, {\\em A three--fold domain decomposition\nmethod}, in Domain Decomposition Methods in Science and Engineering: The\nSixth International Conference on Domain Decomposition, vol.~157 of\nContemporary Mathematics, Providence, Rhode Island, 1994, American\nMathematical Society, pp.~27--34.\n\n\\bibitem{MOBristeau_RGlowinski_JPeriaux_1994a}\n{\\sc M.~O. Bristeau, R.~Glowinski, and J.~P{\\'e}riaux}, {\\em On the numerical\nsolution of the {H}elmholtz equations at large wave numbers using exact\ncontrollability methods. {A}pplication to scattering}, in Domain\nDecomposition Methods in Science and Engineering: The Sixth International\nConference on Domain Decomposition, vol.~157 of Contemporary Mathematics,\nProvidence, Rhode Island, 1994, American Mathematical Society, pp.~399--419.\n\n\\bibitem{HJBungartz_MGriebel_DRoschke_CZenger_1994a}\n{\\sc H.-J. 
Bungartz, M.~Griebel, D.~R{\/\"}oschke, and C.~Zenger}, {\\em Two\nproofs of convergence for the combination technique for the efficient\nsolution of sparse grid problems}, in Domain Decomposition Methods in\nScientific and Engineering Computing: Proceedings of the Seventh\nInternational Conference on Domain Decomposition, vol.~180 of Contemporary\nMathematics, Providence, Rhode Island, 1994, American Mathematical Society,\npp.~15--20.\n\n\\bibitem{WCai_1994a}\n{\\sc W.~Cai}, {\\em Domain decomposition and computation of two dimensional\ndetonation waves}, in Domain Decomposition Methods in Scientific and\nEngineering Computing: Proceedings of the Seventh International Conference on\nDomain Decomposition, vol.~180 of Contemporary Mathematics, Providence, Rhode\nIsland, 1994, American Mathematical Society, pp.~459--464.\n\n\\bibitem{XCCai_MDryja_1994a}\n{\\sc X.-C. Cai and M.~Dryja}, {\\em Domain decomposition methods for monotone\nnonlinear eliptic problems}, in Domain Decomposition Methods in Scientific\nand Engineering Computing: Proceedings of the Seventh International\nConference on Domain Decomposition, vol.~180 of Contemporary Mathematics,\nProvidence, Rhode Island, 1994, American Mathematical Society, pp.~21--27.\n\n\\bibitem{XCCai_WDGropp_DEKeyes_MDTidriri_1994a}\n{\\sc X.-C. Cai, W.~D. Gropp, D.~E. Keyes, and M.~D. Tidriri}, {\\em Parallel\nimplicit methods for aerodynamics}, in Domain Decomposition Methods in\nScientific and Engineering Computing: Proceedings of the Seventh\nInternational Conference on Domain Decomposition, vol.~180 of Contemporary\nMathematics, Providence, Rhode Island, 1994, American Mathematical Society,\npp.~465--470.\n\n\\bibitem{YCai_IMNavon_1994a}\n{\\sc Y.~Cai and I.~M. 
Navon}, {\\em Parallel domain--decomposed preconditioners\nin finite element shallow water flow modeling}, in Domain Decomposition\nMethods in Scientific and Engineering Computing: Proceedings of the Seventh\nInternational Conference on Domain Decomposition, vol.~180 of Contemporary\nMathematics, Providence, Rhode Island, 1994, American Mathematical Society,\npp.~471--476.\n\n\\bibitem{FCamilli_MFalcone_PLanucara_ASeghini_1994a}\n{\\sc F.~Camilli, M.~Falcone, P.~Lanucara, and A.~Seghini}, {\\em A domain\ndecomposition method for {B}ellman equations}, in Domain Decomposition\nMethods in Scientific and Engineering Computing: Proceedings of the Seventh\nInternational Conference on Domain Decomposition, vol.~180 of Contemporary\nMathematics, Providence, Rhode Island, 1994, American Mathematical Society,\npp.~477--483.\n\n\\bibitem{CCanuto_ARusso_1994a}\n{\\sc C.~Canuto and A.~Russo}, {\\em Self--adaptive coupling of mathematical\nmodels and\/or numerical methods}, in Domain Decomposition Methods in Science\nand Engineering: The Sixth International Conference on Domain Decomposition,\nvol.~157 of Contemporary Mathematics, Providence, Rhode Island, 1994,\nAmerican Mathematical Society, pp.~35--44.\n\n\\bibitem{TFChan_TPMathew_1994b}\n{\\sc T.~F. Chan and T.~P. Mathew}, {\\em Doamin decomposition preconditioners\nfor convection diffusion problems}, in Domain Decomposition Methods in\nScience and Engineering: The Sixth International Conference on Domain\nDecomposition, vol.~157 of Contemporary Mathematics, Providence, Rhode\nIsland, 1994, American Mathematical Society, pp.~157--175.\n\n\\bibitem{TFChan_BFSmith_1994a}\n{\\sc T.~F. Chan and B.~F. 
Smith}, {\\em Domain decomposition and multigrid\nalgorithms for elliptic problems on unstructured meshes}, in Domain\nDecomposition Methods in Scientific and Engineering Computing: Proceedings of\nthe Seventh International Conference on Domain Decomposition, vol.~180 of\nContemporary Mathematics, Providence, Rhode Island, 1994, American\nMathematical Society, pp.~175--189.\n\n\\bibitem{JGChefter_CKChu_DEKeyes_1994a}\n{\\sc J.~G. Chefter, C.~K. Chu, and D.~E. Keyes}, {\\em Domain decomposition for\nthe shallow water equations}, in Domain Decomposition Methods in Scientific\nand Engineering Computing: Proceedings of the Seventh International\nConference on Domain Decomposition, vol.~180 of Contemporary Mathematics,\nProvidence, Rhode Island, 1994, American Mathematical Society, pp.~485--490.\n\n\\bibitem{NChrisochoides_GFox_JThompson_1994a}\n{\\sc N.~Chrisochoides, G.~Fox, and J.~Thompson}, {\\em {MENUS--PGG}: {A} mapping\nenvironment for unstructured numerical parallel grid generation}, in Domain\nDecomposition Methods in Scientific and Engineering Computing: Proceedings of\nthe Seventh International Conference on Domain Decomposition, vol.~180 of\nContemporary Mathematics, Providence, Rhode Island, 1994, American\nMathematical Society, pp.~381--386.\n\n\\bibitem{PGCiarletJr_1994b}\n{\\sc P.~G. Ciarle{t,~Jr.}}, {\\em A comparison of three iterative algorithms\nbased on domain decomposition methods}, in Domain Decomposition Methods in\nScientific and Engineering Computing: Proceedings of the Seventh\nInternational Conference on Domain Decomposition, vol.~180 of Contemporary\nMathematics, Providence, Rhode Island, 1994, American Mathematical Society,\npp.~387--393.\n\n\\bibitem{PGCiarletJr_GMeurant_1994a}\n{\\sc P.~G. 
Ciarle{t,~Jr.} and G.~Meurant}, {\\em A class of domian decomposition\npreconditioners for massively parallel computers}, in Domain Decomposition\nMethods in Science and Engineering: The Sixth International Conference on\nDomain Decomposition, vol.~157 of Contemporary Mathematics, Providence, Rhode\nIsland, 1994, American Mathematical Society, pp.~353--359.\n\n{\\sc M.~C. Ciccoli, J.~A. Desideri, and J.~P{\\'e}riaux}, {\\em Introduction of\ndomain decomposition techniques in time-- dependent flow problems}, in Domain\nDecomposition Methods in Science and Engineering: The Sixth International\nConference on Domain Decomposition, vol.~157 of Contemporary Mathematics,\nProvidence, Rhode Island, 1994, American Mathematical Society, pp.~433--439.\n\n\\bibitem{RKCoomer_IGGraham_1994a}\n{\\sc R.~K. Coomer and I.~G. Graham}, {\\em Domain decomposition methods for\ndevice modelling}, in Domain Decomposition Methods in Scientific and\nEngineering Computing: Proceedings of the Seventh International Conference on\nDomain Decomposition, vol.~180 of Contemporary Mathematics, Providence, Rhode\nIsland, 1994, American Mathematical Society, pp.~491--496.\n\n\\bibitem{MCCurran_1994a}\n{\\sc M.~C. Curran}, {\\em An iterative finite--element collocation method for\nparabolic problems using domain decomposition}, in Domain Decomposition\nMethods in Science and Engineering: The Sixth International Conference on\nDomain Decomposition, vol.~157 of Contemporary Mathematics, Providence, Rhode\nIsland, 1994, American Mathematical Society, pp.~245--253.\n\n\\bibitem{CNDawson_TFDupont_1994b}\n{\\sc C.~N. Dawson and T.~F. Dupont}, {\\em Noniterative domain decomposition for\nsecond order hyperbolic problems}, in Domain Decomposition Methods in Science\nand Engineering: The Sixth International Conference on Domain Decomposition,\nvol.~157 of Contemporary Mathematics, Providence, Rhode Island, 1994,\nAmerican Mathematical Society, pp.~45--52.\n\n\\bibitem{CNDawson_MFWheeler_1994a}\n{\\sc C.~N. 
Dawson and M.~F. Wheeler}, {\\em Two--grid methods for mixed finite\nelement approximations of nonlinear parabolic equations}, in Domain\nDecomposition Methods in Scientific and Engineering Computing: Proceedings of\nthe Seventh International Conference on Domain Decomposition, vol.~180 of\nContemporary Mathematics, Providence, Rhode Island, 1994, American\nMathematical Society, pp.~191--203.\n\n\\bibitem{FDellagiacoma_SPaoletti_FPoggi_MVitaletti_1994a}\n{\\sc F.~Dellagiacoma, S.~Paoletti, F.~Poggi, and M.~Vitaletti}, {\\em A domain\ndecomposition environment for local time dependent problems}, in Domain\nDecomposition Methods in Science and Engineering: The Sixth International\nConference on Domain Decomposition, vol.~157 of Contemporary Mathematics,\nProvidence, Rhode Island, 1994, American Mathematical Society, pp.~361--366.\n\n\\bibitem{PDeuflhard_1994a}\npartial differential equations: algorithm and numerical results}, in Domain\nDecomposition Methods in Scientific and Engineering Computing: Proceedings of\nthe Seventh International Conference on Domain Decomposition, vol.~180 of\nContemporary Mathematics, Providence, Rhode Island, 1994, American\nMathematical Society, pp.~29--42.\n\n\\bibitem{ZDostal_1994a}\n{\\sc Z.~Dost{\\'a}l}, {\\em The {S}chur complement algorithm for the solution of\ncontact problems}, in Domain Decomposition Methods in Science and\nEngineering: The Sixth International Conference on Domain Decomposition,\nvol.~157 of Contemporary Mathematics, Providence, Rhode Island, 1994,\nAmerican Mathematical Society, pp.~441--446.\n\n\\bibitem{CCDouglas_1995a}\n{\\sc C.~C. Douglas}, {\\em Madpack: A family of abstract multigrid or multilevel\nsolvers}, Comput. Appl. 
Math., 14 (1995), pp.~3--20.\n\n\\bibitem{MDryja_1994a}\n{\\sc M.~Dryja}, {\\em Multilevel methods for elliptic problems with\ndiscontinuous coeffiecients in three dimensions}, in Domain Decomposition\nMethods in Scientific and Engineering Computing: Proceedings of the Seventh\nInternational Conference on Domain Decomposition, vol.~180 of Contemporary\nMathematics, Providence, Rhode Island, 1994, American Mathematical Society,\npp.~43--47.\n\n\\bibitem{MDryja_OBWidlund_1994b}\n{\\sc M.~Dryja and O.~B. Widlund}, {\\em Some recent results on {S}chwarz type\ndomain decomposition algorithms}, in Domain Decomposition Methods in Science\nand Engineering: The Sixth International Conference on Domain Decomposition,\nvol.~157 of Contemporary Mathematics, Providence, Rhode Island, 1994,\nAmerican Mathematical Society, pp.~53--61.\n\n\\bibitem{OErnst_GHGolub_1994a}\n{\\sc O.~Ernst and G.~H. Golub}, {\\em A domain decomposition approach to solving\nthe {H}elmholtz equation with a radiation boundary condition}, in Domain\nDecomposition Methods in Science and Engineering: The Sixth International\nConference on Domain Decomposition, vol.~157 of Contemporary Mathematics,\nProvidence, Rhode Island, 1994, American Mathematical Society, pp.~177--192.\n\n\\bibitem{EFaccioli_AQuarteroni_ATagliani_1994a}\n{\\sc E.~Faccioli, A.~Quarteroni, and A.~Tagliani}, {\\em Spectral multidomain\nmethods for the simulation of wave propagation in heterogeneous media}, in\nDomain Decomposition Methods in Science and Engineering: The Sixth\nInternational Conference on Domain Decomposition, vol.~157 of Contemporary\nMathematics, Providence, Rhode Island, 1994, American Mathematical Society,\npp.~447--455.\n\n\\bibitem{CFarhat_PSChen_1994a}\n{\\sc C.~Farhat and P.-S. 
Chen}, {\\em Tailoring domain decomposition methods for\nefficient parallel coarse grid solution and for systems with many right hand\nsides}, in Domain Decomposition Methods in Scientific and Engineering\nComputing: Proceedings of the Seventh International Conference on Domain\nDecomposition, vol.~180 of Contemporary Mathematics, Providence, Rhode\nIsland, 1994, American Mathematical Society, pp.~401--406.\n\n\\bibitem{CFarhat_FXRoux_1994a}\n{\\sc C.~Farhat and F.-X. Roux}, {\\em The dual {S}chur complement method with\nwell--posed local {N}eumann problems}, in Domain Decomposition Methods in\nScience and Engineering: The Sixth International Conference on Domain\nDecomposition, vol.~157 of Contemporary Mathematics, Providence, Rhode\nIsland, 1994, American Mathematical Society, pp.~193--201.\n\n\\bibitem{PFFischer_1994a}\n{\\sc P.~F. Fischer}, {\\em Parallel domain decomposition for incompressible\nfluid dynamics}, in Domain Decomposition Methods in Science and Engineering:\nThe Sixth International Conference on Domain Decomposition, vol.~157 of\nContemporary Mathematics, Providence, Rhode Island, 1994, American\nMathematical Society, pp.~313--322.\n\nelement--by--element method for large--scale computations with h -- p--\nfinite elements}, in Domain Decomposition Methods in Science and Engineering:\nThe Sixth International Conference on Domain Decomposition, vol.~157 of\nContemporary Mathematics, Providence, Rhode Island, 1994, American\nMathematical Society, pp.~367--373.\n\n\\bibitem{LGastaldi_1994a}\n{\\sc L.~Gastaldi}, {\\em A domain decomposition for the transport equation}, in\nDomain Decomposition Methods in Science and Engineering: The Sixth\nInternational Conference on Domain Decomposition, vol.~157 of Contemporary\nMathematics, Providence, Rhode Island, 1994, American Mathematical Society,\npp.~97--102.\n\n\\bibitem{AGersztenkorn_JCDiaz_1994a}\n{\\sc A.~Gersztenkorn and J.~C. 
Diaz}, {\\em Domain decomposed preconditioning\nfor faulted geological blocks}, in Domain Decomposition Methods in Science\nand Engineering: The Sixth International Conference on Domain Decomposition,\nvol.~157 of Contemporary Mathematics, Providence, Rhode Island, 1994,\nAmerican Mathematical Society, pp.~457--462.\n\n\\bibitem{LGiraud_RSTuminaro_1994a}\n{\\sc L.~Giraud and R.~S. Tuminaro}, {\\em Domain decomposition algorithms for\n{PDE} problems with large scale variation}, in Domain Decomposition Methods\nin Scientific and Engineering Computing: Proceedings of the Seventh\nInternational Conference on Domain Decomposition, vol.~180 of Contemporary\nMathematics, Providence, Rhode Island, 1994, American Mathematical Society,\npp.~205--210.\n\n\\bibitem{RGlowinski_TWPan_JPeriaux_1994a}\n{\\sc R.~Glowinski, T.-W. Pan, and J.~P{\\'e}riaux}, {\\em A fictitious domain\nmethod for unsteady incompressible viscous flow modelled by\n{N}avier--{S}tokes equations}, in Domain Decomposition Methods in Science and\nEngineering: The Sixth International Conference on Domain Decomposition,\nvol.~157 of Contemporary Mathematics, Providence, Rhode Island, 1994,\nAmerican Mathematical Society, pp.~421--431.\n\n\\bibitem{RGlowinski_TWPan_JPeriaux_1994b}\n\\leavevmode\\vrule height 2pt depth -1.6pt width 23pt, {\\em A one shot domain\ndecomposition\/fictitious domain method for {N}avier--{S}tokes equations}, in\nDomain Decomposition Methods in Scientific and Engineering Computing:\nProceedings of the Seventh International Conference on Domain Decomposition,\nvol.~180 of Contemporary Mathematics, Providence, Rhode Island, 1994,\nAmerican Mathematical Society, pp.~211--222.\n\n\\bibitem{MGriebel_1994b}\n{\\sc M.~Griebel}, {\\em Domain--oriented multilevel methods}, in Domain\nDecomposition Methods in Scientific and Engineering Computing: Proceedings of\nthe Seventh International Conference on Domain Decomposition, vol.~180 of\nContemporary Mathematics, Providence, Rhode Island, 1994, 
American\nMathematical Society, pp.~223--229.\n\n\\bibitem{MGriebel_1994a}\n\\leavevmode\\vrule height 2pt depth -1.6pt width 23pt, {\\em A domain\ndecomposition method using sparse grids}, in Domain Decomposition Methods in\nScience and Engineering: The Sixth International Conference on Domain\nDecomposition, vol.~157 of Contemporary Mathematics, Providence, Rhode\nIsland, 1994, American Mathematical Society, pp.~255--261.\n\n\\bibitem{WDGropp_DEKeyes_JSMounts_1994a}\n{\\sc W.~D. Gropp, D.~E. Keyes, and J.~S. Mounts}, {\\em Implicit domain\ndecomposition algorithms for steady, compressible aerodynamics}, in Domain\nDecomposition Methods in Science and Engineering: The Sixth International\nConference on Domain Decomposition, vol.~157 of Contemporary Mathematics,\nProvidence, Rhode Island, 1994, American Mathematical Society, pp.~203--213.\n\n\\bibitem{WDGropp_BFSmith_1994a}\n{\\sc W.~D. Gropp and B.~F. Smith}, {\\em Experiences with domain decomposition\nin three dimensions: overlapping {S}chwarz methods}, in Domain Decomposition\nMethods in Science and Engineering: The Sixth International Conference on\nDomain Decomposition, vol.~157 of Contemporary Mathematics, Providence, Rhode\nIsland, 1994, American Mathematical Society, pp.~323--333.\n\n\\bibitem{JLGuermond_WZShen_1994a}\n{\\sc J.-L. Guermond and W.-Z. 
Shen}, {\\em A domain decomposition method for\nsimulating 2{D} external viscous flows}, in Domain Decomposition Methods in\nScience and Engineering: The Sixth International Conference on Domain\nDecomposition, vol.~157 of Contemporary Mathematics, Providence, Rhode\nIsland, 1994, American Mathematical Society, pp.~463--467.\n\n\\bibitem{WHeinrichs_1994b}\n{\\sc W.~Heinrichs}, {\\em Domain decomposition for the {S}tokes equations in\nstreamfunction formulation}, in Domain Decomposition Methods in Science and\nEngineering: The Sixth International Conference on Domain Decomposition,\nvol.~157 of Contemporary Mathematics, Providence, Rhode Island, 1994,\nAmerican Mathematical Society, pp.~263--269.\n\n\\bibitem{MHolst_FSaied_1994a}\n{\\sc M.~Holst and F.~Saied}, {\\em Multigrid and domain decomposition methods\nfor electrostatics problems}, in Domain Decomposition Methods in Scientific\nand Engineering Computing: Proceedings of the Seventh International\nConference on Domain Decomposition, vol.~180 of Contemporary Mathematics,\nProvidence, Rhode Island, 1994, American Mathematical Society, pp.~231--238.\n\n\\bibitem{GCHsiao_MDMarcozzi_SZhang_1994a}\n{\\sc G.~C. Hsiao, M.~D. 
Marcozzi, and S.~Zhang}, {\\em An efficient\ncomputational method for the flow past an airfoil}, in Domain Decomposition\nMethods in Scientific and Engineering Computing: Proceedings of the Seventh\nInternational Conference on Domain Decomposition, vol.~180 of Contemporary\nMathematics, Providence, Rhode Island, 1994, American Mathematical Society,\npp.~497--502.\n\n\\end{thebibliography}\n\n------------------------------\n\nEnd of MGNet Digest\n**************************","date":"2017-04-26 23:27:23","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7346831560134888, \"perplexity\": 14849.577221190726}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-17\/segments\/1492917121752.57\/warc\/CC-MAIN-20170423031201-00593-ip-10-145-167-34.ec2.internal.warc.gz\"}"}
# Principle of Marginal Rate of Substitution

### Marginal Rate of Substitution

Marginal rate of substitution (MRS) may be defined as the rate at which the consumer is willing to substitute one commodity for another without changing the level of satisfaction.

In other words, MRS can also be defined as the amount of a commodity that a consumer is willing to trade off for another commodity, as long as the second commodity provides the same level of utility as the first one.

If we denote the two commodities as X and Y, then the MRS between X and Y can be expressed as the amount of Y a consumer is willing to give up in order to obtain one more unit of commodity X.

Mathematically, MRS is represented as

MRS = ΔY/ΔX

### Table 1: Indifference schedule

| Combination | Cigarettes | Coffee (cups) | MRS |
|-------------|------------|---------------|-----|
| A           | 1          | 12            |     |
| B           | 2          | 8             | 4   |
| C           | 3          | 5             | 3   |
| D           | 4          | 3             | 2   |
| E           | 5          | 2             | 1   |

The indifference schedule above gives 5 different combinations of two goods (cigarettes and coffee), all of which produce the same level of satisfaction.

Combination A consists of 1 cigarette and 12 cups of coffee. When the consumer moves from combination A to combination B, he has to give up 4 cups of coffee in order to add 1 unit of cigarettes while maintaining the same level of satisfaction. Hence the marginal rate of substitution of cigarettes for coffee is 4.

In the same way, when the consumer moves to combination C, he has to give up 3 more cups of coffee in order to add one more unit of cigarettes and maintain the same utility level. Hence, the marginal rate of substitution of cigarettes for coffee here is 3.

Likewise, when the consumer moves to combinations D and E, the marginal rate of substitution is 2 and 1, respectively.

### Principle of Marginal Rate of Substitution

Marginal rate of substitution is based on an important economic principle: the MRS of X for Y diminishes more and more with each successive substitution of X for Y. This principle is known as diminishing marginal rate of substitution.

According to this principle, a consumer can let go of some of one commodity, say Y, in order to gain more of the other commodity X. However, as the consumer gets more and more of commodity X, he tends to forego less and less of good Y.

The rate at which the consumer substitutes X for Y is greater at the beginning. But, as he continues the substitution process, the rate of substitution begins to fall.

The diminishing marginal rate of substitution is also apparent from Table 1.

Initially, when the consumer moved from combination A to B, the MRS was calculated to be 4. In the same way, when he moved to combination C, the MRS was calculated to be 3. Likewise, from combination C to D the MRS was 2, and from D to E the MRS was 1.

Clearly, the marginal rate of substitution diminished more and more as the consumer kept substituting more and more cigarettes for coffee.

The diminishing marginal rate of substitution can also be clearly visualized from the graphical representation of the indifference schedule.

#### Causes for diminishing marginal rate of substitution

The marginal rate of substitution is diminishing for the following reasons.

##### Goods are not perfectly substitutable

Goods can never be perfectly substitutable. If one good could perfectly substitute another, both goods would be regarded as the same good, and an increase or decrease in the amount of either good would not affect the marginal rate of substitution.
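The ΔY/ΔX arithmetic behind Table 1 can be spot-checked with a short script (a minimal sketch in Python; the schedule values are hard-coded from the table, not computed from a utility function):

```python
# Indifference schedule from Table 1: (cigarettes, cups of coffee)
schedule = [(1, 12), (2, 8), (3, 5), (4, 3), (5, 2)]

# MRS between successive combinations: coffee given up per extra cigarette
mrs = [(y0 - y1) / (x1 - x0)
       for (x0, y0), (x1, y1) in zip(schedule, schedule[1:])]

print(mrs)  # [4.0, 3.0, 2.0, 1.0] -- diminishing, as the principle predicts
```

Each successive ratio is smaller than the last, which is exactly the diminishing-MRS pattern described above.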
Places: Pola (Oriental Mindoro), a municipality in the Philippine province of Oriental Mindoro; Pola (river in Russia); Pola Tebu, an administrative village in the Karo regency of the province of North Sumatra, Indonesia; the Italian name of Pula (Croatia), formerly also the commonly used name of that city; Pola de Allande, the capital of the Spanish municipality of Allande in the region of Asturias; Santa Pola, a municipality in the Spanish province of Alicante. People: Alexander Pola, stage name of Abraham Polak (1914–1992), Dutch actor, writer, and comedian; Pola Negri, stage name of Apolonia Chałupiec (1897–1987), a Polish actress. Other: Pola (cruiser), an Italian cruiser; POLA, a former German company that manufactured model railway parts, now part of Faller.
Time's (Almost) Up! Art Garfunkel, Southside Johnny & The Asbury Jukes, and Folsom Prison Revival! ART GARFUNKEL IN-CLOSE UP FRI., JULY 26 at 7:30 PM Strand Theatre Tickets Starting at $40.25 ART GARFUNKEL has made an indelible mark on the music world as both a solo artist and half of the unrivaled Simon & Garfunkel. Performance includes Simon & Garfunkel hits and songs from Garfunkel's solo catalogue. FOLSOM PRISON REVIVAL - A JOHNNY CASH TRIBUTE SHOW SAT., JULY 20 at 7:30 PM Tickets Starting at $20 This band was formed out of deep appreciation for Johnny Cash's music, performing all his major hits from each period of his legacy. SOUTHSIDE JOHNNY AND THE ASBURY JUKES emerged in 1974, and carried over a significant influence from Bruce Springsteen & the E Street Band. The Jukes evolved as a white R&B horn band in the Memphis Stax Records tradition.
Vágar – an island of the Faroe Islands; Vágar – a municipality of the Faroe Islands; Vágar – a region of the Faroe Islands.
/*
 * @lc app=leetcode id=1403 lang=java
 *
 * [1403] Minimum Subsequence in Non-Increasing Order
 *
 * https://leetcode.com/problems/minimum-subsequence-in-non-increasing-order/description/
 *
 * algorithms
 * Easy (71.12%)
 * Total Accepted:    29.6K
 * Total Submissions: 41.6K
 * Testcase Example:  '[4,3,10,9,8]'
 *
 * Given the array nums, obtain a subsequence of the array whose sum of
 * elements is strictly greater than the sum of the non-included elements in
 * such subsequence.
 *
 * If there are multiple solutions, return the subsequence with minimum size,
 * and if there still exist multiple solutions, return the subsequence with the
 * maximum total sum of all its elements. A subsequence of an array can be
 * obtained by erasing some (possibly zero) elements from the array.
 *
 * Note that the solution with the given constraints is guaranteed to be
 * unique. Also return the answer sorted in non-increasing order.
 *
 * Example 1:
 *
 * Input: nums = [4,3,10,9,8]
 * Output: [10,9]
 * Explanation: The subsequences [10,9] and [10,8] are minimal such that the
 * sum of their elements is strictly greater than the sum of elements not
 * included; however, the subsequence [10,9] has the maximum total sum of its
 * elements.
 *
 * Example 2:
 *
 * Input: nums = [4,4,7,6,7]
 * Output: [7,7,6]
 * Explanation: The subsequence [7,7] has the sum of its elements equal to 14,
 * which is not strictly greater than the sum of elements not included (14 = 4
 * + 4 + 6). Therefore, the subsequence [7,6,7] is the minimal one satisfying
 * the conditions. Note that the subsequence has to be returned in
 * non-increasing order.
 *
 * Example 3:
 *
 * Input: nums = [6]
 * Output: [6]
 *
 * Constraints:
 *
 * 1 <= nums.length <= 500
 * 1 <= nums[i] <= 100
 */

import java.util.*; // needed for Arrays, List, and ArrayList outside the LeetCode harness

class Solution {
    public List<Integer> minSubsequence(int[] nums) {
        // Sort ascending, then walk from the largest element down,
        // accumulating until the picked sum strictly exceeds the rest.
        Arrays.sort(nums);
        int n = nums.length;
        int sum = 0;
        for (int x : nums) sum += x;
        List<Integer> ans = new ArrayList<>();
        int cur = 0;
        for (int i = n - 1; i >= 0; i--) {
            cur += nums[i];
            ans.add(nums[i]);
            if (cur > sum - cur) return ans;
        }
        return ans;
    }
}
Source: http://blogformathematics.blogspot.com/2012/09/chain-rule-solved-examples_20.html

Example 1

Differentiate the following function with the help of the chain rule:
f(x) = sin(cos(x))

Solution:

Step 1: Identify the inner and outer functions
• Inner function: cos(x)
• Outer function: sin(cos(x))

Step 2: Differentiate the inner and outer functions separately
Inner function:
d/dx cos(x) = -sin(x)
Outer function:
Substitute a single variable in place of the inner function. Let u = cos(x); then
sin(cos(x)) = sin(u)
Differentiate the outer function with respect to the variable taken, u:
d/du sin(u) = cos(u)
Substitute back u = cos(x):
cos(u) = cos(cos(x))

Step 3: Multiply the derivatives of the inner and outer functions to get the derivative
f'(x) = -sin(x) * cos(cos(x)) = -sin(x)cos(cos(x))

The derivative of sin(cos(x)) is -sin(x)cos(cos(x)).

Example 2

Differentiate the following function with the help of the chain rule:
f(x) = 2^sin(x)

Solution:

Identify the inner and outer functions. Here the inner function is sin(x) and the outer function is the exponential function 2^sin(x). We have a formula to differentiate exponential functions, but it works only when there is a single variable in the exponent of a constant, so we will try to get a single variable instead of sin(x).

Substituting u = sin(x) we get 2^u. Now apply the chain rule formula:

d/dx f(g(x)) = d/du f(u) * d/dx u ... where u = g(x)

d/dx 2^sin(x) = d/du 2^u * d/dx u ... where u = sin(x)

Applying the derivative formula for exponential functions, the derivative of 2^u is 2^u ln(2):

d/dx 2^sin(x) = 2^u ln(2) * d/dx u ... where u = sin(x)

Substituting u = sin(x),

d/dx 2^sin(x) = 2^sin(x) ln(2) * d/dx sin(x)

The cosine function is the derivative of the sine function, so

d/dx 2^sin(x) = 2^sin(x) ln(2) * cos(x)

Therefore the derivative of f(x) = 2^sin(x) is f'(x) = 2^sin(x) ln(2) * cos(x).

Another, simpler way of doing the above steps without remembering the formula is to differentiate the inner and outer functions separately and then multiply their derivatives:
• Derivative of the inner function: d/dx sin(x) = cos(x)
• Derivative of the outer function: d/du 2^u = 2^u ln(2)
• Product of the two derivatives: 2^u ln(2) * cos(x)
• Substitute back u = sin(x): 2^sin(x) ln(2) * cos(x)

Example 3

Differentiate the following function with the help of the chain rule:
f(x) = (x^2 + 4x)^4

Solution:

Method 1:
Identify the inner and outer functions.
• Inner function: x^2 + 4x
• Outer function: (x^2 + 4x)^4
Substitute u = inner function, so the outer function becomes u^4.
Take the derivative of the inner and outer functions separately:
• Inner function: d/dx (x^2 + 4x) = 2x + 4
• Outer function: d/du u^4 = 4u^3
Multiply the two derivatives and substitute the inner function back for u:
f'(x) = (2x + 4) * 4u^3
f'(x) = (2x + 4) * 4(x^2 + 4x)^3
That is the derivative.

Method 2:
After substituting u for the inner function, apply the chain rule formula:
d/dx f(g(x)) = d/du f(u) * d/dx g(x), where u = g(x)
d/dx (x^2 + 4x)^4 = d/du u^4 * d/dx (x^2 + 4x)
Apply the differentiation formulas and substitute the inner function back for u:
f'(x) = 4u^3 * (2x + 4)
f'(x) = 4(x^2 + 4x)^3 * (2x + 4)

Example 4

Differentiate the following function with the help of the chain rule:
f(x) = ln(2x^2 + 3x + 1)

Solution:

Method 1:
Identify the inner and outer functions:
• Inner function: 2x^2 + 3x + 1
• Outer function: ln(2x^2 + 3x + 1)
Substitute u for the inner function to get a single variable inside the outer function:
ln(2x^2 + 3x + 1) = ln(u), where u = 2x^2 + 3x + 1
Differentiate the inner and outer functions separately:
• Inner function: d/dx (2x^2 + 3x + 1) = 4x + 3
• Outer function: d/du ln(u) = 1/u
Substitute back the inner function for u:
1/u = 1/(2x^2 + 3x + 1)
Multiply the two derivatives to get the derivative of f(x):
f'(x) = (4x + 3) * 1/(2x^2 + 3x + 1)
Simplify the expression as much as possible:
f'(x) = (4x + 3)/(2x^2 + 3x + 1)
Therefore the derivative of ln(2x^2 + 3x + 1) is (4x + 3)/(2x^2 + 3x + 1).

Method 2:
Identify the inner and outer functions. The inner function is (2x^2 + 3x + 1) and the outer one is the logarithmic function. Substitute u = inner function so you get the following. Remember that u is not a plain variable but stands for the inner function.
ln(2x^2 + 3x + 1) = ln(u)
Applying the chain rule formula,
d/dx f(g(x)) = d/du f(u) * d/dx g(x), where u = g(x)
d/dx ln(2x^2 + 3x + 1) = d/du ln(u) * d/dx (2x^2 + 3x + 1)
Compute the derivatives:
1/u * (4x + 3)
Substitute back the inner function for u:
1/(2x^2 + 3x + 1) * (4x + 3)
Simplify:
(4x + 3)/(2x^2 + 3x + 1).

Example 5

Differentiate the following function with the help of the chain rule:
f(x) = √(x^2 + 4x - 3)

Solution:

Identify the inner and outer functions:
• Inner function: x^2 + 4x - 3
• Outer function: √(x^2 + 4x - 3)
Substitute u = inner function in the outer function:
√(x^2 + 4x - 3) = √u
Differentiate the inner and outer functions separately:
• Inner function: d/dx (x^2 + 4x - 3) = 2x + 4
• Outer function: d/du √u = d/du u^(1/2) = (1/2) u^(-1/2)
Multiply the two derivatives and substitute the inner function back for u:
(2x + 4) * (1/2) u^(-1/2) = (2x + 4) * (1/2)(x^2 + 4x - 3)^(-1/2)
Simplify the expression:
(2x + 4)/(2√(x^2 + 4x - 3))
Therefore the derivative of √(x^2 + 4x - 3) is (2x + 4)/(2√(x^2 + 4x - 3)).

Example 6

Differentiate the following function with the help of the chain rule.
f(x) = sin(cos(tan(x)))

Solution:

Identify the inner and outer functions.
• Inner function: cos(tan(x))
• Outer function: sin(cos(tan(x)))
Substitute u for the inner function:
• Inner function: u = cos(tan(x))
• Outer function: sin(u)
Apply the chain rule formula,
d/dx f(g(x)) = d/du f(u) * d/dx g(x) ... where u = g(x),
d/dx sin(cos(tan(x))) = d/du sin(u) * d/dx cos(tan(x))
The derivative of sin(u) is cos(u),
= cos(u) * d/dx cos(tan(x))
Substitute back u = cos(tan(x)),
= cos(cos(tan(x))) * d/dx cos(tan(x))
Use the chain rule again on the remaining derivative because it is a composition of two functions. Identify the inner and outer functions:
• Inner function: tan(x)
• Outer function: cos(tan(x))
Substitute u for the inner function:
• Inner function: u = tan(x)
• Outer function: cos(u)
Apply the chain rule formula,
d/dx f(g(x)) = d/du f(u) * d/dx g(x) ... where u = g(x),
d/dx cos(tan(x)) = d/du cos(u) * d/dx tan(x)
The derivative of cos(u) is -sin(u) and that of tan(x) is sec^2(x):
= -sin(u) * sec^2(x)
Substitute back u = tan(x),
= -sin(tan(x)) * sec^2(x)
Substitute this derivative of cos(tan(x)) into the derivative obtained after the first application of the chain rule:
= cos(cos(tan(x))) * d/dx cos(tan(x))
= cos(cos(tan(x))) * -sin(tan(x)) * sec^2(x)
Simplify the expression:
= -sin(tan(x)) cos(cos(tan(x))) sec^2(x)
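The worked derivatives can be spot-checked numerically. The sketch below is my own addition (a central finite difference, not part of the original post); note that d/du 2^u = 2^u ln 2, a factor that is easy to drop by mistake:

```python
import math

# Each pair: (function, its chain-rule derivative from the worked examples).
cases = [
    (lambda x: math.sin(math.cos(x)),
     lambda x: -math.sin(x) * math.cos(math.cos(x))),
    (lambda x: 2 ** math.sin(x),
     lambda x: 2 ** math.sin(x) * math.log(2) * math.cos(x)),
    (lambda x: (x**2 + 4*x) ** 4,
     lambda x: (2*x + 4) * 4 * (x**2 + 4*x) ** 3),
    (lambda x: math.log(2*x**2 + 3*x + 1),
     lambda x: (4*x + 3) / (2*x**2 + 3*x + 1)),
    (lambda x: math.sqrt(x**2 + 4*x - 3),
     lambda x: (2*x + 4) / (2 * math.sqrt(x**2 + 4*x - 3))),
    (lambda x: math.sin(math.cos(math.tan(x))),
     lambda x: -math.sin(math.tan(x)) * math.cos(math.cos(math.tan(x)))
               * (1 / math.cos(x)) ** 2),
]

def numeric_derivative(f, x, h=1e-6):
    # Central finite difference approximates f'(x) to O(h^2).
    return (f(x + h) - f(x - h)) / (2 * h)

for f, df in cases:
    x = 1.3  # a point where every example above is defined
    err = abs(numeric_derivative(f, x) - df(x))
    # Relative tolerance, since some of these derivatives are large.
    assert err < 1e-4 * max(1.0, abs(df(x)))
```

If any claimed derivative were wrong, the corresponding assertion would fail at the sample point.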
Building on OS X 10.9
=====================

**NOTE:** This document is intended for developers who wish to build the dx-toolkit SDK and command-line tools from source. Instead of building from source, most users can install the prebuilt DNAnexus Platform SDK for OS X (a.k.a. dx-toolkit) release available for download at:

https://wiki.dnanexus.com/Downloads

### Setup steps
---------------

1. Install Xcode and the [Command Line Tools for XCode](https://developer.apple.com/downloads/). (Free registration required with Apple)

1. Install [MacPorts](http://www.macports.org/) for your version of OS X: https://www.macports.org/install.php

1. If you want your dx-toolkit build to be backwards-compatible on OS X 10.7, add these lines to ```/opt/local/etc/macports/macports.conf``` to ensure that your MacPorts Python build is compiled with 10.7 support:

    ```
    macosx_deployment_target 10.7
    MACOSX_DEPLOYMENT_TARGET 10.7
    ```

1. Run the MacPorts install and select commands to configure your build environment:

    ```
    sudo port install -s python27
    sudo port install cmake bison autoconf automake
    sudo port install boost -no_static
    sudo port select --set python python27
    sudo port install py27-pip py27-virtualenv
    sudo port select --set pip pip27
    sudo port select --set virtualenv virtualenv27
    ```

1. Clone the dx-toolkit repo, and build the SDK:

    ```
    cd dx-toolkit
    export CPATH=/opt/local/include
    make
    ```

### Upload agent build setup steps
----------------------------------

1. Install the upload agent build dependencies:

    ```
    sudo port install libmagic c-ares
    ```

1. Build upload agent:

    ```
    CC=clang CXX=clang++ VERSIONER_PERL_VERSION=5.16 make ua
    ```
Human Resources supports UMaine's educational mission by caring for our most important resource: our faculty and staff. Our welcoming, professional team provides services and solutions to meet the varied needs of all employees, from employment, compensation and benefits, to professional development, employee/labor relations and more. If you can't find the information you need on our website, contact us at hr-um@maine.edu.
<head>
<link rel="shortcut icon" type="image/x-icon" href="favicon.ico">
<link rel="stylesheet" type="text/css" href="PgTemplate.css">
<link rel="stylesheet" type="text/css" href="work_external.css">
<link href="https://fonts.googleapis.com/css?family=Arapey|Muli|Playfair+Display|" rel="stylesheet">
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<script src="PgTemplate.js"></script>
<style>
.row { width: 100%; display: block; }
</style>
</head>
<body>
<a id="xbutton" href="WorkPg.html" onclick="span_onclick()" class="close">&times; Return to portfolio</a>
<div id="container">
<div id="header">
<h1>Team Lead, Innovation Catalyst Grant Winner</h1>
<h2><em>Fall 2018</em></h2>
<img style="padding: 0px;" id="affiliations" src="img/affiliations.png">
<img style="padding: 0px;" src="img/team.png">
</div>
<div id="header">
<img id="mainimg" width="500px" src="img/dystonia2header.png">
</div>
<div id="rowTileGrid">
<div id="row1" class="row">
<div id="1left" class="left tile pic">
<img src="img/backgroundeyes.png">
</div>
<div id="1right" class="right tile text">
<h3>Purpose</h3>
<p>
Blepharospasm is a focal dystonia and movement disorder where abnormal, antagonistic muscle activity leads to impaired ocular control, impeding everyday activities like socializing and driving.<br><br>
Because the condition is rare, medical professionals have a limited understanding of it, relying on patient anecdotes and invasive testing for diagnosis.<br><br>
The data diary aims to encourage patients to be proactive in their healthcare and to engage them in retrospective analysis of their condition and its triggers.<br><br>
As the team lead on this project through Enabletech (Berkeley's assistive technology org), I produced a prototype that could be a noninvasive, outpatient wearable tool to quantify dystonia. This project was supported through the Jacobs Institute of Design's Innovation Catalyst program, which awarded me a grant of up to $2000.
<br><br>
</div>
</div>
<div id="row2" class="row">
<h3>Prior Work</h3>
<div id="2left" class="left tile text ">
<img src="img/priorwork.png">
</div>
<div id="2right" class="right tile pic">
The previous iteration that I took to the International Symposium of Academic Makerspaces 2018 at Stanford integrated a Raspberry Pi Zero, spy camera modules, and 3D printed housing.<br><br>
The software pipeline then consisted of HAAR facial landmark detection run using OpenCV. An equation for eye aspect ratios (eye height vs. eye width) was used as a quantitative correlate for blinking.<br><br>
One caveat to the previous approach is that it (and, it seemed, the field of eye tracking) relied on frontal face detection. What this meant was that if the head moved (i.e. to a three-quarters profile), then the detection of the eyes went awry. This is because faces are usually detected first as a rectangle, and then the eyes are found via HAAR cascade detection. My next prototype accounted for this by fixing the prototype on the eye and moving with the head.
</div>
</div>
<div id="row3" class="row">
<h3>Software Side</h3>
<div style="width: 70%;height:50%;" class="right tile pic">
<img src="img/webinterface12.png">
</div>
<div style="width:30%;height:50%;" class="left tile text">
<p>I implemented the software pipeline. The current prototype has a web application made with the Django framework, which I taught myself over the course of my fall semester. Django allowed me to integrate the Python image processing algorithms I had written with my front-end development hobby (HTML, CSS, d3.js, Javascript/jQuery).</p><br>
<p>
The individual would enter an eye episode video (left), which would be exploded into frames sampled every 100 milliseconds (right).
</p> </div> <div style="width:70%; height:50%;" class="left tile pic"> <img src="img/webinterface34.png"><br><br> </div> <div style="width: 30%;height:50%;" class="right tile text"> <p>The individual would then input points that would surround the eye to create a convex hull around the eye, so that we could filter out parts of the image that would make our data noisy. (left) This convex hull would create a binary mask that would be applied to each frame. (right)</p> </div> <div style="width: 30%; height:50%;" class="left tile text"> <p>Afterwards, each masked image is thresholded so that we can create a time series from the video.(left) Based on the sampling rate, we can also run some basic statistics on the data. In the overall diary/database, the new video is documented as a new entry. (right)</p> </div> <div style="width: 70%; height:50%;" class="right tile pic"> <img src="img/webinterface56.png"></div> <div style="width: 30%; height: 50%" id="" class="right tile text"><p>One important feature I implemented was the tooltip, which allowed for a side-by-side comparison of the 2 dimensional data to the video feed. For one thing, it is important for visual confirmation/debugging. For another, and perhaps more importantly, it allowed me to explore eye activity as a continuous signal while still understanding what the condition actually looked like to others. After all, the whole point of this prototype was to give people with eye conditions the chance to see what everyone else can see, but they can only feel.</p><br><br> During the development process, I wrote/used two algorithmic approaches: 1) projective homography transforms, 2) convex hulls. The first approach was to unwarp the reflection of the eye captured in the mirror and then threshold on each transformed image frame. However, I eventually found the projective transform limiting, because it relied on 4 pairs of correspondence points, and this relationship was superimposed across all frames. 
Convex hulls could take in an arbitrary number of points and be more stable.</div>
<div style="width: 70%; height: 50%; " id="" class="left tile pic"><img src="img/webinterface7.png"></div>
</div>
<div id="row4" class="row">
<div id="" class="left tile pic">
<img src="img/process0.png">
<img src="img/process1.png">
</div>
<div id="" class="right tile text">
<h3>Process</h3>
Thanks to the financial freedom afforded by the grant, my team and I explored a variety of sensors including: Raspberry Pi spy camera modules, industry spy cameras, borescopes, and EMG sensors.<br><br>
In early iterations, we plugged in Raspberry Pi Zeros and soldered together EMG circuits. We found that though the Raspberry Pi Zero had the benefit of directly processing the video stream as it came in (as it is a full-blown computer), it was too laggy. The EMG circuit also could not pick up the nuances in muscle activity around the eye. We also looked into wireless spy cameras, which led to a rapid prototype that was a little too intrusive for our tastes. Thus, we decided on borescopes, which had a tiny form factor for the camera.<br><br>
</div>
</div>
<div id="row5" class="row">
<div id="" class="right tile pic">
<img src="img/process2.png">
<img src="img/process3.png">
</div>
<div id="" class="left tile text">
Everybody has a different head and face, so the angle and distance of the borescope setup would vary from person to person. Wanting a flexible system, Christian worked on a rail system that would allow the borescope to slide to and fro. We faced significant challenges with friction from the 3D printed parts before adding magnets to our design.<br><br>
Neodymium magnets were added to attract to nails drilled into the glasses housing. These magnets were hot glued into circular crevices in the borescope holder.
The ensuing design changes allowed us to make our housing more modular and printable.<br><br> To vary the angle the mirror takes with the eye and to mount the mirror in general, we also needed an arm to jut out in front of the eye. This was also 3D printed, and its form factor changed from a rotatable disc to a triangular support. </div> </div> <div id="row6" class="row"> <h3>Results</h3> <div id="" class="right tile pic"> <img src="img/dystonia2header.png"> <img src="img/webinterface7.png"> <img src="img/results2.png"> </div> <div id="" class="left tile pic"> Our resulting prototype consisted of the following physical setup and software web application.<br><br> We created housing that snapfits onto a pair of glasses. This piece was drilled into with nails that attract the magnets of the borescope holder. We designed a borescope holder that circumscribes the borescope. It has crevices on one side for magnets to be hot glued in. A USB borescope camera ports the video stream into the computer. A laser cut acrylic mirror is snugly fit into an angle-adjustable 3D-printed arm. A user-friendly web application provides the user with analysis on their eye episode. </div> </div> <div id="row7" class="row"> <div id="" class="left tile pic"><img src="img/results1.png"></div> <div id="" class="right tile text"><h3>Future work</h3> <p>More data collection should be conducted to generate comparisons between normal groups and patient groups. User studies in general should be conducted to assess the wearable as a design.</p><br><br> <p>Iteration, always iteration! Some parts could be improved in structural integrity.</p><br><br> <p>Another affordance to look into is virtual reality, whose headsets could be hacked into for eye-tracking capabilities. 
Though this hardware setup would then not be relevant, the software pipeline still would be applicable.</p></div> </div> <div id="row8" class="row"> <div id="" class="right tile pic"><img src="img/results4.png"></div> <div id="" class="left tile text"> <h3>Reflection</h3> This was my first time leading a team on an engineering project, developing a web application, writing image processing algorithms, and receiving a grant on a personal passion project! I learned not only how to hack, code, and prototype, but also how to teach and to manage. Along the way, I got wonderful design instruction from Chris Myers at the Invention Lab about everything from magnets to CAD principles and algorithm advice from Professor AA Efros. </div> </div> </div> </div> </body>
Q: Printing out the matrix with overloaded << operator

I have a 2D array that represents a matrix, and I need to print it via an overloaded << operator. The declaration of that overloaded operator is

std::ostream &operator <<(std::ostream &os, const Matrix &matrix)
{
    return os;
}

and it works well - when I write

ostringstream os;
Matrix a;
// fill in the matrix
os << a;

this function is called... but although I've read some tutorials, I didn't find out how to make it print out the values.

Can somebody please show me some sample code of how to implement a very basic version of printing out the values from the matrix? By the way, the matrices can have arbitrary sizes.

A: You either need to write the result from the ostringstream to cout:

ostringstream os;
Matrix a;
// fill in the matrix
os << a;
cout << os.str();

or you do it directly:

Matrix a;
// fill in the matrix
cout << a;

A: Without seeing your Matrix class definition it's hard to guess how it's implemented, but you probably want something like this:

std::ostream& operator<< (std::ostream &os, const Matrix &matrix)
{
    for (int i = 0; i < matrix.rows; ++i)
    {
        for (int j = 0; j < matrix.cols; ++j)
            os << " " << matrix.data[i * matrix.cols + j];
        os << std::endl;
    }
    return os;
}

To display the matrix you would then just do this:

Matrix a;
// fill in the matrix
cout << a;

This invokes the above operator<< implementation to print the matrix to stdout. Obviously you can use any other appropriate output stream instead of cout.
Q: jQuery Validation plugin - validate group of input fields and at least one group is required

I am trying to validate my form with the jQuery Validate plugin. My form requires at least one pair of these inputs (e.g. mobilePhone/mobilePhone_p, or/and personalPhone/personalPhone_p, or/and workPhone/workPhone_p), but I don't know how to link one text field (phone number) with its associated radio button (preferred yes/no) in order to validate the form and check that there is at least one pair. If one or more pairs are set, my form also requires at least one checked radio button.

HTML form:

<form method="post" action="#" id="phoneForm">
    <table>
        <tr>
            <td><label for="mobilePhone">Mobile</label></td>
            <td><input type="text" class="mp_phone" name="mobilePhone" id="mobilePhone" value="" /></td>
            <td><label for="mobilePhone_p">Preferred phone number</label>
                <input type="radio" name="num_p" id="mobilePhone_p" value="mobilePhone_p" /></td>
        </tr>
        <tr>
            <td><label for="personalPhone">Personal</label></td>
            <td><input type="text" class="mp_phone" name="personalPhone" id="personalPhone" value="" /></td>
            <td><label for="personalPhone_p">Preferred phone number</label>
                <input type="radio" name="num_p" id="personalPhone_p" value="personalPhone_p" /></td>
        </tr>
        <tr>
            <td><label for="workPhone">Work</label></td>
            <td><input type="text" class="mp_phone" name="workPhone" id="workPhone" value="" /></td>
            <td><label for="workPhone_p">Preferred phone number</label>
                <input type="radio" name="num_p" id="workPhone_p" value="workPhone_p" /></td>
        </tr>
    </table>
    <button type="submit" id="validateFormButton">Submit</button>
</form>

jQuery Validate:

$("#phoneForm").validate({
    rules: {
        mobilePhone: {
            phone: true,
            require_from_group: [1, ".mp_phone"]
        },
        personalPhone: {
            phone: true,
            require_from_group: [1, ".mp_phone"]
        },
        workPhone: {
            phone: true,
            require_from_group: [1, ".mp_phone"]
        },
        num_p: {
            required: true
        }
    }
});

(phone is a custom method to validate phone numbers, and require_from_group is defined in http://ajax.aspnetcdn.com/ajax/jquery.validate/1.11.1/additional-methods.js)

It works when the fields require at least one text field, but the radio button group is not validated when require_from_group is set on other fields... How can I validate my form using the jQuery Validation plugin?

A: I found a solution by customizing the require_from_group method (based on Jquery .validate require_from_group):

jQuery.validator.addMethod("require_from_group_with_radio", function(value, element, options) {
    var numberRequired = options[0];
    var selector = options[1];
    var fields = jQuery(selector, element.form);
    var filled_fields = fields.filter(function() {
        // check if not empty and radio is checked
        return jQuery(this).val() !== "" &&
               jQuery(this).closest("tr").find("input:radio").is(":checked");
    });
    var empty_fields = fields.not(filled_fields);
    // we will mark only the first empty field as invalid
    if (filled_fields.length < numberRequired && empty_fields[0] == element) {
        return false;
    }
    return true;
}, jQuery.format("Please fill out at least {0} of these fields and one of the associated options."));

Validate:

$("#phoneForm").validate({
    rules: {
        mobilePhone: {
            phone: true,
            require_from_group: [1, ".mp_phone"],
            require_from_group_with_radio: [1, ".mp_phone"]
        },
        personalPhone: {
            phone: true,
            require_from_group: [1, ".mp_phone"],
            require_from_group_with_radio: [1, ".mp_phone"]
        },
        workPhone: {
            phone: true,
            require_from_group: [1, ".mp_phone"],
            require_from_group_with_radio: [1, ".mp_phone"]
        }
    }
});

We just add jQuery(this).closest("tr").find("input:radio").is(":checked") to the filter condition. To make the method generic, the radio selector could simply be parameterized as well.
Observation 240524: Lecania A. Massal.
Collection location: Paimogo, Lourinhã, Portugal [Click for map]
Who: zaca

Growing on a piece of rock in a calcareous habitat, close to the sea. My impression in the field was that this could be a weird Diplotomma; nevertheless, I collected a sample that I have just analyzed. Starting with a section of an apothecium, I immediately realized I was in the presence of something completely different than I had thought. Going further, the KOH reaction on the section was negative; it simply enhanced the yellowish coloration of a part of the hymenium. Furthermore, pressing the slide, I could see paraphyses with dark but not much enlarged tips, and small asci, clavate and somewhat flexuous, containing 8 hyaline 1-septate spores. The ascus apex showed a big tholus and a prominent apical beak, like that in Bacidia-type asci. This was unexpected, and the first task was to establish the genus of this specimen. Using Ref. 2, the answer was Lecania! My knowledge of this genus is very limited; I have only identified one of its species before, and not on the basis of microscopic characters.

After noting the shape of the spores - ellipsoid to fusiform, many of them with one cell bigger than the other, sometimes but not always fusiform - I measured their dimensions:

(10.7) 11.3 – 12.9 (14.1) × (3.7) 4.3 – 5 (5.3) µm
Q = (2.3) 2.4 – 2.9 (3); N = 22
Me = 12.1 × 4.6 µm; Qe = 2.6

and went to the keys. It is worth mentioning beforehand that many small crystals were found in the slides, which to a certain extent caused difficulties in the observation of the asci and spores.

In the Sonoran Flora, Vol. 2, I was led to a group of species formed by L. chalcophila, L. rayiana and L. turicensis. The same exercise in the British Flora leads to L. turicensis, if considering a white pruinose thallus or apothecia, or to L. sylvestris in the other case, taking into account the calcareous substrate. In my opinion the thallus and the apothecia are epruinose or only slightly pruinose. It should be mentioned that L. chalcophila and L. rayiana are not covered in the British Flora and that L. sylvestris is not covered in the Sonoran Flora.

The next step was IKI staining of the asci, which gave another surprise; I think that the asci are of the "Biatora-type", characterized by an ascus tholus that has a more or less conical ocular chamber and a high conical axial body surrounded by an amyloid zone darker than the rest of the stained apex. If this is so, then most of the above-cited species are ruled out (by having "Bacidia-type" asci, according to the Sonoran Flora), with the exception of L. sylvestris, according to Ref. 3. Maybe this point has to be clarified.

Finally, let me mention that, according to Ref. 3, L. sylvestris has a thin and granular, sometimes immersed, thallus and that its spores measure 11.1–11.4 × 2.9–3.4 µm, being ellipsoid to fusiform, although the sketch illustrating their shape only shows fusiform ones. In my specimen the thallus, which is not immersed, should most probably be classified as areolate, and the ranges of the spore dimensions are larger than that.

About Lecania A. Massal.
Public Description (Default)

Microscopy: Apothecial section and its KOH reaction;
Microscopy: Hymenium;
Microscopy: Asci (x1000, in KOH);
Microscopy: Asci and spores (x1000, in KOH);
Microscopy: Asci staining with IKI, after pretreatment in KOH (x1000, in KOH).

Used references:
Ref. 1 – British Flora;
Ref. 2 – Sonoran Flora, Vol. 2;
Ref. 3 – R.R. Næsborg: Taxonomic revision of the Lecania cyrtella group based on molecular and morphological evidence, Mycologia, 100(3), 2008, pp. 397–416; available at http://www.mycologia.org/content/100/3/397.full.pdf

Proposed names (based on microscopic features):
- Lecania A. Massal.: zaca, 87% (1)
- Lecania sylvestris (Arnold) Arnold: zaca, 29% (1)
- Lecania turicensis (Hepp) Müll. Arg.: zaca, 29% (1)
Source: https://math.stackexchange.com/questions/1918217/the-mod97-operation-in-iban-validation

# The mod97 operation in IBAN validation

The International Bank Account Number (IBAN) is validated by a $\bmod\ 97$ operation. Suppose an account number is like sd1234abcd78965h; then the following steps are performed:

1. The first four characters of the IBAN are pulled out from the beginning and appended at the end of the string.

2. All the letters in the string of characters obtained this way are replaced by the ASCII value of their corresponding uppercase letter decreased by $55$ (ASCII value $-55$).

3. The modulus of the number obtained this way, say $x$, with respect to $97$ is checked.

4. If the modulus is $1$, then it is a valid IBAN.

For the third step, Wikipedia mentions an algorithm, which goes as follows:

https://en.wikipedia.org/wiki/International_Bank_Account_Number#Modulo_operation_on_IBAN

A nine-digit number is formed by taking the leftmost $9$ digits of $x$. The residue of this number modulo $97$, $r$, is obtained. Then another nine-digit number, $q$, is formed by concatenating $r$ and the next $7$ digits of the number. This process is continued until the last value of $q \bmod 97$ is obtained. If it is $1$, that validates the number.

But I couldn't prove that a number $n$ for which $n \bmod 97 = t$, where $t$ is between $1$ and $96$, will in the end yield $t$ when subjected to the above algorithm. Or is this a special case for numbers $n$ for which $n \bmod 97 = 1$? Can you show me a proof of this algorithm or disprove it?

Answer: The algorithm works because $a\equiv a'\pmod {97}$ implies that also
$$a\cdot 10^k+b\equiv a'\cdot 10^k+b\pmod{97}.$$

• If $x<10^9$ then $x<2^{31}$, so that the numbers fit into signed 32-bit integers – Hagen von Eitzen, Sep 7 '16 at 17:56
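The chunked reduction is easy to sanity-check in code. The sketch below is my own illustration (the name `piecewise_mod97` is made up); it verifies that processing 9 digits first and then 7 at a time, with the running remainder glued onto the front of each chunk, always agrees with a direct big-integer reduction:

```python
def piecewise_mod97(digits: str) -> int:
    """Compute int(digits) % 97 nine digits at a time.

    Mirrors the Wikipedia construction: reduce the leftmost 9 digits
    mod 97, then repeatedly prepend the remainder (at most 2 digits)
    to the next 7 digits. Every intermediate value has at most 9
    digits, so it fits comfortably in a signed 32-bit integer.
    """
    r = int(digits[:9]) % 97
    for i in range(9, len(digits), 7):
        r = int(str(r) + digits[i:i + 7]) % 97
    return r

# The chunked result always agrees with the direct reduction, because
# a ≡ a' (mod 97) implies a*10^k + b ≡ a'*10^k + b (mod 97).
n = "3214282912345698765432161182"
assert piecewise_mod97(n) == int(n) % 97
```

This also illustrates the answer's point: the congruence $a\cdot 10^k+b\equiv a'\cdot 10^k+b\pmod{97}$ is exactly what licenses replacing a processed prefix by its remainder, so the final result equals $n \bmod 97$ for every $n$, not just when the remainder is $1$.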
/* Based on https://github.com/yahoo/pure/issues/326#issuecomment-44527271 */

/* pure-hidden-sm */
@media screen and (max-width: 47.938em) {
  .pure-visible-md { display: none; }
  .pure-visible-lg { display: none; }
  .pure-visible-xl { display: none; }
  .pure-hidden-sm  { display: none; }
}

/* pure-hidden-md */
@media screen and (min-width: 48em) and (max-width: 63.938em) {
  .pure-visible-sm { display: none; }
  .pure-visible-lg { display: none; }
  .pure-visible-xl { display: none; }
  .pure-hidden-md  { display: none; }
}

/* pure-hidden-lg */
@media screen and (min-width: 64em) and (max-width: 79.938em) {
  .pure-visible-sm { display: none; }
  .pure-visible-md { display: none; }
  .pure-visible-xl { display: none; }
  .pure-hidden-lg  { display: none; }
}

/* pure-hidden-xl */
@media screen and (min-width: 80em) {
  .pure-visible-sm { display: none; }
  .pure-visible-md { display: none; }
  .pure-visible-lg { display: none; }
  .pure-hidden-xl  { display: none; }
}
{ "redpajama_set_name": "RedPajamaGithub" }
2,227
\section{Introduction} \IEEEPARstart{N}{on}-orthogonal multiple access (NOMA) is seen as one of the strongest candidates for future wireless systems [1]. The NOMA principle allows serving multiple users on the same resource blocks by splitting them in the power domain, so that NOMA outperforms orthogonal multiple access (OMA) techniques in terms of achievable sum rate and outage probability [2]. This is achieved by implementing superposition coding (SC) at the transmitter and a successive interference canceler (SIC) at the receivers [3]. NOMA has received tremendous recent attention from researchers due to its potential. However, most of these studies assume serving only two NOMA users, since increasing the number of NOMA users boosts system complexity and limits the advantage of NOMA due to inter-user interference (IUI). In addition, the authors in [4] showed that the error performance of NOMA cannot compete with OMA systems for both users even if only two NOMA users are served. Hence, the trade-off between the gain in outage and capacity performance and the degradation in bit error performance caused by IUI is questionable. Moreover, considering the receiver complexity required to implement the SIC process, NOMA may not be a feasible solution even when only two users are served. Spatial modulation (SM) is another technique proposed for spectral efficiency in MIMO systems [5]. In SM, modulation is performed by splitting the input data stream into two groups. While one group is modulated by an $M$-ary modulation scheme, the other group determines which transmitting antenna is activated. Space shift keying (SSK) was then proposed as a subset of SM in which the input data stream is transmitted by mapping it only to the transmitting-antenna selection [6]. Multi-user (MU) SM schemes have been investigated in the literature, but mostly for the uplink scenario [7].
In [8], the authors analyzed the performance of MU-SM with channel precoding at the transmitter in a downlink scenario. Although SM/SSK is a spectrally efficient technique, MU applications in which all users are served by SM/SSK boost the system complexity due to the channel precoder and the need for full channel state information at the transmitter (CSIT), which makes them impractical. There are also some studies in the literature which consider NOMA and SM principles together [9-10]. Nevertheless, these applications still encounter IUI, so SIC is needed at the receivers; hence, the low error performance and the implementation complexity remain. In [11], the authors point out the challenges of NOMA networks, consider SM-assisted MIMO-NOMA networks and provide simulation results; however, no analytical analysis is given. In this letter, we analyze the spatial multiple access (SMA) technique, which is based on implementing the SM principle for the input data streams of the different users, for MIMO systems. SMA allocates users to different domains (i.e., spatial and power) rather than only the power domain as in NOMA, so that the users experience IUI-free communication. Hence, SMA achieves the error performance of OMA systems in addition to providing better outage and capacity performance than conventional NOMA systems. SMA activates only one transmitting antenna during one symbol duration, so the number of radio frequency (RF) chains needed is limited to only one. Moreover, not needing a SIC implementation at the receivers provides lower complexity and latency than NOMA. The rest of the paper is organized as follows. In Section II, the SMA system model is introduced and the maximum likelihood (ML) detections at the users are given. In Section III, the performance analyses of the SMA system are given in terms of the bit error probability, capacity and outage probability.
Then, the validation of the derived expressions via computer simulations is presented, in addition to simulation comparisons of SMA and NOMA. Finally, in Section IV the results are discussed and the paper is concluded. \section{System Model} We consider a downlink MIMO scenario with a base station (BS) and two mobile users (i.e., UE-1 and UE-2). The BS is equipped with $N_t$ antennas, whereas each user is equipped with $N_r$ antennas. The spatial multiple access system model is shown in Fig. 1. \begin{figure}[!t] \centering \includegraphics[width=8.5cm]{fig1.eps} \caption{The illustration of the SMA} \label{fig1} \end{figure} The channel gains for UE-1 and UE-2 are represented as $\mathbf{H_1}$ and $\mathbf{H_2}$, respectively.\footnote{In the rest of this paper, the notation is as follows: bold capital letters $\mathbf{H}$ denote matrices and lower-case bold letters $\mathbf{x}$ denote vectors. We use $(.)^T$ for transpose, $(.)^H$ for conjugate transpose and $||.||_F$ for the Frobenius norm of a matrix/vector. We use $\left|.\right|$ for the absolute value of a scalar and $\binom{.}{.}$ for the binomial coefficient. $CN(\mu,\sigma)$ is a complex Gaussian distribution with independent real and imaginary parts, each with mean $\mu$ and variance $\frac{\sigma}{2}$.} However, for notational simplicity, the user number is dropped from $\mathbf{H}$ and from the related vectors $\mathbf{h}$ in the rest of the paper. The channel gains between each transmitting and each receiving antenna of a user are assumed to be flat fading and independent and identically distributed (i.i.d.) as $CN(0,\sigma^2)$. The CSI is assumed to be known at the receivers. $\mathbf{q_1}$ and $\mathbf{q_2}$ are the binary vectors of UE-1 and UE-2 with $m_1=\log_2(M)$ and $m_2=\log_2(N_t)$ bits, respectively.
The $\mathbf{q_1}$ and $\mathbf{q_2}$ vectors are jointly mapped into another vector $\mathbf{x}$ of size $N_t$, in which only one element is non-zero. The non-zero element is obtained from the $M$-ary modulation constellation applied to the $\mathbf{q_1}$ vector. The index at which the non-zero element is placed is determined by the SSK modulation of the $\mathbf{q_2}$ vector. The resulting vector $\mathbf{x}$ is \begin{equation} \begin{split} \ \mathbf{x}=&\left[0 \ 0...x_n...0 \ 0\right]^T, \quad j=f_{SSK}(\mathbf{q_2}), \ x_n=f_{M-ary}(\mathbf{q_1}), \\ \ &\ \ \ \ \ \ \ \uparrow j^{th}position \end{split} \end{equation} where $f_{SSK}(.)$ and $f_{M-ary}(.)$ denote the SSK and $M$-ary modulation mapping operations, respectively. The $\mathbf{x}$ vector is transmitted to each user over the MIMO channel $\mathbf{H}$. The MIMO channel $\mathbf{H}$ can be written in the form of vectors for each transmitting antenna $v$ as follows \begin{equation} \mathbf{H=\left[h_1,\ h_2, ...,\ h_{N_t}\right]}, \end{equation} where \begin{equation} \mathbf{h_v}=\left[ h_{v,1},\ h_{v,2}, ...,\ h_{v,N_r} \right]^T. \end{equation} The received vector for each user is given by $\mathbf{y_i}=\mathbf{h_{(v=j)}}x_{n}+\mathbf{w_i}$, $i=1,2$, where $\mathbf{w_i}$ is the $N_r$-dimensional additive white Gaussian noise vector, each dimension distributed as $CN(0,N_0)$. \subsection{Detection at the users} \subsubsection{UE-1} The symbol of UE-1 is sent according to the $M$-ary modulation constellation from the selected transmitting antenna and is received by the $N_r$ receiving antennas. The transmitting-antenna index has no effect on the detection of the symbols of UE-1. Hence, UE-1 implements a maximum likelihood (ML) receiver for the $M$-ary constellation with maximum-ratio combining (MRC), as in conventional OMA systems.
The ML decision for the symbols of UE-1 is \begin{equation} \hat{x}_n=\argmin_{n}{\left|\left|\mathbf{{y_1}-{h_{v=j}}}x_n\right|\right|^2}, \quad n=1,2,..,M, \end{equation} where $x_n$ is the complex signal at constellation point $n$ of the $M$-ary modulation. \subsubsection{UE-2} The binary symbols of UE-2 are mapped onto the transmitting-antenna index. Hence, UE-2 must detect from which antenna the complex symbol of UE-1 was sent. Since the symbol sent from the active antenna is complex, the optimum SM detection algorithm of [12] should be implemented instead of the SSK detection of [6]. The ML-based SM detection is given by \begin{equation} \left[\hat{j}, \hat{x}_n\right]=\argmin_{j,n}{\sqrt{\rho}\left|\left|\mathbf{g_{j,n}}\right|\right|_F^2-2Re\{\mathbf{y_2}^H\mathbf{g_{j,n}}\}}, \end{equation} where $j=1,2,..,N_t$, $n=1,2,..,M$, $\mathbf{g_{j,n}}=\mathbf{h_{j}}x_n$ and $\rho$ is the average signal-to-noise ratio (SNR) for each antenna. Although the optimal SM detection jointly detects the transmitting-antenna index and the symbol of UE-1, UE-2 only takes the transmitting-antenna index ($\hat{j}$) as output; in this way, the symbol of UE-2 is estimated. \section{Performance Analyses} \subsection{Average Bit Error Probability (ABEP)} \subsubsection{UE-1} The conditional bit error probability (BEP) of UE-1 is equal to the error probability of the well-known $1\times N_r$ SIMO system using MRC. Hence, the conditional BEP for UE-1 is $P_1(\left.e\right|_{h_j})=\alpha Q(\sqrt{\beta{\gamma_{1_b}}})$, where ${\gamma_{1_b}}$ is the total received SNR per bit at the output of the MRC for UE-1. The coefficients $\alpha$ and $\beta$ depend on the $M$-ary constellation; for example, for QPSK $\alpha=1$ and $\beta=2$.
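The transmit mapping of eq. (1) and the joint ML detection of eq. (5) can be sketched numerically. The snippet below is an illustrative sketch, not the authors' simulation code: the sizes ($N_t=N_r=M=4$, QPSK) and noise level are hypothetical, and the equivalent minimum-distance form $\min_{j,n}\|\mathbf{y}-\mathbf{h}_j x_n\|^2$ of the ML metric is used.

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr, M = 4, 4, 4                      # hypothetical sizes, Nt = Nr = M
# Unit-energy QPSK constellation carrying UE-1's bits
const = np.exp(1j * (2 * np.pi * np.arange(M) / M + np.pi / 4))

# Eq. (1): UE-2's bits select the active antenna j, UE-1's bits select x_n
j_true, n_true = 2, 1
x = np.zeros(Nt, dtype=complex)
x[j_true] = const[n_true]

# i.i.d. Rayleigh channel H (Nr x Nt) with CN(0, 1) entries
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# Received vector y = h_j * x_n + w, small-noise regime for illustration
N0 = 1e-3
w = np.sqrt(N0 / 2) * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))
y = H @ x + w

# Joint ML search over (antenna, symbol): distance form of eq. (5)
metric = np.array([[np.linalg.norm(y - H[:, j] * const[n]) ** 2
                    for n in range(M)] for j in range(Nt)])
j_hat, n_hat = np.unravel_index(np.argmin(metric), metric.shape)
# UE-2 keeps only j_hat; UE-1 (which knows j) would search over n alone
```

In the noiseless limit the metric is exactly zero at the true pair $(j, n)$, which is a quick way to verify the search indices.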
The ABEP of UE-1 is obtained by averaging the conditional BEP over the instantaneous SNR $\gamma_{1_b}$ and becomes \begin{equation} \ P_1(e)=\int_{0}^{\infty}{P_1(\left.e\right|_{h_j})p_{\gamma_{1_b}}(\gamma_{1_b})d{\gamma_{1_b}}}, \end{equation} where $p_{\gamma_{1_b}}(\gamma_{1_b})$ is the probability density function of $\gamma_{1_b}$; when ${h_{j,l}}$ is Rayleigh distributed, it is chi-square distributed with $2N_r$ degrees of freedom and given in [13] by \begin{equation} \ p_{\gamma_{1_b}}(\gamma_{1_b})=\frac{{\gamma_{1_b}}^{N_r-1}e^{\sfrac{-{\gamma_{1_b}}}{\overline{\gamma}_{1_b}}}}{\Gamma(N_r){\overline{\gamma}_{1_b}}^{N_r}},\ \ \ \ \ \ \ \overline{\gamma}_{1_b}=\sfrac{\rho\sigma_1^2}{\log_2{M}}. \end{equation} The closed-form expression for the ABEP is obtained by substituting (7) into (6). For different modulation schemes, the ABEP expressions are provided in [14]. For BPSK/QPSK (Gray-coded) modulation it is given as \begin{equation} \ P_1(e)=\left(\frac{1-\mu_1}{2}\right)^{N_r}\sum_{k=0}^{\eta}\binom{\eta+k}{k}\left(\frac{1+\mu_1}{2}\right)^k, \end{equation} where $\mu_1=\sqrt{\frac{\overline{\gamma}_{1_b}}{1+\overline{\gamma}_{1_b}}}$ and $\eta \triangleq N_r-1$. \subsubsection{UE-2} The exact ABEP for UE-2 cannot be determined; hence the union bound, which is widely used in the ABEP analysis of SM/SSK systems in the literature, is analyzed. The union bound for the ABEP of the optimal SM detection is given in [12] as \begin{equation} \ P(e)\le\sum_{j=1}^{N_t}\sum_{\hat{j}=1}^{N_t}\sum_{n=1}^{M}\sum_{\hat{n}=1}^{M}\frac{N(n,\hat{n})P(x_{j,n}\rightarrow x_{\hat{j},\hat{n}})}{MN_t}, \end{equation} where $N(n,\hat{n})$ is the number of bits differing between the symbols $x_n$ and $x_{\hat{n}}$, and $x_{j,n}$ represents the symbol $x_{n}$ sent from transmitting antenna $j$. $P(x_{j,n}\rightarrow x_{\hat{j},\hat{n}})$ is the pairwise error probability (PEP) of the ML decision given in (5), i.e., the probability that $x_{\hat{j},\hat{n}}$ is estimated when $x_{j,n}$ is sent.
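As a numerical sanity check of eq. (8), the sketch below (standard library only; BPSK, so $\alpha=1$, $\beta=2$; the values $\overline{\gamma}_{1_b}=10$ and $N_r=2$ are hypothetical) compares the closed form against direct numerical integration of eq. (6) with the chi-square PDF of eq. (7):

```python
import math

def abep_closed_form(gbar, Nr):
    """Eq. (8): ABEP of MRC over Nr i.i.d. Rayleigh branches for
    BPSK/QPSK, with per-branch average SNR per bit gbar."""
    mu = math.sqrt(gbar / (1 + gbar))
    eta = Nr - 1
    return ((1 - mu) / 2) ** Nr * sum(
        math.comb(eta + k, k) * ((1 + mu) / 2) ** k for k in range(eta + 1))

def abep_numerical(gbar, Nr, steps=200_000):
    """Direct numerical integration of eq. (6): the conditional BEP
    Q(sqrt(2*gamma)) averaged over the chi-square PDF of eq. (7)."""
    gmax = 50.0 * gbar * Nr                # truncation point; tail is negligible
    h = gmax / steps
    total = 0.0
    for i in range(1, steps):
        g = i * h
        q = 0.5 * math.erfc(math.sqrt(g))  # Q(sqrt(2g)) = erfc(sqrt(g))/2
        pdf = g ** (Nr - 1) * math.exp(-g / gbar) / (
            math.factorial(Nr - 1) * gbar ** Nr)
        total += q * pdf * h
    return total

cf = abep_closed_form(10.0, 2)             # hypothetical gbar = 10, Nr = 2
num = abep_numerical(10.0, 2)
# cf and num should agree to several significant digits
```

The agreement confirms that eq. (8) is the closed-form evaluation of the integral in eq. (6), with $\mu_1$ computed from the per-branch average SNR of eq. (7).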
At UE-2, the output vector consists only of the estimated antenna-index bits; therefore, the union bound for UE-2 reduces to \begin{equation} \ P(e)\le\sum_{j=1}^{N_t}\sum_{\hat{j}=1}^{N_t}\frac{P(x_{j,n}\rightarrow x_{\hat{j},\hat{n}})}{N_t}. \end{equation} In the case of Rayleigh fading channels, by utilizing the PEP given in [12]\footnote{In [12], the PEP is only given for real modulation constellations (i.e., BPSK).}, for $M$-ary constellations the PEP is determined as \begin{equation} \begin{split} \ P(x_{j,n}\rightarrow x_{\hat{j},\hat{n}})={\mu_2}^{N_r}\log_2{M} \sum_{k=0}^{\eta}\binom{\eta+k}{k}\left(1-\mu_2\right)^k, \end{split} \end{equation} where $\mu_{2}=\frac{1}{2}\left(1-\sqrt{\frac{\sigma_a^2}{1+\sigma_a^2}}\right)$ and $\sigma_a^2=\frac{\rho{\sigma_2}^2\left(\left|x_{n}\right|^2+\left|x_{\hat{n}}\right|^2\right)}{4}$. By substituting (11) into (10), the union bound for the ABEP of UE-2 turns out to be \begin{equation} \ P_2(e)\le { N_t {\mu_2}^{N_r}\log_2{M}}\sum_{k=0}^{\eta}\binom{\eta+k}{k}\left(1-\mu_2\right)^k. \end{equation} \subsection{Ergodic Sum Rate} The achievable (Shannon) capacities of the users for the proposed SMA system are \begin{equation} \ {R_1}^{SMA}=\log_2{(1+{\gamma_1})}, \ \ {R_2}^{SMA}=\log_2{(N_t)}, \end{equation} where $\gamma_1=\gamma_{1_b} \log_2{M}$. The achievable capacity of UE-2 depends only on the number of transmitting antennas (when the receiver sensitivity is ignored). Hence, to obtain the ergodic sum rate of the system, the ergodic capacity of UE-1 should be analyzed. The ergodic capacity of UE-1 is given by \begin{equation} {\overline{C}}_1=\int_{0}^{\infty} \log_2{(1+{\gamma_1})}p_{\gamma_1}(\gamma_1)d\gamma_1. \end{equation} After substituting the PDF given in (7) into (14), with some algebraic manipulations the ergodic capacity of UE-1 is obtained by utilizing [15, eq. (4.333.5)].
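Eq. (12) is likewise straightforward to evaluate. The sketch below uses hypothetical parameters ($N_t=N_r=M=4$, $\sigma_2^2=1$) and exploits the fact that for a constant-modulus constellation such as QPSK ($|x_n|^2=|x_{\hat{n}}|^2=1$) the term $\sigma_a^2$ reduces to $\rho\sigma_2^2/2$:

```python
import math

def ue2_union_bound(rho, Nt, Nr, M, sigma2=1.0):
    """Eq. (12): union bound on the ABEP of UE-2 for a constant-modulus
    (e.g. QPSK) constellation, where sigma_a^2 = rho * sigma2 / 2."""
    sigma_a2 = rho * sigma2 / 2            # |x_n|^2 = |x_nhat|^2 = 1
    mu2 = 0.5 * (1 - math.sqrt(sigma_a2 / (1 + sigma_a2)))
    eta = Nr - 1
    s = sum(math.comb(eta + k, k) * (1 - mu2) ** k for k in range(eta + 1))
    return Nt * mu2 ** Nr * math.log2(M) * s

# The bound should shrink rapidly with SNR, reflecting diversity order N_r
b_low = ue2_union_bound(rho=10.0, Nt=4, Nr=4, M=4)
b_high = ue2_union_bound(rho=1000.0, Nt=4, Nr=4, M=4)
```

Since $\mu_2 \approx 1/(4\sigma_a^2)$ at high SNR, the bound decays as $\rho^{-N_r}$, consistent with the full-diversity claim made later for the simulation results.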
The ergodic sum rate is given as ${\overline{C}}={\overline{C}}_1+{R_2}^{SMA}$ and is derived as \begin{equation} {\overline{C}}=\log_2{(N_t)}+\frac{\log_2{e}}{\Gamma(N_r)}\sum_{k=0}^{\eta}\frac{\eta !}{(\eta-k)!} \left[\frac{(-1)^{\eta-k-1}}{\rho^{\eta-k}}e^{\sfrac{1}{\rho}}E_i\left(-\frac{1}{\rho}\right)+\sum_{j=1}^{\eta-k}\frac{(j-1)!}{\left(-\rho\right)^{\eta-k-j}}\right], \end{equation} where $E_i(.)$ and $\Gamma(.)$ are the exponential integral and the gamma function, respectively. Once UE-1 and UE-2 are designated as the near and far user for NOMA, respectively, the achievable rates of the NOMA users are given as ${R_1}^{NOMA}=\log_2\left(1+a_1{\gamma_1}\right)$ and ${R_2}^{NOMA}=\log_2\left(1+\sfrac{a_2{\gamma_2}}{a_1{\gamma_2}+1}\right)$ [2], where $a_1$ and $a_2$ are the power allocation (PA) coefficients for the users, with $a_1=1-a_2$ and $a_2>a_1$. With a large number of transmitting antennas for UE-2, it can easily be seen that, for all PA coefficients, \begin{equation} {R_1}^{SMA}>{R_1}^{NOMA},\ {R_2}^{SMA}>{R_2}^{NOMA}. \end{equation} \subsection{Outage Probability} The outage probabilities of the users are \begin{equation} P_i(out)=P\left({{R_i}}^{SMA}<{\acute{R}}_i\right), \ \ \ i=1,2, \end{equation} where ${\acute{R}}_i, i=1,2$ are the targeted data rates of the users. For UE-1, \begin{equation} \begin{split} P_1(out)&=P\left(\gamma_1<2^{{\acute{R}}_1}-1\right) \\ &=F_{\gamma_1}\left(2^{{\acute{R}}_1}-1\right), \end{split} \end{equation} where $F_{\gamma_1}(.)$ is the cumulative distribution function, which for the Rayleigh fading channel is given [13] as \begin{equation} F_{\gamma_1}(\theta)=1-{e^{\sfrac{-\theta}{\overline{\gamma}_1}}\sum_{k=1}^{N_r}\frac{\left(\sfrac{\theta}{\overline{\gamma}_1}\right)^{k-1}}{\left(k-1\right)!}}.
\end{equation} The outage probability of UE-1 is obtained by substituting $\theta=2^{{\acute{R}}_1}-1$ into (19). For UE-2, the outage event does not occur when the targeted data rate is less than the number of bits that can be mapped onto the transmitting antennas (i.e., ${\acute{R}}_2<{\log}_2(N_t)$). In this case $P_2(out)=0$, so SMA outperforms NOMA under the same targeted rate. \section{Simulation Results} In this section, validation of the expressions derived in the previous section is provided via computer simulations. In addition, to show the superiority of the SMA system, comparisons with NOMA systems are provided in terms of all performance metrics (i.e., bit error rate, outage and ergodic sum rate). In all figures, the average channel gain between each transmitting and receiving antenna is assumed to be equal to $1$ (i.e., $\sigma_1^2=\sigma_2^2=1$). We set $N_r=N_t$ in the SMA systems, and the modulation level of UE-1 in SMA and of both users in NOMA is chosen as $M=N_t$ for a fair comparison. The PA coefficients for the NOMA users are chosen as $a_1=0.2$ and $a_2=0.8$ as given in [2], [3]. In all figures, simulations are provided for $10^8$ channel realizations. In Fig. 2, the BER comparison of the SMA and NOMA systems is presented with respect to the average transmitted SNR. SMA substantially outperforms the NOMA systems, and the full diversity order (i.e., $N_r$) is achieved for both users in SMA. \begin{figure}[!t] \centering \includegraphics[width=8.5cm,height=5.4cm]{fig2v2.eps} \caption{BER Comparison of SMA and NOMA } \label{fig2} \end{figure} In Fig. 3, the ergodic sum rate comparison of the SMA and SIMO-NOMA systems is provided. SMA systems can achieve a higher sum rate than NOMA for all numbers of receiving antennas ($N_r$).
In addition, the achievable rate of UE-2 in SMA can easily be improved by increasing the number of transmitting antennas ($N_t$), so the sum rate of SMA improves accordingly. \begin{figure}[!t] \centering \includegraphics[width=8.5cm, height=5.4cm]{fig3v2.eps} \caption{Sum Rate Comparison of SMA and NOMA} \label{fig3} \end{figure} Lastly, the outage comparison of the SMA and NOMA systems is given in Fig. 4. The targeted data rates of the users are chosen according to the number of transmitting antennas of SMA (i.e., $\acute{R}_1=\acute{R}_2=\log_2(N_t)$). The full diversity order is achieved in the outage performance of UE-1 in SMA systems. The outage performance of UE-2 in SMA is not plotted since $P_2(out)=0$ (when receiver sensitivity is ignored). SMA is superior to NOMA systems in terms of outage performance as well. It is worth pointing out that the provided simulation results of SMA match well with the analytical expressions derived in (8), (12), (15) and (19). \begin{figure}[!t] \centering \includegraphics[width=8.5cm,height=5.4cm]{fig4v2.eps} \caption{Outage Comparison of SMA and NOMA} \label{fig4} \end{figure} \section{Conclusion} In this letter, the performance of spatial multiple access (SMA), proposed as an alternative to NOMA to deal with the drawbacks of NOMA systems caused by inter-user interference, is investigated. The analytical ABEP, ergodic sum rate and outage probability expressions are derived. The comparison of the SMA and NOMA systems for all performance metrics (i.e., bit error rate, ergodic sum rate and outage) is simulated. The results reveal that 1) SMA is superior to NOMA for all three metrics. 2) The full diversity order (the number of receiving antennas) is achieved by the SMA system. 3) SMA consumes much less power than NOMA to meet the same performance, which is very promising for energy efficiency.
4) SMA has much lower complexity than NOMA, since the SIC implementation at the receiver and the channel ordering and power allocation algorithms at the transmitter are no longer required; besides, only one RF chain is needed at the transmitter for SMA. Lastly, the proposed SMA system can be combined with NOMA systems to serve a higher number of users while achieving better performance metrics. \ifCLASSOPTIONcaptionsoff \newpage \fi
{ "redpajama_set_name": "RedPajamaArXiv" }
376
#include <nek/type_traits/has_pointer.hpp>
#include <gtest/gtest.h>
#include "static_assert.hpp"

#define NEK_MEMBER_TYPE pointer
#include "has_member_type.hpp"
#undef NEK_MEMBER_TYPE

TEST(has_pointer_test, initialize_true)
{
  STATIC_ASSERT_TRUE_VALUE(nek::has_pointer<has_member_type>);
  STATIC_ASSERT_EQ(nek::has_pointer<has_member_type>::type, nek::true_type);
  STATIC_ASSERT_EQ(nek::has_pointer<has_member_type>::value_type, bool);
  EXPECT_EQ(true, nek::has_pointer<has_member_type>());
  SUCCEED();
}

TEST(has_pointer_test, initialize_false)
{
  STATIC_ASSERT_FALSE_VALUE(nek::has_pointer<int>);
  STATIC_ASSERT_EQ(nek::has_pointer<int>::type, nek::false_type);
  STATIC_ASSERT_EQ(nek::has_pointer<int>::value_type, bool);
  EXPECT_EQ(false, nek::has_pointer<int>());
  SUCCEED();
}

TEST(has_pointer_test, normal)
{
  NEK_HAS_MEMBER_TYPE(nek::has_pointer);
  SUCCEED();
}
{ "redpajama_set_name": "RedPajamaGithub" }
4,294
Produced in Australia as an introductory film by Bill Le Page. Edited by Steven Hein and Harriet Clutterbuck. Script by Wendy Borthwick. Music by Richard Lockwood and Robert Welsh. The DVD is in NTSC format. Avatar Meher Baba's Mandali & Beloved Mehera In the early 70's Paul Comar from France filmed Meher Baba's mandali at Meherazad, Meherabad and Ahmednagar. For those who knew them at that time it is a wonderful trip down memory lane. For those who did not meet Beloved Mehera, Mani and the other mandali (including Francis and Padri), it is a chance to get to know them through Paul's footage. The film is accompanied by his interpretative piano. Eternal Beloved This is a very special and intimate portrait of Baba. The DVD begins with colour footage of Baba striding along, beaming at people, hugging children and giving prasad. What comes through constantly is the intimacy that Baba gave to all those He reached; the film shows Him touching and caressing individuals all the time, giving out to them without ceasing, His acceptance of all. Mani, His sister, talks about His gestures, the one for love being the same as for forgiveness, a welling up from inside and giving up. Another of His disciples, Dr Goher talks about the infinite compassion in Baba's eyes, the ocean of forgiveness. They both emphasise His beauty and His fragrance - the glow of divinity that emanated from Him. Several of the key themes that Baba gave out are focussed upon: the suffering that the Avatar undergoes on behalf of all humanity, Baba's work "Mastery in Servitude", and the goal of bringing spiritual freedom to all. Eruch Jessawala, another of Baba's main disciples, speaking from Mandali Hall at Meherazad, says that Baba asked people to remember one thing "I have not come to teach but to awaken". It is a pleasure to have these reminders of what Baba is, done in such a joyful light way through the focus on Baba as intimate. 
This is an excellent and lovingly crafted DVD from Stephen Edelman with a narration by Charles Haynes. Colour format NTSC. God in Human Form This is a comprehensive yet concise biographical documentary of the most spiritually dynamic personality of this age, Avatar Meher Baba. Through the creative use of rare photographs, film footage and unique selections of music - including pieces composed by Meher Baba himself - the elements are woven together to present a clear perspective of this most remarkable life. This DVD is in NTSC format. Comments by people who purchased this DVD: "If you're looking for an intriguing & profound overview of Baba's 'way & work' then this film really hits all the right notes. Except maybe for one song [in my opinion] on the soundtrack that made my teeth grind, otherwise it's pretty hard to fault. Provocative and stirring." Godman This is a wide-ranging introduction to Meher Baba, including narration by Eruch Jessawala, as well as brief appearances by Baba's sister Mani, long-time disciple Padri, and Rano, plus historical footage of Meher Baba taken from a variety of sources. The narration and historical footage are interspersed with scenery of 1976 rural India. As the film moves forward in time, it describes the influx of Western spiritual seekers who came after Meher Baba dropped His body and ends with the 1976 Amartithi celebrations. It contains musical contributions by Pete Townshend, Jeff Mylett and Richard Lockwood, among many others. The original 16 mm film was produced in 1977 in Australia and shown on television. Happy Birthday Darling Mehera Meher Baba's sister Mani wrote, "As Sita was for Ram, Radha for Krishna, Mary for Jesus, for this Advent of Meher Baba, it is Mehera who plays the leading role. This role, of being the chosen counterpart to the God-Man, amounts to the highest, purest, most spiritual relationship, consisting of a divine love which the world cannot imagine."
This 35 minute film is a stirring collection of old, rare, and more recent stills and film footage of Baba, Mehera, the Women Mandali and others, beautifully set to the music of Buz Connor, Al Jolson, Margaret Bernstein, Jamie Newell, Katie Irani and others. Directed by Kacy Cook, edited by Robert Fredericks. His Ways Are Unfathomable This film was recorded in Mandali Hall during the pilgrim season of 1991. Eruch Jessawala recollects many fascinating incidents from his life with Meher Baba. Since many of Eruch's stories had already been documented in print form, the aim of this DVD was to document specific renderings of these stories and to recreate, within the limits of the technology at hand, the quality of Eruch's voice and physical demeanour, in short, the unique ambiance of these sessions. "Re-creates, for me, the wonderful feeling of being in Mandali Hall listening to Eruch." Meher Baba's Call On September 12, 1954, in Ahmednagar, India, Meher Baba gave His Call to humanity. Baba is seen giving darshan to large numbers of people, and also washing the feet of the poor as part of his spiritual work. The film is narrated by Darwin Shaw, an early close Western disciple of Meher Baba, who witnessed these events first-hand and shares his impressions of this special time. Meher Baba's Call gives one the feeling of what it was like to be in the presence of Meher Baba during such gatherings. Padri Padri (Faredoon Driver) was 18 years old when he met Meher Baba in 1922. He oversaw the construction and maintenance of buildings at Meherabad, where Meher Baba's body was entombed in 1969. Baba called Padri one of the "Four pillars" of Meherabad. Padri is interviewed in this film at Meherabad in 1981 and 1982 by Gary Kleiner. The Ancient One – Meher Baba's Last Public Darshan The Ancient One brings you into the scene of Meher Baba's darshan for his Eastern lovers in Poona, India during the first week of May, 1965. 
The film footage, shot by Meher Baba's brother Behram with photographs taken by His sister, Mani, is narrated by Jim Meyer. Excerpts from Mani's Family Letters describing the darshan are read by Jane Barry Haynes. To Be Natural Eruch Jessawala, one of Meher Baba's most intimate disciples, first met Meher Baba in 1925 when he was nine years old, and came to stay with him permanently in 1938. In the years that followed until 1969 (when Meher Baba dropped His body), Eruch was Baba's personal attendant and His companion on trips all over India and around the world. In later years, Eruch spent much of his time talking to pilgrims about Meher Baba's life and work. In this 1982 videotaped session recorded at the Trust Office in Ahmednagar, Eruch offers insightful answers to questions asked by Gary Kleiner. Mani Irani, Meher Baba's only sister, is interviewed by Gary Kleiner as she describes and demonstrates Meher Baba's expressive hand gestures. Meher Baba began keeping silence in 1925, and used his own unique gestures to communicate after giving up the use of the alphabet board in October, 1957. When Merwan Grew Up This is a unique film, written for children and made by Bob Fredericks with the children of the Meher English School at Meherabad, India. The project was conceived by Bob and Mrs. Stella, the principal of the school, to teach students about Baba's life of love as the God-man. The students created the vibrant artistic images of Baba used in the film, and Bob Fredericks masterfully melds these refreshing images of Meher Baba into photographs and film footage of Baba. A 12-year-old boy from the school skilfully narrates this film with exquisite natural expression in lilting articulate English. The film tells of Meher Baba's life and work in a simple manner, and its simplicity turns out to be the perfect telling. What did Merwan grow up to do? To help all to love God. 
Lively music, colourful images and expressive narration make this an excellent film for children but also for anyone of any age to learn about the life and work of Avatar Meher Baba. A wonderful introductory film for sharing. About 30 minutes long. Great music by Richard Piekoff, Bob and Jane Brown, and Bob Holdt. A bonus feature is included: "How the Film was Made", with Mrs. Stella Manuel, Bob, and the children of Meher English School. "OK, but don't feel drawn to watch it more than once." "I enjoyed watching it, but it was made for a specific audience, Indian children and their parents, so maybe wouldn't be such a good introduction to Baba's life for Western kids. It was very well done, though and the music for the film was good." You Alone Exist Dictated by Avatar Meher Baba to his close disciple Bhau Kalchuri, this stunning prayer-poem expressively describes the all-pervading nature of God through many attributes, from simple to sublime. Jim Meyer composed and recorded the soundtrack, which provides the upbeat foundation for this prayer-poem celebrating the oneness of existence. Produced and directed by Peter Nordeen, this 24 minute music film features a number of rare film clips of Avatar Meher Baba creatively woven together by film editor Robert Fredericks in tandem to the rhythm of the musical prayer. About this poetic prayer Avatar Meher Baba commented: "I like this prayer because it tells people who I am, what I am. People do not know who or what I am, and so they need this prayer to know me, to understand me. I gave this prayer to them….A day will come when they will know this….In the future this prayer of Mine will be sung in every house throughout the world…" "I watch it again and again, and enjoy it more each time." "Everyone just loved it" "It's one of everyone's favourites I reckon." "Here's an audacious 'musical' that's right out of the box. Quietly builds a momentum and intensity that spins one's heart into pure rapture & awe. 
Really works from the inside out. When no explanation is likely to suffice then do check this out! Though be warned, it's definitely not head gear." "I like this film as it is both serious and lighthearted at the same time. The editors have tried to create a flowing visual feast, which sometimes works and sometimes appears amateurish. Overall this is a good film to have in one's collection, one that rewards re-watching." The colour TV system in Australia and New Zealand is PAL. If you wish to play DVDs made for the USA home market (most of the DVDs offered on this website are of this kind), both your player and television set need to be NTSC-compatible. Fortunately, most DVD players sold in Australia can play both PAL and NTSC formats, and most newer TVs are NTSC-compatible. So far, few people have had problems, but the onus is on you to check your DVD player and TV manual. Personal and Apple computers with a DVD drive can almost always play both PAL and NTSC discs. DVD Regions All DVDs sold through the Meher Baba Film and Video Project are region-free. They should be playable in all parts of the world, subject to the colour format mentioned above. All movies are copyrighted. Please support independent filmmakers; do not make unauthorised copies. DVD Care 1. Handle discs by the outer edge or the centre hole. Never touch the surface with your fingers. 2. To remove a disc from its case, press the "push" button on the centre hub and press downward. Using your other hand, gently remove the disc by its outer edge without bending or twisting it. 3. Make sure the disc is properly seated in the tray before closing the door. 4. Avoid extremes of temperature and humidity. 5. Avoid prolonged exposure to sunlight and other sources of ultraviolet light. 6. Keep discs in their original cases and store upright (book style) in a cool, dry, and dark environment. 7.
Remove dirt, foreign material, fingerprints, smudges, and liquids by wiping with a clean cotton fabric in a straight line from the centre of the disc toward the outer edge. Do not use circular motions when cleaning. Avoid using anything that could scratch the surface.
\section{Introduction} \label{sec:intro} A knowledge graph is composed of a large number of triples in the form of $(head\; entity,\, relation,\, tail\; entity)$ ($(h, r, t)$ in short), encoding knowledge and facts in the world. Many KGs have been proposed \cite{wikidata, freebase, nell} and applied to various applications \cite{qa,recommender,longtail-re}. Despite their huge numbers of entities, relations and triples, many KGs still suffer from incompleteness; thus knowledge graph completion is vital for the development of KGs. One knowledge graph completion task is link prediction, predicting new triples based on existing ones. For link prediction, KG embedding methods \cite{TransE, RESCAL, ComplEx, DistMult} are promising approaches. They learn latent representations, called embeddings, for entities and relations in continuous vector space and accomplish link prediction via calculation with embeddings. \begin{figure}[t] \centering \includegraphics[scale=0.41]{fsrl.eps} \caption{An example of 3-shot link prediction in KGs. One task represents observing only three instances of one specific relation and conducting link prediction on this relation. Our model focuses on extracting relation-specific meta information by a kind of relational learner which is shared across tasks, and on transferring this meta information to do link prediction within one task.} \label{fig:fsrl} \end{figure} The effectiveness of KG embedding methods is premised on sufficient training examples, so results are much worse for elements with only a few instances during training~\cite{crosse}. However, the few-shot problem widely exists in KGs. For example, about 10\% of relations in Wikidata~\cite{wikidata} have no more than 10 triples. Relations with only a few instances are called few-shot relations.
In this paper, we focus on \emph{few-shot link prediction in knowledge graphs}, predicting tail entity $t$ given head entity $h$ and relation $r$ by observing only $K$ triples about $r$, where $K$ is usually small. Figure~\ref{fig:fsrl} depicts an example of 3-shot link prediction in KGs. To do few-shot link prediction, \citet{GMatching} made the first trial and proposed GMatching, learning a matching metric by considering both learned embeddings and one-hop graph structures, while we try to accomplish few-shot link prediction from another perspective based on the intuition that \emph{the most important information to be transferred from a few existing instances to incomplete triples should be the common and shared knowledge within one task.} We call such information \emph{relation-specific meta information} and propose a new framework Meta Relational Learning (MetaR) for few-shot link prediction. For example, in Figure~\ref{fig:fsrl}, relation-specific meta information related to the relation \emph{CEOof} or \emph{CountryCapital} will be extracted and transferred by MetaR from a few existing instances to incomplete triples. The relation-specific meta information is helpful in the following two perspectives: 1) transferring common relation information from observed triples to incomplete triples, 2) accelerating the learning process within one task by observing only a few instances. Thus we propose two kinds of relation-specific meta information: \emph{relation meta} and \emph{gradient meta}, corresponding to the two perspectives mentioned above, respectively. In our proposed framework MetaR, relation meta is the high-order representation of a relation connecting head and tail entities. Gradient meta is the loss gradient of relation meta which will be used to make a rapid update before transferring relation meta to incomplete triples during prediction.
Compared with GMatching~\cite{GMatching}, which relies on a background knowledge graph, our MetaR is independent of such a graph and is thus more robust, as background knowledge graphs might not be available for few-shot link prediction in real scenarios. We evaluate MetaR with different settings on few-shot link prediction datasets. MetaR achieves state-of-the-art results, indicating the success of transferring relation-specific meta information in few-shot link prediction tasks. In summary, the main contributions of our work are threefold: \begin{itemize} \item Firstly, we propose a novel meta relational learning framework (MetaR) to address few-shot link prediction in knowledge graphs. \item Secondly, we highlight the critical role of relation-specific meta information for few-shot link prediction, and propose two kinds of relation-specific meta information, \emph{relation meta} and \emph{gradient meta}. Experiments show that both of them contribute significantly. \item Thirdly, our MetaR achieves state-of-the-art results on few-shot link prediction tasks and we also analyze the factors that affect MetaR's performance. \end{itemize} \section{Related Work} One target of MetaR is to learn representations of entities fitting the few-shot link prediction task, and this learning framework is inspired by knowledge graph embedding methods. Furthermore, using the loss gradient as one kind of meta information is inspired by MetaNet~\cite{metanet} and MAML~\cite{maml}, which explore methods for few-shot learning by meta-learning. From these two points, we regard knowledge graph embedding and meta-learning as the two main kinds of related work. \subsection{Knowledge Graph Embedding} Knowledge graph embedding models map relations and entities into continuous vector space. They use a score function to measure the truth value of each triple $(h, r, t)$.
Like knowledge graph embedding methods, our MetaR also needs a score function; the main difference is that the representation of $r$ in MetaR is the learned relation meta rather than the embedding of $r$ as in normal knowledge graph embedding methods. One line of work started with TransE \cite{TransE} and its distance score function. TransH~\cite{TransH} and TransR~\cite{TransR} are two typical models using different methods to connect head and tail entities and their relations. DistMult~\cite{DistMult} and ComplEx~\cite{ComplEx} are derived from RESCAL~\cite{RESCAL}, trying to mine latent semantics in different ways. There are also others, such as ConvE~\cite{conve}, which uses a convolutional structure to score triples, and models using additional information such as entity types~\cite{entity-type} and relation paths~\cite{PTransE}. \citet{kgembedding} comprehensively summarize the current popular knowledge graph embedding methods. Traditional embedding models rely heavily on rich training instances~\cite{iteratively, GMatching} and are thus limited in few-shot link prediction. Our MetaR is designed to address this limitation of existing embedding models. \subsection{Meta-Learning} Meta-learning seeks the ability to learn quickly from only a few instances within the same concept and to adapt continuously to more concepts, which is actually the rapid and incremental learning that humans are very good at. Several meta-learning models have been proposed recently. Generally, there are three kinds of meta-learning methods so far: (1) \textit{Metric-based} meta-learning~\cite{siamese,matching,prototypical,GMatching}, which tries to learn a matching metric between query and support sets that generalizes to all tasks, where the idea of matching is similar to some nearest-neighbor algorithms. The Siamese Neural Network~\cite{siamese} is a typical method using symmetric twin networks to compute the metric of two inputs.
GMatching~\cite{GMatching}, the first trial on one-shot link prediction in knowledge graphs, learns a matching metric based on entity embeddings and local graph structures, and can also be regarded as a metric-based method. (2) \textit{Model-based} methods~\cite{mann,metanet,snail}, which use a specially designed component such as memory to achieve the ability of learning rapidly from only a few training instances. MetaNet~\cite{metanet}, a kind of memory-augmented neural network (MANN), acquires meta information from the loss gradient and generalizes rapidly via its fast parameterization. (3) \textit{Optimization-based} approaches~\cite{maml,gradient-based}, which learn faster by changing the optimization algorithm. Model-Agnostic Meta-Learning~\cite{maml}, abbreviated as MAML, is a model-agnostic algorithm. It first updates the parameters of a task-specific learner, and then meta-optimization across tasks is performed over these updated parameters; it is like ``a gradient through a gradient''. As far as we know, the work proposed by \citet{GMatching} is the first research on few-shot learning for knowledge graphs. It is a metric-based model consisting of a neighbor encoder and a matching processor. The neighbor encoder enhances the embeddings of entities with their one-hop neighbors, and the matching processor performs multi-step matching with an LSTM block. \begin{figure*}[t] \centering \includegraphics[scale=0.7]{overview.eps} \caption{Overview of MetaR.
$\mathcal{T}_r = \{ \mathcal{S}_r, \mathcal{Q}_r\}$, $\mathit{R}_{\mathcal{T}_r}$ and $\mathit{R}_{\mathcal{T}_r}^{\prime}$ represent relation meta and updated relation meta, and $\mathit{G}_{\mathcal{T}_r}$ represents gradient meta.} \vspace{-5mm} \label{fig:metalp} \end{figure*} \section{Task Formulation} \begin{table} \small \centering \setlength{\tabcolsep}{5mm}{ \begin{tabular}{l|l} \toprule \multicolumn{2}{c}{Training} \\ \midrule \multicolumn{2}{l}{Task \#1 ({\color{cyan} CountryCapital})} \\ \midrule Support & (China, {\color{cyan} CountryCapital}, Beijing) \\ \midrule Query & (France, {\color{cyan} CountryCapital}, Paris) \\ \midrule \multicolumn{2}{l}{Task \#2 ({\color{red} CEOof})} \\ \midrule Support & (Satya Nadella, {\color{red} CEOof}, Microsoft) \\ \midrule Query & (Jack Dorsey, {\color{red} CEOof}, Twitter) \\ \midrule \midrule \multicolumn{2}{c}{Testing} \\ \midrule \multicolumn{2}{l}{Task \#1 ({\color{blue} OfficialLanguage})} \\ \midrule Support & (Japan, {\color{blue} OfficialLanguage}, Japanese) \\ \midrule Query & (Spain, {\color{blue} OfficialLanguage}, Spanish) \\ \bottomrule \end{tabular}} \caption{The training and testing examples of 1-shot link prediction in KGs.} \label{tab:task-form} \end{table} In this section, we present the formal definition of a knowledge graph and few-shot link prediction task. A knowledge graph is defined as follows: \begin{defn} (Knowledge Graph $\mathcal{G}$) A knowledge graph $\mathcal{G} = \{ \mathcal{E}, \mathcal{R}, \mathcal{TP}\}$. $\mathcal{E}$ is the entity set. $\mathcal{R}$ is the relation set. And $\mathcal{TP} = \{ (h, r, t)\in \mathcal{E} \times \mathcal{R} \times \mathcal{E}\} $ is the triple set. 
\end{defn} And a few-shot link prediction task in knowledge graphs is defined as: \begin{defn} (Few-shot link prediction task $\mathcal{T}$) With a knowledge graph $\mathcal{G} = \{ \mathcal{E}, \mathcal{R}, \mathcal{TP}\}$, given a support set $\mathcal{S}_r = \{(h_i, t_i)\in \mathcal{E} \times \mathcal{E} | (h_i, r, t_i) \in \mathcal{TP} \}$ about relation $r\in \mathcal{R}$, where $|\mathcal{S}_r| = K$, predicting the tail entity linked with relation $r$ to head entity $h_j$, formulated as $r:(h_j, ?)$, is called K-shot link prediction. \end{defn} As defined above, a few-shot link prediction task is always defined for a specific relation. During prediction, there is usually more than one triple to be predicted; given the \emph{support set} $\mathcal{S}_r$, we call the set of all triples to be predicted the \emph{query set} $\mathcal{Q}_r = \{ r:(h_j, ?)\}$. The goal of a few-shot link prediction method is to gain the capability of predicting new triples about a relation $r$ while observing only a few triples about $r$. Thus its training process is based on a set of tasks $\mathcal{T}_{train}=\{\mathcal{T}_{i}\}_{i=1}^{M}$ where each task $\mathcal{T}_{i} = \{\mathcal{S}_i, \mathcal{Q}_i\}$ corresponds to an individual few-shot link prediction task with its own support and query set. Its testing process is conducted on a set of new tasks $\mathcal{T}_{test} = \{\mathcal{T}_{j}\}_{j=1}^{N}$ which is similar to $\mathcal{T}_{train}$, except that $\mathcal{T}_{j} \in \mathcal{T}_{test}$ should be about relations that have never been seen in $\mathcal{T}_{train}$. Table~\ref{tab:task-form} gives a concrete example of the data during learning and testing for few-shot link prediction.
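The support/query structure in the definitions above can be made concrete with a small data structure (a hypothetical sketch in Python; the names are ours and not taken from the released code):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FewShotTask:
    """One K-shot link prediction task T_r for a single relation r."""
    relation: str
    support: List[Tuple[str, str]]   # S_r: K observed (head, tail) pairs for r
    query: List[Tuple[str, str]]     # Q_r: (head, true tail) pairs to be predicted

    @property
    def k(self) -> int:
        # The "shot" of the task is the size of its support set
        return len(self.support)

# The 1-shot training task for CountryCapital from Table 1
task = FewShotTask(
    relation="CountryCapital",
    support=[("China", "Beijing")],
    query=[("France", "Paris")],
)
```

During training a batch of such tasks is sampled from $\mathcal{T}_{train}$; at test time the tasks cover relations unseen during training.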
\section{Method} To make a model gain the few-shot link prediction capability, the most important thing is transferring information from the support set to the query set, and there are two questions to think about: (1) what is the most transferable and common information between the support set and the query set, and (2) how to learn faster by observing only a few instances within one task. For question (1), within one task, all triples in the support set and query set are about the same relation, thus it is natural to suppose that the relation is the key common part between the support and query sets. For question (2), the learning process is usually conducted by minimizing a loss function via gradient descent, and thus gradients reveal how the model's parameters should be changed. Intuitively, we believe that gradients are a valuable source for accelerating the learning process. Based on these thoughts, we propose two kinds of meta information which are shared between the support set and query set to deal with the above problems: \begin{itemize} \item \textbf{Relation Meta} represents the relation connecting head and tail entities in both the support and query sets; we extract relation meta for each task, represented as a vector, from the support set and transfer it to the query set. \item \textbf{Gradient Meta} is the loss gradient of relation meta in the support set. As gradient meta shows how relation meta should be changed in order to reach a loss minimum, relation meta is updated through gradient meta before being transferred to the query set, so as to accelerate the learning process. This update can be viewed as the rapid learning of relation meta. \end{itemize} In order to extract relation meta and gradient meta and incorporate them with knowledge graph embedding to solve few-shot link prediction, our proposal, \textbf{MetaR}, mainly contains two modules: \begin{itemize} \item \textbf{Relation-Meta Learner} generates relation meta from heads' and tails' embeddings in the support set.
\item \textbf{Embedding Learner} calculates the truth values of triples in support set and query set via entity embeddings and relation meta. Based on the loss function in embedding learner, gradient meta is calculated and a rapid update for relation meta will be implemented before transferring relation meta to query set. \end{itemize} The overview and algorithm of MetaR are shown in Figure~\ref{fig:metalp} and Algorithm~\ref{alg:metalp}. Next, we introduce each module of MetaR via one few-shot link prediction task $\mathcal{T}_r = \{ \mathcal{S}_r, \mathcal{Q}_r\}$. \begin{algorithm}[tb] \setstretch{1} \caption{Learning of MetaR} \label{alg:metalp} \begin{algorithmic}[1] \REQUIRE Training tasks $\mathcal{T}_{train}$ \REQUIRE Embedding layer $emb$; Parameter of relation-meta learner $\phi$ \WHILE{not done} \STATE Sample a task $\mathcal{T}_r={\{\mathcal{S}_r, \mathcal{Q}_r\}}$ from $\mathcal{T}_{train}$ \STATE Get $\mathit{R}$ from $\mathcal{S}_{r}$ (Equ.~\ref{eq:rel-meta}, Equ.~\ref{eq:rel-avg}) \STATE Compute loss in $\mathcal{S}_{r}$ (Equ.~\ref{eq:loss-sup}) \STATE Get $\mathit{G}$ by gradient of $\mathit{R}$ (Equ.~\ref{eq:grad-meta}) \STATE Update $\mathit{R}$ by $\mathit{G}$ (Equ.~\ref{eq:update-rel-meta}) \STATE Compute loss in $\mathcal{Q}_{r}$ (Equ.~\ref{eq:loss-que}) \STATE Update $\phi$ and $emb$ by loss in $\mathcal{Q}_{r}$ \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{Relation-Meta Learner} To extract the relation meta from support set, we design a relation-meta learner to learn a mapping from head and tail entities in support set to relation meta. The structure of this relation-meta learner can be implemented as a simple neural network. In task $\mathcal{T}_r$, the input of relation-meta learner is head and tail entity pairs in support set $\{(h_i, t_i) \in \mathcal{S}_r\}$. 
We first extract entity-pair specific relation meta via an $L$-layer fully connected neural network, \begin{equation} \begin{aligned} \mathbf{x}^0 &= \mathbf{h}_i \oplus \mathbf{t}_i \\ \mathbf{x}^l &= \sigma({\mathbf{W}^{l}\mathbf{x}^{l-1} + \mathbf{b}^l}) \\ \mathit{R}_{(h_i, t_i)} &= {\mathbf{W}^{L}\mathbf{x}^{L-1} + \mathbf{b}^L} \label{eq:rel-meta} \end{aligned} \end{equation} where $\mathbf{h}_i \in \mathbb{R}^{d}$ and $\mathbf{t}_i \in \mathbb{R}^{d}$ are embeddings of head entity $h_i$ and tail entity $t_i$ with dimension $d$ respectively. $L$ is the number of layers in the neural network, and $l \in \{1, \dots, L-1 \}$. $\mathbf{W}^l$ and $\mathbf{b}^l$ are the weights and bias in layer $l$. We use LeakyReLU for the activation $\sigma$. $\mathbf{x} \oplus \mathbf{y}$ represents the concatenation of vectors $\mathbf{x}$ and $\mathbf{y}$. Finally, $\mathit{R}_{(h_i, t_i)}$ represents the relation meta from the specific entity pair $h_i$ and $t_i$. With multiple entity-pair specific relation meta, we generate the final relation meta in the current task by averaging all of them, \begin{equation} \mathit{R}_{\mathcal{T}_r} = \frac{\sum_{i=1}^{K}\mathit{R}_{(h_i, t_i)}}{K} \label{eq:rel-avg} \end{equation} \subsection{Embedding Learner} As we want gradient meta to make a rapid update on relation meta, we need a score function to evaluate the truth values of entity pairs under specific relations, and also a loss function for the current task. We apply the key idea of knowledge graph embedding methods in our embedding learner, as they have proved effective at evaluating the truth values of triples in knowledge graphs. In task $\mathcal{T}_r$, we first calculate the score for each entity pair $(h_i, t_i)$ in the support set $\mathcal{S}_r$ as follows: \begin{equation} s_{(h_i, t_i)} = \| \mathbf{h}_i + {\mathit{R}_{\mathcal{T}_r}} - \mathbf{t}_i \| \label{eq:score-sup} \end{equation} where $\| \mathbf{x}\|$ represents the L2 norm of vector $\mathbf{x}$.
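As a concrete illustration, Equations~\ref{eq:rel-meta}--\ref{eq:score-sup} can be sketched in a few lines of NumPy (a minimal sketch under our own shape conventions and parameter names, not the released implementation):

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    # LeakyReLU activation used for sigma in Eq. 1
    return np.where(x > 0, x, slope * x)

def relation_meta(heads, tails, weights, biases):
    """Map the K support pairs of a task to one relation meta R_{T_r}.

    heads, tails: (K, d) entity embeddings; weights/biases: L-layer MLP parameters.
    """
    x = np.concatenate([heads, tails], axis=1)    # x^0 = h_i concat t_i, shape (K, 2d)
    for W, b in zip(weights[:-1], biases[:-1]):
        x = leaky_relu(x @ W + b)                 # hidden layers
    r_pairs = x @ weights[-1] + biases[-1]        # entity-pair specific relation meta
    return r_pairs.mean(axis=0)                   # average over the K support pairs

def support_score(h, t, R):
    """TransE-style score s = || h + R - t || (lower means more plausible)."""
    return np.linalg.norm(h + R - t)
```

The averaging step makes the relation meta invariant to the order of the support pairs, which matches the set nature of $\mathcal{S}_r$.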
We design the score function inspired by TransE \cite{TransE}, which assumes that the head entity embedding $\mathbf{h}$, relation embedding $\mathbf{r}$ and tail entity embedding $\mathbf{t}$ of a true triple $(h, r, t)$ satisfy $\mathbf{h} + \mathbf{r} = \mathbf{t}$. Thus the score function is defined according to the distance between $\mathbf{h} + \mathbf{r}$ and $\mathbf{t}$. Transferring this to our few-shot link prediction task, we replace the relation embedding $\mathbf{r}$ with the relation meta $\mathit{R}_{\mathcal{T}_r}$, as there are no direct general relation embeddings in our task and $\mathit{R}_{\mathcal{T}_r}$ can be regarded as the relation embedding for the current task $\mathcal{T}_r$. With a score for each triple, we set the following loss, \begin{equation} L(\mathcal{S}_r) = \sum_{(h_i, t_i)\in \mathcal{S}_r} [\gamma+s_{(h_i, t_i)}-s_{(h_i, t_i^{\prime})}]_{+} \label{eq:loss-sup} \end{equation} where $[x]_{+}$ denotes the positive part of $x$ and $\gamma$ is the margin, a hyperparameter. $s_{(h_i, t_i^{\prime})}$ is the score of the negative sample $(h_i, t_i^{\prime})$ corresponding to the current positive entity pair $(h_i, t_i) \in \mathcal{S}_r$, where $(h_i, r, t_i^{\prime}) \notin \mathcal{G}$. $L(\mathcal{S}_r)$ should be small for task $\mathcal{T}_r$, which indicates that the model can properly encode the truth values of triples. Thus the gradients of parameters indicate how the parameters should be updated.
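The margin loss of Eq.~\ref{eq:loss-sup} and the gradient it induces can be sketched as follows. Since the score is the TransE-style distance, the gradient of the support loss with respect to the relation meta can be written in closed form, $\nabla_{R} \| \mathbf{h} + R - \mathbf{t}\| = (\mathbf{h} + R - \mathbf{t}) / \|\mathbf{h} + R - \mathbf{t}\|$, instead of relying on automatic differentiation as a real implementation would (the function names are ours):

```python
import numpy as np

def support_loss_and_grad(heads, tails, neg_tails, R, gamma=1.0):
    """Margin loss L(S_r) and its gradient w.r.t. the relation meta R."""
    loss, grad = 0.0, np.zeros_like(R)
    for h, t, t_neg in zip(heads, tails, neg_tails):
        pos, neg = h + R - t, h + R - t_neg
        s_pos, s_neg = np.linalg.norm(pos), np.linalg.norm(neg)
        if gamma + s_pos - s_neg > 0:       # only margin-violating pairs contribute
            loss += gamma + s_pos - s_neg
            grad += pos / s_pos - neg / s_neg
    return loss, grad

def rapid_update(R, grad_meta, beta=1.0):
    """One gradient step on the relation meta before transfer to the query set."""
    return R - beta * grad_meta
```

In a deep-learning framework the same gradient would be obtained with autodiff while keeping the computation graph, so the meta-gradient can flow back through the rapid update.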
Thus we regard the gradient of $\mathit{R}_{\mathcal{T}_r}$ based on $L(\mathcal{S}_r)$ as the gradient meta $\mathit{G}_{\mathcal{T}_r}$: \begin{equation} \vspace{-1mm} \mathit{G}_{\mathcal{T}_r} = \nabla_{\mathit{R}_{\mathcal{T}_r}} L(\mathcal{S}_r) \label{eq:grad-meta} \end{equation} Following the gradient update rule, we make a rapid update on relation meta as follows: \begin{equation} \mathit{R}^\prime_{\mathcal{T}_r} = \mathit{R}_{\mathcal{T}_r} - \beta \mathit{G}_{\mathcal{T}_r} \label{eq:update-rel-meta} \end{equation} where $\beta$ indicates the step size of gradient meta when operating on relation meta. When scoring the query set with the embedding learner, we use the updated relation meta. After getting the updated relation meta $\mathit{R}^\prime$, we transfer it to samples in the query set $\mathcal{Q}_r = \{(h_j, t_j) \}$ and calculate their scores and the query-set loss, in the same way as in the support set: \begin{equation} s_{(h_j, t_j)} = \| \mathbf{h}_j + \mathit{R}_{\mathcal{T}_r}^\prime - \mathbf{t}_j \| \label{eq:score-que} \end{equation} \begin{equation} L(\mathcal{Q}_r) = \sum_{(h_j, t_j)\in \mathcal{Q}_r}[\gamma+s_{(h_j, t_j)}-s_{(h_j, t_j^{\prime})}]_{+} \label{eq:loss-que} \end{equation} where $L(\mathcal{Q}_r)$ is our training objective to be minimized. We use this loss to update the whole model. \subsection{Training Objective} During training, our objective is to minimize the following loss $L$, which is the sum of query losses for all tasks in one minibatch: \begin{equation} L = \sum_{(\mathcal{S}_r, \mathcal{Q}_r)\in \mathcal{T}_{train}} L(\mathcal{Q}_r) \end{equation} \section{Experiments} With MetaR, we want to figure out the following: 1) can MetaR accomplish the few-shot link prediction task and even perform better than the previous model? 2) how much does relation-specific meta information contribute to few-shot link prediction? 3) are there any requirements for MetaR to work on few-shot link prediction?
To answer these questions, we conduct experiments on two few-shot link prediction datasets and analyze the results in depth \footnote{The source code of experiments is available at \url{https://github.com/AnselCmy/MetaR}}. \begin{table} \centering \begin{tabular}{ccccc} \toprule Dataset & Fit & \# Train & \# Dev & \# Test \\ \midrule NELL-One & Y & 321 & 5 & 11 \\ Wiki-One & Y & 589 & 16 & 34 \\ \midrule NELL-One & N & 51 & 5 & 11 \\ Wiki-One & N & 133 & 16 & 34 \\ \bottomrule \end{tabular} \caption{Statistics of datasets. Fit denotes fitting background into training tasks (Y) or not (N); \# Train, \# Dev and \# Test denote the number of relations in the training, validation and test sets.} \label{tab:statistic} \end{table} \subsection{Datasets and Evaluation Metrics} \label{sec:dataset} We use two datasets, NELL-One and Wiki-One, which were constructed by~\citet{GMatching}. NELL-One and Wiki-One are derived from NELL~\cite{nell} and Wikidata~\cite{wikidata} respectively. Furthermore, because these two benchmarks were first tested on GMatching, which considers both learned embeddings and one-hop graph structures, a background graph was constructed from relations outside the training/validation/test sets for obtaining the pre-trained entity embeddings and providing the local graph for GMatching. Unlike GMatching, which uses the background graph to enhance entity representations, our MetaR can be trained without a background graph. For NELL-One and Wiki-One, which originally have a background graph, we can make use of it either by fitting it into the training tasks or by using it to train embeddings that initialize entity representations. Overall, we have three kinds of dataset settings, shown in Table~\ref{tab:dataform}. For the \emph{BG:In-Train} setting, in order to include the background graph in the training tasks, we sample tasks from triples in both the background graph and the original training set, rather than from the original training set only.
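The \emph{BG:In-Train} sampling described above — pooling background and training triples, then grouping them by relation into tasks — can be sketched as follows (a simplified illustration; the actual sampling in the released code may differ):

```python
import random
from collections import defaultdict

def build_tasks(triples, k):
    """Group (h, r, t) triples by relation and split each relation's pairs
    into a K-shot support set and a query set.

    Relations with <= k triples are skipped, since at least one query pair
    is needed besides the support set."""
    by_rel = defaultdict(list)
    for h, r, t in triples:
        by_rel[r].append((h, t))
    tasks = {}
    for r, pairs in by_rel.items():
        if len(pairs) <= k:
            continue
        random.shuffle(pairs)
        tasks[r] = {"support": pairs[:k], "query": pairs[k:]}
    return tasks
```

Under \emph{BG:In-Train} the `triples` argument would be the union of the background graph and the original training triples; under the other settings it would be the training triples alone.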
\begin{table} \centering \begin{tabular}{cp{110pt}} \toprule Background Conf. & Description \\ \midrule BG:Pre-Train & Use background to train entity embedding in advance.\\ \midrule BG:In-Train & Fit background graph into training tasks. \\ \midrule BG:Discard & Discard the background graph. \\ \bottomrule \end{tabular} \caption{Three forms of datasets in our experiments.} \label{tab:dataform} \end{table} Note that these three settings don't violate the task formulation of few-shot link prediction in KGs. The statistics of NELL-One and Wiki-One are shown in Table~\ref{tab:statistic}. We use two traditional metrics to evaluate different methods on these datasets, MRR and Hits@N. MRR is the mean reciprocal rank and Hits@N is the proportion of correct entities ranked in the top N in link prediction. \begin{table*}[t] \centering \begin{tabular}{lcc|cc|cc|cc} \toprule \multicolumn{1}{c}{} & \multicolumn{2}{c}{MRR} & \multicolumn{2}{c}{Hits@10} & \multicolumn{2}{c}{Hits@5} & \multicolumn{2}{c}{Hits@1}\\ \cmidrule{2-9} \textbf{NELL-One} & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot \\ \midrule GMatching\_RESCAL & \underline{.188} & -- & .305 & -- & .243 & -- & \underline{.133} & -- \\ GMatching\_TransE & .171 & -- & .255 & -- & .210 & -- & .122 & -- \\ GMatching\_DistMult & .171 & -- & .301 & -- & .221 & -- & .114 & -- \\ GMatching\_ComplEx & .185 & \underline{.201} & \underline{.313} & \underline{.311} & \underline{.260} & \underline{.264} & .119 & \underline{.143} \\ GMatching\_Random & .151 & -- & .252 & -- & .186 & -- & .103 & -- \\ \midrule MetaR (BG:Pre-Train) & .164 & .209 & .331 & .355 & .238 & .280 & .093 & .141 \\ MetaR (BG:In-Train) & \textbf{.250} & \textbf{.261} & \textbf{.401} & \textbf{.437} & \textbf{.336} & \textbf{.350} & \textbf{.170} & \textbf{.168} \\ \midrule \midrule \textbf{Wiki-One} & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot \\ \midrule GMatching\_RESCAL & .139 & -- & .305 & -- & .228 & -- & 
.061 & -- \\ GMatching\_TransE & .219 & -- & .328 & -- & .269 & -- & .163 & -- \\ GMatching\_DistMult & \underline{.222} & -- & \underline{.340} & -- & .271 & -- & \underline{.164} & -- \\ GMatching\_ComplEx & .200 & -- & .336 & -- & \underline{.272} & -- & .120 & -- \\ GMatching\_Random & .198 & -- & .299 & -- & .260 & -- & .133 & -- \\ \midrule MetaR (BG:Pre-Train) & \textbf{.314} & \textbf{.323} & \textbf{.404} & \textbf{.418} & \textbf{.375} & \textbf{.385} & \textbf{.266} & \textbf{.270}\\ MetaR (BG:In-Train) & .193 & .221 & .280 & .302 & .233 & .264 & .152 & .178 \\ \bottomrule \end{tabular} \caption{Results of few-shot link prediction on NELL-One and Wiki-One. \textbf{Bold} numbers are the best results overall and \underline{underlined} numbers are the best results of GMatching. The contents in brackets after MetaR indicate the dataset setting used for MetaR.} \label{tab:result-fsrl} \end{table*} \subsection{Implementation} During training, mini-batch gradient descent is applied with batch size set to 64 and 128 for NELL-One and Wiki-One respectively. We use Adam~\cite{adam} with an initial learning rate of 0.001 to update parameters. We set $\gamma = 1$ and $\beta = 1$. The numbers of positive and negative triples in the query set are 3 and 10 in NELL-One and Wiki-One. The trained model is applied to validation tasks every 1000 epochs, and the current model parameters and corresponding performance are recorded; after stopping, the model with the best performance on Hits@10 is treated as the final model. For the number of training epochs, we use early stopping with a patience of 30 evaluations, which means that we stop training when the performance on Hits@10 drops 30 times in a row. Following GMatching, the embedding dimension is 100 for NELL-One and 50 for Wiki-One. The sizes of the two hidden layers in the relation-meta learner are 500 and 200 for NELL-One, and 250 and 100 for Wiki-One.
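For reference, the two evaluation metrics defined in Section~\ref{sec:dataset} reduce to a few lines once the rank of each true tail entity among all candidate entities is known (a standard sketch; ranks are 1-based):

```python
def mrr(ranks):
    """Mean reciprocal rank of the true tail entities."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_n(ranks, n):
    """Proportion of true tail entities ranked in the top n."""
    return sum(1 for r in ranks if r <= n) / len(ranks)
```

For example, ranks of 1, 2 and 10 over three queries give MRR $= (1 + 1/2 + 1/10)/3 \approx 0.533$, Hits@10 $= 1.0$ and Hits@5 $\approx 0.667$.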
\subsection{Results} The results of the two few-shot link prediction tasks, 1-shot and 5-shot, on NELL-One and Wiki-One are shown in Table~\ref{tab:result-fsrl}. The baseline in our experiment is GMatching~\cite{GMatching}, which made the first trial on the few-shot link prediction task and is the only method we can find to use as a baseline. In this table, results of GMatching with different KG embedding initializations are copied from the original paper. Our MetaR is tested on the different dataset settings introduced in Table~\ref{tab:dataform}. In Table~\ref{tab:result-fsrl}, our model performs better on all evaluation metrics on both datasets. Specifically, for 1-shot link prediction, MetaR increases by 33\%, 28.1\%, 29.2\% and 27.8\% on MRR, Hits@10, Hits@5 and Hits@1 on NELL-One, and 41.4\%, 18.8\%, 37.9\% and 62.2\% on Wiki-One, with average improvements of 29.53\% and 40.08\% respectively. For 5-shot, MetaR increases by 29.9\%, 40.5\%, 32.6\% and 17.5\% on MRR, Hits@10, Hits@5 and Hits@1 on NELL-One, with an average improvement of 30.13\%. Thus for the first question we want to explore, the results of MetaR are no worse than GMatching, indicating that MetaR has the capability of accomplishing few-shot link prediction. In parallel, the impressive improvement compared with GMatching demonstrates that the key idea of MetaR, transferring relation-specific meta information from the support set to the query set, works well on the few-shot link prediction task. Furthermore, compared with GMatching, our MetaR is independent of background knowledge graphs. We test MetaR on 1-shot link prediction on partial NELL-One and Wiki-One, which discard the background graph, and obtain Hits@10 of 0.279 and 0.348 respectively. Such results are still comparable with GMatching on the full datasets with background. \begin{table} \centering \begin{tabular}{lcc} \toprule Ablation Conf.
& BG:Pre-Train & BG:In-Train \\
\midrule
\textit{standard} & .331 & .401 \\
\midrule
\textit{-g} & .234 & .341 \\
\midrule
\textit{-g -r} & .052 & .052 \\
\bottomrule
\end{tabular}
\caption{Results of the ablation study on Hits@10 of 1-shot link prediction on NELL-One.}
\label{tab:result-ablation}
\end{table}

\subsection{Ablation Study}
We have shown in the previous section that relation-specific meta information, the key point of MetaR, successfully contributes to few-shot link prediction. As there are two kinds of relation-specific meta information in this paper, relation meta and gradient meta, we want to figure out how each of them contributes to the performance. Thus, we conduct an ablation study with three settings. The first is our complete MetaR method, denoted as \textit{standard}. The second removes gradient meta: relation meta is transferred directly from the support set to the query set without being updated via gradient meta, denoted as \textit{-g}. The third further removes relation meta, which reduces the model to a simple TransE embedding model, denoted as \textit{-g -r}. The result under the third setting is copied from \citet{GMatching}. It uses the triples from the background graph, the training tasks, and the one-shot training triples from the validation/test sets, so it is neither \textit{BG:Pre-Train} nor \textit{BG:In-Train}. We conduct the ablation study on NELL-One with the Hits@10 metric; results are shown in Table~\ref{tab:result-ablation}.

Table~\ref{tab:result-ablation} shows that removing gradient meta decreases Hits@10 by 29.3\% and 15\% in the two dataset settings, and additionally removing relation meta decreases it by a further 55\% and 72\% of the \emph{standard} results. Thus both relation meta and gradient meta contribute significantly, and relation meta contributes more than gradient meta.
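The quoted percentage drops follow directly from the Hits@10 values in Table~\ref{tab:result-ablation}; a quick check, expressing every drop relative to the \emph{standard} result of each setting:

```python
# Verify the ablation percentages quoted in the text, all measured relative
# to the *standard* Hits@10 results (.331 for BG:Pre-Train, .401 for BG:In-Train).
standard = {"BG:Pre-Train": 0.331, "BG:In-Train": 0.401}
no_g     = {"BG:Pre-Train": 0.234, "BG:In-Train": 0.341}   # -g
no_g_r   = 0.052                                           # -g -r

for setting in standard:
    drop_g = (standard[setting] - no_g[setting]) / standard[setting]
    drop_r = (no_g[setting] - no_g_r) / standard[setting]  # further drop
    print(setting, f"{drop_g:.1%}", f"{drop_r:.1%}")
# BG:Pre-Train: removing gradient meta costs 29.3%, relation meta a further 55.0%
# BG:In-Train:  removing gradient meta costs 15.0%, relation meta a further 72.1%
```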
Without gradient meta and relation meta, no relation-specific meta information is transferred in the model and it almost does not work. This also illustrates that relation-specific meta information is important and effective for the few-shot link prediction task.

\subsection{Factors That Affect MetaR's Performance}
\label{sec:analysis}
We have shown that both relation meta and gradient meta contribute to few-shot link prediction. But are there any conditions that MetaR requires to ensure good performance? We analyze this from two angles based on the results: the sparsity of entities and the number of tasks in the training set.

\textbf{The sparsity of entities.} We notice that the best results on NELL-One and Wiki-One appear in different dataset settings. On NELL-One, MetaR performs better in the \textit{BG:In-Train} setting, while on Wiki-One it performs better in \textit{BG:Pre-Train}. The performance difference between the two dataset settings is more significant on Wiki-One.

Most few-shot datasets are sparse, and NELL-One and Wiki-One are no exception, but the entity sparsity of these two datasets still differs significantly. This is especially reflected in the proportion of entities that appear in only one triple in the training set: $82.8$\% in Wiki-One versus $37.1$\% in NELL-One. Entities with only one triple during training prevent MetaR from learning good representations for them, because entity embeddings rely heavily on the triples related to them in MetaR. Based on only one triple, the learned entity embeddings carry a lot of bias. Knowledge graph embedding methods can learn better embeddings than MetaR for such one-shot entities, because an entity embedding can be corrected by the embeddings of the relations connecting to it, which is not possible in MetaR.
This is why the best performance on Wiki-One occurs in the \textit{BG:Pre-Train} setting: pre-trained entity embeddings help MetaR overcome the low quality of one-shot entity representations.

\textbf{The number of tasks.} From the comparison of MetaR's performance on NELL-One with and without the background graph, we find that the number of tasks affects MetaR's performance significantly. With \textit{BG:In-Train}, there are 321 tasks during training and MetaR achieves 0.401 on Hits@10, while without background knowledge there are only 51 tasks, 270 fewer, and MetaR achieves 0.279. This explains why MetaR achieves its best performance in \textit{BG:In-Train} on NELL-One: even though NELL-One has $37.1$\% one-shot entities, adding background knowledge into the dataset increases the number of training tasks significantly, which compensates for the sparsity problem and contributes more to the task.

Thus we conclude that both the sparsity of entities and the number of tasks affect the performance of MetaR. Generally, with more training tasks, MetaR performs better, and for extremely sparse datasets, pre-trained entity embeddings are preferred.

\section{Conclusion}
We propose a meta relational learning framework for few-shot link prediction in KGs, and we design our model to transfer relation-specific meta information from the support set to the query set. Specifically, relation meta transfers common and important information, while gradient meta accelerates learning. Compared with GMatching, the only previous method for this task, our method MetaR achieves better performance and is also independent of background knowledge graphs. Based on the experimental results, we analyze how the performance of MetaR is affected by the number of training tasks and the sparsity of entities. In the future, we may consider obtaining more valuable information about sparse entities for few-shot link prediction in KGs.
\section*{Acknowledgments}
We want to express our gratitude to the anonymous reviewers for their hard work and kind comments, which will further improve our work in the future. This work is funded by NSFC 91846204/61473260, national key research program YS2018YFB140004, and the Alibaba CangJingGe (Knowledge Engine) Research Plan.
package com.xing.sample.advancedimmersivemode.common.logger;

/**
 * Created by Administrator on 2016/10/17.
 */
public interface LogNode {
    void println(int priority, String tag, String msg, Throwable tr);
}
package org.apache.nifi.processor;

import org.apache.nifi.logging.LogLevel;
import org.apache.nifi.reporting.ReportingTask;
import org.junit.Before;
import org.junit.Test;
import org.slf4j.Logger;

import java.lang.reflect.Field;

import static org.apache.nifi.processor.SimpleProcessLogger.NEW_LINE_ARROW;
import static org.junit.Assert.fail;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

public class TestSimpleProcessLogger {

    private static final String EXPECTED_CAUSES = "java.lang.RuntimeException: third" + System.lineSeparator()
            + NEW_LINE_ARROW + " causes: java.lang.RuntimeException: second" + System.lineSeparator()
            + NEW_LINE_ARROW + " causes: java.lang.RuntimeException: first";

    private final Exception e = new RuntimeException("first",
            new RuntimeException("second", new RuntimeException("third")));

    private ReportingTask task;
    private SimpleProcessLogger componentLog;
    private Logger logger;

    @Before
    public void before() {
        task = mock(ReportingTask.class);
        when(task.getIdentifier()).thenReturn("foo");
        when(task.toString()).thenReturn("MyTask");
        componentLog = new SimpleProcessLogger(task.getIdentifier(), task);
        try {
            // Swap in a mock logger via reflection so delegate calls can be verified.
            Field loggerField = componentLog.getClass().getDeclaredField("logger");
            loggerField.setAccessible(true);
            logger = mock(Logger.class);
            when(logger.isDebugEnabled()).thenReturn(true);
            when(logger.isInfoEnabled()).thenReturn(true);
            when(logger.isWarnEnabled()).thenReturn(true);
            when(logger.isErrorEnabled()).thenReturn(true);
            when(logger.isTraceEnabled()).thenReturn(true);
            loggerField.set(componentLog, logger);
        } catch (Exception e) {
            e.printStackTrace();
            fail(e.getMessage());
        }
    }

    @Test
    public void validateDelegateLoggerReceivesThrowableToStringOnError() {
        componentLog.error("Hello {}", e);
        verify(logger, times(1)).error(anyString(), eq(task), eq(EXPECTED_CAUSES), eq(e));
    }

    @Test
    public void validateDelegateLoggerReceivesThrowableToStringOnInfo() {
        componentLog.info("Hello {}", e);
        verify(logger, times(1)).info(anyString(), eq(e));
    }

    @Test
    public void validateDelegateLoggerReceivesThrowableToStringOnTrace() {
        componentLog.trace("Hello {}", e);
        verify(logger, times(1)).trace(anyString(), eq(task), eq(EXPECTED_CAUSES), eq(e));
    }

    @Test
    public void validateDelegateLoggerReceivesThrowableToStringOnWarn() {
        componentLog.warn("Hello {}", e);
        verify(logger, times(1)).warn(anyString(), eq(task), eq(EXPECTED_CAUSES), eq(e));
    }

    @Test
    public void validateDelegateLoggerReceivesThrowableToStringOnLogWithLevel() {
        componentLog.log(LogLevel.WARN, "Hello {}", e);
        verify(logger, times(1)).warn(anyString(), eq(task), eq(EXPECTED_CAUSES), eq(e));
        componentLog.log(LogLevel.ERROR, "Hello {}", e);
        verify(logger, times(1)).error(anyString(), eq(task), eq(EXPECTED_CAUSES), eq(e));
        componentLog.log(LogLevel.INFO, "Hello {}", e);
        verify(logger, times(1)).info(anyString(), eq(e));
        componentLog.log(LogLevel.TRACE, "Hello {}", e);
        verify(logger, times(1)).trace(anyString(), eq(task), eq(EXPECTED_CAUSES), eq(e));
    }
}
//            Exception ex = context.Server.GetLastError();
//            // unwrap unhandled exception
//            if (ex is HttpUnhandledException && ex.InnerException != null)
//                ex = ex.InnerException;
//            bool rethrow = ExceptionPolicy.HandleException(ex, "Exception Policy");
//            if (!rethrow)
//                context.Server.ClearError();
//        }
//    }
//}
\section{Introduction} \label{sec:introduction} In the dense neutrino flux emerging from a supernova (SN) core, neutrino-neutrino refraction causes nonlinear flavor oscillation phenomena that are unlike anything produced by ordinary matter~\cite{Pastor:2002we, Sawyer:2005jk, Fuller:2005ae, Duan:2005cp, Duan:2006an, Duan:2006jv, Hannestad:2006nj, Raffelt:2007yz, Duan:2007mv, Raffelt:2007cb, Mirizzi2007}. The crucial phenomenon is a collective mode of pair transformations of the form $\nu_e\bar\nu_e\to\nu_x\bar\nu_x$ where $x$ represents some suitable superposition of $\nu_\mu$ and $\nu_\tau$. This pair-wise form of flavor transformation leaves the net flavor-lepton number flux unchanged. Even an extremely small mixing angle is enough to trigger this effect that is insensitive to the presence of ordinary matter unless there is a Mikheyev-Smirnov-Wolfenstein (MSW) resonance in the dense-neutrino region. Collective pair transformations require a large neutrino density {\it and\/} a pair excess of a given flavor. In typical SN models one finds a hierarchy of number fluxes $F_{\nu_e}>F_{\bar\nu_e}>F_{\nu_x}=F_{\bar\nu_x}$. The first part of the hierarchy is caused by the deleptonization of the collapsed core whereas the second is caused by the absence of charged-current interactions for neutrino species other than $\nu_e$ and $\bar\nu_e$. The neutrino fluxes streaming from a collapsed star thus provide a natural environment for a flavor pair excess. On the other hand, the SN core itself is characterized by a large $\nu_e$ chemical potential that enhances the $\nu_e$ density and suppresses that of $\bar\nu_e$ so that here pair transformations cannot occur. Likewise, the deleptonization burst immediately after core bounce has an excess of $\nu_e$ and a depletion of $\bar\nu_e$ \cite{Kachelriess:2004ds}, suggesting that it is unaffected by collective pair transformations. 
\begin{figure}[b]
\includegraphics[angle=0,width=0.98\columnwidth]{fig01.eps}
\caption{Schematic evolution of the $z$-components of the total polarization vectors for neutrinos and antineutrinos in a SN caused by neutrino-neutrino interactions for the inverted-hierarchy example described in the text.\label{fig:firstexample}}
\end{figure}

We illustrate collective pair conversions with a simple example in Fig.~\ref{fig:firstexample}, assuming a typical SN neutrino luminosity to be quantified later. We show the evolution of the $z$-components of the global flavor polarization vector ${\bf P}$ for neutrinos and $\bar{\bf P}$ for antineutrinos, where initially $P=|{\bf P}|=1+\epsilon$ with $\epsilon=0.25$ and $\bar P=|\bar{\bf P}|=1$. We have assumed a monochromatic spectrum, that all neutrinos are emitted at $45^\circ$ relative to the radial direction, the atmospheric $\Delta m^2$, a small vacuum mixing angle $\sin2\theta=10^{-3}$ to mimic the effect of ordinary matter, and an inverted mass hierarchy. For the normal hierarchy, no visible evolution takes place. Flavor oscillations do not change those parts of the flavor fluxes that are already equal; only the transformation of the excess $\bar\nu_e$ flux over the $\bar\nu_x$ flux is observable, and likewise for neutrinos. The polarization vectors only represent this excess. Therefore, without loss of generality we may set $F_{\nu_x}=F_{\bar\nu_x}=0$ in our examples or, equivalently, we may picture $F_{\bar\nu_e}$ to represent $F_{\bar\nu_e}-F_{\bar\nu_x}$ and likewise for neutrinos. Our chosen parameters mean that at the neutrino sphere ($R=10$~km) the excess of the $\nu_e$ flux over the $\nu_x$ flux is 25\% larger than the excess of the $\bar\nu_e$ flux over the $\bar\nu_x$ flux ($\epsilon=0.25$).
$\bar P_z=+1$ then represents a pure $\bar\nu_e$ excess flux, $\bar P_z=0$ represents equal excess fluxes of both flavors, and $\bar P_z=-1$ a pure $\bar\nu_x$ excess flux, and analogous for $\nu_e$ with $1\to1+\epsilon$. The main features of Fig.~\ref{fig:firstexample} are nicely explained after recognizing that the equations of motion can be brought into a form where they are equivalent to a gyroscopic pendulum~\cite{Hannestad:2006nj, Duan:2007mv}. The initial ``plateau phase'' corresponds to synchronized oscillations or, in the pendulum language, to a fast precession. We call the radius where this phase ends the synchronization radius $r_{\rm synch}$. The decline with ``wiggles'' represents a nutation mode. The overall decline is caused by the dilution of the neutrino flux and their increasing collinearity with distance, corresponding to a decline of their effective interaction energy. One salient feature of Fig.~\ref{fig:firstexample} is that the $\bar\nu_e$ flux completely converts to $\bar\nu_x$, whereas the $\nu_e$ flux converts to $\nu_x$ only to the extent allowed by the conservation of $P_z-\bar P_z=\epsilon$. This conservation is exact in the mass basis that approximately coincides with the interaction basis if the mixing angle is small. In other words, only $\nu_e\bar\nu_e$ pairs convert to $\nu_x\bar\nu_x$ pairs, whereas the unpaired $\nu_e$ excess remains in its original flavor~\cite{Hannestad:2006nj}. The current-current nature of the weak interaction causes the interaction energy to depend on $(1-\cos\theta)$ for two trajectories with relative angle $\theta$. Therefore, neutrinos emitted in different directions from a SN core experience different refractive effects~\cite{Duan:2006an, Duan:2006jv}. As a result, one would expect that their flavor content evolves differently, leading to kinematical decoherence between different angular modes~\cite{Sawyer:2005jk}. 
Two of us have recently shown that this multi-angle decoherence is indeed unavoidable in a ``symmetric gas'' of equal densities of neutrinos and antineutrinos~\cite{Raffelt:2007yz}. Moreover, this effect is self-accelerating in that an infinitesimal anisotropy is enough to trigger an exponential run-away towards flavor equipartition, both for the normal and inverted hierarchies. In the SN context, however, it has been numerically observed that the evolution is much more similar to the single-angle (or the isotropic) case~\cite{Duan:2006an, Duan:2006jv}. The flux emitted by a SN is extremely anisotropic. If one assumes $\nu\bar\nu$ symmetry, flavor decoherence is swift and unavoidable. Therefore, the observed suppression of multi-angle decoherence must be related to the $\nu_e\bar\nu_e$ asymmetry that is generated by SN core deleptonization. To illustrate this point we show in Fig.~\ref{fig:example2} a few examples along the lines of Fig.~\ref{fig:firstexample}, but now for multi-angle emission from the neutrino sphere that is again taken at 10~km. We consider different values of an asymmetry parameter that we define as \begin{equation}\label{eq:epsdefine} \epsilon= \frac{F(\nu_e)-F(\nu_x)}{F(\bar\nu_e)-F(\bar\nu_x)}-1 =\frac{F(\nu_e)-F(\bar\nu_e)}{F(\bar\nu_e)-F(\bar\nu_x)}\,, \end{equation} where we have used $F(\nu_x)=F(\bar\nu_x)$. As mentioned earlier, we can assume $F(\nu_x)=F(\bar\nu_x)=0$ at the neutrino sphere without loss of generality. The left panels are for the normal hierarchy, the right panels for the inverted hierarchy. $P_z(r)-\bar P_z(r)=\epsilon$ is constant so that it is sufficient to show $\bar P_z(r)$ alone. However, the length $\bar P=|\bar {\bf P}|$ is no longer preserved: Complete kinematical decoherence among the angular modes would cause $\bar P=0$. On the other hand, if $\bar P=1$ remains fixed, this signifies that all modes evolve coherently with each other. 
We use $\bar{\bf P}$ rather than ${\bf P}$ because the former measures what happens to the $\nu_e\bar\nu_e$ pairs, whereas the latter also includes the conserved $\nu_e$~excess. \begin{figure*} \includegraphics[width=0.85\textwidth]{fig02.eps} \caption{Radial evolution of $\bar P_z$ in a schematic SN model as in Fig.~\ref{fig:firstexample}, but now for multi-angle neutrino emission at the neutrino sphere ($R=10$~km). In addition we show the length $\bar P=|\bar {\bf P}|$ as a measure of kinematical coherence. Left: normal hierarchy. Right: inverted hierarchy. From top to bottom: $\epsilon=0$, 0.06, 0.12 and~0.25, where $\epsilon$ is defined in Eq.~(\ref{eq:epsdefine}).\label{fig:example2}} \end{figure*} In the top row we use $\epsilon=0$ (symmetric case). The flavor content decoheres quickly as expected. Both the length and the $z$-components of ${\bf P}$ and $\bar{\bf P}$ shrink to zero within about 20~meters of the nominal neutrino sphere. On the other extreme, we show in the bottom row the same for $\epsilon=0.25$. In the normal hierarchy, nothing visible happens, in analogy to the single-angle case. In the inverted hierarchy, the transformation is similar, but not identical, to the single-angle case. The nutations wash out quickly. Shortly after exiting from the synchronization phase, the length $\bar P$ shrinks a bit, but stays almost constant thereafter. Clearly, some sort of multi-angle effect has happened as we will discuss further in Sec.~\ref{sec:decoherence}, but multi-angle decoherence has certainly not occurred. In the two middle rows we show intermediate cases with $\epsilon=0.06$ and 0.12, respectively. For the inverted hierarchy, these examples are qualitatively equivalent. The evolution is at first similar to the single-angle case and analogous to $\epsilon=0.25$. The nutations are washed out and the length $\bar P$ shrinks a little bit after the synchronization radius. 
At some larger radius, however, something new happens in that $\bar P$ suddenly shrinks significantly, although not to zero, and there is a distinct feature in the evolution of the $z$-component. Now we obtain partial decoherence. The final flavor content is very different from the single-angle case. In the normal hierarchy, and for $\epsilon=0.06$, we obtain large decoherence that begins abruptly at some radius far beyond $r_{\rm synch}$. For the larger asymmetry $\epsilon=0.12$, the length $\bar P$ also shrinks, but closely tracks $\bar P_z$. As we will see, this case is somewhat like Phase~II of the inverted-hierarchy case, i.e., a certain amount of shrinking of the length of $\bar P$ and thus a clear multi-angle effect, but no real decoherence. Depending on the deleptonization flux, here represented by the asymmetry parameter $\epsilon$, the system behaves very differently. In particular, for the inverted hierarchy it is striking that there are either two or three distinct phases. We always have the initial synchronized phase at large neutrino densities. Next, there is always the quasi single-angle pair-transformation phase at distances larger than $r_{\rm synch}$. Just beyond this radius, the global polarization vectors quickly shrink by a small amount, but then stabilize immediately. Finally, if $\epsilon$ is below some critical value, there is a sharp transition to a third phase where the different angular modes decohere significantly, but not completely. The practical outcome for the flavor fluxes emerging from the dense-neutrino region is very different depending on $\epsilon$. The transition between these regimes is abrupt, a small change of $\epsilon$ is enough to cause one or the other form of behavior. While these phenomena call for an analytic quantitative understanding, we are here less ambitious, but more practical. We study numerically for which range of parameters the different forms of behavior occur. 
Towards this goal we first set up, in Sec.~\ref{sec:setup}, our conventions, the equations of motion for a spherically symmetric system, and establish the connection between the parameters of our schematic model with those of a realistic SN scenario. In Sec.~\ref{sec:decoherence} we describe in more detail what happens in the different phases of evolution diagnosed in Fig.~\ref{fig:example2} and identify useful measures of decoherence. In Sec.~\ref{sec:parameters} we investigate the role of our various model parameters in determining if the system kinematically decoheres. We discuss our findings and conclude in Sec.~\ref{sec:conclusions}. In Appendix~\ref{app:eom} we derive the equations of motion adapted to spherical symmetry. \section{Setup of the problem} \label{sec:setup} \subsection{Equations of motion} To study the flavor evolution of the neutrino flux emitted by a SN core we solve numerically the equations of motion for the flavor-dependent number fluxes, assuming spherical symmetry. We always work in a two-flavor scenario between $\nu_e$ and another flavor $\nu_x$, characterized by the atmospheric $\Delta m^2$ and by a vacuum mixing angle $\theta$ that is taken to represent the unknown 13-mixing angle. Our fundamental quantities are the flux matrices in flavor space ${\sf J}_r$ that depend on the radial coordinate $r$ (Appendix~\ref{app:eom}). The diagonal entries represent the total neutrino number fluxes through a sphere of radius~$r$. In the absence of oscillations, ${\sf J}_r$ would not depend on the radius at all. The flux matrices are represented by polarization vectors ${\bf P}_r$ in the usual way, \begin{eqnarray}\label{eq:pol} {\sf J}_r&=& \frac{F(\nu_e)+F(\nu_x)}{2} +\frac{F(\bar\nu_e)-F(\bar\nu_x)}{2}\, {\bf P}_r\cdot\bm{\sigma}\,, \nonumber\\ \bar{\sf J}_r&=& \frac{F(\bar\nu_e)+F(\bar\nu_x)}{2} +\frac{F(\bar\nu_e)-F(\bar\nu_x)}{2}\, \bar{\bf P}_r\cdot\bm{\sigma}\,, \end{eqnarray} where $\bm\sigma$ is the vector of Pauli matrices. 
Antineutrino quantities are always denoted with an overbar. The number fluxes $F(\nu)$ are understood at the neutrino sphere. In both equations the term proportional to the polarization vector is normalized to the antineutrino flux. As a consequence, at the neutrino sphere we have the normalization \begin{equation} P=|{\bf P}|=1+\epsilon \hbox{\quad and\quad} \bar P=|\bar{\bf P}|=1\,. \end{equation} In this way, we treat the excess flux from deleptonization as an adjustable parameter without affecting the baseline flux of antineutrinos. The diagonal part of the flux matrices is conserved and irrelevant for flavor oscillations. The polarization vector ${\bf P}_r$ only captures the {\it difference\/} between the flavor fluxes. For this reason we have defined the asymmetry $\epsilon$ in terms of the flux differences. Multi-angle effects are at the focus of our study. We label different angular modes with \begin{equation} u=\sin^2\vartheta_R\,, \end{equation} where $\vartheta_R$ is the zenith angle at the neutrino sphere $r=R$ of a given mode relative to the radial direction. The parameter $u$ is fixed for every trajectory whereas the physical zenith angle $\vartheta_r$ at distance $r$ varies. Therefore, using the local zenith angle to label the modes would complicate the equations. We will consider two generic angular distributions for the modes. In the multi-angle case we assume that the neutrino radiation field is ``half isotropic'' directly above the neutrino sphere, i.e., all outward moving modes are equally occupied as expected for blackbody emission. This implies (Appendix~\ref{app:eom}) \begin{equation} {\bf P}_{u,r}={\rm d} {\bf P}_{r}/{\rm d} u= {\rm const.} \end{equation} at $r=R$ for $0\leq u\leq 1$. Note that $u=0$ represents radial modes, $u=1$ tangential ones. The other generic distribution is the single-angle case where all neutrinos are taken to be launched at $45^\circ$ at the neutrino sphere so that $u=1/2$ for all neutrinos. 
For a monochromatic energy distribution, the equations of motion in spherical symmetry are (Appendix~\ref{app:eom}) \begin{widetext} \begin{eqnarray}\label{eq:eom5} \partial_r{\bf P}_{u,r}&=& +\frac{\omega {\bf B}\times{\bf P}_{u,r}}{v_{u,r}} +\frac{\lambda_r{\bf L}\times{\bf P}_{u,r}}{v_{u,r}} +\mu\,\frac{R^2}{r^2} \left[\left(\int_0^1{\rm d} u'\, \frac{{\bf P}_{u',r}-\bar{\bf P}_{u',r}}{v_{u',r}}\right) \times\left(\frac{{\bf P}_{u,r}}{v_{u,r}}\right) -({\bf P}_r-\bar{\bf P}_r)\times{\bf P}_{u,r}\right]\,, \nonumber\\* \partial_r\bar{\bf P}_{u,r}&=& -\frac{\omega {\bf B}\times\bar{\bf P}_{u,r}}{v_{u,r}} +\frac{\lambda_r{\bf L}\times\bar{\bf P}_{u,r}}{v_{u,r}} +\mu\,\frac{R^2}{r^2} \left[\left(\int_0^1{\rm d} u'\, \frac{{\bf P}_{u',r}-\bar{\bf P}_{u',r}}{v_{u',r}}\right) \times\left(\frac{\bar{\bf P}_{u,r}}{v_{u,r}}\right) -({\bf P}_r-\bar{\bf P}_r)\times\bar{\bf P}_{u,r}\right]\,, \end{eqnarray} \end{widetext} where the radial velocity of mode $u$ at radius $r$ is \begin{equation} v_{u,r}=\sqrt{1-u\,R^2/r^2}\,. \end{equation} Further, $\omega=|\Delta m^2/2E|$ is the vacuum oscillation frequency, taken to be positive. ${\bf B}=(\sin2\theta,0,\pm\cos2\theta)$ where the mixing angle $\theta$ is usually taken to be small. $B_z<0$ corresponds to the normal hierarchy, $B_z>0$ to the inverted hierarchy. ${\bf L}$ is a unit vector in the $z$-direction because we work in the interaction basis. The matter density is represented by \begin{equation} \lambda_r=\sqrt2 G_{\rm F}\,[n_{e^-}(r)-n_{e^+}(r)]\,. \end{equation} The strength of the neutrino-neutrino interaction is parameterized by \begin{equation}\label{eq:mudefine} \mu=\sqrt2 G_{\rm F}\left(F_{\bar\nu_e}^R-F_{\bar\nu_x}^R\right)\,, \end{equation} where the fluxes are taken at the neutrino sphere with radius~$R$. The somewhat complicated structure of the equations arises from projecting the evolution of each mode on the radial direction. 
This is still very much simpler than following the evolution on every trajectory as a function of distance (or time) on that trajectory. We have here a closed set of differential equations that is not hard to solve numerically. We show in Appendix~\ref{app:eom} that for $r\gg R$, the vacuum and matter oscillation terms take on the familiar plane-wave form because at large distances all neutrinos essentially move on radial trajectories. The neutrino-neutrino term falls off as $r^{-4}$, in agreement with the previous literature. \subsection{Schematic supernova model} \label{sec:sn} We always consider a two-flavor oscillation scenario driven by the atmospheric $\Delta m^2=1.9$--$3.0\times10^{-3}~{\rm eV}^2$. Assuming $\langle E_\nu\rangle=15$~MeV, the oscillation frequency is $\omega=\hbox{0.3--0.5}~{\rm km}^{-1}$. To be specific, we use \begin{equation}\label{eq:omegadefine} \omega=\left\langle\frac{\Delta m^2}{2E}\right\rangle= 0.3~{\rm km}^{-1} \end{equation} as a benchmark value in the monochromatic model. The total energy output of a SN is around $3\times10^{53}~{\rm erg}$, corresponding to $0.5\times10^{53}~{\rm erg}$ in each of the six neutrino species if we assume approximate equipartition of the emitted energy. If this energy is emitted over 10~s, the average luminosity per flavor would be $0.5\times10^{52}~{\rm erg/s}$. However, at early times during the accretion phase, the luminosity in the $\bar\nu_e$ flavor can exceed $3\times10^{53}~{\rm erg/s}$~\cite{Raffelt:2003en}. As our baseline estimate we use \begin{eqnarray}\label{eq:muestimate} \mu&=&7\times10^5~{\rm km}^{-1}\\ &\times& \left(\frac{L_{\bar\nu_e}}{\langle E_{\bar\nu_e}\rangle} -\frac{L_{\bar\nu_x}}{\langle E_{\bar\nu_x}\rangle}\right) \frac{15~{\rm MeV}}{10^{52}~{\rm erg/s}}\, \left(\frac{10~{\rm km}}{R}\right)^2\,.\nonumber \end{eqnarray} This is significantly larger than the assumptions of previous studies~\cite{Duan:2006an, Hannestad:2006nj}. 
Unless otherwise stated, we always use the benchmark values for the different parameters summarized in Table~\ref{tab:benchmark}. \begin{table} \caption{Default values for our model parameters. \label{tab:benchmark}} \begin{ruledtabular} \begin{tabular}{lll} Parameter&Standard value&Definition\\ \hline $\epsilon$&0.25&Eq.~(\ref{eq:epsdefine})\\ $\mu$&$7\times10^5~{\rm km}^{-1}$&Eq.~(\ref{eq:mudefine})\\ $\omega$&$0.3~{\rm km}^{-1}$&Eq.~(\ref{eq:omegadefine})\\ $\sin2\theta$&$10^{-3}$&---\\ \end{tabular} \end{ruledtabular} \end{table} In our calculations we always take the neutrino sphere at the radius $R=10$~km. Of course, the physical neutrino sphere is not a well-defined concept. Therefore, the radius $R$ simply represents the location where we fix the inner boundary condition. However, essentially nothing happens until the synchronization radius $r_{\rm synch}\gg R$ because the in-medium mixing angle is extremely small and both neutrinos and antineutrinos simply precess around~${\bf B}$. Therefore, as far as the vacuum and matter oscillation terms are concerned, it is almost irrelevant where we fix the inner boundary condition. Not so for the neutrino-neutrino term because we also fix the angular distribution at $r=R$. While the $r^{-2}$ scaling from flux dilution is unaffected by the radius for the inner boundary condition, the ``collinearity suppression'' also scales as $(R/r)^{2}$ for $r\gg R$. If we fix a half-isotropic distribution or a single angle of $45^\circ$ at a larger radius $R'$, the new inner boundary condition essentially amounts to $\mu\to\mu'=\mu\,(R'/R)^2$. In the early phase after bounce $R'=30$~km could be more realistic, leading to a $\mu$ value almost an order of magnitude larger. Evidently, $\mu$ is a rather uncertain model parameter that can differ by orders of magnitude from our benchmark value. However, collective pair conversions only begin at $r_{\rm synch}$ where $\mu$ is so small that synchronization ends. 
Therefore, the main impact of a modified $\mu$ is to change $r_{\rm synch}$ and thus to push the collective pair conversions to larger radii. The oscillations are synchronized if~\cite{Hannestad:2006nj} \begin{equation} \frac{\mu}{\omega}>\frac{2}{(1-\sqrt{1+\epsilon})^2}\,. \end{equation} In our single-angle case we find from Eq.~(\ref{eq:muvariation}) that the effective neutrino-neutrino interaction strength varies at large distances as \begin{equation} \mu_{\rm eff}(r)=\mu\,\frac{R^4}{2r^4}\,. \end{equation} Therefore, the synchronization radius is \begin{eqnarray} \frac{r_{\rm synch}}{R}&=& \left(\frac{\sqrt{1+\epsilon}-1}{2}\right)^{1/2} \left(\frac{\mu}{\omega}\right)^{1/4} \nonumber\\* &\approx&\frac{\sqrt\epsilon}{2}\, \left(\frac{\mu}{\omega}\right)^{1/4}\,. \end{eqnarray} The second line assumes $\epsilon\ll 1$. If we use our benchmark values $\omega=0.3~{\rm km}^{-1}$, $\mu=7\times10^5~{\rm km}^{-1}$, $R=10$~km and $\epsilon=0.25$, we find $r_{\rm synch}=95$~km, corresponding well, for example, to Fig.~\ref{fig:firstexample}. In any event, if $\mu$ is taken to be uncertain by two orders of magnitude, $r_{\rm synch}$ only changes by a factor of 3. The total electron lepton number emitted from a collapsed SN core is about $3\times10^{56}$. On the other hand, assuming that each neutrino species carries away $0.5\times10^{53}~{\rm erg}$ with an average energy of 15~MeV, the SN core emits about $2\times10^{57}$ neutrinos in each of the six species. In this simplified picture, the SN emits on average about 15\% more $\nu_e$ than $\bar\nu_e$. However, in the oscillation context we need the excess of $F_{\nu_e}-F_{\nu_x}$ relative to the same quantity for antineutrinos as defined in Eq.~(\ref{eq:epsdefine}). The true value of $\epsilon$ thus depends sensitively on the detailed fluxes and spectra of the emitted neutrinos. 
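The single-angle quantities above can be checked numerically. The sketch below is a minimal illustration under stated assumptions: we use the large-distance limit $v\to1$, so the effective coupling is $\mu_{\rm eff}(r)=\mu R^4/2r^4$, and a pure-Python fixed-step RK4 integrator rather than the solver used for the figures. With the benchmark values it recovers $r_{\rm synch}\approx95$~km and reproduces the qualitative behavior of Fig.~\ref{fig:firstexample}: $\bar P_z$ drops from $+1$ to $\approx-1$ (complete pair conversion) while $P_z-\bar P_z$ stays fixed at $\epsilon$.

```python
# Single-angle bipolar sketch in the large-distance limit v -> 1, where the
# effective neutrino-neutrino coupling is mu_eff(r) = mu R^4 / (2 r^4).
# This is an illustrative reduction of the full multi-angle equations.
import math

R, omega, mu, eps = 10.0, 0.3, 7e5, 0.25      # benchmark values (Table I)
s2t = 1e-3                                    # sin(2*theta)
B = (s2t, 0.0, math.sqrt(1.0 - s2t**2))       # B_z > 0: inverted hierarchy

# Synchronization radius from the formula above (~95 km for benchmark values).
r_synch = R * math.sqrt((math.sqrt(1 + eps) - 1) / 2) * (mu / omega) ** 0.25

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def rhs(r, y):
    P, Pb = y[:3], y[3:]
    D = tuple(P[i] - Pb[i] for i in range(3))          # P - bar-P
    mueff = mu * R**4 / (2 * r**4)
    HP  = tuple( omega*B[i] + mueff*D[i] for i in range(3))
    HPb = tuple(-omega*B[i] + mueff*D[i] for i in range(3))
    return cross(HP, P) + cross(HPb, Pb)               # (dP/dr, dPb/dr)

def rk4(y, r0, r1, h):
    r = r0
    while r < r1:
        k1 = rhs(r, y)
        k2 = rhs(r + h/2, tuple(y[i] + h/2*k1[i] for i in range(6)))
        k3 = rhs(r + h/2, tuple(y[i] + h/2*k2[i] for i in range(6)))
        k4 = rhs(r + h, tuple(y[i] + h*k3[i] for i in range(6)))
        y = tuple(y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(6))
        r += h
    return y

y = (0.0, 0.0, 1.0 + eps, 0.0, 0.0, 1.0)      # P and bar-P start along +z
y = rk4(y, 0.75 * r_synch, 300.0, 0.01)       # dense steps through the swing
y = rk4(y, 300.0, 1000.0, 0.1)                # coarser steps far outside
Pz, Pbz = y[2], y[5]
```

At $r=1000$~km the coupling has dropped to $\mu_{\rm eff}\approx3.5\times10^{-3}~{\rm km}^{-1}$, so both vectors simply precess around $\pm\omega{\bf B}$ and the $z$-components are frozen; the conserved difference $P_z-\bar P_z=\epsilon$ (exact in the mass basis, approximate here for small mixing) provides a consistency check on the integration.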
The asymmetry parameter is large when the first hierarchy in $F_{\nu_e}>F_{\bar\nu_e}>F_{\bar\nu_x}=F_{\nu_x}$ is large and/or the second hierarchy is small. Even if $F_{\bar\nu_x}$ is as small as half of $F_{\bar\nu_e}$, the asymmetry $\epsilon$ would be as large as 30\%, even when $F_{\nu_e}$ exceeds $F_{\bar\nu_e}$ by only 15\%. \subsection{Numerical multi-angle decoherence and the inner boundary condition} \label{sec:numerical} One important and somewhat confusing complication of numerically solving the equations of motion is the phenomenon of numerical multi-angle decoherence. To integrate Eq.~(\ref{eq:eom5}) one needs to work with a finite number of angular modes, equivalent to coarse-graining the phase space of the system. If the number of angular bins is chosen smaller than some critical number $N_{\rm min}$, multi-angle decoherence occurs for $r<r_{\rm synch}$, where physically it is not possible and does not occur for a fine-grained calculation. This phenomenon is shown, for example, in Fig.~3 of Ref.~\cite{Duan:2006an}. It is not caused by a lack of numerical precision, but is a result of the coarse-graining of phase space. A related phenomenon is recurrence, as discussed in the context of multi-angle decoherence in Ref.~\cite{Raffelt:2007yz}. In other words, a coarsely grained multi-angle system behaves differently from a finely grained one. A smaller mixing angle reduces $N_{\rm min}$; a larger neutrino-neutrino interaction strength increases it. It should be possible to estimate $N_{\rm min}$ from first principles, but for the moment we need to rely on trial and error. Starting the integration at $r=R$ is doubly punishing because the fast oscillations of individual modes caused by a large $\mu$ require many radial steps for the numerical integration, and avoiding numerical decoherence requires a large number of angular modes.
On the other hand, in this region nothing but fast synchronized oscillations take place that have no physical effect if the mixing angle is small. Using a larger radius as a starting point for the integration avoids both problems and does not modify the overall flavor evolution at larger distances. From the physical perspective, the ``neutrino sphere'' is not a well-defined concept because different energy modes and different species decouple at different radii, and in any case, each individual neutrino scatters last at a different radius. If the exact inner boundary condition mattered, we would need to solve the full kinetic equations, including neutral-current and charged-current collisions. It is the beauty of the neutrino-neutrino flavor transformation problem that the real action begins at $r_{\rm synch}$, significantly outside the neutrino sphere. Our approach of reducing the equations of motion to the refractive terms is only self-consistent because the exact location of the inner boundary condition is irrelevant. In summary, the nominal neutrino sphere at $R=10$~km is nothing but a point of reference where we normalize the fluxes and fix the angular distribution. As a starting point for integration we typically use $r_0=0.75\,r_{\rm synch}$. A few hundred angular modes are then usually enough to avoid numerical decoherence. We note, however, that the normal-hierarchy cases are more sensitive to both the number of angular bins and the starting radius for the integration. A case that looks like the $\epsilon=0.12$ example in Fig.~\ref{fig:example2}, which shows a mild shrinking of the polarization vector, can become ``more coherent'' by choosing a smaller starting radius, which then may also require a larger number of modes. For the normal hierarchy, the different multi-angle cases are less cleanly separated from each other than in the inverted hierarchy in that the transition is less abrupt as a function of $\epsilon$.
When physical multi-angle decoherence occurs (e.g.~the middle rows of Fig.~\ref{fig:example2}), a much larger number of modes is needed to provide reproducible results. However, we are not interested here in the exact final outcome, but rather in the range of parameters that lead to decoherence. Therefore, massive computational power is not needed for our study. For those cases where we include a non-trivial spectrum of energies we also need energy bins. A distribution of energies does not lead to kinematical decoherence in the context of collective neutrino oscillations~\cite{Hannestad:2006nj} so that the number of energy bins is not a crucial parameter. Of course, to resolve the energy-dependent behavior and especially the spectral splits~\cite{Duan:2006an, Duan:2007mv, Raffelt:2007cb, Mirizzi2007}, a sufficiently fine-grained binning is required. It provides better resolution, but not a qualitatively different form of behavior. \section{Coherent evolution vs.\ decoherence} \label{sec:decoherence} \begin{figure*} \includegraphics[width=0.85\textwidth]{fig03.eps} \caption{Final location on the unit sphere of 500 antineutrino polarization vectors for our standard parameters and the inverted hierarchy. The top row is the ``side view'' ($x$-$z$-components), the bottom row the ``top view'' ($x$-$y$-components). Left:~quasi single-angle case ($\epsilon=0.25$). Middle: decoherent case ($\epsilon=0.12$). Right: symmetric system ($\epsilon=0$).\label{fig:footprints1}} \end{figure*} \begin{figure*} \includegraphics[width=0.85\textwidth]{fig04.eps} \caption{Same as Fig.~\ref{fig:footprints1}, now only top views for quasi single-angle cases with the mixing angles $\sin2\theta=0.1$, $10^{-3}$ and $10^{-6}$ from left to right.
The middle panel is identical with the bottom left panel of Fig.~\ref{fig:footprints1}.\label{fig:footprints4}} \end{figure*} \subsection{Different forms of evolution} Before investigating the conditions for decoherence among angular neutrino modes we first take a closer look at what happens in the different cases shown in Fig.~\ref{fig:example2}. Considering first the quasi single-angle case with the asymmetry $\epsilon=0.25$, some insight is gained by looking at the final state of the evolution at some large radius where the neutrino-neutrino effects have completely died out and all modes simply perform vacuum oscillations. In the left-hand panels of Fig.~\ref{fig:footprints1} we show the end state of 500~polarization vectors, representing modes uniformly spaced in the angular coordinate $u$. In the upper panel we show the final state in the $x$-$z$-plane (``side view''), in the lower panel in the $x$-$y$-plane (``top view''). \begin{figure} \includegraphics[width=0.6\columnwidth]{fig05.eps} \caption{Same as Fig.~\ref{fig:footprints1}, here for the normal hierarchy and $\epsilon=0.12$.\label{fig:footprints2}} \end{figure} Initially, all polarization vectors are aligned in the flavor direction. At the beginning of the pair transformation phase at $r_{\rm synch}$, some are peeled off, forming a spiral structure that is easily gleaned from the left panels of Fig.~\ref{fig:footprints1}. This structure continues to evolve almost as in the single-angle case, i.e., once established it moves almost like a rigid body and eventually orients itself in the negative ${\bf B}$-direction. Of course, it continues to rotate around the ${\bf B}$ direction even at large radii because of vacuum oscillations. The spiral structure is different depending on the mixing angle. We illustrate this in Fig.~\ref{fig:footprints4} where we show the top view in analogy to the lower-left panel of Fig.~\ref{fig:footprints1} for different choices of mixing angle. 
For a large $\sin2\theta$, the polarization vectors stay close to each other. For a smaller $\sin2\theta$, the spiral spreads over a larger solid angle and has more windings. We recall that a smaller $\sin2\theta$ also has the effect of causing a larger nutation depth of the flavor pendulum~\cite{Raffelt:2007yz}. We now turn to the quasi decoherent case with $\epsilon=0.12$. Initially the same happens as before, but at the ``decoherence radius'' the spiral structure dissolves almost instantaneously. The polarization vectors enter a complicated structure as illustrated by the end state (central panels of Fig.~\ref{fig:footprints1}). Moreover, they are spread out all over the unit sphere, having both positive and negative $z$-components. This structure looks different for different choices of $\sin2\theta$ and $\epsilon$. However, once a sufficient number of polarization vectors is used, it is reproducible. For $\epsilon=0.06$ the picture would be qualitatively similar. Finally we show the fully symmetric case ($\epsilon=0$) in the right-hand panels. Here decoherence is fast and complete. For a small mixing angle, all polarization vectors are confined to the $x$-$z$-plane. They distribute themselves on a circle in that plane. For the normal hierarchy, we show in Fig.~\ref{fig:footprints2} as an explicit example the $\epsilon=0.12$ case of Fig.~\ref{fig:example2} that showed a clear multi-angle effect without strong decoherence. Once more we find a spiral structure. Most polarization vectors remain oriented roughly in their original direction, but in this case there is also a tail of a few reversed polarization vectors. The quasi decoherent case ($\epsilon=0.06$) and the symmetric system produce final pictures similar to the corresponding cases of the inverted hierarchy.
Rather, in the mono-energetic case considered here, the occupied phase space is a one-dimensional subspace of the unit sphere. It is parameterized by the angular variable $u$ and shows a clear line-like structure. This picture suggests using the length of this line on the unit sphere as another global measure besides the length $\bar P$ to discriminate between different modes of evolution~\cite{Raffelt:2007yz}. In a numerical run with discrete angular bins, this quantity is simply the sum of the angles between neighboring polarization vectors. In Fig.~\ref{fig:phasespace} we show this quantity for the indicated values of $\epsilon$ as a function of radius for our inverted-hierarchy examples. \begin{figure}[b] \includegraphics[width=0.95\columnwidth]{fig06.eps} \caption{Evolution of the length of the one-dimensional subspace occupied by the polarization vectors for our standard inverted hierarchy case, taking a series of different asymmetries $\epsilon$. The length grows to larger values for smaller asymmetries. \label{fig:phasespace}} \end{figure} At the radius $r_{\rm synch}$ where the spiral forms, the length on the unit sphere quickly increases from~0 to a value that is almost independent of $\epsilon$, but depends on the mixing angle. For smaller $\sin2\theta$ it is larger, corresponding to the spiral having more windings as indicated earlier. Later, this length stays practically constant, reflecting that the spiral structure, once established, does not change much except tilting toward the negative ${\bf B}$-direction and precessing around it. When $\epsilon$ is smaller than a critical value, a sudden second growth phase sets in at the ``decoherence radius,'' shooting up from the plateau of these curves. For smaller $\epsilon$, the final length is longer, representing a more ``phase-space filling'' line on the unit sphere. Note, however, that for $\epsilon$ close to zero, the line does not fill the unit sphere, but essentially stays in a narrow band.
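In a discretized run, this length measure reduces to a short sum over adjacent modes; a minimal sketch in Python, using made-up polarization vectors rather than output from our runs:

```python
import numpy as np

def line_length(P):
    """Length on the unit sphere of the line traced by ordered polarization
    vectors: the sum of angles between neighboring vectors."""
    P = P / np.linalg.norm(P, axis=1, keepdims=True)   # normalize each vector
    cosang = np.clip((P[:-1] * P[1:]).sum(axis=1), -1.0, 1.0)
    return np.arccos(cosang).sum()

# Perfectly aligned vectors: zero length (synchronized phase)
aligned = np.tile([0.0, 0.0, 1.0], (500, 1))

# Vectors spread uniformly over half a great circle in the x-z-plane,
# crudely mimicking a fully decohered symmetric configuration
theta = np.linspace(0.0, np.pi, 500)
half_circle = np.column_stack([np.sin(theta), np.zeros_like(theta), np.cos(theta)])

print(line_length(aligned))      # 0.0
print(line_length(half_circle))  # pi
```

The `clip` guards against round-off pushing the dot product slightly outside $[-1,1]$, which would otherwise make `arccos` return NaN.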
In the perfectly symmetric case, the motion of all polarization vectors is essentially confined to the $x$-$z$-plane, i.e., the polarization vectors distribute themselves over a great circle on the sphere as shown in the right panels of Fig.~\ref{fig:footprints1}. \section{Role of model parameters} \label{sec:parameters} \subsection{Ordinary matter} \label{sec:matter} We now explore how various model parameters influence the behavior of the system. In the examples so far we have ignored matter because its effect is mainly to suppress the vacuum mixing angle. We here make this argument more precise. In Fig.~\ref{fig:profiles} we show typical matter density profiles, expressed in terms of the matter oscillation frequency $\lambda(r)$, from numerical simulations of the Garching group for different times after collapse~\cite{Arcones:2006uq}. For comparison we also show $\mu(r)$ with $\mu(R)=7\times10^5~{\rm km}^{-1}$ and a radial variation in analogy to Eq.~(\ref{eq:muvariation}). \begin{figure}[b] \includegraphics[angle=0,width=0.95\columnwidth]{fig07.eps} \caption{Typical matter density profiles from numerical simulations of the Garching group at the indicated times after core bounce~\cite{Arcones:2006uq}. For comparison we also show our benchmark value $\omega=0.3~{\rm km}^{-1}$ and $\mu(r)$ for a typical case.\label{fig:profiles}} \end{figure} \begin{figure}[b] \includegraphics[angle=0,width=0.95\columnwidth]{fig08.eps} \caption{Evolution for our standard inverted-hierarchy case for a large vacuum mixing angle of $\sin2\theta=0.1$ (blue/dotted lines in both panels), compared with different ways of suppressing the mixing angle (red/solid). Top: ordinary matter effect according to the profile at $t=2\,$s in Fig.~\ref{fig:profiles}. 
Bottom: small vacuum mixing angle of $\sin2\theta=3.35\times10^{-4}$.\label{fig:matterangle}} \end{figure} We observe that for the shown density profiles, the line $\omega$ intersects $\lambda(r)$ at a radius far exceeding the dense-neutrino region that lies within the radius where the $\mu(r)$ profile intersects $\omega$. In other words, the H-resonance is far outside the region of interest except perhaps for very late times. Then, of course, the neutrino luminosity will be much smaller, i.e., the $\mu(r)$ curve would also shift downward and the dense-neutrino region would be limited to smaller radii. The true density profiles may be much lower, especially at late times. This is even required for successful r-process nucleosynthesis. In this scenario an MSW resonance may take place within the dense-neutrino region, a case that was the focus of previous numerical studies~\cite{Duan:2006an, Duan:2006jv}. However, we will always assume that the H-resonance is at larger radii and that neutrino-neutrino refraction and ordinary matter effects do not interfere. What is the impact of a large matter density in the region where neutrino-neutrino effects are important? In the previous literature it was recognized that a constant matter profile essentially reduces the effective mixing angle so that matter should have the same influence as a small vacuum mixing angle~\cite{Duan:2005cp, Hannestad:2006nj}. We illustrate this point in Fig.~\ref{fig:matterangle} with the evolution of $\bar P_z$ for our usual case, but assuming now a large vacuum mixing angle $\sin 2\theta=0.1$. In the synchronization region one can now see oscillations. We overlay this curve with $\bar P_z$, using the matter profile of Fig.~\ref{fig:profiles} at $t=2\,$s. As expected, matter has the effect of slightly delaying the onset of pair transformations and of increasing the depth of the nutation amplitude. 
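The reduction of the effective mixing angle by matter follows from the standard two-flavor in-medium formula; a minimal sketch in Python, where the value of $\lambda/\omega$ is purely illustrative and sign conventions for the hierarchy are glossed over:

```python
import math

def sin2theta_eff(sin2theta_vac, lam_over_omega):
    """Two-flavor in-medium mixing angle:
    sin 2th_m = sin 2th / sqrt((cos 2th - lam/omega)^2 + sin^2 2th),
    with lam the matter oscillation frequency."""
    cos2theta = math.sqrt(1.0 - sin2theta_vac**2)  # assumes 2*theta in the first quadrant
    return sin2theta_vac / math.hypot(cos2theta - lam_over_omega, sin2theta_vac)

# No matter: the vacuum value is recovered
print(sin2theta_eff(0.1, 0.0))    # 0.1

# Dense matter (illustrative lam/omega = 300): strong suppression,
# approaching the asymptotic behavior sin 2th_m ~ sin 2th * omega / lam
print(sin2theta_eff(0.1, 300.0))
```

For $\lambda\gg\omega$ the suppression is simply linear in $\omega/\lambda$, which is why a large constant matter density acts like a small vacuum mixing angle.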
Actually, in the inverted hierarchy, the value of $\sin2\theta$ is only crucial at the onset of the bipolar oscillations. Once the overall polarization vector is tilted away from ${\bf B}$, the initial ``misalignment'' with ${\bf B}$ no longer matters. Therefore, what is crucial for the role of matter is only its density around the region where synchronization ends. The in-medium mixing angle at $r_{\rm synch}$ for the case shown in the top panel of Fig.~\ref{fig:matterangle} is $\sin2\theta_{\rm matter}=3.35\times10^{-4}$, assuming $\sin2\theta_{\rm vac}=0.1$. Using this value of $\theta_{\rm matter}$ as a vacuum mixing angle, with no matter at all, yields the result shown in the bottom panel, again overlaid with the original vacuum case of $\sin2\theta=0.1$. We conclude that indeed we can ignore matter entirely if we account for it schematically by a small vacuum mixing angle, at least in the inverted hierarchy. Moreover, the onset of collective pair transformations is only mildly changed by the choice of mixing angle. Its main impact is that it controls the depth of the nutation pattern. The exact matter profile is only important if it is so shallow that it causes an MSW resonance in the dense-neutrino region, a case that we do not investigate. \subsection{Mixing angle} \label{sec:mixangle} This discussion suggests that, at least for the inverted hierarchy, the actual vacuum mixing angle does not strongly influence the issue of multi-angle decoherence because this effect happens when the global polarization vector is tilted far away from the ${\bf B}$ direction. On the other hand, we have already noted that the quasi-coherent spiral structure that forms just beyond the synchronization radius has more windings for a smaller mixing angle so that the system is not identical. To clarify the role of the mixing angle we have used our standard inverted-hierarchy case and have calculated the limiting asymmetry $\epsilon$ for decoherence for a broad range of mixing angles.
In Fig.~\ref{fig:mixeps} we show, for both hierarchies, the limiting contours in the plane of $\epsilon$ and $\sin 2\theta$ above which multi-angle decoherence does not appear. \begin{figure}[ht] \includegraphics[angle=0,width=0.95\columnwidth]{fig09.eps} \caption{Limiting $\epsilon$ for decoherence as a function of mixing angle for our standard example and both hierarchies.\label{fig:mixeps}} \end{figure} We emphasize that the limiting $\epsilon$ shown in Fig.~\ref{fig:mixeps} has a different meaning for the two hierarchies. As discussed earlier, in the inverted hierarchy, $\bar P$ shortens somewhat even in the quasi single-angle regime. Therefore, as a formal criterion for distinguishing the regions of coherence and decoherence we adopt the requirement that the final $\bar P$ has shortened to less than 0.85. The exact choice is irrelevant because the transition between the quasi-coherent and decoherent regimes is steep as a function of~$\epsilon$. Conversely, in the normal hierarchy, $\bar P$ need not visibly shorten at all as illustrated by the example in the lower left panel of Fig.~\ref{fig:example2}. Therefore, we here demand that $\bar P$ does not visibly shorten in such a picture. We construct the demarcation line by decreasing $\epsilon$ in steps of 0.01 until the polarization vector for the first time shortens visibly. Finding this point requires a significant amount of manual iteration with a modified inner radius and number of angular bins to make sure the result does not depend on these numerical parameters. The error bars represent our confidence range for the true critical value. We conclude that for the inverted hierarchy, multi-angle decoherence is virtually independent of the value of $\sin2\theta$, except that for very large $\theta$ a slightly smaller asymmetry is enough to suppress decoherence. Assuming the presence of ordinary matter, such large mixing angles seem irrelevant, except perhaps at late times.
Either way, it is conservative to assume a small mixing angle and we will use $\sin2\theta=10^{-3}$ as a default value. For the normal hierarchy we find a strong dependence of the critical $\epsilon$ on $\log_{10}(\sin2\theta)$. For a smaller mixing angle it is easier to suppress decoherence. The normal hierarchy is very different from the inverted one in that for a small mixing angle, all polarization vectors stay closely aligned with the $z$-direction unless multi-angle decoherence takes place. Therefore, it is plausible that for a smaller mixing angle, decoherence effects are delayed. \subsection{Energy distribution} \label{sec:energy} \begin{figure*} \includegraphics[width=0.855\textwidth]{fig10.eps} \caption{Final state at a large radius of the polarization vectors for our standard parameters in analogy to Fig.~\ref{fig:footprints1}. The antineutrinos (red/light gray) are on the unit sphere, whereas the neutrinos (blue/dark gray) live on a sphere of radius $1+\epsilon=1.25$. Left: monochromatic multi-angle, the antineutrinos being identical with the left column of Fig.~\ref{fig:footprints1}. Middle: Box-like energy spectrum and single angle. Right: Box-like energy spectrum and multi angle. In the lower right panel we do not show the antineutrinos.\label{fig:footprints3}} \end{figure*} The neutrinos emitted from a SN core naturally have a broad energy distribution. In Ref.~\cite{Raffelt:2007yz} it was noted that the energy distribution of neutrinos and antineutrinos is largely irrelevant for the question of decoherence as long as the oscillations exhibit self-maintained coherence~\cite{Kostelecky:1995dt}. The multi-angle transition to decoherence typically occurs within the dense-neutrino region where the synchronization of energy modes remains strong. Therefore, we expect that multi-angle decoherence is not significantly affected by the neutrino spectrum. 
In order to compare a monochromatic system with one that has a broad energy distribution, the crucial quantity to keep fixed is not the average energy, but the average oscillation frequency $\langle\omega\rangle=\langle\Delta m^2/2E\rangle$. If we assume that neutrinos and antineutrinos have equal distributions, it is straightforward to adjust, for example, the temperature of a thermal distribution such that $\langle\omega\rangle$ is identical to our monochromatic standard case $\omega_0=0.3~{\rm km}^{-1}$. If we assume different distributions for neutrinos and antineutrinos, the equivalent $\omega_0$ is somewhat more subtle. Consider first two different monochromatic spectra: one for neutrinos with a fixed frequency $\omega_1$ and one for antineutrinos with a different frequency $\omega_2$ (``bichromatic system''). Following Ref.~\cite{Duan:2005cp} one can return to a monochromatic situation by going into a reference frame that rotates around ${\bf B}$ with such a frequency that in vacuum ${\bf P}$ and $\bar{\bf P}$ precess around ${\bf B}$ with equal frequencies $\omega_0$, but in opposite directions. The rotation frequency for the corotating frame is $\omega_{\rm c}=(\omega_1-\omega_2)/2$. Therefore, our bichromatic system behaves equivalently to a monochromatic one with $\omega_0=(\omega_1+\omega_2)/2$. It is trivial to show in numerical examples that the bichromatic system is indeed equivalent to a monochromatic one with $\omega_0$ taken as the simple average of $\omega_1$ and $\omega_2$. If we have different distributions for neutrinos and antineutrinos, we define the initial average frequencies by \begin{equation} \langle\omega_\nu\rangle= \frac{\int_0^\infty{\rm d}\omega\,\omega\,P_\omega^z} {\int_0^\infty{\rm d}\omega\,P_\omega^z}\,, \end{equation} and analogously for $\langle\omega_{\bar\nu}\rangle$. The equivalent monochromatic frequency is then $\omega_0=\frac{1}{2}\, (\langle\omega_\nu\rangle+\langle\omega_{\bar\nu}\rangle)$.
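This prescription is easy to check explicitly; a minimal sketch in Python, discretizing the frequency average for a box-like spectrum and for the bichromatic case (the grids and weights are our own illustrative choices):

```python
import numpy as np

def mean_omega(omega, Pz):
    """Initial average oscillation frequency: each mode weighted by its
    z-polarization P_omega^z (discrete version of the integral definition)."""
    return float((omega * Pz).sum() / Pz.sum())

omega0 = 0.3  # km^-1, benchmark vacuum oscillation frequency

# Box spectrum with constant weight on 0 <= omega <= 2*omega0
w = np.linspace(0.0, 2 * omega0, 1001)
Pz = np.full_like(w, 1.0 / (2 * omega0))
print(mean_omega(w, Pz))  # 0.3: equivalent to the monochromatic benchmark

# Bichromatic case: equal weights at omega_1 and omega_2 give the simple average
print(mean_omega(np.array([0.2, 0.4]), np.array([1.0, 1.0])))  # 0.3
```

Note that the weights enter linearly, so a spectrum with partial $\nu_x$ admixture (negative $P_\omega^z$) shifts the average accordingly.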
The initial distribution $P_\omega^z$ can involve negative values if some part of the spectrum initially consists of $\nu_x$ and not $\nu_e$. Such spectral cross-overs occur, for example, if one assumes thermal fluxes with equal luminosities but different temperatures. We have studied several numerical examples of quasi single-angle behavior and of multi-angle decoherence, taking different neutrino and antineutrino energy spectra, such as flat or thermal and with equal or different temperatures. We always found that the evolution of the global polarization vectors is almost identical to the equivalent monochromatic cases. We never observed that a broad energy spectrum caused a significant deviation from the monochromatic behavior at those radii that are relevant for decoherence. Of course, a multi-energy system is qualitatively different from a monochromatic one in that the final energy distribution shows a ``spectral split''~\cite{Duan:2006an, Duan:2006jv, Duan:2007mv, Raffelt:2007cb, Mirizzi2007}. In a single-angle multi-energy system, this means the $\bar\nu_e$ spectrum is completely transformed to the $\bar\nu_x$ flavor, whereas only the high-energy part of the $\nu_e$ spectrum is transformed, the low-energy part remaining in (or rather returning to) the original flavor. The energy $E_{\rm split}$ of this sharp transition is fixed by lepton-number conservation in the sense that the neutrino-neutrino interactions only catalyze the transformation of $\nu_e\bar\nu_e$ pairs. For various examples we find results in full agreement with the previous literature~\cite{Duan:2006an, Duan:2006jv, Duan:2007mv, Raffelt:2007cb, Mirizzi2007}. For sufficiently large asymmetries $\epsilon$ where the multi-angle system evolves in the quasi single-angle mode, there is no significant modification of the spectral split so that it is not worthwhile to show any examples.
In the decoherent case, the final spectra naturally are very different, but we have not explored such cases systematically because multi-angle decoherence does not seem to be generic for realistic SN scenarios. To illustrate the modifications caused by an energy spectrum in a different way from the previous literature, we show in Fig.~\ref{fig:footprints3} the side and top views of the location of neutrino and antineutrino polarization vectors on the unit sphere in analogy to Fig.~\ref{fig:footprints1} for our standard parameter values. In the left column we show the same monochromatic multi-angle case that we already showed in the left column of Fig.~\ref{fig:footprints1}, with 500 modes. In addition we include the neutrinos (blue/dark gray) that here live on a sphere of radius $1+\epsilon=1.25$. The neutrinos form a spiral structure similar to that of the antineutrinos, but in the final state this structure cannot move to the negative ${\bf B}$ direction because of lepton number conservation. In the middle column we show a single-angle example with the same parameters, now using a box-like spectrum of oscillation frequencies where initially $\bar P_{\omega}^z=(2\omega_0)^{-1}$ and $P_{\omega}^z=(1+\epsilon)(2\omega_0)^{-1}$ for $0\leq\omega\leq2\omega_0$ so that $\langle\omega_\nu\rangle =\langle\omega_{\bar\nu}\rangle=\omega_0$ and it is equivalent to the original monochromatic case. We now see that most of the antineutrinos have moved to the negative ${\bf B}$ direction as before, whereas the neutrinos populate both the positive and negative ${\bf B}$ direction, representing the spectral split. The lack of full adiabaticity prevents the split from being complete, leaving some polarization vectors not fully aligned or anti-aligned with ${\bf B}$.
At large radii when the neutrino-neutrino interactions have died out, these modes precess with their different vacuum oscillation frequencies so that they are found on a spiral locus extending from the ``south pole'' to the ``north pole'' that gets wound up further at larger radii. Note that here we have used 1000 energy modes in order to obtain a visible population occupying these non-adiabatic final states. Still, only very few red dots (antineutrinos) are visible, the vast majority being at the south pole. Likewise for the neutrinos (blue dots), the spiral is populated only by a small fraction of the 1000 modes. In other words, the evolution is nearly adiabatic. Finally we combine a box-like energy spectrum and a multi-angle distribution (right panels). The antineutrinos all cluster around the negative ${\bf B}$ direction and fill the ``southern polar cap'' more or less uniformly because at late times modes with different energies precess with different frequencies. The neutrinos populate both the northern and southern polar caps, representing the spectral split. At intermediate latitudes we find coherent spiral structures. They correspond to modes with different angles but equal $\omega$ so that even at late times they do not dissolve by differential precession. \subsection{Effective interaction strength} \label{sec:border} Besides the asymmetry $\epsilon$ itself, the most uncertain model parameter is the effective neutrino-neutrino interaction strength $\mu$ as defined in Eq.~(\ref{eq:mudefine}). In Fig.~\ref{fig:regimes} we show the demarcation lines between coherence and decoherence for both hierarchies in the $\mu$-$\epsilon$-plane, keeping all other parameters at their standard values. The contours are constructed as described in Sec.~\ref{sec:mixangle}. 
The numerical contours are visually very well approximated by linear regressions of the form \begin{eqnarray}\label{eq:epscontour} \epsilon_{\rm IH}&\approx&0.225+0.027\,\log_{10} \left(\frac{\mu}{10^6~{\rm km}^{-1}}\right)\,, \nonumber\\* \epsilon_{\rm NH}&\approx&0.172+0.087\,\log_{10} \left(\frac{\mu}{10^6~{\rm km}^{-1}}\right)\,. \end{eqnarray} For the normal hierarchy, the linear regression would intersect $\epsilon=0$ within the range of investigated $\mu$-values, but in reality turns over and saturates around $\epsilon=0.06$. \begin{figure}[ht] \includegraphics[width=1.0\columnwidth]{fig11.eps} \caption{Limiting $\epsilon$ for decoherence as a function of the effective neutrino-neutrino interaction strength $\mu$ for our standard parameters. The linear regressions are ``visual fits'' represented by Eq.~(\ref{eq:epscontour}).\label{fig:regimes}} \end{figure} \subsection{Vacuum oscillation frequency} \label{sec:omega} The average vacuum oscillation frequency $\omega$ depends on the atmospheric $\Delta m^2$ that is quite well constrained, and a certain average of the neutrino energies. Our standard value is $\omega=0.3~{\rm km}^{-1}$. If we increase this to $1~{\rm km}^{-1}$, the $\epsilon$-$\mu$-contour in Fig.~\ref{fig:regimes} is essentially parallel-shifted to larger $\epsilon$ by about 0.035 (inverted hierarchy). This range of $\omega$ probably brackets the plausible possibilities so that the uncertainty of $\omega$ does not strongly influence the practical demarcation between the regimes. The normal hierarchy is more sensitive to $\omega$. In Fig.~\ref{fig:nhomega} we show a contour for the coherence regime in the $\epsilon$-$\omega$ plane, assuming otherwise our standard parameter values. Changing $\omega$ from $0.3$ to $1~{\rm km}^{-1}$ increases the critical $\epsilon$ by almost~0.15. 
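The fitted contours of Eq.~(\ref{eq:epscontour}) can be packaged into a small helper; a minimal sketch in Python, where the saturation of the normal-hierarchy contour near $\epsilon=0.06$ is imposed by hand as a floor rather than derived from the fit:

```python
import math

def eps_critical(mu_km, hierarchy="IH"):
    """Critical asymmetry epsilon above which multi-angle decoherence is
    suppressed, from the visual linear fits in the mu-epsilon-plane."""
    x = math.log10(mu_km / 1e6)
    if hierarchy == "IH":
        return 0.225 + 0.027 * x
    # NH: linear fit; in reality the contour turns over and saturates
    # around epsilon = 0.06 at small mu, here imposed as a simple floor
    return max(0.172 + 0.087 * x, 0.06)

print(eps_critical(7e5))          # ~0.22 for the benchmark interaction strength
print(eps_critical(7e5, "NH"))    # ~0.16: the NH contour is steeper in log10(mu)
```

The weak logarithmic slope of the inverted-hierarchy contour again reflects that even an order-of-magnitude uncertainty in $\mu$ shifts the critical $\epsilon$ by only a few times 0.01.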
\begin{figure}[ht] \includegraphics[width=1.0\columnwidth]{fig12.eps} \caption{Limiting $\epsilon$ for decoherence as a function of the vacuum oscillation frequency $\omega$ for our standard parameters and the normal hierarchy.\label{fig:nhomega}} \end{figure} \section{Conclusions} \label{sec:conclusions} The nonlinear neutrino transformations that occur in the dense-neutrino region of a SN show numerous novel features. It was noted that multi-angle effects play an important role in that the neutrino-neutrino interaction depends on the relative angles of the various trajectories~\cite{Duan:2006an, Duan:2006jv}. At the same time it was numerically observed that for a typical example the behavior was unexpectedly quite similar to the single-angle case~\cite{Duan:2006an, Duan:2006jv}. On the other hand it was analytically shown that a gas of equal densities of neutrinos and antineutrinos has a pronounced angular instability, and kinematical decoherence between different angular modes in flavor space is fast, representing a stable fixed point of the system~\cite{Raffelt:2007yz}. We have not attempted here to develop further analytical insights, but have taken a practical approach and explored numerically the range of parameters where different forms of behavior dominate in a realistic SN scenario. To this end we have first clarified that ``multi-angle effects'' mean one of two clearly separated forms of behavior. The flavor content of the system can evolve in a quasi single-angle form. On the level of the polarization vectors this means that they fill only a restricted volume of the available phase space and maintain a coherent structure. On the other hand, nearly complete flavor equilibrium can arise where the available phase space is more or less uniformly filled. For realistic assumptions about supernova and neutrino parameters, the switch between these modes of evolution is set by the degree of asymmetry between the neutrino and antineutrino fluxes.
While this asymmetry is caused by the deleptonization flux, the crucial parameter $\epsilon$ is the asymmetry between $F(\nu_e)-F(\nu_x)$ and the corresponding antineutrino quantity as defined in Eq.~(\ref{eq:epsdefine}) because for flavor oscillations the part of the density matrix that is proportional to $F(\nu_e)+F(\nu_x)$ drops out. While in a realistic SN on average $F(\nu_e)$ is about 15\% larger than $F(\bar\nu_e)$, the asymmetry parameter as defined in Eq.~(\ref{eq:epsdefine}) is typically much larger. The critical value of $\epsilon$ that is enough to suppress decoherence depends on the type of neutrino mass hierarchy, the average energies, luminosities, and on the mixing angle. We have found that for $\epsilon\agt0.3$, decoherence is suppressed for the entire range of plausible parameters, but a value smaller than 0.1 may be enough, depending on the combination of other parameters. We conclude that the quasi single-angle behavior may well be typical for realistic SN conditions, i.e., that the deleptonization flux is enough to suppress multi-angle decoherence. To substantiate this conclusion one should analyze the output of numerical simulations in terms of our model parameters. Besides the flavor-dependent luminosities and average energies, one needs the angular distribution of the neutrino radiation field at some radius where collisions are no longer important. If our conclusion holds up in the light of realistic SN simulations, a practical understanding of the effect of self-induced neutrino flavor transformations quickly comes into reach. In the normal mass hierarchy, nothing new would happen on a macroscopic scale. In the inverted hierarchy, the final effect would be a conversion of $\nu_e\bar\nu_e$ pairs and a split in the $\nu_e$ spectrum. These phenomena are only mildly affected by multi-angle effects as long as we are in the quasi single-angle regime. 
If at late times the matter density profile contracts enough that an MSW effect occurs in the dense-neutrino region, the situation becomes more complicated as the neutrino-neutrino and ordinary matter effects interfere and produce a richer structure of spectral modifications~\cite{Duan:2006an, Duan:2006jv}. Even then, numerical simulations are much simpler if multi-angle decoherence is suppressed. It is not obvious how $\epsilon$ evolves at late times. The deleptonization of the core is probably faster than the cooling so that one may think that $\epsilon$ becomes smaller. On the other hand, the $\bar\nu_e$ can essentially only interact via neutral current reactions and their flux and energy distribution should, therefore, become very similar to the ones of $\nu_x$ and $\bar\nu_x$. Therefore, it is not obvious if at late times the initial flux difference $F_{\nu_e}-F_{\bar\nu_e}$ or $F_{\bar\nu_e}-F_{\bar\nu_x}$ decreases more quickly. We also note that there can be a cross-over in the sense that at late times the flux hierarchy can become $F_{\nu_x}=F_{\bar\nu_x} >F_{\nu_e}>F_{\bar\nu_e}$ as in Ref.~\cite{Raffelt:2003en}, meaning that we would have a pair excess flux of $\nu_x\bar\nu_x$ instead of a $\nu_e\bar\nu_e$ excess. We have refrained from an interpretation of our numerical findings because we have not developed a theory of kinematical decoherence for a system that is asymmetric between neutrinos and antineutrinos and where the effective interaction strength varies as a function of time (or here of radius). The absence of multi-angle decoherence seems to follow from a lack of time for it to develop. One can interpret our results such that a more adiabatic decrease of the neutrino-neutrino interaction strength requires a larger asymmetry to suppress decoherence. 
The different length scales of the problem seem to conspire such that the evolution is adiabatic in that sharp spectral splits develop, but not so adiabatic that kinematical decoherence would be typical. An analytic understanding of this conspiracy remains to be found. Our results suggest that signatures of collective flavor transformations are not erased by multi-angle decoherence and will survive to the surface, modulated by the usual MSW flavor conversions~\cite{Dighe:1999bi}. The survival of observable signatures then also depends on the density fluctuations of the ordinary medium that can be a source of kinematical flavor decoherence~\cite{Fogli:2006xy,Friedland:2006ta}. All authors in this field have relied on the simplifying assumption of either homogeneity or exact spherical symmetry to make the equations numerically tractable. The neutrino emission from a real SN is influenced by density and temperature fluctuations of the medium in the region where neutrinos decouple. Likewise, the neutrino fluxes emitted from the accretion tori of coalescing neutron stars, the likely engines of short gamma ray bursts, have fewer symmetries than assumed here. It remains to be investigated if systems with more general geometries behave qualitatively similar to the spherically symmetric case or if deviations from spherical symmetry can provide a new source of kinematical decoherence. \begin{acknowledgments} This work was partly supported by the Deutsche Forschungsgemeinschaft under the grant TR-27 ``Neutrinos and Beyond'', by The Cluster of Excellence for Fundamental Physics ``Origin and Structure of the Universe'' (Garching and Munich), by the European Union under the ILIAS project (contract No.\ RII3-CT-2004-506222) and an RT Network (contract No.\ MRTN-CT-2004-503369), and by the Spanish grants FPA2005-01269 (MEC) and ACOMP07-270 (Generalitat Valenciana). A. E. has been supported by a FPU grant from the Spanish Government. 
SP and RT were supported by MEC contracts ({\em Ram\'{o}n y Cajal} and {\em Juan de la Cierva}, respectively). \end{acknowledgments}
Q: Find file names of Excel files in sub-folders

I am extracting file names from sub-folders. The code below successfully extracts all file names from the sub-folders. However, I would like to extract the file names of xlsx files only. Please assist. Thanks!

    Sub Loop_Folders_And_Get_Files()
        Columns("A:A").Select
        Selection.ClearContents
        Dim i, lastrow1 As Integer
        i = 1
        lastrow1 = Range("C" & Rows.Count).End(xlUp).row
        For i = 1 To lastrow1
            Call GetFiles(Range("C" & i).Value)
        Next i
    End Sub

    Sub GetFiles(ByVal path As String)
        Dim fso As Object
        Set fso = CreateObject("Scripting.FileSystemObject")
        Dim folder As Object
        Set folder = fso.getfolder(path)
        Dim subfolder As Object
        For Each subfolder In folder.subfolders
            GetFiles (subfolder.path)
        Next subfolder
        Dim file As Object
        For Each file In folder.Files
            Range("A" & Rows.Count).End(xlUp).Offset(1, 0) = file.path
        Next file
        Set fso = Nothing
        Set folder = Nothing
        Set subfolder = Nothing
        Set file = Nothing
    End Sub

A: Solution 1

Try this, it should work for getting all Excel files in a folder:

    Sub LoopThroughFiles_01()
        Dim oFSO As Object
        Dim oFolder As Object
        Dim oFile As Object
        Dim i As Integer
        Set oFSO = CreateObject("Scripting.FileSystemObject")
        Set oFolder = oFSO.GetFolder("C:\VBA Folder")
        For Each oFile In oFolder.Files
            If Right(oFile.shortName, 4) = ".XLS" Then
                Cells(i + 1, 1) = oFile.Name
                i = i + 1
            End If
        Next oFile
    End Sub

Solution 2

Try this, it should work for getting all Excel ".xlsx" files in a folder:

    Sub LoopThroughFiles_02()
        Dim oFSO As Object
        Dim oFolder As Object
        Dim oFile As Object
        Dim i As Integer
        Set oFSO = CreateObject("Scripting.FileSystemObject")
        Set oFolder = oFSO.GetFolder("C:\VBA Folder")
        i = 1
        For Each oFile In oFolder.Files
            'Debug.Print oFile.shortName
            'Debug.Print Right(oFile.Name, Len(oFile.Name) - InStrRev(oFile.Name, "."))
            If Right(oFile.Name, Len(oFile.Name) - InStrRev(oFile.Name, ".")) = "xlsx" Then
                Cells(i, 1) = oFile.Name
                i = i + 1
            End If
        Next oFile
    End Sub
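Note that the two answer subs only scan a single folder, while the question asks for recursion through sub-folders. As a side-by-side comparison, here is a minimal Python sketch of the same recursive .xlsx filter (the function name and folder layout are illustrative, not from the original post; `os.walk` does the recursion that `GetFiles` implements by hand):

```python
import os

def xlsx_files(root):
    """Recursively collect full paths of .xlsx files under root."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            # Case-insensitive extension check, like VBA's uppercase short names
            if name.lower().endswith(".xlsx"):
                found.append(os.path.join(dirpath, name))
    return found
```

Each returned path can then be written to a worksheet column, mirroring the `Range("A" & ...)` writes in the VBA version.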
Q: os.listdir prints more files than the command `ls` but fewer than `ls -a`

I'd like to count the commands in path /Users/me/anaconda3/bin:

    In [3]: len(os.listdir("/Users/me/anaconda3/bin"))
    Out[3]: 474

However, when I check with shell commands:

    In [5]: !count=0; for f in $(ls /Users/me/anaconda3/bin) ;do count=$(( $count + 1)); done; echo $count
    470

However, if I check all the files:

    In [17]: ls -a /Users/me/anaconda3/bin | wc -l
    476

What is the reason for the difference?

A: It's easy if you read the documentation of os.listdir:

    Return a list containing the names of the entries in the directory given by path. The list is in arbitrary order, and does not include the special entries '.' and '..' even if they are present in the directory.

That means os.listdir always returns two entries fewer than `ls -a`, which does count '.' and '..':

    len(os.listdir(path)) == count of `ls -a` minus 2

In your case, 474 = 476 - 2. Plain `ls` differs in the other direction: it omits hidden entries (names starting with '.'), which is why it reports only 470.
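The relationship can be checked in a self-contained way with a temporary directory instead of the asker's anaconda path (the file names here are arbitrary examples):

```python
import os
import tempfile

# Build a directory with two regular files and one hidden file.
d = tempfile.mkdtemp()
for name in ("a.txt", "b.txt", ".hidden"):
    open(os.path.join(d, name), "w").close()

entries = os.listdir(d)          # excludes the special entries '.' and '..'
ls_a_count = len(entries) + 2    # what `ls -a` would report
ls_count = sum(1 for e in entries if not e.startswith("."))  # plain `ls` skips dotfiles

print(len(entries), ls_a_count, ls_count)  # 3 5 2
```

The three counts reproduce the 474 / 476 / 470 pattern from the question in miniature.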
The Bansko mineral waters (Банските минерални води) issue from two mineral springs and one drilled well at the southern edge of the Razlog hydrothermal basin. They are located southwest of the town of Bansko, at about 1050 metres above sea level, in the locality of Martva Polyana. The formation of the Bansko mineral waters is connected with the karstified marbles of Northern Pirin. The total discharge of the Bansko mineral waters is about 60 litres per second, and their temperature is about 17 °C. Chemically, the water is hydrocarbonate and silicon-magnesium, and is weakly mineralized. Notes Geography of Blagoevgrad Province Springs in Bulgaria Bansko Pirin
Medtronic MiniMed Insulin Pump Lawsuit

MiniMed pumps are small drug delivery devices which provide a supply of insulin to diabetics as part of their disease management. On November 21, 2019, the FDA issued a safety alert notifying the public and medical professionals of the new MiniMed Insulin Pump recall.

**We are no longer accepting new clients**

The U.S. Food and Drug Administration (FDA) has announced that device maker Medtronic has issued a Class I recall of their Medtronic MiniMed Insulin Pumps over a potential for incorrect insulin dosing. The recall may affect an estimated 322,005 insulin pumps in the U.S. Affected insulin pumps may have a missing or broken retainer ring which normally helps to lock the insulin cartridge containing the supply of insulin into place. If the cartridge is not locked properly, the pump may over deliver or under deliver insulin, which may result in a serious medical condition of hypoglycemia or hyperglycemia. Either condition may be serious or life-threatening. The FDA has said that the pumps have been involved in 26,421 complaints involving 2,175 injuries and one death. Many people may be considering filing a MiniMed Insulin Pump lawsuit against Medtronic to seek compensation for their injuries.

Medtronic Current Recall

On November 21, 2019, the FDA issued a safety alert notifying the medical community and public about a Class I recall issued by medical device maker Medtronic. A Class I recall indicates a high potential for serious injury or death. The recall was based on a potential for inaccurate dosing which may result in an overdose or underdose of insulin. This may occur due to a missing or broken retainer ring which fails to lock the insulin reservoir or cartridge in place. If not locked securely, the device may over or under deliver insulin to the patient. Insulin overdosage may cause hypoglycemia, while underdosage may cause hyperglycemia. 
The recall affects an estimated 322,005 MiniMed 600 series insulin pumps in the U.S., including:

Model 630G (MMT-1715) manufactured before October 2019
Model 670G (MMT-1780) manufactured before August 2019

The MiniMed model 630G pumps were approved for use in persons 16 years of age or older, and 670G pumps were approved for use in patients over the age of 7, indicating that Type 1 diabetics who are children, adults or elderly and who are using MiniMed 600 series pumps may be at risk.

FDA MiniMed Recall Instructions

Users of the affected pumps were advised to:

Examine the retainer ring of their pump
Stop using the pump if it appears to be damaged, loose or missing
Stop using the pump if the reservoir cartridge does not lock into place
If pump use is discontinued, follow the doctor's instructions for manual insulin dosing
If the pump is dropped, check the pump and retainer ring for damage
Check the pump retainer ring and ensure the cartridge is properly locked at every set change

Symptoms of hypo- or hyperglycemia should be reported to a healthcare professional right away.

Signs of hypoglycemia may include dizziness, confusion, weakness, irregular heart rhythm, seizures, loss of consciousness, death
Signs of hyperglycemia may include extreme thirst and fatigue, nausea and vomiting, shortness of breath, weakness, loss of consciousness, coma and death

Previous Medtronic Recalls

The November 2019 Medtronic insulin pump recall is only the latest in a history of recalls involving MiniMed insulin pumps. 
Past recalls have included:

In November 2019, the FDA issued a recall for certain MiniMed pumps with remote controllers over concerns about the cybersecurity of the controller
In June 2019, the FDA warned about a possible risk of hacking involving the MiniMed pumps after it was discovered that the devices' programming may lack cybersecurity protections
In September 2017, Medtronic issued a recall for the MiniMed Infusion pumps due to cases of hypoglycemia which had involved a blocked membrane
In September 2015, Medtronic recalled the MiniMed 620G and 640G pumps due to a problem with the drive motor in the pump, along with a separate recall over a malfunctioning timer unit
In September 2014, Medtronic recalled the MiniMed Paradigm insulin pump after programming errors resulted in patients receiving the maximum insulin dose
In June 2013, the company recalled MiniMed Paradigm infusion sets after it was discovered that fluid entering tubing vents could prevent the pump from priming properly. The following month, the Paradigm reservoirs were recalled for a similar problem
In 2009, Medtronic recalled MiniMed Quickset infusion sets for the Paradigm pump after a manufacturing defect was discovered that could result in incorrect insulin delivery

The current recall concerns a newer type of pump, designed to be used as part of a "closed-loop" system which automatically detects blood sugar levels and delivers insulin without the user's intervention. The automated system was approved as the MiniMed 670G in September 2016, the first "artificial pancreas" type pump. Its use was expanded to users over the age of 7 years in June 2018. The 630G was first approved in 2016 and included an "automatic suspend" for glucose levels that were too low, but it was not a "fully automatic" system as insulin doses were preset. The 630G was expanded to include a "SmartGuard" system in June 2017 with an additional sensor. 
Filing a MiniMed Insulin Pump Lawsuit

Even though Medtronic's technology continues to advance, serious risks have continued to occur. These risks may pose a threat of severe or permanent injury or even death to users. Some people who were injured by a malfunctioning Medtronic MiniMed Insulin Pump may be considering lawsuits to seek compensation for their injuries. People may be seeking compensation for:

Future medical costs

FDA says Medtronic MiniMed insulin pump recall is serious, MassDevice (02/2020)
Medtronic Recalls MiniMed Insulin Pumps for Incorrect Insulin Dosing, U.S. Food and Drug Administration (11/2019)

Notwithstanding claims relating to this product, the drug/medical device remains approved by the U.S. FDA.
BCN Faces: No way back (EN) Who are the people in Barcelona? What topics interest them? What are their passions? How is life in Barcelona behind the tourist glitter? Under the headline: "BCN Faces" I will try to put a face to some of the people living in this amazing city. It is their lives, opinions, values, claims, etc., that appear under this headline, not necessarily mine or a general representation of all people living in Barcelona. These posts are theirs, completely. Sometimes with photos, other times without if they and I consider it the best solution. Please meet Anna and Mariona. I met the two women in front of the Estación de Francia, the day Catalonia's independence was declared but not legally implemented for fear of the Spanish army being sent in. They are two ordinary women of my age who live in a smaller town outside of Barcelona. They are well-educated, working, married, mothers, and live a life like most European women except that they are involved in Catalonia's struggle for independence. Both are born and raised in Catalonia and they are first and foremost Catalans, but like all Catalans they have friends, acquaintances and family with Spanish roots. Nevertheless, they are involved in the struggle for a Catalan republic and it is no longer just a political struggle, it has also become personal. Earlier, the dream of an independent Catalonia was perhaps a utopia, but now – in the light of recent actions taken by the central administration in Madrid – it has become a clear objective. As they say: "No hay vuelta atrás" – there is no way back. The road ahead must be peaceful and democratic. Naturally, it's the Catalan way. They – and all Catalan separatists – do not want independence with "blood", but with peaceful means. Independence must be realized through dialogue, negotiation and peace, and it will be a continuous process. And as they say there is a thereafter, as in a divorce. 
The two countries will be living side by side, and they will have to work together. Besides, the two countries also have a lot in common.

The negative framing of Catalonia

Both the starting point and the driving force of their struggle is the indignation over the treatment of Catalonia and the Catalans by the government and the administration in Madrid. Part of the treatment is the constant negative framing of Catalonia and the Catalans, a framing with roots in the Franco dictatorship. A framing which means that many Spaniards have a negative and often untrue image of Catalonia. Mariona experienced an example of this during a family holiday in Spain, where a Spaniard was very surprised that her son could understand Spanish. Many Spaniards have the perception that the Catalans do not understand Spanish and that the children do not learn the language at school. This stems from a negative framing implying that the Catalans do not really want to speak Spanish or want their children to speak it. Catalan may be their mother tongue, but in reality, most Catalans are bilingual and can effortlessly speak the two languages, just as their children learn both Spanish and Catalan at school. Recently, the negative framing and the pressure on Catalonia have increased. Helped by an uncritical press, the region and its population are being framed, both in the media and by Spanish politicians, especially from the government, as radicals and extremists, and the demonstrations are described as violent. But why are there children at the demonstrations if they are violent? According to the government, the children are not a sign of peaceful demonstrations, nor have they simply come along with their parents; to the government, their presence is a sign of Catalan indoctrination of the children. "How can they sink that low? Do they have no shame?", Anna and Mariona ask themselves. It makes them both sad and angry: "How can they say such things and still wish us to stay in Spain? And why should we stay?". 
With deceptions and direct lies about what is going on in the region, such as the examples above, the Catalans are framed as the villains. Another recent example of this framing is the statement that the region "was not prepared for independence". A statement originally from the Catalan side, which has been taken out of context by the government and "translated" to mean that the region was not prepared for independence because of a lack of planning and management of the independence process. In fact, the original statement reported that Catalonia was not prepared for the violence that arose with the intervention of the Spanish police in the elections on 1 October. The Catalans did not foresee the violent behavior of the police. They knew there would be obstacles and that the consequences might be fines etc., but they were unprepared for the violence. But stating that they were not prepared for the process towards independence, as the government puts it, is another deception, another lie. This kind of framing is often not challenged; it remains as the truth. The press does not report everything, nor always truthfully.

The treatment of the Catalan autonomy

Besides the negative framing, Catalonia is also experiencing political obstruction, according to Anna and Mariona. For several years, Catalonia has asked permission to hold elections, like the referendum in Scotland, to uncover the position of the Catalans towards independence, not least because a large part of the Catalan population wishes to remain a part of Spain. But the government has declined. The government does not want a referendum, or even to talk about it. Anna and Mariona wish that all Catalans would vote, not only the yes-voters but also those who vote no. For the women it's all about being allowed to express an opinion, it's about democracy. There have also been negotiations concerning the economic relations between Catalonia and the Spanish state, the so-called pacto fiscal. 
Catalonia pays taxes, which go to the treasury, and later a smaller amount returns to Catalonia. It has been the region's wish to renegotiate that part, but an agreement has not been possible. Catalonia is therefore suffering from a lack of investment in infrastructure, while investments in infrastructure are being made elsewhere in Spain, and not always appropriately, as examples of empty airports show. There is an air of corruption and favors surrounding the government's financial dispositions and investments, which doesn't reduce Catalan dissatisfaction. In addition, Catalonia has experienced cuts in the social, educational and cultural areas under the current government. Since 2010, the Catalan autonomy has also experienced a loss of powers, and obstacles have been placed in the way of legislative work in the region, especially in recent months. The laws that have been approved by the Catalan Parliament must also be approved by the administration in Madrid, which has resulted in many laws being declined. From the government in Madrid, this has been framed as the Catalan Parliament working only for independence, but in fact hundreds of approved laws concern matters other than independence, and the Spanish state has not approved them either. And those are laws that elected Catalan politicians have voted for. Again, according to Anna and Mariona, it is a part of the negative framing trying to create a bad image of Catalonia. And the framing has increased lately. Following the referendum on October 1, a law was swiftly approved – in just two days – in the Spanish Parliament allowing Spanish companies to move headquarters quickly. A law that was then followed up with private phone calls to companies headquartered in Barcelona and Catalonia with requests to move, which many companies have chosen to do. The king has also played a role on the government's side. Perhaps because Catalonia wishes to become a republic. 
Anna and Mariona point out that the speeches of the king and the prime minister are very similar in both content and choice of words, so you might suspect them of coming from the same place.

A modern witch hunt

Legally, there isn't any help either. The legal system has put the pressure on, and a large part of the Catalan politicians who have worked for independence have been imprisoned for revolt, etc. But not only politicians, also leaders of various Catalan organizations, so-called cultural activists such as "los Jordis". And how the elected Catalan politicians and leading cultural figures have been treated makes Anna and Mariona angry and hurt. The various politicians have been taken away in handcuffs, under insults, thrown into the back of cars and driven away – still in handcuffs and without safety belts on – directly to each prison, where they have been detained. Some of them had to start their legal declarations without their lawyer present, and some of them were sent to prison without the possibility of posting bail. A very hard treatment, according to Anna and Mariona, as if the Catalan politicians were terrorists. A treatment not worthy of politicians, who are elected by the people, and a treatment much harder than the treatment experienced by the many corruption-implicated politicians from the government party. The treatment of the Catalan politicians reminds the women of a witch hunt. A feeling that is not diminished by the sense that there isn't a clear line between the legislative and judicial powers. "How can the government say that Catalan politicians are criminals when they themselves constantly have court cases?", ask the women, and emphasize that the Catalan politicians have a democratic mandate from the people to work for independence. But the "witch hunt" is not only aimed at the top, at the big fish. There are also examples of the system investigating the smaller fish. 
As an example, Anna and Mariona refer to 8 teachers being reported to the police, after which they have been called in for questioning and have faced accusations. For what? For indoctrination, for talking to the children in school after the election on October 1st. They also mention that 200 mayors who traveled to Belgium to show their support for the Catalan President Carles Puigdemont have been asked to justify their expenses, to verify whether public funds were used for the journey. What has happened, especially to the political system in Catalonia, has affected the women deeply. "If the politicians are going to prison, we can go there too?" But they are not afraid. For the politicians who have subsequently been released on bail, the bail has been raised through private donations.

The inheritance after Franco

The recent events show Anna and Mariona that there is still a legacy left from the Franco era; the roots are still there, as the treatment of Catalonia and the Catalan politicians proves. But there is also a system that does not seem completely democratic. Media, if not directly governed, are often influenced by the state apparatus as well as the monarchy and the judicial system. So are the use and choice of words, and the way in which Spain is regarded as a unit. In reality, according to Anna and Mariona, Spain is composed of different countries and regions, each with their own language and culture. Thus, there is not only one Spain, but the Spanish state wants the different regions to be silent and submissive under the idea of "one Spain", again a perception with roots dating back to the Franco era. An idea that is also expressed in the Spanish constitution, which was written while Franco was still warm, as the women express it, and under the supervision of the army and the former system. But, as the women say, things have changed. "And what about human rights? Should they not stand above the Constitution?" 
The women feel that the rights have been neglected lately. It also feels like democracy has been neglected. Now, the government has put Article 155 of the constitution in force, which means that the government has taken over the administration of the region until the elections on 21 December. In practice, according to Anna and Mariona, the smallest party in the Catalan Parliament is now ruling Catalonia. "And it goes against what the Catalans have voted, is that democratic?" Article 155 affects daily life in the administration of the region, as everything now must pass through the central administration in Madrid, with subsequent delays, also in relation to finances, payments, etc. And then the announced elections in December. Strange elections, but "we vote, for in Catalonia we love to vote". But will it change anything? For the women, that is hard to see. Perhaps some Catalans are tired of it all or scared; on the other hand, others might have moved towards independence. So, all in all, it does not change anything, and then, what's next? A new application of Article 155, and maybe new elections, and then a new application of Article 155, etc. The government might think that they can stop the process by sending all the "heads" to prison, but as Anna and Mariona point out, it is the will of the people, so there is no way back. Well, that is hard for the women to imagine. They want a happy divorce, but right now they find it hard to picture a solution. But part of the solution must be a change of attitude: recognizing that there are differences in Spain, and stopping the oppression of them. In addition, it is necessary to uncover the position of the Catalan people, that is, to uncover whether there is a majority for independence or not, in order to be able to find the right solution to what the media call the "Catalan question". But, as said, the struggle for independence has become more personal than political. 
For the women it has reached the point where they can't take any more. "When they come to my house to beat my people, then they do not love us," and therefore the emotional attachment to Spain has ended for the two women. As they say: "Before you could live with Spain, now we have had enough, they do not let us do anything". But both women recognize and respect that Catalonia also has a large part of the population who wants to belong to Spain, so if there is no majority for independence, then Catalonia must live with it. "What happens in the ballot box must be respected". But a solution is high on the wish list. Many families are divided on the question, especially across generations. Generally, the younger generation is more political, and they are not afraid like the generations before them, who lived under the dictatorship. They plan and see the opportunity; for them it's not a utopia. They get information, etc. from social media, and they act. The demonstrations have also grown in numbers and show a wide section of the Catalan population. The best solution, according to the women, would be international mediation, but that is rather difficult when the EU views Catalonia as an internal Spanish matter. So far, the government has not wanted mediation, even though Catalonia has suggested it. The women can't understand why they do not want to mediate: "If the government really wants unity, why do they not want mediation?". They therefore hope and wish that the countries and populations abroad will listen and seek to understand the situation. For in the end, Catalonia and the Catalans only wish to be recognized and respected for being who they are: Catalan. 
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,568
{"url":"https:\/\/www.elab.tecnico.ulisboa.pt\/wiki\/index.php?title=World_Pendulum&oldid=3862","text":"# Description\n\nSoyuz lift-off from French Guiana @ 5\u00ba north of the Equator .\n\nRockets are launched to space from equatorial latitudes. This is due to the fact that the apparent weight of objects is gradually reduced from the poles to the equator. We will feel lighter at the equator than at the poles!\n\nThis small difference in apparent weight allows the same rocket to launch heavier payloads into orbit if launched nearer from the equator. For example, a Soyuz rocket launching into geostationary orbit from the French Guiana (5\u00baN) can carry 3 tons while it will only be capable of launching 1.7 tons of cargo when launched from Baikonur, Kazakhstan (46\u00baN).\n\nThe goal of this experiment is to find the value of the gravity \"constant\" through a constellation of pendulums placed in various latitudes and remotely operated, through the internet, by anyone.\n\nIt is expected that CPLP countries can contribute to this effort, bringing students, teachers and interested citizens closer together.\n\nThere are two different activities occurring simultaneously: (i) access, through e-lab, of the pendulums located in different latitudes and (ii) the construction and local operation in schools or at home.\n\nLisboa, Ilh\u00e9us, Faro e Rio de Janeiro were the first cities to contribute to the network in January 2013, making it possible for the first fits of experimental data to the theoretical equation within our project that describes how gravity changes with latitude to occur.\n\n\u2022 Video Faro: rtsp:\/\/elabmc.ist.utl.pt\/worldpendulum_ccvalg.sdp\n\u2022 Video Lisboa: rtsp:\/\/elabmc.ist.utl.pt\/worldpendulum_planetarium.sdp\n\u2022 Video Ilh\u00e9us: rtsp:\/\/elabmc.ist.utl.pt\/worldpendulum_ilheus.sdp\n\u2022 Video Rio Janeiro: rtsp:\/\/elabmc.ist.utl.pt\/worldpendulum_puc.sdp\n\u2022 Video Maputo: 
rtsp:\/\/elabmc.ist.utl.pt\/worldpendulum_maputo.sdp\n\u2022 Video S\u00e3o Tom\u00e9: rtsp:\/\/elabmc.ist.utl.pt\/wp_saotome.sdp\n\u2022 Laboratory: World Pendulum in elab.ist.utl.pt\n\u2022 Control room: Choose a location\n\n# Experimental apparatus\n\nThe pendulum design used was based in Dr. Jodl's design[1]. Some minor changes were made to allow the same design to be easily replicated in high schools. The data concerning each pendulum follows:\n\nPendulum used for the world pendulum standard gravity experiment.\nPendulum string support to avoid elongation errors. The cable is fixed by soldering it into a brass M4 screw 40mm long.\nStandard launcher of the pendulum mass for the World Pendulum Alliance (WPA). This launcher uses a V-slot rail technology and it is characterized by a maximum horizontal launching distance of 250 mm.\nPhysical sizes by place\nPlace Latitude Longitude Altitude (m) Cable length (mm) Sphere diameter (mm)\nCCV_Algarve\/Faro 37\u00ba00'N 7\u00ba56'W 10 2677 +\/- 0.5 @23\u00baC 80.5 +\/- 1.0\nUESC\/Ilh\u00e9us 14\u00ba47'S 39\u00ba10'W 220 2705 +\/- 0.5 @23\u00baC 80.5 +\/- 1.0\nLisbon 38\u00ba41'N 9\u00ba12'W 20 2677 +\/- 0.5 @19\u00baC 80.5 +\/- 1.0\nMaputo 25\u00ba56'S 32\u00ba36'E 80 2609.8 +\/- 0.5 @27\u00baC 80.5 +\/- 1.0\nS\u00e3o Tom\u00e9 0\u00ba21'N 6\u00ba43'E 50 2756.5 +\/- 0.5 @29\u00baC 81.8 +\/- 0.5\nPrague - CTU 50\u00ba5.5'N 14\u00ba25.0'E 150 2850 +\/- 0.5 @25\u00baC 80.1 +\/- 0.5\nBarcelona - UPC 41\u00ba24.6'N 2\u00ba13.1'E 55 2756.5 +\/- 0.5 81.8 +\/- 0.1\nRio de Janeiro - PUC 22\u00ba54.1'S 43\u00ba12'W 50 2826,0 +\/- 0.5 81.6 +\/- 0.1\nPraia - UniCV 14\u00b056'N 23\u00b031'W 40 2826,0 +\/- 0.5 81.6 +\/- 0.1\nBogot\u00e1 - UniAndes 4\u00b036'N 74\u00b03'W 2650 2815,3 +\/- 0.5 82.0 +\/- 0.1\nPanama city - UTP 9\u00b01.3'N 79\u00b031.9'W 82 2825 + \/- 0.5 @28\u00baC 81.9 +\/- 0.1\nSantiago - UChile 33\u00b027.5'S 70\u00b039.8'W 552 2825 +\/- 0.5 @27\u00baC 81.9 +\/- 0.1\nValparaiso - UTFSM 33\u00b01'S 71\u00b037'W 30 
2827.5 +\/- 0.5 @28\u00baC 81.8 +\/- 0.1\nPanama city - USMA 33\u00b01'S 71\u00b037'W 30 2800.0 +\/- 0.5 @35\u00baC 81.8 +\/- 0.1\nBrasilia - UnB 15\u00b0 46'S 47\u00b0 52'W 1034 2826.8 mm +\/- 0.5 @26\u00baC 81.4 +\/- 0.1\nMarseille - ECM 43\u00b020.6'N 5\u00b026.2'E 162 2811.0 mm +\/- 0.5 @22\u00baC 82.0 +\/- 0.1\n\nTypical quantities\nString length (not counting the sphere) min: 0.5 m nominal: 2.8m max: 12m\nSphere mass 2kg +\/- 75g\nSphere diameter 81.2mm +\/-1.5mm\nString Remanium(r) - Stainless steel (Nickel chromium)\n\n- 0,4mm\n\nModulus of elasticity of string ~200GPa\nOscillation period measurement system Microprocessor with 7,3728MHz - 30ppm crystal\n\n+ laser + PIN photodiode\n\nWire CTE (25-500\u00baC) (Coefficient of thermal expansion) ~14 x 10-6 K-1\n\nPenulum length limits*\nMinimum ~1.5 m\nMaximum virtually no limit (~63.5 m)\n\n*These limits were estimated for the standard World Pendulum Alliance launcher (WPA). A photo of a standard WPA launcher is shown on the figure on the right. Check Pendulum length limits to understand these limits were obtained.\n\nThe experimental apparatus can be easily adapted to human operation, using a good chronometer, for local execution. The stainless steel structures can made in by brass or bronze for easier machining. The cable used can be replaced by a sport fishing steel cable and the mass can be replaced by a Olympic weight throw training weight, weighing 2Kg. A calibrated measuring tape should be used to measure the cable length, a few days after assembling the apparatus to allow for cable expansion.\n\n# Local partners\n\nThe pendulum[2], although one of the simplest systems commonly studied, is one of the richest in terms of physics.\n\nIn order to build a precise pendulum the most important factors are the precise measurement of the length of the cable, its quality, and of that of the pendulum supports. 
Selecting a mass between 1 to 4 Kg ensures that the pendulum's period error will be small enough for small local gravity changes (smaller than 0.1%) to be detectable, as long as a precise chronometer is used for timekeeping.\n\nA local apparatus can be assembled using readily available materials and the local \"g\" values determined using such an apparatus can then be compared to the ones obtained through the remote pendulum constellation and the theoretical model.\n\nCollecting this data through a social network will allow a more precise description of how \"g\" varies around the globe. The \"World Pendulum\" can be an important collaborative network for the dissemination of physics in schools.\n\nInstructions on how to build such a pendulum are available in Precision Pendulum. The documentation of the development and construction of a pendulum are available in Precision Pendulum while the instructions on how to assemble it are available in Precision Pendulum Assembly.\n\n# Physics\n\nDetermining gravity's acceleration in different parts of the globe raises questions about the importance of models in physics. It's possible to show that gravity's acceleration at sea level changes with latitude, and therefore a correction is needed for each individual location. This process allows us to demystify science and correct the existing \"urban myth\" around some physical constants that only are truly constant when some approximations are done. In this particular case, we will show how the introduction of successive corrections to gravity's \"constant\" will lead to values closer to those experimentally obtained.\n\n## Geophysical model\n\nThe starting point is the commonly used, constant, value of 9.81 ms-2. This is obtained by considering the Earth as being (i) a sphere (ii) that is not rotating. It's trivial to note that this model, due to the symmetry of the spherical form, does not allow for different values in different locations. 
This changes as soon as Earth's rotation dynamics and ellipsoid shape (flattening of the poles) are taken in account. These factors allow for gravity to change with latitude, and in fact these two factors are the two most important ones in this phenomena, outweighing every other effect, such as (i) altitude, (ii) tidal forces, and (iii) subsoil composition.\n\nTo demonstrate these finer aspects, gravity's acceleration must be determined in various latitudes around the globe distant from each other. Using the data collected, students can then ask themselves about how \"constant\" the value truly is and improve their intuition of gravity.\n\n### Experimental studies\n\n#### Variation with latitude\n\nAs seen, the first possible study consists of using the remote pendulums to obtain a measurement of the local gravity acceleration for each location they're based in. Through considering (or not) several factors, it is possible to fit the data to a experimental description of the Earth using spherical harmonics (equation \\eqref{harmonica-esferica}). This experimental work can be conducted using e-lab's pendulum constellation and our partner's pendulums.\n\n#### Local determination\n\nFollowing the instructions available in this wiki - Precision_Pendulum - or using any other kind of design that results in a rigorous apparatus, a local pendulum is built. It's then possible for measurements of local gravity to be made, as long as a good chronometer is used. Furthermore, it's also possible to contribute to the enrichment of the World Pendulum network's spreadsheet.\n\n#### Tidal study\n\nUsing an almanac appropriate for the location, on can obtain the times of particular Moon\/Sun alignments (full moon, new moon, waxing crescent and waxing gibbous). Plotting a graph spanning several months, one can try to verify and quantify the influence of tidal forces and Moon\/Sun alignments in the apparent weight. 
It's possible to try and verify the correlation between Moon phases and changes in measurement of local gravity, by making a month or year-long study. Tidal effects are on the limit of detection by the pendulums of the e-lab constellation. For the experiment to be successful, it's necessary to be very rigorous on the time at which the experimental runs are made and some advanced numerical techniques, like the Fourier transform, need to by employed for the signal to be extracted from the data.\n\n#### Analysis of wire torsion\n\nEffect of wire torsion and sphere ellipticity in the measurement of pendulum speed.\n\nThose paying more attention will note that the speed of the mass changes due to wire torsion and due to the mass not being a perfect sphere. This is pictured in the image to the right. The pendulum can be studied taking into account the effect of the wire torsion (the use of Euler-Lagrange equations is recommended for this).\n\n## Uniformly accelerated circular movement\n\nThe speed of the sphere in the lowest point of the trajectory is determined by measuring how much time the laser beam is interrupted. Knowing the sphere diameter, it's trivial to determine the speed at the origin. From this, the maximum kinetic energy can be calculated and the launching height of the pendulum determined. The calculated launching point can then be compared with the real one.\n\n# Latitude provideres\n\nThe gravitational constant plotted against latitude with points of interest around the globe highlighted. Principe Island is just over zero latitude. Lisbon value was obtained with the current experiment and already over plotted on the graphic.\n\nLanguage is an important nationality factor (\"My fatherland is the Portuguese language.\", F. Pessoa) and a simple way to define what is called brother countries (\"pa\u00edses irm\u00e3os\"). Only four languages are disseminated around the world, Portuguese being one of them. 
The Portuguese speaking community covers latitudes from ~30S to ~40N, almost a 75\u00ba span across the equator. Therefore, CPLP countries can help by being \"latitude providers\" (see Figure).\n\nTo conduct this world experiment, at least four spaced points are needed in order to have a proper fit. But due to the strong non-linearity of the equation, more points are needed to provide a suitable adjustment, in particular on the \"knee\" close to the earth\u2019s equator. Brazil itself can provide almost four crucial points close to the equator (e.g. Recife 8\u00ba, Salvador \u2013 12\u00ba, Rio de Janeiro \u2013 23\u00ba, Porto Alegre \u2013 30\u00ba) but lacks points with a latitude where the characteristic varies more strongly, the almost linear region around 30\u00ba to 60\u00ba, where Portugal can give two good points (e.g. Porto - 37\u00ba and Faro - 41\u00ba). Mozambique can contribute with 27\u00ba (Maputo) and S. Tom\u00e9 e Principe or Brazil are both good choices to cover the equator. Angola could give complementary points to those acquired in Brazil, as the sensibility of the measurement is more pronounced close to the equator and the poles.\n\n# Data fitting\n\nAvailable references [2] [3] [4] [5] [6] [7] give a very good description of the mathematical model needed to fit the data. If all major factors are taken into account, gravity as a function of latitude is given by:\n\n$g_{n}(\\varphi) = 9.780 326 772\\times[1 + 0.005 302 33 \\cdot sin^{2}(\\varphi) - 0.000 005 89 \\cdot sin^{2}(2\\varphi)]$\n\nwhere $$\\varphi$$ is the latitude. This expression is one of the best experimental approximations and results from the standardization agreement to adjust the World Geodetic System datum surface (WSG84) to an ellipsoid with radius r1=6378137m at the equator and r2=6356752m polar semi-minor radius. 
This formula takes into account the fact that the Earth is an ellipsoid and that there is an additional increase in the acceleration of gravity when one moves nearer to the poles, due to a weaker centrifugal force. Nevertheless the students could derive a non-harmonic, first order approximation by taking into account only centrifugal force. Then, as a second step, they could include the two other major errors, the centrifugal force and earth\u2019s ellipsoid format.\n\nThe variability of the period with elapsed time (angle amplitude < 7,5\u00ba), showing that this error is less than 0,05% regardless initial amplitude.\n\nThe pictures shows the expected deviation from the \u201cearth\u2019s constant acceleration\u201d, the real acceleration for each latitude. We have plotted the point already obtained with this apparatus in Lisbon and the marks over the expected latitudes for future partners. Of course these approximations do not include one important source of deviation from real data to the mathematical model, the experimental error, as we do not include the experimental source of error. However, those systematic errors could be under the expected precision needed (0,1%) for the former approximation if a careful design of the apparatus is considered. Nevertheless those errors must be discussed in advanced courses and their weight must be proved when considering the real pendulum.\n\n# Historical notes\n\nThe pendulum importance as the basis of clocks and chronographs was only overthrown when the Royal Society convinced the English parliament to create an award, ranging from 10k\u00a3 to 20k\u00a3 (equivalent nowadays to more than 3.5M\u20ac), for the invention of a chronograph that didn't depend on it. 
The time precision of pendulum based systems is only bettered by modern electronic systems.\n\nIn the discovery age longitude was determined with a high error, since clocks and chronographs were reliant on pendulums and these were very sensitive to ships rocking, suffering changes in frequency or even stopping. Local longitude was calculated by comparing the solar hour (or stellar hour) with the ship's clock time.\n\n# References\n\n1. World pendulum\u2014a distributed remotely controlled laboratory (RCL) to measure the Earth's gravitational acceleration depending on geographical latitude, Grober S, Vetter M, Eckert B and Jodl H J, European Journal of Physics - EUR J PHYS , vol. 28, no. 3, pp. 603-613, 2007\n2. Physics for scientists and engineers, 5th edition, Hardcourt College Publishers, R.Serway and R. Beichner, 2000\n3. http:\/\/rcl-munich.informatik.unibw-muenchen.de\/\n4. Nelson, Robert; M. G. Olsson (February 1987). \"The pendulum - Rich physics from a simple system\". American Journal of Physics 54 (2): doi:10.1119\/1.14703\n5. Pendulums in the Physics Education Literature: A Bibliography, Gauld, Colin 2004 Science & Education, issue 7, volume 13, 811-832 (http:\/\/dx.doi.org\/10.1007\/s11191-004-9508-7)\n6. The exact equation of motion of a simple pendulum of arbitrary amplitude: a hypergeometric approach, M I Qureshi et al 2010 Eur. J. Phys. 31 1485(http:\/\/dx.doi.org\/10.1088\/0143-0807\/31\/6\/014)\n7. A comprehensive analytical solution of the nonlinear pendulum, Karlheinz Ochs 2011 Eur. J. Phys. 
32 479 (http:\/\/dx.doi.org\/10.1088\/0143-0807\/32\/2\/019)","date":"2022-09-25 05:53:47","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5995157361030579, \"perplexity\": 2532.183177391991}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-40\/segments\/1664030334514.38\/warc\/CC-MAIN-20220925035541-20220925065541-00656.warc.gz\"}"}
null
null
Ninewells offers the perfect location for families; whether you already have an established family or are looking to grow. Houses have been designed for modern family living and Ninewells boasts plenty of outdoor open spaces, play areas and access to the countryside on your doorstep. The city centre is just 2.4 miles away via the excellent travel and cycle networks on offer, meaning you and your family can benefit from the best of both worlds. It's not just these things that make Ninewells the best place to move or start your family – some of the city's best schools are within easy reach. With The Perse School located about half a mile away offering education from 3-years all the way through to the sixth form age of 18 years, and a host of other outstanding schools like St Faith's, Sancton Wood, St Mary's and The Leys schools close by, Ninewells really is top of the class when it comes to its location. With homes ready to move into now, you could be enjoying your new home while your children enjoy their new term at some of the best schools in the area. Four and five bedroom houses are available from £899,950, see our current availability now.
{ "redpajama_set_name": "RedPajamaC4" }
1,732
\subsection*{Writeup} \newpage \setcounter{page}{1} \input{sections/intro} \input{sections/prelims} \input{sections/influence-basics} \input{sections/poincare-kkl} \input{sections/friedgut} \input{sections/friedgut-kalai} \input{sections/kruskal-katona} \section*{Acknowledgements} A.D.~is supported by NSF grants CCF 1910534 and CCF 1926872. S.N.~is supported by NSF grants CCF-1563155 and by CCF-1763970. R.A.S.~is supported by NSF grants CCF-1814873, IIS-1838154, CCF-1563155, and by the Simons Collaboration on Algorithms and Geometry. This material is based upon work supported by the National Science Foundation under grant numbers listed above. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation (NSF). This work was done while A.D. was participating in the ``Probability, Geometry, and Computation in High Dimensions'' program at the Simons Institute for the Theory of Computing. \section{Sharp Threshold Results for Symmetric Convex Sets\ignore{ under Dilations}} \label{sec:sharp-threshold} For any symmetric convex set $K \subseteq \R^n$, we have that $\gamma_\sigma(K) = \gamma(K/\sigma)$, and hence the map $\Psi_K:\sigma \mapsto \gamma_{\sigma}(K)$ is a non-increasing function of $\sigma$ (since $K/\sigma_1 \subseteq K/\sigma_2$ whenever $\sigma_1 \ge \sigma_2$). Given this, it is natural to study the rate of decay of $\Psi_K$ for different symmetric convex sets $K \subseteq \R^n$. The $S$-inequality (\Cref{prop:s-inequality}) can be interpreted as saying that the slowest rate of decay across all symmetric convex sets of a given volume is achieved by a symmetric strip. Let $K_{\ast}$ be such a strip, i.e.~we may take $K_{\ast}=\{x \in \mathbb{R}^n: |x_1| \le c_\ast\}$ where $c_\ast = \Theta(\sqrt{\ln(1/\eps)})$ is chosen so that $\Psi_{K_\ast}(1) = 1-\varepsilon$ (and hence $\gamma(K_\ast) = 1-\varepsilon$). 
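As a quick numerical illustration of this slow decay (our own sketch, not part of the argument; the helper names and bisection tolerances are ours), the strip's Gaussian mass reduces to a one-dimensional computation, $\gamma_\sigma(K_\ast) = \mathrm{erf}\big(c_\ast/(\sigma\sqrt{2})\big)$, which can be solved for $c_\ast$ and for the dilation at which the mass drops to $\varepsilon$:

```python
import math

def strip_mass(c, sigma=1.0):
    # Gaussian measure of the strip {x : |x_1| <= c} under N(0, sigma^2 I):
    # only the first coordinate matters, so this equals P(|g| <= c/sigma), g ~ N(0,1)
    return math.erf(c / (sigma * math.sqrt(2.0)))

def bisect(f, lo, hi, target, iters=100):
    # solve f(t) = target for an increasing f on [lo, hi]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eps = 0.01
# c_* is chosen so that the strip has Gaussian mass 1 - eps at sigma = 1
c_star = bisect(strip_mass, 0.0, 10.0, 1 - eps)
# the mass is decreasing in sigma, so bisect on its negation to find where it hits eps
sigma = bisect(lambda s: -strip_mass(c_star, s), 1.0, 1e4, -eps)

assert 2.5 < c_star < 2.7   # ~ sqrt(2 ln(1/eps)): about 2.576 for eps = 0.01
assert 150 < sigma < 300    # ~ 205: the mass decays only at scale ~ 1/eps
```

For $\varepsilon = 0.01$ this gives $c_\ast \approx 2.58$ and $\sigma \approx 205$: the dilation must grow to roughly $1/\varepsilon$ (up to logarithmic factors) before the strip's mass falls to $\varepsilon$.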
With this choice of $c_\ast$, it follows that $\Psi_{K_\ast}(\sigma) = \varepsilon$ for $\sigma=\tilde{\Theta}(1/\epsilon)$. Hence, for the volume of $K_{\ast}$ to shrink from $1-\varepsilon$ to $\varepsilon$, the variance of the underlying Gaussian has to increase very dramatically, by a factor of $\tilde{\Theta}(1/\epsilon^2)$. Taking, for example, $\varepsilon = 0.01$, we see that in order for the symmetric strip $K_\ast$ to have its Gaussian volume change from $\gamma_{1}(K_\ast)=0.99$ to $\gamma_\sigma(K_\ast)=0.01$, the parameter $\sigma$ must vary over an interval of size $\Theta(1)$, so the strip $K_\ast$ does not exhibit a ``sharp threshold.'' Of course, as we have seen before, the symmetric strip $K_\ast$ has an extremely large (constant) convex influence in the direction $e_1$. We now show that large individual influences are the only obstacle to sharp thresholds, i.e.~any symmetric convex set in which no direction has large convex influence must exhibit a sharp threshold: \begin{theorem} [Sharp thresholds for symmetric convex sets with small max influence]~\label{thm:threshold-Friedgut-Kalai} Let $K \subseteq \mathbb{R}^n$ be a centrally symmetric convex set. Suppose $\varepsilon, \delta>0$, where $\delta \le \varepsilon^{10\log(1/\varepsilon)}$ and $\varepsilon$ is sufficiently small (at most some fixed absolute constant). Suppose that $\gamma(K) \le 1-\varepsilon$ and $\max_{v \in \mathbb{S}^{n-1}} [\Inf_v(K)] \le \delta$. Then, for $\sigma = 1 + \Theta \pbra{\frac{\ln(1/\varepsilon)}{\sqrt{\ln(\varepsilon/\delta)}}}$, we have $\gamma_{\sigma}(K) \leq \varepsilon$. \end{theorem} Setting $\varepsilon=0.01$ and $\delta =o(1)$, the above theorem implies that for a symmetric convex set $K$ with $\max_{v \in \mathbb{S}^{n-1}} [\Inf_v(K)] =o(1)$, it must be the case that $\gamma_{\sigma}(K)$ changes from $0.99$ to $0.01$ as the underlying $\sigma$ changes from $1$ to $1+o(1)$.
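To get a feel for how the interval in the theorem shrinks as the maximum convex influence $\delta$ decreases, here is a small illustrative computation (our own, not a statement from the theorem; the hidden $\Theta(\cdot)$ constant is set to $1$ and the natural logarithm is used):

```python
import math

def threshold_width(log10_delta, eps=0.01):
    # width ln(1/eps)/sqrt(ln(eps/delta)) from the theorem, with Theta(.) = 1;
    # ln(eps/delta) is computed in log space so that very small delta
    # (e.g. 1e-1000) does not underflow to zero as a float
    ln_ratio = math.log(eps) - log10_delta * math.log(10.0)
    return math.log(1.0 / eps) / math.sqrt(ln_ratio)

# widths of the dilation interval for delta = 1e-10, 1e-100, 1e-1000
widths = [threshold_width(-k) for k in (10, 100, 1000)]
# smaller max convex influence => sharper threshold
assert widths[0] > widths[1] > widths[2]
```

For instance, the computed widths are roughly $1.07$, $0.31$ and $0.10$, decaying like $1/\sqrt{\log(1/\delta)}$.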
\medskip \noindent {\bf Discussion.} \Cref{thm:threshold-Friedgut-Kalai} can be seen as a convex influence analogue of a ``sharp threshold'' result due to Kalai \cite{Kalai:04}. Building on \cite{FriedgutKalai:96}, Kalai \cite{Kalai:04} shows that if $f: \bn \to \zo$ is monotone and its max influence is $o(1)$, then $\mu_p(f)$ must have a sharp threshold (where $\mu_p(f)$ is the expectation of $f$ under the $p$-biased measure) (see also Theorem~3.8 of~\cite{KalaiSafra:05}). This is closely analogous to~\Cref{thm:threshold-Friedgut-Kalai}, which establishes a sharp threshold for $\gamma_{\sigma}(K)$ under the assumption that the max convex influence of $K$ is $o(1)$. We note an interesting quantitative distinction between \Cref{thm:threshold-Friedgut-Kalai} and the result of \cite{Kalai:04}: if the max influence of a monotone function $f: \bn \to \zo$ is $\delta$, then \cite{Kalai:04} shows that $\mu_p(f)$ goes from $0.01$ to $0.99$ in an interval of width $\approx 1/\mathsf{poly}(\log \log (1/\delta))$ (see the discussion following Theorem~3.8 of \cite{KalaiSafra:05}). In contrast, \Cref{thm:threshold-Friedgut-Kalai} shows that $\gamma_{\sigma}(K)$ goes from $0.01$ to $0.99$ in an interval of width $\approx 1/\sqrt{\log (1/\delta)}$, thus establishing an exponentially ``sharper threshold'' in the convex setting.\footnote{Roughly speaking, the extra exponential factor in \cite{Kalai:04} arises because of Friedgut's junta theorem; our proof takes a different path and does not incur this quantitative factor.} \medskip \begin{proofof}{Theorem~\ref{thm:threshold-Friedgut-Kalai}} We may assume that $\gamma(K) \ge \varepsilon$, since otherwise there is nothing to prove. Let $r_{\mathsf{in}}=r_{\mathsf{in}}(K)$ be the in-radius of $K$.
By \Cref{claim:small-inf-big-inradius} we get that \begin{equation}~\label{eq:lb-rinn-K1} r_{\mathsf{in}} \ge \sqrt{\ln \bigg( \frac{\gamma(K)}{2^{3/2} \pi \delta}\bigg)} \ge \sqrt{\ln \bigg( \frac{\varepsilon}{2^{3/2} \pi \delta}\bigg)} \end{equation} (note that our assumptions on $\delta$ and $\varepsilon$ imply that the right hand side of \eqref{eq:lb-rinn-K1} is well-defined). Next, we observe that a \emph{mutatis mutandis} modification of the proof of \Cref{eq:kkl-first-goal} gives that \begin{equation}~\label{eq:influence-lb-sigma} \TInf^{(\sigma)}[K] \ge \frac{1}{\sqrt{\pi}} \cdot r_{\mathsf{in}} \cdot \Var_{\sigma}[K]. \end{equation} We further recall that by our Margulis-Russo formula for symmetric convex sets (\Cref{prop:russo-margulis-convex}), we have \begin{equation}~\label{eq:russo-m-1} \frac{d \gamma_{\sigma}(K)}{d\sigma^2} = -\frac{1}{\sigma^2 \sqrt{2}} \TInf^{(\sigma)}[K]. \end{equation} Combining \eqref{eq:lb-rinn-K1}, \eqref{eq:influence-lb-sigma} and \eqref{eq:russo-m-1}, we get that \begin{equation}\nonumber \frac{d \gamma_{\sigma}(K)}{d\sigma^2} \leq -\frac{1}{\sqrt{2 \pi}\sigma^2} \cdot \Var_{\sigma}[K] \cdot \sqrt{\ln \bigg( \frac{\varepsilon}{2^{3/2} \pi \delta}\bigg)}. \end{equation} Expressing $\Var_{\sigma}[K]$ as $\gamma_{\sigma}(K) \cdot (1-\gamma_{\sigma}(K))$ and ``solving'' the above differential equation for $\gamma_{\sigma}(K)$, we get that \begin{equation} \label{eq:potato} \ln \bigg( \frac{\gamma_{\sigma}(K)}{1-\gamma_{\sigma}(K) }\bigg) - \ln \bigg( \frac{\gamma(K)}{1-\gamma(K) }\bigg) \le \frac{-1}{\sqrt{2 \pi}} \cdot \sqrt{\ln \bigg( \frac{\varepsilon}{2^{3/2} \pi \delta}\bigg)} \cdot 2 \ln \sigma. \end{equation} Using the assumption that $\gamma(K) \leq 1-\epsilon$, it follows that for $\sigma \ge 1$, we have \[ \ln \bigg( \frac{\gamma_{\sigma}(K)}{1-\gamma_{\sigma}(K) }\bigg) \le \ln (1/\epsilon) + \frac{-1}{\sqrt{2 \pi}} \cdot \sqrt{\ln \bigg( \frac{\varepsilon}{2^{3/2} \pi \delta}\bigg)} \cdot 2 \ln \sigma.
\] Recalling the assumption that $\delta \le \varepsilon^{10\log(1/\varepsilon)}$, we see that choosing \[ \sigma = 1 + \Theta \pbra{\frac{\ln(1/\varepsilon)}{\sqrt{\ln(\varepsilon/\delta)}}}, \] we get $\gamma_{\sigma}(K) \leq \varepsilon$ as claimed. \end{proofof} \begin{remark} We close this section by noting that the type of threshold phenomenon studied here has previously been considered in geometric functional analysis. In particular, the seminal work of Milman \cite{Milman:71}, using concentration of measure to establish Dvoretzky's theorem \cite{Dvoretzky:61} on almost Euclidean sections of symmetric convex sets, implies a type of threshold phenomenon for symmetric convex sets. Milman's result can be shown to imply that if the ``Dvoretzky number'' of a symmetric convex set is $\omega_n(1)$, then the set must exhibit a type of sharp threshold behavior. Indeed, Milman's theorem can be used to give an alternate proof of a result that is similar to \Cref{thm:threshold-Friedgut-Kalai}. \end{remark} \section{Introduction} \label{sec:intro} \noindent {\bf Background: An intriguing analogy.} This paper is motivated by an intriguing, but at this point only partially understood, analogy between \emph{monotone Boolean functions over the hypercube} and \emph{symmetric convex sets in Gaussian space.} Perhaps the simplest manifestation of this analogy is the following pair of easy observations: since a Boolean function $f: \bn \to \bits$ is monotone if $f(x) \leq f(y)$ whenever $x_i \leq y_i$ for all $i$, it is clear that ``moving an input up towards $1^n$'' by flipping bits from $-1$ to $1$ can never decrease the value of $f$. Similarly, we may view a symmetric\footnote{A set $K \subseteq \R^n$ is symmetric if $-x \in K$ whenever $x \in K.$} convex set $K \subseteq \R^n$ as a 0/1 valued function, and it is clear from symmetry and convexity that ``moving an input in towards the origin'' can never decrease the value of the function.
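The second observation can be checked mechanically (a toy sketch with a set of our own choosing, not from the text): if $K$ is symmetric and convex and $x \in K$, then for any $\lambda \in [0,1]$ the point $\lambda x = \frac{1+\lambda}{2}\,x + \frac{1-\lambda}{2}\,(-x)$ is a convex combination of $x$ and $-x$ and hence lies in $K$, so scaling an input toward the origin never decreases the indicator:

```python
import random

def K(x):
    # indicator of a symmetric convex set in R^3: a slab intersected with a ball
    return 1 if abs(x[0]) <= 1.0 and sum(t * t for t in x) <= 4.0 else 0

random.seed(0)
for _ in range(10_000):
    x = [random.uniform(-3.0, 3.0) for _ in range(3)]
    lam = random.random()
    # lam*x is a convex combination of x and -x, so K(lam*x) >= K(x)
    assert K([lam * t for t in x]) >= K(x)
```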
The analogy extends far beyond these easy observations to involve many analytic and algorithmic aspects of monotone Boolean functions over $\bn$ under the uniform distribution and symmetric convex subsets of $\R^n$ under the Gaussian measure. Below we survey some known points of correspondence (several of which were only recently established) between the two settings: \begin{enumerate} \item {\bf Density increments.} The well-known Kruskal-Katona theorem \cite{Kruskal:63,katona1968theorem} gives quantitative information about how rapidly a monotone $f: \bn \to \bits$ increases on average as the input to $f$ is ``moved up towards $1^n$.'' Let $f: \bn \to \zo$ be a monotone function and let $\mu_f(j)$ be the fraction of the ${n \choose j}$ many weight-$j$ inputs for which $f$ outputs 1; the Kruskal-Katona theorem implies (see e.g.~\cite{lovasz2007combinatorial}) that if $k = cn$ for some $c$ bounded away from 0 and 1 and $\mu_f(k) \in [0.1,0.9]$, then $\mu_f(k+1) \ge \mu_f(k) + \Theta(1/n).$ Analogous ``density increment'' results for symmetric convex sets are known to hold in various forms, where the analogue of moving an input in $\bn$ up towards $1^n$ is now moving an input in $\R^n$ in towards the origin, and the analogue of $\mu_f(j)$ is now $\alpha_r(K)$, which is defined to be the fraction of the origin-centered radius-$r$ sphere $r\mathbb{S}^{n-1}$ that lies in $K$. For example, Theorem~2 of the recent work \cite{DS21-weak-learning} shows that if $K \subseteq \R^n$ is a symmetric convex set (which we view as a function $K: \R^n \to \zo$) and $r=\Theta(\sqrt{n})$ satisfies $\alpha_r(K) \in [0.1,0.9]$, then $\alpha_{r(1-1/n)}(K) \geq \alpha_r(K) + \Theta(1/n)$.
\item {\bf Weak learning from random examples.} Building on the above-described density increment for symmetric convex sets, \cite{DS21-weak-learning} showed that any symmetric convex set can be learned to accuracy $1/2 + \Omega(1)/\sqrt{n}$ in $\poly(n)$ time given $\poly(n)$ many random examples drawn from ${\cal N}(0,1)^n$. \cite{DS21-weak-learning} also shows that any $\poly(n)$-time weak learning algorithm (even if allowed to make membership queries) can achieve accuracy no better than $1/2 + O(\log(n)/\sqrt{n})$. These results are closely analogous to the known (matching) upper and lower bounds for $\poly(n)$-time weak learning of monotone functions with respect to the uniform distribution over $\bn$: Blum et al.~\cite{BBL:98} showed that $1/2 + \Theta(\log(n)/\sqrt{n})$ is the best possible accuracy for a $\poly(n)$-time weak learner (even if membership queries are allowed), and O'Donnell and Wimmer \cite{OWimmer:09} gave a $\poly(n)$ time weak learner that achieves this accuracy using random examples only. \item {\bf Analytic structure and strong learning from random examples.} \cite{BshoutyTamon:96} showed that the Fourier spectrum of any $n$-variable monotone Boolean function over $\bn$ is concentrated in the first $O(\sqrt{n})$ levels. Analogously, \cite{KOS:08} showed that the same concentration holds for the first $O(\sqrt{n})$ levels of the Hermite spectrum\footnote{The Hermite polynomials form an orthonormal basis for the space of square-integrable real-valued functions over Gaussian space; the Hermite spectrum of a function over Gaussian space is analogous to the familiar Fourier spectrum of a function over the Boolean hypercube. See~\Cref{sec:prelims} for details.} of the indicator function of any convex set. 
In both cases this concentration gives rise to a learning algorithm, using random examples only, running in $n^{O(\sqrt{n})}$ time and learning the relevant class (either monotone Boolean functions over the $n$-dimensional hypercube or convex sets under Gaussian space) to any constant accuracy. \item {\bf Qualitative correlation inequalities.} The well-known Harris-Kleitman theorem \cite{harris60,kleitman66} states that monotone Boolean functions are non-negatively correlated: any monotone $f,g: \bn \to \zo$ must satisfy $\E[fg] - \E[f]\E[g] \geq 0$. The Gaussian Correlation Inequality \cite{roy14} gives an exactly analogous statement for symmetric convex sets in Gaussian space: if $K,L \subseteq \R^n$ are any two symmetric convex sets, then $\E[K L] - \E[K]\E[L] \geq 0$, where now expectations are with respect to ${\cal N}(0,1)^n$. \item {\bf Quantitative correlation inequalities.} Talagrand \cite{Talagrand:96} proved the following \emph{quantitative} version of the Harris-Kleitman inequality: for monotone $f,g: \bn \to \zo$, \begin{equation} \label{eq:Talagrand} \E[f g] - \E[f] \E[g] \ge \frac{1}{C} \cdot \Psi\left(\sum_{i=1}^n \Inf_i[f] \Inf_i[g]\right). \end{equation} Here $\Psi(x) := x/\log(e/x)$, $C>0$ is an absolute constant, $\Inf_i[f]$ is the influence of coordinate $i$ on $f$ (see \Cref{sec:prelims}), and the expectations are with respect to the uniform distribution over $\bn$. 
In recent work, \cite{DNS20} proved a closely analogous quantitative version of the Gaussian Correlation Inequality: for $K, L$ symmetric convex subsets of $\R^n$, \begin{equation} \label{eq:DNS} \E[KL] - \E[K]\E[L] \geq {\frac 1 C} \cdot \Upsilon\left(\sum_{i=1}^n \widetilde{K}(2e_i)\widetilde{L}(2e_i)\right), \end{equation} where $\Upsilon: [0,1] \to [0,1]$ is $\Upsilon(x) = \min\left\{x,\frac{x}{\log^2\left(1/x\right)}\right\}$, $C>0$ is a universal constant, $\widetilde{K}(2e_i)$ denotes the degree-2 Hermite coefficient in direction $e_i$ (see~\Cref{sec:prelims}), and expectations are with respect to ${\cal N}(0,1)^n$. \end{enumerate} We remark that in many of the above cases the proofs of the two analogous results (Boolean versus Gaussian) are very different from each other even though the statements are quite similar. For example, the Harris-Kleitman theorem has a simple one-paragraph proof by induction on $n$, whereas the Gaussian Correlation Inequality was a famous conjecture for four decades before Thomas Royen proved it in 2014. \medskip \noindent {\bf Motivation.} We feel that the examples presented above motivate a deeper understanding of this ``Boolean/Gaussian analogy.'' This analogy may be useful in a number of ways; in particular, via this connection known results in one setting may suggest new questions and results for the other setting.\footnote{Indeed, the recent Gaussian density increment and weak learning results of \cite{DS21-weak-learning} were inspired by the Kruskal-Katona theorem and the weak learning algorithms and lower bounds of \cite{BBL:98} for monotone Boolean functions. 
Similarly, the recent quantitative version of the Gaussian Correlation Inequality established in \cite{DNS20} was motivated by the existence of Talagrand's quantitative correlation inequality for monotone Boolean functions.} Thus the overarching goal of this paper is to strengthen the analogy between monotone Boolean functions over $\bn$ and symmetric convex sets in Gaussian space. We do this through the study of a new notion of \emph{influence} for symmetric convex sets in Gaussian space. \subsection{This Work: A New Notion of Influence for Symmetric Convex Sets} Before presenting our new notion of influence for symmetric convex sets in Gaussian space, we first briefly recall the usual notion for Boolean functions. For $f: \bn \to \zo$, the \emph{influence of coordinate $i$ on $f$}, denoted $\Inf_i[f]$, is $\Pr[f(\bx) \neq f(\bx^{\oplus i})]$, where $\bx$ is uniform random over $\bn$ and $\bx^{\oplus i}$ denotes $\bx$ with its $i$-th coordinate flipped. It is a well-known fact (see e.g.~Proposition~2.21 of \cite{ODbook}) that for monotone Boolean functions $f$, we have $\Inf_i[f] = \widehat{f}(i)$, the degree-1 Fourier coefficient corresponding to coordinate $i$. Inspired by the relation $\Inf_i[f] = \widehat{f}(i)$ for influence of monotone Boolean functions, and by the close resemblance between \Cref{eq:Talagrand} and \Cref{eq:DNS}, \cite{DNS20} proposed to define the \emph{influence of $K$ along direction $v$}, for $K: \R^n \to \zo$ a symmetric convex set and $v \in \mathbb{S}^{n-1}$, to be \[ \Inf_v[K] := -\widetilde{K}(2v),\] the (negated) degree-2 Hermite coefficient\footnote{We observe that if $K$ is a symmetric set then since its indicator function is even, the degree-1 Hermite coefficient $\widetilde{K}(v)$ must be 0 for any direction $v$.} of $K$ in direction $v$ (see \Cref{def:csc-influence} for a detailed definition). 
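To make this definition concrete, the following small Monte Carlo computation (our illustration, not taken from \cite{DNS20}) estimates the convex influence of the slab $K = \{x \in \R^n : |x_1| \le a\}$. It assumes the orthonormal Hermite convention $h_2(t) = (t^2-1)/\sqrt{2}$ (see \Cref{sec:prelims} for the normalization actually used in this paper); under that convention, direct integration gives $\Inf_{e_1}[K] = \sqrt{2}\,a\,\varphi(a)$, where $\varphi$ is the standard normal density, while the influence in directions orthogonal to $e_1$ vanishes.

```python
import numpy as np
from math import erf, exp, pi, sqrt

rng = np.random.default_rng(0)
n, a, N = 5, 1.0, 400_000

# K = {x in R^n : |x_1| <= a}, a symmetric convex slab.
x = rng.standard_normal((N, n))
in_K = (np.abs(x[:, 0]) <= a).astype(float)

def h2(t):
    # Degree-2 Hermite polynomial, orthonormal w.r.t. N(0,1) (assumed convention).
    return (t * t - 1.0) / np.sqrt(2.0)

# Convex influence along e_1 and e_2: the negated degree-2 Hermite coefficient.
inf_e1 = -np.mean(in_K * h2(x[:, 0]))  # ≈ sqrt(2)*a*phi(a) for a = 1
inf_e2 = -np.mean(in_K * h2(x[:, 1]))  # ≈ 0: the slab is "flat" along e_2

# Closed form from direct integration over the slab.
exact = sqrt(2) * a * exp(-a * a / 2) / sqrt(2 * pi)

# The slab's volume under N(0, sigma^2)^n is 2*Phi(a/sigma) - 1; its derivative
# in sigma at sigma = 1 equals -sqrt(2) * Inf_{e_1}[K] in this normalization.
Phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))
h = 1e-5
dvol = (2 * Phi(a / (1 + h)) - 2 * Phi(a / (1 - h))) / (2 * h)

print(inf_e1, inf_e2, exact, dvol)
```

The proportionality between $\Inf_{e_1}[K]$ and the $\sigma$-derivative of the Gaussian volume is in the spirit of the Margulis-Russo-type identity of \Cref{subsec:russo-margulis}; the constant $\sqrt{2}$ here is an artifact of our chosen normalization, not a claim about the constant appearing in that identity.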
\cite{DNS20} proved that this quantity is non-negative for any direction $v$ and any symmetric convex $K$ (see \Cref{prop:influence-nonneg}). They also defined the \emph{total influence of $K$} to be \begin{equation} \label{eq:TInf} \TInf[K] := \sum_{i=1}^n \Inf_{e_i}[K] \end{equation} and observed that this definition is invariant under any choice of orthonormal basis (not just $e_1,\dots,e_n$), but did not explore these definitions further. The main contribution of the present work is to carry out an in-depth study of this new notion of influence for symmetric convex sets. For conciseness, and to differentiate it from other influence notions (which we discuss later), we will sometimes refer to this new notion as ``convex influence.'' Inspired by well-known results about influence of monotone Boolean functions, we establish a number of different results about convex influence which show that this notion shares many properties with the familiar Boolean influence notion. Intriguingly, and similar to the Boolean/Gaussian analogy elements discussed earlier, while the statements we prove about convex influence are quite closely analogous to known results about Boolean influences, the proofs and tools that we use (Gaussian isoperimetry, Brascamp-Lieb type inequalities, theorems from the geometry of Gaussian space such as the $S$-inequality~\cite{s-inequality}, etc.) are very different from the ingredients that underlie the corresponding results about Boolean influence. \subsection{Results and Organization} We give an overview of our main results below. \paragraph{Basics, examples, Margulis-Russo, and extremal functions.} We begin in \Cref{sec:basics} by working through some basic properties of our new influence notion. 
After analyzing some simple examples in \Cref{subsec:influence-examples}, we next show in \Cref{subsec:russo-margulis} that the total convex influence for a symmetric convex set is equal to (a scaled version of) the rate of change of the Gaussian volume of the set as the variance of the underlying Gaussian is changed. This gives an alternate characterization of total convex influence, and may be viewed as an analogue of the Margulis-Russo formula for our new influence notion. We continue in \Cref{subsec:extremal} by giving some straightforward characterizations of extremal symmetric convex sets vis-a-vis our influence notion, namely the ones that have the largest individual influence in a single direction and the largest total influence. As one would expect, these extremal functions are the Gaussian space analogues of the Boolean dictator and majority functions respectively. Next, we compare our new influence notion with some other previously studied notions of influence over Gaussian space (\Cref{subsec:other-notions-influence}). These include the ``geometric influences'' that were studied by \cite{keller2012geometric} as well as the standard notion (from the analysis of functions over product probability domains, see e.g.~Chapter~8 of \cite{ODbook}) of the expected variance of the function along one coordinate when all other coordinates are held fixed. \paragraph{Total influence lower bounds.} In \Cref{sec:poincare-kkl} we give two lower bounds on the total convex influence (\Cref{eq:TInf}) for symmetric convex sets, which are closely analogous to the classical Poincar\'{e} and KKL Theorems. Our KKL analogue is quadratically weaker than the KKL theorem for Boolean functions; we conjecture that a stronger bound in fact holds, which would quantitatively align with the Boolean variant (see Item~1 of \Cref{sec:discussion}). 
Our proofs, which are based on the ``$S$-inequality'' of Lata\l a and Oleszkiewicz \cite{s-inequality} and on the Gaussian isoperimetric inequality, are quite different from the proofs of the analogous statements for Boolean functions. \paragraph{(A consequence of) Friedgut's junta theorem.} In \Cref{sec:our-friedgut} we establish a convex influences analogue of a consequence of Friedgut's junta theorem. Friedgut's junta theorem states that any Boolean $f: \bn \to \zo$ with small total influence must be close to a junta. This implies that for any monotone $f: \bn \to \zo$ with small total influence, ``averaging out'' over a small well-chosen set of input variables (the variables on which the approximating junta depends) results in a low-variance function. We prove a closely analogous statement for symmetric convex sets with small total convex influence, thus capturing a convex influence analogue of this consequence of Friedgut's junta theorem. (We conjecture that a convex influence analogue holds for Friedgut's original junta theorem; see Item~2 of \Cref{sec:discussion}.) \paragraph{Sharp thresholds for functions with all small influences.} In \Cref{sec:sharp-threshold} we establish a ``sharp threshold'' result for symmetric convex sets in Gaussian space, which is analogous to a sharp threshold result for monotone Boolean functions due to Kalai \cite{Kalai:04}. Building on earlier work of Friedgut and Kalai \cite{FriedgutKalai:96}, Kalai \cite{Kalai:04} showed that if $f: \bn \to \zo$ is a monotone Boolean function and $p \in (0,1)$ is such that (i) all the $p$-biased influences of $f$ are $o_n(1)$ and (ii) the expectation of $f$ under the $p$-biased measure is $\Theta(1)$, then $f$ must have a ``sharp threshold'' in the following sense: the expectation of $f$ under the $p_1$-biased measure ($p_2$-biased measure, respectively) is $o_n(1)$ ($1-o_n(1)$, respectively) for some $p_1 < p < p_2$ with $p_2 - p_1 = o_n(1)$. 
For our sharp threshold result, we prove an analogous statement for symmetric convex sets, where now ${\cal N}(0,\sigma^2)^n$ takes the place of the $p$-biased distribution over $\bn$ and the $\sigma$-biased convex influences (see \Cref{def:sigma-biased-influence}) take the place of the $p$-biased influences. Interestingly, the sharpness of our threshold is quantitatively better than the known analogous result \cite{Kalai:04} for monotone Boolean functions; see \Cref{sec:sharp-threshold} for an elaboration of this point. \paragraph{A stable density increment result.} Finally, in \Cref{sec:robust-kk}, we use our new influence notion to give a Gaussian space analogue of a ``stability'' version of the Kruskal-Katona theorem due to O'Donnell and Wimmer \cite{OWimmer:09}. In \cite{OWimmer:09} it is shown that the $\Omega(1/n)$ density increment of the Kruskal-Katona theorem (see Item~1 at the beginning of this introduction) can be strengthened to $\Omega(\log(n)/n)$ as long as a ``low individual influences''-type condition holds. We analogously show that a similar strengthening of the Gaussian space density increment result mentioned in Item~1 earlier can be achieved under the condition that the convex influence in every direction is low. 
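As a concrete numerical illustration of the sharp-threshold phenomenon described above (a self-contained Monte Carlo sketch of ours, not the argument of \Cref{sec:sharp-threshold}), consider the origin-centered ball $K = \{x : \|x\|_2^2 \le n\}$, whose ${\cal N}(0,1)^n$ measure is close to $1/2$. Its measure under ${\cal N}(0,\sigma^2)^n$ is $\Pr[\sigma^2 \chi^2_n \le n]$ (with $\chi^2_n$ a chi-squared random variable with $n$ degrees of freedom), which drops from nearly $1$ to nearly $0$ as $\sigma$ moves across a window of width $O(1/\sqrt{n})$ around $1$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 100, 50_000

# chi^2_n samples: squared norms of standard Gaussian vectors in R^n.
chi2 = (rng.standard_normal((N, n)) ** 2).sum(axis=1)

def vol(sigma):
    # Measure of K = {x : ||x||^2 <= n} under N(0, sigma^2)^n.
    return float(np.mean(chi2 <= n / sigma**2))

eps = 2 / np.sqrt(n)  # a window of width O(1/sqrt(n)) around sigma = 1
print(vol(1 - eps), vol(1.0), vol(1 + eps))  # near 1, near 1/2, near 0
```

The ball is of course an especially symmetric example; the point of the theorem in \Cref{sec:sharp-threshold} is that smallness of all $\sigma$-biased convex influences already forces this behavior.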
\subsection{Techniques} We give a high-level overview here of the techniques for just one of our results, namely our analogue of the KKL theorem, \Cref{thm:our-kkl}. Several of our other results either employ similar tools (for example, our robust density increment result, \Cref{thm:KK-robust-convex}, and our main sharp threshold result, \Cref{thm:threshold-Friedgut-Kalai}) or else build off of \Cref{thm:our-kkl} (for example, our analogue of a consequence of Friedgut's junta theorem, \Cref{thm:weak-friedgut-convex}). The KKL theorem states that if $f: \bn \to \bits$ has every coordinate influence small, specifically $\max_{i \in [n]} \Inf_i[f] \leq \delta$, then the total influence of $f$ must be large compared to $f$'s variance, specifically it must hold that $\TInf[f] = \Omega(\Var[f] \log(1/\delta))$. This is a dramatic strengthening of the Poincar\'{e} inequality (which only states that $\TInf[f] \geq \Var[f]$) and is a signature result in the analysis of Boolean functions with many applications. The classical proof of the KKL theorem is based on hypercontractivity \cite{Bon70,Bec75}, and only recently \cite{EldanGross:20,KKKMS:21} have proofs been given which avoid the use of hypercontractivity. Our convex influences analogue of the KKL theorem states that if $K$ is a symmetric convex set and the convex influence $\Inf_v[K]$ in every direction $v \in \mathbb{S}^{n-1}$ is at most $\delta$ and $\delta \leq \Var[K]/10$, then the total convex influence $\TInf[K]$ must be at least $\Omega\pbra{\Var[K]\sqrt{\log\pbra{\frac{\Var[K]}{\delta}}}}.$ Our proof does not employ hypercontractivity but instead uses tools from convex geometry. 
It proceeds in two main conceptual steps: \begin{enumerate} \item First, we use a Brascamp--Lieb-type inequality due to Vempala \cite{Vempalafocs10} to argue that the maximum convex influence of $K$ in any direction can be lower bounded in terms of the Gaussian volume of $K$ and its ``width'' (equivalently, the radius of the largest origin-centered ball contained in $K$, which is called the \emph{in-radius} of $K$ and is denoted $r_{\mathsf{in}}(K)$). This lets us show that $r_{\mathsf{in}}(K) \geq \Omega(\sqrt{\ln(\Var[K]/\delta)})$ (see \Cref{eq:kkl-second-goal}). \item Next, we argue that $\TInf[K] \geq {\frac 1 {\sqrt{\pi}}}\Var\sbra{K}\cdot r_{\mathsf{in}}(K)$ (see \Cref{eq:kkl-first-goal}), which together with the lower bound on $r_{\mathsf{in}}(K)$ gives the result. This is shown using our Margulis-Russo analogue, the Gaussian Isoperimetric Theorem, and concavity of the Gaussian isoperimetric function. \end{enumerate} \subsection{Discussion and Future Work} \label{sec:discussion} We believe that much more remains to be discovered about this new notion of influences for symmetric convex sets. We list some natural concrete (and not so concrete) questions for future work: \begin{enumerate} \item {\bf A stronger KKL-type theorem for convex influences?} We conjecture that the factor of $\sqrt{\log(\Var[K]/\delta)}$ in our KKL analogue, \Cref{thm:our-kkl}, can be strengthened to $\log(1/\delta)$. As witnessed by \Cref{eg:solid-cube}, this would be essentially the strongest possible quantitative result, and would align closely with the original KKL theorem \cite{KKL:88}. \item {\bf An analogue of Friedgut's theorem for convex influences?} As described earlier, our \Cref{thm:weak-friedgut-convex} establishes a Gaussian space analogue of a consequence of Friedgut's Junta Theorem \cite{Friedgut:98} for Boolean functions over $\bn$. 
The following would give a full-fledged Gaussian space analogue of Friedgut's Junta Theorem: \begin{conjecture} [Friedgut's Junta Theorem for convex influences] \label{conj:Friedgut} Let $K \subseteq \R^n$ be a convex symmetric set with $\TInf[K] \leq I$, and let $\eps > 0$. Then there are $J \leq 2^{O(I/\eps)}$ orthonormal directions $v^1,\dots,v^{J} \in \mathbb{S}^{n-1}$ and a symmetric convex set $L \subseteq \R^n$, such that \begin{enumerate} \item $L(x)$ depends only on the values of $v^1 \cdot x, \dots,v^{J} \cdot x$, and \item $\Pr_{\bx \sim \calN(0, 1)^n}[K(\bx) \neq L(\bx)] \leq \eps.$ \end{enumerate} \end{conjecture} \item {\bf Are low-influence directions (almost) irrelevant?} Related to the previous question, we note that it seems to be surprisingly difficult to show that low-influence directions ``don't matter much'' for convex sets. For example, it is an open question to establish the following, which would give a dimension-free robust version of the last assertion of \Cref{prop:influence-nonneg}: \begin{conjecture} \label{conj:low-inf} Let $K \subseteq \R^n$ be symmetric and convex, and suppose that $v \in \mathbb{S}^{n-1}$ is such that $\Inf_v[K] \leq \eps.$ Then there is a symmetric convex set $L$ such that \begin{enumerate} \item $L(x)$ depends only on the projection of $x$ onto the $(n-1)$-dimensional subspace orthogonal to $v$, and \item $\Pr_{\bx \sim \calN(0, 1)^n}[K(\bx) \neq L(\bx)] \leq \tau(\eps)$ for some function $\tau$ depending only on $\eps$ (in particular, independent of $n$) and going to 0 as $\eps \to 0.$ \end{enumerate} \end{conjecture} While the corresponding Boolean statement is very easy to establish, natural approaches to \Cref{conj:low-inf} lead to open (and seemingly challenging) questions regarding dimension-free stable versions of the Ehrhard-Borell inequality \cite{Figalli:20,Zvavitch:20}. 
\item {\bf Algorithmic results?} Finally, a broader goal is to further explore the similarities and differences between the theory of convex symmetric sets in Gaussian space and the theory of monotone Boolean functions over $\bn$. One topic where the gap in our understanding is particularly wide is the algorithmic problem of \emph{property testing}. The problem of testing monotonicity of functions from $\bn$ to $\bits$ is rather well understood, with the current state of the art being an $\tilde{O}(n^{1/2})$-query upper bound and an $\tilde{\Omega}(n^{1/3})$-query lower bound \cite{KMS15,CWX17}. In contrast, the problem of testing whether an unknown region in $\R^n$ is convex (with respect to the standard normal distribution) is essentially wide open, with the best known upper bound being $n^{O(\sqrt{n})}$ queries \cite{chen2017sample} and no nontrivial lower bounds known. \end{enumerate} \section{Towards a Junta Theorem for Convex Sets} \label{sec:our-friedgut} Friedgut's junta theorem \cite{Friedgut:98} is an important result in the analysis of Boolean functions. It says that Boolean functions with low total influence must be close to juntas; more precisely, if $f: \bn \to \zo$ has $\TInf[f] \leq I$, then $f$ is $\eps$-close to a junta on some set $J$ of at most $2^{O(I/\eps)}$ variables. Like the KKL theorem, the standard proof of Friedgut's theorem uses the hypercontractive inequality (and is in fact quite similar to the proof of the KKL theorem; see Section~9.6 of \cite{ODbook}). An easy consequence of Friedgut's junta theorem is that for any low-influence function, averaging out a well-chosen small set of coordinates makes the function have low variance: \begin{corollary} [Corollary of Friedgut's junta theorem] \label{cor:friedgut} Let $f: \bn \to \zo$ be a function that has $\TInf[f] \leq I$. 
Let $\eps > 0$ and let $f_{-J}: \bn \to [0,1]$ denote the function obtained by ``averaging out'' the coordinates in $J$, i.e., $f_{-J}$ is defined as \[ f_{-J}(x^{[n]\setminus J}) := \Ex_{\bx \sim \bits^{J}}[f(\bx^{J},x^{[n]\setminus J})], \] where $J$ is the set of $2^{O(I/\eps)}$ variables whose existence is given by Friedgut's junta theorem (so $f_{-J}$ depends only on the coordinates in $[n] \setminus J$). Then $\Var[f_{-J}] \leq 4\eps.$ \end{corollary} \begin{proof} Let $\mu_f = \E[f]=\E[f_{-J}].$ Let $g$ denote the $J$-junta which $\eps$-approximates $f$, and let $\mu_g = \E[g]$; note that since $f$ and $g$ are both $0/1$-valued, we have $\E[(f-g)^2] \leq \eps.$ We have \begin{align*} \Var[f_{-J}] &= \E[(f_{-J}-\mu_f)^2]\\ &\leq 2 \left(\E[(f_{-J} - \mu_g)^2] + \E[(\mu_f - \mu_g)^2] \right)\\ &\leq 2 \left(\E[(f - g)^2] + \E[(f -g)^2] \right) \leq 4\eps. \qedhere \end{align*} \end{proof} In this section we prove a Gaussian space analogue of \Cref{cor:friedgut} for our convex influence notion: \begin{theorem}[Analogue of \Cref{cor:friedgut} for convex influence] \label{thm:weak-friedgut-convex} Let $K\sse\R^n$ be a symmetric convex set with $\TInf[K] \leq I$. For any $\eps > 0$, there exists a set of orthonormal directions $S = \{v_{i_1}, \ldots, v_{i_\ell}\}$ with $\ell = \exp\pbra{O\pbra{I^2/\eps^4}}$ such that the following holds: For convenience, rename coordinates so that $v_{i_1} = e_1,\dots,v_{i_\ell}=e_\ell$, and define $K_{-S}$ to be the symmetric log-concave function \[K_{-S}(x) := \Ex_{(\bx_{1}, \ldots, \bx_{\ell}) \sim {\cal N}(0,1)^\ell}[K(\bx_1,\dots,\bx_\ell, x_{\ell+1},\dots,x_n)]\] (so $K_{-S}$ depends only on the variables $x_{\ell+1},\dots,x_n$). 
Then $\Var[K_{-S}] \leq \eps.$ \end{theorem} \subsection{The Main Technical Ingredient} The main technical ingredient in our proof of \Cref{thm:weak-friedgut-convex} is the following generalization of \Cref{thm:our-kkl} (our convex influence analogue of the KKL theorem) to symmetric log-concave functions: \begin{proposition}[KKL for symmetric logconcave functions] \label{prop:kkl-symmetric-logconcave} Let $f: \R^n \to [0,1]$ be a symmetric log-concave function with $\Var[f] \geq \sigma^2$ and $\TInf[f] \leq I$. Then there exists some direction $v^\ast \in \mathbb{S}^{n-1}$ such that \[\Inf_{v^\ast}[f] \geq \Omega\pbra{\sigma^2 e^{-4 \pi I^2/\sigma^4}}.\] \end{proposition} \begin{proof} For $t\in[0,1]$, define the level set \[A_t := \{x\in\R^n : f(x)\geq t\}.\] It is immediate from the log-concavity of $f$ that $A_t$ is a symmetric convex set for all $t \in [0,1]$, and that \[f(x) = \int_{0}^1 A_t(x)\,dt = \Ex_{\bt \in [0,1]}[A_{\bt}(x)],\] where we identified $A_t$ with its indicator function. Next, note that for any $x \in \R^n$, by Jensen's inequality we have that \begin{equation} \label{eq:arglebargle} \Ex_{\bt}\sbra{\abs{A_{\bt}(x) - \Ex_{\by}\sbra{A_{\bt}(\by)}}^2} \geq \Ex_{\bt}\sbra{\abs{A_{\bt}(x) - \Ex_{\by}\sbra{A_{\bt}(\by)}}}^2 \geq \abs{\Ex_{\bt}\sbra{A_{\bt}(x)} - \Ex_{\bt}\sbra{\Ex_{\by}\sbra{A_{\bt}(\by)}}}^2. \end{equation} Averaging \Cref{eq:arglebargle} over $\bx \sim {\cal N}(0,1)^n$, the LHS becomes $\Ex_{\bt}\sbra{\Var\sbra{A_{\bt}}}$ and the RHS becomes $\Var[f]$, so we get that \begin{equation} \label{eq:var-lb} \Ex_{\bt}\sbra{\Var\sbra{A_{\bt}}} \geq \Var[f] \geq \sigma^2. \end{equation} Let $r_{\mathsf{in}}(A_t)$ denote the in-radius of $A_t$. 
From the proof of \Cref{thm:our-kkl} (in particular, see \Cref{eq:kkl-first-goal}), we have that \[\TInf[A_t] \geq {\frac 1 {\sqrt{\pi}}} r_{\mathsf{in}}(A_t)\cdot\Var[A_t] \] and as $\TInf[f] = \int_{0}^1 \TInf[A_t]\,dt$, we have that \begin{equation}\label{eq:var-ub} I \geq \TInf[f] \geq {\frac 1 {\sqrt{\pi}}} \int_{0}^1r_{\mathsf{in}}(A_t)\cdot\Var[A_t]\,dt, \quad\text{i.e.}\quad \sqrt{\pi}I \geq \Ex_{\bt \in [0,1]}[r_{\mathsf{in}}(A_{\bt}) \Var[A_{\bt}]]. \end{equation} Let ${\cal D}$ be the distribution over $[0,1]$ which samples each outcome of $t \in [0,1]$ with probability proportional to $\Var[A_t]$ (so the density function of ${\cal D}$ at $t$ is $\Var[A_t]/\int_0^1 \Var[A_s] ds$). Armed with ${\cal D}$, we may infer from \Cref{eq:var-lb,eq:var-ub} that $ \Ex_{\bt \sim {\cal D}}[r_{\mathsf{in}}(A_{\bt})] \leq \sqrt{\pi}I/\sigma^2$, so by Markov's inequality we have that \begin{equation} \label{eq:lemon} \Pr_{\bt \sim {\cal D}}\left[r_{\mathsf{in}}(A_{\bt}) \leq 2 \sqrt{\pi}I/\sigma^2\right] \geq 1/2. \end{equation} Recall that $r_{\mathsf{in}}(A_{t})$ is non-increasing, and let $t^\ast = \inf\{t \in [0,1]: r_{\mathsf{in}}(A_{t}) \leq 2 \sqrt{\pi}I/\sigma^2\}$. By definition of ${\cal D}$ and \Cref{eq:lemon}, we have that \begin{equation} \label{eq:apple} \int_{t = t^\ast}^1 \Var[A_t] dt \geq \sigma^2/2. \end{equation} Applying \Cref{claim:small-inf-big-inradius} to $A_{t^\ast}$, we get that there exists some direction $v^\ast \in \mathbb{S}^{n-1}$ such that \begin{equation} \label{eq:nimbu} \Inf_{v^\ast}[A_{t^\ast}] \geq \frac{\gamma(A_{t^\ast}) e^{-r_{\mathsf{in}}(A_{t^\ast})^2}}{2^{3/2} \pi} \geq \frac{e^{-r_{\mathsf{in}}(A_{t^\ast})^2}}{2^{3/2} \pi} \cdot \Var\sbra{A_{t^\ast}}. 
\end{equation} Since $A_{t} \subseteq A_{t^\ast}$ for $t \geq t^\ast$, it follows from the proof of \Cref{claim:small-inf-big-inradius} (and the fact that $r_{\mathsf{in}}(A_{t})$ is non-increasing in $t$) that the direction $v^\ast$ has $\Inf_{v^\ast}[A_{t}] \geq \frac{ e^{-r_{\mathsf{in}}(A_{t^\ast})^2}}{2^{3/2} \pi} \cdot \Var\sbra{A_{t}}$ for all $t \geq t^\ast$. Hence \begin{align} \Inf_{v^\ast}[f] &= \int_{t=0}^1 \Inf_{v^\ast}[A_t] dt \geq \int_{t=t^\ast}^1 \Inf_{v^\ast}[A_{t}] dt \geq \frac{ e^{-r_{\mathsf{in}}(A_{t^\ast})^2}}{2^{3/2} \pi} \cdot \int_{t=t^\ast}^1 \Var\sbra{A_{t}} dt \nonumber \\ &\geq \frac{ \sigma^2 e^{-r_{\mathsf{in}}(A_{t^\ast})^2}}{2^{5/2} \pi} \label{eq:banana}\\ &\geq \frac{ \sigma^2 e^{-4\pi I^2/\sigma^4}}{2^{5/2} \pi} \nonumber \end{align} where \Cref{eq:banana} is by \Cref{eq:apple}. \end{proof} \subsection{Proof of \Cref{thm:weak-friedgut-convex}} If $\Var[K] \leq \eps$, then the result clearly holds. 
If $\Var[K] > \eps$, then by \Cref{prop:kkl-symmetric-logconcave}, there exists some direction $v\in\mathbb{S}^{n-1}$---without loss of generality, say $v = e_1$---such that \[\Inf_{v}[K] = \Inf_{e_1}[K] \geq c\eps\cdot e^{-4 \pi I^2/\eps^2}\] for some absolute constant $c > 0$. Let $K_{-e_1} : \R^{n-1} \to [0,1]$ be the symmetric log-concave function obtained from $K$ by averaging out the coordinate $e_1$, i.e., we define \[K_{-e_1}(x):= \Ex_{\bx_1 \sim {\cal N}(0,1)}[K(\bx_1,x_2,\dots,x_n)].\] It follows from \Cref{obs:influence-averaging} that for all $i \neq 1$, we have $\Inf_{e_i}\sbra{K_{-e_1}} = \Inf_{e_i}[K]$, and so we have \[\TInf\sbra{K_{-e_1}} = \TInf[K] - \Inf_{e_1}[K] \leq I - c\eps\cdot e^{-4 \pi I^2/\eps^2}.\] If $\Var[K_{-e_1}] \leq \eps$, then the claimed result holds; if not, then once again by \Cref{prop:kkl-symmetric-logconcave}, there exists some direction---without loss of generality, say $e_2$---such that \[\Inf_{e_2}\sbra{K_{-e_1}} \geq c\eps\cdot e^{-4 \pi I^2/\eps^2},\] and we can average out $e_2$ to obtain $K_{-\cbra{e_1, e_2}},$ \[K_{-\cbra{e_1, e_2}}(x):= \Ex_{\bx_1,\bx_2 \sim {\cal N}(0,1)}[K(\bx_1,\bx_2,x_3,\dots,x_n)],\] with \[\TInf\sbra{K_{-\cbra{e_1, e_2}}} \leq I - 2c\eps\cdot e^{-4 \pi I^2/\eps^2}.\] If $\Var\sbra{K_{-\cbra{e_1, e_2}}} \leq \eps$, then the desired result holds; if not, then we repeat as above. Note, however, that the maximum possible number of repetitions (before $\TInf[K_{-S}]$ would become negative, which is impossible) is at most \[\frac{I}{c\eps \cdot e^{-4 \pi I^2/\eps^2}} = \exp\pbra{O(I^2/\eps^2)};\] so after at most this many repetitions it must be the case that $\Var\sbra{K_{-S}} \leq \eps.$ This concludes the proof of \Cref{thm:weak-friedgut-convex}. 
\qed \section{A Robust Kruskal-Katona Analogue for Symmetric Convex Sets} \label{sec:robust-kk} Recall from \Cref{eq:sdf} that for a symmetric convex set $K \subseteq \mathbb{R}^n$, the shell density function $\alpha_K : [0,\infty) \rightarrow [0,1]$ is defined to be $\alpha_K(r) \coloneqq \Prx_{\bx \in \mathbb{S}^{n-1}_r}[\bx \in K]$, and that $\alpha_K(\cdot)$ is non-increasing. In \cite{DS21-weak-learning}, De and Servedio established the following \emph{quantitative} lower bound on the rate of decay of $\alpha_K(\cdot)$: \begin{theorem} [Theorem~12 of~\cite{DS21-weak-learning}] \label{thm:DS21-weak-learning} Let $K \subseteq \mathbb{R}^n$ be a convex body that has in-radius $r_{\mathsf{in}}>0$. Then for $r>r_{\mathsf{in}}$ such that $\min \{\alpha_K(r), (1-\alpha_K(r))\} \ge e^{-n/4}$, as $\Delta r \rightarrow 0^+$ we have that \[ \alpha_K(r -\Delta r) - \alpha_K(r) \ge \Omega \bigg(\frac{r_{\mathsf{in}} \cdot \sqrt{n} \cdot \Delta r}{r^2} \bigg) \alpha_K(r) (1-\alpha_K(r)). \] \end{theorem} As alluded to in Item~1 of \Cref{sec:intro}, the above result can be used to obtain a Kruskal-Katona type theorem for centrally symmetric convex sets. In particular, we have the following corollary: \begin{corollary} \label{corr:DS21-weak-learning} Let $K \subseteq \mathbb{R}^n$ be a symmetric convex set and $r = \Theta(\sqrt{n})$ be such that $\alpha_K(r) \in [1/10, 9/10]$. Then, as $\varepsilon \rightarrow 0^+$, we have that \[ \alpha_K(r(1-\varepsilon)) - \alpha_K(r) = \Omega(\varepsilon). \] \end{corollary} \begin{proof} Let $r_{\mathsf{in}}$ denote the in-radius of $K$, so for any $\zeta>0$, there is a point $z_\ast$ such that $z_\ast \not \in K$ and $\Vert z_\ast \Vert_2 = r_{\mathsf{in}} + \zeta$. 
By the separating hyperplane theorem, it follows that there is a unit vector $\hat{v} \in \mathbb{R}^n$ such that \begin{equation}~\label{eq:inclusion-1} K \subseteq K_{\ast} \coloneqq \{x \in \mathbb{R}^n: |\hat{v} \cdot x| \le r_{\mathsf{in}}+ \zeta\}. \end{equation} We next upper bound $\alpha_{K_\ast}(r)$. For this, without loss of generality, we may assume that $\hat{v}=e_1$. We have \[ \alpha_{K_\ast}(r) = \Prx_{y \in \mathbb{S}^{n-1}_r} [|y_1| \le r_{\mathsf{in}}+ \zeta] \leq O \bigg(\frac{(r_{\mathsf{in}} +\zeta) \cdot \sqrt{n}}{r}\bigg), \] where the upper bound is an easy consequence of well-known concentration of measure results for the $n$-dimensional unit sphere. Now, using \eqref{eq:inclusion-1} and letting $\zeta \rightarrow 0$, we have \[ \alpha_K(r) \le \alpha_{K_\ast}(r) \leq O \bigg(\frac{r_{\mathsf{in}}\cdot \sqrt{n}}{r}\bigg). \] Since $\alpha_K(r) \ge 0.1$ by assumption, it follows that $r_{\mathsf{in}} = \Omega(1)$. \Cref{corr:DS21-weak-learning} now follows from~\Cref{thm:DS21-weak-learning}. \end{proof} \medskip \noindent {\bf A Robust Analogue of Kruskal-Katona.} The lower bound given by \Cref{corr:DS21-weak-learning} cannot be improved in general; for example, the convex set $K= \Dict_{e_1} \coloneqq \{x: |x_1| \le 1\}$ satisfies the conditions of \Cref{corr:DS21-weak-learning} and has \[ \alpha_{\Dict_{e_1}}(r(1-\varepsilon)) - \alpha_{\Dict_{e_1}}(r) = \Theta(\varepsilon) \] for $r=\Theta(\sqrt{n}).$ This is closely analogous to how the $\Omega(1/n)$ density increment of the original Kruskal-Katona theorem for monotone Boolean functions (recall Item~1 of \Cref{sec:intro}) cannot be improved in general because of functions like the Boolean dictator function $f(x)=x_1$. However, if ``large single-coordinate influences'' are disallowed then stronger forms of the Kruskal-Katona theorem, with larger density increments, hold for monotone Boolean functions. 
In particular, O'Donnell and Wimmer proved the following ``robust'' version of the Kruskal-Katona theorem: \begin{theorem} [Theorem~1.3 of~\cite{OWimmer:09}] \label{thm:KK-stability} Let $f: \{\pm 1\}^n \rightarrow \{0,1\}$ be a monotone function and let $n/10 \le j \le 9n/10.$ If $1/10 \leq \mu_j(f) \le 9/10$ and it holds for all $i\in[n]$ that \begin{equation}~\label{eq:low-influence-cond} \bigg|\Prx_{\bx \sim \binom{[n]}{j}} [f(\bx) = 1 | \bx_i = 1] -\Prx_{\bx \sim \binom{[n]}{j}} [f(\bx) = 1 | \bx_i = -1] \bigg| \le \frac{1}{n^{1/10}}, \end{equation} then $$ \mu_{j+1} (f) \ge \mu_j(f) + \Omega \bigg( \frac{\log n}{n} \bigg).$$ \end{theorem} In words, under condition \Cref{eq:low-influence-cond} (which is akin to saying that each variable $x_i$ has ``low influence on $f$''), the much larger density increment $\Omega(\log(n)/n)$ must hold. Using our notion of convex influences, we now establish a robust version of \Cref{corr:DS21-weak-learning} which is similar in spirit to the Boolean ``robust Kruskal-Katona'' result given by \Cref{thm:KK-stability}. Intuitively, our result says that if all convex influences are small, then we get a stronger density increment than \Cref{corr:DS21-weak-learning}: \begin{theorem}~\label{thm:KK-robust-convex} Let $K \subseteq \mathbb{R}^n$ be a centrally symmetric convex set and $ \sqrt{n} \le r = \Theta( \sqrt{n})$ be such that $\alpha_K(r) \in [1/10, 9/10]$. If $\Inf_{v}[K] \le \delta$ for all $v \in \mathbb{S}^{n-1}$ then as $\varepsilon \rightarrow 0^+$ we have that \[ \alpha_K(r(1-\varepsilon)) - \alpha_K(r) = \Omega(\varepsilon \sqrt{\ln(1/\delta)}). \] \end{theorem} \begin{proofof}{Theorem~\ref{thm:KK-robust-convex}} We begin by proving that $\gamma(K) = \Theta(1)$. Note that \begin{eqnarray*} \gamma(K) = \int_{r=0}^\infty \alpha_K(r) \cdot \chi_n(r) dr\ge \int_{r=0}^{\sqrt{n}} \alpha_K(r) \cdot \chi_n(r) dr, \end{eqnarray*} where $\chi_n(\cdot)$ is the pdf of the $\chi$-distribution with $n$ degrees of freedom.
Now, since $\alpha_K(\cdot)$ is non-increasing and $\int_{r=0}^{\sqrt{n}} \chi_n(r)\, dr = \Omega(1)$, it must be the case that \begin{eqnarray}~\label{eq:lb-volume-K} \gamma(K) \ge \alpha_K(\sqrt{n}) \cdot \int_{r=0}^{\sqrt{n}} \chi_n(r) dr = \Theta(1), \end{eqnarray} where the last equality uses the fact that $\alpha_K(\sqrt{n}) \geq \alpha_K(r) \ge 1/10$ (since $r \geq \sqrt{n}$ and $\alpha_K(\cdot)$ is non-increasing). Let $r_{\mathsf{in}}$ denote the in-radius of $K$. Exactly as reasoned in the proof of~\Cref{corr:DS21-weak-learning}, for any $\zeta>0$ there exists a unit vector $v \in \mathbb{S}^{n-1}$ such that $K \subseteq \{x \in \mathbb{R}^n: |v \cdot x| \le r_{\mathsf{in}} + \zeta\}$. Since $\gamma(K)=\Omega(1)$, it now follows from \Cref{claim:small-inf-large-slab} that \[ \Inf_v[K] = \Omega (e^{-(r_{\mathsf{in}} + \zeta)^2}). \] By our hypothesis, we have that $\Inf_v[K] \le \delta$, so taking $\zeta \rightarrow 0$ we get that $r_{\mathsf{in}} = \Omega(\sqrt{\ln (1/\delta)})$ (note that we may assume $\delta$ is at most some sufficiently small constant, since otherwise the claimed result is given by~\Cref{corr:DS21-weak-learning}). We now apply~\Cref{thm:DS21-weak-learning} with $\Delta r = \varepsilon r$; since $r = \Theta(\sqrt{n})$ and $\alpha_K(r)(1-\alpha_K(r)) = \Theta(1)$, the resulting density increment is $\Omega(\varepsilon \cdot r_{\mathsf{in}})$, i.e. \[ \alpha_K(r(1-\varepsilon)) - \alpha_K(r) = \Omega(\varepsilon \sqrt{\ln(1/\delta)}), \] thus proving \Cref{thm:KK-robust-convex}. \end{proofof} \section{Omitted Proofs from \Cref{sec:influence-basics}} \label{appendix:sec-3} \subsection{Proof of \Cref{prop:influence-nonneg}} We will require the following fact about log-concave functions: \begin{fact} [Lemma~4.7 of \cite{Vempalafocs10}] \label{fact:1dlogconcave} Let $g: \R \to \R^+$ be a log-concave function such that \[\Ex_{\bx \sim \calN(0,1)}[\bx g(\bx)]=0.\] Then $\E[ \bx^2 g(\bx)] \le \E[g(\bx)],$ with equality if and only if $g$ is a constant function.
\end{fact} We will also require the following Brunn-Minkowski-type inequality over Gaussian space, as well as a recent characterization of the equality case: \begin{proposition}[Ehrhard-Borell inequality, \cite{Ehrhard:83,Borell:03,Borell:08}] \label{prop:ehrhard-borell} Let $A, B \subseteq \R^n$ be Borel sets, identified with their indicator functions. Then \begin{equation} \label{eq:ehrhard} \Phi^{-1}\pbra{\gamma_n\pbra{\lambda A + (1-\lambda)B}} \geq \lambda\Phi^{-1}\pbra{\gamma_n\pbra{A}} + (1-\lambda)\Phi^{-1}\pbra{\gamma_n\pbra{B}} \end{equation} where $\lambda A +(1-\lambda)B := \{ \lambda x+ (1-\lambda)y : x\in A, y\in B \}$ is the \emph{Minkowski sum} of $\lambda A$ and $(1-\lambda)B$. \end{proposition} \begin{proposition}[Theorem 1.2 of \cite{rvh-equality}] \label{fact:rvh-equality-ehrhard} Equality holds in the Ehrhard-Borell inequality (\Cref{eq:ehrhard}) if and only if either \begin{itemize} \item $A$ and $B$ are parallel halfspaces, i.e. we have \[A = \{ x : \abra{a, x} + b_1 \geq 0 \} \qquad\text{and}\qquad B = \{ x : \abra{a, x} + b_2 \geq 0 \}\] for some $a \in \R^n$, and $b_1, b_2 \in \R$; or \item $A$ and $B$ are convex sets with $A = B$. \end{itemize} \end{proposition} \begin{proof}[Proof of \Cref{prop:influence-nonneg}] Without loss of generality, let $v = e_1$. We have \begin{align} \Inf_{e_1}[K] = -\wt{K}(2e_1) &= \Ex_{\bx\sim\calN(0,1)^n}\sbra{-K(\bx)h_2(\bx_1)}\nonumber\\ &= \Ex_{\bx_1\sim\calN(0,1)}\sbra{-\pbra{\underbrace{\Ex_{(\bx_2, \ldots, \bx_n)\sim\calN(0,1)^{n-1}}\sbra{K(\bx_1, \ldots, \bx_n)}}_{=: K_{e_1}(\bx_1)}}h_2(\bx_1)} \label{num:cucumber}\\ & = -\wt{K_{e_1}}(2), \nonumber \end{align} where $K_{e_1} : \R \to [0,1]$ is the univariate marginal function defined by the underbraced expression above.
From \Cref{fact:marginal-log-concave}, it follows that $K_{e_1}$ is log-concave, and $K_{e_1}$ is symmetric since $K$ is symmetric, so $\E_{\bx_1 \sim \calN(0,1)}[\bx_1 K_{e_1}(\bx_1)]=0$. Hence, using the fact that $h_2(x_1) = (x_1^2-1)/\sqrt{2}$, we get that \[ \Inf_{e_1}[K] = -\Ex_{\bx_1 \sim \calN(0,1)}[h_2(\bx_1) K_{e_1}(\bx_1)] = {\frac 1 {\sqrt{2}}} \cdot \Ex_{\bx_1 \sim \calN(0,1)}\big[K_{e_1}(\bx_1)(1-\bx_1^2) \big] \ge 0, \] where the inequality is by~\Cref{fact:1dlogconcave}. Next, we move to the characterization of $\Inf_{e_1}[K]=0$. Note that if $K(x) = K(y)$ whenever $x_{e_1^\perp} = y_{e_1^\perp}$ (i.e. $K(x) = K(y)$ whenever $(x_2, \ldots, x_n)= (y_2, \ldots, y_n)$), then the function $K$ does not depend on the variable $x_1$. This lets us re-express \Cref{num:cucumber} as \[ \Ex_{\bx_1\sim\calN(0,1)}[-h_2(\bx_1)] \Ex_{(\bx_2, \ldots, \bx_n)\sim\calN(0,1)^{n-1}}[K(\cdot, \bx_2, \ldots, \bx_n)]. \] As the first term in the above product is zero, we conclude that $\Inf_{e_1}[K]=0$. To see the reverse direction, suppose $\Inf_{e_1}[K]=0$. From \Cref{num:cucumber} and~\Cref{fact:1dlogconcave}, it follows that $K_{e_1}(\cdot)$ is a constant function. Now, for any $\alpha \in \mathbb{R}$, define $K_{\alpha} \subset \R^{n-1}$ as follows: \[ K_\alpha :=\{ (x_2, \ldots, x_n) : (\alpha, x_2, \ldots, x_n) \in K\}.\] Thus, $K_{\alpha}$ is the convex set obtained by intersecting $K$ with the affine plane $\{x \in \R^n: x_1=\alpha\}$. Observe that $K_{e_1}(\alpha)$ is the $(n-1)$-dimensional Gaussian measure of $K_{\alpha}$. Let $K_\alpha^* := {{\frac 1 2}}(K_{\alpha} + K_{-\alpha})$. Note that $K_\alpha^* \subseteq \R^{n-1}$ is a centrally symmetric, convex set, and that $K_\alpha^* \subseteq K_0$ because of convexity. By the Ehrhard-Borell inequality, we have \[ \Phi^{-1}\pbra{\gamma_{n-1}\pbra{\frac{K_\alpha + K_{-\alpha}}{2}}} \geq \frac{1}{2}\Phi^{-1}\pbra{\gamma_{n-1}(K_\alpha)} + \frac{1}{2}\Phi^{-1}\pbra{\gamma_{n-1}(K_{-\alpha})}.
\] However, $\gamma_{n-1}(K_\alpha) = \gamma_{n-1}(K_{-\alpha})$ because $K$ is centrally symmetric, so it follows that $\gamma_{n-1}(K_\alpha^*) \geq \gamma_{n-1}(K_\alpha)$. From our earlier observation that $K_\alpha^* \subseteq K_0$ and that $K_{e_1}(\cdot)$ is constant (which implies $\gamma_{n-1}(K_\alpha) = \gamma_{n-1}(K_0)$), it follows that $K_0 = K_\alpha^*$ up to a set of measure zero. In other words, we have equality in the application of the Ehrhard-Borell inequality above, and so by \Cref{fact:rvh-equality-ehrhard}, we must have $K_\alpha = K_{-\alpha}$ (since $K_\alpha$ and $K_{-\alpha}$ cannot be parallel halfspaces). Consequently, {up to a set of measure zero, we have that} \[K_0 = K_\alpha^* = \frac{K_\alpha + K_{-\alpha}}{2} = \frac{K_\alpha + K_{\alpha}}{2} = K_\alpha.\] As this is true for all $\alpha \in \R$, it follows that {up to a set of measure zero,} $K(x) = K(y)$ if $(x_2, \ldots, x_n) = (y_2, \ldots, y_n)$ (where we used the fact that $K(x) = K_{x_1}(x_2, \ldots, x_n)$). \end{proof} \subsection{Proof of \Cref{claim:small-inf-big-inradius}} We will use the following Brascamp--Lieb-type inequality. \begin{lemma} [Final assertion of Lemma~4.7 of \cite{Vempalafocs10}] \label{lem:vempala} If $g: \R \to \R_{\geq 0}$ is log-concave and symmetric and supported in $[-c,c]$, then \[ {\frac {\int_{-c}^c x^2 e^{-x^2 / 2} g(x) dx}{\int_{-c}^c e^{-x^2 / 2} g(x) dx}} \leq 1 - {\frac 1 {2 \pi}} e^{-c^2}. \] \end{lemma} We use this in the proof of the following claim, which will easily yield \Cref{claim:small-inf-big-inradius}: \begin{proposition} \label{claim:small-inf-large-slab} Let $K \subseteq \R^n$ be a centrally symmetric convex set with $\gamma(K) \geq \Delta$, and let $v \in \mathbb{S}^{n-1}$ be a unit vector such that $K \subseteq \{x \in \R^n: |v \cdot x| \leq c\}$. 
Then we have \[\Inf_v[K] \geq \frac{\Delta e^{-c^2}}{2^{3/2} \pi}.\] \end{proposition} \begin{proof}[Proof of \Cref{claim:small-inf-large-slab}] For ease of notation, we take $v=e_1$ and so $K \subseteq \{x \in \R^n: |x_1| \leq c\}.$ From \Cref{eq:inf-averaging-no-change,eq:inf-averaging-def}, we have that \begin{equation} \label{eq:applesauce} \Inf_v[K] = \Inf_{e_1}[K] = \TInf\sbra{K_{e_1}} = {\frac 1 {2 \sqrt{\pi}}} \int_\R K_{e_1}(x)(1-x^2) e^{-x^2/2}\,dx \end{equation} where $K_{e_1} : \R \to [0,1]$ is the symmetric log-concave function given by \[K_{e_1}(x) := \Ex_{\bx\sim\calN(0,1)^{n-1}}\sbra{K(x, \bx_2, \ldots, \bx_n)}.\] As $K(x)=0$ when $|x_1|>c$ we have that $\supp(K_{e_1}) \sse [-c, c]$ and so it follows from \Cref{eq:applesauce} that \begin{align} \Inf_v[K] = {\frac 1 {2 \sqrt{\pi}}} \int_{-c}^c K_{e_1}(x)(1-x^2) e^{-x^2/2}\, dx \label{eq:orange}. \end{align} It follows then from \Cref{lem:vempala} that \[ \Inf_v[K] \geq {\frac 1 {2^{3/2} \pi}} \pbra{\frac{e^{-c^2}}{\sqrt{2 \pi}} \int_{-c}^c K_{e_1}(x) e^{-x^2 / 2}\, dx} = {\frac {e^{-c^2} \gamma(K)} {2^{3/2} \pi}} \geq \frac{\Delta e^{-c^2}}{2^{3/2} \pi}, \] where the final inequality uses $\gamma(K) \geq \Delta$; this completes the proof of \Cref{claim:small-inf-large-slab}. \end{proof} \begin{proof}[Proof of \Cref{claim:small-inf-big-inradius}] By definition of the in-radius and the supporting hyperplane theorem, there must exist some unit vector $\hat{v} \in \mathbb{R}^n$ such that \begin{equation}~\label{eq:inclusion-0} K \subseteq K_{\ast} \coloneqq \{x \in \mathbb{R}^n: |\hat{v} \cdot x| \le r_{\mathsf{in}}\}, \nonumber \end{equation} and hence by \Cref{claim:small-inf-large-slab} we get that \[ \Inf_{\hat{v}}[K] \geq {\frac {\gamma(K) e^{-r_{\mathsf{in}}^2}}{2^{3/2} \pi}} \geq {\frac {\Delta e^{-r_{\mathsf{in}}^2}}{2^{3/2} \pi}}, \] giving \Cref{claim:small-inf-big-inradius} as claimed (note that since $\Var[K] \leq \gamma(K)$, the same lower bound also holds with $\Var[K]$ in place of $\Delta$). \end{proof} \subsection{Proof of \Cref{prop:russo-margulis-convex}} This is an elementary calculation using the chain rule and the Leibniz rule. \begin{proof}[Proof of \Cref{prop:russo-margulis-convex}] By the chain rule, we have \begin{align*} \frac{d}{d\sigma^2}\Ex_{\bx\sim\calN(0,\sigma^2)^n}\sbra{K(\bx)} &= \frac{d}{d\sigma^2}\pbra{\frac{1}{\sqrt{(2\pi\sigma^2)^n}}\underbrace{\int_{K}\exp\pbra{-\frac{\|x\|^2}{2\sigma^2}}\,dx}_{=: A(\sigma^2)}}\\ &= A(\sigma^2)\frac{d}{d\sigma^2}\pbra{\frac{1}{\sqrt{(2\pi\sigma^2)^n}}} + \frac{1}{\sqrt{(2\pi\sigma^2)^n}}\pbra{\frac{d}{d\sigma^2}A(\sigma^2)}\\ &= \frac{-nA(\sigma^2)}{2\sigma^2\sqrt{(2\pi \sigma^2)^n}} + \frac{1}{\sqrt{(2\pi\sigma^2)^n}}\pbra{\frac{d}{d\sigma^2}A(\sigma^2)}\\ &= -\frac{n\Ex_{\bx\sim\calN(0,\sigma^2)^n}\sbra{K(\bx)}}{2\sigma^2} + \frac{1}{\sqrt{(2\pi\sigma^2)^n}}\pbra{\frac{d}{d\sigma^2}A(\sigma^2)}.
\end{align*} Now, using the dominated convergence theorem to commute integration and differentiation, we get \begin{align*} \frac{1}{\sqrt{(2\pi\sigma^2)^n}}\pbra{\frac{d}{d\sigma^2}A(\sigma^2)} &= \frac{1}{\sqrt{(2\pi\sigma^2)^n}}\pbra{\frac{d}{d\sigma^2}\int_{K}\exp\pbra{-\frac{\|x\|^2}{2\sigma^2}}\,dx}\\ &= \frac{1}{\sqrt{(2\pi\sigma^2)^n}}\int_{K}\frac{d}{d\sigma^2}\exp\pbra{-\frac{\|x\|^2}{2\sigma^2}}\,dx\\ &= \frac{1}{\sqrt{(2\pi\sigma^2)^n}}\int_{K}\exp\pbra{-\frac{\|x\|^2}{2\sigma^2}}\frac{\|x\|^2}{2\sigma^4}\,dx\\ &= \frac{\Ex_{\bx\sim\calN(0,\sigma^2)^n}\sbra{K(\bx)\pbra{\frac{\|\bx\|^2}{\sigma^2}}}}{2\sigma^2}. \end{align*} This in turn implies that \begin{align*} \frac{d}{d\sigma^2}\Ex_{\bx\sim\calN(0,\sigma^2)^n}\sbra{K(\bx)} &= -\frac{n\Ex_{\bx\sim\calN(0,\sigma^2)^n}\sbra{K(\bx)}}{2\sigma^2} + \frac{\Ex_{\bx\sim\calN(0,\sigma^2)^n}\sbra{K(\bx)\pbra{\frac{\|\bx\|^2}{\sigma^2}}}}{2\sigma^2}\\ &= \frac{1}{\sigma^2\sqrt{2}}\Ex_{\bx\sim\calN(0,\sigma^2)^n}\sbra{K(\bx)\pbra{\frac{\pbra{\frac{\|\bx\|^2}{\sigma^2}} - n}{\sqrt{2}}}}\\ &= \frac{1}{\sigma^2\sqrt{2}}\sum_{i=1}^n\Ex_{\bx\sim\calN(0,\sigma^2)^n} \sbra{K(\bx)\pbra{\frac{\pbra{\frac{\bx_i}{\sigma}}^2 - 1}{\sqrt{2}}}}\\ &= \frac{1}{\sigma^2\sqrt{2}}\sum_{i=1}^n \wt{K}_\sigma(2e_i) \end{align*} where we used the fact that $h_2(x) = \frac{x^2 - 1}{\sqrt{2}}$. \end{proof} \section{Influences for Symmetric Convex Sets} \label{sec:influence-basics} In this section, we first introduce our new notion of influence for symmetric convex sets over Gaussian space and establish some basic properties. In \Cref{subsec:influence-examples} we analyze the influences of several natural symmetric convex sets, and in \Cref{subsec:russo-margulis} we give an analogue of the Margulis-Russo formula (characterizing the influences of monotone Boolean functions) which provides an alternative equivalent view of our new notion of influence for symmetric convex sets in terms of the behavior of the sets under dilations. 
We characterize the symmetric convex sets which have extremal max influence and total influence in \Cref{subsec:extremal}. Finally, in \Cref{subsec:other-notions-influence}, we compare our new notion of influence with some previously studied influence notions over Gaussian space. \subsection{Definitions and Basic Properties} \label{sec:basics} \begin{definition}[Influence for symmetric log-concave functions] \label{def:csc-influence} Let $f \in L^2\pbra{\R^n, \gamma}$ be a symmetric (i.e. $f(x) = f(-x)$) log-concave function. Given a unit vector $v \in \mathbb{S}^{n-1}$, we define the \emph{influence of direction $v$ on $f$} as being \[\Inf_v[f] := -\widetilde{f}(2v) = \Ex_{\bx \sim \calN(0,1)^n}\sbra{-f(\bx) h_{2}(v \cdot \bx)} = \Ex_{\bx \sim \calN(0,1)^n}\sbra{f(\bx) \cdot \pbra{\frac {1 - (v \cdot \bx)^2}{\sqrt{2}}}},\] the negated ``degree-2 Hermite coefficient in the direction $v$.'' Furthermore, we define the \emph{total influence of $f$} as \[\TInf[f] := \sum_{i=1}^n \Inf_{e_i}[f].\] \end{definition} Note that the indicator of a symmetric convex set is a symmetric log-concave function, and this is the setting that we will be chiefly interested in. The following proposition (which first appeared in \cite{DNS20}, and a proof of which can be found in \Cref{appendix:sec-3}) shows that these new influences are indeed ``influence-like.'' An arguably simpler argument for the non-negativity of influences is presented in \Cref{subsec:russo-margulis}. \begin{proposition}[Influences are non-negative] \label{prop:influence-nonneg} If $K$ is a centrally symmetric, convex set, then $\Inf_v[K] \geq 0$ for all $v\in \mathbb{S}^{n-1}$. Furthermore, equality holds if and only if $K(x) = K(y)$ whenever $x_{v^\perp} = y_{v^\perp}$ (i.e. the projection of $x$ orthogonal to $v$ coincides with that of $y$) almost surely. 
\end{proposition} We note that the total influence of a symmetric, convex set $K$ is independent of the choice of basis; indeed, we have \begin{equation} \label{eq:total-inf-def} \TInf[K] = \Ex_{\bx\sim\calN(0,1)^n}\sbra{K(\bx)\pbra{\frac {n - \|\bx\|^2}{\sqrt{2}}}} \end{equation} which is invariant under orthogonal transformations. Hence any orthonormal basis $\{v_1, \ldots, v_n\}$ could have been used in place of $\{e_1, \ldots, e_n\}$ in defining $\TInf[K]$. We note that (as is shown in the proof of \Cref{prop:influence-nonneg}), the influence of a fixed coordinate is not changed by averaging over some set of other coordinates: \begin{fact} \label{obs:influence-averaging} Let $K \sse \R^n$ be a symmetric, convex set, and define the log-concave function $K_{e_i}: \R \to [0,1]$ as \begin{equation} \label{eq:inf-averaging-def} K_{e_i}(x) := \Ex_{\bx\sim\calN(0,1)^{n-1}}\sbra{K(\bx_1, \ldots, \bx_{i-1}, x,\bx_{i+1}, \ldots, \bx_n)}. \end{equation} Then we have \begin{equation} \label{eq:inf-averaging-no-change} \Inf_{e_i}[K] = \Inf_{e_1}[K_{e_i}] = \TInf[K_{e_i}]. \end{equation} \end{fact} We conclude with the following useful relationship between the in-radius of a symmetric convex set $K$ and its max influence along any direction. \Cref{claim:small-inf-big-inradius} is proved in \Cref{appendix:sec-3}. \begin{proposition} \label{claim:small-inf-big-inradius} Let $K \subseteq \R^n$ be a centrally symmetric convex set with $\gamma(K) \geq \Delta$, and let $r_{\mathsf{in}}=r_{\mathsf{in}}(K)$ be the in-radius of $K$. Then there is some direction $v \in \mathbb{S}^{n-1}$ such that \[\Inf_v[K] \geq \frac{\Delta e^{-r_{\mathsf{in}}^2}}{2^{3/2} \pi}.\] \end{proposition} \subsection{Influences of Specific Symmetric Convex Sets} \label{subsec:influence-examples} In this subsection we consider some concrete examples by analyzing the influences of a few specific symmetric convex sets, namely ``slabs'', balls, and cubes.
As we will see, these are closely analogous to well-studied monotone Boolean functions (dictator, Majority, and Tribes, respectively). \begin{example}[Analogue of Boolean dictator: a ``slab''] \label{eg:dict} Given a vector $w \in \R^n$, define $\Dict_{w} := \cbra{ x \in \R^n : \abs{\abra{x, w}} \leq 1 }$. As suggested by the notation, this is the analogue of a single Boolean variable $f(x) = x_i$, i.e.~a ``dictatorship.'' For simplicity, suppose $w := \frac{1}{c}\cdot e_1$ for some $c > 0$, i.e. $\Dict_{w} = \cbra{x \in \R^n : |x_1| \leq c}$. We then have \[\Inf_{e_i}\sbra{\Dict_{w}} = \begin{cases} \Theta\pbra{c\cdot\exp\pbra{-c^2/2}} & i = 1\\ 0 & i \neq 1 \end{cases}.\] (Indeed, since $(1-x^2)e^{-x^2/2} = \frac{d}{dx}\pbra{x e^{-x^2/2}}$, a direct computation gives $\Inf_{e_1}\sbra{\Dict_w} = \frac{1}{\sqrt{2}}\,\Ex_{\bx_1\sim\calN(0,1)}\sbra{\mathbf{1}\cbra{|\bx_1| \leq c}\pbra{1-\bx_1^2}} = \frac{c\, e^{-c^2/2}}{\sqrt{\pi}}$, while for $i \neq 1$ the set does not depend on $x_i$.) Note that while in the setting of the Boolean hypercube there is only one ``dictatorship'' for each coordinate, in our setting given a particular direction we can have ``dictatorships'' of varying widths and volumes. \end{example} \begin{example}[Analogue of Boolean Majority: a ball] \label{eg:ball} Let $B_r := \cbra{ x \in \R^n : \|x\|_2 \leq r }$ denote the ball of radius $r$.
Analogous to the Boolean majority function, we argue that for $B=B_{\sqrt{n}}$ we have that $\Inf_{e_i}\sbra{B}=\Theta(1/\sqrt{n})$ for all $i \in [n].$ Recall from \Cref{eq:total-inf-def} that \[\TInf\sbra{B} = \frac{1}{\sqrt{2}} \Ex_{\bx\sim\calN(0,1)^n}\sbra{B(\bx)\pbra{n - \|\bx\|^2}}.\] By the Berry-Esseen Central Limit Theorem (see \cite{berry,esseen} or, for example, Section~11.5 of \cite{ODbook}), recalling that $\Var_{\bg\sim\calN(0,1)}[\bg^2] = 2$, we have that for $t \in \R$, \begin{equation} \label{eq:berry-essen-approx} \left|\Prx_{\bx\sim\calN(0,1)^n}\sbra{\frac{\|\bx\|^2 - n}{\sqrt{2n}} \leq t} - \Prx_{\by \sim\calN(0,1)}\sbra{\by \leq t}\right| \leq {\frac{c}{\sqrt{n}}} \nonumber \end{equation} for some absolute constant $c$. In particular, taking $t = -1/\sqrt{2}$, this implies that for $n$ larger than some absolute constant, \[\Prx_{\bx\sim\calN(0,1)^n}\sbra{\|\bx\|^2 \leq n - \sqrt{n}} \geq \Prx_{\by \sim {\cal N}(0,1)}\sbra{\by \leq -\tfrac{1}{\sqrt{2}}} - \frac{c}{\sqrt{n}} \geq 0.15.\] Since $\Pr_{\bx \sim {\cal N}(0,1)^n}[B(\bx)=1] = \frac12 \pm o_n(1),$ and $B(x)(n-\|x\|^2)$ is never negative, it follows that \[\Ex_{\bx\sim\calN(0,1)^n}\sbra{B(\bx)\pbra{n - \|\bx\|^2}} = \Omega\pbra{\sqrt{n}},\] from which it follows by symmetry that $\Inf_{e_i}\sbra{B} = \Omega\pbra{\frac{1}{\sqrt{n}}}$ for all $i \in [n].$ The upper bound $\Inf_{e_i}\sbra{B} = O\pbra{\frac{1}{\sqrt{n}}}$ follows from Parseval's identity (since $\sum_{i=1}^n \wt{B}(2e_i)^2 \leq \Ex\sbra{B(\bx)^2} \leq 1$ and all $n$ influences are equal). \end{example} Our last example is analogous to the ``Tribes CNF'' function introduced by Ben-Or and Linial \cite{BenOrLinial:85short} (alternatively, see Definition 2.7 of \cite{ODbook}): \begin{example}[Analogue of Boolean $\Tribes$: a cube] \label{eg:solid-cube} Let $C_r := \cbra{ x \in \R^n : |x_i| \leq r \text{ for all } i \in [n]}$ denote the axis-aligned cube of side-length $2r$, with $r$ chosen so that $\gamma(C_{r}) = \frac{1}{2}$; i.e.
let $r > 0$ be the unique value such that \begin{equation} \label{eq:bbb}\Prx_{\bg\sim\calN(0,1)}\sbra{|\bg| \leq r} = \pbra{\frac{1}{2}}^{1/n} = 1 - \frac{\Theta(1)}{n}.\end{equation} By standard tail bounds on the Gaussian distribution, we have that $r = \Theta(\sqrt{\log n})$. Because of the symmetry of $C_r$, we have $\Inf_{e_i}[C_{r}] = \Inf_{e_j}[C_{r}]$ for all $i, j \in [n]$. Note, however, that we can write \[C_{r}(x) = \prod_{i=1}^n \Dict_{1/r}(x_i)\] where $\Dict_{1/r} : \R \to \zo$ is as defined in \Cref{eg:dict}. By considering the Hermite representation of $C_r(x),$ it is easy to see that \[\Inf_{e_i}[C_r] = \Ex_{\bg\sim\calN(0,1)}\sbra{\Dict_{1/r}(\bg)}^{n-1}\TInf\sbra{\Dict_{1/r}}.\] By our choice of $r$ above, we have $\E\sbra{\Dict_{1/r}} = \sqrt[n]{1/2}$ and so \[ \Ex_{\bg\sim\calN(0,1)}\sbra{\Dict_{1/r}(\bg)}^{n-1} = \Theta(1). \] From \Cref{eg:dict}, we know $\TInf\sbra{\Dict_{1/r}} = \Theta\pbra{r e^{-r^2/2}}$, and so we have \begin{equation} \label{eq:aaa}\Inf_{e_i}[C_r] = \Theta\pbra{re^{-r^2/2}}. \end{equation} We now recall the following tail bound on the normal distribution (see Theorem~1.2.6 of \cite{durrett_2019} or Equation~2.58 of \cite{TAILBOUND}): \begin{equation} \label{eq:normal-tail} \varphi(r) \left({\frac 1 r} - {\frac 1 {r^3}} \right) \leq \Prx_{\bg \sim N(0,1)}[\bg \geq r] \leq \varphi(r) \left({\frac 1 r} - {\frac 1 {r^3}} + {\frac 3 {r^5}}\right), \end{equation} where $\varphi(r) = {\frac 1 {\sqrt{2 \pi}}} e^{-r^2/2}$ is the density function of $N(0,1)$. Combining \Cref{eq:bbb}, \Cref{eq:aaa} and \Cref{eq:normal-tail} we get that $ \Inf_{e_i}[C_r] = \Theta(r^2) \cdot \Prx_{\bg \sim N(0,1)}[\bg \geq r] = \Theta(\log(n)) \cdot \Theta(1/n), $ which corresponds to the influence of each individual variable on the Boolean ``tribes'' function. 
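For completeness, we include a sketch of the short computation behind the product formula used above (this is the standard independence argument that the text leaves implicit): since $C_r(x) = \prod_{i=1}^n \Dict_{1/r}(x_i)$ and the coordinates of $\bx \sim \calN(0,1)^n$ are independent, \begin{align*} \Inf_{e_i}[C_r] = \Ex_{\bx\sim\calN(0,1)^n}\sbra{-C_r(\bx)h_2(\bx_i)} &= \pbra{\prod_{j \neq i}\Ex_{\bx_j\sim\calN(0,1)}\sbra{\Dict_{1/r}(\bx_j)}}\cdot \Ex_{\bx_i\sim\calN(0,1)}\sbra{-\Dict_{1/r}(\bx_i)h_2(\bx_i)}\\ &= \Ex_{\bg\sim\calN(0,1)}\sbra{\Dict_{1/r}(\bg)}^{n-1}\cdot\TInf\sbra{\Dict_{1/r}}, \end{align*} where the last step uses the fact that $\Dict_{1/r}$ is univariate, so $\TInf\sbra{\Dict_{1/r}} = -\wt{\Dict_{1/r}}(2)$.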
\end{example} \subsection{Margulis-Russo for Convex Influences: An Alternative Characterization of Influences via Dilations} \label{subsec:russo-margulis} In this subsection we give an alternative view of the notion of influence defined above, in terms of the behavior of the Gaussian measure of the set as the variance of the underlying Gaussian is changed.\footnote{Since $\gamma_\sigma(K) = \gamma(K/\sigma)$, decreasing (respectively increasing) the variance of the underlying Gaussian measure is equivalent to dilating (respectively shrinking) the set.} This is closely analogous to the Margulis-Russo formula for monotone Boolean functions on $\bn$ (see \cite{Russo:81, Margulis:74} or Equation~(8.9) in \cite{ODbook}), which relates the derivative (with respect to $p$) of the $p$-biased measure of a monotone function $f$ to the $p$-biased total influence of $f$. We start by defining $\sigma$-biased convex influences, which are analogous to $p$-biased influences from Boolean function analysis (see Section~8.4 of \cite{ODbook}). \begin{definition}[$\sigma$-biased influence] \label{def:sigma-biased-influence} Given a centrally symmetric convex set $K \sse \R^n$, we define the \emph{$\sigma$-biased influence of direction $v$ on $K$} as being \[\Inf^{(\sigma)}_v[K] := -\widetilde{K}_\sigma(2v) = \Ex_{\bx \sim \calN(0,\sigma^2)^n}\sbra{-K(\bx) h_{2, \sigma}(v \cdot \bx)},\] the negated degree-2 $\sigma$-biased Hermite coefficient in the direction $v$. We further define the \emph{$\sigma$-biased total influence of $K$} as \[\TInf^{(\sigma)}[K] := \sum_{i=1}^n \Inf_{e_i}^{(\sigma)}[K].\] \end{definition} The proof of the following proposition, which asserts that the rate of change of the Gaussian measure of a symmetric convex set $K$ with respect to $\sigma^2$ is (up to scaling) equal to the $\sigma$-biased total influence of $K$, is deferred to \Cref{appendix:sec-3}.
We note that this relation was essentially known to experts (see e.g.~\cite{s-inequality}), though we are not aware of a specific place where it appears explicitly in the literature. \begin{proposition}[Margulis-Russo for symmetric convex sets] \label{prop:russo-margulis-convex} Let $K \sse \R^n$ be a centrally symmetric convex set. Then for any $\sigma > 0$ we have \[\frac{d}{d\sigma^2} \E_{\bx\sim\calN(0,\sigma^2)^n}\sbra{K(\bx)} = \frac{-\TInf^{(\sigma)}[K]}{\sigma^2\sqrt{2}} = \frac{-1}{\sigma^2\sqrt{2}}\sum_{i=1}^n \Inf^{(\sigma)}_{e_i}[K].\] \end{proposition} Note that decreasing (respectively increasing) the variance of the background Gaussian measure is equivalent to dilating (respectively shrinking) the symmetric convex set while keeping the background measure fixed; this lets us write \begin{equation} \label{eq:russo-margulis-dilations} \TInf[K] ={\frac 1 {\sqrt{2}}} \lim_{\delta\to0}\frac{\gamma_n(K) - \gamma_n\pbra{(1-\delta)K}}{\delta} \end{equation} for a symmetric convex $K\sse\R^n$.
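For intuition, we sketch how \Cref{eq:russo-margulis-dilations} follows from \Cref{prop:russo-margulis-convex} (this is a first-order computation; we do not spell out the routine step of exchanging the $O(h^2)$ correction with the limit). Evaluating \Cref{prop:russo-margulis-convex} at $\sigma^2 = 1$ gives $\TInf[K] = -\sqrt{2}\,\frac{d}{d\sigma^2}\Ex_{\bx\sim\calN(0,\sigma^2)^n}\sbra{K(\bx)}\big|_{\sigma^2=1}$. Since $\Ex_{\bx\sim\calN(0,1+h)^n}\sbra{K(\bx)} = \gamma_n\pbra{K/\sqrt{1+h}}$ and $1/\sqrt{1+h} = 1 - h/2 + O(h^2)$, we get \[ \TInf[K] = -\sqrt{2}\,\lim_{h\to0}\frac{\gamma_n\pbra{(1-h/2)K} - \gamma_n(K)}{h} = {\frac 1 {\sqrt{2}}}\lim_{\delta\to0}\frac{\gamma_n(K) - \gamma_n\pbra{(1-\delta)K}}{\delta}, \] where the last step substitutes $\delta = h/2$.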
We also note that \Cref{prop:russo-margulis-convex} easily extends to the following coordinate-by-coordinate version (which also admits a similar description in terms of dilations): \begin{proposition}[Coordinate-wise Margulis-Russo] \label{prop:coordinate-russo-margulis} Let $K \sse \R^n$ be a centrally symmetric convex set. Then for any $\sigma > 0$, we have \[ {\frac {d}{d\sigma_i^2}} \Ex_{\substack{\bx_i \sim \calN(0,\sigma_i^2)\\j\neq i \ :\ \bx_j\sim\calN(0,\sigma^2)}} \sbra{K(\bx)} \bigg|_{\sigma_i^2 = \sigma^2} = \frac{-1}{\sigma^2\sqrt{2}}\Inf^{(\sigma)}_{e_i}[K]. \] \end{proposition} In particular, we have \[\Inf_{e_i}[K] = -\sqrt{2}\frac{d}{d\sigma^2} \Ex_{\substack{\bx_i \sim \calN(0,\sigma^2)\\j\neq i \ :\ \bx_j\sim\calN(0,1)}} \sbra{K(\bx)} \bigg|_{\sigma^2 = 1}.\] Note that decreasing the variance of the underlying Gaussian measure along a coordinate direction cannot cause the volume of the set to decrease. It follows that $\Inf_{e_i}[K] \geq 0$ for all $i \in [n]$. \input{sections/extremal} \input{sections/influence-comparison} \section{Lower Bounds on Total Convex Influence} \label{sec:poincare-kkl} Two fundamental results on the influence of variables for Boolean functions $f: \bn \to \bits$ are the Poincar\'e inequality and the celebrated ``KKL Theorem'' of Kahn, Kalai, and Linial \cite{KKL:88}, both of which give lower bounds on total influence.
The former states that the total influence of any $f: \bn \to \bits$ is at least its variance, and has a very elementary proof (indeed it can be proved in a single line by comparing the Fourier expressions for the two quantities). The KKL theorem gives a more refined bound, showing (roughly speaking) that if all influences are small then the total influence must be somewhat large. Several proofs of the KKL Theorem are now known, using a range of different techniques such as the famous hypercontractive inequality \cite{Bon70,Bec75} (the original approach), methods of stochastic calculus \cite{EldanGross:20}, and the Log-Sobolev inequality \cite{KKKMS:21}. In this section we prove convex influence analogues of the Poincar\'{e} inequality and the KKL theorem. We use the ``$S$-inequality'' of Lata\l a and Oleszkiewicz \cite{s-inequality} to give a relatively quick proof of our Poincar\'{e} analogue, and prove our analogue of the KKL theorem using the Gaussian Isoperimetric Theorem \cite{Borell:75}. \subsection{A Poincar\'e Inequality for Symmetric Convex Sets} \label{subsec:poincare} Recall from \Cref{subsec:other-notions-influence} that our convex influence notion can be much smaller than the geometric influence defined in \cite{keller2012geometric} or the ordinary ``variance along a fiber'' influence notion of \Cref{subsubsec:varinf}. Given this, it is of interest to give \emph{lower bounds} on the total convex influence. Our first result along these lines is an analogue of the standard Poincar\'{e} inequality for Boolean functions (or more generally for functions over product domains): \begin{proposition}[Poincar\'e for symmetric convex sets] \label{prop:poincare-csc-sets} Let $K\sse\R^n$ be a symmetric convex set.
Then $\TInf[K] \geq \Omega\pbra{\Var[K]}.$ \end{proposition} The main tool we use for our proof of \Cref{prop:poincare-csc-sets} is the following celebrated result of Lata\l a and Oleskiewicz concerning the rate of growth of symmetric convex sets under dilations: \begin{proposition}[$S$-inequality, Theorem 1 of \cite{s-inequality}] \label{prop:s-inequality} Let $K\sse\R^n$ be a symmetric convex set, and let $\Dict_{w}\sse\R^n$ be a symmetric strip (i.e. $\Dict_w = \cbra{x \in \R^n : |x\cdot w| \leq 1}$ for some fixed $w \in \R^n$) such that $\gamma_n(K) = \gamma_n\pbra{\Dict_w}$. Then \[\gamma_n(tK) \geq \gamma_n\pbra{\Dict_{w/t}} \qquad \text{for } t\geq 1,\] and \[\gamma_n(tK) \leq \gamma_n\pbra{\Dict_{w/t}} \qquad \text{for } 0\leq t\leq 1.\] \end{proposition} Intuitively, the above result says that among all convex symmetric sets of a given Gaussian volume, the Gaussian volume of dictatorships (see \Cref{eg:dict}) grows the slowest under enlargement by dilations. The proof of \Cref{prop:poincare-csc-sets} combines \Cref{prop:s-inequality} with our Margulis-Russo analogue (the characterization of influence in terms of dilations given in \Cref{subsec:russo-margulis}): \medskip \begin{proofof}{\Cref{prop:poincare-csc-sets}} Write $\gamma_n(K) = \alpha$, and let $\Dict_{e_1/a} = \{x\in\R^n : |x_1| \leq a\}$ be such that $\gamma_n\pbra{\Dict_{e_1/a}} = \alpha$, i.e. $\gamma_1\pbra{[-a, a]} = \alpha$. Recall from \Cref{subsec:russo-margulis} that \begin{equation} \label{eq:basic-inf-def} \TInf[K] = {\frac 1 {\sqrt{2}}} \lim_{\delta\to0} \frac{\gamma_n(K) - \gamma_n\pbra{(1-\delta)K}}{\delta}.
\end{equation} By the $S$-inequality (\Cref{prop:s-inequality} above), for any fixed $0 < \delta \leq 1$, we have \[\gamma_n\pbra{(1-\delta)K} \leq \gamma_n\pbra{(1-\delta)\Dict_{e_1/a}} = \gamma_1\pbra{\sbra{-(1-\delta)a, (1-\delta)a}},\] which implies that \begin{align} \gamma_n(K) - \gamma_n\pbra{(1-\delta)K} &\geq \gamma_1\pbra{[-a, a]} - \gamma_1\pbra{\sbra{-(1-\delta)a, (1-\delta)a}} \nonumber \\ &= \frac{1}{\sqrt{2\pi}}\pbra{\int_{-a}^a e^{-x^2/2}\,dx - \int_{-(1-\delta)a}^{(1-\delta)a} e^{-x^2/2}\,dx} \nonumber \\ &= \sqrt{\frac{2}{\pi}}\int_{(1-\delta)a}^a e^{-x^2/2}\, dx. \label{eq:final-step-poincare} \end{align} When $\alpha \leq 1/2$, we have $\alpha \leq a \leq 1$, and clearly \begin{equation} \label{eq:poincare-case-1} \sqrt{\frac{2}{\pi}}\int_{(1-\delta)a}^a e^{-x^2/2}\, dx \geq \Omega\pbra{\delta a} \geq \Omega\pbra{\delta \alpha}. \end{equation} Combining \Cref{eq:basic-inf-def,eq:final-step-poincare,eq:poincare-case-1} implies the desired result in this case. On the other hand, when $\alpha \geq 1/2$, we have \[\sqrt{\frac{2}{\pi}}\int_{(1-\delta)a}^a e^{-x^2/2}\, dx \geq \Omega\pbra{\delta a e^{-a^2/2}}.\] Standard tail bounds on the Gaussian distribution give $ae^{-a^2/2} \geq \Omega\pbra{1 - \Phi(a) }$ when $a \geq 1$ (see, for example, Theorem 1.2.6 of \cite{durrett_2019}). It follows that if $\gamma_1\pbra{[-a, a]} = \frac{1}{\sqrt{2\pi}}\int_{-a}^a e^{-x^2/2}\,dx = \alpha$, then $ae^{-a^2/2} \geq \Omega(1-\alpha)$. In particular, we have \begin{equation} \label{eq:poincare-case-2} \sqrt{\frac{2}{\pi}}\int_{(1-\delta)a}^a e^{-x^2/2}\, dx \geq \Omega\pbra{\delta(1-\alpha)} \end{equation} and combining \Cref{eq:basic-inf-def,eq:final-step-poincare,eq:poincare-case-2} implies the desired result.
\end{proofof} \medskip \subsection{A KKL Analogue for Symmetric Convex Sets} \label{subsec:kkl} For the symmetric convex set $\Dict_{e_1}$, both the total convex influence and the variance are $\Theta(1)$, so \Cref{prop:poincare-csc-sets} is best possible (up to a constant factor) for arbitrary symmetric convex sets. But of course $\Dict_{e_1}$ has very large (constant) influence in a single direction, analogous to a Boolean function with an individual coordinate of constant influence. The famous KKL theorem for Boolean functions over $\bn$ states that if no coordinate influence is allowed to be large (each is at most $\delta$), then the total influence must be large (at least $\Omega(\Var[f] \cdot \log(1/\delta))$). We now prove an analogous result for convex influences, though we only achieve a quadratically weaker bound in terms of the max influence: \begin{theorem}[KKL for symmetric convex sets] \label{thm:our-kkl} Let $K\sse\R^n$ be a symmetric convex set with $\Inf_v[K] \leq \delta \leq \Var[K]/10$ for all $v \in \mathbb{S}^{n-1}$. Then \begin{equation} \label{eq:kkl-convex-sets} \TInf[K] \geq \Omega\pbra{\Var[K]\sqrt{\log\pbra{\frac{\Var[K]}{\delta}}}}. \end{equation} \end{theorem} Our proof of \Cref{thm:our-kkl} is inspired by the approach of \cite{lat-ole-kkl}. The main technical ingredient we use is the \emph{Gaussian isoperimetric inequality:} \begin{proposition}[Gaussian isoperimetric inequality, \cite{Borell:75}] \label{prop:gaussian-isoperimetric-inequality} Given any Borel set $A \sse\R^n$, we have \[\Phi^{-1}\pbra{\gamma_n\pbra{A_t}} \geq \Phi^{-1}\pbra{\gamma_n\pbra{A}} + t\] where $A_t := A + B_t$ is the $t$-enlargement of $A$. \end{proposition} We remark that it is easy to obtain \Cref{prop:gaussian-isoperimetric-inequality} from the Ehrhard-Borell inequality \cite{Ehrhard:83,Borell:03,Borell:08}, which we recall as \Cref{prop:ehrhard-borell} in \Cref{appendix:sec-3}. 
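Though not needed for the formal development, it may be helpful to see \Cref{prop:gaussian-isoperimetric-inequality} checked numerically in one dimension, where a half-line (a one-dimensional half-space) achieves equality and a symmetric interval satisfies the inequality strictly. The following stdlib-only Python sketch is purely illustrative; the truncation point $-50$ standing in for $-\infty$ and the particular values of $t$ are arbitrary choices.

```python
import math

def Phi(z):
    # cdf of the standard Gaussian N(0,1)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    # inverse cdf by bisection (Phi is strictly increasing)
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def vol_interval(a, b):
    # gamma_1([a, b])
    return Phi(b) - Phi(a)

# Half-line A = (-inf, 1]: its t-enlargement is (-inf, 1+t], and
# Phi^{-1}(gamma(A_t)) = Phi^{-1}(gamma(A)) + t holds with equality.
for t in [0.1, 0.5, 2.0]:
    lhs = Phi_inv(vol_interval(-50.0, 1.0 + t))
    rhs = Phi_inv(vol_interval(-50.0, 1.0)) + t
    assert abs(lhs - rhs) < 1e-6

# Symmetric interval A = [-1, 1]: enlargement is [-1-t, 1+t] and the
# isoperimetric inequality is strict.
for t in [0.1, 0.5, 2.0]:
    lhs = Phi_inv(vol_interval(-1.0 - t, 1.0 + t))
    rhs = Phi_inv(vol_interval(-1.0, 1.0)) + t
    assert lhs > rhs
```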
We will also require the following easy estimate on the \emph{Gaussian isoperimetric function} $\varphi\circ\Phi^{-1}(\cdot)$. \begin{proposition} \label{prop:gaussian-isop-func-estimate} Let $\Phi: \R \to [0, 1]$ denote the cumulative distribution function of the standard one-dimensional Gaussian distribution, and let $\varphi := \Phi'$ denote its density. Then for all $\alpha \in (0, 1)$, we have \[\varphi\circ\Phi^{-1}(\alpha) \geq \sqrt{\frac{2}{\pi}}\min(\alpha, 1-\alpha).\] \end{proposition} \begin{proof} By symmetry, it suffices to show that $\varphi\circ\Phi^{-1}(\alpha) \geq \sqrt{\frac{2}{\pi}}\alpha$ for $\alpha\in\sbra{0,\frac{1}{2}}$. This is immediate from the fact that \[\varphi\circ\Phi^{-1}(0) = 0 \qquad\text{and}\qquad \varphi\circ\Phi^{-1}\pbra{\frac{1}{2}} = \frac{1}{\sqrt{2\pi}},\] and the concavity of $\varphi\circ\Phi^{-1}$ (see, for example, Exercise~5.43 of \cite{ODbook}). \end{proof} \medskip \begin{proofof}{\Cref{thm:our-kkl}} Let $r_{\mathsf{in}}$ denote the in-radius of $K$. We will show that \begin{equation} \label{eq:kkl-first-goal} \TInf[K] \geq {\frac 1 {\sqrt{\pi}}}\Var\sbra{K}\cdot r_{\mathsf{in}} \end{equation} and that \begin{equation} \label{eq:kkl-second-goal} r_{\mathsf{in}} \geq \Omega(\sqrt{\ln(\Var[K]/\delta)}) \end{equation} from which the desired result follows. For \Cref{eq:kkl-second-goal}, by \Cref{claim:small-inf-big-inradius} we have that for some direction $\hat{v} \in \mathbb{S}^{n-1}$, \[ \Inf_{\hat{v}}[K] \geq {\frac {\gamma(K) e^{-r_{\mathsf{in}}^2}}{2^{3/2} \pi}} \geq {\frac {\Var[K] e^{-r_{\mathsf{in}}^2}}{2^{3/2} \pi}}. \] Combining this with $\Inf_{\hat{v}}[K] \leq \delta$ and recalling that $\delta \leq \Var[K]/10$, we get \Cref{eq:kkl-second-goal}.
We turn now to establishing \Cref{eq:kkl-first-goal}. Recall from \Cref{eq:russo-margulis-dilations} of \Cref{subsec:russo-margulis} (our Margulis-Russo formula) that \begin{equation} \label{eq:kkl-inf-def} \TInf[K] = {\frac 1 {\sqrt{2}}} \lim_{\delta\to0} \frac{\gamma(K) - \gamma\pbra{(1-\delta)K}}{\delta}. \end{equation} We proceed to upper-bound $\gamma\pbra{(1-\delta)K}$ in terms of $\gamma(K)$. Since $r_{\mathsf{in}}$ is the in-radius of $K$, for all $0 < \delta \leq 1$, we have that \begin{equation} \label{eq:kkl-containment} (1-\delta)K + \delta r_{\mathsf{in}} B_{1} = (1-\delta)K + B_{\delta r_{\mathsf{in}}} \sse K. \end{equation} Let $K^c := \R^n\backslash K$, and let $(K^c)_{\delta r_{\mathsf{in}}} := K^c + B_{\delta r_{\mathsf{in}}}$ be the $\delta r_{\mathsf{in}}$-enlargement of $K^c$.
It follows from \Cref{eq:kkl-containment} that $(1-\delta)K\cap (K^c)_{\delta r_{\mathsf{in}}} = \emptyset$, which in turn implies that \begin{equation} \label{eq:kkl-first-ub} \gamma((1-\delta)K) + \gamma((K^c)_{\delta r_{\mathsf{in}}}) \leq 1, \qquad\text{and so}\qquad \gamma((1-\delta)K) \leq 1 - \gamma((K^c)_{\delta r_{\mathsf{in}}}). \end{equation} However, from the Gaussian isoperimetric inequality (\Cref{prop:gaussian-isoperimetric-inequality}), we know that \begin{equation} \label{eq:kkl-gip} \gamma\pbra{(K^c)_{\delta r_{\mathsf{in}}}} \geq \Phi\pbra{\Phi^{-1}\pbra{\gamma\pbra{K^c}} + \delta r_{\mathsf{in}}}. \end{equation} Let $\alpha = \gamma\pbra{K^c}$, so $\gamma(K) = 1-\alpha$. Putting \Cref{eq:kkl-inf-def,eq:kkl-first-ub,eq:kkl-gip} together, we get \begin{align*} \TInf[K] &\geq {\frac 1 {\sqrt{2}}} \lim_{\delta\to0} \frac{\Phi\pbra{\Phi^{-1}\pbra{\alpha} + \delta r_{\mathsf{in}}} - \alpha}{\delta}\\ &= {\frac 1 {\sqrt{2}}} r_{\mathsf{in}}\pbra{\lim_{\eps\to0} \frac{\Phi\pbra{\Phi^{-1}\pbra{\alpha} + \eps} - \Phi\pbra{\Phi^{-1}\pbra{\alpha}}}{\eps}}\\ &= {\frac 1 {\sqrt{2}}} r_{\mathsf{in}}\cdot\Phi'\pbra{\Phi^{-1}(\alpha)}\\ &= {\frac 1 {\sqrt{2}}} r_{\mathsf{in}}\cdot\varphi\circ\Phi^{-1}(\alpha) \end{align*} by making the change of variables $\eps := \delta r_{\mathsf{in}}$ and using the fact that $\varphi = \Phi'$. It follows then from \Cref{prop:gaussian-isop-func-estimate} that \[\TInf[K] \geq {\frac 1 {\sqrt{2}}} r_{\mathsf{in}}\cdot \pbra{\sqrt{\frac{2}{\pi}}\min(\alpha, 1-\alpha)} \geq {\frac 1 {\sqrt{\pi}}}\Var\sbra{K}\cdot r_{\mathsf{in}}\] which completes the proof.
\section{Preliminaries} \label{sec:prelims} In this section we give preliminaries setting notation and recalling useful background on convex geometry, log-concave functions, and Hermite analysis over $\calN\pbra{0, \sigma^2}^n$. \subsection{Convex Geometry} Below we briefly recall some basic facts and notation from convex geometry and log-concavity that we will use. Some of our main results employ relatively sophisticated results from these areas; we will recall these as necessary in the relevant sections and here record only basic facts. For a general and extensive resource we refer the interested reader to \cite{aga-book}. We identify sets $K \sse \R^n$ with their indicator functions $K : \R^n \to \zo$, and we say that $K\sse\R^n$ is \emph{symmetric} if $K(x) = K(-x)$. We write $B_r$ to denote the origin-centered ball of radius $r$ in $\R^n$. If $K \subseteq \R^n$ is a nonempty symmetric convex set then we let $r_{\mathsf{in}}(K)$ denote $\sup_{r \geq 0} \{r: B_r \subseteq K\}$ and we refer to this as the \emph{in-radius} of $K$. We record the following easy consequence of the separating hyperplane theorem: \begin{fact} \label{fact:slab} Let $K \subseteq \R^n$ be a symmetric convex set with in-radius $r_{\mathsf{in}} < \infty.$ Then there is a direction $v \in \mathbb{S}^{n-1}$ such that $K \subseteq \{x \in \R^n: |v \cdot x| \leq r_{\mathsf{in}}\}.$ \end{fact} Recall that a function $f : \R^n \to \R_{\geq 0}$ is \emph{log-concave} if its domain is a convex set and it satisfies $f(\theta x + (1-\theta)y) \geq f(x)^\theta f(y)^{1-\theta}$ for all $x, y \in \mathrm{domain}(f)$ and $\theta \in [0,1]$. In particular, the $0/1$-indicator functions of convex sets are log-concave.
Recall that the \emph{marginal} of $f: \R^n \to \R$ on the set of variables $\{i_1,\dots,i_k\}$ is obtained by integrating out the other variables, i.e. it is the function \[ g(x_{i_1},\dots,x_{i_k}) = \int_{\R^{n-k}} f(x_1,\dots,x_n) dx_{j_1} \dots dx_{j_{n-k}}, \] where $\{j_{1},\dots,j_{n-k}\} = [n] \setminus \{i_1,\dots,i_k\}$. We recall the following fact: \begin{fact}[\cite{Dinghas,Leindler,Prekopa, Prekopa2} (see Theorem 5.1, \cite{LV07})] \label{fact:marginal-log-concave} All marginals of a log-concave function are log-concave. \end{fact} The next fact follows easily from the definition of log-concavity: \begin{fact}[\cite{Ibragimov:56}, see e.g.~\cite{An:95}] \label{fact:unimodal-log-concave} A one-dimensional log-concave function is unimodal. \end{fact} \subsection{Hermite Analysis over $\calN\pbra{0, \sigma^2}^n$} Our notation and terminology here follow Chapter~11 of \cite{ODbook}. We say that an $n$-dimensional \emph{multi-index} is a tuple $\alpha \in \N^n$, and we define \begin{equation} \label{eq:index-notation} |\alpha| := \sum_{i=1}^n \alpha_i. \end{equation} We write $\calN(0, \sigma^2)^n$ to denote the $n$-dimensional Gaussian distribution with mean $0$ and variance $\sigma^2$, and denote the corresponding measure by $\gamma_{n, \sigma}(\cdot)$. When the dimension $n$ is clear from context we will simply write $\gamma_\sigma(\cdot)$ instead, and sometimes when $\sigma=1$ we simply write $\gamma$ for $\gamma_{1}$. For $n \in \N_{> 0}$, we write $L^2\pbra{\R^n, \gamma_{\sigma}}$ to denote the space of functions $f: \R^n \to \R$ that have finite second moment $\|f\|_2^2$ under the Gaussian measure $\gamma_\sigma$, that is: \[ \|f\|_2^2 = \Ex_{\bz \sim \calN\pbra{0, \sigma^2}^n} \left[f(\bz)^2\right] < \infty.
\] We view $L^2\pbra{\R^n, \gamma_\sigma}$ as an inner product space with $\la f, g \ra := \E_{\bz \sim \calN\pbra{0,\sigma^2}^n}[f(\bz)g(\bz)]$ for $f, g \in L^2\pbra{\R^n, \gamma_\sigma}$. Recall the \emph{Hermite basis} for $1$-dimensional Gaussian space $L^2\pbra{\R, \gamma_{1}}$: \begin{definition}[Hermite basis] The \emph{Hermite polynomials} $(h_j)_{j\in\N}$ are the univariate polynomials defined as $$h_j(x) = \frac{(-1)^j}{\sqrt{j!}} \exp\left(\frac{x^2}{2}\right) \cdot \frac{d^j}{d x^j} \exp\left(-\frac{x^2}{2}\right).$$ \end{definition} \begin{fact} [Proposition~11.33, \cite{ODbook}] \label{fact:hermite-orthonormality} The Hermite polynomials $(h_j)_{j\in\N}$ form a complete, orthonormal basis for $L^2(\R, \gamma_1)$. For $n > 1$ the collection of $n$-variate polynomials given by $(h_\alpha)_{\alpha\in\N^n}$ where $$h_\alpha(x) := \prod_{i=1}^n h_{\alpha_i}(x_i)$$ forms a complete, orthonormal basis for $L^2(\R^n, \gamma_{n,1})$. \end{fact} We will also require an orthonormal basis for $L^2\pbra{\R, \gamma_\sigma}$. \begin{definition}[$\sigma$-biased Hermite polynomials] \label{def:biased-hermite-basis} Given $\sigma > 0$, we define the \emph{$\sigma$-biased $i^\text{th}$ Hermite polynomial} $h_{i, \sigma}(x)$ as \[h_{i,\sigma}(x) := h_i\pbra{\frac{x}{\sigma}}\] where $h_i$ denotes the $i^\text{th}$ univariate Hermite polynomial. \end{definition} \begin{claim} \label{claim:biased-hermite-orthonormal} The polynomials $(h_{i, \sigma})_{i\in\N}$ are orthonormal with respect to $L^2\pbra{\R, \gamma_\sigma}$.
\end{claim} \begin{proof} Indeed, we have \begin{align*} \langle h_{i, \sigma},h_{j, \sigma} \rangle_{\gamma_{\sigma}} &= \int_{-\infty}^{\infty} h_i\pbra{\frac{x}{\sigma}} h_j\pbra{\frac{x}{\sigma}} {\frac 1 {\sigma \sqrt {2 \pi}}} e^{-x^2 / (2\sigma^2)} dx\\ &= \int_{-\infty}^{\infty} h_i(u) h_j(u) {\frac 1 {\sqrt{2 \pi}}} e^{-u^2/2} du \\ &= \begin{cases} 1 & i = j\\ 0 & i \neq j \end{cases} \end{align*} where we made the substitution $u := \frac{x}{\sigma}$ and used the orthonormality of the Hermite basis with respect to $\gamma_{1}$ (see \Cref{fact:hermite-orthonormality}). \end{proof} As usual, we obtain an orthonormal basis for $L^2\pbra{\R^n, \gamma_{\sigma}}$ by taking $n$-fold products of the univariate polynomials: \begin{fact} \label{fact:biased-hermite-orthonormality} For $n \geq 1$ and $\sigma > 0$, the collection of $n$-variate $\sigma$-biased Hermite polynomials given by $(h_{\alpha, \sigma})_{\alpha\in\N^n}$ where $$h_{\alpha, \sigma}(x) := \prod_{i=1}^n h_{\alpha_i, \sigma}(x_i)$$ forms a complete, orthonormal basis for $L^2(\R^n, \gamma_{n,\sigma})$. \end{fact} Given a function $f \in L^2(\R^n, \gamma_\sigma)$ and $\alpha \in \N^n$, we define its \emph{($\sigma$-biased) Hermite coefficient on} $\alpha$ to be $\widetilde{f}_\sigma(\alpha) := \la f, h_{\alpha, \sigma} \ra$. It follows that $f$ is uniquely expressible as $f = \sum_{\alpha\in\N^n} \widetilde{f}_\sigma(\alpha)h_{\alpha, \sigma}$ with the equality holding in $L^2(\R^n, \gamma_\sigma)$; we will refer to this expansion as the \emph{Hermite expansion} of $f$. When $\sigma = 1$, we will simply write $\wt{f}(\alpha)$ instead of $\wt{f}_\sigma(\alpha)$ and $h_\alpha$ instead of $h_{\alpha, 1}$.
Parseval's and Plancherel's identities hold in this setting: \begin{fact}[Plancherel's identity] \label{fact:hermite-plancharel} For $f, g \in L^2(\R^n, \gamma_\sigma)$, we have: $$\la f, g\ra = \Ex_{\bz\sim\calN(0,\sigma^2)^n}[f(\bz)g(\bz)] = \sum_{\alpha\in \N^n}\widetilde{f}_\sigma(\alpha)\widetilde{g}_\sigma(\alpha),$$ and as a special case we have Parseval's identity, $$\la f, f\ra = \Ex_{\bz\sim\calN(0,\sigma^2)^n}[f(\bz)^2] = \sum_{\alpha\in \N^n}\widetilde{f}_\sigma(\alpha)^2.$$ \end{fact} The following notation will sometimes come in handy. \begin{definition} \label{def:hermite-coeff-along-direction} Let $v \in \mathbb{S}^{n-1}$ and $f \in L^2\pbra{\R^n, \gamma_\sigma}$. We define $f$'s \emph{Hermite coefficient of degree $k$ along $v$}, written $\wt{f}_\sigma(kv)$, to be \[\wt{f}_\sigma(kv) := \Ex_{\bx\sim\calN(0,\sigma^2)^n}\sbra{f(\bx)\cdot h_{k, \sigma}\pbra{v\cdot\bx}}.\] \end{definition} \begin{notation} We will write $e_i \in \N^n$ to denote the $i^\text{th}$ standard basis vector for $\R^n$, i.e. the unit vector with a 1 in coordinate $i$ and 0 in all other coordinates. \end{notation} In this notation, for example, $\wt{f}(ke_i) = \E_{\bx\sim\calN(0,1)^n}\sbra{f(\bx)\cdot h_k(\bx_i)}$. \subsection{Extremal Symmetric Convex Sets} \label{subsec:extremal} The unique maximizer of $\Inf_1[f]$ across all monotone Boolean functions $f: \bn \to \bits$ is the dictator function $f(x)=x_1$. The next proposition gives an analogous statement for the ``dictatorship'' function $\Dict_{w}$ from \Cref{eg:dict}, for every possible Gaussian volume: \ignore{The next proposition is analogous to the easy-to-see statement in the setting of monotone Boolean functions on the hypercube that for any monotone $f \isafunc$, we have $\Inf_i[f] \leq \Inf_i[x_i] = 1$.
} \begin{proposition} \label{prop:dict-maximizes-inf-along-coordinate} Let $K \sse \R^n$ be a symmetric convex set and let $v \in \mathbb{S}^{n-1}$. Let $c \geq 0$ be chosen so that the ${\cal N}(0,1)^n$ Gaussian volume of $\Dict_{cv}$ equals that of $K$, i.e.~$\gamma(\Dict_{cv}) = \gamma(K)$. Then $\Inf_{v}[K] \leq \Inf_{v}[\Dict_{cv}]$. \end{proposition} \begin{proof} Without loss of generality (for ease of notation) we take $v=e_1$. Let $g_K: \R \to [0,1]$ be the function obtained by marginalizing out variables $x_2,\dots,x_n$, so \[ g_K(x_1) = \Ex_{(\bx_2,\dots,\bx_n) \sim {\cal N}(0,1)^{n-1}}\sbra{K(x_1,\bx_2,\dots,\bx_n)}. \] As noted following \Cref{prop:influence-nonneg}, we have that $\Inf_{e_1}[K]=\Inf_{e_1}[g_K]$. We observe that by definition we have \[ \Inf_{e_1}[g_K] = {\frac 1 {\sqrt{2}}} \cdot \Ex_{\bx_1 \sim {\cal N}(0,1)}\sbra{(1 - \bx_1^2)g_K(\bx_1)}. \] Since $1-t^2$ is a decreasing function of $t$ for all $t \geq 0$, it is easy to see that the symmetric $[0,1]$-valued function $g$ that maximizes $\Ex_{\bx_1 \sim {\cal N}(0,1)}\sbra{(1 - \bx_1^2)g(\bx_1)}$ subject to having $\E_{\bx_1 \sim {\cal N}(0,1)}[g(\bx_1)]=\gamma(\Dict_{ce_1})$ is the function for which $g(t)=1$ for $|t| \leq c$ and $g(t)=0$ for $|t| > c$. This corresponds precisely to having $K = \Dict_{ce_1}$; so in fact taking $K=\Dict_{ce_1}$ maximizes $\Inf_{e_1}[K]$ over all measurable subsets of $\R^n$ of Gaussian volume $\gamma\pbra{\Dict_{ce_1}}$ (not just over all symmetric convex sets of that volume).
\end{proof} We note that a slight extension of this argument can be used to give a robust version of \Cref{prop:dict-maximizes-inf-along-coordinate}, showing that for any $c>0$, any symmetric convex set $K$ (in fact any measurable set $K$) of Gaussian volume $\gamma(\Dict_{cv})$ that has $\Inf_{v}[K]$ close to $\Inf_v[\Dict_{cv}]$ must in fact be close to $\Dict_{cv}$. This is analogous to the easy fact that any monotone Boolean function with $\Inf_1[f]$ close to 1 must be close to the function $f(x)=x_1$. Next we give a similar result but for total convex influence rather than influence in a single direction, analogous to the well known fact that the Majority function maximizes total influence across all $n$-variable monotone Boolean functions $f: \bn \to \bits$: \begin{proposition} \label{prop:ball-max-inf} Let $K \sse \R^n$ be a symmetric convex set, and let $r \geq 0$ be chosen so that the ${\cal N}(0,1)^n$ Gaussian volume of $B_r$ equals that of $K$, i.e.~$\gamma(B_r) = \gamma(K)$. Then $\TInf[K] \leq \TInf[B_r]$. \end{proposition} \begin{proof} The argument is similar to that of \Cref{prop:dict-maximizes-inf-along-coordinate}. We have \[\TInf[K] = \Ex_{\bx\sim\calN(0,1)^n}\sbra{K(\bx)\pbra{\frac {n - \|\bx\|_2^2}{\sqrt{2}}}} = {\frac 1 {\sqrt{2}}}\cdot \Ex_{\br \sim \chi(n)}\sbra{(n-\br^2)\alpha_K(\br)} \] (recall \Cref{eq:sdf}), where $\chi(n)$ is the $\chi$-distribution with $n$ degrees of freedom. We observe that taking $K=B_r$ results in $\alpha_K(t) = 1$ for $t \leq r$ and $\alpha_K(t)=0$ for $t>r$, that the range of $\alpha_K(\cdot)$ is contained in $[0,1]$ for any $K$, and that $n-t^2$ is a decreasing function of $t$ for all $t \geq 0$. Combining these observations, it is easily seen that taking $K=B_r$ in fact maximizes the expression on the RHS over all measurable subsets of $\R^n$ of volume $\gamma(B_r)$ (not just over all symmetric convex sets of that volume).
\end{proof} As before, the argument above can be used to establish a robust version of \Cref{prop:ball-max-inf}, showing that any symmetric convex set $K$ (in fact any measurable set $K$) of Gaussian volume $\gamma(B_r)$ that has $\TInf[K]$ close to $\TInf[B_r]$ must in fact be close to $B_r$. \subsection{Other Notions of Influence} \label{subsec:other-notions-influence} Here, we compare the notion of influence for symmetric convex sets proposed in \Cref{def:csc-influence} with two previous notions of influence, namely i) the \emph{geometric influence} introduced in \cite{keller2012geometric}; and ii) the \emph{expected variance along a fiber}, which coincides with the usual notion of influence for Boolean functions on the hypercube. \subsubsection{Geometric Influences} \label{subsubsec:geometric-inf} In \cite{keller2012geometric}, Keller, Mossel, and~Sen introduced the notion of \emph{geometric influence} for functions over Gaussian space, and proved analogues of seminal results from the analysis of Boolean functions---including the KKL theorem, the Margulis--Russo lemma, and an analogue of Talagrand's correlation inequality---for this notion of influence. Informally, the geometric influence captures the expected \emph{lower Minkowski content} along each one-dimensional fiber of a set.
\begin{definition}[Geometric influences] \label{def:kms-geometric-influences} Given a Borel measurable set $K\sse\R$, its \emph{lower Minkowski content} (with respect to the standard Gaussian measure), denoted $\gamma^+$, is defined as \[\gamma^+(K) := \liminf_{r \to 0^+} \frac{\gamma\pbra{K + [-r, r]} - \gamma(K)}{r}.\] (Note that for $K = [a, b] \subset \R$, we have $\gamma^{+}(K) = \varphi(a) + \varphi(b).$) For any Borel-measurable $K \sse\R^n$, for each $i \in [n]$ and $x \in \R^n$, define the fiber \[K^x_i := \{y\in\R : (x_1, \ldots, x_{i-1}, y, x_{i+1}, \ldots, x_n) \in K\}.\] The \emph{geometric influence of coordinate $i$ on $K$} is \[\Inf^{\calG}_i[K] := \Ex_{\bx\sim\calN(0,1)^n}\sbra{\gamma^+(K^{\bx}_i)}.\] \end{definition} For convex sets the \emph{total geometric influence} admits a geometric interpretation as the change in the boundary of the set under uniform enlargement: \begin{proposition}[Remark 2.2 of \cite{keller2012geometric}] \label{prop:geom-meaning-geom-inf} Let $K\sse\R^n$ be a convex set. Then we have \[\lim_{r \to 0^+} \frac{\gamma_n(K + [-r, r]^n) - \gamma_n(K)}{r} = \sum_{i=1}^n \Inf_i^{\calG}[K]\] where the right-hand side above is the \emph{total geometric influence of $K$}. \end{proposition} Note that unlike $\TInf[K]$ (see \Cref{def:csc-influence}), the total geometric influence is not invariant under rotations, as the enlargement $K + [-r, r]^n$ depends on the choice of coordinate axes. It is possible for our convex influence notion to be much smaller than the geometric influence. For example, a routine computation shows that the $\sqrt{n}$-radius Euclidean ball $B := B_{\sqrt{n}}$ has $\Inf^{\calG}_{i}[B] = \Omega(1)$ for each $i \in [n]$, whereas as seen from \Cref{eg:ball}, for convex influence we have $\Inf_{e_i}[B] = O\pbra{\frac{1}{\sqrt{n}}}$. \subsubsection{Variance Along a Fiber} \label{subsubsec:varinf} In the setting of Boolean functions over the hypercube, the usual notion of influence of a coordinate on a function $f: \bn \to \bits$ coincides with the expected variance of the function along a random fiber in the direction of that coordinate. This is also a standard notion of influence for product probability measures more generally, see e.g. Proposition~8.24 of \cite{ODbook}. More formally, we have the following definition. \begin{definition}[Expected variance along a fiber] \label{def:varinf} Given a function $f \in L^2\pbra{\R^n, \gamma}$, we define \[\mathbf{VarInf}_i[f] := \Ex_{\bx\backslash\{\bx_i\} \sim \calN(0,1)^{n-1}}\sbra{\Var_{\bx_i}[f(\bx)]}\] to be the \emph{expected variance of $f$ along the $i^\text{th}$ fiber}. \end{definition} We can express the expected variance along the $i^\text{th}$ fiber in terms of the Hermite expansion as $\mathbf{VarInf}_i[f] = \sum_{\alpha_i > 0} \wt{f}(\alpha)^2$ (see Proposition 8.23 of \cite{ODbook}).
In our setting it is possible for the convex influence of a symmetric convex set to be much smaller than the expected variance along a fiber. This is witnessed by the symmetric convex set $\Dict_v \sse\R^n $ given by \[\Dict_v := \{x \in \R^n : |x\cdot v| \leq 1\} \qquad\text{where}\qquad v = \frac{1}{\sqrt{n}}(1, \ldots, 1).\] A routine computation shows that $\mathbf{VarInf}_i\sbra{\Dict_v} = \Theta\pbra{\frac{1}{\sqrt{n}}}$ for each $i \in [n]$. On the other hand, since total convex influence is rotationally invariant, it follows from \Cref{eg:dict} that $\TInf\sbra{\Dict_v} = \TInf\sbra{\Dict_{e_1}} = \Theta(1)$, and so by symmetry $\Inf_{e_i}\sbra{\Dict_v} = \Theta\pbra{\frac{1}{n}}$ for each $i \in [n]$.
\subsection{Convex Geometry and Log-Concavity} \label{subsec:log-concave} Below we briefly recall some notation, terminology and background from convex geometry and log-concavity. Some of our main results employ relatively sophisticated results from these areas; we will recall these as necessary in the relevant sections and here record only basic facts.\ignore{ (for example, the $S$-inequality of Lata\l a and Oleskiewicz \cite{s-inequality} is crucial to the main results of \Cref{sec:poincare-kkl}),} For a general and extensive resource we refer the interested reader to \cite{aga-book}. We identify sets $K \sse \R^n$ with their indicator functions $K : \R^n \to \zo$, and we say that $K\sse\R^n$ is \emph{symmetric} if $K(x) = K(-x)$. We write $B_r$ to denote the origin-centered ball of radius $r$ in $\R^n$. If $K \subseteq \R^n$ is a nonempty symmetric convex set then we let $r_{\mathsf{in}}(K)$ denote $\sup_{r \geq 0} \{r: B_r \subseteq K\}$ and we refer to this as the \emph{in-radius} of $K$. \ignore{We record the following easy consequence of the separating hyperplane theorem:\rnote{I think we are likely to use this somewhere, right? Should we note (as LO do) that this is half the width of $K$ (should we define the width of $K$)?} \begin{fact} \label{fact:slab} Let $K \subseteq \R^n$ be a symmetric convex set with in-radius $r_{\mathsf{in}} < \infty.$ Then there is a direction $v \in \mathbb{S}^{n-1}$ such that $K \subseteq \{x \in \R^n: |v \cdot x| \leq r_{\mathsf{in}}\}.$ \end{fact} } Recall that a function $f : \R^n \to \R_{\geq 0}$ is \emph{log-concave} if its domain is a convex set and it satisfies $f(\theta x + (1-\theta)y) \geq f(x)^\theta f(y)^{1-\theta}$ for all $x, y \in \mathrm{domain}(f)$ and $\theta \in [0,1]$. In particular, the $0/1$-indicator functions of convex sets are log-concave. Recall that the \emph{marginal} of $f: \R^n \to \R$ on the set of variables $\{i_1,\dots,i_k\}$ is obtained by integrating out the other variables, i.e. 
it is the function \[ g(x_{i_1},\dots,x_{i_k}) = \int_{\R^{n-k}} f(x_1,\dots,x_n)\, dx_{j_1} \dots dx_{j_{n-k}}, \] where $\{j_{1},\dots,j_{n-k}\} = [n] \setminus \{i_1,\dots,i_k\}$. We recall the following fact:

\begin{fact}[\cite{Dinghas,Leindler,Prekopa, Prekopa2} (see Theorem 5.1, \cite{LV07})] \label{fact:marginal-log-concave} All marginals of a log-concave function are log-concave. \end{fact}

The next fact follows easily from the definition of log-concavity:

\begin{fact} [\cite{Ibragimov:56}, see e.g.~\cite{An:95}] \label{fact:unimodal-log-concave} A one-dimensional log-concave function is unimodal. \end{fact}

\subsection{Gaussian Random Variables}

We write $\bz \sim \calN(0,1)$ to mean that $\bz$ is a standard Gaussian random variable, and will use the notation \[\varphi(z) := \frac{1}{\sqrt{2\pi}}e^{-z^2/2} \qquad\text{and}\qquad \Phi(z) := \int_{-\infty}^z \varphi(t)\,dt\] to denote the pdf and the cdf of this random variable. Recall that a non-negative random variable $\br^2$ is distributed according to the chi-squared distribution $\chi^2(n)$ if $\br^2 = \bg_1^2 + \cdots + \bg_n^2$ where $\bg \sim {\cal N}(0,1)^n,$ and that a draw from the chi distribution $\chi(n)$ is obtained by making a draw from $\chi^2(n)$ and then taking the square root.

We define the \emph{shell-density function for $K$}, $\alpha_K : [0,\infty) \rightarrow [0,1]$, to be \begin{equation} \label{eq:sdf} \alpha_K(r) := \Prx_{\bx \in r\mathbb{S}^{n-1}} [\bx \in K], \end{equation} where the probability is with respect to the normalized Haar measure over $r\mathbb{S}^{n-1}$; so $\alpha_K(r)$ equals the fraction of the origin-centered radius-$r$ sphere which lies in $K.$ We observe that if $K$ is convex and symmetric then $\alpha_K(\cdot)$ is a nonincreasing function.
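As a concrete illustration of this monotonicity (a simple example of ours, not needed in what follows), consider the slab $\Dict_{e_1} = \{x \in \R^n : |x_1| \leq 1\}$. For $\bu$ a uniform point of $\mathbb{S}^{n-1}$, \[ \alpha_{\Dict_{e_1}}(r) = \Prx_{\bu \sim \mathbb{S}^{n-1}}\sbra{|\bu_1| \leq 1/r}, \] which equals $1$ for all $r \leq 1$ (the entire radius-$r$ sphere then lies inside the slab) and is strictly decreasing in $r$ for $r > 1$.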
A view which will sometimes be useful later is that $\alpha_K(r)$ is the probability that a random Gaussian-distributed point $\bg \sim \calN(0,1)^n$ lies in $K$, conditioned on $\|\bg\|=r.$

\subsection{Hermite Analysis over $\calN\pbra{0, \sigma^2}^n$}

Our notation and terminology here follow Chapter~11 of \cite{ODbook}. We say that an $n$-dimensional \emph{multi-index} is a tuple $\alpha \in \N^n$, and we define \begin{equation} \label{eq:index-notation} |\alpha| := \sum_{i=1}^n \alpha_i. \end{equation} We write $\calN(0, \sigma^2)^n$ to denote the $n$-dimensional Gaussian distribution with mean $0$ and variance $\sigma^2$ in each coordinate, and denote the corresponding measure by $\gamma_{n, \sigma}(\cdot)$. When the dimension $n$ is clear from context we simply write $\gamma_\sigma(\cdot)$ instead, and sometimes when $\sigma=1$ we simply write $\gamma$ for $\gamma_{1}$.
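For concreteness, we record the density of this measure: \[ d\gamma_{n,\sigma}(x) = \pbra{2\pi\sigma^2}^{-n/2} \exp\pbra{-\frac{\|x\|^2}{2\sigma^2}}\, dx, \] so that $\gamma_{n,1}$ is the standard Gaussian measure on $\R^n$.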
For $n \in \N_{> 0}$ and $\sigma > 0$, we write $L^2\pbra{\R^n, \gamma_{\sigma}}$ to denote the space of functions $f: \R^n \to \R$ that have finite second moment under the Gaussian measure $\gamma_\sigma$, that is: \[ \|f\|_2^2 = \Ex_{\bz \sim \calN\pbra{0, \sigma^2}^n} \left[f(\bz)^2\right] < \infty. \] We view $L^2\pbra{\R^n, \gamma_\sigma}$ as an inner product space with $\la f, g \ra := \E_{\bz \sim \calN\pbra{0,\sigma^2}^n}[f(\bz)g(\bz)]$ for $f, g \in L^2\pbra{\R^n, \gamma_\sigma}$. We define ``biased Hermite polynomials,'' which yield an orthonormal basis for $L^2\pbra{\R^n, \gamma_{\sigma}}$:

\begin{definition}[Hermite basis] For $\sigma > 0$, the \emph{$\sigma$-biased Hermite polynomials} $(h_{j,\sigma})_{j\in\N}$ are the univariate polynomials defined as $$ h_{j,\sigma}(x) := h_j\pbra{\frac{x}{\sigma}}, \quad\text{where}\quad h_j(x) := \frac{(-1)^j}{\sqrt{j!}} \exp\left(\frac{x^2}{2}\right) \cdot \frac{d^j}{d x^j} \exp\left(-\frac{x^2}{2}\right).$$ \end{definition}

\begin{fact} [Easy extension of Proposition~11.33, \cite{ODbook}] \label{fact:biased-hermite-orthonormality} For $n \geq 1$ and $\sigma > 0$, the collection of $n$-variate $\sigma$-biased Hermite polynomials given by $(h_{\alpha, \sigma})_{\alpha\in\N^n}$ where $$h_{\alpha, \sigma}(x) := \prod_{i=1}^n h_{\alpha_i, \sigma}(x_i)$$ forms a complete, orthonormal basis for $L^2(\R^n, \gamma_{\sigma})$. \end{fact}

Given a function $f \in L^2(\R^n, \gamma_\sigma)$ and $\alpha \in \N^n$, we define its \emph{($\sigma$-biased) Hermite coefficient on} $\alpha$ to be $\widetilde{f}_\sigma(\alpha) := \la f, h_{\alpha, \sigma} \ra$. It follows that $f$ is uniquely expressible as $f = \sum_{\alpha\in\N^n} \widetilde{f}_\sigma(\alpha)h_{\alpha, \sigma}$ with the equality holding in $L^2(\R^n, \gamma_\sigma)$; we will refer to this expansion as the \emph{($\sigma$-biased) Hermite expansion} of $f$.
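Unwinding the definition for small $j$ (we record these for the reader's convenience; they are consistent with the normalization of Chapter~11 of \cite{ODbook}, though other references normalize differently): \[ h_0(x) = 1, \qquad h_1(x) = x, \qquad h_2(x) = \frac{x^2 - 1}{\sqrt{2}}, \qquad h_3(x) = \frac{x^3 - 3x}{\sqrt{6}}, \] so that, for example, $h_{2,\sigma}(x) = \frac{(x/\sigma)^2 - 1}{\sqrt{2}}$.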
When $\sigma = 1$, we will simply write $\wt{f}(\alpha)$ instead of $\wt{f}_\sigma(\alpha)$ and $h_\alpha$ instead of $h_{\alpha, 1}$. Parseval's and Plancherel's identities hold in this setting:

\begin{fact} \label{fact:hermite-plancharel} For $f, g \in L^2(\R^n, \gamma_\sigma)$, we have: \begin{align*}\la f, g\ra &= \Ex_{\bz\sim\calN(0,\sigma^2)^n}[f(\bz)g(\bz)] = \sum_{\alpha\in \N^n}\widetilde{f}_\sigma(\alpha)\widetilde{g}_\sigma(\alpha), \tag{Plancherel}\\ \la f, f\ra& = \Ex_{\bz\sim\calN(0,\sigma^2)^n}[f(\bz)^2] = \sum_{\alpha\in \N^n}\widetilde{f}_\sigma(\alpha)^2. \tag{Parseval} \end{align*} \end{fact}

The following notation will sometimes come in handy.

\begin{definition} \label{def:hermite-coeff-along-direction} Let $v \in \mathbb{S}^{n-1}$ and $f \in L^2\pbra{\R^n, \gamma_\sigma}$. We define $f$'s \emph{$\sigma$-biased Hermite coefficient of degree $k$ along $v$}, written $\wt{f}_\sigma(kv)$, to be \[\wt{f}_\sigma(kv) := \Ex_{\bx\sim\calN(0,\sigma^2)^n}\sbra{f(\bx)\cdot h_{k, \sigma}\pbra{v\cdot\bx}}\] (as usual omitting the subscript when $\sigma=1$). \end{definition}

\begin{notation} We will write $e_i \in \N^n$ to denote the $i^\text{th}$ standard basis vector for $\R^n$. \end{notation}

In this notation, for example, $\wt{f}(2e_i) = \E_{\bx\sim\calN(0,1)^n}\sbra{f(\bx)\cdot h_2(\bx_i)}$. Finally, for a measurable set $K \subseteq \R^n$, it will be convenient for us to write $\gamma(K)$ to denote $\Pr_{\bx \sim {\cal N}(0,1)^n}[\bx \in K]$, the \emph{(standard) Gaussian volume} of $K$.
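A simple instance of Parseval's identity, which we record here for convenience: since a $0/1$-indicator satisfies $K(\bx)^2 = K(\bx)$, \[ \sum_{\alpha \in \N^n} \wt{K}(\alpha)^2 = \Ex_{\bx \sim \calN(0,1)^n}\sbra{K(\bx)^2} = \gamma(K), \] while the degree-$0$ coefficient is $\wt{K}(0) = \gamma(K)$; hence the squared Hermite coefficients of $K$ at nonzero multi-indices sum to $\gamma(K)\pbra{1 - \gamma(K)}$, the variance of $K(\bx)$ for $\bx \sim \calN(0,1)^n$.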
Binges & Boxsets – TV Review – The Avengers: The Cybernauts Trilogy (1965-1976)
October 1, 2019 | Hugh David

Title: The Avengers: The Cybernauts Trilogy
Starring: Patrick Macnee, Diana Rigg, Joanna Lumley, Gareth Hunt, Michael Gough, Peter Cushing
Written by: Philip Levene, Brian Clemens
Directed by: Sidney Hayers, Robert Day
Label: Network Releasing
Video format: 1080p
Aspect Ratio: 1.33:1 pillarboxed
Soundtracks: LPCM 1.0
No. of discs: 1 x BD-50
Packaging: Blu-ray digipack with booklet
Region Coding: Region B
Rating: BBFC PG

There isn't much more one can say about The Avengers, the global and stylish television trailblazer it is (by any standards), but we do have something new to present with The Cybernauts Trilogy. We've selected three highpoints from across the run for this special 2019 release: The Cybernauts, their 1965 robotic debut featuring Diana Rigg; the 1967 follow-up Return of the Cybernauts, again with Diana Rigg and Peter Cushing guesting in one of his most sinister roles; and, making its world High Definition debut, The New Avengers 1976 finale The Last of the Cybernauts…?? We've pulled the original film elements for all three episodes into the Network studio for brand new, painstaking, High Definition restorations, exclusively for this release. There's much more to this release, though: as well as presenting the episodes in all their uncut, restored glory, we're also presenting the option to watch each episode in its contemporary transmission context, with actual commercials in situ. Included in the package is a 32-page booklet by celebrated television historian Andrew Pixley detailing the history of the three episodes and there's limited edition digipack packaging whilst stocks last. This really is the last of the Cybernauts.
WARNINGS: POSSIBLE SPOILERS AHEAD FOR ALL ERAS OF THE AVENGERS AND NEW AVENGERS. Also: the author of this review worked for Network from 2011 to 2013; however, he is unaffiliated with them at present, and the copy under review was a personal purchase, not a review copy or freebie.

What, indeed, is there left to say about crown jewel of British cult TV THE AVENGERS? From its ground-breaking five-year run that encompassed a sea change in how televisual drama was made in Britain, through the immense range of talent in front of and behind the camera, to its enduring legacy across British-made entertainment since, the show deserves every accolade awarded it, its place in history assured.

Transformative masterstroke number 1 was Patrick Macnee's iconic British agent John Steed, stepping forward from second billing in the mostly-lost live-shot 1961 first season to a career-making lead modelled very much on himself. Steed is the acceptable face of English heroism from an era whose fictional icons have revealed themselves to be unfit for service in modern times. He is a man at ease with his society even as he investigates its failings and punishes both its criminals and its enemies. Sartorial elegance, fine taste and sportsmanship are held in high regard, yet never at the cost of intellectual achievement, justice served or good humour. And his deft balancing of unfailing respect for those who have chosen to be seen as women with honest interest in or desire for them is arguably a masterclass in nontoxic masculinity worth learning from in the 21st century.

Masterstroke number 2 was to re-cast Ian Hendry's male lead role in season 2 with actress Honor Blackman as Cathy Gale, but leave the scripts mostly as they were (similar to how Ridley Scott, nearly two decades later, would cast Officer Ripley of the Nostromo).
This immediately made her a ground-breaking action heroine, while both the costume designer and the camera loved her face and form, and so did audiences. Live television meant she would have to use martial arts on stuntmen, then change into something svelte and fashionable and look unruffled, which she absolutely did time and again. After two seasons, however, she stepped up to the 007 franchise as the female lead in GOLDFINGER, and a replacement was needed.

Masterstroke number 3 was casting Diana Rigg, today a Dame, as Mrs. Emma Peel. While still stylish and physical, Mrs. Peel was a very different kettle of fish, and the foundations of her role were more solid: the production had moved to filmed recordings, so scenes could be re-written, re-shot or re-edited before the final edit was delivered for transmission. Without the restrictions of live television, the show moved towards more outlandish, comic-book villains and storylines, even if they started from something realistic and then current. Rigg and Macnee rose to the creative avenues these gave them, and remain the most iconic team to have been featured in the franchise.

Which brings us the long way round to this release. Owned by Studiocanal, who have released generally excellent HD remasters of the filmed series (the two Emma Peel ones, and the final year with Linda Thorson's Tara King) alongside DVDs of the existing episodes of the first three, the series has nevertheless been a holy grail for the incredible restoration team at Network. Their direct relationship with the late great Brian Clemens, a giant of British television and film who wrote for every iteration of the series as well as producing the filmed years and THE NEW AVENGERS revival, enabled them to show him how well they could restore CI5: THE PROFESSIONALS when they obtained the rights; it's not hard to imagine they might have been able to do the same thing with THE AVENGERS had he lived.
But now, in bringing together these three episodes across three eras that feature the same core villain, they have allowed us to compare and contrast the evolution of the show, while also staking a claim to being the best people for the job of preparing THE NEW AVENGERS for HD release. Fans of English cult TV will know what to expect: mystery, suspense, chills, thrills, gags and glamour, from all concerned.

THE CYBERNAUTS features Michael Gough, with the inimitable Burt Kwouk in a small role devoid of racist intent or execution (in 1965! Imagine that!). In typical AVENGERS style, Macnee and Rigg save each other rather than only the former rescuing the latter. Versatile feature director Sidney Hayers directs a script from actor/writer/playwright Philip Levene (both would become production regulars), demonstrating his facility with thrillers and horror, including a dark ending in which we see exactly why our heroes are named as they are; they stand and watch the villain meet his unpleasant end at the hands of his own creation, something that comes about through Steed's own doing. Peel shows some emotion, but Steed is impassive – this is justice rendered, facilitated by him.

1967's RETURN OF THE CYBERNAUTS shows how far Macnee and Rigg have come with their portrayals and on-screen relationship, while also revealing the more expansive nature of the first colour film series' production. Sets that aim for Ken Adam's flair, fine costuming and the inimitable Peter Cushing in the villain role make for a perfect example of the show at its very best, and arguably of British cult TV as well. TARZAN regular director Robert Day directs from another Levene script, and definitely brings a cinema-influenced visual flair to the proceedings.

THE LAST OF THE CYBERNAUTS…??, however, differs in a few crucial, noticeable ways. Only the third episode to air in the new revival of the show, it finds Macnee, and therefore Steed, noticeably older, although still wonderful to watch.
His new team of Mike Gambit (Gareth Hunt) and Purdey (Joanna Lumley) are still settling into their roles, both eye candy for different parts of the audience (this is 1976, after all – we get Gambit naked in bed, chest on display, and Purdey in ballerina gear exercising at the pole), but already Lumley's chemistry with Macnee echoes the earlier iterations of the show. The action is more violent, quite literally more explosive – witness the opening car chase – while returning director Hayers, working with a Clemens script, does his best with copious amounts of location shooting but manages to throttle back the realism just enough times to find the spirit of the original – note the nod back to classic 1960 French horror LES YEUX SANS VISAGE, as well as the humorous "weapon" that saves the day. It's a different kettle of fish, and while THE NEW AVENGERS would last two series, Macnee and Clemens were right to later declare it as having failed to re-capture the magic of the original, even if it was mostly enjoyable on its own terms.

Reference quality, as one would expect with Network. THE NEW AVENGERS episode is on a par with their insanely good releases of THE PROFESSIONALS, made subsequently by the same production team. The colour episode of THE AVENGERS looks sublime, clearly a distinct cut above the Studiocanal remaster of the same episode, with textures and details unseen in the latter, as well as finer clarity and clean-up. The b/w episode, however, is possibly the best of the lot, as good as the best feature restorations from Criterion or Arrow Academy or, for that matter, Network themselves. The audio might lack directionality, but not warmth and depth; mono though these tracks are, they are the clearest and cleanest they have ever sounded, capturing the viewer with ease.
The Vintage Experience is something Network trialled on earlier discs of other series, and boy is it a nostalgic joy on the first and even subsequent viewings, assuming, of course, you are old enough to have lived through at least one of these eras' adverts restored and reprogrammed here. In your reviewer's case, while not old enough to be allowed to stay up and watch THE NEW AVENGERS on its initial airings, the ads included here are almost all familiar from the mid-late seventies. Seeing them again has been both the joy mentioned but also a salient reminder of how far we've managed to come as a society in England despite the current attempts to rewind the clock back to well before any of these episodes. There is a diversity to some of the 60s ads that has been forgotten in modern discussions but shows how high street brands were moving forward with normalising multiculturalism. Additionally, seeing the shows with ad breaks intact restores their intended pacing, meaning recap dialogue or cliffhanger moments function as they were supposed to. This too is a real pleasure. However good the Studiocanal box set masters looked to the untrained eye, these remasters are eye-poppingly glorious, suggesting that even if Studiocanal have no plans to sublicense the entire original series to Network, they would do well to let them have THE NEW AVENGERS and get it out there alongside their wonderful sets of fellow 70s highlights THE PROFESSIONALS and THE SWEENEY S1. Frankly, any fan of the series will have already bought this, but if you're sitting on the fence after splurging on the full series box sets, do not hesitate – get this NOW.
Lydia Styslinger (born in Birmingham, Jefferson County, Alabama) is an American stage and film actress.

Life
Styslinger was born in Birmingham, in the US state of Alabama, and grew up there and in Aspen, Colorado. She studied acting at the Meadows School of the Arts of Southern Methodist University, graduating with a Bachelor of Arts. She gained stage experience at the Red Mountain Theatre Arts Campus in her native Birmingham and at Theatre Aspen. She made her film debut in 2015 as Elly in the disaster film Life on the Line. The following year she appeared in the short film 11 Angry Teens. In 2018 she took roles in the feature film Summertime and the short film Neil; parts in the features 10 Minutes Gone and Trauma Center followed in 2019. In 2020 she played Susan Wilbur in Son of the South and also had a supporting role in The War with Grandpa.

Filmography
2015: Life on the Line
2016: 11 Angry Teens (short film)
2018: Summertime
2018: Neil (short film)
2019: 10 Minutes Gone
2019: Trauma Center
2020: Son of the South
2020: The War with Grandpa
Legal News Shots – Today's Top Picks!

INDIA – Supreme Court Seeks The Centre's Stand On Petition From IAS And IPS Trainees
On Monday (June 17), the Supreme Court sought the Centre's stand on petitions moved by a group of 2018-batch trainee IAS and IPS officers seeking another opportunity to exercise cadre preference, on a par with the 20 peers granted relief by the apex court last month. On May 17, the apex court had asked the Centre to accommodate 20 trainee IAS and IPS officers, who had questioned the government's allocation process for the 2018 batch, by raising their preference to one post each in the state cadres this year. The Centre had then moved the top court challenging the High Court's decision.

INDIA – "Archaic Labour Laws Must Be Abolished," Says The Indian Staffing Federation
The percentage of individuals in formal employment in India will increase only if the country's archaic labour regulations are completely scrapped, the Indian Staffing Federation said, adding that the laws are irrelevant in today's context. "Small companies will not shy away from scaling up their companies or adding individuals if labour laws are made friendly. This will automatically increase the country's formal employment. It will also enhance the ease of doing business and pick up the investment flow. Ultimately, streamlining labour laws will have a beneficial effect on the economy," said ISF President Rituparna Chakraborty. She added, "We need laws that are going to be easy, productive, and that will improve the employability and employment system we have in the nation today."

INDIA – Heatwave In Bihar Kills Over 70; Government Calls For Tougher Measures
A heatwave in Bihar that killed over 70 people on Saturday (June 15, 2019) has led the administration in Gaya city to take strict measures, banning large gatherings during the day.
In a statement, the Gaya administration said it had invoked Section 144 of the Code of Criminal Procedure, which is usually used to prohibit large gatherings and maintain law and order, but this time to keep the general population indoors so that they do not fall victim to the killer heatwave. A heatwave is declared when the temperature stays at 45 degrees Celsius or above for two consecutive days; when the temperature reaches 47 degrees Celsius, it receives a "severe" tag.

USA – War Crimes Trial Of U.S. Navy SEAL In The Killing Of ISIL Inmate To Start Today (June 18)
A trial of a decorated U.S. Navy SEAL charged with the murder of an injured ISIL prisoner, one of the Navy's most prominent war crimes cases, is scheduled to start at U.S. Naval Base San Diego on Monday (June 17). The judge, Captain Aaron Rugh, ruled earlier this month that the prosecution's efforts to monitor defence communications, made in order to trace a news leak casting doubt on Gallagher's ability to get a fair trial, violated his constitutional rights. He released Gallagher from custody, removed the lead prosecutor, and reduced the maximum penalty he faces if convicted of premeditated murder to life imprisonment with the possibility of parole, instead of life without parole. The judge will also allow the defence to dismiss two more prospective jurors without cause than usual during jury selection.

INDIA – 2019 Law School Admission Test (LSAT) Results Released; Check Your Score At pearsonvueindia.com
The result of the 2019 Law School Admission Test (LSAT India) examination was officially declared on Monday, June 17, 2019. The 2019 LSAT India result can be downloaded from pearsonvueindia.com. Alternatively, the LSAT results can also be checked on the third-party platform lsat.nopaperforms.com.
The released 2019 LSAT India result is accepted for admission to both undergraduate and postgraduate courses. The LSAT examination was conducted on June 2 by Pearson VUE on behalf of the US-based Law School Admission Council (LSAC). According to the Careers360 report, the LSAC will grant a scholarship of up to Rs 6 lakh to the exam topper for the current academic year; the initiative marks the completion of ten years of administering the LSAT India examination.

INDIA – 2019 MHT CET LLB Result Released By The Maharashtra Directorate Of Higher Education At mhtcet2019.mahaonline.gov.in
On Monday (June 17), the Maharashtra Directorate of Higher Education published the MHT CET LLB Result 2019. The law result of the Maharashtra Common Entrance Test (MHT CET) for the 2019 academic batch was uploaded to the official website of the Maharashtra State Common Entrance Test Cell, mhtcet2019.mahaonline.gov.in, and is available for download in PDF format. The declared MHT CET Law Result 2019 includes the names, roll numbers and total marks (out of 150) of qualifying applicants.

UNITED KINGDOM – Mohamed Morsy, Ousted Egyptian President, Dies During Trial In Courtroom
Former Egyptian President Mohammed Morsi, ousted by the army in 2013 after one year in office, has collapsed in a courtroom and died, officials say. A top figure in the now-banned Islamist Muslim Brotherhood movement, Morsi had just spoken from a cage at a hearing on espionage charges. State TV said that the cause of death was a heart attack. Activists and his family had long said that Morsi was not receiving treatment for serious health issues, including high blood pressure and diabetes, and was constantly kept in solitary confinement.

INDIA – Case Of Now Or Never: India Needs An Immediate Tax Paradigm Shift In The 2019 Budget
It's a loud and clear message: India requires an immediate tax paradigm shift.
Thus, within a week of beginning its second term, the government placed direct tax measures firmly at the top of its agenda, at the core of which is a new Direct Tax Code (DTC). To make the system much simpler, the DTC needs to rejig the current tax-rate framework. An area that needs close consideration, if our corporate tax system is to be streamlined, is the minimum alternate tax (MAT) rate of 18.5 per cent (levied on a company's book profits). The gradual decrease in corporate tax rates and the rise in the MAT rate over the years have significantly narrowed the gulf between the two levels. Furthermore, the age-old assessment method needs to be overhauled, and the idea of independent e-assessments offers the state an excellent chance to implement such a shift. There appears to be a divide at present between the concept of independent e-assessments and the current body of laws; this divide needs to be bridged through the adoption of new assessment legislation and the issuance of full implementation rules. This government demonstrated tremendous strength in its previous term by implementing GST, which altered the indirect tax landscape of this nation for good. There can be no more appropriate moment, or government better placed than the current one, to reproduce a similarly transformative change in our direct tax system.

INDIA – The Indian Govt Is Preparing An All-Out Assault On Tribal Rights
A meeting in Delhi on Tuesday (June 18) could determine the destiny of eight million Indian tribal people and other forest dwellers. The talks between the states and the Ministry of Tribal Affairs follow the Supreme Court's highly contentious order in February to evict millions of people whose land-rights claims have been dismissed. The next Supreme Court hearing in the case will be on July 24, when the court could again order the eviction of millions of people. This comes at a moment when the tribal peoples of India are facing an unprecedented assault on their rights.
India's new Minister of Environment and Forests, who has spoken in support of shoot-on-sight policies, will also attempt to push through a draft amendment to the British-era Indian Forest Act. The proposed modifications have been described as even more draconian than the original.

USA – Gunman Shot Dead After Opening Fire At The Federal Court In Downtown Dallas
A man in a mask and combat gear was fatally shot in downtown Dallas on Monday (June 17) morning after opening fire outside the Earle Cabell Federal Building with an assault weapon. No one else was wounded. At a news conference at a street corner near the federal building, FBI Special Agent in Charge Matthew DeSarno identified the gunman as Brian Isaack Clyde, 22. Clyde died after being taken to the Baylor University Medical Center, officials said.

INDIA – Orissa High Court Lawyers To Boycott Courts Until June 26, Protesting Nominees Recommended As Judges
Orissa High Court lawyers on Monday, June 17, 2019, boycotted the court of the Chief Justice and those of two other judges to protest against the Collegium's decision to recommend, for appointment as judges, the names of lawyers who are not regular practitioners of the High Court Bar Association. At the meeting of the General Body at 1:30 p.m., it was decided that the lawyers will continue to boycott the courts of Orissa's Chief Justice, Justice S.K. Mishra and Justice Kumari Sanju Panda (who make up the Collegium) until June 26, when the next meeting is to be held. Meanwhile, a delegation from the High Court Bar Association will meet India's Chief Justice, the Prime Minister, the Union Minister of Law and Justice, and other officials.
INDIA – Court In Kathua Rape-Murder Case: "Poetic Justice Should Be Done For Evil Criminal Act"
The special court that convicted six individuals in the rape and murder of an eight-year-old in Jammu and Kashmir's Kathua called the crime "devilish and monstrous" and said it was committed in the most "shameful, inhuman and barbaric way", for which "poetic justice" needs to be done. Judge Dr Tejwinder Singh said "the devilish and monstrous crime" has sent shockwaves across society, and the guilty must be brought under the sword of justice. After hearing the prosecutors as well as a battery of 57 defence attorneys, the judge said, "There is nothing recorded that could demonstrate that in this situation there is a false implication (as claimed by defence attorneys) of accused individuals."

USA – American Copyright-Troll Lawyer Imprisoned For 14 Years
An American lawyer, Paul Hansmeier, who ran an x-rated online fraud scheme, has been imprisoned for 14 years. He shared copies of pornographic films online and then sued individuals who downloaded them for copyright infringement. Victims were told to pay a "settlement fee" of $3,000 to prevent further legal action. The scheme is thought to have raised about $3 million over three years for Hansmeier and his accomplice. "It's almost incalculable how much your abuse of confidence has hurt the administration of justice," said the judge at the hearing where Hansmeier was sentenced. The judge also ordered Hansmeier to repay the scam's $1.5m to 704 victims. "I'm looking forward to finally putting this whole mess behind me," the Minneapolis Star Tribune quoted him as saying after being sentenced.

INDIA – Plea For Trial In The Supreme Court After Conviction Of The Accused
The Supreme Court has referred to a larger bench the question of whether more individuals can be summoned to stand trial after the conviction of the accused. A Supreme Court bench of Justices N.V. Ramana and M.
Shantanagoudar was hearing an application from four individuals who had been ordered by a Punjab trial court to stand trial after nine individuals had already been sentenced in the case. The Supreme Court said Section 319 of the CrPC reflects two critical objectives: the courts' duty to establish the guilt of all the accused and to do complete justice, and the responsibility of the State to bring every criminal prosecution to its logical end. The court, however, said that further consideration was required on the pending legal issues: whether the trial court has the power under Section 319 of the CrPC to summon additional accused after the trial has concluded and judgment has been delivered; whether the trial court has the power to summon additional accused while the trial of certain other accused persons is pending; and, lastly, what guidelines should be followed by the competent court when exercising power under Section 319 of the CrPC.

INDIA – PIL Filed Against The Use Of Electronic Voting Machines; Supreme Court Refuses
The petitioner wanted the court to declare the use of EVMs unconstitutional and requested a re-election for the Lok Sabha by paper ballot. The petitioner not only challenged Section 61A of the Representation of the People Act, stating that voting machines should not have been permitted in the recent elections, but also sought a re-election through the paper ballot. Section 61A provides that, notwithstanding anything contained in the Act or the rules made thereunder, the giving and recording of votes by voting machines, in such manner as may be prescribed, may be adopted in such constituency or constituencies as the Election Commission may specify, having regard to the circumstances of each case.
DUBAI – Know The Law: Dubai Decreases The Bankruptcy Fee According to a law published by His Highness Shaikh Mohammad Bin Rashid Al Maktoum, Vice-President and Prime Minister of the UAE and Dubai Ruler, the Dubai government has decreased the charges for filing bankruptcy and peaceful resolution of bankruptcy allegations at Dubai Courts from Dh2.000 to Dh500. Claims lodged by shareholders against the board of directors of a public shareholding business or its executive management, if the plaintiff's stake does not exceed 10% of the company's complete stocks, will also be free. Applications for publicity or evidence of Islam, ratification of a social welfare request, and demands for death and inheritance shall also be free of charge. INDIA – 22-Week Pregnancy Of A Minor Rape Victim Terminated To Prevent Physical And Mental Health Issues; AIIMS Tells Delhi HC The All India Medical Sciences Institute (AIIMS) informed the Delhi High Court on Monday (June 17) that it had removed a minor rape victim's over-22-week pregnancy as continuing with it would have had a severe effect on the physical and mental health of the girl. It also informed the court that in the rape case, the foetus had been maintained for DNA assessment. The submission was obtained before Justice Najmi Waziri, who later disposed of the complaint moved on behalf of the rape victim requesting approval to terminate her pregnancy medically. On June 14, the high court had requested AIIMS to set up a medical board to examine whether it was necessary to end the pregnancy of the 14-year-old rape victim and, if necessary, to carry out the procedure promptly. The court also directed that a report be put before it on the choice of the medical board. It also instructed the conservation of the foetus in the rape situation for DNA assessment. 
INDIA – SC Decides To hear PIL Seeking Security For Doctors At Government Hospitals Nationwide On Monday (June 18), the Supreme Court (SC) decided to hear a Public Interest Litigation (PIL) requesting safety for doctors at public hospitals across the nation. A Deepak Gupta and Surya Kant Vacation Bench of Justices mentioned the matter for hearing today, Tuesday, June 18, 2019. The petition lodged on Friday (June 14) requested an urgent hearing and also requested instructions from the union ministries and the West Bengal government to deputize security staff at all state-run hospitals to guarantee doctors' safety and security. Doctors across the nation have protested against the assault on their West Bengal colleagues. Scores of doctors in the domestic capital's state and private hospitals, including those at AIIMS, decided on Monday to boycott job for a day. DUBAI – Know The Law: Can UAE Employers Decrease Wages Without New Contracts? It is known that you have been working for a private company in Dubai for the last eight years, and your employer intends to decrease all workers ' wages by 15%. In this respect, you want to understand if it is legitimate for the employer to decrease the remuneration of an employee and if it is necessary for the company to tell its employees by e-mail or official notification. You also want to understand if the employer is needed to provide the accrued end-of-service advantages and draft a new agreement once the worker accepts the decrease. It is unlawful for an employer to decrease the remuneration of an employee, except as provided for in Article 60 of Federal Law No. 8 of 1980 Regulating Employment Relations in the UAE (Employment Law). Article 60 of the Employment Law states: "No amount of money may be deducted from the remuneration of an employee in respect of private claims, except in some cases." 
Although the employer offers an employee with an official letter or e-mail concerning wage decrease, it will only be valid if a revised employment contract is signed and presented to Mohre AFRICA – Centurion Law Group Declares Its Plan To Pursue Public Listing Centurion Law Group ("Centurion") (Centurionlg.com) is set to become the first African legal and energy consultative company to be publicly listed this year as it is preparing to enter one of Europe's leading stock exchanges. Last year, Centurion obtained IMANI-African Lawyers on Demand to launch Centurion Plus, Africa's leading flexible legal services model offering cost savings and effective, flexible legal services across the continent. Through Centurion Plus, corporate customers across Africa can pick for temporary and project-based legal services from a pool of online 190 carefully vetted, on-demand lawyers. "Centurion has always distinguished itself by its capacity to adapt to change, make the deal and be pan-African and pro-African," said CEO NJ Ayuk. "The legal market in Africa has altered a lot, and we are proud to be a leader in legal transitions in Africa. Previous articleLegal News Shots- Top Interesting Shots of the Day Next articleLegal News Shots- Top Interesting Shots of the Day
\section{Introduction and main results} We consider the following class of equations \begin{equation}\label{peq} -\Delta u+Vu=(I_{\alpha}\ast |u|^p)|u|^{p-2}u, \quad x\in \mathbb R^N \end{equation} where $V$ is an external potential, $N\geq 3$ and $I_\alpha$ is the Riesz potential given for each $x\in\mathbb R^N\setminus\{0\}$ by $$ I_\alpha(x):=\frac{A_\alpha}{\abs{x}^{N-\alpha}},\quad \text{where}\quad A_\alpha=\frac{\Gamma((N-\alpha)/2)}{\Gamma(\alpha/2)\pi^{N/2}2^\alpha}\quad \text{ and } \alpha\in (0,N), $$ with $\Gamma$ being the Euler gamma function. Equation \eqref{peq} with $p=2$, $N=3$ and $V$ constant seems to have first appeared in the literature in the 1950s, in the work of S.~Pekar on the quantum theory of the polaron \cite{Pekar}. It was later rediscovered by Choquard in the context of the Hartree--Fock theory of one-component plasmas, and it attracted the attention of the mathematical community in the late 1970s with the papers of E.~Lieb \cite{Lieb1} and P.L.~Lions \cite{Lions2, Lions3}, which opened the way to an intensive study of \eqref{peq}, of related problems, generalizations and extensions; see \cite{MV4} for a survey. Indeed, equations of the form \eqref{peq} show up as well in the context of self-gravitating matter \cite{Penrose1} and in the study of pseudo-relativistic boson stars \cite{boson}. The wealth of applications has only in part contributed to the mathematical success and longevity of the interest in this kind of problem, as nonlocal Schr\"odinger type equations keep raising new mathematical challenges. Beyond physical motivations, ground state solutions to \eqref{peq} are of particular interest because of connections with stochastic analysis \cite{DonskerVaradhan2}. 
The energy functional related to the Choquard equation \eqref{peq} is given for \(u : \mathbb R^N \to \mathbb R\) by $$ E(u)=\frac{1}{2}\int_{\mathbb R^N}\abs{\nabla u}^2+V\abs{u}^2-\frac{1}{2p}\int_{\mathbb R^N} (I_\alpha\ast \abs{u}^{p})\abs{u}^{p} $$ and in the case of the constant potential $V\equiv 1$ it is well defined and of class $C^1$ on $H^1(\mathbb R^N)$, by means of the Hardy--Littlewood--Sobolev inequality \cite{LL}, provided the following condition holds \begin{equation}\label{subcritical_range} \frac{N+\alpha}{N}\leq p\leq \frac{N+\alpha}{N-2}. \end{equation} Actually the range \eqref{subcritical_range} turns out to be sharp for the existence of variational solutions \cite{MV3}. Indeed, at the endpoints, sometimes called respectively the lower and the upper critical exponent for the Hardy--Littlewood--Sobolev inequality, a Pohozaev type identity prevents finite energy solutions from existing. As in the local Sobolev critical case, typical scaling invariance phenomena show up, together with an explicit one parameter family of extremal functions for the Hardy--Littlewood--Sobolev inequality, see Section \ref{Preliminary}. The case in which a more general nonlinearity has upper critical growth has been recently considered in \cite{Cassani_Zhang}, where Brezis--Nirenberg type perturbations allow one to obtain ground state solutions as well as to study the associated singularly perturbed problem. In fact, in the upper critical case no reasonable perturbation of the constant potential has any influence on the ground state energy level: the lack of compactness in the upper critical case occurs regardless of the properties of the external potential. 
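At the lower critical exponent the scaling invariance can be made explicit; we record here the elementary computation, which will be used repeatedly. For $u\in H^1(\mathbb R^N)$ and $\lambda>0$, let $u_\lambda(x):=\lambda^{N/2}u(\lambda x)$, so that $$ \int_{\mathbb R^N}\abs{u_\lambda}^2=\int_{\mathbb R^N}\abs{u}^2, \qquad \int_{\mathbb R^N}\abs{\nabla u_\lambda}^2=\lambda^2\int_{\mathbb R^N}\abs{\nabla u}^2, $$ while, for $p=\frac{N+\alpha}{N}$, the change of variables $(x,y)\mapsto(\lambda^{-1}x,\lambda^{-1}y)$ yields $$ \int_{\mathbb R^N} (I_\alpha\ast \abs{u_\lambda}^{p})\abs{u_\lambda}^{p} =\lambda^{(N+\alpha)-2N+(N-\alpha)}\int_{\mathbb R^N} (I_\alpha\ast \abs{u}^{p})\abs{u}^{p} =\int_{\mathbb R^N} (I_\alpha\ast \abs{u}^{p})\abs{u}^{p}, $$ since the factor $\lambda^{N+\alpha}$ coming from $\abs{u_\lambda(x)}^{p}\abs{u_\lambda(y)}^{p}$ is exactly compensated by the Jacobian $\lambda^{-2N}$ and by the homogeneity $\lambda^{N-\alpha}$ of the kernel. Thus, at the lower critical exponent the dilation $u\mapsto u_\lambda$ leaves both the $L^2$ norm and the nonlocal term invariant, and only the gradient term is affected.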
In this paper, we consider the lower critical case, namely the following problem \begin{equation}\label{q1} -\Delta u+V_{\mu, \nu} u=(I_\alpha\ast \abs{u}^{\frac{N+\alpha}{N}})\abs{u}^{\frac{\alpha}{N}-1}u,\quad \text{in \(\mathbb R^N\)}, \end{equation} where the external Schr\"odinger potential \(V_{\mu, \nu} : \mathbb R^N \to \mathbb R\) is a perturbation of the constant potential defined as $$ V_{\mu, \nu} =1-\frac{\mu}{\nu^2 + \abs{x}^2}, \quad \text{for } \mu,\nu\in\mathbb R \text{ and } x \in \mathbb R^N. $$ This class of potentials has been considered in \cite{MV5}, where the authors prove that \eqref{q1} admits a ground state solution provided $\frac{N^2(N-2)}{4(N+1)}<\mu\leq \nu^2$. The upper bound is equivalent to requiring $V_{\mu, \nu} \ge 0$, and this was not explicitly assumed in \cite{MV5}. Moreover, in \cite{MV5} it is also proved that no nontrivial solutions exist when $\mu<\frac{(N-2)^2}{4}$. A natural question is whether \eqref{q1} admits a ground state solution in the range $$\mu\in\left[\frac{(N-2)^2}{4},\frac{N^2(N-2)}{4(N+1)}\right].$$ Following Lions \cite{Lions1}, the strategy in \cite{MV5} for proving the existence of ground state solutions to \eqref{q1} is to establish the existence of minimizers of the constrained minimization problem $$ c_{\mu, \nu}:=\inf\left\{\int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu, \nu}\abs{u}^2: G(u)=1,\,\ u\in H^1(\mathbb R^N)\right\} $$ where $$ G(u):=\int_{\mathbb R^N} (I_\alpha\ast \abs{u}^{\frac{N+\alpha}{N}})\abs{u}^{\frac{N+\alpha}{N}} $$ and eventually to remove the Lagrange multiplier by an appropriate scaling. In Section~\ref{Preliminary} below, we show that $c_{\mu, \nu}\le0$ if $\mu\ge\mu^{\nu}$, where $\mu^{\nu}$ is the best constant of the embedding $H^1(\mathbb R^N)\hookrightarrow L^2(\mathbb R^N,(\nu^2 + \abs{x}^2)^{-1}\,\mathrm{d} x)$. 
Precisely, $$ \mu^{\nu}:=\inf_{u\in H^1(\mathbb R^N)\setminus\{0\}}\frac{\displaystyle \int_{\mathbb R^N}\abs{\nabla u}^2+ \abs{u}^2}{\displaystyle \int_{\mathbb R^N}\frac{\abs{u (x)}^2}{\nu^2 + \abs{x}^2}\,\mathrm{d} x}. $$ Clearly, $\mu^{\nu}\ge\nu^2$ and, by Hardy's inequality, $\mu^{\nu}\ge(N-2)^2/4$. Actually, $\mu^{\nu}>\frac{(N-2)^2}{4} +\nu^2$, since the infimum $\mu^{\nu}$ is achieved by some function $u_0\in H^1(\mathbb R^N)\setminus\{0\}$ (see Section~\ref{sectionProofTheorem1}) and then \begin{equation}\label{lower_est_asym} \mu^{\nu} =\frac{\int_{\mathbb R^N}\abs{\nabla u_0}^2+ \abs{u_0}^2}{\int_{\mathbb R^N}\frac{1}{\nu^2 + \abs{x}^2}\abs{u_0}^2\,\mathrm{d} x} >\frac{\int_{\mathbb R^N}\abs{\nabla u_0}^2}{\int_{\mathbb R^N}\frac{\abs{u_0 (x)}^2}{\abs{x}^2}\,\mathrm{d} x} + \nu^2 \ge\frac{(N-2)^2}{4} + \nu^2. \end{equation} It can be proved as in \cite{MV5} that \(c_{\mu, \nu}\) is achieved and gives by rescaling a ground state of \eqref{q1} provided $$\frac{N^2 (N - 2)}{4 (N + 1)} < \mu < \mu^\nu.$$ A natural question is whether \eqref{q1} admits a ground state solution when $\mu\ge\mu^{\nu}$. The main purpose of this paper is to make advances in the understanding of the picture presented above, by partially answering the open questions raised so far and opening the way to new challenges. Our main results are the following: \begin{theorem}% \label{Theorem 1}% There exists a threshold $\mu_{\nu}\in[\frac{(N-2)^2}{4},\mu^{\nu})$ such that \eqref{q1} admits a positive ground state solution if and only if $\mu_{\nu}<\mu<\mu^{\nu}$, and no ground state solution exists for $\frac{(N-2)^2}{4}<\mu<\mu_{\nu}$. \end{theorem} \begin{theorem}% \label{Theorem 2}% Assume that \(N \ge 4\), or that \(N = 3\), $\frac{3}{2}<\alpha<3$ and $\ker (-\Delta + V_{\mu, \nu}) = \{0\}$. If $\mu>\max\left\{\mu^{\nu},\frac{N^2(N-2)}{4(N+1)}\right\}$, then \eqref{q1} admits a ground state solution (necessarily sign changing). 
\end{theorem} \begin{theorem}\label{Theorem 3} The best constant $\mu^{\nu}$ of the embedding $H^1(\mathbb R^N)\hookrightarrow L^2(\mathbb R^N,(\nu^2 + \abs{x}^2)^{-1}\,\mathrm{d} x)$ satisfies $$ \lim_{N\to \infty}\frac{\mu^{\nu}}{\frac{N^2(N-2)}{4(N+1)}}=1. $$ \end{theorem} As a consequence of Theorem \ref{Theorem 3}, the four quantities $\mu_{\nu},\: \mu^{\nu},\: \frac{(N-2)^2}{4},\: \frac{N^2(N-2)}{4(N+1)}$ turn out to be asymptotically equivalent as $N\to\infty$. The statement of Theorem~\ref{Theorem 1} suggests the following open question. \begin{problem} Do ground state solutions exist in the borderline cases $\mu=\mu_\nu$ and $\mu=\mu^\nu$? \end{problem} The statement of Theorem~\ref{Theorem 1} only makes sense when $\mu_{\nu}>\frac{(N-2)^2}{4}$. \begin{problem} In the case $\mu_{\nu}>\frac{(N-2)^2}{4}$, do there exist nontrivial solutions to \eqref{q1} for $\mu\in(\frac{(N-2)^2}{4},\mu_{\nu})$\,? \end{problem} For $\nu=1$ and $N \in \{3,4,5\}$, one has \(\nu^2+\frac{(N-2)^2}{4}>\frac{N^2(N-2)}{4(N+1)}\) and so $\mu^{\nu}>\frac{N^2(N-2)}{4(N+1)}$. \begin{problem} Does one have \[ \mu^{\nu}>\frac{N^2(N-2)}{4(N+1)} \] in general? \end{problem} A negative answer would suggest a further problem. \begin{problem} In the case $\mu^{\nu}<\frac{N^2(N-2)}{4(N+1)}$, do ground state solutions to \eqref{q1} exist when $\mu^{\nu}<\mu<\frac{N^2(N-2)}{4(N+1)}$\,? \end{problem} In Theorem~\ref{Theorem 2} the restriction $\frac{3}{2}<\alpha<3$ and $\ker (-\Delta + V_{\mu, \nu}) = \{0\}$ when \(N = 3\) is only used to guarantee that the ground state level is strictly below the first level at which the Palais-Smale compactness condition fails (see Section~\ref{sectionProofTheorem2}). 
More precisely, if $\Lambda$ denotes the spectrum of the eigenvalue problem \begin{equation*} -\Delta u+u=\frac{\lambda}{\nu^2 + \abs{x}^2}u,\quad u\in H^1(\mathbb R^N), \end{equation*} then Theorem~\ref{Theorem 2} gives, in dimension $N=3$, the existence of ground state solutions to \eqref{q1} in the non-resonant case, namely when $\mu\not\in\Lambda$. \begin{problem} Does Theorem~\ref{Theorem 2} remain true in dimension \(N = 3\) in the resonant case \(\mu \in \Lambda\)? \end{problem} Finally, the necessity of the assumption on \(\alpha\) is not clear. \begin{problem} Does Theorem~\ref{Theorem 2} remain true in dimension $N=3$ for $\alpha\in (0,3/2)$\,? \end{problem} \section{Proof of Theorem \ref{Theorem 1}}\label{Preliminary} \label{sectionProofTheorem1} Before proving Theorem~\ref{Theorem 1}, we introduce some preliminary results. First, the following Hardy--Littlewood--Sobolev inequality will be frequently used in the sequel. \begin{lemma}% [Hardy--Littlewood--Sobolev inequality {\cite[Theorem 4.3]{LL}}] \label{hls} Let $s, r>1$ and $0<\alpha<N$ with $1/s+1/r=1+\alpha/N$, $f\in L^s(\mathbb R^N)$ and $g\in L^r(\mathbb R^N)$. Then there exists a positive constant $C(s, N, \alpha)$ (independent of $f, g$) such that $$ \left|\int_{\mathbb R^N}\int_{\mathbb R^N}f(x)\abs{x - y}^{\alpha-N}g(y)\,\mathrm{d} x\mathrm{d} y\right|\le C(s, N, \alpha)\|f\|_s\|g\|_r. $$ In particular, if $s=r=2N/(N+\alpha)$, the sharp constant is given by $$ \mathcal{C}_\alpha:=\pi^{\frac{N-\alpha}{2}}\frac{\Gamma(\alpha/2)}{\Gamma((N+\alpha)/2)}\left[\frac{\Gamma(N/2)}{\Gamma(N)}\right]^{-\alpha/N}. $$ \end{lemma} The energy functional associated to the Choquard equation \eqref{q1} is given by $$ J_{\mu, \nu} (u)=\frac{1}{2}\int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu, \nu}\abs{u}^2-\frac{N}{2(N+\alpha)}\int_{\mathbb R^N} (I_\alpha\ast \abs{u}^{\frac{N+\alpha}{N}})\abs{u}^{\frac{N+\alpha}{N}},\,\, u\in H^1(\mathbb R^N). 
$$ Let $$ c_\infty:=\inf\left\{\int_{\mathbb R^N}\abs{u}^2: G(u)=1,\,\ u\in L^2(\mathbb R^N)\right\}, $$ then it follows from \cite[Theorem 4.3]{LL} that $c_\infty>0$ and from \cite[Proposition 5]{MV5} that $c_{\mu, \nu}\le c_\infty$ for any $\mu,\nu > 0$. The following monotonicity property holds. \begin{lemma}\label{monotonicity} If \(\mu_1 \le \mu_2\) and $\nu_1 \ge \nu_2$, then $c_{\mu_1, \nu_1} \ge c_{\mu_2, \nu_2}$. If moreover \(c_{\mu_1, \nu_1}\) is achieved, then equality holds if and only if \(\mu_1 = \mu_2\) and \(\nu_1 = \nu_2\). \end{lemma} \begin{proof} For every \(u \in H^1 (\mathbb R^N)\), we observe that \[ \int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu_1, \nu_1}\abs{u}^2 \ge \int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu_2, \nu_2}\abs{u}^2; \] the conclusion follows by taking the infimum over the functions \(u \in H^1 (\mathbb R^N)\) that satisfy the constraint \(G (u) = 1\). If we now assume that \(c_{\mu_1, \nu_1}\) is achieved for some function \(u \in H^1 (\mathbb R^N)\) such that \(G (u) = 1\), then \[ c_{\mu_2, \nu_2} \le \int_{\mathbb R^N} \abs{\nabla u}^2+V_{\mu_2, \nu_2}\abs{u}^2 = c_{\mu_1, \nu_1} - \int_{\mathbb R^N} (V_{\mu_1, \nu_1} - V_{\mu_2, \nu_2}) \abs{u}^2, \] where the second integral is positive if either \(\mu_1 < \mu_2\) or $\nu_1 > \nu_2$. \end{proof} The Nehari manifold $\mathcal{N}_{\mu, \nu}$ associated to \eqref{q1} is given by $$ \mathcal{N}_{\mu, \nu}:=\left\{u\in H^1(\mathbb R^N)\setminus\{0\}:\int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu, \nu}\abs{u}^2=\int_{\mathbb R^N} (I_\alpha\ast \abs{u}^{\frac{N+\alpha}{N}})\abs{u}^{\frac{N+\alpha}{N}}\right\}. $$ It is easy to show that the set $\mathcal{N}_{\mu, \nu}$ is a $C^1$-manifold of codimension 1 for any $\mu,\nu > 0$. Moreover, for $\mu<\mu^{\nu}$ the manifold $\mathcal{N}_{\mu, \nu}$ is regular in the sense that zero is isolated in $\mathcal{N}_{\mu, \nu}$. 
In fact, for $\mu<\mu^{\nu}$ and any $u\in\mathcal{N}_{\mu, \nu}$, \begin{equation*} \begin{split} \int_{\mathbb R^N} (I_\alpha\ast \abs{u}^{\frac{N+\alpha}{N}})\abs{u}^{\frac{N+\alpha}{N}}=&\int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu, \nu}\abs{u}^2\\ =&\frac{\mu}{\mu^{\nu}}\left[\int_{\mathbb R^N}\abs{\nabla u}^2 + \abs{u}^2-\mu^{\nu}\int_{\mathbb R^N}\frac{\abs{u (x)}^2}{\nu^2 + \abs{x}^2}\,\mathrm{d} x\right]\\ &\qquad +\Bigl(1- \frac{\mu}{\mu^{\nu}}\Bigr)\int_{\mathbb R^N}\abs{\nabla u}^2+ \abs{u}^2\\ \ge&\Bigl(1- \frac{\mu}{\mu^{\nu}}\Bigr) \int_{\mathbb R^N}\abs{\nabla u}^2+ \abs{u}^2. \end{split} \end{equation*} By virtue of the Hardy--Littlewood--Sobolev inequality (Lemma~\ref{hls}), there exists $C>0$ depending on $\mu$ and $\nu$ such that $\nor{u}\ge C$ for any $u\in\mathcal{N}_{\mu, \nu}$. Let us denote the least energy by $$ m_{\mu, \nu}:=\inf_{u\in\mathcal{N}_{\mu, \nu}}J_{\mu, \nu} (u). $$ \begin{lemma}\label{lemma2} If $\mu<\mu^{\nu}$, then $$ m_{\mu, \nu}=\frac{\alpha}{2(N+\alpha)}c_{\mu, \nu}^{\frac{N+\alpha}{\alpha}}. $$ \end{lemma} \begin{proof} We observe that if $\mu<\mu^{\nu}$, then \[ \int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu, \nu}\abs{u}^2>0 \] for any $u\in H^1(\mathbb R^N)\setminus\{0\}$. Let $u_t=tu$, where \[t=\left(\int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu, \nu}\abs{u}^2\right)^{-\frac{N}{2(N+\alpha)}}, \] then $G(u_t)=1$ and by \eqref{yong}, \begin{equation*} \begin{split} m_{\mu, \nu}&=\frac{\alpha}{2(N+\alpha)}\inf_{u\in\mathcal{N}_{\mu, \nu}}\left(\int_{\mathbb R^N}|\nabla u_t|^2+V_{\mu, \nu}|u_t|^2\right)^{\frac{N+\alpha}{\alpha}}\\ &\ge\frac{\alpha}{2(N+\alpha)}c_{\mu, \nu}^{\frac{N+\alpha}{\alpha}}. 
\end{split} \end{equation*} On the other hand, for every $u\in H^1(\mathbb R^N)$ with $G(u)=1$, let \[ s=\left(\int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu, \nu}\abs{u}^2\right)^{\frac{N}{2 \alpha}}, \] then $u_s=su\in\mathcal{N}_{\mu, \nu}$ and \begin{equation*} \begin{split} \inf_{v\in\mathcal{N}_{\mu, \nu}}J_{\mu, \nu} (v)&=\frac{\alpha}{2(N+\alpha)}\inf_{v\in\mathcal{N}_{\mu, \nu}}\int_{\mathbb R^N}\abs{\nabla v}^2+V_{\mu, \nu}\abs{v}^2\\ &\le\frac{\alpha}{2(N+\alpha)}\inf\left\{\int_{\mathbb R^N}|\nabla u_s|^2+V_{\mu, \nu}|u_s|^2: G(u)=1,\,\ u\in H^1(\mathbb R^N)\right\}\\ &=\frac{\alpha}{2(N+\alpha)}\inf\left\{\left(\int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu, \nu}\abs{u}^2\right)^{\frac{N+\alpha}{\alpha}}: G(u)=1,\,\ u\in H^1(\mathbb R^N)\right\}\\ &=\frac{\alpha}{2(N+\alpha)}c_{\mu, \nu}^{\frac{N+\alpha}{\alpha}}.\qedhere \end{split} \end{equation*} \end{proof} \begin{lemma}\label{critical0}There exists \(u \in H^1 (\mathbb R^N)\) such that \[ \int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu^{\nu}, \nu}\abs{u}^2 = 0 \] and \(G (u) = 1\). In particular, $c_{\mu^{\nu}, \nu}=0$. \end{lemma} \begin{proof} Obviously, we have $c_{\mu^{\nu}, \nu}\ge 0$. By the definition of $\mu^{\nu}$, there exists a sequence $\{u_n\}_{n \in \mathbb{N}}$ in $H^1(\mathbb R^N)$ such that, as $n\to\infty$, $$ \int_{\mathbb R^N}\abs{\nabla u_n}^2+ \abs{u_n}^2\to \mu^{\nu},\quad \text{ and} \quad \int_{\mathbb R^N}\frac{\abs{u_n (x)}^2}{\nu^2 + \abs{x}^2}\,\mathrm{d} x=1. $$ Up to a subsequence, there exists $u_0\in H^1(\mathbb R^N)$ such that $u_n\rightharpoonup u_0$ weakly in $H^1(\mathbb R^N)$ and almost everywhere in $\mathbb R^N$, as $n\to\infty$. 
Noting that $1/(\nu^2 + \abs{x}^2)\to 0$ as $\abs{x}\to\infty$, we have by Rellich's compactness theorem \[ \int_{\mathbb R^N}\frac{|u_0(x)|^2}{\nu^2 + \abs{x}^2}\,\mathrm{d} x = \lim_{n \to \infty} \int_{\mathbb R^N}\frac{|u_n (x)|^2}{\nu^2 + \abs{x}^2}\,\mathrm{d} x =1 \] and by the weak lower-semicontinuity of the norm, \[ \int_{\mathbb R^N}\abs{\nabla u_0}^2+|u_0|^2\le \lim_{n \to \infty} \int_{\mathbb R^N}\abs{\nabla u_n}^2+|u_n|^2 = \mu^{\nu}, \] which implies that $u_0$ is a minimizer of $\mu^{\nu}$. Therefore, $c_{\mu^{\nu}, \nu}=0$. Since \(u_0 \ne 0\), we reach the conclusion by a suitable scaling of \(u_0\). \end{proof} \begin{proof}[Proof of Theorem~\ref{Theorem 1}] Since the Choquard equation \eqref{q1} does not admit any nontrivial solution if $\mu<\frac{(N-2)^2}{4}$, it follows that $c_{\mu, \nu}=c_\infty$ if $\mu<\frac{(N-2)^2}{4}$. Noting that $c_{\mu^{\nu}, \nu}=0$, by Lemma~\ref{monotonicity} there exists $\mu_{\nu}\in[\frac{(N-2)^2}{4},\mu^{\nu})$ such that $$ c_{\mu, \nu}=c_\infty \text{ if $0<\mu<\mu_{\nu}$} \qquad \text{ and } \qquad c_{\mu, \nu}<c_\infty \text{ if $\mu_{\nu}<\mu<\mu^{\nu}$.} $$ Next we show that $c_{\mu, \nu}>0$ if $\mu_{\nu}<\mu<\mu^{\nu}$. Indeed, assume by contradiction that for some $\mu_{\nu}<\mu<\mu^{\nu}$, one has $c_{\mu, \nu}=0$. Let $\{v_n\}_{n \in \mathbb{N}}$ be a minimizing sequence for $c_{\mu,\nu}$, then \begin{equation*} \begin{split} 0&=\lim_{n\to \infty}\int_{\mathbb R^N}\abs{\nabla v_n}^2+V_{\mu, \nu}\abs{v_n}^2\\ &\ge\liminf_{n\to \infty}\int_{\mathbb R^N}\abs{\nabla v_n}^2+V_{\mu^{\nu}, \nu}\abs{v_n}^2\ge c_{\mu^{\nu}, \nu}=0, \end{split} \end{equation*} which implies that \[ \lim_{n \to \infty} \int_{\mathbb R^N}\frac{\abs{v_n (x)}^2}{\nu^2 + \abs{x}^2}\,\mathrm{d} x = 0, \] and then $v_n\to 0$ strongly in $H^1(\mathbb R^N)$, as $n\to\infty$, which contradicts the fact that $G(v_n)=1$. 
Then for $\mu_{\nu}<\mu<\mu^{\nu}$, by virtue of \cite[Theorem 3]{MV5}, the level $c_{\mu, \nu}$ is achieved and $m_{\mu, \nu}$ as well. \medbreak Finally, to conclude the proof, it remains to prove that for any $(N-2)^2/4<\mu<\mu_{\nu}$, $c_{\mu, \nu}$ cannot be achieved. In fact, otherwise, by the equality case in Lemma~\ref{monotonicity}, we would have \(c_{\lambda, \nu} < c_\infty\) if \(\lambda > \mu\), contradicting the very definition of \(\mu_\nu\). \end{proof} \section{Proof of Theorem~\ref{Theorem 2}} \label{sectionProofTheorem2} For $\mu>\mu^{\nu}$, the set $\mathcal{N}_{\mu, \nu}$ is still a $C^1$-manifold of codimension 1. However, in contrast to the case $\mu<\mu^{\nu}$, the manifold $\mathcal{N}_{\mu, \nu}$ is no longer regular for $\mu>\mu^{\nu}$. Precisely, we next show that for $\mu>\mu^{\nu}$, $0$ is adherent to $\mathcal{N}_{\mu, \nu}$, which in turn implies $\inf_{u\in\mathcal{N}_{\mu, \nu}}J_{\mu, \nu} (u)\le0$. This is a main obstruction in proving the existence of a nontrivial critical point. \begin{lemma}\label{lemma1} If $\mu>\mu^{\nu}$, then $$ \inf_{u\in\mathcal{N}_{\mu, \nu}}J_{\mu, \nu} (u)=0. $$ \end{lemma} \begin{proof} For $\mu>\mu^{\nu}$, we have \begin{equation} \label{yong} \begin{split} \inf_{u\in\mathcal{N}_{\mu, \nu}}J_{\mu, \nu} (u)&=\frac{\alpha}{2(N+\alpha)}\inf_{u\in\mathcal{N}_{\mu, \nu}}\int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu, \nu}\abs{u}^2\\ &=\frac{\alpha}{2(N+\alpha)}\inf_{u\in\mathcal{N}_{\mu, \nu}}\left[\frac{\int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu, \nu}\abs{u}^2}{\left(\int_{\mathbb R^N} (I_\alpha\ast \abs{u}^{\frac{N+\alpha}{N}})\abs{u}^{\frac{N+\alpha}{N}}\right)^{\frac{N}{N+\alpha}}}\right]^{\frac{N+\alpha}{\alpha}}\\ &=:\frac{\alpha}{2(N+\alpha)}d_\mu^{\frac{N+\alpha}{\alpha}}. \end{split} \end{equation} Next we show that actually $d_\mu=0$. 
Clearly, \begin{multline*} d_\mu=\inf\biggl\{\int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu, \nu}\abs{u}^2: u\in H^1(\mathbb R^N), \int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu, \nu}\abs{u}^2>0\\ \text{ and } G(u)=1\biggr\}. \end{multline*} Let $u_0$ be the minimizer of $c_{\mu^{\nu}, \nu}$ obtained in Lemma~\ref{critical0}, then $\int_{\mathbb R^N}\abs{\nabla u_0}^2+V_{\mu, \nu}|u_0|^2<0$ for any $\mu>\mu^{\nu}$. On the other hand, for any $\mu>\mu^{\nu}$, there exists $u_1\in H^1(\mathbb R^N)$ such that $$ \int_{\mathbb R^N}|\nabla u_1|^2\ge(\mu+2)\int_{\mathbb R^N}|u_1|^2>0, $$ which implies $\int_{\mathbb R^N}|\nabla u_1|^2+V_{\mu, \nu}|u_1|^2>0$. By scaling, there exist $\Tilde{u}_0,\Tilde{u}_1\in H^1(\mathbb R^N)$ such that $G(\Tilde{u}_i)=1,\,\, i=0,1$ and $$ \int_{\mathbb R^N}|\nabla \Tilde{u}_0|^2+V_{\mu, \nu}|\Tilde{u}_0|^2<0,\,\,\int_{\mathbb R^N}|\nabla \Tilde{u}_1|^2+V_{\mu, \nu}|\Tilde{u}_1|^2>0. $$ Let $\gamma(t)=t\Tilde{u}_0+(1-t)\Tilde{u}_1$, $t\in[0,1]$, then $0<\inf_{t\in[0,1]}\|\gamma(t)\|\le\sup_{t\in[0,1]}\|\gamma(t)\|<+\infty$. Let $\Tilde{\gamma} (t)=\gamma(t)[G(\gamma(t))]^{-\frac{N}{2(N+\alpha)}}$, then $G(\Tilde{\gamma} (t))=1$ for $t\in[0,1]$ and $$ \int_{\mathbb R^N}|\nabla \Tilde{\gamma} (0)|^2+V_{\mu, \nu}|\Tilde{\gamma} (0)|^2>0,\,\,\, \int_{\mathbb R^N}|\nabla \Tilde{\gamma} (1)|^2+V_{\mu, \nu}|\Tilde{\gamma} (1)|^2<0. $$ By the continuity of $\Tilde{\gamma}$, there exists a sequence $\{\Tilde{\gamma} (t_n)\}_{n \in \mathbb{N}}$ such that $\int_{\mathbb R^N}|\nabla \Tilde{\gamma} (t_n)|^2+V_{\mu, \nu}|\Tilde{\gamma} (t_n)|^2\to 0^+$, as $n\to\infty$, where $t_n\in[0,1]$. As a consequence, we get $d_{\mu}=0$ and there exists $\{\lambda_n\}$ with $\lambda_n\Tilde{\gamma} (t_n)\in\mathcal{N}_{\mu, \nu}$ and $\lambda_n\to 0$, as $n\to\infty$. \end{proof} \begin{corollary} As a byproduct of the proof of Lemma~\ref{lemma1}, we have $c_{\mu,\nu}<0$ if $\mu>\mu^{\nu}$. 
\end{corollary} \subsection{Eigenvalues and eigenfunctions.} Consider the eigenvalue problem \begin{equation*} -\Delta u+u=\frac{\lambda}{\nu^2 + \abs{x}^2}u,\,\,u\in H^1(\mathbb R^N). \end{equation*} Observe that this problem is equivalent to the eigenvalue problem $T(u)=\frac{1}{\lambda}u$, where $T: H^1(\mathbb R^N)\to H^1(\mathbb R^N)$, $$ T(u):=(-\Delta+I)^{-1}\circ\mathcal{K} (u),\,\, \mathcal{K} (u)=u/(\nu^2 + \abs{x}^2),\,\,u\in H^1(\mathbb R^N). $$ Since $1/(\nu^2 + \abs{x}^2)\to 0$ as $\abs{x}\to\infty$, the linear operator $\mathcal{K}: H^1(\mathbb R^N)\to L^2(\mathbb R^N)$ is compact. It is well known that $(-\Delta+I)^{-1}: L^2(\mathbb R^N)\to H^1(\mathbb R^N)$ is continuous. Then $T$ is compact and self-adjoint, which implies that there exists a sequence of eigenvalues $\{\lambda_n\}$ with finite multiplicities such that $$ 0<\mu^{\nu}=\lambda_1<\lambda_2\le\cdots\le\lambda_n\le\cdots \to +\infty,\,\, n\to \infty $$ and the associated normalized eigenfunctions $\{\varphi_n\}$ satisfy for any $i,j\in\mathbb{N}$ and $i\not=j$, $$ \int_{\mathbb R^N}\nabla \varphi_i\nabla \varphi_j+\varphi_i\varphi_j=0,\,\,\int_{\mathbb R^N}\frac{1}{\nu^2 + \abs{x}^2}|\varphi_i|^2\,\mathrm{d} x=1. $$ Moreover, noting that for any $n$, $$ \lim_{a\to 0^+}\sup_{x\in\mathbb R^N}\int_{B_a(x)}\frac{1}{\abs{x - y}^{N-2}}\Bigl(1+\frac{\lambda_n}{\nu^2+|y|^2}\Bigr)\,\mathrm{d} y=0, $$ by virtue of \cite[Theorem C.2.5, C.3.4]{Simon}, there exist $C_n,\delta_n>0$ such that $$ |\varphi_n(x)| + |\nabla \varphi_n(x)|\le C_n\exp{(-\delta_n\abs{x})},\,\, x\in\mathbb R^N. $$ From now on, we assume $\mu>\frac{N^2(N-2)}{4(N+1)}$ and $\mu\in[\lambda_n,\lambda_{n+1})$ if $N\ge4$ or $\mu\in(\lambda_n,\lambda_{n+1})$ if $N=3$. Let $$ E^-={\rm span}\{\varphi_1,\varphi_2,\cdots,\varphi_n\},\,\, E^+=\overline{{\rm span}\{\varphi_{n+1},\varphi_{n+2},\cdots\}}, $$ then $H^1(\mathbb R^N)=E^-\oplus E^+$. 
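With this notation at hand, the sign of the quadratic form on $E^-$ can be read off directly (an elementary observation, recorded here for clarity). Since for $i\ne j$ one has $\lambda_j\int_{\mathbb R^N}\frac{\varphi_i\varphi_j}{\nu^2+\abs{x}^2}\,\mathrm{d} x=\int_{\mathbb R^N}\nabla \varphi_i\nabla \varphi_j+\varphi_i\varphi_j=0$, the eigenfunctions are orthogonal in $L^2(\mathbb R^N,(\nu^2+\abs{x}^2)^{-1}\,\mathrm{d} x)$ as well, so that every $v=\sum_{i=1}^{n}c_i\varphi_i\in E^-$ satisfies $$ \int_{\mathbb R^N}\abs{\nabla v}^2+V_{\mu, \nu}\abs{v}^2 =\sum_{i=1}^{n}c_i^2\left(\int_{\mathbb R^N}\abs{\nabla \varphi_i}^2+\abs{\varphi_i}^2 -\mu\int_{\mathbb R^N}\frac{\abs{\varphi_i}^2}{\nu^2+\abs{x}^2}\,\mathrm{d} x\right) =\sum_{i=1}^{n}(\lambda_i-\mu)\,c_i^2\le 0, $$ because $\mu\ge\lambda_n\ge\lambda_i$ for every $i\le n$ and the $\varphi_i$ are normalized in the weighted $L^2$ norm.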
\subsection{Energy estimates.} For any $\varepsilon > 0$, set $$ u_{\varepsilon} (x)=\varepsilon^{\frac{N}{2}}U(\varepsilon x), $$ where $U(x)=C\nu^{\frac{N}{2}} (\nu^2 + \abs{x}^2)^{-\frac{N}{2}}$ is a minimizer of $c_\infty$. Following \cite{MV5}, we have \begin{lemma}\label{lemma4} Assume $\mu>\frac{N^2(N-2)}{4(N+1)}$. Then, for $\varepsilon>0$ small enough, we have $$ \int_{\mathbb R^N}\abs{\nabla u_{\varepsilon}}^2 +V_{\mu, \nu}\abs{u_{\varepsilon}}^2>0, $$ and $$ \lim_{\varepsilon \to 0}\varepsilon^{-2}\int_{\mathbb R^N}\Bigl[\abs{\nabla u_{\varepsilon} (x)}^2-\frac{\mu}{\nu^2 + \abs{x}^2}\abs{u_{\varepsilon} (x)}^2\Bigr]\,\mathrm{d} x<0. $$ \end{lemma} \begin{proof} Observe that $$ \int_{\mathbb R^N}\abs{\nabla u_{\varepsilon}}^2=\varepsilon^2\int_{\mathbb R^N}|\nabla U|^2<+\infty,\,\,\int_{\mathbb R^N} (I_\alpha\ast \abs{u_{\varepsilon}}^{\frac{N+\alpha}{N}})\abs{u_{\varepsilon}}^{\frac{N+\alpha}{N}}=1 $$ and $$ \int_{\mathbb R^N}\abs{u_{\varepsilon}}^2=c_\infty,\,\,\,\int_{\mathbb R^N}\abs{\nabla u_{\varepsilon}}^2 +V_{\mu, \nu}\abs{u_{\varepsilon}}^2=c_\infty+\varepsilon^{2}\mathcal{I}_\mu(\varepsilon), $$ where $$ \mathcal{I}_\mu(\varepsilon)=\varepsilon^{-2}\int_{\mathbb R^N}\Bigl[\abs{\nabla u_{\varepsilon} (x)}^2-\frac{\mu \abs{u_{\varepsilon} (x)}^2}{\nu^2 + \abs{x}^2}\Bigr]\,\mathrm{d} x. 
$$ For some $c>0$, by Lebesgue's dominated convergence theorem, if $\mu>\frac{N^2(N-2)}{4(N+1)}$ we obtain \begin{equation*} \begin{split} \mathcal{I}_\mu(\varepsilon)&=\nu^{-2}\int_{\mathbb R^N}\Bigl[\frac{N^2(N-2)}{4(N+1)}-\frac{\mu\abs{x}^2}{\varepsilon^2+ \abs{x}^2}\Bigr]\frac{c}{\abs{x}^2(1 + \abs{x}^2)^N}\,\mathrm{d} x\\ &\to \nu^{-2}\Bigl[\frac{N^2(N-2)}{4(N+1)}-\mu\Bigr]\int_{\mathbb R^N}\frac{c}{\abs{x}^2(1 + \abs{x}^2)^N}\,\mathrm{d} x<0,\,\,\text{as $\varepsilon \to 0$.}\qedhere \end{split} \end{equation*} \end{proof} Let us recall for the convenience of the reader the following two elementary lemmas: \begin{lemma}{\cite{MMVS}}\label{norm} Define $$ \nor{u}_\ast:=\left[\int_{\mathbb R^N} (I_\alpha\ast \abs{u}^{\frac{N+\alpha}{N}})\abs{u}^{\frac{N+\alpha}{N}}\right]^{\frac{N}{2(N+\alpha)}},\,\, u\in L^2(\mathbb R^N), $$ then $\|\cdot\|_\ast$ is a norm in $L^2(\mathbb R^N)$. \end{lemma} \begin{proof} We need to prove the triangle inequality. By the Hardy--Littlewood--Sobolev inequality, for any $u\in L^2(\mathbb R^N)$ one has $$ \int_{\mathbb R^N} (I_\alpha\ast \abs{u}^{\frac{N+\alpha}{N}})\abs{u}^{\frac{N+\alpha}{N}}\le\mathcal{C}_\alpha\nor{u}_2^{\frac{2(N+\alpha)}{N}}<+\infty. $$ We also observe that by the semigroup property of Riesz potentials, we have \[ \nor{u}_* = \biggl(\int_{\mathbb R^N} \abs{I_{\alpha/2} \ast \abs{u}^\frac{N + \alpha}{N}}^2 \biggr)^\frac{N}{2(N + \alpha)}. \] Let \(u, v \in L^2 (\mathbb R^N)\). 
By the Minkowski integral inequality, we have for almost every \(x \in \mathbb R^N\), \[ \bigl(I_{\alpha/2} \ast \abs{u + v}^\frac{N + \alpha}{N}\bigr)^\frac{N}{N + \alpha} \le \bigl(I_{\alpha/2} \ast \abs{u}^\frac{N + \alpha}{N}\bigr)^\frac{N}{N + \alpha} + \bigl(I_{\alpha/2} \ast \abs{v}^\frac{N + \alpha}{N}\bigr)^\frac{N}{N + \alpha}, \] and thus by integrating and using again the Minkowski inequality we get \[ \begin{split} \nor{u + v}_* &\le \biggl(\int_{\mathbb R^N} \bigl(\bigl(I_{\alpha/2} \ast \abs{u}^\frac{N + \alpha}{N}\bigr)^\frac{N}{N + \alpha} + \bigl(I_{\alpha/2} \ast \abs{v}^\frac{N + \alpha}{N}\bigr)^\frac{N}{N + \alpha} \bigr)^\frac{2(N + \alpha)}{N} \biggr)^\frac{N}{2(N + \alpha)}\\ &\le \biggl(\int_{\mathbb R^N}\abs{I_{\alpha/2}\ast \abs{u}^{\frac{N+\alpha}{N}}}^2\biggr)^{\frac{N}{2(N+\alpha)}} + \biggl(\int_{\mathbb R^N}\abs{I_{\alpha/2}\ast \abs{v}^{\frac{N+\alpha}{N}}}^2 \biggr)^{\frac{N}{2(N+\alpha)}}\\ &= \nor{u}_* + \nor{v}_*.\qedhere \end{split} \] \end{proof} \begin{lemma} \label{pointInequality} For $p\in(1,2)$ and \(a, b \in \mathbb R\), we have \[ |\abs{a}^p + \abs{b}^p -\abs{a + b}^p| \le 2^{2 - p} p\, \abs{a}\, \abs{b}^{p - 1}. \] \end{lemma} \begin{proof} We have \[ \abs{a}^p + \abs{b}^p - \abs{a + b}^p =p a \int_0^1 \abs{t a}^{p - 2} t a - \abs{t a + b}^{p - 2} (t a + b)\,\mathrm{d} t. \] We observe that the absolute value of the integrand is maximal when \(ta = -\frac{b}{2}\), and thus \[ \bigabs{\abs{t a}^{p - 2} t a - \abs{t a + b}^{p - 2} (t a + b)} \le 2^{2 - p} \abs{b}^{p - 1}.\qedhere \] \end{proof} \begin{lemma}\label{estimate} Let $\mu>\frac{N^2(N-2)}{4(N+1)}$. If $\mu\in[\lambda_n,\lambda_{n+1})$ when $N\ge4$ or $\mu\in(\lambda_n,\lambda_{n+1})$ when $N = 3$, we have for $\varepsilon > 0$ small enough, $$ \sup_{w\in\hat{E} (u_{\varepsilon})}J_{\mu, \nu} (w)<\frac{\alpha}{2(N+\alpha)}c_\infty^{\frac{N+\alpha}{\alpha}}, $$ where $\hat{E} (u_{\varepsilon}):=\{w\in H^1(\mathbb R^N): w=tu_{\varepsilon} +v,\,\,t\ge 0,\,\,v\in E^-\}$. 
\end{lemma} \begin{proof} \resetconstant By Lemma~\ref{pointInequality}, we have for every $x\in\mathbb R^N$, $\varepsilon > 0$, $v\in E^-$ and $t > 0$ $$ |\abs{t u_{\varepsilon} + v}^{\frac{N+\alpha}{N}} -\abs{t u_{\varepsilon}}^{\frac{N+\alpha}{N}}-\abs{v}^{\frac{N+\alpha}{N}}| \le \Cl{cstxnyo} \abs{t u_{\varepsilon}} \abs{v}^{\frac{\alpha}{N}}, $$ where $\Cr{cstxnyo}=2^{(N-\alpha)/N} (N+\alpha)/N$, from which we get \begin{multline} \label{esti} \int_{\mathbb R^N} (I_\alpha\ast \abs{t u_{\varepsilon} + v}^{\frac{N+\alpha}{N}})\abs{t u_{\varepsilon} + v}^{\frac{N+\alpha}{N}}\\ \ge t^{\frac{2(N+\alpha)}{N}} +2t^{\frac{N+\alpha}{N}}\int_{\mathbb R^N} (I_\alpha\ast \abs{u_{\varepsilon}}^{\frac{N+\alpha}{N}})\abs{v}^{\frac{N+\alpha}{N}} +\int_{\mathbb R^N} (I_\alpha\ast \abs{v}^{\frac{N+\alpha}{N}})\abs{v}^{\frac{N+\alpha}{N}}\\ - 2 \Cr{cstxnyo} \int_{\mathbb R^N} (I_\alpha \ast (\abs{t u_\varepsilon}^\frac{N + \alpha}{N} + \abs{v}^\frac{N + \alpha}{N})) \abs{t u_\varepsilon} \abs{v}^\frac{\alpha}{N}. \end{multline} By the Hardy--Littlewood--Sobolev inequality, we have \begin{multline*} \int_{\mathbb R^N} (I_\alpha \ast \abs{t u_\varepsilon}^\frac{N + \alpha}{N} + \abs{v}^\frac{N + \alpha}{N}) \abs{t u_\varepsilon} \abs{v}^\frac{\alpha}{N}\\ \le \C (\nor{t u_\varepsilon}_{L^2} + \nor{v}_{L^2} )^\frac{N + \alpha}{N} \Bigl(\int_{\mathbb R^N} \abs{u_\varepsilon}^\frac{2N}{N + \alpha}\abs{v}^\frac{2 \alpha}{N + \alpha} \Bigr)^\frac{N + \alpha}{2N}. \end{multline*} Recalling that $|u_{\varepsilon} (x)|\le \C \varepsilon^{N/2}$ for any $x\in\mathbb R^N$, that all the norms in $E^-$ are equivalent (since ${\rm dim} (E^-)<+\infty$) and that $E^- \subset L^{2 \alpha/(N + \alpha)} (\mathbb R^N)$, we have \[ \int_{\mathbb R^N} (I_\alpha \ast \abs{t u_\varepsilon}^\frac{N + \alpha}{N} + \abs{v}^\frac{N + \alpha}{N}) \abs{t u_\varepsilon} \abs{v}^\frac{\alpha}{N} \le \C (t^\frac{N + \alpha}{N} \nor{v}^{\frac{\alpha}{N}} + \nor{v}^\frac{N+ 2\alpha}{N}) t \varepsilon^\frac{N}{2}. 
\] For any $t>0$, \begin{multline*} J_{\mu, \nu} (tu_{\varepsilon} +v)\le\frac{t^2}{2}\int_{\mathbb R^N}\abs{\nabla u_{\varepsilon}}^2+V_{\mu, \nu}\abs{u_{\varepsilon}}^2+t\int_{\mathbb R^N}\nabla u_{\varepsilon}\cdot \nabla v+V_{\mu, \nu} u_{\varepsilon} v\\ -\frac{N}{2(N+\alpha)}\int_{\mathbb R^N} (I_\alpha\ast \abs{t u_{\varepsilon} + v}^{\frac{N+\alpha}{N}})\abs{t u_{\varepsilon} + v}^{\frac{N+\alpha}{N}}. \end{multline*} Notice that $\abs{\nabla u_{\varepsilon}} \le C \varepsilon^{\frac{N}{2} + 1}$ and $\abs{\nabla v}\in L^r(\mathbb R^N)$ for any $r>0$. By Lemma~\ref{lemma4} \begin{equation*} \left|\int_{\mathbb R^N}\nabla u_{\varepsilon}\cdot \nabla v+V_{\mu, \nu}u_{\varepsilon} v\right| \le\|\nabla u_{\varepsilon}\|_{\infty}\|\nabla v\|_{L^1}+(1+\mu)\|u_{\varepsilon}\|_{\infty}\nor{v}_1 \le \Cl{cstNq} \varepsilon^{\frac{N}{2}} \nor{v}. \end{equation*} Then we have \begin{multline*} J_{\mu, \nu} (tu_{\varepsilon} +v) \le\frac{c_\infty}{2}t^2-\Cl{cstGap} t^2 \varepsilon^2+\Cr{cstNq} t \varepsilon^{\frac{N}{2}} \nor{v}\\ -\frac{N}{2(N+\alpha)}\int_{\mathbb R^N} (I_\alpha\ast \abs{t u_{\varepsilon} + v}^{\frac{N+\alpha}{N}})\abs{t u_{\varepsilon} + v}^{\frac{N+\alpha}{N}}. \end{multline*} By Lemma~\ref{norm} and by \eqref{esti}, there exists $\Cl{cstEquiv} >0$ (independent of $v$) such that \begin{multline} \label{ineqAsymptBeforeYoung} J_{\mu, \nu} (tu_{\varepsilon} +v) \le\frac{c_\infty}{2}t^2-\frac{N}{2(N+\alpha)}t^{\frac{2(N+\alpha)}{N}}- \Cr{cstGap} t^2 \varepsilon^2 -\Cr{cstEquiv}\nor{v}^{\frac{2(N+\alpha)}{N}}\\ + \Cr{cstNq} t \varepsilon^{\frac{N}{2}} \nor{v} + \Cl{cstremxdxr} (t^\frac{N + \alpha}{N} \nor{v}^{\frac{\alpha}{N}} + \nor{v}^\frac{N+ 2\alpha}{N}) t \varepsilon^\frac{N}{2}. 
\end{multline} By Young's inequality the following hold: \begin{gather} \label{ineqNewYoung}\Cr{cstNq} t \varepsilon^{\frac{N}{2}} \nor{v} \le \tfrac{\Cr{cstEquiv}}{3}\nor{v}^\frac{2 (N + \alpha)}{N} + \Cl{cstNewYoung} t^\frac{2(N + \alpha)}{N + 2 \alpha} \varepsilon^\frac{N (N + \alpha)}{N + 2 \alpha},\\ \label{ineqNewYoungBisp}\Cr{cstremxdxr} \nor{v}^\frac{\alpha}{N} t^\frac{2N + \alpha}{N} \varepsilon^\frac{N}{2} \le \tfrac{\Cr{cstEquiv}}{3} \nor{v}^\frac{2 (N + \alpha)}{N} + \Cl{cstNewYoungBisp} t^\frac{2 (N + \alpha)}{N} \varepsilon^\frac{N (N + \alpha)}{2N + \alpha},\\ \label{ineqNewYoungBis} \Cr{cstremxdxr} \nor{v}^{\frac{N + 2 \alpha}{N}} t \varepsilon^\frac{N}{2} \le \tfrac{\Cr{cstEquiv}}{3}\nor{v}^\frac{2 (N + \alpha)}{N} + \Cl{cstNewYoungBis} t^\frac{2 (N + \alpha)}{N} \varepsilon^{N + \alpha}, \end{gather} from which we obtain \begin{multline*} J_{\mu, \nu} (tu_{\varepsilon} +v)\le(\tfrac{c_\infty}{2} - \Cr{cstGap} \varepsilon^2) t^2 - \bigl(\tfrac{N}{2(N+\alpha)} -\Cr{cstNewYoungBisp} \varepsilon^\frac{N (N + \alpha)}{2N + \alpha} - \Cr{cstNewYoungBis} \varepsilon^{N + \alpha}\bigr) t^{\frac{2(N+\alpha)}{N}} \\ + \Cr{cstNewYoung} \varepsilon^\frac{N (N + \alpha)}{N + 2 \alpha} t^\frac{2(N + \alpha)}{N + 2 \alpha}. \end{multline*} Since the right-hand side depends only on \(t\) and \(\varepsilon\), it follows that there exists \(t_* \in (0, 1)\) such that for sufficiently small \(\varepsilon>0\) and every \(t \in [0, t_*]\), we have \[ J_{\mu, \nu} (tu_{\varepsilon} +v) \le \frac{\alpha}{4(N+\alpha)}c_\infty^{\frac{N+\alpha}{\alpha}}. \] We can thus assume \(t \ge t_*\). 
Since \(\frac{2 (N + \alpha)}{N + 2 \alpha} \le 2\), we have \begin{multline*} J_{\mu, \nu} (tu_{\varepsilon} +v) \le(\tfrac{c_\infty}{2} - \Cr{cstGap} \varepsilon^2) t^2 - \bigl(\tfrac{N}{2(N+\alpha)} - \Cr{cstNewYoungBisp} \varepsilon^\frac{N (N + \alpha)}{2N + \alpha} - \Cr{cstNewYoungBis} \varepsilon^{N + \alpha}\bigr) t^{\frac{2(N+\alpha)}{N}} \\+ \frac{\Cr{cstNewYoung}}{t_*^\frac{2 \alpha}{N + 2 \alpha}} t^2 \varepsilon^\frac{N (N + \alpha)}{N + 2 \alpha} \end{multline*} and since \(\frac{N (N + \alpha)}{N + 2 \alpha} > \frac{N (N + \alpha)}{2N + \alpha}\), the conclusion follows provided \begin{equation*} \frac{N (N + \alpha)}{2N + \alpha} > 2. \end{equation*} Equivalently, we should have \(\frac{1}{N + \alpha} < \frac{1}{2} - \frac{1}{N}\), which is the case whenever \(N \ge 4\); this completes the proof when \(N \ge 4\). If \(N = 3\), the estimates above are still valid and show that there exist \(t_*, t^* \in (0, +\infty)\) such that if \(t \in (0, t_*] \cup [t^*, +\infty)\) and \(v \in E^-\), \[ J_{\mu, \nu} (tu_{\varepsilon} +v) \le \frac{\alpha}{4(N+\alpha)}c_\infty^{\frac{N+\alpha}{\alpha}}. \] It remains thus to prove a strict inequality for \(t \in [t_*, t^*]\). Since \(E^-\) is spanned by eigenfunctions associated with negative eigenvalues, there exists a constant \(\Cl{cstNegEig}\) such that \[ \int_{\mathbb R^N} \abs{\nabla v}^2 + V_{\mu, \nu} \abs{v}^2 \le -\Cr{cstNegEig} \nor{v}^2, \] and thus we have, in place of \eqref{ineqAsymptBeforeYoung}, the following \begin{multline*} J_{\mu, \nu} (tu_{\varepsilon} +v) \le\frac{c_\infty}{2}t^2-\frac{N}{2(N+\alpha)}t^{\frac{2(N+\alpha)}{N}}- \Cr{cstGap} t^2 \varepsilon^2 -\Cr{cstEquiv}\nor{v}^{\frac{2(N+\alpha)}{N}} - \Cr{cstNegEig} \nor{v}^2 \\ + \Cr{cstNq} t \varepsilon^{\frac{N}{2}} \nor{v} + \Cr{cstremxdxr} (t^\frac{N + \alpha}{N} \nor{v}^{\frac{\alpha}{N}} + \nor{v}^\frac{N+ 2\alpha}{N})\, t \varepsilon^\frac{N}{2}. 
\end{multline*} We now use again estimates \eqref{ineqNewYoung} and \eqref{ineqNewYoungBis}, with \eqref{ineqNewYoungBisp} replaced by the following \[ \Cr{cstremxdxr} \nor{v}^\frac{\alpha}{N} t^\frac{2N + \alpha}{N} \varepsilon^\frac{N}{2} \le \Cr{cstNegEig} \nor{v}^{2} + \Cl{cstNewYoungNegEig} t^\frac{2(2N + \alpha)}{2N - \alpha} \varepsilon^\frac{N^2}{2N - \alpha}, \] to get for \(t_* \le t \le t^*\) the following estimate \begin{multline*} J_{\mu, \nu} (tu_{\varepsilon} +v)\le(\tfrac{c_\infty}{2} - \Cr{cstGap} \varepsilon^2) t^2 - \bigl(\tfrac{N}{2(N+\alpha)} - \Cr{cstNewYoungNegEig} (t^*)^\frac{2 \alpha^2}{N (2N - \alpha)} \varepsilon^\frac{N^2}{2N - \alpha} - \Cr{cstNewYoungBis} \varepsilon^{N + \alpha}\bigr) t^{\frac{2(N+\alpha)}{N}}\\ + \frac{\Cr{cstNewYoung}}{t_*^\frac{2 \alpha}{N + 2 \alpha}} t^2 \varepsilon^\frac{N (N + \alpha)}{N + 2 \alpha}. \end{multline*} For \(N = 3\) we have \(N^2/(2N - \alpha) > 2\) if and only if \(\alpha > \frac{3}{2}\), and this concludes the proof. \end{proof} \subsection{Palais-Smale condition.} To obtain the existence of nontrivial solutions to \eqref{q1}, the following compactness result plays a crucial role. \begin{lemma}\label{ps} $J_{\mu, \nu}$ satisfies the Palais-Smale condition in $(-\infty,c)$ if $c<\frac{\alpha}{2(N+\alpha)}c_\infty^{\frac{N+\alpha}{\alpha}}$. Namely, if $\{u_m\}_{m \in \mathbb{N}}\subset H^1(\mathbb R^N)$ satisfies $$ J_{\mu, \nu} (u_m)\to c,\,\, J_{\mu, \nu}'(u_m)\to 0\,\,\text{in $H^{-1} (\mathbb R^N)$, as $m\to\infty$} $$ then up to a subsequence, there exists $u\in H^1(\mathbb R^N)$ such that $u_m\to u$ strongly in $H^1(\mathbb R^N)$, as $m\to\infty$. \end{lemma} Before proving Lemma~\ref{ps}, let us first prove a suitable version of Lions's lemma \cite[Lemma I.1]{Lions1} in the nonlocal context. 
\begin{lemma}\label{lions} Let $r>0$, $N\ge3$ and $\{u_m\}_{m \in \mathbb{N}}$ be bounded in $H^1(\mathbb R^N)$. Then the following are equivalent: \begin{enumerate} \item for some $p\in\left[\frac{N+\alpha}{N},\frac{N+\alpha}{N-2}\right)$, \[ \lim_{m\to \infty}\sup_{z\in\mathbb R^N}\int_{B_r(z)}\int_{B_r(z)}\frac{\abs{u_m (x)}^p\abs{u_m (y)}^p}{\abs{x - y}^{N-\alpha}}\,\mathrm{d} x\,\mathrm{d} y=0, \] \item for some $q\in\left[2,\frac{2N}{N-2}\right)$ \[ \lim_{m\to \infty}\sup_{z\in\mathbb R^N}\int_{B_r(z)}|u_m|^q\,\mathrm{d} x=0. \] \end{enumerate} In both cases, one has for any $s\in(2,\frac{2N}{N-2})$ and $t\in(\frac{N+\alpha}{N},\frac{N+\alpha}{N-2})$ $$ \lim_{m\to \infty}\int_{\mathbb R^N}|u_m|^s\,\mathrm{d} x=\lim_{m\to \infty}\int_{\mathbb R^N} (I_\alpha\ast|u_m|^t)|u_m|^t\,\mathrm{d} x=0. $$ \end{lemma} \begin{proof} We only prove that the first statement implies the second; the converse implication follows directly from the Hardy--Littlewood--Sobolev inequality. Let $r>0, p\in[\frac{N+\alpha}{N},\frac{N+\alpha}{N-2})$ and $\{u_m\}_{m \in \mathbb{N}}$ be bounded in $H^1(\mathbb R^N)$ and satisfy $$ \sup_{z\in\mathbb R^N}\int_{B_r(z)}\int_{B_r(z)}\frac{\abs{u_m (x)}^p\abs{u_m (y)}^p}{\abs{x - y}^{N-\alpha}}\,\mathrm{d} x\,\mathrm{d} y\to 0,\,\,m\to \infty. $$ We claim that \begin{equation}\label{vanish} \sup_{z\in\mathbb R^N}\int_{B_r(z)}\abs{u_m (x)}^q\,\mathrm{d} x\to 0,\,\,m\to \infty, \end{equation} where $q=p\frac{2N}{N+\alpha}\in[2,\frac{2N}{N-2})$. Indeed, otherwise, up to a subsequence there exist $\delta>0$ and $\{z_m\}\subset\mathbb R^N$ such that $$ \int_{B_r(z_m)}\abs{u_m (x)}^q\,\mathrm{d} x\to\delta,\,\,m\to \infty. $$ Moreover, up to a subsequence, there exists $u\in H^1(\mathbb R^N)$ such that $u_m(\cdot+z_m)\rightharpoonup u$ weakly in $H^1(\mathbb R^N)$ and $u_m(\cdot+z_m)\to u$ almost everywhere in $\mathbb R^N$; by the compactness of the embedding $H^1(B_r(0))\hookrightarrow L^q(B_r(0))$ we have $\int_{B_r(0)}\abs{u}^q\,\mathrm{d} x=\delta>0$, so that $u\not=0$. By the Hardy--Littlewood--Sobolev inequality, $$ \int_{B_r(0)}\int_{B_r(0)}\frac{|u(x)|^p|u(y)|^p}{\abs{x - y}^{N-\alpha}}\,\mathrm{d} x\,\mathrm{d} y\in(0,\infty). 
$$ Then by Fatou's lemma, \begin{equation*} \begin{split} \lim_{m\to \infty}\sup_{z\in\mathbb R^N}\int_{B_r(z)}\int_{B_r(z)}&\frac{\abs{u_m (x)}^p\abs{u_m (y)}^p}{\abs{x - y}^{N-\alpha}}\,\mathrm{d} x\,\mathrm{d} y\\ &\ge\liminf_{m\to \infty}\int_{B_r(z_m)}\int_{B_r(z_m)}\frac{\abs{u_m (x)}^p\abs{u_m (y)}^p}{\abs{x - y}^{N-\alpha}}\,\mathrm{d} x\,\mathrm{d} y\\ &=\liminf_{m\to \infty}\int_{B_r(0)}\int_{B_r(0)}\frac{|u_m(x+z_m)|^p|u_m(y+z_m)|^p}{\abs{x - y}^{N-\alpha}}\,\mathrm{d} x\,\mathrm{d} y\\ &\ge\int_{B_r(0)}\int_{B_r(0)}\frac{|u(x)|^p|u(y)|^p}{\abs{x - y}^{N-\alpha}}\,\mathrm{d} x\,\mathrm{d} y>0, \end{split} \end{equation*} which is a contradiction, and \eqref{vanish} holds. Finally, by Lions's lemma \cite[Lemma I.1]{Lions1} and the Hardy--Littlewood--Sobolev inequality, we have for any $s\in(2,\frac{2N}{N-2})$ and $t\in(\frac{N+\alpha}{N},\frac{N+\alpha}{N-2})$, \begin{equation*} \lim_{m\to \infty}\int_{\mathbb R^N}|u_m|^s\,\mathrm{d} x=\lim_{m\to \infty}\int_{\mathbb R^N} (I_\alpha\ast|u_m|^t)|u_m|^t\,\mathrm{d} x=0.\qedhere \end{equation*} \end{proof} \begin{proof}[Proof of Lemma~\ref{ps}] Let $\{u_m\}_{m \in \mathbb{N}}\subset H^1(\mathbb R^N)$ be a (P-S)$_c$ sequence, namely $$ J_{\mu, \nu} (u_m)\to c<\frac{\alpha}{2(N+\alpha)}c_\infty^{\frac{N+\alpha}{\alpha}},\,\, J_{\mu, \nu}'(u_m)\to 0\,\,\text{in $H^{-1} (\mathbb R^N)$, as $m\to\infty$}. $$ Then \begin{equation*} \begin{split} c+o_m(1)\|u_m\|&=J_{\mu, \nu} (u_m)-\frac{1}{2}\langle J_{\mu, \nu}'(u_m),u_m\rangle\\ &=\frac{\alpha}{2(N+\alpha)}\int_{\mathbb R^N} (I_\alpha\ast|u_m|^{\frac{N+\alpha}{N}})|u_m|^{\frac{N+\alpha}{N}}, \end{split} \end{equation*} where $o_m(1)\to 0$, as $m\to\infty$. We first prove that the sequence $\{u_m\}_{m \in \mathbb{N}}$ is bounded in $H^1(\mathbb R^N)$. Indeed, if not, $\|u_m\|\to\infty$ as $m\to\infty$, up to a subsequence. Set $\Tilde{u}_m=u_m/\|u_m\|$, so that $$ \lim_{m\to \infty}\int_{\mathbb R^N} (I_\alpha\ast|\Tilde{u}_m|^{\frac{N+\alpha}{N}})|\Tilde{u}_m|^{\frac{N+\alpha}{N}}=0. 
$$ It follows from Lemma~\ref{lions} that $\Tilde{u}_m\to 0$ strongly in $L^s(\mathbb R^N)$ for any $s\in(2,\frac{2N}{N-2})$. Then we have $$ \int_{\mathbb R^N}\frac{1}{\nu^2 + \abs{x}^2}|u_m|^2\,\mathrm{d} x=\|u_m\|^2\int_{\mathbb R^N}\frac{1}{\nu^2 + \abs{x}^2}|\Tilde{u}_m|^2\,\mathrm{d} x=o_m(1)\|u_m\|^2, $$ which implies \begin{equation*} \begin{split} c+o_m(1)\|u_m\|&=J_{\mu, \nu} (u_m)-\frac{N}{2(N+\alpha)}\langle J_{\mu, \nu}'(u_m),u_m\rangle\\ &=\frac{\alpha}{2(N+\alpha)}\int_{\mathbb R^N}|\nabla u_m|^2+V_{\mu, \nu}|u_m|^2\\ &=\Bigl[\frac{\alpha}{2(N+\alpha)} +o_m(1)\Bigr]\int_{\mathbb R^N}|\nabla u_m|^2+|u_m|^2, \end{split} \end{equation*} and the sequence $\{u_m\}_{m \in \mathbb{N}}$ stays bounded. Next, we may assume that there exists $u\in H^1(\mathbb R^N)$ such that $u_m\rightharpoonup u$ weakly in $H^1(\mathbb R^N)$ and almost everywhere in $\mathbb R^N$, as $m\to\infty$. Let $v_m=u_m-u$, then by the Brezis--Lieb lemma \begin{equation*} \int_{\mathbb R^N}|\nabla u_m|^2+V_{\mu, \nu}|u_m|^2=\int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu, \nu}\abs{u}^2+\int_{\mathbb R^N}|\nabla v_m|^2+|v_m|^2+o_m(1) \end{equation*} and by \cite[Lemma 2.4]{MV3}, \begin{multline*} \int_{\mathbb R^N} (I_\alpha\ast|u_m|^{\frac{N+\alpha}{N}})|u_m|^{\frac{N+\alpha}{N}}\\ =\int_{\mathbb R^N} (I_\alpha\ast\abs{u}^{\frac{N+\alpha}{N}})\abs{u}^{\frac{N+\alpha}{N}} +\int_{\mathbb R^N} (I_\alpha\ast|v_m|^{\frac{N+\alpha}{N}})|v_m|^{\frac{N+\alpha}{N}} +o_m(1). \end{multline*} Then \begin{equation}\label{decomposition} \left\{ \begin{aligned} &c+o_m(1)=J_{\mu, \nu} (u)+\frac{1}{2}\int_{\mathbb R^N}|\nabla v_m|^2+|v_m|^2-\frac{N}{2(N+\alpha)}\int_{\mathbb R^N} (I_\alpha\ast |v_m|^{\frac{N+\alpha}{N}})|v_m|^{\frac{N+\alpha}{N}},\\ &o_m(1)=\langle J_{\mu, \nu}'(u),u\rangle+\int_{\mathbb R^N}|\nabla v_m|^2+|v_m|^2-\int_{\mathbb R^N} (I_\alpha\ast |v_m|^{\frac{N+\alpha}{N}})|v_m|^{\frac{N+\alpha}{N}}. \end{aligned} \right. 
\end{equation} We have $J_{\mu, \nu}'(u)=0$ in $H^{-1} (\mathbb R^N)$ and $J_{\mu, \nu} (u)\ge 0$. Suppose $\|v_m\|^2\to l\ge 0$, as $m\to\infty$, then by \eqref{decomposition} $$ \lim_{m\to \infty}\int_{\mathbb R^N} (I_\alpha\ast |v_m|^{\frac{N+\alpha}{N}})|v_m|^{\frac{N+\alpha}{N}}=l. $$ If $l>0$, then by the definition of $c_\infty$, we have $$ l+o_m(1)=\|v_m\|^2\ge\|v_m\|_2^2\ge c_\infty\left(l+o_m(1)\right)^{\frac{N}{N+\alpha}}, $$ which implies $l\ge c_\infty^{\frac{N+\alpha}{\alpha}}$. Then by \eqref{decomposition}, $$ c\ge\frac{\alpha}{2(N+\alpha)}c_\infty^{\frac{N+\alpha}{\alpha}}, $$ which is a contradiction. Therefore $l=0$ and the proof is complete. \end{proof} \begin{proof}[Proof of Theorem~\ref{Theorem 2}] Now, we are in a position to prove Theorem~\ref{Theorem 2}. For this purpose, we recall the following critical point theorem due to P.~Bartolo, V.~Benci and D.~Fortunato \cite{Benci}. \begin{lemma}\label{critical}{\rm \cite[Theorem 2.4]{Benci}} Let $H$ be a real Hilbert space and $f\in C^1(H,\mathbb R)$ be a functional satisfying the following assumptions: \begin{itemize} \item [$({f_1})$] $f(-u)=f(u)$ for any $u\in H$ and $f(0)=0$; \item [$({f_2})$] there exists $\beta>0$ such that $f$ satisfies the Palais-Smale condition in $(0,\beta)$; \item [$({f_3})$] there exist two closed subspaces $V, W\subset H$ and positive constants $\rho,\delta$ such that \begin{itemize} \item [$({i})$] $f(u)<\beta$ for any $u\in W$; \item [$({ii})$] $f(u)\ge\delta$ for any $u\in V$ with $\nor{u}=\rho$; \item [$({iii})$] ${\rm codim} (V)<+\infty$. \end{itemize} \end{itemize} Then $f$ admits at least $m$ pairs of critical points with critical values belonging to the interval $[\delta,\beta)$ and $$ m={\rm dim} (V\cap W)-{\rm codim} (V+W). 
$$ \end{lemma} Let us divide the proof of Theorem \ref{Theorem 2} into two steps: \medbreak \textbf{Step 1.} We use Lemma~\ref{critical} to show that \eqref{q1} admits at least one nontrivial solution for $\mu\in[\lambda_n,\lambda_{n+1})$. Obviously, $J_{\mu, \nu} (-u)=J_{\mu, \nu} (u)$ for any $u\in H^1(\mathbb R^N)$ and $J_{\mu, \nu} (0)=0$. By Lemma~\ref{ps}, $J_{\mu, \nu}$ satisfies the Palais-Smale condition in $(0,\beta)$ with $\beta=\frac{\alpha}{2(N+\alpha)}c_\infty^{\frac{N+\alpha}{\alpha}}$. Take $V=E^+$ and $$ W=\{w\in H^1(\mathbb R^N): w=tu_{\varepsilon} +v,\,\,t\in\mathbb R,\,\,v\in E^-\}, $$ then $V+W=H^1(\mathbb R^N),\,\,{\rm codim} (V)=n<+\infty.$ By Lemma~\ref{lemma4}, for $\varepsilon$ small enough, $\int_{\mathbb R^N}\abs{\nabla u_{\varepsilon}}^2 +V_{\mu, \nu}\abs{u_{\varepsilon}}^2>0$, which implies $u_{\varepsilon}\not\in E^-$. Then ${\rm dim} (V\cap W)=1,\,\,m=1$. Noting that $J_{\mu, \nu}$ is even in $H^1(\mathbb R^N)$, by Lemma~\ref{estimate}, for $\varepsilon > 0$ small, we have $$ \sup_{w\in W}J_{\mu, \nu} (w)<\frac{\alpha}{2(N+\alpha)}c_\infty^{\frac{N+\alpha}{\alpha}}. $$ On the other hand, observe that for any $u\in E^+$, $$ \int_{\mathbb R^N}\abs{\nabla u}^2+ \abs{u}^2\ge\lambda_{n+1}\int_{\mathbb R^N}\frac{1}{\nu^2 + \abs{x}^2}\abs{u}^2\,\mathrm{d} x. 
$$ By the Hardy--Littlewood--Sobolev inequality, for any $u\in E^+$, we get \begin{equation*} \begin{split} J_{\mu, \nu} (u)&\ge\frac{1}{2}\int_{\mathbb R^N}\abs{\nabla u}^2+V_{\mu, \nu}\abs{u}^2-\mathcal{C}_\alpha\nor{u}_2^{\frac{2(N+\alpha)}{N}}\\ &\ge\frac{1}{2}\left(1-\frac{\mu}{\lambda_{n+1}}\right)\int_{\mathbb R^N}\abs{\nabla u}^2+ \abs{u}^2-\mathcal{C}_\alpha\nor{u}_2^{\frac{2(N+\alpha)}{N}}\\ &=\nor{u}^2\left[\frac{1}{2}\left(1-\frac{\mu}{\lambda_{n+1}}\right)-\mathcal{C}_\alpha\nor{u}_2^{\frac{2\alpha}{N}}\right]\\ &\ge\frac{1}{4}\left(1-\frac{\mu}{\lambda_{n+1}}\right)\rho^2,\,\,\text{for $\nor{u}=\rho$ sufficiently small.} \end{split} \end{equation*} As a consequence of Lemma~\ref{critical}, \eqref{q1} admits at least one nontrivial solution $u\in H^1(\mathbb R^N)$ with $J_{\mu, \nu} (u)<\frac{\alpha}{2(N+\alpha)}c_\infty^{\frac{N+\alpha}{\alpha}}$. \medbreak \textbf{Step 2.} In the following, we prove the existence of ground state solutions to \eqref{q1}. Let $$ K:=\{u\in H^1(\mathbb R^N)\setminus\{0\}: J_{\mu, \nu}'(u)=0\,\,\text{in $H^{-1} (\mathbb R^N)$}\}, $$ then by \textbf{Step 1}, $K\not=\emptyset$ and $$ m_{\mu, \nu}:=\inf_{u\in K}J_{\mu, \nu} (u)<\frac{\alpha}{2(N+\alpha)}c_\infty^{\frac{N+\alpha}{\alpha}}. $$ Obviously, $m_{\mu, \nu}\ge 0$ and there exists a sequence $\{u_m\}_{m \in \mathbb{N}}$ in $K$ such that $J_{\mu, \nu}'(u_m)=0$ in $H^{-1} (\mathbb R^N)$ and $J_{\mu, \nu} (u_m)\to m_{\mu, \nu}$ as $m\to\infty$. By Lemma~\ref{ps}, up to a subsequence, there exists $u_0\in H^1(\mathbb R^N)$ such that $u_m\to u_0$ strongly in $H^1(\mathbb R^N)$ as $m\to\infty$. Then $u_0\in K\cup\{0\}$ and $J_{\mu, \nu} (u_0)=m_{\mu, \nu}$. To conclude the proof, it remains to show that $m_{\mu, \nu}>0$. Indeed, if not, then $$ \lim_{m\to \infty}\int_{\mathbb R^N} (I_\alpha\ast |u_m|^{\frac{N+\alpha}{N}})|u_m|^{\frac{N+\alpha}{N}}=0. 
$$ As above, the sequence $\{u_m\}_{m \in \mathbb{N}}$ is bounded in $H^1(\mathbb R^N)$ and by virtue of Lemma~\ref{lions}, $$ \lim_{m\to \infty}\int_{\mathbb R^N}\frac{1}{\nu^2 + \abs{x}^2}|u_m|^2\,\mathrm{d} x=0 $$ and then $\|u_m\|\to 0$, as $m\to\infty$. Observe that $u_m=\bar{u}_m+v_m$ and $\|u_m\|^2=\|\bar{u}_m\|^2+\|v_m\|^2$, where $\bar{u}_m\in E^-$ and $v_m\in E^+$. Then $\|\bar{u}_m\|\to 0$ and $\|v_m\|\to 0$, as $m\to\infty$. If $v_m=0$, then by $u_m\not=0$, we have $\bar{u}_m\not=0$ and \begin{equation*} \begin{split} J_{\mu, \nu} (u_m)&=J_{\mu, \nu} (\bar{u}_m)=\frac{1}{2}\int_{\mathbb R^N}|\nabla \bar{u}_m|^2+V_{\mu, \nu}|\bar{u}_m|^2-\frac{N}{2(N+\alpha)}\int_{\mathbb R^N} (I_\alpha\ast |\bar{u}_m|^{\frac{N+\alpha}{N}})|\bar{u}_m|^{\frac{N+\alpha}{N}}\\ &\le\frac{1}{2}\left(1-\frac{\mu}{\lambda_n}\right)\int_{\mathbb R^N}|\nabla \bar{u}_m|^2+|\bar{u}_m|^2-\frac{N}{2(N+\alpha)}\int_{\mathbb R^N} (I_\alpha\ast |\bar{u}_m|^{\frac{N+\alpha}{N}})|\bar{u}_m|^{\frac{N+\alpha}{N}}\\ &<0, \end{split} \end{equation*} which contradicts the fact that \begin{equation*} \begin{split} J_{\mu, \nu} (u_m)&=J_{\mu, \nu} (u_m)-\frac{1}{2}\langle J_{\mu, \nu}'(u_m),u_m\rangle\\ &=\frac{\alpha}{2(N+\alpha)}\int_{\mathbb R^N} (I_\alpha\ast|u_m|^{\frac{N+\alpha}{N}})|u_m|^{\frac{N+\alpha}{N}}>0. \end{split} \end{equation*} So we get $v_m\not=0$ for any $m$. \medbreak \textbf{Case 1.} Assume that up to a subsequence, $\lim\limits_{m\to \infty}\frac{\|\bar{u}_m\|}{\|v_m\|}<+\infty$, then $\|\bar{u}_m\|\le C\|v_m\|$ for any $m$. By $J_{\mu, \nu}'(u_m)=0$ in $H^{-1} (\mathbb R^N)$ and $v_m\in E^+$, we have \begin{equation}\label{bound} \begin{split} \int_{\mathbb R^N} (I_\alpha\ast|u_m|^{\frac{N+\alpha}{N}})|u_m|^{\frac{\alpha-N}{N}}u_mv_m&=\int_{\mathbb R^N}|\nabla v_m|^2+V_{\mu, \nu}|v_m|^2\\ &\ge\left(1-\frac{\mu}{\lambda_{n+1}}\right)\int_{\mathbb R^N}|\nabla v_m|^2+|v_m|^2. 
\end{split} \end{equation} By the Hardy--Littlewood--Sobolev inequality and H\"older's inequality, \begin{equation*} \begin{split} \int_{\mathbb R^N} (I_\alpha\ast|u_m|^{\frac{N+\alpha}{N}})|u_m|^{\frac{\alpha-N}{N}}u_mv_m &\le\mathcal{C}_\alpha\left(\int_{\mathbb R^N}|u_m|^2\right)^{\frac{N+\alpha}{2N}}\left(\int_{\mathbb R^N}|u_m|^{\frac{2\alpha}{N+\alpha}}|v_m|^{\frac{2N}{N+\alpha}}\right)^{\frac{N+\alpha}{2N}}\\ &\le\mathcal{C}_\alpha\|u_m\|_2^{\frac{N+2\alpha}{N}}\|v_m\|_2\le c\|v_m\|^{\frac{2(N+\alpha)}{N}}, \end{split} \end{equation*} where $c>0$ (independent of $m$). By \eqref{bound}, $$ \left(1-\frac{\mu}{\lambda_{n+1}}\right)\|v_m\|^2\le c\|v_m\|^{\frac{2(N+\alpha)}{N}}, $$ which is a contradiction, since $\mu<\lambda_{n+1}$ and $\|v_m\|\to 0$, as $m\to\infty$. Thus, $m_{\mu, \nu}>0$. \medbreak \textbf{Case 2.} Assume that, up to a subsequence, $\lim\limits_{m\to \infty}\frac{\|\bar{u}_m\|}{\|v_m\|}=\infty$, then $\bar{u}_m\not=0$ for $m$ large and $\lim\limits_{m\to \infty}\frac{\|v_m\|}{\|\bar{u}_m\|}=0$. By $J_{\mu, \nu}'(u_m)=0$ in $H^{-1} (\mathbb R^N)$, we have \begin{equation} \label{bound1} \begin{split} \int_{\mathbb R^N}|\nabla \bar{u}_m|^2&+V_{\mu, \nu}|\bar{u}_m|^2\\ &=\int_{\mathbb R^N} (I_\alpha\ast|u_m|^{\frac{N+\alpha}{N}})|u_m|^{\frac{\alpha-N}{N}}u_m\bar{u}_m\\ &=\int_{\mathbb R^N} (I_\alpha\ast|u_m|^{\frac{N+\alpha}{N}})|u_m|^{\frac{N+\alpha}{N}}- \int_{\mathbb R^N} (I_\alpha\ast|u_m|^{\frac{N+\alpha}{N}})|u_m|^{\frac{\alpha-N}{N}}u_mv_m. \end{split} \end{equation} By Lemma~\ref{norm}, we have $\left|\|u_m\|_\ast-\|\bar{u}_m\|_\ast\right|\le\|v_m\|_\ast$ and then by $\|v_m\|=o(\|\bar{u}_m\|)$, $$ \int_{\mathbb R^N} (I_\alpha\ast|u_m|^{\frac{N+\alpha}{N}})|u_m|^{\frac{N+\alpha}{N}}=\|\bar{u}_m\|_\ast^{\frac{2(N+\alpha)}{N}} (1+o_m(1)). 
$$ At the same time, similarly as above, for some $c>0$ we have $$ \int_{\mathbb R^N} (I_\alpha\ast|u_m|^{\frac{N+\alpha}{N}})|u_m|^{\frac{\alpha}{N}}|v_m|\le c\|u_m\|^{\frac{N+2\alpha}{N}}\|v_m\|=o(\|\bar{u}_m\|_\ast^{\frac{2(N+\alpha)}{N}}), $$ where we used the fact that norms in $E^-$ are equivalent. Then by \eqref{bound1}, for $m$ large enough, $$ \int_{\mathbb R^N}|\nabla \bar{u}_m|^2+V_{\mu, \nu}|\bar{u}_m|^2=\|\bar{u}_m\|_\ast^{\frac{2(N+\alpha)}{N}} (1+o_m(1))>0, $$ which contradicts the fact that $$ \int_{\mathbb R^N}|\nabla \bar{u}_m|^2+V_{\mu, \nu}|\bar{u}_m|^2\le0,\,\,\text{since $\bar{u}_m\in E^-$ and $\mu\ge\lambda_n$}. $$ Thus $m_{\mu, \nu}>0$ and the proof of Theorem \ref{Theorem 2} is now complete. \end{proof} \section{Proof of Theorem~\ref{Theorem 3}.} Finally we establish an upper bound for the value $\mu^{\nu}$. \begin{proof}[Proof of Theorem~\ref{Theorem 3}] For any $p>\max\{2,N/4\}$, let $$ u_p(x)=\frac{\nu^{2p}}{(\nu^2 + \abs{x}^2)^p},\,\,x\in\mathbb R^N, $$ then $$ |\nabla u_p(x)|=\frac{2p\nu^{2 p}\abs{x}}{(\nu^2 + \abs{x}^2)^{p+1}},\,\,x\in\mathbb R^N, $$ and by the change of variables \(r = \nu \sqrt s\), we get \begin{equation*} \begin{split} \int_{\mathbb R^N}|\nabla u_p|^2+|u_p|^2&=C_N\left[4p^2 \int_0^\infty\frac{\nu^{4 p} r^{N+1}}{(\nu^2 + r^2)^{2p+2}}\,\mathrm{d} r+ \int_0^\infty\frac{\nu^{4 p} r^{N-1}}{(\nu^2 + r^2)^{2p}}\,\mathrm{d} r\right]\\ &=\frac{1}{2}C_N \left[4p^2\nu^{N-2}\int_0^\infty\frac{s^{N/2}}{(1+s)^{2p+2}}\,\mathrm{d} s+\nu^N\int_0^\infty\frac{s^{N/2-1}}{(1+s)^{2p}}\,\mathrm{d} s\right] \end{split} \end{equation*} and \begin{equation*} \begin{split} \int_{\mathbb R^N}\frac{|u_p|^2}{\nu^2 + \abs{x}^2}\,\mathrm{d} x&=C_N \int_0^\infty\frac{\nu^{4p}r^{N-1}}{(\nu^2 + r^2)^{2p+1}}\,\mathrm{d} r=\frac{1}{2}C_N\nu^{N - 2} \int_0^\infty\frac{s^{N/2-1}}{(1+s)^{2p+1}}\,\mathrm{d} s. 
\end{split} \end{equation*} By the definition of the Beta function, for any $x,y>0$, $$ B(x,y)=\int_0^\infty\frac{t^{x-1}}{(1+t)^{x+y}}\,\mathrm{d} t,\,\,B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}. $$ Recalling that $4p>N$, \begin{equation*} \begin{split} \frac{\int_{\mathbb R^N}|\nabla u_p|^2+|u_p|^2}{\int_{\mathbb R^N}\frac{|u_p|^2}{\nu^2 + \abs{x}^2}\,\mathrm{d} x}&=\frac{4 p^2 B(\frac{N}{2} +1,2p+1-\frac{N}{2})+\nu^2 B(\frac{N}{2},2p-\frac{N}{2})}{B(\frac{N}{2},2p+1-\frac{N}{2})}\\ &=\frac{2Np^2}{2p+1} +\frac{4\nu^2 p}{4p-N}. \end{split} \end{equation*} It follows that $$ \mu^{\nu}\le\min_{p>N/4}\left(\frac{2Np^2}{2p+1} +\frac{4p\nu^2}{4p-N}\right). $$ In particular, if we take \(p = \frac{N}{4} + 1\), we obtain \[ \mu^{\nu}\le \frac{N (N + 4)^2}{4(N + 6)} + \frac{(N + 4)\nu^2}{4}, \] which together with \eqref{lower_est_asym} yields \[ \lim_{N\to \infty}\frac{\mu^{\nu}}{\frac{N^2(N-2)}{4(N+1)}}=1. \] \end{proof}
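The closed form obtained above from the Beta-function quotient can be sanity-checked numerically. The following sketch is our own illustration, not part of the paper: it uses only the standard library's `math.gamma`, and the particular values of $N$, $p$ and $\nu$ are arbitrary admissible choices.

```python
from math import gamma

def beta(x, y):
    # Beta function via the Gamma-function identity B(x, y) = Γ(x)Γ(y)/Γ(x+y)
    return gamma(x) * gamma(y) / gamma(x + y)

def rayleigh_quotient(N, p, nu):
    # The Beta-function expression for the quotient of the two integrals of u_p
    return (4 * p**2 * beta(N / 2 + 1, 2 * p + 1 - N / 2)
            + nu**2 * beta(N / 2, 2 * p - N / 2)) / beta(N / 2, 2 * p + 1 - N / 2)

def closed_form(N, p, nu):
    # The claimed simplification: 2Np^2/(2p+1) + 4p nu^2/(4p - N)
    return 2 * N * p**2 / (2 * p + 1) + 4 * p * nu**2 / (4 * p - N)

# Sample check for N = 3, p = 2 > max{2, N/4}, nu = 1 (illustrative values)
assert abs(rayleigh_quotient(3, 2, 1) - closed_form(3, 2, 1)) < 1e-9
```

The agreement of the two expressions reflects the Gamma-function recurrences used implicitly in the simplification of the Beta quotients.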
\section{Introduction} The ``Game about Squares'' \cite{gas} is an addictive game where unit squares have to be moved on an integer lattice onto dots of the same color. It was released by Andrey Shevchuk in July 2014. In the meantime, several clones of the game have become available for different platforms. The basic rules of the game are the following: Let $\mathcal{D}=\{(-1,0),(1,0),(0,-1),(0,1)\}$ (left, right, down and up, respectively) be a set of directions. The game is played on an infinite integer lattice $\mathbb{Z}^2$. An instance $(S,p(S),f(S),d(S),A,p(A),d(A))$ of the game consists of \begin{itemize} \item A finite set of squares $S$ with pairwise distinct initial positions $p:S\rightarrow \mathbb{Z}^2$ on the lattice. In the game the squares are represented by unit squares of different colors. \item A final position $f:S\rightarrow \mathbb{Z}^2$ for every square, marked by a dot of the color of the corresponding square. No two squares have the same final position. \item An initial direction $d:S\rightarrow \mathcal{D}$ for every square. \item A finite set of arrows $A$ with distinct positions $p:A\rightarrow \mathbb{Z}^2$ and directions $d:A\rightarrow \mathcal{D}$. \end{itemize} The game is played in rounds. In every round the player chooses a square $s\in S$ that is pushed. Let $d$ be the direction $d(s)$ of the square. The square moves one position in direction $d$, that is, its new position is $p(s)_{\text{new}}:= p(s)+d$. If there is already another square $s_2$ at the new position, $s_2$ also moves in direction $d$, independent of its own direction. If $s_2$ lands on the position of a third square $s_3$, $s_3$ also moves in direction $d$, and so on. If a square $s$ lands on a position with an arrow (that is, there exists an $a\in A$ with $p(s)_{\text{new}}=p(a)$), the square adopts the direction of the arrow, that is, $d_{\text{new}}(s):= d(a)$. 
The player wins the game if after a finite number of moves each square $s\in S$ is on its final position $f(s)$. A \emph{winning sequence} is a sequence $(s_1,\ldots, s_k)$, $s_i\in S$, such that if in each round $i\in\{1,\ldots,k\}$ the player pushes the square $s_i$, the game is won in round $k$. The original game \cite{gas} consists of 35 levels with increasing difficulty. Between Level 22 and Level 23 the author of the game, Andrey Shevchuk, asks ``Do you think this game is hard?''. We interpret this question in a mathematical way (even if this has not been the intention of Shevchuk). We prove, by a reduction from {\sc Satisfiability}, that the game is NP-hard. Nevertheless, it remains an open question whether this game is in NP or whether it is even PSPACE-hard. \section{Reduction from SATISFIABILITY} We prove that the game is NP-hard by a reduction from {\sc Satisfiability}, which has been proven to be NP-complete by Cook \cite{Cook}. More precisely, we prove that it is NP-hard to decide whether a given instance of the ``Game about Squares'' (in short GaS) can be won, that is, whether there is a finite number of moves such that all squares reach their final positions. Let $(X,\mathcal{C})$ be a {\sc Satisfiability} instance where $X=\{x_1,\ldots, x_n\}$ is a set of variables and $\mathcal{C}=\{C_1,\ldots, C_m\}$ is a set of clauses over $X$. We construct an instance for the GaS that can be won if and only if there is a truth assignment $\pi:X\rightarrow\{\mathtt{true},\mathtt{false}\}$ satisfying all clauses. 
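The move rule described above is purely mechanical, and a short simulation makes it precise. The sketch below is our own illustration — the data representation (dictionaries keyed by square names) is an assumption, not part of \cite{gas}: pushing a square displaces the whole chain of squares in front of it, and every displaced square that lands on an arrow adopts the arrow's direction.

```python
# A minimal sketch of the "Game about Squares" move rule (our own representation).

def push(square, pos, dirs, arrows):
    """Push `square` one step in its own direction. `pos`/`dirs` map each
    square name to its position resp. direction; `arrows` maps lattice
    positions to directions."""
    d = dirs[square]
    # Collect the chain of squares that will be displaced together.
    occupied = {p: s for s, p in pos.items()}
    chain = [square]
    nxt = (pos[square][0] + d[0], pos[square][1] + d[1])
    while nxt in occupied:
        chain.append(occupied[nxt])
        nxt = (nxt[0] + d[0], nxt[1] + d[1])
    # Move every square in the chain one step in direction d.
    for s in chain:
        pos[s] = (pos[s][0] + d[0], pos[s][1] + d[1])
        if pos[s] in arrows:          # landing on an arrow changes the direction
            dirs[s] = arrows[pos[s]]

# Example: square 'a' (moving right) pushes 'b', which lands on a down-arrow.
pos = {'a': (0, 0), 'b': (1, 0)}
dirs = {'a': (1, 0), 'b': (0, 1)}
arrows = {(2, 0): (0, -1)}
push('a', pos, dirs, arrows)
assert pos == {'a': (1, 0), 'b': (2, 0)} and dirs['b'] == (0, -1)
```

Note that the pushed squares keep their own direction while being displaced; only landing on an arrow changes it, exactly as in the rules above.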
\begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=0.8] \draw[draw=white,fill=lightgrayX] (1,-4) rectangle (2,2); \draw[draw=white,fill=lightgrayX] (3,-4) rectangle (4,1); \draw[dotted] (0.5,-4.5) grid (5.5,2.5); \Block{2}{0}; \node at (1.5,2.2) {$x_i$}; \node at (3.5,2.2) {$\overline{x_i}$}; \SquareC{4}{1}{$p$}{colorA}; \SquareLeft{4}{1}; \FinalC{2}{1}{$p$}{colorA}; \SquareC{3}{1}{\textcolor{white}{$\mathbf{x}$}}{colorB}; \SquareDown{3}{1}; \FinalC{1}{-4}{\textcolor{white}{$\mathbf{x}$}}{colorB}; \Left{3}{-4}; \end{tikzpicture} \end{center} \caption{Variable gadget for a variable $x_i$. A black triangle shows the position of an arrow and its direction. White triangles show the initial direction of a square. If the left column is used by the square labeled with $x$ then $x_i=\mathtt{true}$. Otherwise we have $x_i=\mathtt{false}$.} \label{fig:variable} \end{figure} In our GaS instance squares can only be moved to the left and down. To this end, the initial direction of each square and the direction of each arrow is either left or down. With this restriction we observe that a square can never be above or right of its initial position. If a square is below or left of its final position, the game cannot be won. For a given square $s$ we call a position infeasible if it is left of or below the final position of $s$, or if it is above or right of the initial position of $s$. Otherwise, we call the position feasible for $s$. Moreover, we use in our instance so-called blockers, that is, squares that are initially at their final position. Once moved away from that position, they can never be moved back to their final position. Thus, in order to win the game, blockers must not be moved. For every variable we insert a variable gadget as shown in Figure \ref{fig:variable}. It consists of a variable square (labeled $x$), a decision square (labeled $p$), a blocker and two columns. 
The decision square can push the variable square from the right to the left column, where the blocker ensures that these are the only two columns that can be used. Depending on the column in which the variable square moves down to its final position, the variable is set to $\mathtt{true}$ or to $\mathtt{false}$. Accordingly, we associate the literal $x_i$ with the left and $\overline{x_i}$ with the right column. \begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=0.8] \draw[fill=lightgrayX,draw=white] (3,4) rectangle (10,5); \draw[fill=lightgrayX,draw=white] (2,6) rectangle (10,7); \draw[dotted] (1.5,2.5) grid (10.5,7.5); \SquareC{9}{6}{$C_j$}{colorC}; \SquareLeft{9}{6}; \FinalC{2}{4}{\tiny $C_j$}{colorC}; \FinalC{3}{3}{\tiny $D_j$}{colorD}; \SquareC{4}{4}{$D_j$}{colorD}; \SquareDown{4}{4}; \Down{2}{6}; \end{tikzpicture} \end{center} \caption{Clause gadget for a clause $C_j$. The two rows are colored in gray. The clause square has to go to its final position on the left. The square $D_j$ can only reach its final destination if the clause square uses the row at the bottom, that is, if the clause is satisfied. } \label{fig:clause} \end{figure} For every clause $C_j$ we insert a clause gadget as shown in Figure \ref{fig:clause}. It consists of two rows, a clause square $C_j$ and a square $D_j$ indicating whether the clause is satisfied or not. Only if the clause square moves along the lower row can the indicator square $D_j$ reach its final position. We build a lattice containing these gadgets such that each pair of variable columns intersects with each pair of clause rows: For $i\in\{1,\ldots,n\}$ we place the variable gadget for $x_i$ in such a way that the variable square is at $(4(i+1),4(m+1))$ and its final position is at $(4(i+1)-2,1)$. For $j\in\{1,\ldots,m\}$ we place the clause gadget for $C_j$ such that the clause square is at $(4(n+1),4(m-j)+6)$ and its final position is at $(1,4(m-j)+4)$. See Figure \ref{fig:all} for an example. 
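The placement just described can be summarized in a short sketch. It reproduces only the coordinate formulas for the variable and clause squares and their final positions; the internal arrows and blockers of the gadgets are omitted, and the function name is ours.

```python
def gadget_anchors(n, m):
    """Initial ('start') and final positions of the variable and clause
    squares for a SAT instance with n variables and m clauses, following
    the coordinate formulas of the reduction (gadget internals omitted)."""
    variables = {
        i: {'start': (4 * (i + 1), 4 * (m + 1)),
            'final': (4 * (i + 1) - 2, 1)}
        for i in range(1, n + 1)
    }
    clauses = {
        j: {'start': (4 * (n + 1), 4 * (m - j) + 6),
            'final': (1, 4 * (m - j) + 4)}
        for j in range(1, m + 1)
    }
    return variables, clauses

# Small example with n = 2 variables and m = 2 clauses
vs, cs = gadget_anchors(2, 2)
assert vs[1]['start'] == (8, 12) and vs[1]['final'] == (6, 1)
assert cs[1]['start'] == (12, 10) and cs[1]['final'] == (1, 8)
```

Each variable square thus travels down a column strictly between its start and final $x$-coordinates, while each clause square travels left along one of its two rows, so every variable column crosses every clause row.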
All gadgets are placed into a lattice of size $4(n+1)\times 4(m+1)$. Note that each lattice point is feasible for one variable and one clause square. It remains to specify the crossings of clause rows and variable columns. \begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=0.8] \draw[fill=lightgrayX,draw=white] (1,-0.5) rectangle (2,5.5); \draw[fill=lightgrayX,draw=white] (3,-0.5) rectangle (4,5.5); \draw[fill=lightgray,draw=white] (-0.5,1) rectangle (5.5,2); \draw[fill=lightgray,draw=white] (-0.5,3) rectangle (5.5,4); \draw[dotted] (-0.5,-0.5) grid (5.5,5.5); \node at (1.5,5.2) {$x_i$}; \node at (3.5,5.2) {$\overline{x_i}$}; \node at (6.5,3.5) {$C_j$ not sat.}; \node at (6.5,1.5) {$C_j$ sat.}; \Left{0}{1}; \Left{0}{3}; \Left{2}{1}; \Left{2}{3}; \Down{1}{0}; \Down{1}{2}; \Down{3}{0}; \Down{3}{2}; \end{tikzpicture} \end{center} \caption{A crossing between a variable $x_i$ and a clause $C_j$ that contains neither $x_i$ nor $\overline{x_i}$.} \label{crossing} \end{figure} Figure \ref{crossing} shows a crossing between the columns of a variable $x_i$ and a clause $C_j$ that contains neither $x_i$ nor $\overline{x_i}$. Note that if a variable square pushed a clause square or vice versa, the pushed square would change its orientation and could no longer reach its final position.
\begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=0.8] \draw[fill=lightgrayX,draw=white] (1,-0.5) rectangle (2,5.5); \draw[fill=lightgrayX,draw=white] (3,-0.5) rectangle (4,5.5); \draw[fill=lightgray,draw=white] (-0.5,1) rectangle (5.5,2); \draw[fill=lightgray,draw=white] (-0.5,3) rectangle (5.5,4); \draw[dotted] (-0.5,-0.5) grid (5.5,5.5); \node at (1.5,5.2) {$x_i$}; \node at (3.5,5.2) {$\overline{x_i}$}; \Block{2}{2}; \Left{0}{1}; \Left{0}{3}; \Left{2}{1}; \Left{2}{3}; \Down{1}{0}; \Down{1}{2}; \Down{3}{0}; \begin{scope}[xshift=8.5cm, yshift=0cm] \draw[fill=lightgrayX,draw=white] (1,-0.5) rectangle (2,5.5); \draw[fill=lightgrayX,draw=white] (3,-0.5) rectangle (4,5.5); \draw[fill=lightgray,draw=white] (-0.5,1) rectangle (5.5,2); \draw[fill=lightgray,draw=white] (-0.5,3) rectangle (5.5,4); \node at (1.5,5.2) {$x_i$}; \node at (3.5,5.2) {$\overline{x_i}$}; \draw[dotted] (-0.5,-0.5) grid (5.5,5.5); \Block{0}{2}; \Left{0}{1}; \Left{0}{3}; \Left{2}{1}; \Left{2}{3}; \Down{1}{0}; \Down{3}{0}; \Down{3}{2}; \end{scope} \node at (6.75,3.5) {$C_j$ not sat.}; \node at (6.75,1.5) {$C_j$ sat.}; \end{tikzpicture} \end{center} \caption{A crossing between a variable $x_i$ and a clause $C_j$ that contains $\overline{x_i}$ (left) or $x_i$ (right), respectively.} \label{fig:cross2} \end{figure} Finally, Figure \ref{fig:cross2} shows the crossing of a variable $x_i$ and a clause $C_j$ that contains $\overline{x_i}$ (on the left) and a clause $C_j$ that contains $x_i$ (on the right), respectively. Note that in these cases, the clause square can be pushed to the lower row by the variable square without changing its direction if and only if the corresponding literal is satisfied, that is, the variable square uses the column of the corresponding literal. Once again, the blockers ensure that the clause squares can leave the crossing only on one of the two clause rows. We can assume w.l.o.g. that no clause contains both $x_i$ and $\overline{x_i}$, as such clauses are always satisfied.
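The push mechanics underlying these gadgets can be prototyped in a few lines of plain Python (a sketch outside the formal argument; the rules assumed here are that pressing a square moves the whole contiguous chain of squares in front of it one cell in the pressed square's direction, and that a square landing on an arrow adopts the arrow's direction; the coordinates are read off Figure \ref{fig:variable}):

```python
DIRS = {"L": (-1, 0), "D": (0, -1)}

def press(squares, arrows, name):
    """Press square `name`: the contiguous chain in front of it moves one
    cell in its direction; squares landing on an arrow turn accordingly."""
    d = DIRS[squares[name][1]]
    pos = {p: n for n, (p, _) in squares.items()}
    chain, cur = [name], squares[name][0]
    while True:  # collect the chain of squares being pushed
        cur = (cur[0] + d[0], cur[1] + d[1])
        if cur in pos:
            chain.append(pos[cur])
        else:
            break
    for n in chain:  # move every square in the chain by one cell
        (x, y), face = squares[n]
        new = (x + d[0], y + d[1])
        face = arrows.get(new, face)  # an arrow overrides the direction
        squares[n] = (new, face)

# Variable gadget: x at (3,1) facing down, p at (4,1) facing left,
# blocker b at (2,0); a left arrow sits at (3,-4).
squares = {"x": ((3, 1), "D"), "p": ((4, 1), "L"), "b": ((2, 0), "D")}
arrows = {(3, -4): "L"}
finals = {"x": (1, -4), "p": (2, 1), "b": (2, 0)}

# Set x_i = true: p pushes x into the left column, then x moves down.
press(squares, arrows, "p"); press(squares, arrows, "p")
for _ in range(5):
    press(squares, arrows, "x")
assert all(squares[n][0] == finals[n] for n in squares)
```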
\begin{figure}[ht] \begin{tikzpicture}[scale=0.6] \foreach \i/\p in {0/4,4/3,8/2,12/1} { \draw[fill=lightgray,color=lightgray] (0,5+\i) rectangle (19,6+\i); \draw[fill=lightgray,color=lightgray] (1,3+\i) rectangle (19,4+\i); \Down{0}{5+\i}; \SquareC{2}{3+\i}{\tiny $D_\p$}{colorD}; \SquareDown{2}{3+\i}; \FinalC{1}{2+\i}{\tiny $D_\p$}{colorD}; \SquareC{19}{5+\i}{\tiny $C_\p$}{colorC}; \FinalC{0}{3+\i}{\tiny $C_\p$}{colorC}; \SquareLeft{19}{5+\i}; } \foreach \i/\p in {0/1,4/2,8/3,12/4} { \draw[fill=lightgray,color=lightgrayX] (4+\i,0) rectangle (5+\i,20); \draw[fill=lightgray,color=lightgrayX] (6+\i,0) rectangle (7+\i,19); \FinalC{4+\i}{0}{\tiny \textcolor{white}{$\mathbf{x_\p}$}}{colorB}; \Left{6+\i}{0}; \SquareC{6+\i}{19}{\textcolor{white}{$\mathbf{x_\p}$}}{colorB}; \SquareDown{6+\i}{19}; \FinalC{5+\i}{19}{\tiny $p_\p$}{colorA}; \SquareC{7+\i}{19}{$p_\p$}{colorA}; \SquareLeft{7+\i}{19}; \Block{5+\i}{18}; } \foreach \x in {0,4,8,12,16} { \draw (3,2+\x) -- (19,2+\x); \draw (3+\x,2) -- (3+\x,18); } \foreach \x/\y in {0/0,4/8, 8/4, 8/12, 12/12} { \Left{\x+3}{\y+5}; \Left{\x+3}{\y+3}; \Left{\x+5}{\y+5}; \Left{\x+5}{\y+3}; \Down{\x+4}{\y+4}; \Down{\x+4}{\y+2}; \Down{\x+6}{\y+4}; \Down{\x+6}{\y+2}; } \foreach \x/\y in {0/12,4/12, 0/8, 12/8, 4/0, 12/0 } { \Left{\x+3}{\y+5}; \Left{\x+3}{\y+3}; \Left{\x+5}{\y+5}; \Left{\x+5}{\y+3}; \Down{\x+4}{\y+2}; \Down{\x+6}{\y+4}; \Down{\x+6}{\y+2}; \Block{\x+3}{\y+4}; } \foreach \x/\y in {8/8, 0/4, 4/4, 12/4, 8/0} { \Left{\x+3}{\y+5}; \Left{\x+3}{\y+3}; \Left{\x+5}{\y+5}; \Left{\x+5}{\y+3}; \Down{\x+4}{\y+4}; \Down{\x+4}{\y+2}; \Down{\x+6}{\y+2}; \Block{\x+5}{\y+4}; } \end{tikzpicture} \caption{GaS instance for the Satisfiability instance $(x_1\vee x_2)\wedge(x_1\vee\overline{x_3}\vee x_4)\wedge(\overline{x_1}\vee\overline{x_2}\vee\overline{x_4})\wedge(x_2\vee\overline{x_3}\vee x_4)$.} \label{fig:all} \end{figure} \begin{lemma}\label{lemma1} GaS is NP-hard.
\end{lemma} \begin{proof} For a given {\sc SATISFIABILITY} instance $S$ with $n$ variables and $m$ clauses we construct a GaS instance as shown above. Obviously, this is a polynomial transformation: the GaS instance is placed on a grid of total size $O(nm)$ and contains $O(n+m)$ squares and $O(nm)$ arrows. We have seen that each variable square has to use either its left or its right column in order to win the game. Once such a square has left the uppermost row, it cannot be moved to the left until it has reached the lowermost row. Otherwise, the square would change its direction and could not reach its final position. Assume that there exists a truth assignment $\pi$ satisfying $S$. Using the decision squares, we push each variable square $x$ into the column corresponding to $\pi(x)$. As $\pi$ satisfies $S$, for each clause $C$ there exists a literal $l\in C$ that is set to $\mathtt{true}$ by $\pi$. Now we push the clause square $C$ to the left until it reaches the column assigned to $l$. Moving the variable squares down, we use them to push the clause squares into their lower rows. The clause squares can then push the squares $D$ one position to the left, so that these reach their final destinations. Finally, all remaining squares can reach their final positions without problems and the game is won. Now assume that we have an instance of the game that can be won. We define a truth assignment $\pi$ by setting $\pi(x_i)=\mathtt{true}$ if the variable square $x_i$ uses its left column when moving down and $\pi(x_i)=\mathtt{false}$ otherwise. Now consider the sequence $Q$ of pushes that is used to win the game. As for all $j\in\{1,\ldots, m\}$ the square $D_j$ reaches its final position, it must be pushed one position to the left by the square $C_j$ in one of the rounds. But then $C_j$ has been pushed to its lower row, which can only be done by a variable square $x_k$ such that the corresponding literal $x_k$ or $\overline{x_k}$ is contained in $C_j$ and is satisfied by $\pi$.
Thus each clause is satisfied by $\pi$. \end{proof} \begin{note} First note that each instance of the game can be restricted to a grid of size $O(|S|(|S|+|A|))\times O(|S|(|S|+|A|))$. If there are more than $|S|$ consecutive rows or columns that contain neither squares, final positions, nor arrows, we can delete all but $|S|$ of them. Thus the size of the grid is polynomially bounded in the size of the input. For the problem to be in NP, we have to show that for every instance that can be won there exists a certificate verifiable in polynomial time. Such a certificate could be a winning sequence. Unfortunately, it is not known whether there always exists a winning sequence of polynomial length. Nevertheless, the restricted version of the ``Game about Squares'' with instances where only one horizontal and one vertical direction are allowed is in NP: Every square can be pushed at most $O(|S|(|S|+|A|))$ times, and thus the game ends after a polynomial number of rounds. By the proof of Lemma \ref{lemma1}, this restricted version is NP-hard. \end{note} \section*{Acknowledgment} The author would like to thank Andrey Shevchuk for this addictive game and Jan Schneider for valuable discussions. \nocite{*}
Elegance made easy. From weddings to dinner dates, the Delilah will make a versatile addition to your new season wardrobe. Crafted in satin for a luxurious finish, it comes in a tunic silhouette for a flattering fit with slip-on-and-go appeal. The high neck and sleeveless design give a nod to the season's trends. Model is 175cm/5'9'' and is wearing a size UK 8.
Flash LED PAR 56 177x5mm spotlight with remote control. An LED PAR with 5mm LEDs is a popular, reliable and versatile device that will work in many different conditions - on stage, in the theater, club, restaurant, etc. The components used make it possible to achieve a compact size and low weight while maintaining optimal light performance. Bag set for the GEWA SPS drum kit, 22x18, 10x9, 12x10, 14x14, 14x6.5"
# Harmonious "Convergence"

The Alternating Harmonic Series is a well known convergent series.

$$\frac{1}{1}-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\frac{1}{5}-\frac{1}{6}+\cdots$$

"Clearly", it is obvious that it converges to the natural log of 2. Or does it?

Since the series is not absolutely convergent, by simply rearranging the terms, I can make it approach anything I want. Suppose I want the series to converge to e. All I'd have to do is this:

$$\frac{1}{1}+\frac{1}{3}+\cdots+\frac{1}{65}-\frac{1}{2}+\frac{1}{67}+\cdots+\frac{1}{175}-\frac{1}{4}+\cdots$$

If you didn't catch the pattern, there isn't an obvious one. Here's how it works:

1. Consider the terms of the alternating harmonic series in terms of positive and negative terms.
2. Add together just enough positive terms to exceed our target (e). (aka sum > target)
3. Subtract the next negative term.
4. Go back to 2.

Note that at step 2, if our sum == target, you should add another positive term.

From this we can define a sequence associated with each number as follows:

- For each positive term, output 1.
- For each negative term, output 0.

Let's call this sequence the "Harmonious Bit Pattern" of a number. For example, the HBP of e begins as:

1, 1, 1, 1, <32 times>, 0, 1, 1, <54 times>, 0, 1, 1, ...

You will be given:

- a rational input target in the range [-10, 10] (note: even reaching 10 via the harmonic series takes many millions of terms). This may be a decimal (aka 1.1) or you may take a rational directly (aka 12/100)
- a positive int n input, specifying the number of terms of the Harmonious Bit Pattern to output.

You are expected to output the exact Harmonious Bit Pattern of the target to the specified number of terms. You may output space separated values, comma separated, no separation, etc; just as long as the pattern of 0s and 1s is clearly visible and is read left to right with consistent separation.

## Test Cases

    >>> 0, 1
    1
    >>> 0, 2
    10
    >>> 0, 7
    1000010
    >>> 1, 10
    1101011011
    >>> 1.01, 3
    110
    >>> 1.01, 24
    110101101101101101101101
    >>> 2.71, 32
    11111111111111111111111111111111
    >>> 2.71, 144
    111111111111111111111111111111110111111111111111111111111111111111111111111111111111111101111111111111111111111111111111111111111111111111111111
    >>> -9.8, 100
    0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

Note that since -9.8 is so large, the first 1 that would be output is somewhere around the 149496620th term (that was computed via floats, so the value might not be exact).

# Perl, 69 bytes

    use bigrat;$s+=.5/($s>$ARGV[$_=0]?-++$n:++$p-++$_/2),print for 1..pop

Takes inputs as command line arguments. Explanation: bigrat enables fractions everywhere for accurate calculations. $s is the current sum of terms, $ARGV[0] is the target value, pop (same as $ARGV[1]) represents the number of terms, $p and $n represent the positive and negative term counts. $_ is either 1 or 0 depending on whether a positive or negative term was added.

# Haskell

    import Data.Ratio
    f=(.h 0 1 2).take
    h a p q z|a>z=0:h(a-1%q)p(q+2)z|1<2=1:h(a+1%p)(p+2)q z

Usage example: f 24 1.01 -> [1,1,0,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1].

h builds the infinite bit pattern by carrying four parameters around: a is the current sum. p is the denominator of the next positive term, q for negative terms. z is the target number. f starts everything up and truncates the result to length n.

Edit: @Zgarb found a byte to save. Thanks!

- Defining h a p q instead of h p q a saves a byte. – Zgarb Oct 14 '15 at 19:37
- Should be noted that 7 bytes are spent on just trimming the infinite result list to one of length n. It would actually be much nicer to just give the infinite list as the result. – ceased to turn counterclockwis Oct 14 '15 at 20:33

# Python 3, 128 124 bytes

    from fractions import*
    F=Fraction
    *c,s=2,1,0
    t=F(input())
    for i in'x'*int(input()):w=s<=t;s+=F(w*2-1,c[w]);c[w]+=2;print(+w)

This makes use of Python's Fraction class.

    from fractions import*
    F=Fraction
    *c,s=2,1,0                # c = [2, 1]. s = 0
    # c is my positive/negative term counter, s is the sum
    t=F(input())              # input a fraction
    for i in'x'*int(input()): # Do this for the chosen number of terms, as per the spec
     w=s<=t                   # "w" or which one do we choose? Positive or negative?
     s+=F(w*2-1,c[w])         # w*2-1 gives 1 if w else -1. Gives 1 if we need to add, else -1
     c[w]+=2                  # Increment the coefficient we chose
     print(+w)                # Output that. The +w coerces the bool to an int.

- 'x'*int(input())? – FryAmTheEggman Oct 14 '15 at 15:56
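For reference, an ungolfed Python version of the same procedure (using the exact Fraction arithmetic the answers above rely on) might look like this:

```python
from fractions import Fraction

def harmonious_bits(target, n):
    """First n terms of the Harmonious Bit Pattern of `target`:
    1 = take the next positive term, 0 = take the next negative term."""
    t = Fraction(str(target))
    s = Fraction(0)
    pos, neg = 1, 2          # next positive / negative denominators
    bits = []
    for _ in range(n):
        if s <= t:           # sum has not exceeded the target: add
            s += Fraction(1, pos)
            pos += 2
            bits.append("1")
        else:                # sum exceeds the target: subtract
            s -= Fraction(1, neg)
            neg += 2
            bits.append("0")
    return "".join(bits)

assert harmonious_bits(0, 7) == "1000010"
assert harmonious_bits(1, 10) == "1101011011"
assert harmonious_bits("1.01", 24) == "110101101101101101101101"
```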
Watco Companies Watco Transportation Services, L.L.C.: track gauge 4 ft 8 1⁄2 in (1,435 mm) standard gauge; 4,500 track miles; Pittsburg, Kansas. Watco Companies, L.L.C. (Watco) is a transportation company based in Pittsburg, Kansas, formed in 1983 by Charles R. (Dick) Webb. Watco is composed of four divisions: transportation, mechanical, terminal and port services, and compliance. Watco is the owner of Watco Transportation Services, L.L.C. (WTS), which operates 41 short line railroads in the U.S. and Australia. It is one of the largest short line railroad companies in the United States. As of December 2018, it operated on 5,500 miles (8,900 km) of leased and owned track. Also under transportation is contract switching, which the company provides for 30 customers. That is the service that Watco originally offered before it branched out into other areas. Watco's mechanical division has 19 car repair shops and is one of the largest mechanical services providers in the United States.[citation needed] They provide program, contract, and emergency repairs. These services include maintenance of all types of cars including tank cars and coal fleets, and the preparation and cleaning of boxcars and refrigerated cars. The terminal and port services division operates ten warehouses throughout the country. They also operate several transloading facilities and specialize in loading and unloading railcars and moving commodities to their next destination. Watco also operates two port services in the Gulf Region. Greens Port Terminal on the Houston Ship Channel in Harris County, Texas, and Port Birmingham Terminal on the Black Warrior River in Alabama both provide access to the Gulf of Mexico.
Watco's newest division, Watco Supply Chain Services, provides supply chain logistics for highway, intermodal, rail, and international shipments. [Image: CBH class hauled train at Yilliminning, Western Australia, in October 2013] [Image: Watco CF7 #5 at Pittsburg, Texas, in August 2015] Watco Companies was started in 1983 by Charles R. "Dick" Webb. The first operation was an industrial switching operation in DeRidder, Louisiana, that is still in existence. Webb then started his first mechanical operation, a railcar repair shop in Coffeyville, Kansas in 1985. The Coffeyville mechanical shop was held captive to the major rail lines, and during discussions with the Union Pacific the opportunity arose to purchase the line running from Nevada, Missouri, to Coffeyville. This was the Union Pacific's first short line sale. Watco then looked to the West Region, acquiring the Blue Mountain Railroad in 1998, the Palouse River and Coulee City Railroad in 1992 and the Eastern Idaho Railroad in 1993. In 1998, they began operating the Stillwater Central Railroad in Oklahoma and the Timber Rock Railroad in Texas. The Kansas and Oklahoma Railroad was acquired in 2001 and the Pennsylvania Southwestern Railroad in 2003. In 2004, they started operations of the Great Northwest Railroad in Washington, the Kaw River in Kansas and Missouri, and the Mission Mountain Railroad in Montana. In 2005 they began operating the Alabama Southern Railroad, the Louisiana Southern Railroad, the Mississippi Southern Railroad, and the Yellowstone Valley Railroad in Montana. The Austin Western Railroad was started in 2007 and shares its rail line with passenger service. They also acquired Millennium Rail, Inc., a mechanical services company, in 2007. The Baton Rouge Southern and the Pacific Sun Railroad were started in 2008, and they also acquired the mechanical services company Fitzgerald Railcar Services, Inc., and Reload, Inc., a transloading business with 25 years of experience. The Grand Elk Railroad began operations in 2009.
In December 2010 Watco entered the Australian rail haulage market when it was awarded a 10-year contract to operate grain services for the CBH Group of Western Australia.[1][2] Operations commenced in March 2012.[3][4] In late 2016 Watco Australia was awarded an infrastructure train contract with Brookfield Rail operating ballast and rail work trains. On December 15, 2010, Kinder Morgan Energy Partners, L.P. (NYSE: KMP) announced an agreement whereby Kinder Morgan would invest up to $150 million over the next year in Watco Companies in exchange for a preferred equity position in the company. Kinder Morgan made an initial $50 million preferred shares investment on January 3, 2011.[5] An additional $50 million equity investment was completed in December 2011.[6] Kinder Morgan receives a 3.25% quarterly distribution on the equity investment. Kinder Morgan is a leading pipeline transportation and energy storage company in North America. The transaction provides capital to Watco for further expansion of specific projects and offers Kinder Morgan the opportunity to share in the subsequent growth. In April 2011, Watco began operating the Autauga Northern Railroad (AUT), between Maplesville and Autauga Creek, Alabama, the third short line in Alabama operated by Watco.[7] On December 28, 2011 Watco began operations of the Swan Ranch Railroad (SRRR)[8] in the Swan Ranch Industrial Park in Cheyenne, Wyoming. On January 1, 2012, Watco gained majority ownership of the Wisconsin and Southern Railroad, a regional railroad in Wisconsin, and on February 1, 2012 took over operations of the Birmingham Southern Railroad.[9][10] On June 4, 2014, Watco and The Greenbrier Companies, Inc. (NYSE: GBX) announced that they would create an equally owned joint venture, GBW Railcar Services, LLC, providing railcar repair services.[11] This joint venture was dissolved in August 2018. Holdings Began operations Track length (mi.)
Alabama Southern Railroad (ABS) November 2005 85 [12] iron and steel, paper products, aggregates Acquired through lease agreement with KCS Alabama Warrior Railway (ABWR) August 2009 15 [13] coal, aggregates, pipe, scrap steel, cement Started as Marylee Railroad in 1895 Ann Arbor Railroad (AA) January 2013 50 [14] automotive materials Purchased from Ann Arbor Acquisition Corp, services mostly Chrysler plant producing Jeep Cherokees Arkansas Southern Railroad (ARS) October 2005 62 [15] corn and soybean products Two branches, 32-mile northern branch and a 30-mile southern branch Austin Western Railroad (AWRR) October 2007 155 [16][17] aggregates, crushed limestone, calcium bicarbonate, lumber beer, chemicals, plastic, paper Began sharing the railway with commuter operations in 2010 with Austin, TX Autauga Northern Railroad (AUT) April 2011 44 [18][19] paper products and aggregates Third shortline Watco acquired in Alabama Baton Rouge Southern Railroad (BRS) November 2008 1.5[20] chemicals, bauxite, plastic pellets, raw coke, calcinated coke Provides car storage and use by local chemical companies Bogalusa Bayou Railroad (BBAY) 2015 xxx paper Serves Bogalusa's International Paper Birmingham Terminal Railway (BHRR) February 2012 75.9 [21] iron ore, coal, steel sheets and pipe The dragon in the logo represents the fire that is used to smelt steel Blue Ridge Southern Railroad (BLU) July 2014 93 woodchips, chemicals, paper, cement Former Norfolk Southern T-Line (Murphy Branch), W-Line, and TR-line. Based in Canton, NC. 
Boise Valley Railroad (BVRR) November 2009 36[20] frozen vegetables, lumber, fertilizer, fuels Shares customers with the YSVR and EIRR Decatur & Eastern Illinois Railroad (DREI) September 2018[22] 127[22] Operates on ex-CSX Transportation trackage acquired in 2018 Eastern Idaho Railroad (EIRR) 1993 270[23] corn, sugar, wheat, frozen vegetables, coal Largest Union Pacific sale Ithaca Central Railroad (ITHR) December 8th, 2018 48,8[24] salt, coal, plastics, magnesium chloride Leased from Norfolk Southern Grand Elk Railroad (GDLK) March 2009 151[25] lumber products, corn, steel Interchanges with 3 Class I railroads Great Northwest Railroad (GRNW) March 2004 77 [26][27] lumber, products, fertilizers, aggregates Competition to reach Lewiston while the line was being built was called the "Clearwater River Railroad Wars" Kanawha River Railroad (KNWA) July 2016 309 [28] Chemicals, Aggregates, Agricultural Products[28] Second railroad to be acquired by Watco, LLC in the state of West Virginia. Appalachian & Ohio was briefly operated by Watco Transportation Services LLC. Operates on former Norfolk Southern tracks in Ohio and West Virginia. 
Kansas & Oklahoma Railroad (KO) July 2001 820 [29][30] wheat, grain products, chemicals, soybean products Has state and federal shipping agreements Kansas City Terminal Railway (KCT) xxx xxx Transfer service and scrap hauler Kansas City area Kaw River Railroad (KAW) June 2004 43 [31][32] iron and steel, corn starch, lumber products, aggregates, plastics, industrial products Expansions in 2005, 2006, and 2007 Louisiana Southern Railroad (LAS) September 2005 167 [33][34] paper products, aggregates, oils Interchanges at Gibsland, Sibley, and Pineville Mission Mountain Railroad (MMT) December 2004 40 [35][36] lumber, wheat Originally part of the Haskell Pass, built in 1904 Mississippi Southern Railroad (MSR) April 2005 28[37][38] corn and soybeans Interchanges with KCS at Newton Pacific Sun Railroad (PSRR) October 2008 62[39] corn, soy, lumber, plastic pellets, beer, paints, recyclables Crews accommodate the schedules of BNSF, coaster, Amtrak and Metro link passenger trains Palouse River & Coulee City Railroad (PCC) 1992 202 [40] wheat, frozen vegetables $25 million in state-sponsored track rehabilitation backed by 100-year lease Pennsylvania Southwestern Railroad (PSWR) April 2003 12[41][42] steel scrap, steel products First Watco operation to service a steel mill Pecos Valley Southern Railway (PVSR) 2012 19 sand, gravel, crude oil Formerly operated by Capitol Aggregates San Antonio Central Railway (SAC) 2012 xxx xxx Operates within Port San Antonio's East Kelly Railport at night South Kansas and Oklahoma Railroad (SKOL) March 1987 380 [43] grains, cement, coal, fertilizer, aggregates, steel, sand Operates out of the historic Cherryvale Depot Stillwater Central Railroad (SLWC) 1998 279 [43] crude oil, sand, gypsum, cement, stone, steel Higher than industry average rating of customer service and personal attention Swan Ranch Railroad (SRRR) December 2011 3.25 [44] asphalt When finished, the Swan Ranch Industrial Park will encompass 7,200 acres Timber Rock Railroad 
(TIBR) 1998 168 [45] aggregates, lumber products, plastics, fuel Additions in 2002 and 2004; services the founding operation site in DeRidder, LA Vicksburg Southern Railroad (VSOR) January 2006 21[46][47] lumber, steel Formerly known as the Redwood Branch Watco Australia May 2012 NA - operator only grain First international operation of Watco Wisconsin & Southern Railroad Co. (WSOR) January 2012 700 [48] lumber, coal, liquid and dry fertilizers, corn, beans, plastic, aggregates, ethanol, liquid petroleum Wisconsin's second-largest railroad Yellowstone Valley Railroad (YSVR) August 2005 172 [49][50] grains, plastics, ethanol, crude oil, sand Awarded BNSF Shortline Achievement Award for development of Customers in 2009 ^ US group wins CBH contract from QR National The Australian December 14, 2010 ^ Watco wins CBH grain rail contract Rail Express December 15, 2010 ^ CBH, Watco rail agreement starts early World Grain April 2, 2012 ^ CBH grain wagons go to work early Farm Weekly April 5, 2012 ^ "Kinder Morgan, Inc, Form 10-K, Annual Report, Filing Date Mar 2, 2011". secdatabase.com. Retrieved May 13, 2018. ^ "Kinder Morgan, Inc, Form 10-K, Annual Report, Filing Date Feb 23, 2012" (PDF). secdatabase.com. Retrieved May 13, 2018. ^ "Watco announces Alabama short line debut". Railway Age. 12 April 2011. Archived from the original on 6 December 2011. Retrieved 6 December 2011. ^ "Watco to operate Swan Ranch Railroad in Wyoming". Trains Magazine. 30 November 2011. Retrieved 23 December 2011. ^ "Watco to buy control of Wisconsin & Southern". Trains Magazine. 29 November 2011. Retrieved 23 December 2011. ^ "Watco adds third railroad in a week". Trains Magazine. 2 December 2011. Retrieved 23 December 2011. ^ "Greenbrier Companies, Inc, Form 8-K, Current Report, Filing Date Jun 4, 2014". secdatabase.com. Retrieved May 13, 2018. ^ Alabama Southern Railroad, accessed June 2012 ^ STB Finance Docket No. 35204, June 2012 ^ [1], accessed May 2014 ^ STB Finance Docket No. 
34761, October 26, 2005 ^ Railroad Retirement Board, Employer Status Determination: Austin Western Railroad, Inc., January 22, 2008 ^ STB Finance Docket No. 35075, September 14, 2007 ^ Railroad Retirement Board, Employer Status Determination: Autauga Northern Railroad, Inc., April 14, 2011 ^ STB Finance Docket No. 35075, April 4, 2011 ^ a b STB Finance Docket No. 35169, August 1, 2008 ^ "Birmingham Southern railway acquired". www.bizjournals.com. December 2, 2012. Missing or empty |url= (help); |access-date= requires |url= (help) ^ a b "Decatur & Eastern Illinois makes debut". Trains Magazine. September 10, 2018. Retrieved September 10, 2018. ^ STB Finance Docket No. 34045, June 12, 2001 ^ "Ithaca Central Railroad (ITHR)". Watco Companies. 2018. Retrieved 2018-12-13. Ithaca Central Railroad (ITHR) begins operations on December 8, 2018. Watco leases the railroad from the Norfolk Southern Railway. The ITHR consists of 48.8 miles of track running north from Sayre, Pennsylvania, to Ludlowville, New York ^ STB Finance Docket No. 35188, November 17, 2008 ^ Railroad Retirement Board, Employer Status Determination: Great Northwest Railroad, Inc., July 9, 2004 ^ STB Finance Docket No. 34475, March 19, 2004 ^ a b "Kanawha River Railroad (KNWA)". ^ Kansas and Oklahoma Railroad, accessed December 2008 ^ Kaw River Railroad, accessed December 2008 ^ Louisiana Southern Railroad, accessed December 2008 ^ STB Finance Docket No. 34752, October 7, 2005 ^ Mission Mountain Railroad, accessed December 2008 ^ OpenDocument STB Finance Docket No. 34635, January 19, 2005 ^ Mississippi Southern Railroad, accessed December 2008 ^ STB Finance Docket No. 34683, April 21, 2005 ^ "Home". Port of Columbia. Retrieved 2019-10-24. ^ Pennsylvania Southwestern Railroad, accessed December 2008 ^ a b "Short Line Railroads". www.up.com. ^ [2][dead link] ^ Vicksburg Southern Railroad, accessed December 2008 ^ STB Finance Docket No. 
34766, January 13, 2006 ^ "Wisconsin & Southern Railroad being purchased by Kansas company". www.jsonline.com. ^ Yellowstone Valley Railroad, accessed December 2008 ^ STB Finance Docket No. 34736, September 1, 2005 American railroads Alabama Southern Alabama Warrior Arkansas Southern Austin Western Autauga Northern Baton Rouge Southern Bogalusa Bayou Birmingham Terminal Blue Ridge Southern Boise Valley Decatur & Eastern Illinois Eastern Idaho Grand Elk Ithaca Central Kanawha River Kansas & Oklahoma Kansas City Terminal Kaw River Louisiana Southern Mission Mountain Mississippi Southern Palouse River & Coulee City Pennsylvania Southwestern Pecos Valley Southern San Antonio Central South Kansas and Oklahoma Stillwater Central Swan Ranch Timber Rock Vicksburg Southern Wisconsin & Southern Yellowstone Valley Australian railroads Watco Australia Retrieved from "https://en.wikipedia.org/w/index.php?title=Watco_Companies&oldid=922889450" United States railroad holding companies Companies based in Kansas Holding companies established in 1983 Transport companies established in 1983 1983 establishments in Kansas Kinder Morgan Articles lacking reliable references from November 2010
Q: Crop raster files with a polygon and write the output with the same filename

I have a lot of raster files (satellite images, all available as GeoTIFF .tif). Some files are split into one file per band; some files have multiple bands. As this uses a lot of space on my hard drive, I want to crop every file to my area of interest, which I have as a polygon shapefile. I am close to my own solution and get the cropped images as new .tif files with the following code:

library(raster)
library(rgdal)  # for readOGR()
rasterfiles = list.files(path=getwd(), pattern = "*.TIF", full.names=TRUE)
s = stack(rasterfiles)
shp = readOGR("Area.shp")
rasterfiles_crop = crop(s, extent(shp))
output = writeRaster(rasterfiles_crop, 'out.tif', format="GTiff", overwrite=TRUE, bylayer = TRUE)

With this code I receive the filenames out_1.tif, out_2.tif etc. Unfortunately the resulting files each contain only one band, so a multi-band TIF image loses its band structure. I want to keep all bands and the original filename and just add "_crop" at the end of the new one. Can someone help me change the code accordingly?
Thank you

A: You could write them in a loop:

    library(raster)
    library(rgdal)
    shp <- readOGR("Area.shp")
    infiles <- list.files(path=getwd(), pattern="*.TIF", full.names=TRUE)
    outfiles <- file.path(YourOutputPath,
                          paste0(basename(tools::file_path_sans_ext(infiles)), "_crop.tif"))
    for (i in seq_along(infiles)) {
      r <- crop(raster(infiles[i]), shp)
      writeRaster(r, filename=outfiles[i])
    }

A: I found a solution now; the following code lists all TIF files in a folder, and a multi-band TIF keeps its bands after the crop process:

    library(raster)
    library(rgdal)
    setwd("input-folder")
    ## polygon with crop extent
    shp <- readOGR("area.shp")
    ## load TIF files
    infiles <- list.files(path=getwd(), pattern="*.tif$|*.TIF$")
    ## filenames with desired suffix and output location
    outfiles <- file.path("D:/Downloads/BDA/Output",
                          paste0(basename(tools::file_path_sans_ext(infiles)), ".tif"))
    ## crop and output settings (compression and datatype)
    for (i in seq_along(infiles)) {
      r <- crop(stack(infiles[i]), shp)
      writeRaster(r, filename=outfiles[i], bylayer=FALSE, format="GTiff",
                  datatype="INT1U", options="COMPRESS=ZIP", overwrite=TRUE)
    }

Thank you Richard for the nice loop code! Concerning the datatype: it would be nice if R could check which datatype the input files have and choose the same one automatically for the cropped output. Right now I have to specify the datatype manually; otherwise the output files are written as float32 (FLT4S) even if the input files only have 8-bit unsigned (INT1U) or 16-bit signed (INT2S) values, i.e. something like

    datatype = same.as.input.file
\section{Introduction} Key components of future quantum devices for information technology will be on-demand sources of single photons and entangled photon pairs. Prominent candidate systems in this respect are currently quantum dots (QDs). In particular, semiconductor type-I QDs, in which both electron and hole wavefunctions are bound inside the QD body, show excellent optical properties combined with compatibility with current semiconductor processing technology and, moreover, offer the potential for scalability~\cite{Aharonovich2016,Senellart2017,Thomas2021,Tomm2021,Orieux2017,Huber2018a,Klenovsky_IOP2010,Klenovsky2015}. QDs currently cover a rather wide range of applications, from quantum cryptography protocols~\cite{Muller2014,Strauf2007}, sources of polarization-entangled photon pairs~\cite{Huber2018,Liu2019,Huang2021}, quantum key distribution~\cite{BassoBasset2021,Schimpf2021}, and quantum gates~\cite{Stevenson2006,Burkard1999,Klenovsky2019,Klenovsky2016,Krapek2010}, to nanomemories~\cite{Marent2011,Marent2009_microelectronics,Bimberg2011_SbQDFlash, Marent_APL2007_10y,Sala2016,Sala2018,Klenovsky2019}. One of the key challenges in turning QDs into sources of entangled photons is to eliminate the tiny energy separation of the bright doublet of the ground-state exciton (X$^0$), dubbed the fine-structure splitting (FSS). That can be achieved,~e.g., by externally applied elastic strain~\cite{Trotta2012,Trotta2013,Trotta2016,Martin-Sanchez2017,Aberl2017,Klenovsky2018}, electric~\cite{Huang2021}, or magnetic~\cite{Lobl2019,Huber2019,Csontosova2020} fields. A further option for QDs with negligible FSS is growing QDs on lattice-matched materials~\cite{Rastelli2004,Sala2020}. Another route to small FSS is to utilize type-II QDs, where one of the quasiparticles, electron or hole, is bound outside the QD body while the other resides inside~\cite{Matsuda2007,Miloszewski2014,Jo2012,Young2014,Klenovsky2010,Klenovsky_IOP2010,Klenovsky2017}. 
The aforementioned type-II QDs were mostly realized on group III--V materials, InAs, GaAs, and GaSb. However, there exists another class of type-II QD structures based on II--VI materials, like the Cd(Se,Te)/ZnTe dots. {\bf Note, however, that our purpose in this work is not to study the feasibility of Cd(Se,Te)/ZnTe dots for the generation of entangled photons but rather to study the reduction of FSS with increased electron-hole spatial separation.} CdSe/ZnTe is a semiconductor system well known for its type-II band alignment, in which the spatially indirect optical emission appears in the near-infrared spectral region,~i.e.,~at 1.0~eV--1.1~eV. This has been demonstrated experimentally in CdSe/ZnTe quantum wells~\cite{Mourad2012}, CdSe/ZnTe core/shell nanowires~\cite{Wojnar2021}, and colloidal core/shell nanocrystals~\cite{Kim2003}. However, the lattice mismatch, which is the driving force for the formation of self-assembled QDs, is very small in this material system, amounting to 0.003, which prevents the formation of type-II CdSe/ZnTe QDs. On the other hand, a sufficiently large lattice mismatch of 0.07 is present in the CdTe/ZnTe semiconductor system, which has enabled the growth of CdTe/ZnTe QDs by molecular beam epitaxy~\cite{Tinjod2003,Wojnar2011}. Subsequently, their optical properties have been the subject of extensive investigations~\cite{Leger2007,Kazimierczuk2011,Smolenski2015}. In particular, it was found that this semiconductor system is characterized by type-I confinement, which is manifested by quite short excitonic lifetimes,~i.e., below 500~ps~\cite{Kazimierczuk2010}. However, the valence band offset in these structures is quite small~\cite{Deleporte1992}, and strain effects alone are responsible for its type-I character. 
That is why the addition of a certain amount of selenium into the Cd(Se,Te) QD layer, leading to a shift of the valence band towards lower energies, results in the transition from type-I to type-II confinement~\cite{Baranowski2020}. At the same time, the Cd(Se,Te)/ZnTe lattice mismatch is sufficiently large to induce QD formation, and the optical emission from these structures is intense enough to enable the observation of emission from individual QDs, which enables the unique study of FSS in those type-II QDs that we discuss in this work. The paper is organized as follows. We start with a description of the experiments,~i.e., the growth of Cd(Se,Te)/ZnTe QDs and measurements of their optical emission, revealing the FSS of that system as a function of Se content. That is followed by a theoretical discussion of the electronic structure of Cd(Se,Te)/ZnTe QDs, starting from an analysis of the single-particle states and carrying on to computations of correlated excitons, finally showing that the trends predicted by theory match those of the experimental results very well. Furthermore, the theory reveals a rather unusual behavior of Cd(Se,Te)/ZnTe QDs related to the light-hole exciton and the Aharonov-Bohm effect. \section{Experiment} \begin{figure} \includegraphics[width=0.45\textwidth]{figure_1_new_pw1_cut_pk.pdf} \caption{Fine-structure splitting (FSS) of individual Cd(Se,Te)/ZnTe quantum dots (QDs): (a) exciton (X) and biexciton (XX) emission from a single Cd(Se,Te)/ZnTe QD measured in two orthogonal linear polarizations corresponding to the anisotropy axes of the dot; (b) spectral position of the exciton and biexciton emission from the same dot as a function of the polarization angle, where solid lines represent fits with sine-squared functions from which the FSS value of 275~$\mu$eV is determined; (c) FSS values for various individual QDs as a function of the exciton emission energy. 
{\bf The inset of panel (a) shows the measured (points) and fitted (lines) power dependencies of X (black) and XX (red) with slopes $a=0.9$ and $a=1.7$, respectively.} The dots are taken from three samples with different average Se concentrations of~0.002,~0.03,~and~0.1, which is marked in~(c) with different colors: blue, red, and dark grey, respectively. The temperature during these measurements was kept at 7~K, the excitation laser wavelength was 405~nm, and the laser spot diameter was $\sim 3 \mu$m.} \label{fig:uPLmeas} \end{figure} The samples containing self-assembled Cd(Se,Te)/ZnTe QDs are grown by molecular beam epitaxy. The details of the growth procedure are described in Ref.~\cite{Baranowski2020}. The optically active part of the samples consists of a layer of Cd(Se,Te) QDs embedded in a ZnTe matrix. Three samples with different average Se concentrations within the dots, equal to~0.002,~0.03, and~0.1, are investigated for the purposes of the present work. {\bf The Se content within the QDs can be effectively changed by varying the growth parameters during the deposition of the QD layer. That layer consists consecutively of three CdTe monolayers, one CdSe sub-monolayer, and two CdTe monolayers. Depending on the coverage of the central CdSe monolayer, which is controlled by the exposure time of the Se flux, the average Se concentration can be varied from~0 up to~0.17. The largest Se concentration corresponds to the deposition of a complete central CdSe monolayer.} After the deposition of the QD layer, the QD formation process does not take place spontaneously despite the large lattice mismatch, as is common for II-VI semiconductor systems. It has to be additionally induced by tellurium deposition at low substrate temperature and its subsequent thermal desorption~\cite{Wojnar2011,Tinjod2003}. In the final step, the dots are capped with a 50~nm thick ZnTe layer. 
Photoluminescence (PL) measurements performed at low temperature reveal that the emission energy strongly depends on the Se concentration within the Cd(Se,Te) QDs, which is induced, most likely, by the change of confinement from type-I to type-II~\cite{Baranowski2020}. In particular, it is found that the maximum emission energy amounts to 1.98~eV, 1.83~eV, and 1.69~eV for the investigated mean Se concentrations of 0.002, 0.03, and 0.1, respectively. This choice of the samples, along with the inhomogeneous broadening of the emission bands, which typically amounts to 80~meV, ensures that emission lines from individual QDs can be found in the entire spectral range, from 1.5~eV up to 1.9~eV. In order to access the emission from individual QDs, $\mu$-PL measurements in which the excitation laser spot is reduced to 3~$\mu$m are performed. Further reduction of the excitation area is obtained using apertures with a diameter of 400~nm within a 150~nm thick gold layer deposited on top of the structures. For the measurements, the samples are placed inside a continuous-flow cryostat in which the temperature is kept at 7~K. Several emission lines with spectral widths in the range of 500~$\mu$eV -- 1~meV, originating from individual QDs, are observed. In order to determine the corresponding excitonic FSS values, the linear polarization of the optical emission spectrum has been measured. This study is performed in a geometry in which the light propagates perpendicular to the sample surface and the linear polarization vector is always parallel to the sample plane. It is found that the emission energy slightly depends on the linear polarization angle for all measured bands. In Figure~\ref{fig:uPLmeas}~(a), the emission lines are measured in two orthogonal linear polarizations corresponding to the largest change of the emission energy. In such a configuration, the energy difference is given by the value of FSS. 
In order to determine the FSS values precisely, the spectral position is plotted as a function of the polarization angle for both emission lines, see Fig.~\ref{fig:uPLmeas}~(b). It is found that this dependence can be well fitted with a sine-squared function, whose amplitude directly gives the FSS value~\cite{Leger2007,Kowalik2007}. The FSS values of the two emission bands presented in Fig.~\ref{fig:uPLmeas}~(a)~and~(b) are found to be very similar. However, both polarization angle dependencies are shifted by 90$^\circ$ with respect to each other, see Fig.~\ref{fig:uPLmeas}~(b). This feature is characteristic of biexciton and single exciton emission and indicates that both bands originate from the same QD. In order to identify whether a particular band corresponds to the single exciton or to the biexciton emission, the excitation power dependence of the optical emission spectrum has been measured, see the inset of Fig.~\ref{fig:uPLmeas}~(a). The intensity of the high-energy line at 1.798~eV increases almost linearly with increasing excitation power, whereas the intensity of the low-energy line at 1.796~eV increases superlinearly, which leads us to associate them with the single exciton and biexciton emission, respectively. In Figure~\ref{fig:uPLmeas}~(c), FSS values from over 40 individual QDs are plotted as a function of the single exciton emission energy. A large distribution of FSS values, decreasing from $\sim 300$~$\mu$eV to almost zero, is found among the investigated dots. Most importantly, they depend conspicuously on the exciton emission energy. Significantly smaller FSS values are observed, on average, for the dots emitting at lower energies compared to the dots emitting at higher energies. 
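The sine-squared fitting procedure described above is straightforward to reproduce numerically; the following is a minimal sketch with synthetic data (the `numpy`/`scipy` usage is standard; the offset, phase, and noise level are made-up values chosen to mimic the 275~$\mu$eV example):

```python
import numpy as np
from scipy.optimize import curve_fit

def emission_energy(theta, e0, fss, phi):
    """Emission energy vs. linear polarization angle theta (rad):
    a constant offset e0 plus a sine-squared modulation of amplitude fss."""
    return e0 + fss * np.sin(theta - phi) ** 2

# Synthetic polarization series: offset 1.798 eV, FSS = 275 ueV, 5 ueV noise
theta = np.linspace(0.0, np.pi, 37)
rng = np.random.default_rng(0)
data = emission_energy(theta, 1.798, 275e-6, 0.4) + rng.normal(0.0, 5e-6, theta.size)

popt, _ = curve_fit(emission_energy, theta, data, p0=[1.8, 2e-4, 0.0])
fss_fit = abs(popt[1])  # the fitted amplitude is the FSS (in eV)
```

The amplitude of the fitted modulation, not the absolute emission energy, carries the FSS; the 90$^\circ$ phase shift between the X and XX series would appear as a shift of `phi` by $\pi/2$.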
{\bf The large variation of these values at a fixed energy results, most likely, from the anisotropy of the potential localizing the charge carriers, which is induced by the shape and/or strain anisotropy of the dots, similar to CdTe/ZnTe QDs without Se~\cite{Kazimierczuk2011}.} On the other hand, the maximum emission energy depends primarily on the Se concentration within the dots~\cite{Baranowski2020}. At the same time, it is found that the sizes and shapes of the dots do not change significantly as a function of Se content within the investigated concentration range, as demonstrated previously by atomic force microscopy~\cite{Baranowski2020}. Thus, it is reasonable to conclude that the increase of the average Se concentration within Cd(Se,Te) QDs leads to an overall decrease of the FSS values. A possible explanation of this effect relies on the change of the confinement character at the dot/matrix interface from type-I to type-II, leading to an increase of the electron-hole spatial separation. Furthermore, the effect of mutual compensation of electron and hole wavefunction anisotropies may result in the observed decrease of FSS values in type-II QDs, as predicted theoretically in Ref.~\cite{Krapek2015}. {\bf Based on the growth procedure and the optical measurements presented above, we cannot draw any definite conclusion about the Se composition profile within the dots. In our considerations, a uniform Se distribution is assumed for simplicity. In fact, the presence of Se- and Te-rich regions within the dots, inducing additional electron-hole separation, cannot be excluded. Such effects have already been studied in entirely type-I QD systems in which Cd(Se,Te) dots were embedded in a ZnSe matrix~\cite{Sciesiek2016}. One of the conclusions of that work was that the FSS values were even slightly increased in the presence of Se atoms as compared to pure CdSe/ZnSe QDs and CdTe/ZnTe QDs. 
Since in the presently described Cd(Se,Te)/ZnTe QDs a decrease of the average FSS values takes place with increasing Se content, we do not expect that the local variation of Se and Te within the dots significantly impacts our results.} \begin{figure} \includegraphics[width=0.45\textwidth]{Figure_2_exp_cut_pk.pdf} \caption{Dependence of selected optical properties on the Se concentration within Cd(Se,Te)/ZnTe quantum dots: (a) emission energy; (b) decay rates, where the fast and slow decays are shown in blue and red, respectively; (c) average FSS determined from the data presented in Figure~\ref{fig:uPLmeas}~(c). The temperature of the measurements was 7~K and the excitation laser wavelength was 405~nm.} \label{fig:uERateFSSmeas} \end{figure} The most distinct experimental trends concerning the optical emission from Cd(Se,Te)/ZnTe QDs which appear as a function of increasing Se concentration within the dots are presented in Figure~\ref{fig:uERateFSSmeas}. First of all, a considerable redshift of the emission energy from 1.98~eV down to 1.6~eV is observed,~Figure~\ref{fig:uERateFSSmeas}~(a). That is accompanied by a decrease of the decay rate by one order of magnitude, Figure~\ref{fig:uERateFSSmeas}~(b). Since the PL decays can be well described by biexponential functions~\cite{Baranowski2020} for all samples, the fast and slow decay rates are determined and plotted in blue and red in Fig.~\ref{fig:uERateFSSmeas}~(b), respectively. Finally, those results are compared to the dependence of the FSS values on the Se concentration, which is the main subject of this work, see Fig.~\ref{fig:uERateFSSmeas}~(c). The values presented in Fig.~\ref{fig:uERateFSSmeas}~(c) are obtained as the arithmetic average over all QD excitons observed in a given sample. The three experimental points correspond to the three samples with different average Se concentrations investigated in this work. 
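The biexponential description of the PL decays mentioned above can be sketched as follows (a synthetic, noise-free decay; the amplitudes and rates are illustrative placeholders, not the measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a_fast, k_fast, a_slow, k_slow):
    """Biexponential PL decay: fast and slow components with rates in 1/ns."""
    return a_fast * np.exp(-k_fast * t) + a_slow * np.exp(-k_slow * t)

t = np.linspace(0.0, 50.0, 500)           # time axis in ns
signal = biexp(t, 1.0, 2.0, 0.3, 0.1)     # placeholder decay curve

popt, _ = curve_fit(biexp, t, signal, p0=[0.8, 1.0, 0.5, 0.05])
a_fast, k_fast, a_slow, k_slow = popt     # fitted amplitudes and rates
```

The two fitted rates correspond to the fast and slow components plotted in blue and red in the figure; with noisy data the initial guess `p0` should bracket the expected fast and slow time scales to avoid the two components collapsing onto each other.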
Despite the large distribution of FSS values, a clear decrease of the average FSS with increasing Se concentration is observed. In the case of Cd(Se,Te)/ZnTe QDs with a Se content of 0.17, the optical emission was too weak to perform a detailed FSS investigation like in the other samples with lower Se content. Note that the distinct decrease of the decay rates strongly indicates the type-II character of the QDs and is caused directly by the electron-hole wavefunction spatial separation. The huge emission energy redshift of 350~meV is also consistent with the type-II confinement in Cd(Se,Te)/ZnTe QDs with relatively large Se content. A reduction of the Cd(Se,Te) bandgap cannot explain this effect, since it is expected to amount to only 135~meV at maximum (for a Se concentration of 0.4) due to the bowing effect~\cite{Wei1999}. \section{Theory} Based on the aforementioned experimental results, we now provide the theoretical reason for the reduction of the FSS values with increasing Se content. In order to do that, we calculate the correlated electronic structure of the ground-state exciton (X$^0$) using a combination of the eight-band ${\bf k}\cdot{\bf p}$ method~\cite{Birner2007,t_zibold,Mittelstadt2022}, providing single-particle (SP) basis states for the configuration interaction~\cite{Schliwa:09} (CI) algorithm which we developed earlier, see Ref.~\cite{Klenovsky2017}. During the CI calculation, our CI code also evaluates the radiative emission rate~\cite{Klenovsky2017,Klenovsky2019} utilizing Fermi's golden rule~\cite{Dirac1927}. More specifically, we consider~\cite{Klenovsky2017,Csontosova2020,Mittelstadt2022} the SP states as linear combinations of $s$-orbital-like and $x$, $y$, $z$ $p$-orbital-like Bloch waves at the $\Gamma$ point of the Brillouin zone,~i.e., \begin{equation} \Psi_{a_i}(\mathbf{r}) = \sum_{\nu\in\{s,x,y,z\}\otimes \{\uparrow,\downarrow\}} \chi_{a_i,\nu}(\mathbf{r})u^{\Gamma}_{\nu}\,. 
\end{equation} Here $u^{\Gamma}_{\nu}$ is the Bloch wave function of an $s$-like conduction band or a $p$-like valence band at the $\Gamma$ point, $\uparrow$/$\downarrow$ mark the spin, and $\chi_{a_i,\nu}$ is~the~envelope function for $a_i \in \{ e_i, h_i \}$. On the other hand, in CI we consider the excitonic wavefunction as a linear combination of Slater determinants (SDs) \begin{equation} \psi_i^{\rm X}(\mathbf{r}) = \sum_{m=1}^{n_{\rm SD}} \eta_{i,m} D_m^{\rm X}(\mathbf{r}), \label{eq:CIwfSD} \end{equation} where $n_{\rm SD}$ is the number of SDs $D_m^{\rm X}(\mathbf{r})$, and $\eta_{i,m}$ are the CI coefficients, which are found along with the eigenenergies using the variational method by solving the Schr\"{o}dinger equation \begin{equation} \label{CISchrEq} \hat{H}^{\rm{X}} \psi_i^{\rm X}(\mathbf{r}) = E_i^{\rm{X}} \psi_i^{\rm X}(\mathbf{r}), \end{equation} where $E_i^{\rm{X}}$ is the $i$-th eigenenergy of the excitonic state $\psi_i^{\rm X}(\mathbf{r})$, and~$\hat{H}^{\rm{X}}$ is the CI Hamiltonian, which reads \begin{equation} \label{CIHamiltonian} \hat{H}^{\rm{X}}=\hat{H}_0^{\rm{SP}}+\hat{V}^{\rm{X}}, \end{equation} where $\hat{H}_0^{\rm{SP}}$ and $\hat{V}^{\rm{X}}$ represent the Hamiltonian of the noninteracting SP states and the~Coulomb interaction between them, respectively. The matrix element of $\hat{V}^{\rm{X}}$ reads~\cite{Klenovsky2017,Klenovsky2019,Csontosova2020} \begin{equation} \begin{split} &\bra{D_n^{\rm X}}\hat{V}^{\rm{X}}\ket{D_m^{\rm X}} = -\frac{1}{4\pi\epsilon_0} \sum_{ijkl} \iint {\rm d}\mathbf{r} {\rm d}\mathbf{r}^{\prime} \frac{e^2}{\epsilon(\mathbf{r},\mathbf{r}^{\prime})|\mathbf{r}-\mathbf{r}^{\prime}|} \\ &\times \{ \Psi^*_i(\mathbf{r})\Psi^*_j(\mathbf{r}^{\prime})\Psi_k(\mathbf{r})\Psi_l(\mathbf{r}^{\prime}) - \Psi^*_i(\mathbf{r})\Psi^*_j(\mathbf{r}^{\prime})\Psi_l(\mathbf{r})\Psi_k(\mathbf{r}^{\prime})\}. 
\end{split} \label{eq:CoulombMatrElem} \end{equation} where $e$ labels the elementary charge and $\epsilon(\mathbf{r},\mathbf{r}^{\prime})$ is the spatially dependent dielectric function. Note that the minus sign in front of the integral in Eq.~\eqref{eq:CoulombMatrElem} results from the different signs of the charges of the electron and hole of which the exciton is composed. The sixfold integral in Eq.~\eqref{eq:CoulombMatrElem} is evaluated using the~Green's function method~\cite{Schliwa:09,Stier2000,Klenovsky2017,Csontosova2020}. {\bf Note that for $\epsilon(\mathbf{r},\mathbf{r}^{\prime})$ in Eq.~\eqref{eq:CoulombMatrElem} we use the position-dependent bulk dielectric constant in our CI calculations.} Further, the multipole expansion of the exchange interaction is included in our CI for a CI basis consisting of two electron and two hole SP ground states, following the theory outlined in Refs.~\cite{Takagahara2000,Krapek2015}. \begin{figure}[!ht] \includegraphics[width=0.45\textwidth]{CdSeTeOZnTe_SP_bind_CI_Bloch_vs_Se.pdf} \caption{Electronic states of Cd(Se,Te)/ZnTe QDs. In panel (a) we show twelve single-particle (SP) energies of electrons (blue) and holes (red). The inset in (a) gives the markings of the different types of confinement in Cd(Se,Te)/ZnTe QDs,~i.e., type-I (mark T-I) for Se=0--0.15, type-II (mark T-II) for Se=0.15--0.3, and {\bf type I/II} (mark T-I/II) for Se=0.3--0.4. In (b) we show the SP transition energies (blue curve) and those computed using CI without (orange curve) and with (green curve) the inclusion of the effect of Coulomb correlation. {\bf In (b) we also give the binding energy of X$^0$ with respect to the SP transition (black broken curve), with the energy axis on the right.} The insets in~(b) show side cuts of our QD (green object) and the SP electron (blue object) and hole (red object) probability densities. 
Panel (c) gives the conduction (CB, black), heavy-hole (HH, red), light-hole (LH, green), and spin-orbit split-off (SO, blue) Bloch band content of X$^0$ as a function of Se concentration, computed~\cite{Csontosova2020} using CI with twelve electron and twelve hole SP basis states,~i.e., including the effect of correlation. The transitions between different confinements are marked in all panels by black dotted vertical lines.} \label{fig:TeorSPCIBloch} \end{figure} \onecolumngrid \begin{figure}[!ht] \centering \includegraphics[width=0.95\textwidth]{CdTeDensities.pdf} \caption{Cuts of the single-particle (SP) probability densities of Cd(Se,Te)/ZnTe QDs for (a) zero Se content, (b) Se content of 0.2, and (c) Se content of 0.4, corresponding to type-I, type-II, and {\bf type-I/II confinement}, respectively. The letters QD in the top row mark that the first column shows the cuts of the simulated QD body; the numbers in the first row enumerate the SP states, starting from the ground state marked by zero. The last column gives the Miller indices of the planes where the cut was performed in each row of the figure. The abbreviations ``el." and ``hl." mark the electrons and holes, respectively. In (b) the designations ``[001] above" and ``[001] below" identify that the cuts of the hole densities were performed above and below the QD body, respectively, and correspond to the side cut given in the last row of panel (b).} \label{fig:ProbabDen} \end{figure} \twocolumngrid \clearpage \section{Results and Discussions} The electronic states of Cd(Se,Te)/ZnTe QDs computed using the aforementioned ${\bf k}\cdot{\bf p}$+CI method are shown in Fig.~\ref{fig:TeorSPCIBloch}. Motivated by the typical structure pinpointed in Ref.~\cite{Baranowski2020}, the computed shape of the Cd(Se,Te) QD was a truncated cone with lower and upper basis diameters of 36~nm and 22~nm, respectively, and with a height of 4~nm. Apart from the QD body, the rest of the simulation space consisted of ZnTe. 
After the definition of the structure, the elastic strain tensor was obtained in the whole simulated structure by grid-point-wise minimization of the elastic energy. Thereafter, the Coulomb potential energy, including the effects of piezoelectricity, was obtained by solving the Poisson equation in the whole structure. The resulting Hamiltonian matrix was then diagonalized using the Nextnano++~\cite{Birner:07,t_zibold} simulation tool, which was also used for the aforementioned computation steps. {\bf Note that all the material parameters, including the effective masses, were taken from the library of the Nextnano++ software~\cite{Birner:07}.} The resulting eigenenergies and eigenfunctions are shown in Fig.~\ref{fig:TeorSPCIBloch}~(a) and Fig.~\ref{fig:ProbabDen}, respectively, for twelve electron and twelve hole SP states. Depending on the Se content, we have identified three types of confinement in our QDs,~i.e., type-I for Se=0--0.15, type-II for Se=0.15--0.3, and {\bf so-called type~I/II} for Se=0.3--0.4. Note that the type-II confinement was identified by the spatial location of the hole wavefunctions \{inset in Fig.~\ref{fig:TeorSPCIBloch}~(b) and Fig.~\ref{fig:ProbabDen}~(b)\} outside the QD body, while the electrons are firmly bound inside the QD for all Se contents. {\bf The type~I/II confinement is peculiar and shows features of the remaining two types of confinement, see also in the following.} While the energy of the electron SP states decreases at a similar rate for all Se contents, including the SP excited states, the holes are affected by the Se content and the associated type of confinement considerably more. While for type~I and the associated Se contents $<0.15$ the hole SP energy decreases (i.e., the absolute value of the hole energy increases), for type~II it increases only slightly, and, finally, for {\bf type~I/II} the hole SP energy increases (its absolute value decreases) with increasing Se content. 
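The final step of the workflow above, and likewise the CI step that follows, amounts to an eigenproblem for a Hermitian matrix of the form $\hat{H}^{\rm X}=\hat{H}_0^{\rm SP}+\hat{V}^{\rm X}$. A toy numerical sketch (the matrix elements below are placeholders, not the actual Coulomb integrals of our calculation):

```python
import numpy as np

# Toy diagonal of noninteracting SP transition energies for four Slater
# determinants (eV); placeholder values
h0 = np.diag([1.80, 1.85, 1.85, 1.90])

# Placeholder Coulomb couplings between the determinants (eV); in the real
# calculation these are the sixfold integrals evaluated by the Green's
# function method
v = np.array([[-0.020, -0.002, -0.002,  0.000],
              [-0.002, -0.015,  0.000, -0.001],
              [-0.002,  0.000, -0.015, -0.001],
              [ 0.000, -0.001, -0.001, -0.010]])

h_ci = h0 + v                          # Hermitian CI Hamiltonian
energies, eta = np.linalg.eigh(h_ci)   # exciton energies E_i, coefficients eta_{i,m}
ground_state_energy = energies[0]      # lies below the lowest diagonal entry
```

The off-diagonal couplings push the lowest eigenvalue below the smallest diagonal entry; this is the correlation-induced lowering of the X$^0$ energy discussed below, here in caricature.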
As a result of the aforementioned behavior, the SP electron-hole transition energy \{see the blue curve in Fig.~\ref{fig:TeorSPCIBloch}~(b)\} remains almost constant with increasing Se content in type~I, while its magnitude is reduced with increasing Se for type~II and type~I/II. The Se-content-dependent energies of X$^0$ computed by CI without and with the effect of correlation are shown by the orange and green curves, respectively, in Fig.~\ref{fig:TeorSPCIBloch}~(b). {\bf Note that by the ``effect of correlation" we mean specifically the expansion of the CI complexes into a basis consisting not only of the ground but also of the excited SP states.} Now, the binding energy of X$^0$ {\bf \{Fig.~\ref{fig:TeorSPCIBloch}~(b)\} } with respect to the SP electron-hole transition energy decreases in type~I from 70~meV for zero Se content to 10~meV for a content of 0.15. The large binding energies in type~I are due to the attractive Coulomb interaction between the tightly quantum-confined electrons and holes in the QD~\cite{Klenovsky2021}. For Se contents between 0.15 and 0.3 (type II) the binding energy remains constant at 10~meV, and it further increases with Se content up to 30~meV in type-I/II. Correlation further increases the binding energy, by 20~meV for zero Se content up to almost 30~meV for a Se content of 0.4. In type~II the additional binding energy due to correlation is $\sim2$~meV. Finally, note that the magnitude of the reduction of the X$^0$ energy with increasing Se content matches that observed from the spectral shift of the maximum of the photoluminescence spectra in Ref.~\cite{Baranowski2020}. In Fig.~\ref{fig:TeorSPCIBloch}~(c) we show the Se-dependent Bloch state content of X$^0$, obtained from the Bloch state composition of the electron and hole SP states, utilizing the squares of the CI wavefunction coefficients $\eta_{i,m}$ from Eq.~\eqref{eq:CIwfSD}. The method was previously developed in Refs.~\cite{Csontosova2020,Huang2021}. 
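The weighting of the SP Bloch-band fractions by the squared CI coefficients can be illustrated with a toy example (all numbers below are placeholders, not our computed values):

```python
import numpy as np

# CI coefficients eta_{0,m} of the ground-state exciton (placeholders)
eta = np.array([0.95, 0.25, 0.15, 0.10])
eta = eta / np.linalg.norm(eta)          # normalize the CI state

# Bloch-band fractions (CB, HH, LH, SO) of the hole SP state entering each
# Slater determinant; placeholder rows, each summing to one
sp_content = np.array([[0.0, 0.80, 0.15, 0.05],
                       [0.0, 0.60, 0.35, 0.05],
                       [0.0, 0.40, 0.50, 0.10],
                       [0.0, 0.30, 0.55, 0.15]])

# Exciton band content: average of the SP fractions weighted by |eta_m|^2
x0_content = (np.abs(eta) ** 2) @ sp_content
```

Since the weights $|\eta_{0,m}|^2$ sum to one and each SP row sums to one, the resulting exciton content is again normalized; admixing of excited SP states with larger LH or SO fractions shifts the X$^0$ composition accordingly.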
Note that the Bloch state composition studied in this work is that of the conduction band~(CB), heavy-hole~(HH), light-hole~(LH), and spin-orbit~(SO) valence bands. The content of CB in X$^0$ is $\sim50$\,\% for all studied Se contents. While the HH content is also $\sim50$\,\% for smaller Se concentrations, the increase of the amount of Se causes a considerable progressive admixing of LH states in X$^0$, reaching values of $\sim40$\,\% for Se contents $>0.35$. At the same time, we observe an increase of the admixing of the SO state for Se contents $>0.3$, reaching values as high as $\sim10$\,\%. We note that for Se $<0.3$ the SO content of X$^0$ is negligibly small. Thus, both the type-II regime and in particular {\bf type-I/II} show an unusual composition of X$^0$. Strikingly, in type~I/II X$^0$ has an almost LH-like character with a small admixture of SO states and negligible HH content. Such an LH X$^0$ would be advantageous in quantum information technology, {\bf such as,~e.g., enabling coherent conversion of photons into electron spins~\cite{Vrijen2001},} and was first experimentally reported in Ref.~\cite{Huo:NatPhys} for the GaAs/AlGaAs QD system, where the LH character was obtained by externally applied tensile strain. However, we predict that in Cd(Se,Te)/ZnTe QDs such an LH exciton is present for Se contents $>0.35$ without the necessity of any external tuning. \begin{figure}[!ht] \centering \includegraphics[width=0.45\textwidth]{CdSeTeOZnTe_fss_bd_intensity_vs_Se_woExp.pdf} \caption{Fine energy structure of the ground-state exciton (X$^0$) in Cd(Se,Te)/ZnTe QDs as a function of Se content. 
In (a) we show the theoretical values of the FSS obtained using CI with different levels of approximation,~i.e., without the effect of correlation and without the multipole expansion of the exchange interaction \{``CI (i)", red curve\}, with correlation included but without the multipole expansion \{``CI (ii)", blue curve\}, and without the effect of correlation but with the multipole expansion included \{``CI (iii)", green curve\}. Panel (b) shows the calculated values of the energy splitting between the bright and dark exciton (X$^0$ BD), and in (c) we give the computed values of the corresponding X$^0$ emission rates. Note that the colors of the theory curves in (b) and (c) correspond to the same CI approximations as described for (a). The transitions between different confinements are marked in all panels by black dotted vertical lines.} \label{fig:FSSBDteorVsMeas} \end{figure} We now turn our attention to the theoretical analysis of the fine structure of X$^0$ and show the results in Fig.~\ref{fig:FSSBDteorVsMeas}. We computed the fine structure employing three levels of approximation in CI,~i.e., (i) without considering the effect of correlation and with the monopole-monopole term of the exchange interaction only, (ii) with correlation and the monopole-monopole term of exchange, and (iii) without correlation but with the monopole-monopole, monopole-dipole, and dipole-dipole terms of the exchange interaction, which we call the multipole expansion~\cite{Krapek2015,Krapek2016}. The results for the FSS of X$^0$ are given in Fig.~\ref{fig:FSSBDteorVsMeas}~(a) by circles and full curves. Firstly, we see that the FSS values computed using approximations (i) and (ii) defined in the previous paragraph are $<10\,\,\mathrm{\mu eV}$ for all studied Se concentrations. 
However, the multipole expansion of the exchange interaction,~i.e., point (iii) above, gives a more realistic estimate of the FSS when compared to the experimental data \{Figs.~\ref{fig:uPLmeas}~and~\ref{fig:uERateFSSmeas}\}, in particular for type-I confinement. We thus conclude that the FSS in the type~I and type~II confinements of our system is dominated by the multipole expansion of the exchange interaction, predominantly by the dipole-dipole term~\cite{Klenovsky2021}. However, in the {\bf type~I/II} regime the FSS is increased up to $20\,\,\mathrm{\mu eV}$ due to the effect of Coulomb correlation. In Fig.~\ref{fig:FSSBDteorVsMeas}~(b) we show the energy difference between the optically active (bright) and inactive (dark) X$^0$ doublets, marked as X$^0$~BD in that panel. We see that, similarly to the FSS, X$^0$~BD is larger in the type-I regime. Interestingly, contrary to the FSS, BD is dominated in type~I by approximation (ii),~i.e., the correlation, which causes BD to be more than twice as large as that found by approximations~(i)~and~(iii). On the other hand, in type~II, BD is caused predominantly by approximation (iii),~i.e., the multipole expansion. Finally, BD for {\bf type~I/II} is again caused mainly by correlation \{approximation (ii)\}. Furthermore, in Fig.~\ref{fig:FSSBDteorVsMeas}~(c) we show the theoretical prediction of the X$^0$ radiative rate, computed by our ${\bf k}\cdot{\bf p}$+CI computational machinery. We observe that the computed rates for all three approximations (full curves) are close to the experimental results in Fig.~\ref{fig:uERateFSSmeas}~(b). As expected, the emission rate of our dots in type~II ($\sim 10^{-3}$~1/ns) is reduced by two orders of magnitude compared to type~I ($\sim 10^{-1}$~1/ns). Moreover, for the {\bf type-I/II} regime it is reduced by another two orders of magnitude (to $\sim 10^{-5}$~1/ns). {\bf However, we note that in our CI calculations we omit the short-range interaction within the unit crystal cell. 
Thus, we also neglect the effect of that interaction on the emission rate, resulting in the difference to the experimental data.} {\bf The aforementioned behavior can be understood by inspection of the probability densities given in Fig.~\ref{fig:ProbabDen}. We see that in type I \{Fig.~\ref{fig:ProbabDen}~(a)\} both the electron and hole probability densities reside inside the QD body; hence, the overlap is the largest of the studied types of confinement. On the other hand, in type II \{Fig.~\ref{fig:ProbabDen}~(b)\}, while the electrons still reside inside the QD, the holes are pushed into the surrounding ZnTe material above and below the QD, which also explains the reduced oscillator strength of X$^0$. The confinement for holes is provided by the band-edge changes caused by the elastic strain around the QD and the piezoelectricity originating in the lattice mismatch between the dot and buffer materials~\cite{Klenovsky2010,Klenovsky_IOP2010}. Finally, for type~I/II in Fig.~\ref{fig:ProbabDen}~(c) we find the holes to be confined again inside the QD body. However, the holes, in particular those lowest in energy, are confined on the outskirts of the QD and their overlap with the electrons is very small, as the wavefunctions almost exactly ``miss" each other. It is that kind of behavior, combining localization of the holes inside the QD with a very small overlap with the electrons, that led us to name this confinement type-I/II. } However, the aforementioned analysis of the radiative emission rate of X$^0$ from Cd(Se,Te)/ZnTe QDs indicates that both the type~II and {\bf type~I/II} regimes are very hard to access by optical means, {\bf as the dots would emit one photon in $\sim 1\,\mu{\rm s}$ or even in $\sim 100\,\mu{\rm s}$ for the former and latter confinements, respectively.} Finally, we would like to comment on the topology of the hole wavefunctions in Cd(Se,Te)/ZnTe QDs. 
From the probability densities shown in Fig.~\ref{fig:ProbabDen} we can observe that the holes for type-II and {\bf type-I/II} \{Fig.~\ref{fig:ProbabDen}~(b)~and~(c)\} have a toroidal shape for both the ground and excited SP states. Thus, the hole states in Cd(Se,Te)/ZnTe QDs with type-II or type-I/II confinement might be utilized for the realization of the Aharonov-Bohm effect~\cite{Aharonov1959}, similarly to what was proposed,~e.g., in Ref.~\cite{llorens_topology_2019}. \section{Conclusions} We have studied the excitonic structure of Cd(Se,Te) QDs embedded in a ZnTe matrix. The photoluminescence spectroscopy analysis revealed a reduction of the FSS of the ground-state exciton with increasing Se content in the dots. This was found to be associated with the type-II character of the dots for larger Se contents. We confirmed that by detailed ${\bf k}\cdot{\bf p}$ and CI calculations, the results of which explained the experimentally observed trends very well. The theory identified the main mechanism causing larger FSS, in particular in type-I dots, to be the multipole expansion of the exchange interaction. Furthermore, using our theory we found that for Se contents in the dot larger than $\sim0.3$ a {\bf peculiar type~I/II} confinement occurs, causing an almost purely light-hole character of the exciton and a toroidal shape of the hole states. \section{Acknowledgments} P.W. acknowledges the financial support from the National Centre of Science (Poland) through grant 2017/26/E/ST3/00253. P.K. was financed by the project CUSPIDOR, which has received funding from the QuantERA ERA-NET Cofund in Quantum Technologies implemented within the European Union's Horizon 2020 Programme. In addition, this project has received national funding from the Ministry of Education, Youth and Sports of the Czech Republic and funding from the European Union's Horizon 2020 (2014-2020) research and innovation framework programme under Grant agreement No. 731473. 
The work reported in this paper was also partially funded by the projects 20IND05 QADeT, 20FUN05 SEQUME, and 17FUN06 SIQUST, which received funding from the EMPIR programme co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme. \clearpage \newcommand{\noopsort}[1]{} \newcommand{\printfirst}[2]{#1} \newcommand{\singleletter}[1]{#1} \newcommand{\switchargs}[2]{#2#1}
package timus;

import java.util.Scanner;

/**
 * Created by sherxon on 12/9/16.
 */
public class AmusementPark1796 {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int tens = in.nextInt();
        int fifties = in.nextInt();
        int hundreds = in.nextInt();
        int fivehundreds = in.nextInt();
        int thousands = in.nextInt();
        int fivethousands = in.nextInt();
        int price = in.nextInt();
        long all = tens * 10 + fifties * 50 + hundreds * 100
                + fivehundreds * 500 + thousands * 1000 + fivethousands * 5000;
        long max = all / price;
        int min = 0;
        if (tens > 0) min = 10;
        else if (fifties > 0) min = 50;
        else if (hundreds > 0) min = 100;
        else if (fivehundreds > 0) min = 500;
        else if (thousands > 0) min = 1000;
        else if (fivethousands > 0) min = 5000;
        long mini = (all - min) / price;
        System.out.println(max - mini);
        for (long i = mini + 1; i <= max; i++) {
            System.out.print(i + " ");
        }
    }
}
\section{Introduction} Adaptive beamforming techniques have been developed to improve the reception of a desired signal while suppressing interference at the output of a sensor array. It is a ubiquitous task in array signal processing with applications in radar, sonar, astronomy, and more recently, in wireless communications \cite{Anderson}-\cite{Vorobyov}. A number of adaptive algorithms for the beamformer design are available and have been extensively studied \cite{Trees}, \cite{Jian}. The most common are the linearly constrained adaptive algorithms \cite{Frost}-\cite{Zou}. In general, the linear constraints correspond to prior knowledge of certain parameters such as the direction of arrival (DOA) of the desired signal. An important issue considered in adaptive beamforming is the design criterion. Among the many adaptive algorithms found in the literature, the most promising criteria employed are the constrained minimum variance (CMV) \cite{Trees} and the constrained constant modulus (CCM) \cite{Jian}, due to their simplicity and effectiveness. The CMV criterion aims to minimize the beamformer output power while maintaining the array response on the DOA of the desired signal. The CCM criterion is a positive measure \cite{Jian} of the deviation of the beamformer output from a constant modulus condition subject to a constraint on the array response of the desired signal. By measuring this deviation, the CCM criterion provides more information than the CMV for the parameter estimation of constant modulus constellations in the beamformer design. Numerous adaptive algorithms have been proposed according to these constrained criteria to realize the beamformer design \cite{Trees}, { \cite{Frost}-\cite{Pezeshki}}. 
The major drawback of the full-rank methods, such as stochastic gradient (SG) \cite{Pezeshki2} and recursive least-squares (RLS) \cite{Via}, \cite{Haykin}, is that these methods require a large number of samples to reach the steady state when the number of elements in the filter is large. Furthermore, in dynamic scenarios, filters with many elements usually fail or provide poor performance in tracking signals embedded in interference and noise. Reduced-rank signal processing was originally motivated to provide a way out of this dilemma { \cite{Haimovich}-\cite{Scharf}}. For the application of beamforming, reduced-rank schemes project the received vector onto a lower dimensional subspace and perform the filter optimization within this subspace. One of the popular reduced-rank schemes is the multistage Wiener filter (MSWF), which employs the minimum mean squared error (MMSE) criterion \cite{Goldstein3}, and its extended versions that utilize the CMV and CCM criteria were reported in \cite{Honig}, \cite{Lamare}. Another technique that resembles the MSWF \cite{Chen}, \cite{Burykh} is the auxiliary-vector filtering (AVF) \cite{Pados}, \cite{Pados2}. A joint iterative optimization (JIO) scheme, which was presented recently in \cite{Lamare2}, employs the CMV criterion with a relatively low-complexity adaptive implementation to achieve better performance than the existing methods. In this paper, we introduce a robust reduced-rank scheme based on the joint iterative optimization of filters with the CCM criterion in detail and compare it with its CMV counterpart to show its improved performance in the studied scenarios. The developed CCM reduced-rank scheme consists of a bank of full-rank adaptive filters, which constitutes the transformation matrix, and an adaptive reduced-rank filter that operates at the output of the bank of full-rank filters. 
The transformation matrix maps the received signal into a lower dimension, which is then processed by the reduced-rank filter to estimate the transmitted signal. The proposed scheme provides an iterative exchange of information between the transformation matrix and the reduced-rank filter and thus leads to improved convergence and tracking performance. This paper makes two contributions: \begin{itemize} \item A reduced-rank scheme according to the constant modulus (CM) criterion subject to different constraints is proposed based on the JIO of adaptive filters. This robust reduced-rank scheme is investigated for both the direct-form processor (DFP) and the generalized sidelobe canceller (GSC) \cite{Haykin} structures. For each structure, a family of computationally efficient reduced-rank SG and RLS type algorithms is derived for the proposed scheme. The Gram-Schmidt (GS) technique is employed in the proposed algorithms to reformulate the transformation matrix and further improve performance. An automatic rank selection technique is developed to determine the most adequate rank for the proposed algorithms. \item A complexity comparison is presented to show the computational costs of the proposed reduced-rank algorithms. An analysis of the convergence properties of the proposed reduced-rank scheme is carried out. Simulations are performed to show the improved convergence and tracking performance of the proposed algorithms over existing methods. The effectiveness of the GS and automatic rank selection techniques for the proposed algorithms is visible in the results. \end{itemize} \vspace{1em} The remainder of this paper is organized as follows: we outline a system model for beamforming in Section II. { Based on this model, the full-rank and reduced-rank CCM beamformer designs are reviewed}. 
The proposed reduced-rank scheme based on the CM criterion subject to different constraints is presented in Section III, and the proposed adaptive algorithms are detailed for implementation in Section IV. The complexity and convergence analysis of the proposed algorithms is carried out in Section V. Simulation results are provided and discussed in Section VI, and conclusions are drawn in Section VII. \section{System Model and CCM Beamformer Design} In this section, we first describe a system model to express the received data vector. Based on this model, the full-rank beamformer design according to the CM criterion subject to the constraint on the array response is introduced for the DFP and the GSC structures. \subsection{System Model} Let us suppose that $q$ narrowband signals impinge on a uniform linear array (ULA) of $m$ ($m\geq q$) sensor elements. The sources are assumed to be in the far field with DOAs $\theta_{0}$,\ldots,$\theta_{q-1}$. The received vector $\boldsymbol x(i)\in\mathbb C^{m\times 1}$ at the $i$th snapshot can be modeled as \begin{equation} \label{1} \centering {\boldsymbol x}(i)={\boldsymbol {A}}({\boldsymbol {\theta}}){\boldsymbol s}(i)+{\boldsymbol n}(i),~~~ i=1,\ldots,N, \end{equation} where $\boldsymbol{\theta}=[\theta_{0},\ldots,\theta_{q-1}]^{T}\in{ \mathbb{R}}^{q \times 1}$ contains the signal DOAs, ${\boldsymbol A}({\boldsymbol {\theta}})=[{\boldsymbol a}(\theta_{0}),\ldots,{\boldsymbol a}(\theta_{q-1})]\in\mathbb{C}^{m \times q}$ comprises the normalized signal steering vectors ${\boldsymbol a}(\theta_{k})=[1,e^{-2\pi j\frac{u}{\lambda_{\textrm{c}}}\cos{\theta_{k}}},\ldots$,\\$e^{-2\pi j(m-1)\frac{u}{\lambda_{\textrm{c}}}\cos{\theta_{k}}}]^{T}\in\mathbb{C}^{m \times 1}, (k=0,\ldots,q-1)$, where $\lambda_{\textrm{c}}$ is the wavelength and $u$ ($u=\lambda_{\textrm{c}}/2$ in general) is the inter-element distance of the ULA, and to avoid mathematical ambiguities, the steering vectors $\boldsymbol a(\theta_{k})$ are assumed to be linearly 
independent. ${\boldsymbol s}(i)\in \mathbb{C}^{q\times 1}$ is the source data, ${\boldsymbol n}(i)\in\mathbb{C}^{m\times 1}$ is the sensor noise, which is assumed to be a zero-mean, spatially and temporally white Gaussian process, $N$ is the number of snapshots, and $(\cdot)^{T}$\ stands for transpose. \subsection{ Full-rank CCM Beamformer Design} The full-rank CCM linear receiver design for beamforming is equivalent to determining a filter ${\boldsymbol w}(i)=[w_{1}(i),\ldots,w_{m}(i)]^{T}\in\mathbb{C}^{m\times 1}$ that provides an estimate of the desired symbol $y(i)=\boldsymbol w^H(i)\boldsymbol x(i)$, where $(\cdot)^{H}$ denotes Hermitian transpose. The calculation of the weight vector is based on the minimization of the following cost function \begin{equation}\label{2} \begin{split} &J_{\textrm{cm}}\Big(\boldsymbol w(i)\Big)=\mathbb E\Big\{\big[{ |y(i)|^{p}}-\nu\big]^{2}\Big\},\\ &\textrm{subject~to}~~{\boldsymbol w}^{H}(i){\boldsymbol a}(\theta_{0})=\gamma, \end{split} \end{equation} where $\nu$ is suitably chosen to guarantee that the weight solution is close to the global minimum and $\gamma$ is set to ensure the convexity of (\ref{2}) \cite{Lamare}. The quantity $\theta_0$ is the direction of the desired signal, $\boldsymbol a(\theta_{0})$ denotes the corresponding normalized steering vector, and in general, $p=2$ is selected to consider the cost function as the expected deviation of the squared modulus of the beamformer output from a constant, say $\nu=1$. { The CCM criterion is a positive measure \cite{Jian} of the deviation of the beamformer output from a constant modulus condition subject to a constraint on the array response of the desired signal. Compared with the CMV criterion, it exploits a constant modulus property of the transmitted signals, utilizes the deviation to provide more information for the parameter estimation of the constant modulus constellations, and achieves a superior performance \cite{Lamare3}, \cite{Lamare}}. 
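As a concrete numerical illustration of the data model in (\ref{1}), the following Python sketch builds the normalized ULA steering vectors and one noise-free snapshot ${\boldsymbol x}(i)={\boldsymbol A}({\boldsymbol \theta}){\boldsymbol s}(i)+{\boldsymbol n}(i)$ for $u=\lambda_{\textrm{c}}/2$. It is illustrative only; the function names and the toy DOA/amplitude values are our own, not part of the paper.

```python
import cmath

def steering_vector(theta_deg, m, d_over_lambda=0.5):
    """Normalized ULA steering vector a(theta) for m sensors,
    element spacing u = lambda_c/2 by default (cf. Eq. (1))."""
    theta = cmath.pi * theta_deg / 180.0
    return [cmath.exp(-2j * cmath.pi * k * d_over_lambda * cmath.cos(theta))
            for k in range(m)]

def received_snapshot(thetas_deg, s, noise, m):
    """One snapshot x(i) = A(theta) s(i) + n(i)."""
    x = list(noise)
    for theta, sk in zip(thetas_deg, s):
        a = steering_vector(theta, m)
        for k in range(m):
            x[k] += a[k] * sk
    return x

# Two sources at 30 and 70 degrees, m = 4 sensors, noise-free for clarity
x = received_snapshot([30.0, 70.0], [1.0, 0.5], [0.0] * 4, 4)
```

Each steering-vector entry has unit modulus, and the first sensor simply sums the source amplitudes, which makes the model easy to sanity-check.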
The CCM beamformer minimizes the contribution of interference and noise while keeping the gain along the look direction constant. The weight expression of the full-rank CCM design is given in \cite{Lamare}. \subsection{ Reduced-rank CCM Beamformer Design} For large $m$, considering the high computational cost and poor performance associated with the full-rank filter, a number of recent works in the literature have been reported based on reduced-rank schemes { \cite{Haimovich}-\cite{Goldstein2}}, \cite{Goldstein3}-\cite{Wong}. Here, we will describe a reduced-rank framework that reduces the number of coefficients by mapping the received vector into a lower dimensional subspace. { The diagrams of the reduced-rank processors are depicted for the DFP and the GSC structures in Fig. \ref{fig:model22}(a) and Fig. \ref{fig:model22}(b), respectively.} \begin{figure}[htb] \begin{minipage}[h]{1.0\linewidth} \centering \centerline{\epsfig{figure=fig0.eps,scale=0.68}} \vspace{-0em}\caption{Reduced-rank scheme for (a) the DFP and (b) the GSC structures.} \label{fig:model22} \end{minipage} \end{figure} \subsubsection{ Beamformer Design for the DFP} In the DFP structure, $\boldsymbol T_r\in\mathbb C^{m\times r}$ denotes the transformation matrix that includes a set of $m\times 1$ vectors for an $r$-dimensional subspace with $r\leq m$. The transformation matrix maps the received vector $\boldsymbol x(i)$ into its low-dimension version $\bar{\boldsymbol x}(i)\in\mathbb C^{r\times 1}$, which is given by \begin{equation}\label{3} \bar{\boldsymbol x}(i)=\boldsymbol T_r^H(i)\boldsymbol x(i), \end{equation} where, in what follows, all $r$-dimensional quantities are denoted by an overbar. An adaptive reduced-rank CCM filter $\bar{\boldsymbol w}(i)\in\mathbb C^{r\times 1}$ follows the transformation matrix to produce the filter output $y(i)=\bar{\boldsymbol w}^H(i)\bar{\boldsymbol x}(i)$. 
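In plain code, the dimensionality reduction in (\ref{3}) and the reduced-rank output $y(i)=\bar{\boldsymbol w}^H(i)\bar{\boldsymbol x}(i)$ amount to conjugated inner products. A minimal Python sketch follows (the helper names and the toy selection matrix $\boldsymbol T_r$ are our own illustration, not the paper's design):

```python
def reduce_dim(T, x):
    """x_bar = T^H x: project an m-dim snapshot onto an r-dim subspace.
    T is an m x r transformation matrix given as a list of m rows."""
    m, r = len(T), len(T[0])
    return [sum(T[k][j].conjugate() * x[k] for k in range(m)) for j in range(r)]

def reduced_output(w_bar, x_bar):
    """Reduced-rank filter output y = w_bar^H x_bar."""
    return sum(wj.conjugate() * xj for wj, xj in zip(w_bar, x_bar))

T = [[1, 0], [0, 1], [0, 0], [0, 0]]   # m = 4, r = 2: keep first two sensors
x = [1 + 2j, 3.0, 4.0, 5.0]
x_bar = reduce_dim(T, x)
y = reduced_output([1.0, 1.0], x_bar)
```

The toy $\boldsymbol T_r$ simply selects the first two sensor outputs; any $m\times r$ matrix can be substituted.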
{ Substituting the expression of $y(i)$ into the cost function in (\ref{2}) and solving for the reduced-rank weight vector, we have \cite{Lamare}} \begin{equation}\label{4} \bar{\boldsymbol w}(i+1)=\bar{\boldsymbol R}^{-1}(i)\Big\{\bar{\boldsymbol p}(i)-\frac{\big[\bar{\boldsymbol p}^H(i)\bar{\boldsymbol R}^{-1}(i)\bar{\boldsymbol a}(\theta_0)-\gamma\big]\bar{\boldsymbol a}(\theta_0)}{\bar{\boldsymbol a}^H(\theta_0)\bar{\boldsymbol R}^{-1}(i)\bar{\boldsymbol a}(\theta_0)}\Big\}, \end{equation} where $\bar{\boldsymbol R}(i)=\mathbb E[|y(i)|^2\boldsymbol T_r^H(i)\boldsymbol x(i)\boldsymbol x^H(i)\boldsymbol T_r(i)]\in\mathbb C^{r\times r}$, $\bar{\boldsymbol a}(\theta_0)=\boldsymbol T_r^H\boldsymbol a(\theta_0)\in\mathbb C^{r\times 1}$, and $\bar{\boldsymbol p}(i)=\mathbb E[y^{\ast}(i)\boldsymbol T_r^H(i)\boldsymbol x(i)]\in\mathbb C^{r\times 1}$. { Note that the expression in (\ref{4}) is a function of previous values of $\bar{\boldsymbol w}(i)$ (since $y(i)=\bar{\boldsymbol w}^H(i)\bar{\boldsymbol x}(i)$) and thus must be initialized to start the computation for the solution. We keep the time index in $\bar{\boldsymbol R}(i)$ and $\bar{\boldsymbol p}(i)$ for the same reason.} \subsubsection{ Beamformer Design for the GSC} The GSC structure converts the constrained optimization problem into an unconstrained one and adopts an alternative way to realize the beamformer design. { The full-rank CCM beamformer design with respect to the GSC structure has been reported in \cite{Rude}. Here, we employ an alternative way proposed in \cite{Chern2}, \cite{Chern} to describe a reduced-rank GSC structure.} As can be seen in Fig. 
\ref{fig:model22}(b), the reduced-rank GSC structure comprises a constrained component { ($\boldsymbol a_{\gamma}(\theta_0)=\gamma\boldsymbol a(\theta_0)$) and an unconstrained component.} $\tilde{\boldsymbol x}(i)$ is a new received vector defined as \begin{equation}\label{5} \tilde{\boldsymbol x}(i)=y_{\textrm{gsc}}^{\ast}(i)\boldsymbol x(i), \end{equation} where $y_{\textrm{gsc}}(i)=\boldsymbol w^H(i)\boldsymbol x(i)$. { The definition of $\tilde{\boldsymbol x}(i)$ is valid for $p=2$ in (\ref{2}) and $|y_{\textrm{gsc}}(i)|^2=\boldsymbol w^H(i)\tilde{\boldsymbol x}(i)$. This expression is introduced only to facilitate its use in the GSC structure for the case of the CM cost function. Note that $y_{\textrm{gsc}}(i)$ and $y(i)$ (full-rank or reduced-rank with $\boldsymbol T_r=\boldsymbol I_{m\times m}$) correspond to the same values but are written in a different way to indicate the structures (DFP and GSC).} { For the constrained component}, the output is $d_0(i)=\boldsymbol a_{\gamma}^H(\theta_0)\tilde{\boldsymbol x}(i)$. { With respect to the unconstrained component, the new received vector passes through a signal blocking matrix $\boldsymbol B\in\mathbb C^{(m-1)\times m}$ to obtain a transformed vector $\tilde{\boldsymbol x}_B(i)\in\mathbb C^{(m-1)\times1}$}, which is \begin{equation}\label{6} \tilde{\boldsymbol x}_{B}(i)=\boldsymbol B\tilde{\boldsymbol x}(i), \end{equation} where $\boldsymbol B$ is obtained by the singular value decomposition or the QR decomposition algorithms \cite{Goldstein4}. Thus, $\boldsymbol B\boldsymbol a(\theta_0)=\boldsymbol 0_{(m-1)\times 1}$ means that the term $\boldsymbol B$ effectively blocks any signal coming from the look direction $\theta_0$. The transformation matrix $\boldsymbol T_{\textrm{gsc}}(i)\in\mathbb C^{(m-1)\times r}$ maps the transformed vector $\tilde{\boldsymbol x}_B(i)$ into a low-dimension version, as described by \begin{equation}\label{7} \bar{\boldsymbol x}_B(i)=\boldsymbol T_{\textrm{gsc}}^H(i)\tilde{\boldsymbol x}_B(i). 
\end{equation} The reduced-rank received vector $\bar{\boldsymbol x}_B(i)$ is processed by a reduced-rank filter $\bar{\boldsymbol w}_{\textrm{gsc}}(i)\in\mathbb C^{r\times1}$ to obtain the unconstrained output $y_0(i)=\bar{\boldsymbol w}_{\textrm{gsc}}^H(i)\bar{\boldsymbol x}_B(i)$. The reduced-rank weight vector is \cite{Haykin} \begin{equation}\label{8} \bar{\boldsymbol w}_{\textrm{gsc}}(i+1)=\bar{\boldsymbol R}_{\bar{x}_B}^{-1}(i)\bar{\boldsymbol p}_{B}(i), \end{equation} where $\bar{\boldsymbol R}_{\bar{x}_B}(i)=\mathbb E[\boldsymbol T_{\textrm{gsc}}^H(i)\tilde{\boldsymbol x}_{B}(i)\tilde{\boldsymbol x}_{B}^H(i)\boldsymbol T_{\textrm{gsc}}(i)]\in\mathbb C^{r\times r}$ and $\bar{\boldsymbol p}_{B}(i)=\mathbb E[(d_0^{\ast}(i)-1)\boldsymbol T_{\textrm{gsc}}^H(i)\tilde{\boldsymbol x}_{B}(i)]\in\mathbb C^{r\times 1}$. { Note that this expression is a function of previous values of the weight vector and therefore must be initialized to start the computation for the solution.} The reduced-rank GSC structure can be summarized by a transformation operator $\bar{\boldsymbol S}=[\boldsymbol a_{\gamma}(\theta_0), \boldsymbol B^H\boldsymbol T_{\textrm{gsc}}]^H\in\mathbb C^{(r+1)\times m}$ and a reduced-rank weight vector $\bar{\boldsymbol w}'=[1,-\bar{\boldsymbol w}_{\textrm{gsc}}^H]^H\in\mathbb C^{(r+1)\times1}$. The equivalent full-rank weight vector can be expressed as \begin{equation}\label{9} \begin{split} \boldsymbol w(i+1)&=\bar{\boldsymbol S}^H\bar{\boldsymbol w}'(i+1)\\ &=\boldsymbol a_{\gamma}(\theta_0)-\boldsymbol B^H\boldsymbol T_{\textrm{gsc}}(i+1)\bar{\boldsymbol w}_{\textrm{gsc}}(i+1). \end{split} \end{equation} { The reduced-rank weight expressions in (\ref{4}) for the DFP and in (\ref{9}) for the GSC are general forms for the signal processing tasks. Specifically, for $r=m$ (DFP) and $r=m-1$ (GSC), the expressions are equivalent to the full-rank filtering schemes \cite{Haykin}. 
For $1<r<m$ (DFP) and $1<r<m-1$ (GSC), the signal processing tasks are changed and the reduced-rank filters estimate the desired signals.} The challenge left to us is how to efficiently design and calculate the transformation matrices $\boldsymbol T_r$ and $\boldsymbol T_{\textrm{gsc}}$. The { principal components} (PC) method reported in \cite{Haimovich} uses the eigenvectors of the interference-only covariance matrix corresponding to the eigenvalues of significant magnitude to construct the transformation matrix. The { cross-spectral} (CS) method \cite{Goldstein}, a counterpart of the PC method belonging to the eigen-decomposition family, forms the transformation matrix by using the eigenvectors which contribute the most towards maximizing the SINR and outperforms the PC method. Another family of adaptive reduced-rank filters such as the MSWF \cite{Goldstein3}, \cite{Honig} and the AVF \cite{Pados} generates a set of basis vectors as the transformation matrix that spans the same Krylov subspace \cite{Chen}, \cite{Burykh}. \section{Proposed CCM reduced-rank scheme} In this section, we introduce the proposed reduced-rank scheme based on the JIO approach. Two optimization problems according to the CM criterion subject to different constraints are described for the proposed scheme. Based on this scheme, we derive the expressions of the transformation matrix and the reduced-rank weight vector. For the sake of completeness, the proposed scheme is realized for both the DFP and the GSC structures. \subsection{Proposed CCM Reduced-rank Scheme for the DFP} Here we detail the principles of the proposed CCM reduced-rank scheme using a transformation based on adaptive filters. For the DFP structure depicted in Fig. \ref{fig:model33}(a), the proposed scheme employs a transformation matrix $\boldsymbol T_r(i)\in\mathbb C^{m\times r}$, which is responsible for the dimensionality reduction, to generate $\bar{\boldsymbol x}(i)\in\mathbb C^{r\times 1}$. 
The dimension is reduced and the key features of the original signal are retained in $\bar{\boldsymbol x}(i)$ according to the CCM criterion. The transformation matrix is structured as a bank of $r$ full-rank filters $\boldsymbol t_j(i)=[t_{1,j}(i), t_{2,j}(i), \ldots, t_{m,j}(i)]^T\in\mathbb C^{m\times 1}$, $(j=1, \ldots, r)$ as given by $\boldsymbol T_r(i)=[\boldsymbol t_1(i), \boldsymbol t_2(i), \ldots, \boldsymbol t_r(i)]$. An adaptive reduced-rank filter $\bar{\boldsymbol w}(i)\in\mathbb C^{r\times 1}$ is then used to produce the output. The transformation matrix $\boldsymbol T_r(i)$ and the reduced-rank filter $\bar{\boldsymbol w}(i)$ are jointly optimized in the proposed scheme. The filter output is a function of the received vector, the transformation matrix, and the reduced-rank weight vector, which is \begin{figure}[htb] \begin{minipage}[h]{1.0\linewidth} \centering \centerline{\epsfig{figure=fig1.eps,scale=0.65}} \vspace{-0em}\caption{Proposed reduced-rank scheme for (a) the DFP and (b) the GSC structures.} \label{fig:model33} \end{minipage} \end{figure} \begin{equation}\label{10} y(i)=\bar{\boldsymbol w}^H(i)\boldsymbol T_r^H(i)\boldsymbol x(i)=\bar{\boldsymbol w}^H(i)\bar{\boldsymbol x}(i). 
\end{equation} We describe two optimization problems according to the CM cost function subject to different constraints for the proposed reduced-rank scheme, which are given by \begin{equation}\label{11} \begin{split} \textrm{Problem 1}:~&\min~J_{\textrm{cm}}\big(\boldsymbol T_r(i), \bar{\boldsymbol w}(i)\big)=\mathbb E\Big\{\big[|y(i)|^2-1\big]^2\Big\}\\ &\textrm{subject~to}~\bar{\boldsymbol w}^H(i)\boldsymbol T_r^H(i)\boldsymbol a(\theta_0)=\gamma, \end{split} \end{equation} \begin{equation}\label{12} \begin{split} &\textrm{Problem 2}:~\min~J_{\textrm{cm}}\big(\boldsymbol T_r(i), \bar{\boldsymbol w}(i)\big)=\mathbb E\Big\{\big[|y(i)|^2-1\big]^2\Big\}\\ &\textrm{subject~to}~\bar{\boldsymbol w}^H(i)\boldsymbol T_r^H(i)\boldsymbol a(\theta_0)=\gamma~\textrm{and}~\boldsymbol T_r^H(i)\boldsymbol T_r(i)=\boldsymbol I. \end{split} \end{equation} Compared with (\ref{11}), the problem in (\ref{12}) has an orthogonal constraint on the transformation matrix, whose purpose is to reformulate $\boldsymbol T_r(i)$. { The transformation matrix generated from (\ref{11}) has vectors that may perform a similar operation (e.g., take the same information twice or more), thereby making poor use of the data and losing performance. The subspace computed with (\ref{12}), which spans the same subspace as $\boldsymbol T_r(i)$, generates basis vectors that are orthogonal to each other and do not alter the noise statistics. The reformulated transformation matrix performs an efficient operation to keep all useful information in the generated reduced-rank received vector, which is important to estimate the desired signal and improve the performance.} In the following, we will derive the CCM expressions of $\boldsymbol T_r(i)$ and $\bar{\boldsymbol w}(i)$ for solving (\ref{11}) and (\ref{12}). 
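The orthogonality constraint $\boldsymbol T_r^H(i)\boldsymbol T_r(i)=\boldsymbol I$ in (\ref{12}) is exactly what a Gram-Schmidt pass over the columns $\boldsymbol t_1,\ldots,\boldsymbol t_r$ enforces. A minimal Python sketch of classical Gram-Schmidt is given below (illustrative only; the example vectors are arbitrary and the helper name is ours):

```python
def gram_schmidt(cols):
    """Orthonormalize the columns t_1, ..., t_r of T_r so that
    T_r^H T_r = I, as required by the constraint in Problem 2."""
    ortho = []
    for t in cols:
        v = list(t)
        for q in ortho:
            # Remove the component of v along the already-built basis vector q
            proj = sum(qk.conjugate() * vk for qk, vk in zip(q, v))
            v = [vk - proj * qk for vk, qk in zip(v, q)]
        norm = sum(abs(vk) ** 2 for vk in v) ** 0.5
        ortho.append([vk / norm for vk in v])
    return ortho

cols = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]]   # two m = 3 column vectors
Q = gram_schmidt(cols)
```

After the pass, the columns are mutually orthogonal with unit norm, while spanning the same subspace as the original columns.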
The cost function in (\ref{11}) can be transformed by the method of Lagrange multipliers into an unconstrained one, which is \begin{equation}\label{13} \begin{split} L_{\textrm{cm}}\big(\boldsymbol T_r(i), \bar{\boldsymbol w}(i)\big)&=\mathbb E\Big\{\big[|\bar{\boldsymbol w}^H(i)\boldsymbol T_r^H(i)\boldsymbol x(i)|^2-1\big]^2\Big\}\\ &+2\mathfrak{R}\Big\{\lambda\big[\bar{\boldsymbol w}^H(i)\boldsymbol T_r^H(i)\boldsymbol a(\theta_0)-\gamma\big]\Big\}, \end{split} \end{equation} where $\lambda$ is a scalar Lagrange multiplier and the operator $\mathfrak{R}(\cdot)$ selects the real part of the argument. Assuming $\bar{\boldsymbol w}(i)$ is known, { computing the gradient of (\ref{13}) with respect to $\boldsymbol T_r(i)$ (matrix calculus), equating it to a null matrix and solving for $\lambda$}, we have \begin{equation}\label{14} \begin{split} &\boldsymbol T_r(i+1)=\boldsymbol R^{-1}(i)\Big\{\boldsymbol p(i)\bar{\boldsymbol w}^H(i)-\\ &\frac{\big[\bar{\boldsymbol w}^H(i)\bar{\boldsymbol R}_{\bar{w}}^{-1}(i)\bar{\boldsymbol w}(i)\boldsymbol p^H(i)\boldsymbol R^{-1}(i)\boldsymbol a(\theta_0)-\gamma\big]\boldsymbol a(\theta_0)\bar{\boldsymbol w}^H(i)}{\bar{\boldsymbol w}^H(i)\bar{\boldsymbol R}_{\bar{w}}^{-1}(i)\bar{\boldsymbol w}(i)\boldsymbol a^H(\theta_0)\boldsymbol R^{-1}(i)\boldsymbol a(\theta_0)}\Big\}\bar{\boldsymbol R}_{\bar{w}}^{-1}(i), \end{split} \end{equation} where $\boldsymbol p(i)=\mathbb E[y^{\ast}(i)\boldsymbol x(i)]\in\mathbb C^{m\times 1}$, $\boldsymbol R(i)=\mathbb E[|y(i)|^2\boldsymbol x(i)\boldsymbol x^H(i)]\in\mathbb C^{m\times m}$, and $\bar{\boldsymbol R}_{\bar{w}}(i)=\mathbb E[\bar{\boldsymbol w}(i)\bar{\boldsymbol w}^H(i)]\in\mathbb C^{r\times r}$. Note that the reduced-rank weight vector $\bar{\boldsymbol w}(i)$ depends on the received vectors, which are random in practice, and thus $\bar{\boldsymbol R}_{\bar{w}}(i)$ has rank $r$ and is invertible. 
$\boldsymbol R(i)$ and $\boldsymbol p(i)$ are functions of previous values of $\boldsymbol T_r(i)$ and $\bar{\boldsymbol w}(i)$ due to the presence of $y(i)$. Therefore, it is necessary to initialize $\boldsymbol T_r(i)$ and $\bar{\boldsymbol w}(i)$ to estimate $\boldsymbol R(i)$ and $\boldsymbol p(i)$, and start the computation. On the other hand, assuming $\boldsymbol T_r(i)$ is known, { computing the gradient of (\ref{13}) with respect to $\bar{\boldsymbol w}(i)$, equating it to a null vector, and solving for $\lambda$}, we obtain \begin{equation}\label{15} \bar{\boldsymbol w}(i+1)=\bar{\boldsymbol R}^{-1}(i)\Big\{\bar{\boldsymbol p}(i)-\frac{\big[\bar{\boldsymbol p}^H(i)\bar{\boldsymbol R}^{-1}(i)\bar{\boldsymbol a}(\theta_0)-\gamma\big]\bar{\boldsymbol a}(\theta_0)}{\bar{\boldsymbol a}^H(\theta_0)\bar{\boldsymbol R}^{-1}(i)\bar{\boldsymbol a}(\theta_0)}\Big\}, \end{equation} where $\bar{\boldsymbol R}(i)=\mathbb E[|y(i)|^2\boldsymbol T_r^H(i)\boldsymbol x(i)\boldsymbol x^H(i)\boldsymbol T_r(i)]\in\mathbb C^{r\times r}$, $\bar{\boldsymbol p}(i)=\mathbb E[y^{\ast}(i)\boldsymbol T_r^H(i)\boldsymbol x(i)]\in\mathbb C^{r\times 1}$, and $\bar{\boldsymbol a}(\theta_0)=\boldsymbol T_r^H(i)\boldsymbol a(\theta_0)$. Note that the expressions in (\ref{14}) for the transformation matrix and (\ref{15}) for the reduced-rank weight vector can be applied to solve the optimization problem (\ref{12}). The orthogonal constraint in (\ref{12}) can be realized by the Gram-Schmidt (GS) technique, which will be illustrated in the next section. \subsection{Proposed CCM Reduced-rank Scheme for the GSC} For the GSC structure, as depicted in Fig. 
\ref{fig:model33}(b), the proposed scheme utilizes a transformation matrix $\boldsymbol T_{\textrm{gsc}}(i)\in\mathbb C^{(m-1)\times r}$ to map the new transformed vector $\tilde{\boldsymbol x}_{B}(i)\in\mathbb C^{(m-1)\times 1}$ into a lower dimension, say $\bar{\boldsymbol x}_{B}(i)=\boldsymbol T_{\textrm{gsc}}^H(i)\tilde{\boldsymbol x}_{B}(i)\in\mathbb C^{r\times 1}$. In our design, the transformation matrix $\boldsymbol T_{\textrm{gsc}}(i)$ and the reduced-rank weight vector $\bar{\boldsymbol w}_{\textrm{gsc}}(i)$ for the sidelobe of the GSC are jointly optimized by minimizing the cost function \begin{equation}\label{16} \begin{split} &J_{\textrm{cm-gsc}}\big(\boldsymbol T_{\textrm{gsc}}(i), \bar{\boldsymbol w}_{\textrm{gsc}}(i)\big)=\\ &~~~~~~~~~~\mathbb E\Big\{\big[\big(\boldsymbol a_{\gamma}(\theta_0)-\boldsymbol B^H\boldsymbol T_{\textrm{gsc}}(i)\bar{\boldsymbol w}_{\textrm{gsc}}(i)\big)^H\tilde{\boldsymbol x}(i)-1\big]^2\Big\}, \end{split} \end{equation} where the expression in (\ref{16}) for the GSC is obtained by { substituting (\ref{5}) and (\ref{9}) into (\ref{2}) with $p=2$}. This is an unconstrained cost function that corresponds to (\ref{11}). From Fig. \ref{fig:model33} (b), this structure essentially decomposes the adaptive weight vector into constrained (array response) and unconstrained components (see also Eq. (\ref{9})). The unconstrained component can be adjusted to meet the CM criterion since the constrained component always ensures that the constrained condition is satisfied. Thus, the proposed GSC framework converts the constrained optimization problem into an unconstrained one. 
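The point that the constrained component always satisfies the constraint can be checked numerically: with any blocking matrix obeying $\boldsymbol B\boldsymbol a(\theta_0)=\boldsymbol 0$, the equivalent weight vector of the form in (\ref{9}) meets $\boldsymbol w^H\boldsymbol a(\theta_0)=\gamma$ for any choice of the unconstrained weights. The small Python sketch below illustrates this; for clarity we assume a unit-norm steering vector, $m=2$, and a trivial $1\times 1$ transformation matrix, and all names are our own:

```python
import math

# Unit-norm look-direction steering vector (m = 2) and a blocking
# matrix B whose row is orthogonal to it, so that B a(theta0) = 0.
a0 = [1 / math.sqrt(2), 1 / math.sqrt(2)]
B = [[1.0, -1.0]]     # (m-1) x m
gamma = 1.0

def gsc_weight(w_bar_gsc):
    """w = gamma*a0 - B^H T w_bar for a trivial transformation T = [1].
    The constraint w^H a0 = gamma holds for ANY unconstrained w_bar."""
    correction = [B[0][k] * w_bar_gsc for k in range(2)]   # B^H (T w_bar)
    return [gamma * a0[k] - correction[k] for k in range(2)]

w = gsc_weight(0.37)                                   # arbitrary sidelobe weight
constraint = sum(wk * ak for wk, ak in zip(w, a0))     # w^H a0 (real-valued case)
```

Since the correction term lives entirely in the subspace blocked by $\boldsymbol B$, adapting the sidelobe weights can never violate the look-direction constraint, which is what makes the unconstrained CM adaptation legitimate.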
Assuming, in turn, that $\bar{\boldsymbol w}_{\textrm{gsc}}(i)$ and $\boldsymbol T_{\textrm{gsc}}(i)$ are fixed, { computing the gradient} of (\ref{16}) with respect to $\boldsymbol T_{\textrm{gsc}}(i)$ and $\bar{\boldsymbol w}_{\textrm{gsc}}(i)$, respectively, and solving the resulting equations yields \begin{equation}\label{17} \boldsymbol T_{\textrm{gsc}}(i+1)=\boldsymbol R_{\tilde{x}_{B}}^{-1}(i)\tilde{\boldsymbol p}_{B}(i)\bar{\boldsymbol w}_{\textrm{gsc}}^H(i)\bar{\boldsymbol R}_{\bar{w}_{\textrm{gsc}}}^{-1}(i), \end{equation} \begin{equation}\label{18} \bar{\boldsymbol w}_{\textrm{gsc}}(i+1)=\bar{\boldsymbol R}_{\bar{x}_{B}}^{-1}(i)\bar{\boldsymbol p}_{B}(i), \end{equation} where $\boldsymbol R_{\tilde{\boldsymbol x}_{B}}(i)$ and $\tilde{\boldsymbol p}_{B}(i)$ have been defined in the previous section, and $\bar{\boldsymbol R}_{\bar{w}_{\textrm{gsc}}}(i)=\mathbb E[\bar{\boldsymbol w}_{\textrm{gsc}}(i)\bar{\boldsymbol w}_{\textrm{gsc}}^H(i)]\in\mathbb C^{r\times r}$. $\bar{\boldsymbol R}_{\bar{w}_\textrm{gsc}}(i)$ is invertible since $\bar{\boldsymbol w}_{\textrm{gsc}}(i)$ depends on the random received vector, which makes $\bar{\boldsymbol R}_{\bar{w}_\textrm{gsc}}(i)$ a full-rank matrix. $\bar{\boldsymbol R}_{\bar{x}_B}(i)=\mathbb E[\boldsymbol T_{\textrm{gsc}}^H(i)\tilde{\boldsymbol x}_{B}(i)\tilde{\boldsymbol x}_{B}^H(i)\boldsymbol T_{\textrm{gsc}}(i)]\in\mathbb C^{r\times r}$ and $\bar{\boldsymbol p}_{B}(i)=\mathbb E[(d_0^{\ast}(i)-1)\boldsymbol T_{\textrm{gsc}}^H(i)\tilde{\boldsymbol x}_{B}(i)]\in\mathbb C^{r\times 1}$. Again, the orthogonal constraint on the transformation matrix can be enforced in the optimization problem (\ref{16}), and the GS technique is employed to realize this. Note that the filter expressions in (\ref{14}) and (\ref{15}) for the DFP and (\ref{17}) and (\ref{18}) for the GSC are not closed-form solutions. 
In the DFP structure, the expression of the transformation matrix in (\ref{14}) is a function of $\bar{\boldsymbol w}(i)$ and the reduced-rank weight vector obtained from (\ref{15}) depends on $\boldsymbol T_{r}(i)$. It is necessary to set initial values of $\boldsymbol T_{r}(i)$ and $\bar{\boldsymbol w}(i)$ for the update procedures. Thus, initializing the transformation matrix and the reduced-rank weight vector serves not only to obtain a beamformer output $y(i)$ for estimating $\boldsymbol R(i)$ and $\bar{\boldsymbol R}(i)$, but also to start the computation of the proposed scheme. In the case of the GSC, we initialize $\boldsymbol T_{\textrm{gsc}}(i)$ and $\bar{\boldsymbol w}_{\textrm{gsc}}(i)$ for the same reason. Unlike the MSWF \cite{Goldstein3} and the AVF \cite{Pados} techniques, in which the transformation matrix is computed independently from the reduced-rank filter, the proposed scheme provides an iterative exchange of information between the transformation matrix and the reduced-rank filter, which leads to improved convergence and tracking performance. The transformation matrix reduces the dimension of the received vector whereas the reduced-rank filter attempts to estimate the desired signal. The key strategy lies in the joint iterative optimization of the filters. In the next section, we will derive iterative solutions via simple adaptive algorithms and introduce an automatic rank selection technique for the adaptation of the rank $r$. \section{Adaptive Algorithms of The Proposed CCM Reduced-rank Scheme} We derive SG and RLS type algorithms for the proposed CCM reduced-rank scheme. Some related works can be found in { \cite{Pezeshki}-\cite{Via}}. In this paper, the adaptive algorithms are described for both the DFP and the GSC structures, performing joint iterative updates of the transformation matrix and the reduced-rank weight vector. They are used to solve Problem 1. 
The Gram-Schmidt (GS) technique is employed in these algorithms and imposes an orthogonal constraint on the transformation matrix { to solve Problem 2}. An automatic rank selection technique is introduced to determine the most adequate rank for the proposed methods. \subsection{Stochastic Gradient Algorithms} Here, we derive the SG algorithms with the proposed CCM reduced-rank scheme for both the DFP and the GSC structures. \subsubsection{SG algorithm for the DFP} Assuming, in turn, that $\bar{\boldsymbol w}(i)$ and ${\boldsymbol T}_{r}(i)$ are known, computing the instantaneous gradient of (\ref{13}) with respect to $\boldsymbol T_r(i)$ and $\bar{\boldsymbol w}(i)$, respectively, we obtain \begin{equation}\label{19} \nabla L_{\textrm{cm}_{T_r(i)}}(i)=2e(i)y^{\ast}(i)\boldsymbol x(i)\bar{\boldsymbol w}^H(i)+2\lambda_{T_r}^{\ast}\boldsymbol a(\theta_0)\bar{\boldsymbol w}^H(i), \end{equation} \begin{equation}\label{20} \nabla L_{\textrm{cm}_{\bar{w}(i)}}(i)=2e(i)y^{\ast}(i)\boldsymbol T_r^H(i)\boldsymbol x(i)+2\lambda_{\bar{w}}^{\ast}\boldsymbol T_r^H(i)\boldsymbol a(\theta_0), \end{equation} where $e(i)=|y(i)|^2-1$. Following the gradient rules $\boldsymbol T_r(i+1)=\boldsymbol T_r(i)-\mu_{T_r}\nabla L_{\textrm{cm}_{T_r(i)}}(i)$ and $\bar{\boldsymbol w}(i+1)=\bar{\boldsymbol w}(i)-\mu_{\bar{w}}\nabla L_{\textrm{cm}_{\bar{w}(i)}}(i)$, substituting (\ref{19}) and (\ref{20}) into them, respectively, and solving for the Lagrange multipliers $\lambda_{T_r}$ and $\lambda_{\bar{w}}$ by employing the constraint in (\ref{11}), we obtain the iterative SG algorithm for the DFP, { which is denominated as JIO-CCM-SG. A summary of this algorithm is given in Table \ref{tab:JIO-CCM-SG-DFP}}, where $\mu_{T_r}$ and $\mu_{\bar{w}}$ are the corresponding step size factors for the DFP, which are small positive values. The initialization values are set to satisfy the constraint in (\ref{11}). 
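A minimal NumPy sketch of the recursion in Table \ref{tab:JIO-CCM-SG-DFP} follows (illustrative only: the steering vector is normalized to unit norm so that $[\boldsymbol I-\boldsymbol a(\theta_0)\boldsymbol a^H(\theta_0)]$ removes the component along $\boldsymbol a(\theta_0)$, $\gamma=1$, and the snapshots are white noise rather than the array signal model):

```python
import numpy as np

rng = np.random.default_rng(1)
m, r = 8, 3
mu_T, mu_w = 1e-3, 1e-3

# Unit-norm steering vector (assumption: the table's projection [I - a a^H]
# annihilates the a-direction only when ||a|| = 1).
a = np.exp(1j * np.pi * np.arange(m) * np.sin(0.3))
a /= np.linalg.norm(a)

# Initialization from the table, with gamma = 1 (so a_gamma = a here).
T = np.vstack([np.eye(r), np.zeros((m - r, r))]).astype(complex)
w_bar = T.conj().T @ a
w_bar /= np.linalg.norm(w_bar) ** 2

P = np.eye(m) - np.outer(a, a.conj())           # I - a a^H

for _ in range(50):
    # Unit-power random snapshot standing in for the received data x(i).
    x = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(m)
    # Update T_r(i) with w_bar fixed.
    y = np.vdot(T @ w_bar, x)                   # y = w_bar^H T^H x
    e = abs(y) ** 2 - 1
    T = T - mu_T * e * np.conj(y) * np.outer(P @ x, w_bar.conj())
    # Update w_bar with the new T_r(i+1).
    y = np.vdot(T @ w_bar, x)
    e = abs(y) ** 2 - 1
    a_bar = T.conj().T @ a
    x_bar = T.conj().T @ x
    P_bar = np.eye(r) - np.outer(a_bar, a_bar.conj()) / np.vdot(a_bar, a_bar)
    w_bar = w_bar - mu_w * e * np.conj(y) * (P_bar @ x_bar)

constraint = np.vdot(T @ w_bar, a)              # w_bar^H T^H a, stays ~ 1
```

The point to check is that both projected updates leave $\bar{\boldsymbol w}^H(i)\boldsymbol T_r^H(i)\boldsymbol a(\theta_0)$ unchanged at every step, which is how the SG recursion keeps the constraint in (\ref{11}) satisfied.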
The transformation matrix $\boldsymbol T_r(i)$ and the reduced-rank weight vector $\bar{\boldsymbol w}(i)$ operate together and exchange information at each time instant. \begin{table}[!t] \centering \caption{THE JIO-CCM-SG ALGORITHM FOR DFP} \label{tab:JIO-CCM-SG-DFP} \begin{small} \begin{tabular}{l} \hline \bfseries {Initialization:}\\ ${\boldsymbol T}_r(1)=[{\boldsymbol I}_{r\times r}~\boldsymbol 0_{r\times (m-r)}]^T$;\\ ${\bar{\boldsymbol w}}(1)=\boldsymbol T_r^H(1)\boldsymbol a_{\gamma}(\theta_0)/(\|\boldsymbol T_r^H(1)\boldsymbol a_{\gamma}(\theta_0)\|^2)$.\\ \bfseries {Update for each time instant} $i$\\ $y(i)=\bar{\boldsymbol w}^H(i)\boldsymbol T_r^H(i)\boldsymbol x(i)$;~~$e(i)=|y(i)|^2-1$\\ $\boldsymbol T_r(i+1)=\boldsymbol T_r(i)-\mu_{T_r}e(i)y^{\ast}(i)\big[\boldsymbol I-\boldsymbol a(\theta_0)\boldsymbol a^H(\theta_0)\big]\boldsymbol x(i)\bar{\boldsymbol w}^H(i)$\\ $y(i)=\bar{\boldsymbol w}^H(i)\boldsymbol T_r^H(i+1)\boldsymbol x(i)$;~~$e(i)=|y(i)|^2-1$\\ $\bar{\boldsymbol a}(\theta_0)=\boldsymbol T_r^H(i+1)\boldsymbol a(\theta_0)$,~ $\bar{\boldsymbol x}(i)=\boldsymbol T_r^H(i+1)\boldsymbol x(i)$\\ $\bar{\boldsymbol w}(i+1)=\bar{\boldsymbol w}(i)-\mu_{\bar{w}}e(i)y^{\ast}(i)\big[\boldsymbol I-\frac{\bar{\boldsymbol a}(\theta_0)\bar{\boldsymbol a}^H(\theta_0)}{\bar{\boldsymbol a}^H(\theta_0)\bar{\boldsymbol a}(\theta_0)}\big]\bar{\boldsymbol x}(i)$\\ \hline \end{tabular} \end{small} \end{table} \subsubsection{SG algorithm for the GSC} For the GSC structure, assuming $\bar{\boldsymbol w}_{\textrm{gsc}}(i)$ and $\boldsymbol T_{\textrm{gsc}}(i)$ are given in (\ref{16}), respectively, we get \begin{equation}\label{21} \nabla J_{\textrm{cm-gsc}_{T_{\textrm{gsc}}(i)}}(i)=e_{\textrm{gsc}}^{\ast}(i)\tilde{\boldsymbol x}_{B}(i)\bar{\boldsymbol w}_{\textrm{gsc}}^H(i), \end{equation} \begin{equation}\label{22} \nabla J_{\textrm{cm-gsc}_{\bar{w}_{\textrm{gsc}}(i)}}(i)=e_{\textrm{gsc}}^{\ast}(i)\bar{\boldsymbol x}_{B}(i), \end{equation} where 
$e_{\textrm{gsc}}(i)=1-\boldsymbol w^H(i)\tilde{\boldsymbol x}(i)$ and $\boldsymbol w(i)$ is obtained from (\ref{9}). Substituting (\ref{21}) and (\ref{22}) into the gradient rules, we obtain the iterative SG algorithm for the GSC, which is summarized in Table \ref{tab:JIO-CCM-SG-GSC}, where $\mu_{T_{\textrm{gsc}}}$ and $\mu_{\bar{w}_{\textrm{gsc}}}$ are the corresponding step size factors for the GSC. \begin{table}[!t] \centering \caption{THE JIO-CCM-SG ALGORITHM FOR GSC} \label{tab:JIO-CCM-SG-GSC} \begin{small} \begin{tabular}{l} \hline \bfseries {Initialization:}\\ ${\boldsymbol T}_{\textrm{gsc}}(1)=[{\boldsymbol I}_{r\times r}~\boldsymbol 0_{r\times (m-1-r)}]^T$; ${\bar{\boldsymbol w}_{\textrm{gsc}}}(1)=\boldsymbol I_{r\times 1}$.\\ \bfseries {Update for each time instant} $i$\\ $\boldsymbol w(i)=\boldsymbol a_{\gamma}(\theta_0)-\boldsymbol B^H\boldsymbol T_{\textrm{gsc}}(i)\bar{\boldsymbol w}_{\textrm{gsc}}(i)$,~~ $y_{\textrm{gsc}}(i)=\boldsymbol w^H(i)\boldsymbol x(i)$\\ $\tilde{\boldsymbol x}(i)=y_{\textrm{gsc}}^{\ast}(i)\boldsymbol x(i)$,~~ $\tilde{\boldsymbol x}_{B}(i)=\boldsymbol B \tilde{\boldsymbol x}(i)$,~~ $e_{\textrm{gsc}}(i)=1-\boldsymbol w^H(i)\tilde{\boldsymbol x}(i)$\\ $\boldsymbol T_{\textrm{gsc}}(i+1)=\boldsymbol T_{\textrm{gsc}}(i)-\mu_{T_{\textrm{gsc}}}e_{\textrm{gsc}}^{\ast}(i)\tilde{\boldsymbol x}_{B}(i)\bar{\boldsymbol w}_{\textrm{gsc}}^H(i)$\\ $\boldsymbol w(i)=\boldsymbol a_{\gamma}(\theta_0)-\boldsymbol B^H\boldsymbol T_{\textrm{gsc}}(i+1)\bar{\boldsymbol w}_{\textrm{gsc}}(i)$\\ $y_{\textrm{gsc}}(i)=\boldsymbol w^H(i)\boldsymbol x(i)$,~~ $\tilde{\boldsymbol x}(i)=y_{\textrm{gsc}}^{\ast}(i)\boldsymbol x(i)$,~~ $\tilde{\boldsymbol x}_{B}(i)=\boldsymbol B \tilde{\boldsymbol x}(i)$\\ $e_{\textrm{gsc}}(i)=1-\boldsymbol w^H(i)\tilde{\boldsymbol x}(i)$,~~ $\bar{\boldsymbol x}_{B}(i)=\boldsymbol T_{\textrm{gsc}}^H(i+1)\tilde{\boldsymbol x}_{B}(i)$\\ $\bar{\boldsymbol w}_{\textrm{gsc}}(i+1)=\bar{\boldsymbol w}_{\textrm{gsc}}(i)-\mu_{\bar{w}_{\textrm{gsc}}}e_{\textrm{gsc}}^{\ast}(i)\bar{\boldsymbol x}_{B}(i)$\\ \hline \end{tabular} \end{small} \end{table} \subsection{Recursive Least Squares Algorithms} In this part, we derive the RLS algorithms with the proposed CCM reduced-rank scheme for both the DFP and the GSC structures. \subsubsection{RLS algorithm for the DFP} Considering the DFP case, the least squares (LS) cost function, in Lagrangian form, is given by \begin{equation}\label{23} \begin{split} L_{\textrm{un}}\big(\boldsymbol T_r(i), \bar{\boldsymbol w}(i)\big)&=\sum_{l=1}^{i}\alpha^{i-l}\big[|\bar{\boldsymbol w}^H(i)\boldsymbol T_r^H(i)\boldsymbol x(l)|^2-1\big]^2\\ &+2\mathfrak{R}\Big\{\lambda\big[\bar{\boldsymbol w}^H(i)\boldsymbol T_r^H(i)\boldsymbol a(\theta_0)-\gamma\big]\Big\}, \end{split} \end{equation} where $\alpha$ is a forgetting factor chosen as a positive constant close to, but less than $1$. Fixing $\bar{\boldsymbol w}(i)$ and $\boldsymbol T_r(i)$ in (\ref{23}) in turn, we obtain \begin{equation}\label{24} \begin{split} &\boldsymbol T_r(i+1)=\\ &\hat{\boldsymbol R}^{-1}(i)\Big\{\hat{\boldsymbol p}(i)-\frac{\big[\hat{\boldsymbol p}^H(i)\hat{\boldsymbol R}^{-1}(i)\boldsymbol a(\theta_0)-\gamma\big]\boldsymbol a(\theta_0)}{\boldsymbol a^H(\theta_0)\hat{\boldsymbol R}^{-1}(i)\boldsymbol a(\theta_0)}\Big\}\frac{\bar{\boldsymbol w}^H(i)}{\|\bar{\boldsymbol w}(i)\|^2}, \end{split} \end{equation} \begin{equation}\label{25} \bar{\boldsymbol w}(i+1)=\hat{\bar{\boldsymbol R}}^{-1}(i)\Big\{\hat{\bar{\boldsymbol p}}(i)-\frac{\big[\hat{\bar{\boldsymbol p}}^H(i)\hat{\bar{\boldsymbol R}}^{-1}(i)\bar{\boldsymbol a}(\theta_0)-\gamma\big]\bar{\boldsymbol a}(\theta_0)}{\bar{\boldsymbol a}^H(\theta_0)\hat{\bar{\boldsymbol R}}^{-1}(i)\bar{\boldsymbol a}(\theta_0)}\Big\}, \end{equation} where $\hat{\boldsymbol R}(i)=\sum_{l=1}^{i}\alpha^{i-l}|y(l)|^2\boldsymbol x(l)\boldsymbol x^H(l)$, $\hat{\bar{\boldsymbol R}}(i)=\sum_{l=1}^{i}\alpha^{i-l}|y(l)|^2\bar{\boldsymbol x}(l)\bar{\boldsymbol x}^H(l)$, $\hat{\boldsymbol p}(i)=\sum_{l=1}^{i}\alpha^{i-l}y^{\ast}(l)\boldsymbol x(l)$, and $\hat{\bar{\boldsymbol p}}(i)=\sum_{l=1}^{i}\alpha^{i-l} y^{\ast}(l)\bar{\boldsymbol x}(l)$ with $y(i)$ expressed in (\ref{10}). { The derivation of (\ref{24}) is given in the appendix}. Note that $\hat{\boldsymbol R}(i)$ is not invertible if $i<m$; in this case, its inversion can be handled by employing the diagonal loading technique \cite{Trees}, \cite{Jian}. The same procedure is also used for the remaining matrices. To avoid the matrix inversion and reduce the complexity, we employ the matrix inversion lemma \cite{Haykin} to update $\hat{\boldsymbol R}^{-1}(i)$ and $\hat{\bar{\boldsymbol R}}^{-1}(i)$ iteratively. { The resulting adaptive algorithm, which we denominate as JIO-CCM-RLS, is summarized in Table \ref{tab:JIO-CCM-RLS-DFP}}, where $\hat{\boldsymbol\Phi}(i)=\hat{\boldsymbol R}^{-1}(i)$ and $\hat{\bar{\boldsymbol\Phi}}(i)=\hat{\bar{\boldsymbol R}}^{-1}(i)$ are defined for concise presentation, and $\boldsymbol k(i)\in\mathbb C^{m\times1}$ and $\bar{\boldsymbol k}(i)\in\mathbb C^{r\times1}$ are the full-rank and reduced-rank gain vectors, respectively. The recursive procedures are implemented by initializing $\hat{{\boldsymbol\Phi}}(0)={\delta}\boldsymbol I_{m\times m}$ and $\hat{\bar{\boldsymbol\Phi}}(0)=\bar{\delta}\boldsymbol I_{r\times r}$, where $\delta$ and $\bar{\delta}$ are positive scalars. 
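The $|y(i)|^2$-weighted rank-one inverse update used in the table can be checked numerically. The sketch below (illustrative; $y(i)$ is drawn at random instead of being the beamformer output) verifies that the matrix-inversion-lemma recursion for $\hat{\boldsymbol\Phi}(i)$ indeed tracks $\hat{\boldsymbol R}^{-1}(i)$:

```python
import numpy as np

rng = np.random.default_rng(2)
m, alpha, delta = 5, 0.99, 10.0

Phi = delta * np.eye(m, dtype=complex)   # Phi(0) = delta*I, i.e. R_hat(0) = I/delta
R = np.eye(m, dtype=complex) / delta

for _ in range(30):
    x = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    y = rng.standard_normal() + 1j * rng.standard_normal()  # stand-in for y(i)
    # Direct accumulation: R_hat(i) = alpha R_hat(i-1) + |y|^2 x x^H.
    R = alpha * R + abs(y) ** 2 * np.outer(x, x.conj())
    # Gain vector and inverse update from the table (matrix inversion lemma).
    k = (Phi @ x / alpha) / (1.0 / abs(y) ** 2 + x.conj() @ Phi @ x / alpha)
    Phi = Phi / alpha - np.outer(k, x.conj() @ Phi) / alpha

# Phi @ R ~ identity at every step: the O(m^2) recursion replaces an
# O(m^3) explicit inversion per snapshot.
print(np.max(np.abs(Phi @ R - np.eye(m))))
```

The reduced-rank recursion for $\hat{\bar{\boldsymbol\Phi}}(i)$ is identical with $\bar{\boldsymbol x}(i)$ in place of $\boldsymbol x(i)$.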
\begin{table}[!t] \centering \caption{THE JIO-CCM-RLS ALGORITHM FOR DFP} \label{tab:JIO-CCM-RLS-DFP} \begin{small} \begin{tabular}{l} \hline \bfseries {Initialization:}\\ ${\boldsymbol T}_r(1)=[{\boldsymbol I}_{r\times r}~\boldsymbol 0_{r\times (m-r)}]^T$;\\ ${\bar{\boldsymbol w}}(1)=\boldsymbol T_r^H(1)\boldsymbol a_{\gamma}(\theta_0)/(\|\boldsymbol T_r^H(1)\boldsymbol a_{\gamma}(\theta_0)\|^2)$;\\ $\hat{\boldsymbol\Phi}(0)=\delta\boldsymbol I_{m\times m}$,$\hat{\bar{\boldsymbol\Phi}}(0)=\bar{\delta}\boldsymbol I_{r\times r}$,$\hat{\boldsymbol p}(0)=\boldsymbol 0_{m\times 1}$,$\hat{\bar{\boldsymbol p}}(0)=\boldsymbol 0_{r\times 1}$.\\ \bfseries {Update for each time instant} $i$\\ $y(i)=\bar{\boldsymbol w}^H(i)\boldsymbol T_r^H(i)\boldsymbol x(i)$,~~$\hat{\boldsymbol p}(i)=\alpha\hat{\boldsymbol p}(i-1)+y^{\ast}(i)\boldsymbol x(i)$\\ $\boldsymbol k(i)=\frac{\alpha^{-1}\hat{\boldsymbol\Phi}(i-1)\boldsymbol x(i)}{(1/|y(i)|^2)+\alpha^{-1}\boldsymbol x^H(i)\hat{\boldsymbol \Phi}(i-1)\boldsymbol x(i)}$\\ $\hat{\boldsymbol\Phi}(i)=\alpha^{-1}\hat{\boldsymbol \Phi}(i-1)-\alpha^{-1}\boldsymbol k(i)\boldsymbol x^H(i)\hat{\boldsymbol\Phi}(i-1)$\\ $\boldsymbol T_r(i+1)=\hat{\boldsymbol \Phi}(i)\Big\{\hat{\boldsymbol p}(i)-\frac{\big[\hat{\boldsymbol p}^H(i)\hat{\boldsymbol\Phi}(i)\boldsymbol a(\theta_0)-\gamma\big]\boldsymbol a(\theta_0)}{\boldsymbol a^H(\theta_0)\hat{\boldsymbol\Phi}(i)\boldsymbol a(\theta_0)}\Big\}\frac{\bar{\boldsymbol w}^H(i)}{\|\bar{\boldsymbol w}(i)\|^2}$\\ $y(i)=\bar{\boldsymbol w}^H(i)\boldsymbol T_r^H(i+1)\boldsymbol x(i)$,~$\bar{\boldsymbol a}(\theta_0)=\boldsymbol T_r^H(i+1)\boldsymbol a(\theta_0)$\\ $\bar{\boldsymbol x}(i)=\boldsymbol T_r^H(i+1)\boldsymbol x(i)$,~$\hat{\bar{\boldsymbol p}}(i)=\alpha\hat{\bar{\boldsymbol p}}(i-1)+y^{\ast}(i)\bar{\boldsymbol x}(i)$\\ $\bar{\boldsymbol k}(i)=\frac{\alpha^{-1}\hat{\bar{\boldsymbol\Phi}}(i-1)\bar{\boldsymbol x}(i)}{(1/|y(i)|^2)+\alpha^{-1}\bar{\boldsymbol x}^H(i)\hat{\bar{\boldsymbol 
\Phi}}(i-1)\bar{\boldsymbol x}(i)}$\\ $\hat{\bar{\boldsymbol\Phi}}(i)=\alpha^{-1}\hat{\bar{\boldsymbol \Phi}}(i-1)-\alpha^{-1}\bar{\boldsymbol k}(i)\bar{\boldsymbol x}^H(i)\hat{\bar{\boldsymbol\Phi}}(i-1)$\\ $\bar{\boldsymbol w}(i+1)=\hat{\bar{\boldsymbol \Phi}}(i)\big\{\hat{\bar{\boldsymbol p}}(i)-\frac{\big[\hat{\bar{\boldsymbol p}}^H(i)\hat{\bar{\boldsymbol\Phi}}(i)\bar{\boldsymbol a}(\theta_0)-\gamma\big]\bar{\boldsymbol a}(\theta_0)}{\bar{\boldsymbol a}^H(\theta_0)\hat{\bar{\boldsymbol\Phi}}(i)\bar{\boldsymbol a}(\theta_0)}\big\}$\\ \hline \end{tabular} \end{small} \end{table} \subsubsection{RLS algorithm for the GSC} For the GSC structure, the LS cost function is given by \begin{equation}\label{26} \begin{split} &L_{\textrm{un-gsc}}(\boldsymbol T_{\textrm{gsc}}(i), \bar{\boldsymbol w}_{\textrm{gsc}}(i))=\\ &~~~\sum_{l=1}^{i}\alpha^{i-l}\Big\{\big[\boldsymbol a_{\gamma}(\theta_0)-\boldsymbol B^H\boldsymbol T_{\textrm{gsc}}(i)\bar{\boldsymbol w}_{\textrm{gsc}}(i)\big]^H\tilde{\boldsymbol x}(l)-1\Big\}^2. 
\end{split} \end{equation} Assuming, in turn, that the reduced-rank weight vector $\bar{\boldsymbol w}_{\textrm{gsc}}(i)$ and the transformation matrix $\boldsymbol T_{\textrm{gsc}}(i)$ are known, computing the gradients of (\ref{26}) with respect to $\boldsymbol T_{\textrm{gsc}}(i)$ and $\bar{\boldsymbol w}_{\textrm{gsc}}(i)$, respectively, and equating them to null, we have \begin{equation}\label{27} \boldsymbol T_{\textrm{gsc}}(i+1)=\hat{\boldsymbol R}_{\tilde{x}_{B}}^{-1}(i)\hat{\tilde{\boldsymbol p}}_{B}(i)\frac{\bar{\boldsymbol w}_{\textrm{gsc}}^H(i)}{\|\bar{\boldsymbol w}_{\textrm{gsc}}(i)\|^2}, \end{equation} \begin{equation}\label{28} \bar{\boldsymbol w}_{\textrm{gsc}}(i+1)=\hat{\bar{\boldsymbol R}}_{\bar{x}_{{B}}}^{-1}(i)\hat{\bar{\boldsymbol p}}_{{B}}(i), \end{equation} where $\hat{\boldsymbol R}_{\tilde{ x}_{B}}(i)=\sum_{l=1}^{i}\alpha^{i-l}\boldsymbol B\tilde{\boldsymbol x}(l)\tilde{\boldsymbol x}^H(l)\boldsymbol B^H$, $\hat{\bar{\boldsymbol R}}_{\bar{x}_{B}}(i)=\boldsymbol T_{\textrm{gsc}}^H(i)\hat{\boldsymbol R}_{\tilde{x}_{{B}}}(i)\boldsymbol T_{\textrm{gsc}}(i)$, $\hat{\tilde{\boldsymbol p}}_{B}(i)=\sum_{l=1}^{i}\alpha^{i-l}[d_0^{\ast}(l)-1]\tilde{\boldsymbol x}_{B}(l)$, and $\hat{\bar{\boldsymbol p}}_{{B}}(i)=\boldsymbol T_{\textrm{gsc}}^H(i)\hat{\tilde{\boldsymbol p}}_{{B}}(i)$. 
Setting $\hat{\boldsymbol\Phi}_{\tilde{ x}_{B}}(i)=\hat{\boldsymbol R}_{\tilde{ x}_{B}}^{-1}(i)$ and $\hat{\bar{\boldsymbol\Phi}}_{\bar{x}_{{B}}}(i)=\hat{\bar{\boldsymbol R}}_{\bar{x}_{{B}}}^{-1}(i)$ and employing the matrix inversion lemma yields \begin{equation}\label{29} \boldsymbol T_{\textrm{gsc}}(i+1)=\boldsymbol T_{\textrm{gsc}}(i)-{\boldsymbol k}_{{B}}(i)\boldsymbol e_{\bar{w}_{\textrm{gsc}}}(i), \end{equation} \begin{equation}\label{30} \bar{\boldsymbol w}_{\textrm{gsc}}(i+1)=\bar{\boldsymbol w}_{\textrm{gsc}}(i)-e_{\textrm{gsc}}^{\ast}(i)\bar{\boldsymbol k}_{{B}}(i), \end{equation} where $\boldsymbol k_B(i)\in\mathbb C^{(m-1)\times1}$ and $\bar{\boldsymbol k}_B(i)\in\mathbb C^{r\times1}$ are gain vectors, $\boldsymbol e_{\bar{w}_{\textrm{gsc}}}(i)=[1-\tilde{\boldsymbol x}^H(i)\boldsymbol w(i)]\frac{\bar{\boldsymbol w}_{\textrm{gsc}}^H(i)}{\|\bar{\boldsymbol w}_{\textrm{gsc}}(i)\|^2}$, $e_{\textrm{gsc}}(i)=1-\boldsymbol w^H(i)\tilde{\boldsymbol x}(i)$, and $\boldsymbol w(i)$ is defined by (\ref{9}). A summary of the reduced-rank RLS algorithm with the CCM design for the GSC is given in Table \ref{tab:JIO-CCM-RLS-GSC}. 
\begin{table}[!t] \centering \caption{THE JIO-CCM-RLS ALGORITHM FOR GSC} \label{tab:JIO-CCM-RLS-GSC} \begin{small} \begin{tabular}{l} \hline \bfseries {Initialization:}\\ ${\boldsymbol T}_{\textrm{gsc}}(1)=[{\boldsymbol I}_{r\times r}~\boldsymbol 0_{r\times (m-1-r)}]^T$,~~${\bar{\boldsymbol w}}_{\textrm{gsc}}(1)=\boldsymbol I_{r\times 1}$\\ $\hat{\boldsymbol\Phi}_{{\tilde{x}_{{B}}}}(0)=\delta\boldsymbol I_{(m-1)\times (m-1)}$,~~$\hat{\bar{\boldsymbol\Phi}}_{\bar{x}_{{B}}}(0)=\bar{\delta}\boldsymbol I_{r\times r}$.\\ \bfseries {Update for each time instant} $i$\\ $\boldsymbol w(i)=\boldsymbol a_{\gamma}(\theta_0)-\boldsymbol B^H\boldsymbol T_{\textrm{gsc}}(i)\bar{\boldsymbol w}_{\textrm{gsc}}(i)$,~~ $y_{\textrm{gsc}}(i)=\boldsymbol w^H(i)\boldsymbol x(i)$\\ $\tilde{\boldsymbol x}(i)=y_{\textrm{gsc}}^{\ast}(i)\boldsymbol x(i)$,~~$\tilde{\boldsymbol x}_{{B}}(i)=\boldsymbol B\tilde{\boldsymbol x}(i)$,~~$e_{\textrm{gsc}}(i)=1-\boldsymbol w^H(i)\tilde{\boldsymbol x}(i)$\\ ${\boldsymbol k}_{{B}}(i)=\frac{\alpha^{-1}\hat{\boldsymbol\Phi}_{\tilde{ x}_{{B}}}(i-1)\tilde{\boldsymbol x}_{{B}}(i)}{1+\alpha^{-1}\tilde{\boldsymbol x}_{{B}}^H(i)\hat{\boldsymbol\Phi}_{\tilde{ x}_{{B}}}(i-1)\tilde{\boldsymbol x}_{{B}}(i)}$\\ $\hat{\boldsymbol\Phi}_{\tilde{x}_{{B}}}(i)=\alpha^{-1}\hat{\boldsymbol\Phi}_{\tilde{x}_{{B}}}(i-1)-\alpha^{-1}{\boldsymbol k}_{{B}}(i)\tilde{\boldsymbol x}_{{B}}^H(i)\hat{\boldsymbol\Phi}_{\tilde{ x}_{{B}}}(i-1)$\\ $\boldsymbol e_{\bar{w}_{\textrm{gsc}}}(i)=[1-\tilde{\boldsymbol x}^H(i)\boldsymbol w(i)]\frac{\bar{\boldsymbol w}_{\textrm{gsc}}^H(i)}{\|\bar{\boldsymbol w}_{\textrm{gsc}}(i)\|^2}$\\ $\boldsymbol T_{\textrm{gsc}}(i+1)=\boldsymbol T_{\textrm{gsc}}(i)-{\boldsymbol k}_{{B}}(i)\boldsymbol e_{\bar{w}_{\textrm{gsc}}}(i)$\\ $\bar{\boldsymbol x}_{{B}}(i)=\boldsymbol T_{\textrm{gsc}}^H(i+1)\tilde{\boldsymbol x}_{{B}}(i)$\\ $\bar{\boldsymbol k}_{{B}}(i)=\frac{\alpha^{-1}\hat{\bar{\boldsymbol\Phi}}_{\bar{ x}_{{B}}}(i-1)\bar{\boldsymbol x}_{{B}}(i)}{1+\alpha^{-1}\bar{\boldsymbol x}_{{B}}^H(i)\hat{\bar{\boldsymbol\Phi}}_{\bar{ x}_{{B}}}(i-1)\bar{\boldsymbol x}_{{B}}(i)}$\\ $\hat{\bar{\boldsymbol\Phi}}_{\bar{x}_{{B}}}(i)=\alpha^{-1}\hat{\bar{\boldsymbol\Phi}}_{\bar{x}_{{B}}}(i-1)-\alpha^{-1}\bar{\boldsymbol k}_{{B}}(i)\bar{\boldsymbol x}_{{B}}^H(i)\hat{\bar{\boldsymbol\Phi}}_{\bar{ x}_{{B}}}(i-1)$\\ $\boldsymbol w(i)=\boldsymbol a_{\gamma}(\theta_0)-\boldsymbol B^H\boldsymbol T_{\textrm{gsc}}(i+1)\bar{\boldsymbol w}_{\textrm{gsc}}(i)$\\ $e_{\textrm{gsc}}(i)=1-\boldsymbol w^H(i)\tilde{\boldsymbol x}(i)$\\ $\bar{\boldsymbol w}_{\textrm{gsc}}(i+1)=\bar{\boldsymbol w}_{\textrm{gsc}}(i)-e_{\textrm{gsc}}^{\ast}(i)\bar{\boldsymbol k}_{{B}}(i)$\\ \hline \end{tabular} \end{small} \end{table} \subsection{Gram-Schmidt Technique for Problem 2} As mentioned before, the transformation matrix $\boldsymbol T_r(i+1)$ for the DFP is constituted by a bank of full-rank filters $\boldsymbol t_{j}(i+1)~(j=1,\ldots, r)$, { which are not guaranteed to be orthogonal. According to the optimization problem 2 in (\ref{12}), the transformation matrix $\boldsymbol T_r(i)$ can be reformulated into $r$ orthogonal vectors that span the same subspace as the original vectors. The reformulation ensures that the received vector is projected onto each dimension only once, avoiding overlap (i.e., capturing the same information twice or more). Compared with the original transformation matrix, the reformulated one retains the useful information in the generated reduced-rank received vector more efficiently for parameter estimation}. The orthogonalization procedure is realized by the Gram-Schmidt (GS) technique \cite{Golub}. 
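Before giving the exact update, the orthogonalization step itself is classical GS applied to the columns of the transformation matrix. A minimal sketch (no normalization, matching the projection-subtraction form used in the text):

```python
import numpy as np

def gram_schmidt_columns(T):
    """Orthogonalize the columns of T by subtracting, from each column,
    its projections onto the previously orthogonalized columns."""
    T_ort = T.astype(complex).copy()
    r = T.shape[1]
    for j in range(1, r):
        for l in range(j):
            t_l = T_ort[:, l]
            # proj_{t_l} t_j = (t_l^H t_j / t_l^H t_l) t_l
            T_ort[:, j] -= (np.vdot(t_l, T_ort[:, j]) / np.vdot(t_l, t_l)) * t_l
    return T_ort

rng = np.random.default_rng(3)
T = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))
T_ort = gram_schmidt_columns(T)

# Gram matrix is (numerically) diagonal, and the column span is unchanged.
G = T_ort.conj().T @ T_ort
print(np.max(np.abs(G - np.diag(np.diag(G)))))
```

Since the span is preserved, the reduced-rank subspace is the same; only the coordinates in which $\bar{\boldsymbol x}(i)$ is expressed change.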
Specifically, after the iterative processes for the computation of the transformation matrix, the GS technique is performed to modify the columns of the transformation matrix as follows: \begin{equation}\label{31} \boldsymbol t_{j,\textrm{ort}}(i+1)=\boldsymbol t_{j}(i+1)-\sum_{l=1}^{j-1}\textrm{proj}_{{\boldsymbol t}_{l,\textrm{ort}}(i+1)}\boldsymbol t_j(i+1), \end{equation} where $\boldsymbol t_{j,\textrm{ort}}(i+1)$ is the orthogonalized vector after the GS process. The projection operator is $\textrm{proj}_{{\boldsymbol t}_{l,\textrm{ort}}(i+1)}\boldsymbol t_j(i+1)=[\boldsymbol t_{l,\textrm{ort}}^H(i+1)\boldsymbol t_{l,\textrm{ort}}(i+1)]^{-1}[\boldsymbol t_{l,\textrm{ort}}^H(i+1)\boldsymbol t_j(i+1)]\boldsymbol t_{l,\textrm{ort}}(i+1)$. The reformulated transformation matrix $\boldsymbol T_{r,\textrm{ort}}(i+1)$ is constructed after we obtain the set of orthogonal vectors $\boldsymbol t_{j,\textrm{ort}}(i+1)$. By employing $\boldsymbol T_{r,\textrm{ort}}(i+1)$ to compute the reduced-rank weight vectors, the adaptive algorithms can achieve improved performance. Following the same procedures, we can also apply the GS technique to the adaptive algorithms for the GSC structure. Simulations will be given to show this result. We denominate the GS versions of the SG and RLS algorithms as JIO-CCM-GS and JIO-CCM-RGS, respectively. \subsection{Automatic Rank Selection} The selection of the rank $r$ impacts the performance of the proposed reduced-rank algorithms. Here, we introduce an adaptive method for selecting the rank. Related works on the rank selection for the MSWF and the AVF techniques have been reported in \cite{Honig} and \cite{Qian}, respectively. Unlike these methods, we describe a rank selection method based on the CM criterion computed by the filters $\boldsymbol T_r^{(r)}(i)$ and $\bar{\boldsymbol w}^{(r)}(i)$, where the superscript $(\cdot)^{(r)}$ denotes the rank used for the adaptation at each time instant. 
We consider the rank adaptation technique for both the DFP and the GSC structures. Specifically, in the DFP structure, the rank is automatically selected for the proposed algorithms based on the exponentially-weighted a \textit{posteriori} least-squares cost function according to the CM criterion, which is \begin{equation}\label{32} \begin{split} &J_{\textrm{pcm}}\big(\boldsymbol T_r^{(r)}(i-1), \bar{\boldsymbol w}^{(r)}(i-1)\big)=\\ &~~~~~~~~~~\sum_{l=1}^{i}\varrho^{i-l}\big[|\bar{\boldsymbol w}^{(r)H}(l-1)\boldsymbol T_r^{(r)}(l-1)\boldsymbol x(l)|^2-1\big]^2, \end{split} \end{equation} where $\varrho$ is the exponential weight factor that is required as the optimal rank $r$ can change as a function of the time instant $i$. From the expressions in Table \ref{tab:JIO-CCM-SG-DFP} and Table \ref{tab:JIO-CCM-RLS-DFP}, the key quantities to be updated for the rank adaptation are the transformation matrix $\boldsymbol T_r(i)$, the reduced-rank weight vector $\bar{\boldsymbol w}(i)$, the associated reduced-rank steering vector $\bar{\boldsymbol a}(\theta_0)$ and the matrix $\hat{\bar{\boldsymbol\Phi}}(i)$ (for RLS only). To this end, we express $\boldsymbol T_r^{(r)}(i)$ and $\bar{\boldsymbol w}^{(r)}(i)$ for the rank adaptation as follows: \begin{equation}\label{33} \begin{split} &\boldsymbol T_r^{(r)}(i)=\begin{bmatrix} t_{1,1} & t_{1,2} & \ldots & t_{1,r_{\textrm{min}}} & \ldots & t_{1,r_{\textrm{max}}}\\ t_{2,1} & t_{2,2} & \ldots & t_{2,r_{\textrm{min}}} & \ldots & t_{2,r_{\textrm{max}}}\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ t_{m,1} & t_{m,2} & \ldots & t_{m,r_{\textrm{min}}} & \ldots & t_{m,r_{\textrm{max}}}\\ \end{bmatrix},\\ &\bar{\boldsymbol w}^{(r)}(i)=\begin{bmatrix} \bar{w}_{1} & \bar{w}_{2} & \ldots & \bar{w}_{r_{\textrm{min}}} & \ldots & \bar{w}_{r_{\textrm{max}}}\\ \end{bmatrix}^{T}, \end{split} \end{equation} where $r_{\textrm{min}}$ and $r_{\textrm{max}}$ are the minimum and maximum ranks allowed, respectively. 
For each time instant $i$, $\boldsymbol T_r^{(r)}(i)$ and $\bar{\boldsymbol w}^{(r)}(i)$ are updated along with the associated quantities $\bar{\boldsymbol a}(\theta_0)$ and $\hat{\bar{\boldsymbol\Phi}}(i)$ for a selected $r$ according to the minimization of the cost function in (\ref{32}). The developed automatic rank selection method is given by \begin{equation}\label{34} r_{\textrm{opt}}=\arg\min_{r_{\textrm{min}}\leq j\leq r_{\textrm{max}}}J_{\textrm{pcm}}\big(\boldsymbol T_r^{(j)}(i-1), \bar{\boldsymbol w}^{(j)}(i-1)\big), \end{equation} where $j$ is an integer ranging between $r_{\textrm{min}}$ and $r_{\textrm{max}}$. Note that a smaller rank may provide faster adaptation during the initial stages of the estimation procedure, while a slightly larger rank tends to yield a better steady-state performance. Our studies reveal that the range over which the rank $r$ has a positive impact on the performance of the proposed algorithms is very limited, spanning from $r_{\textrm{min}}=3$ to $r_{\textrm{max}}=7$. These values are rather insensitive to the number of users in the system and to the number of sensor elements, and work efficiently for the studied scenarios. The additional complexity of this automatic rank selection technique comes from updating the involved quantities with the maximum allowed rank $r_{\textrm{max}}$ and from computing the cost function in (\ref{32}). For large $m$, the rank $r$ is significantly smaller than $m$, so the additional computations do not increase the computational cost significantly. The proposed algorithms with the rank adaptation technique can increase the convergence rate and improve the output performance, and $r$ can be fixed once the algorithms reach the steady state. Simulation results will show how the developed rank adaptation technique works. Note that the same idea can be employed in the algorithms for the GSC structure. We omit this part for brevity and readability. 
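The selection rule in (\ref{34}) amounts to evaluating the a \textit{posteriori} cost (\ref{32}) for each candidate rank and keeping the minimizer. A sketch follows (the function name and data layout are ours; a real implementation would reuse the recursively updated quantities rather than recompute the cost from scratch over all snapshots):

```python
import numpy as np

def select_rank(T_max, w_max, X, r_min=3, r_max=7, rho=0.998):
    """Pick r in [r_min, r_max] minimizing the exponentially weighted
    a posteriori CM cost. T_max: m x r_max transformation matrix, w_max:
    length-r_max weight vector, X: snapshots as columns, most recent last."""
    n = X.shape[1]
    weights = rho ** np.arange(n - 1, -1, -1)        # rho^{i-l}
    costs = []
    for j in range(r_min, r_max + 1):
        # Output using the leading j columns/entries: y(l) = w^(j)H T^(j)H x(l).
        y = w_max[:j].conj() @ (T_max[:, :j].conj().T @ X)
        costs.append(np.sum(weights * (np.abs(y) ** 2 - 1) ** 2))
    return r_min + int(np.argmin(costs))

rng = np.random.default_rng(4)
m, r_max = 16, 7
T_max = rng.standard_normal((m, r_max)) + 1j * rng.standard_normal((m, r_max))
w_max = rng.standard_normal(r_max) + 1j * rng.standard_normal(r_max)
X = rng.standard_normal((m, 40)) + 1j * rng.standard_normal((m, 40))
r_opt = select_rank(T_max, w_max, X)
print(r_opt)
```

As in the text, only the quantities for the maximum allowed rank $r_{\textrm{max}}$ need to be maintained; the candidate filters are nested inside them.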
\section{Analysis of the proposed algorithms} In this section, we provide a complexity analysis of the proposed reduced-rank algorithms and compare them with existing algorithms. An analysis of the optimization problem for the proposed reduced-rank scheme is also carried out. \subsection{Complexity Analysis} We evaluate the computational complexity of the proposed reduced-rank algorithms and compare them with the existing full-rank and reduced-rank algorithms based on the MSWF and the AVF techniques for the DFP and the GSC structures. With respect to each algorithm, we consider the CMV and the CCM design criteria. The computational requirements are described in terms of the number of complex arithmetic operations, namely, additions and multiplications. The complexity of the proposed and existing algorithms for the DFP is depicted in Table \ref{tab: Complexity_DFP} and for the GSC in Table \ref{tab: Complexity_GSC}. Since we did not consider the AVF technique for the GSC structure, we put its complexity for the DFP in both tables for comparison. For the DFP structure, we can say that the complexity of the proposed reduced-rank SG type and extended GS version algorithms increases linearly with $rm$. The parameter $m$ is more influential since $r$ is selected around a small range that is much less than $m$ for large arrays. The complexity of the proposed reduced-rank RLS type and GS version algorithms is higher than that of the SG type and quadratic with $m$ and $r$. For the GSC structure, the complexity of the SG type algorithms has extra $m^2$ terms as compared to the DFP structure in terms of additions and multiplications due to the blocking matrix in the sidelobe canceller. There is no significant difference in complexity of the RLS type algorithms due to the presence of the blocking matrix since (\ref{29}) and (\ref{30}) are recursive expressions and, as compared to non-recursive versions, reduce the complexity. 
To illustrate the main trends in the complexity of the proposed algorithms, we show in Fig. \ref{fig:complexity_add_mul_dsp_gram_final3} and Fig. \ref{fig:complexity_add_mul_gsc_gram_final3} the complexity of both the DFP and the GSC structures in terms of additions and multiplications versus the filter length $m$. Since the complexity of the algorithms under the CMV criterion is slightly lower than that under the CCM criterion, we only plot the curves for the CCM criterion for simplicity. { Note that the value of $r$ differs across algorithms; each is set so that the corresponding algorithm reaches its best performance in the experiments. The specific values are given in the figures}. It is clear that the proposed SG type and extended GS version algorithms have a complexity slightly higher than the full-rank SG algorithm but much lower than the existing algorithms based on the MSWF and the AVF techniques for both the DFP and the GSC structures. The curves of the proposed RLS type and GS version algorithms are situated between the full-rank RLS and the MSWF RLS algorithms in both figures. 
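To make the scaling concrete, the multiplication counts from Table \ref{tab: Complexity_DFP} can be evaluated directly (a small script reproducing a few of the table's formulas; the operating point $m=64$, $r=5$ is our choice for illustration):

```python
# Multiplications per snapshot for the DFP structure, transcribed from the
# complexity table; m is the filter length, r the rank of the reduced-rank
# filters.
mults = {
    "FR-CCM-SG":   lambda m, r: 4 * m + 3,
    "FR-CCM-RLS":  lambda m, r: 6 * m**2 + 6 * m + 3,
    "MSWF-CCM-SG": lambda m, r: r * m**2 + 2 * r * m + 4 * r + 4,
    "JIO-CCM-SG":  lambda m, r: 4 * r * m + m + 7 * r + 6,
    "JIO-CCM-RLS": lambda m, r: 6 * m**2 + (2 * r + 6) * m + 5 * r**2 + 9 * r + 3,
}

m, r = 64, 5
for name, f in mults.items():
    print(f"{name:12s} {f(m, r):8d}")
```

For this operating point the JIO-CCM-SG count grows linearly in $m$ (dominated by the $4rm$ term), while the RLS variants keep the quadratic $6m^2$ term of the full-rank RLS.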
\begin{figure}[!htb] \begin{center} \def\epsfsize#1#2{1.0\columnwidth} \epsfbox{fig2.eps} \caption{Complexity in terms of arithmetic operations versus the length of the filter $m$ for the DFP structure.} \label{fig:complexity_add_mul_dsp_gram_final3} \end{center} \end{figure} \begin{figure}[!htb] \begin{center} \def\epsfsize#1#2{1.0\columnwidth} \epsfbox{fig3.eps} \caption{Complexity in terms of arithmetic operations versus the length of the filter $m$ for the GSC structure.} \label{fig:complexity_add_mul_gsc_gram_final3} \end{center} \end{figure} \begin{table} \centering \caption{\normalsize Computational complexity of algorithms for DFP} \footnotesize \label{tab: Complexity_DFP} \begin{tabular}{l c c} \hline \\ Algorithm & Additions & Multiplications\\ \hline \\ FR-CMV-SG & $3m-1$ & $4m+1$ \\ FR-CCM-SG & $3m$ & $4m+3$ \\ FR-CMV-RLS & $4m^2-m-1$ & $5m^2+5m-1$\\ FR-CCM-RLS & $5m^2+2m-1$ & $6m^2+6m+3$\\ MSWF-CMV-SG & $rm^2+(r+1)m$ & $rm^{2}+2rm$ \\ & $+2r-2$ & $+5r+2$\\ MSWF-CCM-SG & $rm^2+(r+1)m$ & $rm^2+2rm$ \\ & $+4r-2$ & $+4r+4$\\ MSWF-CMV-RLS & $rm^2+(r+1)m$ & $(r+1)m^2+2rm$\\ & $+4r^2-3r-1$ & $+5r^2+4r$\\ MSWF-CCM-RLS & $rm^2+(r+1)m$ & $(r+1)m^2+2rm$\\ & $+5r^2-r$ & $+6r^2+7r+3$\\ AVF & $(4r+5)m^2+(r-1)m$ & $(5r+8)m^2$\\ & $-2r-1$ & $+(3r+2)m$\\ JIO-CMV-SG & $4rm+m+2r-3$ & $4rm+m+7r+3$\\ JIO-CMV-GS & $7rm-m-1$ & $7rm-2m+8r+2$\\ JIO-CCM-SG & $4rm+m+2r-2$ & $4rm+m+7r+6$\\ JIO-CCM-GS & $7rm-m$ & $7rm-2m+8r+5$\\ JIO-CMV-RLS & $4m^2+(2r-1)m$ & $5m^2+(3r+3)m$\\ & $+4r^2-4r-1$ & $+6r^2+4r$\\ JIO-CMV-RGS & $4m^2+(5r-3)m$ & $5m^2+6rm$\\ & $+4r^2-6r+1$ & $+6r^2+5r-1$\\ JIO-CCM-RLS & $5m^2+rm$ & $6m^2+(2r+6)m$\\ & $+5r^2+3r-1$ & $+5r^2+9r+3$\\ JIO-CCM-RGS & $5m^2+(4r-2)m$ & $6m^2+(5r+3)m$\\ & $+5r^2+r+1$ & $+5r^2+10r+2$\\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{\normalsize Computational complexity of algorithms for GSC} \footnotesize \label{tab: Complexity_GSC} \begin{tabular}{l c c} \hline \\ Algorithm & Additions & Multiplications\\ \hline \\ 
FR-CMV-SG & $m^2+m-2$ & $m^2+2m-1$ \\ FR-CCM-SG & $m^2+m-1$ & $m^2+2m+1$ \\ FR-CMV-RLS & $4m^2-6m+4$ & $5m^2-4m$\\ FR-CCM-RLS & $4m^2-6m+2$ & $5m^2-3m$\\ MSWF-CMV-SG & $(r+1)m^2-2rm$ & $(r+2)m^{2}-(r+2)m$ \\ & $+2r-1$ & $+2r+2$\\ MSWF-CCM-SG & $(r+1)m^2-2rm+2r$ & $(r+2)m^2-(r+2)m$ \\ & & $+2r+4$\\ MSWF-CMV-RLS & $(r+1)m^2-2rm$ & $(r+2)m^2-(r+2)m$\\ & $+3r^2+r-1$ & $+4r^2+4r$\\ MSWF-CCM-RLS & $(r+1)m^2-2rm$ & $(r+2)m^2-(r+1)m$\\ & $+3r^2+r-1$ & $+4r^2+4r+1$\\ AVF & $(4r+5)m^2+(r-1)m$ & $(5r+8)m^2$\\ & $-2r-1$ & $+(3r+2)m$\\ JIO-CMV-SG & $m^2+2rm-m-r$ & $m^2+2rm+r+2$\\ JIO-CMV-GS & $m^2+5rm-3m$ & $m^2+5rm-3m$\\ & $-6r+4$ & $-r+4$\\ JIO-CCM-SG & $m^2+2rm-m-r+1$ & $m^2+2rm+r+4$\\ JIO-CCM-GS & $m^2+5rm-3m$ & $m^2+5rm-3m$\\ & $-6r+5$ & $-r+6$\\ JIO-CMV-RLS & $4m^2+(2r-8)m$ & $5m^2+(2r-6)m$\\ & $+5r^2-2r+4$ & $+7r^2+3r+2$\\ JIO-CMV-RGS & $4m^2+(5r-10)m$ & $5m^2+(5r-9)m$\\ & $+5r^2-7r+8$ & $7r^2+r+4$\\ JIO-CCM-RLS & $4m^2+(2r-7)m$ & $5m^2+(2r-4)m$\\ & $+5r^2-4r+3$ & $+7r^2+2r+1$\\ JIO-CCM-RGS & $4m^2+(5r-9)m$ & $5m^2+(5r-7)m$\\ & $5r^2-9r+7$ & $7r^2+3$\\ \hline \end{tabular} \end{table} \subsection{Analysis of the Optimization Problem} Here, we present the analysis of the proposed reduced-rank scheme according to the CCM criterion, which depends on the transformation matrix and the reduced-rank weight vector. { Our approach starts from the analysis of the full-rank constant modulus criterion and then utilizes the transformation matrix and the reduced-rank weight vector with the received vector to express the output. The constraint is enforced during the analysis. We will consider the analysis for both the DFP and the GSC structures}. 
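As a quick numerical sanity check (our own, not part of the original analysis) of the constant modulus expansion used in (\ref{35}) below, note that $|y|^4-2|y|^2+1=(|y|^2-1)^2$ holds pointwise, so the CM cost is a nonnegative measure of the deviation of $|y|$ from unit modulus:

```python
# Pointwise check of |y|^4 - 2|y|^2 + 1 == (|y|^2 - 1)^2 for sample outputs y.
samples = [0.3 + 0.4j, 1.0 + 0.0j, -0.7 + 1.2j, 2.0 - 0.5j]
lhs = [abs(y) ** 4 - 2 * abs(y) ** 2 + 1 for y in samples]
rhs = [(abs(y) ** 2 - 1) ** 2 for y in samples]
```

In particular, the cost vanishes exactly when the output has unit modulus, which is why the criterion penalizes deviations from a constant modulus constellation.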
{ The full-rank constant modulus cost function in (\ref{2}) with $p=2$ and $\nu=1$ can be written as \begin{equation}\label{35} \begin{split} J_{\textrm{cm}}\big(\boldsymbol w(i)\big)&=\mathbb E\big[|y(i)|^4-2|y(i)|^2+1\big]\\ &=\mathbb E\big[|\boldsymbol w^H(i)\boldsymbol x(i)\boldsymbol x^H(i)\boldsymbol w(i)|^2\big]\\ &~~~-2\mathbb E\big[|\boldsymbol w^H(i)\boldsymbol x(i)|^2\big]+1, \end{split} \end{equation} where $\boldsymbol x(i)=\sum_{k=0}^{q-1}C_k d_k(i)\boldsymbol a(\theta_k)+\boldsymbol n(i)$ from (\ref{1}), with $C_k$ being the signal amplitude and $d_k(i)$ the transmitted bit of the $k$th user. Note that we have replaced $s_k$ in (\ref{1}) by $C_k d_k$. For the sake of analysis, we will follow the assumption in \cite{Xu} and consider a noise-free case. For a small noise variance $\sigma_n^2$, the noise can be treated as a small perturbation and the analysis remains applicable. For a large $\sigma_n^2$, we remark that the term $\gamma$ can be adjusted for the analysis. Under this assumption, we write the received vector as $\boldsymbol x(i)=\boldsymbol A(\boldsymbol\theta)\boldsymbol C\boldsymbol d(i)$, where $\boldsymbol A(\boldsymbol\theta)$, as before, denotes the signature matrix, $\boldsymbol C=\textrm{diag}[C_0, \ldots, C_{q-1}]\in\mathbb C^{q\times q}$, and $\boldsymbol d(i)=[d_0(i), \ldots, d_{q-1}(i)]^T\in\mathbb C^{q\times1}$. For simplicity, we drop the time instant in the quantities. Letting $\varsigma_k=C_k\boldsymbol w^H\boldsymbol a(\theta_k)$ and $\boldsymbol\varsigma=[\varsigma_0, \ldots, \varsigma_{q-1}]^T$, we have \begin{equation}\label{36} J_{\textrm{cm}}=\mathbb E[\boldsymbol\varsigma^H\boldsymbol d\boldsymbol d^H\boldsymbol\varsigma\boldsymbol\varsigma^H\boldsymbol d\boldsymbol d^H\boldsymbol\varsigma]-2\mathbb E[\boldsymbol\varsigma^H\boldsymbol d\boldsymbol d^H\boldsymbol\varsigma]+1.
\end{equation} Since the $d_k$ are independent random variables, the evaluation of the first two terms in the brackets in (\ref{36}) reads \begin{equation}\label{37} \begin{split} &\boldsymbol\varsigma^H\boldsymbol d\boldsymbol d^H\boldsymbol\varsigma\boldsymbol\varsigma^H\boldsymbol d\boldsymbol d^H\boldsymbol\varsigma=\sum_{k=0}^{q-1}\sum_{l=0}^{q-1}|d_k|^2|d_l|^2\varsigma_k^{\ast}\varsigma_k\varsigma_l^{\ast}\varsigma_l,\\ &\boldsymbol\varsigma^H\boldsymbol d\boldsymbol d^H\boldsymbol\varsigma=\sum_{k=0}^{q-1}|d_k|^2\varsigma_k^{\ast}\varsigma_k. \end{split} \end{equation} For the reduced-rank scheme with the DFP structure, we have $\boldsymbol w=\boldsymbol T_r\bar{\boldsymbol w}$. Thus, \begin{equation}\label{38} \varsigma_k=C_k(\boldsymbol T_r\bar{\boldsymbol w})^H\boldsymbol a(\theta_k)=C_k\sum_{j=1}^{r}\boldsymbol t_{\bar{w}_j}^H\boldsymbol a(\theta_k), \end{equation} where $\boldsymbol t_{\bar{w}_j}=\bar{w}_j\boldsymbol t_j\in\mathbb C^{m\times1}$ and $\boldsymbol t_j$ ($j=1, \ldots, r$) is the $j$th column vector of the transformation matrix $\boldsymbol T_r$. Given $t_j(\theta_k)=C_k\boldsymbol t_{\bar{w}_j}^H\boldsymbol a(\theta_k)$ and $\varsigma_k=\sum_{j=1}^{r}t_j(\theta_k)$, we get \begin{equation}\label{39} \varsigma_k^{\ast}\varsigma_k=\sum_{j=1}^{r}\sum_{n=1}^{r}t_j^{\ast}(\theta_k)t_n(\theta_k). \end{equation} From (\ref{38}) and the constraint condition in (\ref{11}), it follows that $\varsigma_0=C_0\gamma$.
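The identity (\ref{38}) is the key device of the analysis: it merges the transformation matrix and the reduced-rank weight vector into the single quantity $\varsigma_k$. The sketch below (our own illustration with hand-picked complex data; all variable names are ours) confirms numerically that computing $\varsigma_k$ from the full weight vector $\boldsymbol w=\boldsymbol T_r\bar{\boldsymbol w}$ agrees with the column-wise decomposition in (\ref{38}):

```python
# Small illustrative sizes (not the simulation values m = 32, r = 5).
m, r, q = 4, 2, 3

# Hand-picked complex stand-ins for T_r, w_bar, the a(theta_k) and C_k.
T = [[0.5 + 0.1j, -0.2 + 0.3j],
     [1.0 - 0.4j, 0.6 + 0.0j],
     [-0.3 + 0.2j, 0.1 - 0.5j],
     [0.2 + 0.7j, -0.4 - 0.1j]]              # m x r transformation matrix
w_bar = [0.8 - 0.2j, -0.5 + 0.6j]             # reduced-rank weight vector
A = [[1.0 + 0.0j, 0.3 - 0.9j, -0.7 + 0.1j],
     [0.2 + 0.5j, -0.6 + 0.4j, 0.9 - 0.3j],
     [-0.8 + 0.1j, 0.7 + 0.7j, 0.4 + 0.2j],
     [0.5 - 0.6j, 0.1 + 0.2j, -0.2 - 0.8j]]   # columns are a(theta_k)
C = [1.0, 0.8, 1.2]                           # amplitudes C_k

def col(M, k):
    return [row[k] for row in M]

def hdot(u, v):
    """Inner product u^H v."""
    return sum(x.conjugate() * y for x, y in zip(u, v))

# Full weight vector w = T_r w_bar.
w = [sum(T[i][j] * w_bar[j] for j in range(r)) for i in range(m)]

# varsigma_k computed directly as C_k w^H a(theta_k) ...
vs_direct = [C[k] * hdot(w, col(A, k)) for k in range(q)]

# ... and via the column decomposition (38): C_k sum_j (w_bar_j t_j)^H a(theta_k).
vs_cols = [
    C[k] * sum(hdot([w_bar[j] * T[i][j] for i in range(m)], col(A, k))
               for j in range(r))
    for k in range(q)
]
```

Both routes give the same $\varsigma_k$ up to floating-point rounding, which is exactly what allows the cost function to be analyzed in terms of $\varsigma_k$ alone instead of treating $\boldsymbol T_r$ and $\bar{\boldsymbol w}$ separately.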
Substituting this expression and (\ref{38}) into (\ref{37}), we have \begin{equation}\label{40} \begin{split} \boldsymbol\varsigma^H\boldsymbol d\boldsymbol d^H\boldsymbol\varsigma&=|d_0|^2\varsigma_0^{\ast}\varsigma_0+\sum_{k=1}^{q-1}|d_k|^2 \varsigma_k^{\ast}\varsigma_k\\ &=|d_0|^2C_0^2\gamma^2+\tilde{\boldsymbol\varsigma}^H\tilde{\boldsymbol d}\tilde{\boldsymbol d}^H\tilde{\boldsymbol\varsigma}, \end{split} \end{equation} where $\tilde{\boldsymbol\varsigma}=[\varsigma_1, \ldots, \varsigma_{q-1}]^T\in\mathbb C^{(q-1)\times1}$ and $\tilde{\boldsymbol d}=[d_1, \ldots, d_{q-1}]^T\in\mathbb C^{(q-1)\times1}$. Substituting (\ref{40}) into (\ref{36}), we obtain the reduced-rank form of the CCM cost function, i.e., \begin{equation}\label{41} \begin{split} J_{\textrm{ccm}}=&\mathbb E\big[\big(|d_0|^2C_0^2\gamma^2+\tilde{\boldsymbol\varsigma}^H\tilde{\boldsymbol d}\tilde{\boldsymbol d}^H\tilde{\boldsymbol\varsigma}\big)^2\big]\\ &-2\mathbb E\big[|d_0|^2C_0^2\gamma^2+\tilde{\boldsymbol\varsigma}^H\tilde{\boldsymbol d}\tilde{\boldsymbol d}^H\tilde{\boldsymbol\varsigma}\big]+1, \end{split} \end{equation} where $\tilde{\boldsymbol\varsigma}$ is a function of the transformation matrix and the reduced-rank weight vector, as shown in (\ref{38}). This expression is important for the reduced-rank CCM analysis. The fact that $\boldsymbol T_r$ and $\bar{\boldsymbol w}$ depend on each other and exchange information implies that both of them must be taken into account in the analysis. The expression in (\ref{38}) combines these two quantities and thus circumvents the complication of analyzing them separately. Note that (\ref{41}) is a constrained expression since the constraint condition has been incorporated in the first term inside each bracket.
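The splitting in (\ref{40}) holds in expectation for independent symbols. The sketch below (our own construction, with arbitrary values for $\tilde{\boldsymbol\varsigma}$) averages $\boldsymbol\varsigma^H\boldsymbol d\boldsymbol d^H\boldsymbol\varsigma$ exhaustively over all equiprobable $\pm1$ symbol vectors: the cross terms cancel exactly, leaving the contribution $C_0^2\gamma^2$ of the constrained desired user plus $\sum_{k\geq1}|\varsigma_k|^2$ from the interferers.

```python
import itertools

C0, gamma = 1.0, 1.0
# varsigma with the constrained entry varsigma_0 = C0 * gamma;
# the remaining entries are arbitrary complex values.
varsigma = [C0 * gamma, 0.4 - 0.2j, -0.3 + 0.5j, 0.1 + 0.1j]
q = len(varsigma)

# Exhaustive average of varsigma^H d d^H varsigma over all 2^q equiprobable
# +-1 symbol vectors d (an exact expectation for BPSK-like symbols).
total = 0.0
for d in itertools.product((-1.0, 1.0), repeat=q):
    s = sum(dk * vk for dk, vk in zip(d, varsigma))  # d^H varsigma (d real)
    total += abs(s) ** 2
expectation = total / 2 ** q

# Right-hand side of (40), in expectation: C0^2 gamma^2 + sum_{k>=1} |varsigma_k|^2.
split = C0 ** 2 * gamma ** 2 + sum(abs(v) ** 2 for v in varsigma[1:])
```

The exhaustive average reproduces the split exactly, illustrating why the desired-user term is pinned to $C_0^2\gamma^2$ by the constraint while only $\tilde{\boldsymbol\varsigma}$ remains free.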
We can examine the convexity of (\ref{41}) by computing the Hessian matrix $\boldsymbol H$ with respect to $\tilde{\boldsymbol\varsigma}^H$ and $\tilde{\boldsymbol\varsigma}$, that is, $\boldsymbol H=\frac{\partial}{\partial\tilde{\boldsymbol\varsigma}^H}\frac{\partial J_{\textrm{ccm}}}{\partial\tilde{\boldsymbol\varsigma}}$, which yields \begin{equation}\label{42} \boldsymbol H=2\mathbb E\big[(|d_0|^2C_0^2\gamma^2-1)\tilde{\boldsymbol d}\tilde{\boldsymbol d}^H+\tilde{\boldsymbol\varsigma}^H\tilde{\boldsymbol d}\tilde{\boldsymbol d}^H\tilde{\boldsymbol\varsigma}\tilde{\boldsymbol d}\tilde{\boldsymbol d}^H+\tilde{\boldsymbol d}\tilde{\boldsymbol d}^H\tilde{\boldsymbol\varsigma}\tilde{\boldsymbol\varsigma}^H\tilde{\boldsymbol d}\tilde{\boldsymbol d}^H\big], \end{equation} where $\boldsymbol H$ should be positive semi-definite to ensure the convexity of the optimization problem. The second and third terms in (\ref{42}) yield positive semi-definite matrices, while the first term provides the condition $|d_0|^2C_0^2\gamma^2-1\geq0$ to ensure the convexity. Thus, $J_{\textrm{ccm}}$ is a convex function of $\boldsymbol T_r$ and $\bar{\boldsymbol w}$ when \begin{equation}\label{43} \gamma^2\geq\frac{1}{|d_0|^2C_0^2}. \end{equation} For the reduced-rank scheme with the GSC structure, the expression of the weight vector has been given in (\ref{9}).
Substituting this expression into the definition of $\varsigma_k$ and considering the fact that $\boldsymbol B\boldsymbol a(\theta_0)=\boldsymbol 0$, we obtain \begin{equation}\label{44} \varsigma_k=\left\{\begin{array}{cc} C_0\boldsymbol a_{\gamma}^H(\theta_0)\boldsymbol a(\theta_0) & \textrm{for}~k=0\\ -C_k\sum_{j=1}^{r}\boldsymbol t_{\bar{w}_{\textrm{gsc},j}}^H\boldsymbol B\boldsymbol a(\theta_k) & \textrm{for}~k=1, \ldots, q-1\\ \end{array}\right., \end{equation} where $\boldsymbol t_{\bar{w}_{\textrm{gsc},j}}=\bar{w}_{\textrm{gsc},j}\boldsymbol t_{\textrm{gsc},j}\in\mathbb C^{(m-1)\times1}$ and $\boldsymbol t_{\textrm{gsc},j}$ ($j=1, \ldots, r$) is the $j$th column vector of $\boldsymbol T_{\textrm{gsc}}$ for the GSC structure. Given $t_{j,l}^{'}(\theta_k)=C_k a_l(\theta_k)\boldsymbol t_{\bar{w}_{\textrm{gsc},j}}^H\boldsymbol b_l$ ($l=1, \ldots, m$), where $a_l(\theta_k)$ is the $l$th element of the steering vector with the direction $\theta_k$ and $\boldsymbol b_l\in\mathbb C^{(m-1)\times1}$ is the $l$th column vector of the signal blocking matrix $\boldsymbol B$, we have $\varsigma_k=-\sum_{l=1}^{m}\sum_{j=1}^{r}t_{j,l}^{'}(\theta_k)$. Thus, for $k=1, \ldots, q-1$, \begin{equation}\label{45} \varsigma_k^{\ast}\varsigma_k=\sum_{j=1}^{r}\sum_{n=1}^{r}\sum_{l=1}^{m}\sum_{p=1}^{m}{t_{j,l}^{'}}^{\ast}(\theta_k)t_{n,p}^{'}(\theta_k).
\end{equation} Substituting (\ref{44}) and (\ref{45}) into (\ref{37}), we get the expression for the GSC structure, which is \begin{equation}\label{46} \begin{split} \boldsymbol\varsigma^H\boldsymbol d\boldsymbol d^H\boldsymbol\varsigma&=|d_0|^2\varsigma_0^{\ast}\varsigma_0+\sum_{k=1}^{q-1}|d_k|^2\varsigma_k^{\ast}\varsigma_k\\ &=|d_0|^2C_0^2\boldsymbol a^H(\theta_0)\boldsymbol a_{\gamma}(\theta_0)\boldsymbol a^H_{\gamma}(\theta_0)\boldsymbol a(\theta_0)+\tilde{\boldsymbol\varsigma}^H\tilde{\boldsymbol d}\tilde{\boldsymbol d}^H\tilde{\boldsymbol\varsigma}\\ &=|d_0|^2C_0^2\gamma^2+\tilde{\boldsymbol\varsigma}^H\tilde{\boldsymbol d}\tilde{\boldsymbol d}^H\tilde{\boldsymbol\varsigma}, \end{split} \end{equation} which has the same form as (\ref{40}) for the DFP structure but with a different expression for the quantity $\tilde{\boldsymbol\varsigma}$. Using a similar interpretation as for the DFP, the quantity $\varsigma_k$ in (\ref{44}) combines the transformation matrix and the reduced-rank weight vector and thus simplifies the analysis. By computing the Hessian matrix $\boldsymbol H$, we can obtain the same conclusion as shown in (\ref{43})}. \section{Simulations} In this section, we evaluate the output signal-to-interference-plus-noise ratio (SINR) performance of the proposed adaptive reduced-rank algorithms and compare them with the existing methods. Specifically, we compare the proposed SG and RLS type algorithms with the full-rank (FR) SG and RLS algorithms and with the reduced-rank methods based on the MSWF and the AVF techniques for both the DFP and the GSC structures. For each algorithm, we consider the CMV and the CCM criteria for the beamformer design. We assume that the DOA of the desired user is known by the receiver. In each experiment, a total of $K=1000$ runs are carried out to obtain the curves.
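The output SINR measure used throughout the simulations can be sketched as follows. The code below is our own illustration (not the authors' implementation): it builds half-wavelength ULA steering vectors, forms the interference-plus-noise covariance for a small array, and compares the output SINR of the weight solution taken in the direction of $\boldsymbol R^{-1}\boldsymbol a(\theta_0)$ with that of the conventional beamformer $\boldsymbol w=\boldsymbol a(\theta_0)$; the array size and the angles here are arbitrary choices.

```python
import cmath
import math

def steering(theta_deg, m):
    """Half-wavelength ULA steering vector for direction theta_deg."""
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(1j * math.pi * k * s) for k in range(m)]

def hdot(u, v):
    """Inner product u^H v."""
    return sum(x.conjugate() * y for x, y in zip(u, v))

def solve(A, b):
    """Gaussian elimination with partial pivoting for a complex linear system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda rr: abs(M[rr][c]))
        M[c], M[p] = M[p], M[c]
        for rr in range(c + 1, n):
            f = M[rr][c] / M[c][c]
            for cc in range(c, n + 1):
                M[rr][cc] -= f * M[c][cc]
    x = [0j] * n
    for c in reversed(range(n)):
        x[c] = (M[c][n] - sum(M[c][cc] * x[cc] for cc in range(c + 1, n))) / M[c][c]
    return x

m = 4                     # small illustrative array (the simulations use m = 32)
sigma_n2 = 0.1            # unit source power with SNR = 10 dB
a0 = steering(10.0, m)    # desired user; the angles are arbitrary choices
a1 = steering(-40.0, m)   # one interferer

# Interference-plus-noise covariance R_in and full covariance R.
R_in = [[a1[i] * a1[j].conjugate() + (sigma_n2 if i == j else 0.0)
         for j in range(m)] for i in range(m)]
R = [[R_in[i][j] + a0[i] * a0[j].conjugate() for j in range(m)] for i in range(m)]

def output_sinr(w):
    """Output SINR: |w^H a0|^2 / (w^H R_in w) for unit desired-signal power."""
    num = abs(hdot(w, a0)) ** 2
    den = sum(w[i].conjugate() * R_in[i][j] * w[j]
              for i in range(m) for j in range(m)).real
    return num / den

w_mvdr = solve(R, a0)         # direction of R^{-1} a(theta_0)
sinr_mvdr = output_sinr(w_mvdr)
sinr_conv = output_sinr(a0)   # conventional beamformer w = a(theta_0)
```

Since a weight vector in the direction of $\boldsymbol R^{-1}\boldsymbol a(\theta_0)$ maximizes the output SINR, `sinr_mvdr` upper-bounds `sinr_conv` and is itself bounded by $m/\sigma_n^2$.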
For all simulations, the source power (including the desired user and interferers) is $\sigma_{\textrm{s}}^{2}=\sigma_{\textrm{i}}^{2}=1$, the input signal-to-noise ratio (SNR) is SNR$=10$ dB with spatially and temporally white Gaussian noise, and $\gamma=1$. Simulations are performed with a ULA containing $m=32$ sensor elements with half-wavelength interelement spacing. \subsection{Comparison of CMV and CCM Based Algorithms} In this part, we compare the proposed and existing algorithms according to the CMV and the CCM criteria for the DFP structure of the beamformer design. The simulation, which includes two experiments, shows the output SINR versus the input SNR. The input SNR is varied between $-10$ dB and $10$ dB. The number of users is $q=5$ with one desired user. Fig. \ref{fig:cmv_ccm_sinr_gram_final3} (a) plots the curves of the SG type algorithms based on the full-rank, MSWF, AVF and the proposed reduced-rank scheme, and Fig. \ref{fig:cmv_ccm_sinr_gram_final3} (b) shows the corresponding RLS type algorithms. The parameters used to obtain these curves are given in the caption, and the rank $r$ is selected to optimize the performance of each algorithm. From Fig. \ref{fig:cmv_ccm_sinr_gram_final3} (a), the output SINR of all SG type methods increases with the input SNR. The algorithms based on the CCM beamformer design outperform those based on the CMV since the CCM criterion is a positive measure of the deviation of the beamformer output from a constant modulus, which provides more information than the CMV for the parameter estimation of constant modulus constellations. The proposed CCM algorithms achieve better performance than the existing full-rank, MSWF and AVF ones. By employing the GS technique to reformulate the transformation matrix, the GS version algorithms achieve improved performance. Fig. \ref{fig:cmv_ccm_sinr_gram_final3} (b) verifies the same fact for the RLS type algorithms.
It is clear that the RLS type algorithms are superior to the SG type ones for all input SNR values. This simulation verifies that the performance of the adaptive algorithms based on the CCM beamformer design follows a similar trend to, but is better than, that based on the CMV for constant modulus constellations. Considering this fact, we will only compare the CCM based adaptive algorithms in the following parts for simplicity. Note that all the methods in this simulation are for the DFP structure. The algorithms for the GSC structure show a similar performance, which is given in the next part. \begin{figure}[!htb] \begin{center} \def\epsfsize#1#2{1.0\columnwidth} \epsfbox{fig4.eps} \caption{Output SINR versus input SNR with $m=32$, $q=5$, SNR$=10$ dB, (a) $\mu_{T_r}=0.002$, $\mu_{\bar{w}}=0.001$, $r=5$ for SG, $\mu_{T_r}=0.003$, $\mu_{\bar{w}}=0.0007$, $r=5$ for GS; (b) $\alpha=0.998$, $\delta=\bar{\delta}=0.03$, $r=5$ for RLS, $\alpha=0.998$, $\delta=\bar{\delta}=0.028$, $r=5$ for RGS of the proposed CCM reduced-rank scheme.} \label{fig:cmv_ccm_sinr_gram_final3} \end{center} \end{figure} \subsection{Output SINR for the DFP and the GSC} We evaluate the output SINR performance of the proposed and existing algorithms against the number of snapshots for both the DFP and the GSC structures in Fig. \ref{fig:ccm_allmethods_dsp_gram_final3} and Fig. \ref{fig:ccm_allmethods_gsc_gram_final3}, respectively. The number of snapshots is $N=500$. In Fig. \ref{fig:ccm_allmethods_dsp_gram_final3}, the convergence of the proposed SG type and extended GS version algorithms is close to that of the RLS type algorithm based on the MSWF, and the output SINR values are higher than those of the other SG type methods based on the full-rank, MSWF and AVF techniques. The convergence of the proposed RLS type and GS version algorithms is slightly slower than that of the AVF, but much faster than that of the other existing and proposed methods, and their tracking performance outperforms that of the MSWF and AVF based algorithms. Fig.
\ref{fig:ccm_allmethods_gsc_gram_final3} is carried out for the GSC structure under the same scenario as in Fig. \ref{fig:ccm_allmethods_dsp_gram_final3}. The curves of the considered algorithms for the GSC show nearly the same convergence and tracking performance as those for the DFP. This implies that the GSC structure is an alternative for the CCM beamformer design. The difference is that the GSC processor incorporates the constraint in the structure and thus converts the constrained optimization problem into an unconstrained one. The adaptive implementation of the GSC beamformer design is different from that of the DFP but the performance is similar. The following simulations are carried out for the DFP structure to simplify the presentation. \begin{figure}[!htb] \begin{center} \def\epsfsize#1#2{1.0\columnwidth} \epsfbox{fig5.eps} \caption{Output SINR versus the number of snapshots with $m=32$, $q=7$, SNR$=10$ dB, $\mu_{T_r}=0.003$, $\mu_{\bar{w}}=0.003$, $r=5$ for SG, $\mu_{T_r}=0.0023$, $\mu_{\bar{w}}=0.003$, $r=5$ for GS, $\alpha=0.998$, $\delta=\bar{\delta}=0.025$, $r=5$ for RLS, $\alpha=0.998$, $\delta=\bar{\delta}=0.02$, $r=5$ for RGS of the DFP structure.} \label{fig:ccm_allmethods_dsp_gram_final3} \end{center} \end{figure} \begin{figure}[!htb] \begin{center} \def\epsfsize#1#2{1.0\columnwidth} \epsfbox{fig6.eps} \caption{Output SINR versus the number of snapshots with $m=32$, $q=7$, SNR$=10$ dB, $\mu_{T_r}=0.0025$, $\mu_{\bar{w}_{\textrm{gsc}}}=0.002$, $r=5$ for SG, $\mu_{T_r}=0.003$, $\mu_{\bar{w}_{\textrm{gsc}}}=0.003$, $r=5$ for GS, $\alpha=0.998$, $\delta=\bar{\delta}=0.01$, $r=5$ for RLS, $\alpha=0.998$, $\delta=\bar{\delta}=0.0093$, $r=5$ for RGS of the GSC structure.} \label{fig:ccm_allmethods_gsc_gram_final3} \end{center} \end{figure} \subsection{ Mean Square Estimation Error of the Weight Solution} { In Fig.
\ref{fig:weight}, we measure the mean square estimation error $\mathbb E\{\|\boldsymbol w(i)-\boldsymbol w_{\textrm{mvdr}}\|^2\}$ between the (full-rank) weight solutions of the proposed methods $\boldsymbol w(i)=\boldsymbol T_r(i)\bar{\boldsymbol w}(i)$ and that of the minimum-variance-distortionless-response (MVDR) method \cite{Trees} $\boldsymbol w_{\textrm{mvdr}}=\gamma\frac{\boldsymbol R^{-1}\boldsymbol a(\theta_0)}{\boldsymbol a^H(\theta_0)\boldsymbol R^{-1}\boldsymbol a(\theta_0)}$, where $\boldsymbol R$ is obtained by its sample-average estimate. The experiment is carried out under the same scenario as in Fig. \ref{fig:ccm_allmethods_dsp_gram_final3}. The figure shows that the mean square estimation error decreases with the number of snapshots. The values for the proposed SG and RLS type algorithms decrease rapidly and reach a relatively lower level compared with those of the existing methods. Note that $\boldsymbol w_{\textrm{mvdr}}$ is not an optimum solution for the proposed algorithms but is viewed as a reference weight solution since, for the CCM based algorithms, the weight expression is not a pure function of the received data but also depends on the previous weight values}. \begin{figure}[!htb] \begin{center} \def\epsfsize#1#2{1.0\columnwidth} \epsfbox{fig7.eps} \caption{Square estimation error between the weight solution and the MVDR weight solution.} \label{fig:weight} \end{center} \end{figure} \subsection{Output SINR versus Rank $r$ and Automatic Rank Selection} In the next two experiments, we assess the output SINR performance of the proposed and analyzed algorithms versus their associated rank $r$, and check the effectiveness of the automatic rank selection technique. The experiment in Fig. \ref{fig:ccm_rank_gram_final3} is intended for setting the adequate rank $r$ of the reduced-rank schemes for a given input SNR and number of snapshots. The scenario is the same as that in Fig.
\ref{fig:ccm_allmethods_dsp_gram_final3} except that the number of snapshots is fixed to be $N=500$ and the rank $r$ is varied between $1$ and $16$. The result indicates that the best performance of the proposed SG and RLS type algorithms is obtained with rank $r=5$ for the proposed reduced-rank scheme. The performance of the full-rank methods is invariant with the change of the rank $r$. For the MSWF technique, its SG and RLS type algorithms achieve their best performance with ranks $r=6$ and $r=7$, respectively. For the AVF-based algorithm, the best rank is found to be $r=7$. It is interesting to note that the best $r$ is usually much smaller than the number of elements $m$, which leads to significant computational savings. For the proposed and analyzed algorithms, the range of $r$ that yields the best performance is concentrated between $r_{\textrm{min}}=3$ and $r_{\textrm{max}}=7$. This range is used in the next experiment to check the performance of the proposed algorithms with the automatic rank selection technique. Since the performance of the proposed reduced-rank algorithms was found in our studies to be a function of the rank $r$ and other parameters such as the step size and the forgetting factor, we need to consider their impacts on the performance of the system. Specifically, we assume that the step size of the SG type algorithms and the forgetting factor of the RLS type algorithms are adequately chosen and we focus on the automatic rank selection technique introduced in the previous section. In Fig. \ref{fig:ccm_rr_sg_rls_autorank_gram_final3}, the proposed reduced-rank algorithms utilize fixed values of the rank as well as the automatic rank selection technique. We consider the presence of $q=10$ users (one desired) in the system. The results show that with a lower rank $r=3$ the reduced-rank algorithms usually converge faster but achieve lower output SINR values.
Conversely, with a higher rank $r=7$ the proposed algorithms converge more slowly than with a lower rank but reach higher output SINR values. The developed automatic rank selection technique allows the proposed algorithms to circumvent the tradeoff between convergence and steady-state performance for a given rank, by adaptively choosing the best rank for a given number of snapshots, which provides both fast convergence and improved tracking performance. \begin{figure}[!htb] \begin{center} \def\epsfsize#1#2{1.0\columnwidth} \epsfbox{fig8.eps} \caption{Output SINR versus rank $r$ with $m=32$, $q=7$, SNR$=10$ dB.} \label{fig:ccm_rank_gram_final3} \end{center} \end{figure} \begin{figure}[!htb] \begin{center} \def\epsfsize#1#2{1.0\columnwidth} \epsfbox{fig9.eps} \caption{Output SINR versus the number of snapshots with $m=32$, $q=10$, SNR$=10$ dB, (a) $\mu_{T_r}=0.003$, $\mu_{\bar{w}}=0.004$ for SG, $\mu_{T_r}=0.003$, $\mu_{\bar{w}}=0.001$ for GS; (b) $\alpha=0.998$, $\delta=\bar{\delta}=0.03$ for RLS, $\alpha=0.998$, $\delta=\bar{\delta}=0.026$, $r=5$ for RGS with the automatic rank selection technique.} \label{fig:ccm_rr_sg_rls_autorank_gram_final3} \end{center} \end{figure} \subsection{Performance in non-stationary scenarios} In the last experiment, we evaluate the performance of the proposed and analyzed algorithms in a non-stationary scenario, namely, when the number of users changes. The automatic rank selection technique is employed, and the step size and the forgetting factor are set to ensure that the considered algorithms converge quickly to the steady state. In this experiment, the scenario starts with $q=8$ users including one desired user. In the first stage (first $500$ snapshots) of Fig. \ref{fig:ccm_allmethods_moreusers_gram_final3}, the convergence and steady-state performance of the proposed SG type algorithms are superior to those of the other SG type methods based on the full-rank, MSWF and AVF techniques.
The proposed RLS type algorithm has a convergence rate slightly slower than that of the AVF but faster than the other analyzed methods, and its steady-state performance is better than that of the existing ones. Three more interferers enter the system at time instant $i=500$. This change causes a sudden drop of the output SINR and degrades the performance of all methods. The proposed SG and RLS type algorithms keep faster convergence and better steady-state performance in comparison with the corresponding SG and RLS type methods based on the full-rank and MSWF techniques. The convergence of the AVF method is fast but its steady-state performance is inferior to that of the proposed methods. \begin{figure}[!htb] \begin{center} \def\epsfsize#1#2{1.0\columnwidth} \epsfbox{fig10.eps} \caption{Output SINR versus the number of snapshots with $m=32$, $q_{1}=8$, $q_2=11$, SNR$=10$ dB, $\mu_{T_r}=0.003$, $\mu_{\bar{w}}=0.0038$, $r=5$ for SG, $\mu_{T_r}=0.003$, $\mu_{\bar{w}}=0.001$, $r=5$ for GS, $\alpha=0.998$, $\delta=\bar{\delta}=0.033$, $r=5$ for RLS, $\alpha=0.998$, $\delta=\bar{\delta}=0.028$, $r=5$ for RGS of the proposed CCM reduced-rank scheme.} \label{fig:ccm_allmethods_moreusers_gram_final3} \end{center} \end{figure} \section{Concluding Remarks} We proposed a CCM reduced-rank scheme based on the joint iterative optimization of adaptive filters for beamformer design. In the proposed scheme, the dimension of the received vector is reduced by an adaptive transformation matrix that is formed by a bank of full-rank adaptive filters, and the transformed received vector is processed by a reduced-rank adaptive filter to estimate the desired signal. The proposed scheme was developed for both the DFP and the GSC structures. We derived the CCM expressions for the transformation matrix and the reduced-rank weight vector, and developed SG and RLS type algorithms for their efficient implementation.
The GS technique was employed in the proposed algorithms to reformulate the transformation matrix and thus improve the performance. The automatic rank selection technique was developed to determine the most adequate rank and achieve a good trade-off between the convergence rate and the steady-state performance of the proposed methods. The complexity and convexity analysis of the proposed algorithms was carried out. Simulation results for the beamforming application showed that the proposed reduced-rank algorithms significantly outperform the existing full-rank and reduced-rank methods in convergence and steady-state performance at comparable complexity. \begin{appendix} \section*{Derivation of (\ref{33})} In this appendix, we show the details of the derivation of the expression for the transformation matrix in (\ref{33}). Assuming that $\bar{\boldsymbol w}(i)\neq\boldsymbol 0$ is known and taking the gradient of (\ref{32}) with respect to $\boldsymbol T_r(i)$, we get \begin{equation}\label{62} \begin{split} \nabla L_{\textrm{un}_{\boldsymbol T_r(i)}}&=2\sum_{l=1}^{i}|y(l)|^2\boldsymbol x(l)\boldsymbol x^H(l)\boldsymbol T_r(i)\bar{\boldsymbol w}(i)\bar{\boldsymbol w}^H(i)\\ &-2\sum_{l=1}^{i}\boldsymbol x(l)\boldsymbol x^H(l)\boldsymbol T_r(i)\bar{\boldsymbol w}(i)\bar{\boldsymbol w}^H(i)+2\lambda\boldsymbol a(\theta_0)\bar{\boldsymbol w}^H(i)\\ &=2\hat{\boldsymbol R}(i)\boldsymbol T_r(i)\bar{\boldsymbol w}(i)\bar{\boldsymbol w}^H(i)-2\hat{\boldsymbol p}(i)\bar{\boldsymbol w}^H(i)\\ &+2\lambda\boldsymbol a(\theta_0)\bar{\boldsymbol w}^H(i).
\end{split} \end{equation} Setting the above gradient to the zero matrix, right-multiplying both sides by $\bar{\boldsymbol w}(i)$, and rearranging the expression, we obtain \begin{equation}\label{63} \boldsymbol T_r(i)\bar{\boldsymbol w}(i)=\hat{\boldsymbol R}^{-1}(i)\big[\hat{\boldsymbol p}(i)-\lambda\boldsymbol a(\theta_0)\big]. \end{equation} If we define $\hat{\boldsymbol p}_{\hat{R}}(i)=\hat{\boldsymbol R}^{-1}(i)\big[\hat{\boldsymbol p}(i)-\lambda\boldsymbol a(\theta_0)\big]$, finding the solution $\boldsymbol T_r(i)$ of (\ref{63}) can be regarded as solving the linear equation \begin{equation}\label{64} \boldsymbol T_r(i)\bar{\boldsymbol w}(i)=\hat{\boldsymbol p}_{\hat{R}}(i). \end{equation} Given $\bar{\boldsymbol w}(i)\neq\boldsymbol 0$, there exist in general multiple $\boldsymbol T_r(i)$ satisfying (\ref{64}). Therefore, we derive the minimum Frobenius-norm solution for stability. Let us express the quantities involved in (\ref{64}) by \begin{equation}\label{65} \boldsymbol T_{r}(i)=\begin{bmatrix} \bar{\boldsymbol\rho}_{1}(i)\\ \bar{\boldsymbol\rho}_{2}(i)\\ \vdots\\ \bar{\boldsymbol\rho}_{m}(i)\\ \end{bmatrix};~~ \hat{\boldsymbol p}_{\hat{R}}(i)=\begin{bmatrix} \hat{p}_{\hat{R},1}(i)\\ \hat{p}_{\hat{R},2}(i)\\ \vdots\\ \hat{p}_{\hat{R},m}(i)\\ \end{bmatrix}. \end{equation} The search for the minimum Frobenius-norm solution of (\ref{64}) reduces to the following $m$ subproblems $(j=1, \ldots, m)$: \begin{equation}\label{66} \min\|\bar{\boldsymbol\rho}_{j}(i)\|^2~~~ \textrm{subject~to}~~ \bar{\boldsymbol\rho}_{j}(i)\bar{\boldsymbol w}(i)=\hat{p}_{\hat{R},j}(i). \end{equation} The solution to (\ref{66}) is the projection of $\bar{\boldsymbol\rho}(i)$ onto the hyperplane $\mathcal {H}_{j}(i)=\big\{\bar{\boldsymbol \rho}(i)\in\mathbb C^{1\times r}:~\bar{\boldsymbol\rho}(i)\bar{\boldsymbol w}(i)=\hat{p}_{\hat{R},j}(i)\big\}$, which is given by \begin{equation}\label{67}
\bar{\boldsymbol\rho}_{j}(i)=\hat{p}_{\hat{R},j}(i)\frac{\bar{\boldsymbol w}^H(i)}{\|\bar{\boldsymbol w}(i)\|^2}. \end{equation} Hence, the minimum Frobenius-norm solution for the transformation matrix is given by \begin{equation}\label{68} \boldsymbol T_r(i)=\hat{\boldsymbol p}_{\hat{R}}(i)\frac{\bar{\boldsymbol w}^H(i)}{\|\bar{\boldsymbol w}(i)\|^2}. \end{equation} Substituting the definition of $\hat{\boldsymbol p}_{\hat{R}}(i)$ into (\ref{68}), we have \begin{equation}\label{69} \boldsymbol T_r(i)=\hat{\boldsymbol R}^{-1}(i)\big[\hat{\boldsymbol p}(i)-\lambda\boldsymbol a(\theta_0)\big]\frac{\bar{\boldsymbol w}^H(i)}{\|\bar{\boldsymbol w}(i)\|^2}. \end{equation} The multiplier $\lambda$ can be obtained by substituting (\ref{63}) into the constraint $\bar{\boldsymbol w}^H(i)\boldsymbol T_r^H(i)\boldsymbol a(\theta_0)=\gamma$, which yields \begin{equation}\label{70} \lambda=\frac{\hat{\boldsymbol p}^H(i)\hat{\boldsymbol R}^{-1}(i)\boldsymbol a(\theta_0)-\gamma}{\boldsymbol a^H(\theta_0)\hat{\boldsymbol R}^{-1}(i)\boldsymbol a(\theta_0)}. \end{equation} Therefore, the expression for the transformation matrix in (\ref{33}) can be obtained by substituting (\ref{70}) into (\ref{69}). \end{appendix}
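The minimum Frobenius-norm construction above can be illustrated numerically. In the sketch below (our own, with arbitrary stand-ins for $\hat{\boldsymbol p}_{\hat R}(i)$ and $\bar{\boldsymbol w}(i)$), the rank-one solution of (\ref{68}) satisfies $\boldsymbol T_r\bar{\boldsymbol w}=\hat{\boldsymbol p}_{\hat R}$, and any perturbation $\boldsymbol\Delta$ with $\boldsymbol\Delta\bar{\boldsymbol w}=\boldsymbol 0$ yields another solution with a strictly larger Frobenius norm:

```python
r, m = 3, 4
# Arbitrary stand-ins for the reduced-rank weight vector and p_hat_R.
w_bar = [0.8 - 0.2j, -0.5 + 0.6j, 0.3 + 0.4j]
p_hat = [1.0 + 0.5j, -0.7 + 0.2j, 0.4 - 0.9j, -0.1 + 0.3j]

n2 = sum(abs(x) ** 2 for x in w_bar)  # ||w_bar||^2

# Minimum Frobenius-norm solution (68): T = p_hat w_bar^H / ||w_bar||^2.
T = [[p_hat[i] * w_bar[j].conjugate() / n2 for j in range(r)] for i in range(m)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def fro2(M):
    """Squared Frobenius norm."""
    return sum(abs(x) ** 2 for row in M for x in row)

# Another solution T + Delta with Delta w_bar = 0: its rows are multiples of a
# row vector obtained by projecting z onto the orthogonal complement of w_bar.
z = [0.6 + 0.1j, -0.2 - 0.4j, 0.9 + 0.3j]
zw = sum(z[j] * w_bar[j] for j in range(r))
d_row = [z[j] - zw * w_bar[j].conjugate() / n2 for j in range(r)]
coef = [1.0, -0.5, 0.3, 2.0]
T_other = [[T[i][j] + coef[i] * d_row[j] for j in range(r)] for i in range(m)]

residual_min = max(abs(a - b) for a, b in zip(matvec(T, w_bar), p_hat))
residual_other = max(abs(a - b) for a, b in zip(matvec(T_other, w_bar), p_hat))
```

Both matrices solve (\ref{64}) up to rounding, but the perturbation is orthogonal to the rank-one solution in the Frobenius inner product, so its norm can only grow; this is the sense in which (\ref{68}) is the minimum-norm choice adopted for stability.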
Q: ZF2 - set selected value on Select Element I've a problem with dropdown list with Zend Framework 2 & Doctrine. I would put the "selected" attribute on my dropdown list but all options pass to selected My code : Controller : public function editAction() { // get error message during addAction $this->layout()->setVariable("messageError", $this->flashMessenger()->getErrorMessages()); $auth = $this->getAuthService(); if ($auth->hasIdentity()){ $builder = new AnnotationBuilder(); // Get id of StaticContent $id = (int)$this->getEvent()->getRouteMatch()->getParam('id'); if (!$id) { $this->flashMessenger()->addErrorMessage("Aucun plan choisi !"); return $this->redirect()->toRoute('admin/plans'); } $plan = $this->getEntityManager()->getRepository("Admin\Entity\Plan")->find((int)$id); $form = $builder->createForm($plan); // Find options for Localite list (<select>) $localites = $this->getEntityManager()->getRepository("Admin\Entity\Localite")->getArrayOfAll(); $form->get('localiteid')->setValueOptions($localites); $form->get('localiteid')->setValue("{$plan->getLocaliteid()->getId()}"); // Find options for TypePlan list (<select>) $typesPlan = $this->getEntityManager()->getRepository("Admin\Entity\TypePlan")->getArrayOfAll(); $form->get('typeid')->setValueOptions($typesPlan); $form->get('typeid')->setValue("{$plan->getTypeid()->getId()}"); // Options for Statut list (<select>) $form->get('statut')->setValueOptions(array('projet'=>'Projet', 'valide'=>'Validé')); $form->get('statut')->setValue($plan->getStatut()); $form->setBindOnValidate(false); $form->bind($plan); $form->add(array( 'name' => 'submit', 'attributes' => array( 'type' => 'submit', 'value' => 'Modifier', 'id' => 'submitbutton', 'class' => "btn btn-primary" ), )); $request = $this->getRequest(); if ($request->isPost()) { [...] 
} } With $localites = $this->getEntityManager()->getRepository("Admin\Entity\Localite")->getArrayOfAll(); $form->get('localiteid')->setValueOptions($localites); i populate my dropdown correctly, normally with $form->get('localiteid')->setValue("{$plan->getLocaliteid()->getId()}"); just set "selected" on option defined by : $plan->getLocaliteid()->getId() So why all options are selected in my dropdown ?! Information : It's the same for typeId but no Statut A: It's probably not working because of the curly braces. According to the PHP documentation Using single curly braces ({}) will not work for accessing the return values of functions or methods or the values of class constants or static class variables. This is also unnecessary when using setValue. ZF2 will convert it to a string when formatting it in the view. When you create the arrays to pass to setValueOptions() you should make it an associative array of arrays with the following values: $form->get('select')->setValueOptions(array( 'field' => array( 'value' => 'value_of_the_option', 'label' => 'what is displayed', 'selected' => true, ), )); Which ever of the fields has the selected option set to true will be the default selection in the form element. A: Personally i don't know if getArrayOfAll() such function exists, i assume that you are correctly passing array to FORM, I think you should be doing something like this to set value. $form->get('localiteid')->setValue($plan->getLocaliteid()->getId()); But Since you are populating DROP down i guess this approach will not work best with Drop Down. You need to do something like this $form->get('localiteid')->setAttributes(array('value'=>$plan->getLocaliteid()->getId(),'selected'=>true)); A: I've found a bug ?! 
$plan = $this->getEntityManager()->getRepository("Admin\Entity\Plan")->find((int)$id); $idLocalite = 18;//(int)$plan->getLocaliteid()->getId(); $idTypePlan = 2;//(int)$plan->getTypeid()->getId(); The problem appears when I use $plan->getLocaliteid()->getId(); or $plan->getTypeid()->getId() to pass the parameter to the repository method getArrayOfAll($idLocalite). LocaliteRepository.php: class LocaliteRepository extends EntityRepository { public function getArrayOfAll($currentLocaliteId) { $result = $this->_em->createQuery("SELECT l.nom, l.localiteid FROM Admin\Entity\Localite l ORDER BY l.nom")->getArrayResult(); $localite = array(); foreach($result as $loc) { if ($currentLocaliteId == $loc['localiteid']) { $localite[$loc['localiteid']] = array( 'value' => $loc['localiteid'], 'label' => $loc['nom'], 'selected' => true, ); } else { $localite[$loc['localiteid']] = array( 'value' => $loc['localiteid'], 'label' => $loc['nom'], 'selected' => false ); //$localite[$loc['localiteid']] = $loc['nom']; } } return $localite; } } So, if I use $idLocalite = 18 instead of $idLocalite = (int)$plan->getLocaliteid()->getId(), only the wanted option is selected. Why?!
\section{Introduction and preliminaries} The notion of complexity for compact 3-dimensional manifolds has been introduced by S. Matveev via simple spines. We briefly recall its definition (for further reference see \cite{M1,M2}). A polyhedron $P$ embedded into a compact connected 3-manifold $M$ is called a \textit{spine} of $M$ if $M$ collapses to $P$ in the case $\partial M \neq \emptyset$, and if $M-{\rm Int}(B)$ collapses to $P$ in the case $\partial M = \emptyset$, where $B$ is a closed 3-ball in $M$. Moreover, a spine $S$ is said to be \textit{almost simple} if the link of each point $x\in S$ can be embedded into $K_4$, the complete graph on four vertices. A \textit{true vertex} of an almost simple spine $S$ is a point $x\in S$ whose link is homeomorphic to $K_4$. The \textit{complexity} $c(M)$ of $M$ is the minimum number of true vertices among all almost simple spines of $M$. Complexity is additive under connected sum of manifolds and, for any integer $n \geqslant 0$, there are only finitely many closed prime manifolds with complexity $n$. In the closed orientable case there are only four prime manifolds of complexity zero, which are ${\bf S}^3$, $\mathbb{RP}^3$, ${\bf S}^2\times {\bf S}^1$, and $L_{3,1}$. Apart from these special cases, it can be proved that $c(M)$ is the minimum number of tetrahedra needed to obtain $M$ by pasting together their faces (via face pairings). A complete classification of closed orientable prime manifolds up to complexity 12 can be found in \cite{M3,M4}. In general, the computation of the complexity of a given manifold is a difficult problem. So, two-sided estimates of complexity become important, especially when dealing with infinite families of manifolds (see, for example, \cite{M2,MPV,PV}). By \cite[Theorem 2.6.2]{M2}, a lower bound for the complexity of a given manifold can be obtained via the computation of its first homology group. 
Moreover, for a hyperbolic manifold a lower bound can be obtained via volume arguments (see \cite{M2,MPV,PV}). On the other hand, upper bounds can be found using triangulations. In this paper we deal with the possibility of calculating complexity via Heegaard decompositions. This way of representing 3-manifolds has proved to be very useful in different contexts. So, it is natural to wonder whether it is possible to calculate complexity via Heegaard diagrams. In Section~\ref{sec2} we use Heegaard diagrams to define modified Heegaard complexity of compact 3-manifolds and compare this notion with Matveev complexity. A widely studied family of manifolds, defined via Heegaard diagrams, is that of Dunwoody manifolds (see \cite{Du}). This family coincides with the class of strongly-cyclic branched coverings of $(1,1)$-knots (see \cite{CM1}), including, for example, 2-bridge knots, torus knots and some pretzel knots. In Section~\ref{sec3} we construct a class of manifolds that generalizes the class of Dunwoody manifolds, including other interesting classes of manifolds, such as cyclic branched coverings of 2-component 2-bridge links. In Section~\ref{sec4}, using modified Heegaard complexity, we obtain two-sided estimates for the complexity of some families of generalized Dunwoody manifolds. \section{Heegaard diagrams and complexity} \label{sec2} In this section we introduce the notions of modified complexity for Heegaard diagrams and for manifolds, comparing these notions with Matveev complexity of manifolds. Let us start by recalling some definitions. Let $\Sigma_g$ be a closed, connected, orientable surface of genus $g$. A \textit{system of curves} on $\Sigma_g$ is a (possibly empty) set of simple closed curves $\mathcal{C}=\{\gamma_1,\ldots,\gamma_k\}$ on $\Sigma_g$ such that $\gamma_i \cap \gamma_j = \emptyset$ if $i\ne j$, for $i,j=1,\ldots, k$. 
Moreover, we denote by $V(\mathcal{C})$ the set of connected components of the surface obtained by cutting $\Sigma_g$ along the curves of $\mathcal{C}$. The system $\mathcal{C}$ is said to be \textit{proper} if all elements of $V(\mathcal{C})$ have genus zero, and \textit{reduced} if either $\vert V(\mathcal{C})\vert =1$ or $V(\mathcal{C})$ has no elements of genus zero. Thus, $\mathcal{C}$ is: (i) proper and reduced if and only if it consists of one element of genus $0$; (ii) non-proper and reduced if and only if all its elements are of genus $>0$; (iii) proper and non-reduced if and only if it has more than one element and all of them are of genus $0$; (iv) non-proper and non-reduced if and only if it has at least one element of genus $0$ and at least one element of genus $>0$. Note that a proper reduced system of curves on $\Sigma_g$ contains exactly $g$ curves. We denote by $G(\mathcal C)$ the graph which is dual to the one determined by $\mathcal{C}$ on $\Sigma_g$. Thus, vertices of $G(\mathcal C)$ correspond to elements of $V(\mathcal{C})$ and edges correspond to curves of $\mathcal{C}$. Note that loops and multiple edges may arise in~$G(\mathcal C)$. A \textit{compression body} $K_g$ of genus $g$ is a 3-manifold with boundary, obtained from $\Sigma_g\times [0,1]$ by attaching a finite set of 2-handles $Y_1,\ldots, Y_k$ along a system of curves (called \textit{attaching circles}) on $\Sigma_g\times\{0\}$ and filling in with balls all the spherical boundary components of the resulting manifold, except for $\Sigma_g\times\{1\}$ when $g=0$. Moreover, $\partial_+ K_g=\Sigma_g\times\{1\}$ is called the \textit{positive} boundary of $K_g$, while $\partial_- K_g = \partial K_g-\partial_+ K_g$ is called the \textit{negative} boundary of $K_g$. Notice that a compression body is a handlebody if and only if $\partial_- K_g = \emptyset$, i.e., the system of the attaching circles on $\Sigma_g\times\{0\}$ is proper. 
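For concreteness, here is a small illustrative example (added for exposition, not taken from the original text):

```latex
% Illustrative example (not part of the original text).
For instance, let $\mathcal{C}=\{\mu_1,\mu_2\}$ consist of the meridians
of the two handles of $\Sigma_2$. Cutting $\Sigma_2$ along $\mu_1$ and
$\mu_2$ yields a single four-holed sphere, so $\vert V(\mathcal{C})\vert=1$
and its only element has genus $0$: the system is proper and reduced
(case (i) above) and, as expected, contains exactly $g=2$ curves.
Taking $\mathcal{C}$ as attaching circles gives a compression body with
$\partial_- K_2=\emptyset$, i.e., the genus-2 handlebody.
```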
Obviously, homeomorphic compression bodies can be obtained from (infinitely many) non-isotopic systems of attaching circles. \begin{remark} \label{riduzione} \textup{If the system of attaching circles is not reduced then it contains at least one reduced subsystem of curves determining the same compression body $K_g$. Indeed, if $\mathcal{C}$ is the system of attaching circles, denote by $V^+(\mathcal{C})$ the set of vertices of $G(\mathcal{C})$ corresponding to the components with genus greater than zero, and by $\mathcal{A}(\mathcal{C})$ the set consisting of all the graphs $T_i$ such that: \begin{itemize} \item $T_i$ is a subgraph of $G(\mathcal C)$; \item if $V^+(\mathcal{C})=\emptyset$ then $T_i$ is a maximal tree in $G(\mathcal C)$; \item if $V^+(\mathcal{C})\ne \emptyset$ then $T_i$ contains all the vertices of $G(\mathcal C)$ and each component of $T_i$ is a tree containing exactly one vertex of $V^+(\mathcal{C})$. \end{itemize} Then, for any $T_i \in \mathcal{A}(\mathcal{C})$, the system of curves obtained by removing from $\mathcal{C}$ the curves corresponding to the edges of $T_i$ is reduced and determines the same compression body. Note that this operation corresponds to removing complementary 2- and 3-handles. Moreover, it is easy to see that if $\partial_- K_g$ has $k$ boundary components with genus $g_1,\ldots,g_k$ then $$ \vert E(T_i)\vert = \vert\mathcal{C}\vert - n - k + 1 + \sum_{j=1}^k g_j $$ for each $T_i\in\mathcal{A}(\mathcal{C})$, where $E(T_i)$ denotes the edge set of $T_i$.} \end{remark} Let $M$ be a compact, connected, orientable 3-manifold without spherical boundary components. A \textit{Heegaard surface} of genus $g$ for $M$ is a surface $\Sigma_g$ embedded in $M$ such that $M-\Sigma_g$ consists of two components whose closures $K'$ and $K''$ are (homeomorphic to), respectively, a genus $g$ handlebody and a genus $g$ compression body. The triple $(\Sigma_g, K',K'')$ is called a \textit{Heegaard splitting} of $M$. 
It is a well-known fact that each compact connected orientable 3-manifold without spherical boundary components admits a Heegaard splitting. \begin{remark} \textup{By \cite[Proposition 2.1.5]{M2} the complexity of a manifold is not affected by puncturing it. So, with the aim of computing complexity, it is not restrictive to assume that the manifold does not have spherical boundary components.} \end{remark} On the other hand, a triple $H=(\Sigma_g, \mathcal{C'},\mathcal{C''})$, where $\mathcal{C'}$ and $\mathcal{C''}$ are two transversally intersecting systems of curves on $\Sigma_g$ with $\mathcal{C'}$ proper, uniquely determines a 3-manifold $M_H$, which corresponds to the Heegaard splitting $(\Sigma_g, K',K'')$, where $K'$ and $K''$ are respectively the handlebody and the compression body whose attaching circles correspond to the curves in the two systems. Such a triple is called a \textit{Heegaard diagram} for $M_H$. We denote by $\Gamma(H)$ the graph embedded in $\Sigma_g$, obtained from the curves of $\mathcal{C'}\cup \mathcal{C''}$, and by $\mathcal{R}(H)$ the set of regions of $\Gamma(H)$. Note that $\Gamma(H)$ has two types of vertices: singular vertices, which are 4-valent, and non-singular ones, which are 2-valent. A diagram $H$ is called \textit{reduced} if both the systems of curves are reduced. If $H$ is non-reduced, then we denote by $\textup{Rd}(H)$ the set of reduced Heegaard diagrams obtained from $H$ by reducing the systems of curves. In \cite[Section~7.6]{M2} the notion of complexity of a reduced Heegaard diagram $H$ of a genus two closed manifold is defined as the number $c(H)$ of singular vertices of the graph $\Gamma(H)$. Moreover, the author proved that $c(M_H) \leqslant c(H)$. Now we extend this definition to the general case, slightly modifying it in order to obtain a better estimate for the complexity of $M_H$. 
The modified complexity of a reduced Heegaard diagram $H$ is $$\widetilde{c}(H)=c(H)-\max\,\{n(R)\mid R\in\mathcal{R}(H)\},$$ where $n(R)$ denotes the number of singular vertices contained in the region $R$, and the modified complexity of a (non-reduced) Heegaard diagram $H$ is $$\widetilde{c}(H)=\min\,\{\widetilde{c}(H')\mid H'\in\textup{Rd}(H)\}.$$ We define the \textit{modified Heegaard complexity} of a compact connected 3-manifold $M$ as $$ \widetilde{c}(M) = \min\,\{\widetilde{c}(H) \mid H \in\mathcal{H}(M)\} , $$ where $\mathcal{H}(M)$ is the set of all Heegaard diagrams of $M$. The following statement generalizes a result of \cite[Proposition 2.1.8]{M2} (for the case of reduced diagrams of closed manifolds) and \cite{C} (for the case of Heegaard diagrams arising from gem representations of closed manifolds). \begin{proposition} \label{prop4} If $M$ is a compact connected 3-manifold then $$ c(M) \leqslant \widetilde{c}(M) . $$ \end{proposition} \begin{proof} Let $H=(\Sigma_g,\mathcal C',\mathcal C'')$ be a Heegaard diagram of $M$ and let $(\Sigma_g,K',K'')$ be the associated Heegaard splitting. We want to prove that $c(M) \leqslant \widetilde c(H)$. From the definition of modified complexity it is clear that we can suppose that $H$ is reduced. If $\partial M = \emptyset$ then the statement is given in \cite[Proposition 2.1.8]{M2}. For the case $\partial M\ne \emptyset$ the same proof works for the following reason. The simple polyhedron obtained as the union of $\Sigma_g$ with the cores of the 2-handles of $K'$ and $K''$ is a spine of $M-{\rm Int}(B)$ with $c(H)$ singular vertices, where $B \subset K'$ is a closed ball. Since $\partial M$ is contained in $K''$, a spine for $M$ can be obtained by connecting $\partial B$ with $\partial M$ via pinching a region of~$\mathcal{R}(H)$. 
\end{proof} By results of \cite{CC}, the upper bound in Proposition~\ref{prop4} becomes an equality for the $69$ closed connected prime orientable 3-manifolds admitting a (colored) triangulation with at most $28$ tetrahedra. As far as we know, there is no example where the strict inequality holds. \begin{conjecture} For every compact connected orientable 3-manifold $M$ the equality $c(M)= \widetilde{c}(M)$ holds. \end{conjecture} \section{Generalized Dunwoody manifolds} \label{sec3} In this section we define a class of manifolds that generalizes the class of Dunwoody manifolds introduced in \cite{Du}. A \textit{Dunwoody diagram} is a trivalent regular planar graph, depending on six integers $a,b,c,n,r,s$, such that $n>0$, $a,b,c \geqslant 0$ and $d=2a+b+c>0$, and it is defined as follows (see Figure~\ref{dun}). \begin{figure} \begin{center} \includegraphics*[totalheight=6.5cm]{Dunwoody.eps} \end{center} \caption{A Dunwoody diagram.} \label{dun} \end{figure} It contains $n$ internal circles $C'_1,\ldots,C'_n$, and $n$ external circles $C''_1,\ldots,C''_n$, each having $d$ vertices. The circle $C'_i$ (resp. $C''_i$) is connected to the circle $C'_{i+1}$ (resp. $C''_{i+1}$) by $a$ parallel arcs, to the circle $C''_{i}$ by $c$ parallel arcs and to the circle $C''_{i-1}$ by $b$ parallel arcs, for every $i=1,\ldots,n$ (subscripts mod $n$). We denote by $\mathcal{A}$ the set of arcs, and by $\mathcal B$ the set of circles. By gluing the circle $C'_i$ to the circle $C''_{i+s}$ so that equally labelled vertices are identified (see Figure~\ref{dun} for the labelling), we obtain a Heegaard diagram $H(a,b,c,n,r,s)=(\Sigma_n,\mathcal{C}',\mathcal{C}'')$, where $\mathcal{C}'$ is the proper, reduced system of curves arising from $\mathcal B$, containing $n$ curves, and $\mathcal{C}''$ is the system of curves arising from $\mathcal{A}$, containing $m>0$ curves. Observe that the parameters $r$ and $s$ can be considered mod~$d$ and mod~$n$ respectively. 
We call $H(a,b,c,n,r,s)$ a \textit{closed Dunwoody diagram}. The \textit{generalized Dunwoody manifold} $M(a,b,c,n,r,s)$ is the manifold $M_{H(a,b,c,n,r,s)}$. Since both the diagram and the identification rule are invariant with respect to an obvious cyclic action of order $n$, the generalized Dunwoody manifold $M(a,b,c,n,r,s)$ admits a cyclic symmetry of order~$n$. \begin{remark}\label{equivalenza} \textup{It is easy to observe that the diagrams $H(a,b,c,n,r,s)$ and $H(a,c,b,n,d-r,n-s-1)$ are isomorphic, so they represent the same manifold. } \end{remark} A generalized Dunwoody manifold $M(a,b,c,n,r,s)$ is a Dunwoody manifold when the system $\mathcal{C}''$ of curves arising from $\mathcal{A}$ is proper and reduced. In this case $H(a,b,c,n,r,s)$ is a ``classical'' Heegaard diagram (see \cite{He}) and therefore all Dunwoody manifolds are closed. As proved in \cite{CM}, the class of Dunwoody manifolds coincides with the class of strongly-cyclic branched coverings of $(1,1)$-knots. So, in particular, it contains all cyclic branched coverings of 2-bridge knots. It is not known whether cyclic branched coverings of 2-bridge links (with two components) admit representations as Dunwoody manifolds, but they certainly are generalized Dunwoody manifolds. This can be shown by introducing a polyhedral description for generalized Dunwoody manifolds. \begin{figure} \begin{center} \includegraphics*[totalheight=10cm]{Figure2.eps} \end{center} \caption{Polyhedral description of generalized Dunwoody manifolds.} \label{pol} \end{figure} Referring to Figure \ref{pol}, let $B$ be the closed unit 3-ball in $\mathbb{R}^3$ and consider on its boundary $n$ equally spaced meridians $m_1,\ldots,m_n$ joining the north pole $N=(0,0,1)$ with the south pole $S=(0,0,-1)$. Subdivide each meridian $m_i$ into $2a+b$ arcs with endpoints $P_{i,j}$, $j=0,\ldots,2a+b$, such that $P_{i,0}=N$ and $P_{i,2a+b}=S$. Let $t_i \subset \partial B$ be the shortest arc connecting $P_{i,a+b}$ with $P_{i+1,a}$, for $i=1,\ldots,n$. 
We subdivide $t_i$ into $c$ arcs with endpoints $Q_{i,j}$ for $j=0,\ldots,c$ such that $Q_{i,0}=P_{i,a+b}$ and $Q_{i,c}=P_{i+1,a}$. In this way $\partial B$ is subdivided into $2n$ $d$-gons with $d=2a+b+c$. We denote by $R_1,\ldots,R_n$ the $d$-gons containing the north pole $P_{i,0} = N$ and by $R'_1,\ldots,R'_n$ the $d$-gons containing the south pole. Moreover, let $$ P'_{i,0} = \begin{cases} P_{i,2a+b-r} & 0 \leqslant r \leqslant a, \\ Q_{i,r-a} & a \leqslant r \leqslant a+c, \\ P_{i+1,r-c} & a+c \leqslant r \leqslant 2a+b+c . \end{cases} $$ According to this definition, $P'_{i,0}$ is a point on the boundary of $R'_i$ obtained from $S$ by giving a combinatorial $r$-twist in the counterclockwise direction to the region $R'_i$. We glue $R_i$ with $R'_{i+s}$ by an orientation reversing homeomorphism matching the vertices of $R_i$ with the ones of $R'_{i+s}$ such that $P_{i,0} \in R_i$ is identified with $P'_{i+s,0} \in R'_{i+s}$. In this way we obtain a closed connected orientable pseudomanifold $\widehat M(a,b,c,n,r,s)$ with a finite number of singular points whose stars are cones over closed connected orientable surfaces. By removing the interior of a regular neighborhood of each singular point we get a compact connected orientable 3-manifold with (possibly empty) non-spherical boundary components, which is homeomorphic to the generalized Dunwoody manifold $M(a,b,c,n,r,s)$. As a particular case, an $n$-fold cyclic branched covering of a 2-bridge link/knot $\mathbf{b}(\alpha,\beta)$ is $M(\beta,\alpha-2\beta,1,n,2\beta+1,s)$ where $s=(-1)^{\beta}$ if $\mathbf{b}(\alpha,\beta)$ is a knot (i.e., $\alpha$ is odd) and $s\ne 0$ if $\mathbf{b}(\alpha,\beta)$ has two components (i.e., $\alpha$ is even) (see \cite{Mi,Mu}). \section{Upper and lower bounds} \label{sec4} In this section we calculate the modified complexity of a closed Dunwoody diagram in order to find upper bounds for the complexity of some families of generalized Dunwoody manifolds. 
For $n=1$, the generalized Dunwoody manifold is a lens space (including ${\bf S}^2\times {\bf S}^1$ and ${\bf S}^3$) in the closed case and a solid torus in the case with boundary. Since the complexity of these manifolds has already been studied (see \cite[Section 2.3.3]{M2}), we will always suppose $n>1$. \begin{theorem} \label{prop} Let $H=H(a,b,c,n,r,s)=(\Sigma_n,\mathcal{C}',\mathcal{C}'')$ be a closed Dunwoody diagram, and $d=2a+b+c$. For each $\gamma \in \mathcal{C}''$ define $n(\gamma)$ as the number of singular vertices contained in the cycle determined by $\gamma$ in $\Gamma(H)$. Then, with the notation of Remark~\ref{riduzione}, we have: \begin{equation*} \label{formula} \widetilde{c}(H) = nd - \max\left\{n(R)+\sum_{\gamma \in E(T)} n(\gamma)\mid T\in\mathcal{A}(\mathcal{C}''), R\in\mathcal{R}(H_T)\right\}, \end{equation*} where $E(T)$ is the edge set of the graph $T$ and $H_T$ is the element of $\textup{Rd}(H)$ obtained by removing from $\mathcal{C}''$ the curves corresponding to the edges of $T$. \end{theorem} \begin{proof} By construction the system $\mathcal{C}'$ is proper and reduced. The statement follows from the definition of modified complexity and Remark~\ref{riduzione}. \end{proof} This result allows us to find upper bounds for the modified complexity (and so for Matveev complexity) of generalized Dunwoody manifolds. In the following subsections we specialize the estimates to the cases of some important families. \subsection{Dunwoody manifolds} \begin{proposition} \label{prop8} Let $M=M(a,b,c,n,r,s)$ be a Dunwoody manifold. Then \begin{itemize} \item[\textup{(i)}] If $abc>0$ then $$ c(M) \leqslant \left\{\begin{array}{ll} n(2a+b+c) -\max(2n,6) & \textup{ if } r\ne -b,-b\pm 1,\\ n(2a+b+c)-\max(2n,5) & \textup{ if } r=-b\pm 1.\\ \end{array}\right. $$ \item[\textup{(ii)}] If $abc=0$ and $\min(a,b+c)=0$ then $$ c(M) \leqslant \left\{\begin{array}{ll} n(2a+b+c-4) & \textup{ if } r \ne -b,-b\pm 1,\\ n(2a+b+c-3) & \textup{ if } r=-b\pm 1.\\ \end{array}\right. 
$$ \item[\textup{(iii)}] If $abc=0$ and $\min(a,b+c)>0$ then $$ c(M) \leqslant \left\{\begin{array}{ll} n(2a+b+c-2) & \textup{ if } n > 3,\\ n(2a+c) -\max(2n,8-2k_0) & \textup{ if } n=2,3 , \, b=0 \textup{ and } s=0,\\ n(2a+b) -\max(2n,8-k_0-k_1) & \textup{ if } n=2, c=0 \textup{ and } s=0,\\ n(2a+b) -\max(2n,8-k_0) & \textup{ if } n=3, c=0 \textup{ and } s=0, \\ n(2a+b) -\max(2n,8-k_1) & \textup{ if } n=3, c=0 \textup{ and } s=1,\\ \end{array}\right. $$ where $k_i=\left\{\begin{array}{ll} 2 & \textup{ if } r=(-1)^i b,\\ 1 & \textup{ if } r=(-1)^i b\pm 1,\\ 0 & \textup{ otherwise}. \end{array}\right.$ \end{itemize} The cases not covered by the above formulas follow from the homeomorphisms $M(a,b,c,n,r,s)\cong M(a,c,b,n,d-r,n-s-1)$ (see Remark~\ref{equivalenza}). \end{proposition} \begin{proof} The graph $\Gamma(H)$ associated to a Heegaard diagram $H$ for $M(a,b,c,n,r,s)$ is obtained from the diagram depicted in Figure~\ref{dun} by performing the prescribed identifications. Since the system $\mathcal{C}''$ is proper and reduced, $G(\mathcal C'')$ is an $n$-circle bouquet, so $T$ is a single point and therefore $E(T)=\emptyset$. Hence by Theorem~\ref{prop} $$ \widetilde{c}(H)\leqslant n(2a+b+c)-\max\{n(R)\mid R\in\mathcal{R}(H)\}. $$ In case (i) the upper (and lower) region of the Dunwoody diagram has $2n$ vertices that are not identified together by the gluing, while for all the other regions it is clear that $n(R)\leqslant 6$. More precisely, the six vertices of hexagonal regions remain all distinct if $r\ne-b,-b\pm 1$, while two of them are identified if $r=-b\pm 1$. If $r=-b$ then $M(a,b,c,n,r,s)$ is not a Dunwoody manifold since the system $\mathcal{C}''$ is not reduced. In case (ii) the Dunwoody diagram has regions with $4n$ vertices. As before, they remain all distinct under the identifications if $r\ne-b,-b\pm 1$; they become $3n$ if $r=-b\pm 1$, while if $r=-b$ the associated manifold is not Dunwoody. 
In case (iii), if $n\geqslant 4$ then the upper (or lower) region has $2n$ vertices while all other regions have at most $8$ vertices. When $n=2$ or $n=3$ the computation is more tricky. We always have a region with eight vertices, but, as before, some of them can be identified together. Given such a maximal region, the number $k_i$ counts how many vertices of the circle $C'_i$ are identified with the ones of the circle $C''_{i+s}$. \end{proof} Proposition~\ref{prop8} allows us to obtain upper bounds for the complexity of cyclic branched coverings of 2-bridge knots (Corollary~\ref{Cor9}), of some families of torus knots (Corollary~\ref{Cor11}), and of a family of Seifert manifolds (Corollary~\ref{Cor12}). We recall that $\mathbf{b}(\alpha,\beta)$ is a 2-bridge knot if and only if $\alpha$ is odd. \begin{corollary} \label{Cor9} Let $C_n(\alpha,\beta)$ be the $n$-fold cyclic branched covering of the 2-bridge knot $\mathbf b(\alpha,\beta)$. Then for $n>2$ we have $$c(C_n(\alpha,\beta)) \leqslant n(\alpha-2).$$ \end{corollary} \begin{proof} Since $\mathbf b(\alpha,\alpha-\beta)$ is the mirror image of $\mathbf b(\alpha,\beta)$ we can suppose that $\beta$ is even. By \cite{GM} we have that $C_n(\alpha,\beta)=M((\alpha-1)/2,0,1,n,\beta/2,s)$, for a certain $s=s(\alpha,\beta)$, and the statement follows from Proposition~\ref{prop8}. \end{proof} This result improves the upper bound obtained in \cite{PV}, where a lower bound was obtained in the hyperbolic case (i.e., $\beta\ne 1, \alpha-1$) via volume estimates. Now we give a lower bound for the remaining cases. \begin{proposition} Let $n>2$. We have $$ c(C_n(\alpha,1))=c(C_n(\alpha,\alpha-1)) \geqslant \left\{\begin{array}{ll} 2 \log_5(\alpha/d)+d-2 & \textup{ if } n \textup{ is even,}\\ 2(d-1)\log_5 2-1 & \textup{ if } n \textup{ is odd,}\end{array}\right. $$ where $d=\gcd (\alpha,n)$. \end{proposition} \begin{proof} Obviously $C_n(\alpha,\alpha-1) \cong C_n(\alpha,1)$ since $\mathbf b(\alpha,\alpha-1)$ is the mirror image of $\mathbf b(\alpha,1)$. 
Moreover, $\mathbf b(\alpha,1)$ is the torus knot of type $(\alpha,2)$ and therefore $C_n(\alpha,1)$ is the Brieskorn manifold of type $(2,\alpha,n)$ \cite{Mil}. Its first homology group is $\mathbb{Z}^{d-1}\oplus \mathbb{Z}_{n/d}$ if $n$ is even, and $\mathbb{Z}_2^{d-1}$ if $n$ is odd (see \cite{Ra, Ca}). Since the manifold is irreducible (and different from $L_{3,1}$), the result follows by applying Theorem~2.6.2 of \cite{M2}. \end{proof} \begin{corollary} \label{Cor11} Let $T_n(k,h)$ be the $n$-fold cyclic branched covering of the torus knot of type $(k,h)$. Then we have \begin{enumerate} \item $c(T_n(k,h)) \leqslant n \, (2qk-2q-1)$ if $h=qk+1$ for $q>0$ and $k>1$; \item $c(T_n(k,h)) \leqslant n \, (2qk-2q-3)$ if $h=qk-1$ for $q, k > 1$; \item $c(T_n(k,h)) \leqslant n \, (2q_1(s-1)(qq_1+1)+2qq_1-1)$ if $k=sq_1+1$ and $h=qk+s$ for $q, q_1 > 0$ and $s>1$. \end{enumerate} \end{corollary} \begin{proof} By \cite{AGM} we have that $$ T_n(k,qk+1)=M(1,k-2,(k-1)(2q-1),n,k,k) $$ and $$ T_n(k,qk-1)=M(1,k-2,(k-1)(2q-1)-2,n,(k-1)(2q-3),k). $$ Moreover, by \cite{CM1}, there exists $s\in\mathbb{Z}$ such that $$ T_n(sq_1+1, (sq_1+1)q+s) = $$ $$ = M(q_1,q_1(2qq_1(s-1)+2q+s-2),1+(s-2)q_1,2q_1^2(s-1)+sq_1+1). $$ The result follows from Proposition~\ref{prop8}. \end{proof} We remark that an algorithm developed in \cite{CM1} allows us to obtain a presentation of each $n$-fold cyclic branched covering of a torus knot as a Dunwoody manifold and so to compute an upper bound for the complexity by using Proposition~\ref{prop8}. It is proved in \cite{GM2} that if $p > q > 0$ and $\gcd(p,q)=1$, $n>1$, $\ell > 0$, then the Seifert manifolds $$ S_n(p,q,\ell)= \{Oo,0\mid -1; \underbrace{(p,q), \ldots, (p,q)}_{n-\textup{times}},(\ell, \ell-1)\} $$ are Dunwoody manifolds that generalize the class of Neuwirth manifolds introduced in \cite{Ne} and corresponding to $p=2$ and $q=\ell=1$. Below we will give upper and lower estimates for the complexity of these Seifert manifolds. 
\begin{corollary} \label{Cor12} Suppose $\ell>1$ when $n=2$. The following estimate holds: $$ c(S_n(p,q,\ell)) \leqslant n(p + q (n\ell - 2) -2). $$ \end{corollary} \begin{proof} By results of \cite{GM2}, we have that $$ S_n(p,q,\ell) = M(q,q(n\ell-2),p-2q,n,p-q,0) $$ if $p \geqslant 2q$ and $$ S_n(p,q,\ell) = M(p-q,2q-p,q(n\ell-2),n,p-q,1) $$ otherwise. The result follows from Proposition~\ref{prop8}. \end{proof} \begin{proposition} The following estimate holds: $$ c(S_n(p,q,\ell)) \geqslant 2(n-1) \log_5 p + 2 \log_5((n-1) \ell q-p)-1. $$ \end{proposition} \begin{proof} Following \cite{Or}, a standard presentation of $\pi_1(S_n(p,q,\ell))$ is $$ \langle y_1,\ldots,y_n,y,h \mid [y_i,h], \, [y,h], \, y_i^p h^q, \, y^{\ell} h^{\ell-1}, \, y_1 \cdots y_n y h; i=1,\ldots,n \rangle. $$ By abelianization, we find that a presentation matrix for $H_1(S_n(p,q,\ell))$ as a $\mathbb{Z}$-module is the circulant matrix $B$ whose first row is given by the coefficients of $f(t) = - p + \ell q \sum_{i=1}^{n-1}t^i$. By the theory of circulant matrices \cite{BaM}, there exists a complex unitary matrix $F$, called the \textit{Fourier matrix}, such that $$ FBF^* = D = \textup{Diag}(f(\zeta_1), f(\zeta_2), \ldots, f(\zeta_n)), $$ where $\zeta_1,\zeta_2,\ldots,\zeta_n$ are the $n$-th roots of unity. So it follows that $$ \vert Tor(H_1(S_n(p,q,\ell)))\vert = p^{n-1}((n-1) \ell q - p). $$ Moreover, since $S_n(p,q,\ell)$ is irreducible (and different from $L_{3,1}$), the result follows from Theorem~2.6.2 of \cite{M2}. \end{proof} \subsection{Cyclic branched coverings of two-bridge links} We recall that $\mathbf{b}(\alpha,\beta)$ is a 2-component 2-bridge link if and only if $\alpha$ is even. In the next statement we deal with cyclic branched coverings of 2-component 2-bridge links of singly type (see \cite{MM}). \begin{proposition} Let $\mathbf{b}(\alpha,\beta)$ be a 2-bridge link with two components and denote by $m_1$ and $m_2$ the homology classes of the meridian loops of the two components. 
If $C_{n,s}(\alpha,\beta)$ is the $n$-fold cyclic branched covering of $\mathbf{b}(\alpha,\beta)$ with monodromy $\omega(m_1)=1$, $\omega(m_2)=s \in \mathbb{Z}_n - \{ 0 \}$ then $$ c(C_{n,s}(\alpha,\beta)) \leqslant n(\alpha-2)+\frac{n}{d}-\alpha, $$ where $d=\gcd(n,s)$. \end{proposition} \begin{proof} By results of \cite{Mi,Mu}, we have $$ C_{n,s}(\alpha,\beta)=M(\beta,\alpha-2\beta,1,n,2\beta+1,s), $$ so we can use Theorem~\ref{prop} to calculate $\widetilde{c}(H)$ in order to obtain an upper bound for $c(C_{n,s}(\alpha,\beta))$. The system of curves $\mathcal C''$ of the Dunwoody diagram $$ H=H(\beta,\alpha-2\beta,1,n,2\beta+1,s)=(\Sigma_n,\mathcal{C'},\mathcal{C''}) $$ is not reduced. Indeed, taking advantage of its symmetries, it is easy to see that it consists of $n+d$ curves. More precisely, $d$ curves (that we call of type A) arise from all $n$ ``radial'' arcs (i.e., the ones connecting the circles $C'_i$ and $C''_i$). Each of these curves intersects $\mathcal{C'}$ in $n/d$ points. The other $n$ curves (that we call of type B) arise from the remaining arcs and each of these curves intersects $\mathcal{C'}$ in $\alpha$ points. \begin{figure} \begin{center} \includegraphics*[totalheight=4cm]{Figure3.eps} \end{center} \caption{} \label{due-ponti} \end{figure} The graph $G(\mathcal{C}'')$ is the one depicted in Figure \ref{due-ponti}, and each of its maximal trees $T$ consists of $d-1$ edges corresponding to curves of type A and one edge corresponding to a curve of type B. So, the total number of vertices of $\Gamma(H)$ that belong to curves corresponding to the edges of $T$ is $\alpha+(d-1)n/d$. By removing the curves corresponding to $T$ from $\mathcal{C''}$, we obtain a reduced Heegaard diagram which has a region, namely the upper one in Figure \ref{dun}, with at least $2n$ vertices. Indeed, except for sporadic cases, $2n$ is the maximal number of vertices in a region. In any case, the statement follows from Theorem~\ref{prop}. 
\end{proof} An asymptotically equivalent estimate was obtained in~\cite{PV}, where a lower bound was obtained in the hyperbolic case (i.e., $\beta\ne 1, \alpha-1$) via volume arguments. We give a lower bound for the remaining cases. \begin{proposition} Let $(n,s)\ne(3,1),(3,2)$ if $\alpha=2$. We have $$c(C_{n,s}(\alpha,1)) = c(C_{n,s}(\alpha,\alpha-1)) \geqslant $$ $$ \geqslant 2 \log_5 \left(M\left(\frac{nm}{hD}\right)^m\left( \frac{\alpha M}{2D}\right)^{M-1}\right) + D - M - m $$ where $D=\gcd(n,\frac{\alpha}{2}(s-1))$, $M=\gcd(n,s-1)$, $h=\gcd(n,s)$ and $m=\gcd(D,h)$. \end{proposition} \begin{proof} Since $\mathbf b(\alpha,\alpha-1)$ is the mirror image of $\mathbf b(\alpha,1)$ then $C_{n,s}(\alpha,\alpha-1)\cong C_{n,s}(\alpha,1)$. Moreover, $\mathbf b(\alpha,1)$ is the 2-component torus link of type $(\alpha,2)$. So $C_{n,s}(\alpha,1)$ is a Seifert manifold and hence it is irreducible. The first homology group is computed in \cite{Mu}. So, the statement follows by applying Theorem 2.6.2 of \cite{M2}. \end{proof} \subsection{A class of cyclic branched coverings of theta graphs} Let $\Theta(\alpha,\beta)$ be the theta graph in ${\bf S}^3$ obtained from a 2-bridge knot of type $(\alpha,\beta)$ by adding a lower tunnel $\tau$ as in Figure~\ref{tunnel}. Without loss of generality we can assume that $$ \frac{\alpha}{\beta} = c_1 + \frac{\displaystyle 1}{\displaystyle c_2 + \, \cdots \, + \frac{\displaystyle 1}{\displaystyle c_{m-1} + \frac{\displaystyle 1}{\displaystyle c_{m}}}} , $$ where $m>0$ and $c_1, \ldots, c_m$ can be taken as even integers (see \cite[p.~26]{Kawa}). 
For $n>2$ and $s \in \mathbb Z_n - \{ 0, 1\}$, we denote by $\Theta_{n,s}(\alpha, \beta)$ the $n$-fold cyclic branched covering of $\Theta(\alpha,\beta)$ having monodromy $\omega(m_1)=1$, $\omega(m_2)=s$ and $\omega(m_3)=s-1$, where $m_3$ is a meridian loop around the tunnel and $m_1,m_2$ are meridian loops around the other two edges of the graph, according to the orientations depicted in Figure~\ref{tunnel}. By results of \cite{Mu2}, $\Theta_{n,s}(\alpha,\beta)$ is a pseudomanifold with two singular points whose links are both homeomorphic to a closed surface of genus $(1+n-\gcd(n,s)-\gcd(n,s-1))/2$. \begin{figure} \begin{center} \includegraphics*[totalheight=1.7cm]{Figure4.eps} \end{center} \caption{The theta graph $\Theta(\alpha,\beta)$.} \label{tunnel} \end{figure} \begin{proposition} Let $\widehat{\Theta}_{n,s}(\alpha,\beta)$ be the compact manifold obtained by removing regular neighborhoods of the two singular points of $\Theta_{n,s}(\alpha, \beta)$, then $$ c(\widehat{\Theta}_{n,s}(\alpha,\beta)) \leqslant n(\alpha-1). $$ \end{proposition} \begin{proof} It follows from a result of \cite{Mu2} that $\widehat{\Theta}_{n,s}(\alpha,\beta)$ is homeomorphic to the generalized Dunwoody manifold $M(\beta,\alpha-2\beta,1,n,2\beta-\alpha,s)$. Thus we can use Theorem~\ref{prop} to calculate $\widetilde{c}(H)$ in order to obtain an upper bound for $c(\widehat{\Theta}_{n,s}(\alpha,\beta))$. \begin{figure}[h] \begin{center} \includegraphics*[totalheight=3cm]{Figure5.eps} \end{center} \caption{} \label{theta} \end{figure} The system of curves $\mathcal C''$ of the Dunwoody diagram $$ H=H(\beta,\alpha-2\beta,1,n,2\beta+1,s) = (\Sigma_n,\mathcal{C'},\mathcal{C''}) $$ is reduced. Indeed, taking advantage of its symmetries, it is easy to see that it consists of $n'=\gcd(n,s)+\gcd(n,s-1)$ curves. More precisely, $\gcd(n,s)$ curves arise from the ``radial'' arcs, while the other $\gcd(n,s-1)$ curves arise from the remaining arcs. 
The graph $G(\mathcal{C}'')$ is the one depicted in Figure~\ref{theta}, where each vertex corresponds to a region of genus $(1+n-n')/2 > 0$. So $T$ consists of two isolated vertices and the system $\mathcal{C}''$ is already reduced. Since $\alpha$ is odd, we have $\alpha \ne 0$ and $\alpha - 2\beta \ne 0$. So, referring to Figure~\ref{dun}, the region with the maximum number of vertices is always the upper one, which has $2n$ vertices. The statement follows from Theorem~\ref{prop}. \end{proof}
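As a sanity check (with editor-chosen values, not taken from the source): for $n=5$ and $s=2$ the system $\mathcal{C}''$ consists of $n'=\gcd(5,2)+\gcd(5,1)=2$ curves, the link of each singular point of $\Theta_{5,2}(\alpha,\beta)$ is a closed surface of genus $(1+5-2)/2=2$, and the proposition gives $c(\widehat{\Theta}_{5,2}(\alpha,\beta)) \leqslant 5(\alpha-1)$.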
\section{INTRODUCTION} The ability to deal with dynamic changes of the environment is important for robots to achieve lifelong and robust autonomy. Map-based localization approaches can fail if the map differs substantially from the current environment, and planning is harder without knowledge of the dynamic environment. Several methods have been proposed to model the dynamic change of the environment in different aspects. The basic idea is maintaining a database that saves all different observations of the environment, with localization performed against all past maps\cite{Churchill2013a}. However, this amounts to data collection without a modeling process for the change of the environment, and as the database grows, computational efficiency and real-time localization degrade quickly. In some scenarios, the change of the environment is periodic, which inspires frequency-map approaches that model the dynamic environment as the sum of periodic functions\cite{Krajnik2017}. This is signal-level modeling and is able to predict the future of the environment. In general, the dynamic changes between neighboring scenes are related, such as traffic flow changing between intersections, and learning the relationships between them is another kind of modeling. Carlevaris-Bianco\cite{Carlevaris-bianco} applies a mutual-information-based method to learn the temporal observability relationships between scenes. This work addresses a new problem: can we make inferences by modeling the correlations of scene dynamics on history observations? As illustrated in Fig. 1, two scenes are adjacent, such as nearby gates of a campus, successive intersections on a single road, gates of a subway station, etc. Dynamic changes of the two scenes are correlated, being triggered by common events such as the working and teaching schedules of the campus, the period of the traffic signal, the train's stop, and so forth.
There have been history observations of both scenes, but they are not synchronized, as they could be measured by different robots, i.e. the observations are not necessarily in pairs. At a certain time, given the observation of one scene, we want to predict the dynamic state of the other. This research proposes a method of cross-scene prediction via modeling dynamic correlation using latent space shared auto-encoders. It is developed based on the assumption that the inherent correlation of scene dynamics can be represented by a shared latent space, and that a common latent state is reached if the observations of both scenes are taken at approximately the same time. A learning model is thus developed by connecting two auto-encoders through the latent space, and a prediction model is built by concatenating the encoder of the input scene with the decoder of the target one. Simulation datasets are generated, where two scenes are designed imitating the dynamic flows at two adjacent gates of Peking Univ., and a simulator is developed to obtain scene maps over many hours. The accuracy of cross-scene prediction is examined, and the performance under various conditions of scene correlation and pairwise observations is elaborated. The potential of the proposed method is demonstrated by comparing with conventional end-to-end methods and linear predictions. This paper is organized as follows. Related works are reviewed in Section~\ref{section:relatedWorks}. Section~\ref{section:methodology} explains the details of our method. Section~\ref{section:implementationDetails} and Section~\ref{section:simulation} give implementation details of the model and the simulation. Experimental results are presented in Section~\ref{section:experiments}. \section{RELATED WORKS}\label{section:relatedWorks} Several studies focus on how to model or predict the dynamic change of the environment.
Spectrum-analysis-based methods\cite{Krajnik2014a}\cite{Krajnik2017} discretize the environment into binary voxels indicating whether they are occupied, and model each voxel as the sum of a series of periodic signals obtained from the frequency spectra of observed data. Their ability to predict the environment improves localization accuracy\cite{Krajnik2014c}\cite{Fentanes2015a} and the efficiency of map updating\cite{Santos2016b}\cite{Duckett}. Some methods apply long-term and short-term memory in dynamic scene mapping to remove nonexistent features and add emerging features\cite{Dayoub2008b}\cite{Morris2014}\cite{Berrio2019}. The bag-of-words method has also been used to predict images between seasons\cite{Neubert2013}. A mutual-information-based method\cite{Carlevaris-bianco} predicts images of neighboring scenes by calculating the correlation of collected data. That work is similar to ours, but the essence is different: it only learns the temporal relationship between data, without considering what makes the data correlate or modeling that underlying cause. In this paper, the dynamics of a scene are caused by moving objects like pedestrians, and there are many studies about traffic behavior and scene modeling in the surveillance field. There has been a shift from detecting and tracking vehicle states and defining events of interest towards machine-learning-based approaches that automatically extract meaningful patterns\cite{Morris2013}. Similar trajectories are clustered to model the structure or paths of a scene\cite{Makris2005}\cite{Piciarelli2006}. Topic-model-based methods transfer concepts from natural language processing to traffic behavior, and LDA\cite{Kwak2011}\cite{Song2011}/HDA\cite{Wang2009}\cite{Haines2011} approaches achieve good results in scene modeling without accurate tracking. Scene modeling methods in the surveillance field are mainly used for abnormal event detection or scene semantic understanding\cite{Saleemi2009}, but few of them predict the future of the full scene.
Besides, they do not consider the correlation between neighboring scenes. We can borrow some methods from this field, but our conception of scene modeling is essentially different from theirs. This work makes an attempt to model the dynamic correlation between neighboring scenes on simulation datasets. In order to quantify how the correlation influences our algorithm, we generate datasets with different correlation coefficients between scenes. Training data with different proportions of pairwise observations are randomly sampled to simulate robots' data-acquisition situations in the real world. \section{METHODOLOGY}\label{section:methodology} \subsection{Problem definition} As illustrated in Fig.1, $a$ and $b$ are two neighboring scenes such as adjacent gates of a campus or consecutive intersections on a single road, where the scene dynamics are strongly correlated. Let $\mathbf{S}^a=\{<S_1^a,t_1^a>,...,<S_{n}^a,t_{n}^a>\}$ and $\mathbf{S}^b=\{<S_1^b,t_1^b>,...,<S_{m}^b,t_{m}^b>\}$ be the history observations of both scenes. $S_i^k$ denotes the $i$th observation of scene $k$ at time $t_i^k$, which can be a grid map that represents the dynamic state of the scene. The observations of both scenes are not necessarily pairwise in time, i.e. $\{t_1^a,...,t_{n}^a\} \neq \{t_1^b,...,t_{m}^b\}$, as they could be obtained independently by different robots. The purpose of this work is to learn a predictor $\mathbf{F}$ on $\mathbf{S}^a$ and $\mathbf{S}^b$ by addressing the correlation of scene dynamics, where, given the observation of one scene at the current time $t$, we predict the dynamic state of the other, e.g. \begin{equation} \hat{S}^b,t = \mathbf{F} (S^a,t) \end{equation} \begin{equation} \hat{S}^a,t = \mathbf{F} (S^b,t) \end{equation} The formulations can be easily extended to define problems involving three or more scenes.
\subsection{Modeling dynamic correlation using latent space shared auto-encoders} Assume that there exists a latent space $\mathbf{Z}$ that records the inherent correlation of the scene dynamics at $a$ and $b$. After encoding the observations $\mathbf{S}^a$ and $\mathbf{S}^b$ individually into the latent space $\mathbf{Z}$, \begin{equation} Z_i^a = \mathbf{E}_a (S_i^a) \end{equation} \begin{equation} Z_j^b = \mathbf{E}_b (S_j^b) \end{equation} $S_i^a$ and $S_j^b$ may share a common state, i.e. \begin{equation} \Delta Z = ||Z_i^a - Z_j^b||_2 \rightarrow 0 \nonumber \end{equation} if they are observations of an approximate time, i.e. \begin{equation} \Delta t = dis (t_i^a , t_j^b) \rightarrow 0 \nonumber \end{equation} where $dis$ is an operator of time difference that addresses the periodic nature of scene dynamics. \begin{figure}[tp] \centering \includegraphics[keepaspectratio=true,width=\linewidth]{figures/networkArchitecture.png} \caption{Modeling dynamic correlation using latent space shared auto-encoders.} \label{networkArchitecture} \end{figure} As illustrated in Fig.\ref{networkArchitecture}, this procedure is modeled by combining two auto-encoder structures. Given a pair of history observations of both scenes, $S_i^a$ and $S_j^b$, measured at $t_i^a$ and $t_j^b$ respectively, each scene map is processed individually through the corresponding encoding-decoding path of its scene.
\begin{eqnarray} Z_i^a = \mathbf{E}_a (S_i^a), \hat{S}_i^a = \mathbf{D}_a (Z_i^a)\\ Z_j^b = \mathbf{E}_b (S_j^b), \hat{S}_j^b = \mathbf{D}_b (Z_j^b) \end{eqnarray} Two reconstruction losses $\mathcal{L}_{recon}^a$ and $\mathcal{L}_{recon}^b$ are defined to evaluate the auto-encoder's accuracy for each scene, \begin{eqnarray} \mathcal{L}_{recon}^a = \left\|S_{i}^{a} - \hat{S}_{i}^{a}\right\|_2\\ \mathcal{L}_{recon}^b = \left\|S_{j}^{b} - \hat{S}_{j}^{b}\right\|_2 \end{eqnarray} and a correlation loss is defined to constrain the latent states to be equivalent when the scene dynamics are observed at approximately the same time. \begin{eqnarray} &&\mathcal{L}_Z = \exp(-c\cdot \Delta t) \cdot \left\| Z_{i}^{a} - Z_{j}^{b} \right\|_2\\ &&\Delta t = dis (t_i^a , t_j^b) \nonumber \end{eqnarray} Therefore, model learning is conducted by optimizing the following total loss \begin{equation} \mathop{\min}_{E_{a},E_{b},D_{a},D_{b}} \mathcal{L}_{recon}^a + \mathcal{L}_{recon}^b + \lambda\mathcal{L}_{Z} \label{optTarget} \end{equation} where $\lambda$ is a hyperparameter, set to 0.1 in this research. \begin{figure}[tp] \centering \includegraphics[keepaspectratio=true,width=\linewidth]{figures/predictionProcess.png} \caption{Cross-scene prediction by concatenating the encoder of the input scene with the decoder of the target one.} \label{predictionProcess} \end{figure} The prediction model $\mathbf{F}$ is built by concatenating the encoder of the input scene with the decoder of the target one, as illustrated in Fig.\ref{predictionProcess}.
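The loss terms above can be sketched numerically as follows. This is a toy sketch, not the paper's implementation: all names are hypothetical, a 720-minute daily period is an assumption, and the trained networks are replaced by plain arrays.

```python
import numpy as np

def periodic_time_dist(t1, t2, period=720):
    """dis(t1, t2): time difference respecting the periodic nature of
    scene dynamics (a 720-minute, i.e. 12-hour, period is assumed here)."""
    d = abs(t1 - t2) % period
    return min(d, period - d)

def total_loss(S_a, S_a_hat, S_b, S_b_hat, Z_a, Z_b, t_a, t_b,
               c=1.0, lam=0.1):
    """L_recon^a + L_recon^b + lambda * L_Z, mirroring the total loss above."""
    L_rec_a = np.linalg.norm(S_a - S_a_hat)   # ||S_i^a - S_i^a_hat||_2
    L_rec_b = np.linalg.norm(S_b - S_b_hat)   # ||S_j^b - S_j^b_hat||_2
    dt = periodic_time_dist(t_a, t_b)
    # Latent states are pulled together only when the two observations are
    # close in (periodic) time: the weight exp(-c * dt) decays with dt.
    L_z = np.exp(-c * dt) * np.linalg.norm(Z_a - Z_b)
    return L_rec_a + L_rec_b + lam * L_z
```

With perfect reconstruction and identical timestamps, the total loss reduces to $\lambda \|Z^a - Z^b\|_2$, which is what drives the two encoders toward a shared latent state.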
For example, at the current time $t$, given the observation $S^a$ of scene $a$, the dynamic state of scene $b$ can be predicted by \begin{eqnarray} &&\hat{S}^b,t = \mathbf{F}_{ab} (S^a,t) \\ &&\mathbf{F}_{ab}=D_{b}\circ E_{a} \end{eqnarray} and vice versa \begin{eqnarray} &&\hat{S}^a,t = \mathbf{F}_{ba} (S^b,t) \\ &&\mathbf{F}_{ba}=D_{a}\circ E_{b} \end{eqnarray} \section{IMPLEMENTATION DETAILS}\label{section:implementationDetails} \subsection{Scene map} A grid map is used to represent the dynamic state of a scene, where each pixel is a four-dimensional vector recording the number of dynamic objects passing through the location during a short time window $\tau$ along four discretized directions. In this research, the map has a dimension of $512 \times 512$ and a pixel size of 0.2 meters, $\tau=1$ min, and the four directions correspond to the East, West, South, and North in the world coordinate system. In this paper, pixels of a scene map are visualized by the most dominant flow crossing the pixel at the time, where red, blue, purple and green represent the four discretized directions to the west, east, south and north, respectively; the brighter the color, the higher the dynamic flow. \subsection{Network design} As illustrated in Fig.\ref{networkArch}, the network contains two autoencoders that have the same structure. We use the PyTorch framework to implement the autoencoders\cite{Chakravarty2019}, which are composed of convolutional, fully connected and upsample layers. \begin{figure}[hb] \centering \includegraphics[keepaspectratio=true,width=\linewidth]{figures/networkArch.png} \caption{The network structure of the autoencoders. } \label{networkArch} \end{figure} There is no pooling layer in the encoder part, and the input size is reduced only by convolutional layers with stride=2. For the decoder part, we use $\times 2$ upsampling with same-padding convolutional layers to extend the size of the input, instead of deconvolutional layers.
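The cross-scene predictor $\mathbf{F}_{ab}=D_b\circ E_a$ is plain function composition; a toy sketch follows, where `enc_a` and `dec_b` are hypothetical stand-ins for the trained encoder of scene $a$ and decoder of scene $b$, not the paper's networks.

```python
import numpy as np

def make_cross_scene_predictor(encoder_in, decoder_out):
    """Concatenate the encoder of the input scene with the decoder of the
    target scene: predict = decoder_out . encoder_in."""
    def predict(scene_map):
        z = encoder_in(scene_map)   # project into the shared latent space
        return decoder_out(z)       # decode in the target scene's space
    return predict

# Toy stand-ins: 'encode' a map to its mean flow, 'decode' by broadcasting.
enc_a = lambda S: np.array([S.mean()])
dec_b = lambda z: np.full((2, 2), z[0])

F_ab = make_cross_scene_predictor(enc_a, dec_b)
```

Swapping the two arguments yields $\mathbf{F}_{ba}$, so no extra training is needed to predict in the opposite direction once both auto-encoders are learned.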
Such a structure lets the network retain more information. In the encoders, the input size changes from $512 \times 512 \times 4$ to $64 \times 64 \times 8$ through 3 Conv2d layers, and is then reduced to 2 dimensions (the latent variable $Z$) by 2 FC layers. In the decoders, the size is extended from 2 to $64 \times 64 \times 8$ by 3 FC layers, and then restored to $512 \times 512 \times 4$ by 3 Upsample and Conv2d layers. \section{SIMULATION DATASETS}\label{section:simulation} A simulator is developed to generate simulation datasets for the experiments. The simulation pipeline is shown in Fig.\ref{simulatorPipeline}. Without loss of generality, we assume that each scene has its inherent structure of dynamic flows that connect a set of entrance and exit points of the scene, shown as red points in Fig.\ref{simulatorPipeline}(a), whereas the volume of each flow may change with time due to some underlying events. Therefore, time series of a set of control variables are designed, as illustrated in Fig.\ref{simulatorPipeline}(b), to guide the simulation of dynamic objects. In this research, two control variables are designed: the total people number $PN_t$ and the main flow direction $FD_t$. Two main flow directions are defined, where $FD_t$ is the percentage of people entering the campus, with the remaining $1-FD_t$ going out. At a time $t$, if the total people number in the frame is less than $PN_t$, new people are generated to make up the shortfall. \begin{figure}[] \centering \includegraphics[keepaspectratio=true,width=\linewidth]{figures/simulatorPipeline.png} \caption{Simulation pipeline.
(a) scene layout, (b) series of control variables of the dynamic flows, (c) a simulation frame of the dynamic objects (blue points), (d) a scene map computed on a frame sequence during a short time window.} \label{simulatorPipeline} \end{figure} Among the new people, $FD_t$ are generated at the entrance point of the gate following a randomly chosen flow entering the campus, while $1-FD_t$ are generated randomly at the start point of a flow going out of the campus. People flows are simulated following Helbing's work\cite{Helbing1995}. Each scene map is estimated on a sequence of simulation frames as \begin{equation} S_i, t_i = \mathbf{OGM} (f_{1,...,n}) \end{equation} \begin{figure}[] \centering \includegraphics[keepaspectratio=true,width=\linewidth]{figures/peopleNumCurves.png} \caption{Scene dynamic correlation simulated by designing correlative time series of control variables. Three patterns are designed with different correlation coefficients $\rho$ of the time series on the control variable $PN_t$.} \label{peopleNumCurves} \end{figure} In this research, simulation frames $f_{1,...,n}$ are generated at 10Hz. Each scene map represents the dynamic state during a short time window of $\tau=1$ min, so $n=600$ frames are used to estimate a $S_i$ at $t_i$. Two scenes are simulated by imitating the dynamic flows at two adjacent gates of Peking Univ., which are triggered by almost the same events, e.g. the working and teaching schedules of the campus. Similar scenarios can also be found at adjacent intersections on a single road, subway stations, gates of a stadium, etc. Therefore, correlated time series of control variables at both scenes are designed as shown in Fig. \ref{peopleNumCurves}. Three patterns are designed with correlation coefficients $\rho=$1.0, 0.84 and 0.5 between the $PN_t$ of the two scenes, representing strongly, moderately and weakly correlated scenes.
Here the people numbers $PN_t$ of the two scenes are designed to control the correlation of the two scenes, while the main flow direction $FD_t$ is kept the same for both. \begin{figure}[] \centering \includegraphics[keepaspectratio=true,width=\linewidth]{figures/alphaDataCollection.png} \caption{Datasets generation with various percentages of pairwise scene maps, $\alpha=$0\%, 31\%, 72\%, 100\%.} \label{fig:alphaDataCollection} \end{figure} Following each pattern of time series in Fig.\ref{peopleNumCurves}, a simulation is conducted from 8:00 to 20:00, where 720 scene maps are generated, one every minute, for both scenes. Part of the scene maps are selected to simulate different data-acquisition situations, with $\alpha=$0\%, 31\%, 72\%, 100\% of pairwise observations, as shown in Fig.\ref{fig:alphaDataCollection}. Therefore, a total of 12 datasets covering three correlation patterns and four percentages of pairwise observations are generated and used in the experiments. In each particular experiment, although the proposed and baseline methods are trained and tested on the same dataset, the number of scene maps used in training can differ due to each method's requirements on pairwise observations, as detailed in Tab. \ref{tab:datasets}.
\begin{table}[h] \centering \caption{The number of maps in training and testing of the cross-scene prediction models} \label{tab:datasets} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{\begin{tabular}[c]{@{}c@{}}\diagbox{Datasets}{Methods}\end{tabular}} & $\mathbf{F}_{our}$ & $\mathbf{F}_{E2E}$ & $\mathbf{F}_{E2E\Delta t}$ & $\mathbf{F}_{linear}$ \\ \hline \multirow{4}{*}{Training} & $\alpha=0$ & 72 & 0 & 72 & - \\ \cline{2-6} & $\alpha=31\%$ & 72 & 22 & 72 & - \\ \cline{2-6} & $\alpha=72\%$ & 72 & 51 & 72 & - \\ \cline{2-6} & $\alpha=100\%$ & 72 & 72 & 72 & - \\ \hline Testing & - & 36 & 36 & 36 & 36 \\ \hline \end{tabular} \end{table} \section{EXPERIMENTAL RESULTS}\label{section:experiments} \subsection{Evaluation measures} \subsubsection{Prediction error} Given two maps $S_1$ and $S_2$ of size $W\times H\times C$, the mean square error (MSE) is used to measure the difference between them \begin{equation} {\cal D}_s(S_1,S_2) = \frac{1}{W\times H\times C}\sum_{W,H,C}^{}(S_1-S_2) ^2 \end{equation} Subsequently, for a predicted map $\hat{S}$ with ground truth $S$, the prediction error ${\cal E}_{s}$ is defined as \begin{equation} {\cal E}_{s}(\hat{S})={\cal D}_s(\hat{S},S) \end{equation} \subsubsection{Dataset variance} A scene map describes the dynamic state of a scene, which is generated by taking statistics on the data frames during a short time window around the time, i.e. $n_f=600$ frames during $\tau=1$ min in this research. A scene map has inherent randomness due to uncontrollable scene dynamics and the method of time windowing; the variance of such randomness is an important reference for prediction accuracy. Given each series $\cal C$ of control variables, simulations are conducted $n$ times. At each sampled time $t$ corresponding to frame number $i_t$, a time window $[i_0,i_0+n_f]$ is randomly chosen $m$ times with $i_0 \in [i_t-n_f,i_t]$, and a scene map is subsequently generated on data frames $f_{i_0,...,i_0+n_f}$.
Therefore, $n*m$ scene maps $\{S_1,...S_{n*m}\}$ are generated, and the inherent variance of the scene map for $\cal C$ and $t$ is estimated below. \begin{equation} {\cal V}_s ({\cal C},t)= \frac{1}{n*m}\sum_{i=1}^{n*m}{\cal D}_s(S_i,\overline{S}) \end{equation} where $\overline{S} = \frac{1}{n*m}\sum_{i=1}^{n*m}S_i$ is the mean map. By repeating the above estimations at all sampled time points $t \in \Omega_t$ and control series $\cal C \in \Omega_C$, variances at the level of control series and datasets can also be found. \begin{eqnarray} {\cal V}_c ({\cal C})= \frac{1}{|\Omega_t|}\sum_{t \in \Omega_t} {\cal V}_s ({\cal C},t) \\ {\cal V}_d = \frac{1}{|\Omega_t \times \Omega_C|}\sum_{t,C \in \Omega_t \times \Omega_C} {\cal V}_s ({\cal C},t) \end{eqnarray} The dataset variance is the lower bound of the prediction error for any method; the closer the prediction error is to the dataset variance, the better the result. \subsection{Baseline methods}\label{section:baselineMethods} \begin{figure}[b] \centering \includegraphics[keepaspectratio=true,width=\linewidth]{figures/endtoend.png} \caption{The baseline methods. Top: conventional end-to-end prediction trained by pairwise maps only. Bottom: end-to-end prediction with compensation of time difference.} \label{endtoend} \end{figure} \subsubsection{$\mathbf{F}_{E2E}$ - Conventional end-to-end prediction} By using only the pairwise observations in the training datasets, a pair of conventional end-to-end predictors $\mathbf{F}_{E2E}$ can be trained as illustrated in Fig. \ref{endtoend} to predict $\hat{S}^b$ of scene $b$ from $S^a$ of $a$, and vice versa. \begin{eqnarray} &&\hat{S}^b, t=\mathbf{F}_{E2E,ab} (S^a,t) \\ &&\hat{S}^a, t=\mathbf{F}_{E2E,ba} (S^b,t) \end{eqnarray} \subsubsection{$\mathbf{F}_{E2E\Delta t}$ - End-to-end prediction with compensation of time difference} However, the observations, which could be measured by a multi-robot system, are not necessarily pairwise.
Therefore, the pairwise observations in the training datasets are limited when $\alpha=$31\%, and absent when $\alpha=$0\% for $\mathbf{F}_{E2E}$, as shown in TABLE \ref{tab:datasets}. A pair of conventional end-to-end predictors with compensation of time difference is \begin{eqnarray} &&\hat{S}^b, t+\Delta t=\mathbf{F}_{E2E\Delta t,ab} (S^a,t,\Delta t) \\ &&\hat{S}^a, t+\Delta t=\mathbf{F}_{E2E\Delta t,ba} (S^b,t,\Delta t) \end{eqnarray} \subsubsection{$\mathbf{F}_{linear}$ - Linear interpolation} A scene map can also be predicted by finding the two history observations of the nearest time, considering the periodic nature of scene dynamics, and conducting linear interpolation. Let $S_.(t_1)$ and $S_.(t_2)$ be the two history observations of the scene at times $t_1$ and $t_2$ respectively; a predicted map at time $t$ is estimated as below. \begin{equation} \hat{S}_., t= \frac{S_.(t_2) - S_.(t_1)}{t_2-t_1} \times (t-t_2) + S_.(t_2) \end{equation} \subsection{Prediction results} We evaluate our method's performance under various conditions of scene correlation ($\rho$) and pairwise observations ($\alpha$) in comparison with the baseline methods, and the quantitative results are shown in TABLE~\ref{predictionAccuracy}. Besides, a case study of prediction results is illustrated in Fig.\ref{predictionResults}. Given the input scene map, the ground-truth map of the other scene is compared with our prediction result, and error maps of ours and the baseline methods are shown in the right four columns. Finally, the study of per-map prediction error on single datasets is exhibited in Fig.\ref{fig:studyOnADataset}.
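The $\mathbf{F}_{linear}$ baseline above is a one-line computation; a minimal sketch (the function name is hypothetical), applied elementwise to scene maps stored as arrays:

```python
import numpy as np

def linear_predict(S1, t1, S2, t2, t):
    """F_linear: linearly inter-/extrapolate the scene map at time t from
    the two nearest history observations (S1, t1) and (S2, t2)."""
    slope = (S2 - S1) / (t2 - t1)       # per-pixel rate of change
    return S2 + slope * (t - t2)        # extrapolate from the later map
```

Because the slope is computed per pixel, the same call handles both interpolation ($t_1 < t < t_2$) and extrapolation ($t > t_2$).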
\begin{figure*}[h] \centering \includegraphics[keepaspectratio=true,width=\linewidth]{figures/predictionResults.png} \caption{Case study of prediction results, comparison with baseline methods at various conditions of scene correlation ($\rho$) and pairwise observations ($\alpha$).} \label{predictionResults} \end{figure*} \begin{table*}[h] \centering \caption{Average prediction error on each dataset corresponding to a pair of $\rho$ and $\alpha$} \label{predictionAccuracy} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\diagbox[width=7em]{Methods}{Datasets}\end{tabular}} & $\rho$ & \multicolumn{4}{c|}{1.00} & \multicolumn{4}{c|}{0.84} & \multicolumn{4}{c|}{0.50} \\ \cline{2-14} & \begin{tabular}[c]{@{}c@{}}$\alpha$\end{tabular} & 100\% & 72\% & 31\% & 0\% & 100\% & 72\% & 31\% & 0\% & 100\% & 72\% & 31\% & 0\% \\ \hline \multicolumn{2}{|c|}{Ours} & 0.549 & 0.581 & 0.558 & 0.569 & 0.685 & 0.698 & 0.686 & 0.693 & 1.037 & 0.974 & 1.080 & 0.925 \\ \hline \multicolumn{2}{|c|}{E2E} & 0.602 & 0.660 & 0.864 & - & 0.702 & 0.784 & 0.933 & - & 0.894 & 0.938 & 1.087 & - \\ \hline \multicolumn{2}{|c|}{E2E$_{\Delta t}$} & 0.716 & 0.731 & 0.745 & 0.700 & 0.810 & 0.819 & 0.820 & 0.743 & 0.897 & 0.876 & 0.905 & 0.897 \\ \hline \multicolumn{2}{|c|}{\begin{tabular}[c]{@{}c@{}}Linear\\ prediction\end{tabular}} & \multicolumn{4}{c|}{2.180} & \multicolumn{4}{c|}{2.162} & \multicolumn{4}{c|}{1.777} \\ \hline \multicolumn{2}{|c|}{\begin{tabular}[c]{@{}c@{}}Dataset\\ variance\end{tabular}} & \multicolumn{4}{c|}{0.305} & \multicolumn{4}{c|}{0.296} & \multicolumn{4}{c|}{0.248} \\ \hline \end{tabular} \end{table*} \subsubsection{Prediction accuracy v.s. scene correlation} We explore how the correlation $\rho$ between scenes influences our algorithm by experimenting on datasets with the same $\alpha$ but different $\rho$. We take the datasets with $\alpha=31\%$ as an example. A quantitative analysis is shown in Fig. \ref{coefficient}.
When there is high correlation ($\rho=1/0.84$) between scenes, ours (blue) is better than the other methods. The E2E and E2E$_{\Delta t}$ models have no prior knowledge of the scenes but only learn the data mapping between the two scenes, which is why they are worse than ours in the high-correlation situation. The prediction error of our method increases as the correlation $\rho$ decreases, because the core idea of our method is that the latent space of the two scenes is shared only when the dynamic changes of the scenes are correlated. When the correlation between scenes decreases, the performance drops. That is why, when the scenes are less correlated, i.e. $\rho=0.5$, the prediction error of ours is larger than that of the E2E/E2E$_{\Delta t}$ methods. The linear prediction model is always the worst. The same results hold for the other values of $\alpha$, as shown in TABLE \ref{predictionAccuracy}. A qualitative case study is illustrated in Fig.\ref{predictionResults}(a). Error map $A_1$ is almost white, which means our method achieves a good result in the high-correlation situation. From $A_1$ to $A_3$, with the decrease of the correlation $\rho$, the error maps become darker and darker, meaning worse and worse prediction results, and our result $A_3$ is even worse than $E2E_{\Delta t}$'s result $C_3$ in the low-correlation ($\rho=0.5$) situation. \begin{figure}[h] \centering \includegraphics[keepaspectratio=true,width=\linewidth]{figures/coefficient.png} \caption{Average prediction error changes with scene correlation level $\rho$, a result of $\alpha$=31\%. } \label{coefficient} \end{figure} \subsubsection{Prediction accuracy v.s. non-pairwise observation} We discuss the influence of the percentage $\alpha$ of pairwise data by experimenting on datasets with the same $\rho$ but different $\alpha$. We take the datasets with $\rho=0.84$ as an example. A quantitative analysis is illustrated in Fig. \ref{pairdDataPrecent}. The prediction error of our method is always the lowest for all percentages $\alpha$.
Ours (blue) and the E2E$_{\Delta t}$ (yellow) method are not sensitive to whether the scene maps are pairwise, because the time difference between scene maps is considered in both. The E2E method only processes pairwise data in the training step, so a decrease of pairwise data leads to a reduction of training data, causing the prediction error to rise. That is also the reason for the lack of results on 0\% paired data for the E2E method. The same results hold for the other correlations $\rho$ in TABLE \ref{predictionAccuracy}. A qualitative case study is shown in Fig. \ref{predictionResults}(b). The percentage $\alpha$ does not greatly influence our method, and the slight differences in prediction error lead to similar error maps for all methods. \begin{figure}[htp] \centering \includegraphics[keepaspectratio=true,width=\linewidth]{figures/pairdDataPrecent.png} \caption{Average prediction error changes with the percentage of pairwise observations $\alpha$, a result of $\rho$=0.84.} \label{pairdDataPrecent} \end{figure} \subsubsection{Study on single datasets} \begin{figure*}[] \centering \includegraphics[keepaspectratio=true,width=\linewidth]{figures/studyOnADataset.png} \caption{Per-map prediction error on each dataset corresponding to a pair of $\rho$ and $\alpha$.} \label{fig:studyOnADataset} \end{figure*} Similar results appear in the study of per-map prediction error on single datasets, shown in Fig. \ref{fig:studyOnADataset}. Our prediction error is close to the data variance and always lower than the baseline methods throughout the day when the correlation is strong ($\rho=1$), and the percentage $\alpha$ of pairwise scene maps rarely influences the performance of our method, as shown in Fig. \ref{fig:studyOnADataset}(a)\&Fig. \ref{fig:studyOnADataset}(b). But when there is less correlation ($\rho=0.5$) between scenes, ours can sometimes be worse than the baseline methods, as shown in Fig. \ref{fig:studyOnADataset}(c)\&Fig. \ref{fig:studyOnADataset}(d). Finally, Fig.
\ref{fig:studyOnADataset}(e) \& Fig. \ref{fig:studyOnADataset}(f) show the people number over one day, and the data variance changes with it. This is because, in our pedestrian simulator, every pedestrian's movement is influenced by nearby people; when there are many people in the scene, the randomness of pedestrians' movement increases, raising the data variance. \section{CONCLUSIONS} This paper is a first attempt to answer the question: can we make inferences by modeling the correlations of scene dynamics on history observations? We formulate the problem as follows: given a set of unsynchronized history observations of two scenes whose dynamic changes are correlated, learn a cross-scene predictor with which, given the observation of one scene, a robot can predict the dynamic state of the other online. The problem is solved by modeling the inherent correlation of scene dynamics using latent space shared auto-encoders, where a learning model is established by connecting two auto-encoders through the latent space, and a prediction model is built by concatenating the encoder of the input scene with the decoder of the target one. The method is examined through simulation, where the dynamic flows at two adjacent gates of a campus are imitated. The formulation is adaptable to other scenarios such as successive intersections on a single road, gates of subway stations, etc., where the dynamic changes are triggered by some common events. Cross-scene prediction accuracy is examined under various conditions of scene correlation and pairwise observations, and the results show that the proposed method solves the problem better than the conventional end-to-end and linear prediction methods. Future work will address real-data collection and processing, and the inference of dynamic correlations among more adjacent scenes will also be studied. \bibliographystyle{IEEEtran}
Capitol Hill (CNSNews.com) - Rep. Ed Markey (D-Mass.) on Thursday appeared in Washington, D.C., to promote a plug-in hybrid car that supporters claim can travel 150 miles or more on a gallon of gasoline and to answer critics who accuse Congress of posturing on global warming issues. Appearing with Markey was actor Rob Lowe, who told reporters and fans that "clean energy is something I've been interested in for a while." He said he had decided to take a public stand on the issue, because "I've been watching as the climate changes." Lowe, a former star on the long-running NBC political drama "West Wing," took Markey and two staffers on a short ride in a modified Toyota Prius, which Markey later described as "incredible." Lowe said he plans to have a hybrid modified with plug-in technology so he can drive it -- and show it off -- around his home in California. Modifications performed by Massachusetts-based A123 Systems added a battery that can be charged overnight through a normal electrical outlet, allowing the car to run on more electricity than a normal hybrid. There are currently about one dozen operating plug-in hybrids on the road, according to A123 Systems President and CEO David Vieau. The modifications cost nearly $10,000, but Vieau said the cost would drop as the modification becomes more popular. Critics accuse federal lawmakers of declaring their interest in environmental and global warming issues without following through on promises to pass legislation aimed at addressing the problems. In a statement Wednesday, National Center for Policy Analysis senior fellow H. Sterling Burnett said that "when it comes to global warming, Congress has been long on talk but short on action. The special climate change committee has been a bust, yielding more press statements and photo ops than concrete proposals." Burnett praised House Energy and Commerce Committee Chairman John Dingell (D-Mich.)
for threatening to introduce legislation that would impose a tax on carbon emissions. Introducing such a bill, Burnett said, would "lay bare the hypocrisy in Congress." Because consumers would not be likely to support candidates who vote to increase the cost of energy, "it is unlikely a new energy tax will get a majority of support in this, or any, Congress, despite their rhetoric otherwise," he said. "The most important point he [Dingell] is making," Burnett wrote, "is that there is no such thing as a free lunch in fighting climate change. Regardless of whether we attempt to slow warming through new energy taxes or a cap-and-trade approach, the costs will be quite high, will harm the poor the most, and, by most estimates, will do little or nothing to prevent global warming." Markey took the criticism in stride. "The criticism is accurate as far as the past is concerned," he told Cybercast News Service. But he pledged that the energy legislation the House will focus on this month will change that. He promised "substantial new incentives for these new future-oriented automotive technologies" and other measures to address environmental issues. Markey said that in spite of the high initial costs of a modification such as the $10,000 plug-in technology, he expects Americans will eventually catch on to the ideas and accept them. "It will take a couple of years to get people educated about the technology and then you'll see it adopted," he said.
Branchville may refer to the following places in the United States: Branchville (Alabama), Branchville (New Jersey), Branchville (South Carolina), Branchville (Virginia).
Apple to add 20K jobs, pay $38B US tax bill to repatriate cash FOX NEWS - Apple (AAPL) on Wednesday said it will create 20,000 new jobs and establish a new U.S.-based campus as part of $350 billion in new "direct contribution" to the economy. Apple also said it expects repatriation tax payments of roughly $38 billion due to changes enacted by the recently-passed GOP tax reform bill. The new tax code calls for a 15.5% repatriation tax rate. The company listed $252.3 billion in overseas cash in its most recent filing with the SEC. "The company plans to establish an Apple campus in a new location, which will initially house technical support for customers. The location of this new facility will be announced later in the year," the company said in a statement. The Cupertino, California-based tech giant said its investments will focus on job creation, spending with U.S. suppliers and an expansion of the App Store, Apple's digital content hub. Apple said the $350 billion over the next five years does not include the sale of Apple products or tax payments. Apple's announcement came weeks after the passage of a GOP tax bill that slashed the corporate tax rate to 21% from 35%. President Donald Trump told the Wall Street Journal last July that Apple CEO Tim Cook had committed to building three manufacturing plants in the U.S. In addition to its new campus location, Apple said it would build a new facility in Reno, Nevada. It's unclear if the company plans to build the manufacturing facilities mentioned by Trump.
Q: Unable to build ODK Aggregate

I am unable to build ODK Aggregate. I am getting this error:

[ERROR] Failed to execute goal on project aggregate-gae: Could not resolve dependencies for project org.opendatakit:aggregate-gae:war:1.0: The following artifacts could not be resolved: org.opendatakit:aggregate-src:jar:latest, org.opendatakit:odk-gae-it-settings:jar:latest, com.googlecode.gwt-google-maps-v3:gwt-google-maps-v3:jar:snapshot, com.google.gwt.google-apis:gwt-visualization:jar:1.1.1, org.javarosa:javarosa-libraries:jar:2013-09-30, org.opendatakit:odk-httpclient-gae:jar:1.1, org.opendatakit:odk-tomcatutil:jar:1.0, org.openid4java:openid4java-nodeps:jar:0.9.6.662.odk-SNAPSHOT, org.springframework.security:spring-security-config:jar:3.1.3.odk-SNAPSHOT, org.springframework.security:spring-security-core:jar:3.1.3.odk-SNAPSHOT, org.springframework.security:spring-security-crypto:jar:3.1.3.odk-SNAPSHOT, org.springframework.security:spring-security-openid:jar:3.1.3.odk-SNAPSHOT, org.springframework.security:spring-security-web:jar:3.1.3.odk-SNAPSHOT: Failure to find org.opendatakit:aggregate-src:jar:latest in http://repo1.maven.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced -> [Help 1]

Any help is appreciated.

A: This is because you have not installed the required jar files. Please read the instructions at http://aggregate.opendatakit.googlecode.com/hg/CONFIGURE.txt line by line. It might also be wise to run Maven from the terminal.
Ronaldo Luis Peña Vargas (born 10 March 1997 in Acarigua, Venezuela) is a Venezuelan footballer who plays as a forward.

Club career
Peña is a product of the Caracas youth academy. On 23 February 2014, he made his debut in the Venezuelan Primera División in a match against Deportivo La Guaira, scoring his first goal for Caracas in the same game. In the summer of that year, to gain playing time, Peña joined Portuguesa on loan, debuting for his new club on 14 September in a match against Tucanes. In the summer of 2015, Peña moved to the Spanish club Las Palmas on a two-year loan. Owing to strong competition, he spent half a year without playing time, and in early 2016 he was moved to the reserve team, Las Palmas Atlético. In the summer of 2017, Peña joined the Portuguese club Moreirense on loan. On 6 August, he made his debut in the Primeira Liga in a match against Vitória de Setúbal, scoring his first goal for Moreirense in the same game. On 5 July 2018, Peña joined the MLS club Houston Dynamo. He made his debut in the American league on 4 August in a match against Sporting Kansas City, and on 23 August he scored his first goal for Houston Dynamo in the Texas derby against Dallas. At the end of the 2020 season, Houston Dynamo did not renew Peña's contract.

International career
In 2013, Peña won silver medals with the Venezuela under-17 team at the South American U-17 Championship in Argentina. At the tournament, he played in matches against Ecuador, Colombia, Brazil, Peru, and Uruguay, as well as twice each against Argentina and Paraguay, scoring a goal against Argentina. In 2017, Peña took part in the South American U-20 Championship in Ecuador, playing in matches against Peru, Bolivia, Colombia, Ecuador, and Brazil, as well as twice each against Uruguay and Argentina. The same year, Peña won silver medals at the FIFA U-20 World Cup in South Korea, playing in matches against Germany, Vanuatu, Mexico, Japan, Uruguay, and England, and scoring a goal against Germany.

Honours
International
Venezuela U-17: South American U-17 Championship runner-up, 2013
Venezuela U-20: FIFA U-20 World Cup runner-up, 2017; South American U-20 Championship runner-up, 2017
<?php

namespace BootPress\Admin\Pages;

use BootPress\Admin\Files;
use BootPress\Admin\Component as Admin;

class Folders
{
    public static function setup($auth, $path)
    {
        return ($auth->isAdmin(1)) ? Admin::$bp->icon('folder', 'fa').' Folders' : false;
    }

    public static function page()
    {
        extract(Admin::params('bp', 'page'));
        $html = '';
        $media = '';
        $dir = $page->dir('folders');
        if (!is_dir($dir)) {
            mkdir($dir, 0755, true);
        }
        $form = $bp->form('admin_folders');
        if ($edit = $page->get('edit')) {
            if (!is_dir($dir.$edit)) {
                $page->eject($page->url('delete', '', '?'));
            }
        }
        if ($page->get('delete')) {
            if ($edit) {
                list($dirs, $files) = Files::iterate($dir.$edit);
                foreach ($files as $file) {
                    unlink($dir.$edit.$file);
                }
                if (empty($dirs)) {
                    rmdir($dir.$edit);
                }
            }
            $page->eject($page->url('delete', '', '?'));
        }
        if ($edit) {
            $form->values['path'] = $page->get('folder');
            $index = Files::textarea($form, 'index', $dir.$edit.'/index.php');
            $media = Files::view($dir.$edit, array('exclude' => 'index.php'));
            if ($page->get('image')) {
                return Admin::box('default', array(
                    'head with-border' => $bp->icon('image', 'fa').' Image',
                    'body' => $media,
                ));
            }
        }
        $folders = array();
        list($dirs) = Files::iterate($dir, 'recursive');
        foreach ($dirs as $folder) {
            if (is_file($dir.$folder.'/index.php')) {
                $folders[$folder] = $folder;
            }
        }
        if (!empty($folders)) {
            $form->menu('edit', $folders, $edit ? null : '&nbsp;');
            if ($edit) {
                $form->values['folder'] = $edit;
                $form->values['edit'] = $edit;
            }
        }
        $form->validator->set(array(
            'folder' => 'required',
            'edit' => '',
        ));
        if ($vars = $form->validator->certified()) {
            $folder = Files::format($vars['folder'], 'slashes');
            if (!empty($folder)) {
                if ($edit) { // renaming
                    if ($edit != $folder) {
                        if (is_file($dir.$folder.'/index.php')) {
                            $form->validator->errors['folder'] = 'Sorry, this folder has already been taken.';
                        } else {
                            $path = $dir.$edit.'/';
                            $rename = $dir.$folder.'/';
                            if (!is_dir($rename)) {
                                mkdir($rename, 0755, true);
                            }
                            list($dirs, $files) = Files::iterate($path);
                            foreach ($files as $file) {
                                rename($path.$file, $rename.$file);
                            }
                            if (empty($dirs) && strpos($rename, $path) === false) {
                                rmdir($path);
                            }
                            $form->eject = $page->url('add', $form->eject, 'edit', $folder);
                        }
                    }
                } else { // creating
                    if (!is_dir($dir.$folder)) {
                        mkdir($dir.$folder, 0755, true);
                    }
                    if (!is_file($dir.$folder.'/index.php')) {
                        file_put_contents($dir.$folder.'/index.php', '');
                    }
                    $form->eject = $page->url('add', $form->eject, 'edit', $folder);
                }
            }
            if (empty($form->validator->errors)) {
                $page->eject($form->eject);
            }
        }

        // Open Form
        $html .= $form->header();

        // Link to Folder
        if ($edit) {
            $delete = $bp->button('sm danger delete pull-right', $bp->icon('trash'), array('title' => 'Click to delete this folder', 'style' => 'margin-left:20px;'));
            $html .= '<p class="lead"><a href="'.$page->url('base', $edit).'" target="_blank">'.$page->url('base', $edit).' '.$bp->icon('new-window').'</a> '.$delete.'</p><br>';
            $page->jquery('
                $(".delete").click(function(){
                    bootbox.confirm({
                        size: "large",
                        backdrop: false,
                        message: "Are you sure you would like to delete this folder?",
                        callback: function (result) {
                            if (result) {
                                window.location = "'.str_replace('&amp;', '&', $page->url('add', '', 'delete', 'folder')).'";
                            }
                        }
                    });
                });
            ');
        }

        // Select a Folder to 'edit'
        if (!empty($folders)) {
            $html .= $form->field(array(
                ($edit ? 'Folder' : 'Select'),
                'Select a folder that you would like to edit.',
            ), $form->select('edit'));
        }

        // Edit or Create a 'folder'
        $prepend = $page->url('base');
        $append = array();
        if (!empty($page->url['suffix'])) {
            $append[] = $page->url['suffix'];
        }
        $append[] = $bp->button('primary', 'Submit', array('type' => 'submit', 'data-loading-text' => 'Submitting...'));
        if ($edit) {
            $html .= $form->field(array(
                'Save As',
                'Use only lowercase letters, dashes (-), and slashes (/).',
            ), $form->group($prepend, $append, $form->text('folder')));
        } else {
            $html .= $form->field(array(
                'Create',
                'Use only lowercase letters, dashes (-), and slashes (/). The folder you create here will be directly accessible at: '.$page->url('base').'[folder]/... You, of course, will have to deal with the dot dot dot\'s. Alternatively, you can create any url rule structure that you like in the .htaccess file, and direct it to the main index.php file with the additional parameter: ?page=[folder]',
            ), $form->group($prepend, $append, $form->text('folder')));
        }

        // 'index.php' wyciwyg
        if ($edit) {
            $html .= $form->field(array(
                'index.php',
                'This is the main file where you can manage the content of your folder.',
            ), $index);
        }

        // Close Form
        $html .= $form->close();

        // jQuery
        $page->jquery('
            $("#'.$form->validator->id('edit').'").change(function(){
                window.location = "'.$page->url('delete', '', '?').'?edit=" + $(this).val();
            });
        ');

        return Admin::box('default', array(
            'head with-border' => array(
                $bp->icon('folder', 'fa').' Folders',
                $bp->button('md link', 'Documentation '.$bp->icon('new-window'), array(
                    'href' => 'https://www.bootpress.org/docs/folders/',
                    'target' => '_blank',
                )),
            ),
            'body' => $html.$media,
        ));
    }
}
{"url":"https:\/\/gitcode.net\/paddlepaddle\/Quantum\/-\/blame\/master\/tutorial\/quantum_simulation\/ClassicalShadow_Application_EN.ipynb?from_codechina=yes","text":"# PaddlePaddle \/ Quantum \u5927\u7ea6 1 \u5c0f\u65f6 \u524d\u540c\u6b65\u6210\u529f\n\n Q Quleaf \u5df2\u63d0\u4ea4 8\u6708 20, 2021 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 
452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 { \"cells\": [ { \"cell_type\": \"markdown\", \"id\": \"4f560a55\", \"metadata\": {}, \"source\": [ \"# Estimation of Quantum State Properties Based on the Classical Shadow\" ] }, { \"cell_type\": \"markdown\", \"id\": \"a894cbba\", \"metadata\": {}, \"source\": [ \" Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. \" ] }, { \"cell_type\": \"markdown\", \"id\": \"e4cdaa03\", \"metadata\": {}, \"source\": [ \"## Overview\" ] }, { \"cell_type\": \"markdown\", \"id\": \"3b3ea48a\", \"metadata\": {}, \"source\": [ \"In [The Classical Shadow of Unknown Quantum States](.\/ClassicalShadow_Intro_EN.ipynb), we introduced some theoretical knowledge of the classical shadow and showed how to construct the classical shadow of a quantum state $\\\\rho$. According to the theoretical derivation of the classical shadow, it is very suitable for estimating the linear properties of quantum states. 
At present, its basic applications are: quantum fidelity estimation, entanglement verification, the estimation of local observables' expectation values, and the estimation of global observables' expectation values [1]. Among them, the estimation of observables' expectation values widely appears in current quantum algorithms, such as [Variational Quantum Eigensolver](.\/VQE_EN.ipynb) (VQE) for estimating the ground state energy of complex molecular Hamiltonian. Next, we will focus on the estimation of observables' expectation values algorithms based on the classical shadow and show how to use the shadow function in Paddle Quantum to do so.\" ] }, { \"cell_type\": \"markdown\", \"id\": \"db8b954f\", \"metadata\": {}, \"source\": [ \"## Estimation of Observables' Expectation Values\" ] }, { \"cell_type\": \"markdown\", \"id\": \"e2869ea5\", \"metadata\": {}, \"source\": [ \"### Problem description\" ] }, { \"cell_type\": \"markdown\", \"id\": \"a6adea50\", \"metadata\": {}, \"source\": [ \"In the field of quantum chemistry, one of the core tasks is to solve for ground state energy of the Hamiltonian $\\\\hat{H}$ on a closed physical system on a quantum scale and its corresponding ground state. The main method is to prepare a parameterized trial wave function $|\\\\Psi(\\\\theta)\\\\rangle$. Then, the parameter $\\\\theta$ is being continuously adjusted and optimized to minimize the expected value $\\\\langle\\\\Psi(\\\\theta)|\\\\hat{H}| \\\\Psi(\\\\theta)\\\\rangle$ using classical optimization algorithms (such as gradient descent method). The principle of this scheme is based on Rayleigh-Ritz variational principle,\\n\", \"\\n\", \"$$\\n\", \"E_{0}=\\\\min _{\\\\theta}\\\\langle\\\\Psi(\\\\theta)|\\\\hat{H}| \\\\Psi(\\\\theta)\\\\rangle \\\\tag{1}\\n\", \"$$\\n\", \"\\n\", \"where $E_{0}$ represents the ground state energy of the system. 
Numerically, the problem can be understood as solving for the minimum eigenvalue $\\\\lambda_{\\\\min }$ of a discretized Hamiltonian $\\\\hat{H}$ (Hermitian matrix) and its corresponding eigenvector $\\\\left|\\\\Psi_{0}\\\\right\\\\rangle$. Where the classical shadow comes into play is to calculate the $\\\\langle\\\\Psi(\\\\theta)|\\\\hat{H}| \\\\Psi(\\\\theta)\\\\rangle = \\\\operatorname{tr}(\\\\hat{H}\\\\rho )$ part in every optimization iteration where $\\\\rho = | \\\\Psi(\\\\theta)\\\\rangle\\\\langle\\\\Psi(\\\\theta)|$.\\n\", \"\\n\", \"The problem is then transformed into: for a quantum state $\\\\rho$ of $n$ qubits and an observable (Hamiltonian) $\\\\hat{H}$ that can be written as a linear combination of a set of Pauli operators $\\\\{I, X, Y, Z\\\\}^{\\\\otimes n}$,\\n\", \"\\n\", \"$$\\n\", \"\\\\hat{H}=\\\\sum_{Q \\\\in\\\\{I, X, Y, Z\\\\} ^{\\\\otimes n}} \\\\alpha_{Q} Q \\\\quad \\\\text{where} \\\\quad \\\\alpha_{Q} \\\\in \\\\mathbb{R}, \\\\tag{2}\\n\", \"$$\\n\", \"\\n\", \"how to estimate the observable's expectation value $\\\\operatorname{tr}(\\\\hat{H}\\\\rho )$ using the classical shadow?\\n\", \"\\n\", \"\\n\", \"The most intuitive method is to use each term of the Hamiltonian as a measurement base, and the corresponding Pauli measurements are made for the quantum state $\\\\rho$. A certain number of repetitions are made for each of the measurements, and then the measurement results are being processed to obtain the estimation value. Here we refer to this method as the item-by-item measurement method.\\n\", \"\\n\", \"Readers can see that when both $n$ and the number of terms of the Hamiltonian $\\\\hat{H}$ are small, we can get $\\\\operatorname{tr}(\\\\hat{H}\\\\rho )$ through the item-by-item measurement method. However, when $n$ is large and the number of terms of $\\\\hat{H}$ increases, the cost of the this method will increase significantly. 
The classical-shadow-based method that will be introduced can obtain the same precision estimation of $\\\\operatorname{tr}(\\\\hat{H}\\\\rho )$ with less cost.\" ] }, { \"cell_type\": \"markdown\", \"id\": \"ff8af318\", \"metadata\": {}, \"source\": [ \"### The improved algorithm based on the classical shadow\" ] }, { \"cell_type\": \"markdown\", \"id\": \"a3db3328\", \"metadata\": {}, \"source\": [ \"When constructing the classical shadow, a critical step is to uniformly and randomly sample the unitary transformation from the fixed set. In [The Classical Shadow of Unknown Quantum States](.\/ClassicalShadow_Intro_EN.ipynb), we showed the case when the selected set is the Clifford group. When the selected set is a Clifford group on single qubit, the process of sampling and measurement is equivalent to making Pauli measurements on the quantum states. The classical shadow algorithm using random Pauli measurements (CS) is provided in Paddle Quantum. Briefly, in the CS algorithm, we repeatedly choose a Pauli basis for each qubit uniformly at random to measure the quantum state $\\\\rho$ and estimate the observable's expectation value based on the measurement results, and the reader may refer to [1-2] to learn the details of the principle. Further, when the Pauli measurement base is no longer selected uniformly and randomly, improved algorithms are proposed [2-3]. Relevant algorithm functions are also provided in Paddle Quantum: Locally-biased classical shadows (LBCS) [2], Adaptive Pauli shadows (APS) [3]. 
Readers can refer to [1-3] to learn these algorithms in detail.\" ] }, { \"cell_type\": \"markdown\", \"id\": \"1b772de7\", \"metadata\": {}, \"source\": [ \"## Paddle Quantum Implementation\" ] }, { \"cell_type\": \"markdown\", \"id\": \"e094e752\", \"metadata\": {}, \"source\": [ \"In Paddle Quantum, we provide the shadow function, which mainly includes two parts to use the above three algorithms to estimate the expectation value of the observable and obtain the classical shadow data of unknown quantum states. Next, we will show how to implement finding the ground state energy estimation of $H_{2}$ and $LiH$ based on the shadow function in Paddle Quantum.\" ] }, { \"cell_type\": \"code\", \"execution_count\": 3, \"id\": \"efc769b4\", \"metadata\": { \"scrolled\": false }, \"outputs\": [], \"source\": [ \"# Import all the dependencies\\n\", \"import numpy as np\\n\", \"from numpy import pi as PI\\n\", \"import paddle\\n\", \"from paddle_quantum.circuit import UAnsatz\\n\", \"from paddle_quantum.VQE.chemistrysub import H2_generator\\n\", \"from paddle_quantum.utils import Hamiltonian\" ] }, { \"cell_type\": \"markdown\", \"id\": \"a58e8871\", \"metadata\": {}, \"source\": [ \"### Estimate the ground state energy of hydrogen molecule ($H_{ 2}$)\" ] }, { \"cell_type\": \"markdown\", \"id\": \"ebda5aeb\", \"metadata\": {}, \"source\": [ \"Import the Hamiltonian of $H_2$ with 4 qubits (for details, please refer to [Variational Quantum Eigensolver](.\/VQE_CN.ipynb) tutorial to obtain $H_2$ molecular Hamiltonian).\" ] }, { \"cell_type\": \"code\", \"execution_count\": 2, \"id\": \"70b8fdef\", \"metadata\": { \"scrolled\": false }, \"outputs\": [ { \"name\": \"stdout\", \"output_type\": \"stream\", \"text\": [ \"H2 hamiltonian = -0.04207897647782277 I0\\n\", \"0.17771287465139946 Z0\\n\", \"0.1777128746513994 Z1\\n\", \"-0.2427428051314046 Z2\\n\", \"-0.24274280513140462 Z3\\n\", \"0.17059738328801055 Z0, Z1\\n\", \"0.04475014401535163 Y0, X1, X2, Y3\\n\", 
\"-0.04475014401535163 Y0, Y1, X2, X3\\n\", \"-0.04475014401535163 X0, X1, Y2, Y3\\n\", \"0.04475014401535163 X0, Y1, Y2, X3\\n\", \"0.12293305056183797 Z0, Z2\\n\", \"0.1676831945771896 Z0, Z3\\n\", \"0.1676831945771896 Z1, Z2\\n\", \"0.12293305056183797 Z1, Z3\\n\", \"0.1762764080431959 Z2, Z3\\n\" ] } ], \"source\": [ \"# Set up the Hamiltonian of hydrogen molecule\\n\", \"H2_pauli_str, H2_qubit = H2_generator()\\n\", \"# Construct a Hamiltonian class instance using the H2_pauli_str\\n\", \"H2_hamiltonian = Hamiltonian(H2_pauli_str)\\n\", \"print('H2 hamiltonian = ', H2_hamiltonian)\" ] }, { \"cell_type\": \"markdown\", \"id\": \"0d6a6532\", \"metadata\": {}, \"source\": [ \"To show how to estimate the ground state energy using the classical-shadow-based algorithm, we first get the estimated ground state of $H_{2}$ using the VQE algorithm in Paddle Quantum.\" ] }, { \"cell_type\": \"code\", \"execution_count\": 6, \"id\": \"1bc042c1\", \"metadata\": {}, \"outputs\": [], \"source\": [ \"def U_theta(theta, hamiltonian, N, D):\\n\", \" \\\"\\\"\\\"\\n\", \" Quantum Neural Network\\n\", \" \\\"\\\"\\\"\\n\", \" \\n\", \" # Initialize the quantum neural network according to the number of qubits N\\n\", \" cir = UAnsatz(N)\\n\", \" \\n\", \" # Built-in {R_y + CNOT} circuit template\\n\", \" cir.real_entangled_layer(theta[:D], D)\\n\", \" \\n\", \" # Lay R_y gates in the last row\\n\", \" for i in range(N):\\n\", \" cir.ry(theta=theta[D][i][0], which_qubit=i)\\n\", \" \\n\", \" # The quantum neural network acts on the default initial state |0000>\\n\", \" cir.run_state_vector()\\n\", \" \\n\", \" # Calculate the expected value of a given Hamiltonian\\n\", \" expectation_val = cir.expecval(hamiltonian)\\n\", \"\\n\", \" return expectation_val, cir\\n\", \"\\n\", \"class StateNet(paddle.nn.Layer):\\n\", \" \\\"\\\"\\\"\\n\", \" Construct the model net\\n\", \" \\\"\\\"\\\"\\n\", \"\\n\", \" def __init__(self, shape, dtype=\\\"float64\\\"):\\n\", \" super(StateNet, 
self).__init__()\\n\", \" \\n\", \" # Assign the theta parameter list to be the trainable parameter list of the circuit\\n\", \" self.theta = self.create_parameter(shape=shape, \\n\", \" default_initializer=paddle.nn.initializer.Uniform(low=0.0, high=2*PI),\\n\", \" dtype=dtype, is_bias=False)\\n\", \" \\n\", \" # Define loss function and forward propagation mechanism\\n\", \" def forward(self, hamiltonian, N, D):\\n\", \" \\n\", \" # Calculate the loss function\/expected value\\n\", \" loss, cir = U_theta(self.theta, hamiltonian, N, D)\\n\", \"\\n\", \" return loss, cir\" ] }, { \"cell_type\": \"markdown\", \"id\": \"1c53b204\", \"metadata\": {}, \"source\": [ \"After building the quantum neural network of the VQE algorithm and defining the loss function, we can estimate the ground state energy of $H_{2}$ and the quantum circuit corresponding to the ground state by training the quantum neural network.\" ] }, { \"cell_type\": \"code\", \"execution_count\": 4, \"id\": \"40036ae4\", \"metadata\": {}, \"outputs\": [ { \"name\": \"stdout\", \"output_type\": \"stream\", \"text\": [ \"iter: 20 loss: -1.0865\\n\", \"iter: 20 Ground state energy: -1.0865 Ha\\n\", \"iter: 40 loss: -1.1284\\n\", \"iter: 40 Ground state energy: -1.1284 Ha\\n\", \"iter: 60 loss: -1.1355\\n\", \"iter: 60 Ground state energy: -1.1355 Ha\\n\", \"iter: 80 loss: -1.1360\\n\", \"iter: 80 Ground state energy: -1.1360 Ha\\n\", \"\\n\", \"The trained circuit:\\n\", \"--Ry(-1.57)----*--------------x----Ry(4.713)----*--------------x----Ry(3.129)--\\n\", \" | | | | \\n\", \"--Ry(5.091)----x----*---------|----Ry(1.768)----x----*---------|----Ry(3.711)--\\n\", \" | | | | \\n\", \"--Ry(0.997)---------x----*----|----Ry(4.696)---------x----*----|----Ry(3.180)--\\n\", \" | | | | \\n\", \"--Ry(4.753)--------------x----*----Ry(4.701)--------------x----*----Ry(-0.37)--\\n\", \" \\n\" ] } ], \"source\": [ \"ITR = 80 # Set the number of optimization iterations\\n\", \"LR = 0.4 # Set the learning rate\\n\", \"D = 2 # 
Set the depth of the repetitive calculation module in QNN\\n\", \"N = H2_hamiltonian.n_qubits \\n\", \"\\n\", \"# Determine the parameter dimension of the network\\n\", \"net = StateNet(shape=[D + 1, N, 1])\\n\", \"\\n\", \"# Generally speaking, we use Adam optimizer to obtain relatively good convergence,\\n\", \"# You can change it to SGD or RMS prop.\\n\", \"opt = paddle.optimizer.Adam(learning_rate=LR, parameters=net.parameters())\\n\", \"\\n\", \"# Record optimization results\\n\", \"summary_iter, summary_loss = [], []\\n\", \"\\n\", \"# Optimization loop\\n\", \"for itr in range(1, ITR + 1):\\n\", \"\\n\", \" # Forward propagation to calculate loss function\\n\", \" loss, cir = net(H2_hamiltonian, N, D)\\n\", \"\\n\", \" # Use back propagation to minimize the loss function\\n\", \" loss.backward()\\n\", \" opt.minimize(loss)\\n\", \" opt.clear_grad()\\n\", \"\\n\", \" # Record optimization results\\n\", \" summary_loss.append(loss.numpy())\\n\", \" summary_iter.append(itr)\\n\", \"\\n\", \" # Print result\\n\", \" if itr % 20 == 0:\\n\", \" print(\\\"iter:\\\", itr, \\\"loss:\\\", \\\"%.4f\\\" % loss.numpy())\\n\", \" print(\\\"iter:\\\", itr, \\\"Ground state energy:\\\", \\\"%.4f Ha\\\" \\n\", \" % loss.numpy())\\n\", \" if itr == ITR:\\n\", \" print(\\\"\\\\nThe trained circuit:\\\") \\n\", \" print(cir)\" ] }, { \"cell_type\": \"markdown\", \"id\": \"a6c9d23f\", \"metadata\": {}, \"source\": [ \"#### Introduction to shadow function\" ] }, { \"cell_type\": \"markdown\", \"id\": \"c9891733\", \"metadata\": {}, \"source\": [ \"At this point, we've obtained the quantum circuit for generating the ground state of $H_{2}$. We can run the shadow_trace function on this circuit to obtain the estimated ground state energy. In the shadow_trace function, our inputs are the Hamiltonian to be estimated, the number of samples, and the method selected. You can choose the sampling mode you want by specifying the parameter method. 
Among them, CS has the broadest application range and the fastest speed, but its estimation accuracy may be slightly worse; LBCS has higher accuracy, but runs slower as the number of terms in the Hamiltonian grows; APS also has higher accuracy, but runs slower when the number of qubits is large.

```python
# The actual energy value corresponding to the estimated ground state
H2_energy = cir.expecval(H2_hamiltonian).numpy()[0]

# Sampling times
sample = 1500
# Three algorithms are used to estimate the expectation value of the observable
H2_energy_CS = cir.shadow_trace(H2_hamiltonian, sample, method="CS")
H2_energy_LBCS = cir.shadow_trace(H2_hamiltonian, sample, method="LBCS")
H2_energy_APS = cir.shadow_trace(H2_hamiltonian, sample, method="APS")

print('H2 ground state energy = ', H2_energy)
print('H2 ground state energy CS= ', H2_energy_CS)
print('H2 ground state energy LBCS= ', H2_energy_LBCS)
print('H2 ground state energy APS= ', H2_energy_APS)
```

```
H2 ground state energy =  -1.1360331063822373
H2 ground state energy CS=  -1.1652093330967717
H2 ground state energy LBCS=  -1.1213982275348622
H2 ground state energy APS=  -1.137720214050516
```

Now let's use the item-by-item measurement method to estimate the ground state energy of $H_{2}$.

```python
# Use the item-by-item measurement method to estimate the ground state energy
H2_energy_traditional = cir.expecval(H2_hamiltonian, shots=100).numpy()[0]
print('H2 ground state energy traditional = ', H2_energy_traditional)
```

```
H2 ground state energy traditional =  -1.1367622727687419
```

We can see that with 1500 samples, the ground state energies estimated by the three algorithms are all very close to the energy of the ground state estimated by VQE. The item-by-item measurement method makes 100 measurements for each term of the Hamiltonian; since the Hamiltonian of $H_{2}$ has 15 terms, this is equivalent to 1500 measurements in total, and the difference between its result and the VQE ground state energy is also tiny. In this small-scale situation, the classical-shadow-based algorithms do not show significant advantages over the item-by-item measurement method. But in large-scale qubit scenarios, these algorithms require only constant-level growth in the number of samples as the number of Hamiltonian terms grows, whereas the item-by-item measurement method and some other methods require polynomial-level or even exponential-level growth in the number of samples to reach the same accuracy [1]. In fact, it is pointed out in [2] that for the CS and LBCS algorithms, the average estimation error $\epsilon$, the variance $\operatorname{var}(\nu)$, and the number of samples $S$ are related as follows:

$$
S = O(\epsilon^{-2} \operatorname{var}(\nu)), \tag{3}
$$

where the variance $\operatorname{var}(\nu)$ is independent of the number of samples but depends on the number of terms of the Hamiltonian. Therefore, given the precision we expect, we can calculate the required number of samples. Conversely, the average error can also be expressed in terms of the number of samples:

$$
\epsilon = \sqrt{\frac{\operatorname{var}(\nu)}{S}}. \tag{4}
$$

We can see that in the above experiments, the errors obtained by the CS and LBCS algorithms are within the accuracy of the theoretical estimate in reference [3].

At the same time, Paddle Quantum provides a sampling function `shadow_sample`, which supports pre-sampling of unknown quantum states and makes it convenient for readers to explore other applications of the classical shadow. The specific usage is as follows:

```python
from paddle_quantum.shadow import shadow_sample

# Run the circuit in the vector form
H2_rho = np.array(cir.run_state_vector())
# Get the data of the classical shadow and output it in the form of a list
H2_sample_data_CS = shadow_sample(H2_rho, H2_qubit, sample_shots=10, mode='state_vector',
                                  hamiltonian=H2_hamiltonian, method='CS')
print(H2_sample_data_CS)
```

```
[('yxxz', '0010'), ('xyzx', '0101'), ('yxyx', '1010'), ('yxyy', '1101'), ('yyyx', '0101'), ('xyxz', '0010'), ('zyxz', '1010'), ('xzyz', '0110'), ('zzzx', '1101'), ('xyxx', '0101')]
```

### Estimate the ground state energy of lithium hydride ($LiH$)

Next, we consider the ground state energy of $LiH$.
First, we load a pre-computed file to generate the molecular Pauli Hamiltonian of $LiH$ with 12 qubits.

```python
with open('./LiH_hamiltonian.txt', 'r') as lih_file:
    unprocessed_pauli_str = lih_file.read()
    LiH_pauli_str = [term.split(maxsplit=1) for term in unprocessed_pauli_str.split('\n')]
    LiH_pauli_str = [[float(term[0]), term[1]] for term in LiH_pauli_str]
    LiH_hamiltonian = Hamiltonian(LiH_pauli_str)
```

Then, we again use the VQE algorithm to obtain the ground state energy and the ground state of $LiH$, and then use the classical shadow algorithms to estimate the ground state energy. Due to the large size of the $LiH$ molecular Hamiltonian, training the VQE circuit takes a long time, so we provide a set of pre-trained parameters with which readers can test the classical-shadow-based algorithms on the $LiH$ Hamiltonian directly.

```python
# Load the pre-trained parameters
pretrained_parameters = paddle.load('LiH_VQE_parameters.pdtensor')
N = LiH_hamiltonian.n_qubits
# Run the VQE circuit with the pre-trained parameters
energy, cir = U_theta(pretrained_parameters, LiH_hamiltonian, N, D)
print('The pre-trained VQE gets a ground state energy of: %.4f ' % energy.numpy())
```

```
The pre-trained VQE gets a ground state energy of: -7.7720
```

Once we have the circuit corresponding to the $LiH$ ground state, we can directly use the `shadow_trace` function to perform random measurements.
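Before scaling up, it may help to see in isolation what CS-style random Pauli measurements compute. The following self-contained NumPy sketch is independent of Paddle Quantum (all names in it are our own, purely illustrative): it estimates $\mathrm{Tr}(O\rho)$ for a single qubit by averaging snapshots $\hat\rho = 3U^\dagger|\hat b\rangle\langle \hat b|U - I$, the basic construction behind the classical shadow [1].

```python
import numpy as np

# Pauli Z and the 2x2 identity
I2 = np.eye(2, dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Measurement bases: columns are the eigenvectors of X, Y, Z
BASES = {
    "X": np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2),
    "Y": np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2),
    "Z": I2.copy(),
}

def shadow_estimate(rho, obs, n_samples, rng):
    """Estimate Tr(obs @ rho) from single-qubit classical shadows."""
    total = 0.0
    for _ in range(n_samples):
        basis = BASES[rng.choice(list(BASES))]   # pick a random Pauli basis
        # Born-rule probabilities of the two outcomes in that basis
        probs = np.clip(np.real([v.conj() @ rho @ v for v in basis.T]), 0.0, None)
        k = rng.choice(2, p=probs / probs.sum())
        v = basis[:, k]
        # Snapshot: inverse of the depolarizing measurement channel
        snapshot = 3 * np.outer(v, v.conj()) - I2
        total += np.real(np.trace(obs @ snapshot))
    return total / n_samples

rng = np.random.default_rng(0)
psi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)  # a pure state
rho = np.outer(psi, psi.conj())
exact = np.real(np.trace(Z @ rho))          # = cos(0.6)
approx = shadow_estimate(rho, Z, 20000, rng)
print(exact, approx)                        # the two values should be close
```

With 20,000 snapshots the estimate typically lands within a few hundredths of the exact value, consistent with the $\epsilon \approx \sqrt{\operatorname{var}(\nu)/S}$ scaling discussed above.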
Also, since this molecular Hamiltonian has 631 terms, we specify `sample = 1262` for the function `shadow_trace` and `shots = 2` for the function `expecval`, so that the number of measurements is the same for both types of methods.

Since the ground state of $LiH$ has 12 qubits, there are $3^{12}$ possible combinations of Pauli measurements on it, so performing only 1262 samples gives a very noisy estimate. We therefore run each of the four methods above 20 times, take the mean of these 20 runs as the estimate for each algorithm, and compute the sample variance for a simple comparison of the algorithms. (It may take at least 1 hour to run the following code blocks.)

```python
import time

begin = time.time()
estimator_CS = []
estimator_LBCS = []
estimator_APS = []
estimator_traditional = []

# The actual energy value corresponding to the estimated ground state
LiH_energy = cir.expecval(LiH_hamiltonian).numpy()[0]

# Number of repetition times
n = 20

for i in range(n):
    LiH_energy_CS = cir.shadow_trace(LiH_hamiltonian, 1262, method="CS")
    LiH_energy_LBCS = cir.shadow_trace(LiH_hamiltonian, 1262, method="LBCS")
    LiH_energy_APS = cir.shadow_trace(LiH_hamiltonian, 1262, method="APS")
    LiH_energy_traditional = cir.expecval(LiH_hamiltonian, shots=2).numpy()[0]

    estimator_CS.append(LiH_energy_CS)
    estimator_LBCS.append(LiH_energy_LBCS)
    estimator_APS.append(LiH_energy_APS)
    estimator_traditional.append(LiH_energy_traditional)

ave_LiH_energy_CS = np.mean(estimator_CS)
ave_LiH_energy_LBCS = np.mean(estimator_LBCS)
ave_LiH_energy_APS = np.mean(estimator_APS)
ave_LiH_energy_traditional = np.mean(estimator_traditional)
end = time.time()

print("LiH ground state energy = ", LiH_energy)
print("ave LiH ground state energy CS = ", ave_LiH_energy_CS)
print("ave LiH ground state energy LBCS = ", ave_LiH_energy_LBCS)
print("ave LiH ground state energy APS = ", ave_LiH_energy_APS)
print('ave LiH ground state energy traditional = ', ave_LiH_energy_traditional)
print('time = ', end - begin)
```

```
LiH ground state energy =  -7.771980394176657
ave LiH ground state energy CS =  -7.835791570579005
ave LiH ground state energy LBCS =  -7.7622296662623445
ave LiH ground state energy APS =  -7.762836542787509
ave LiH ground state energy traditional =  -7.8964746269601465
time =  4206.216086864471
```

From the results, the mean values obtained by the classical-shadow-based algorithms are closer to the VQE estimate of the $LiH$ ground state energy than the item-by-item measurement method, and the errors of the algorithms are all within the accuracy of the theoretical estimate in reference [3].
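Relations (3) and (4) can be used as a quick back-of-the-envelope calculator when planning such experiments. The sketch below is illustrative only: the variance value `var_nu` is a hypothetical placeholder, not a quantity measured in this tutorial.

```python
import math

def error_from_samples(variance, samples):
    """Average error epsilon = sqrt(var(nu) / S), cf. Eq. (4)."""
    return math.sqrt(variance / samples)

def samples_for_error(variance, epsilon):
    """Samples S needed for a target error, S = var(nu) / epsilon**2, cf. Eq. (3)."""
    return math.ceil(variance / epsilon ** 2)

# A hypothetical single-shot variance (placeholder, not a measured value)
var_nu = 0.5
print(error_from_samples(var_nu, 1500))  # error expected at 1500 samples
print(samples_for_error(var_nu, 0.02))   # samples needed for 0.02 accuracy
```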
So, what about the sample variance of each algorithm?

```python
# Calculate the sample variance
variance_CS = []
variance_LBCS = []
variance_APS = []
variance_traditional = []

for i in range(n):
    variance_CS.append((estimator_CS[i] - ave_LiH_energy_CS) ** 2)
    variance_LBCS.append((estimator_LBCS[i] - ave_LiH_energy_LBCS) ** 2)
    variance_APS.append((estimator_APS[i] - ave_LiH_energy_APS) ** 2)
    variance_traditional.append((estimator_traditional[i] - ave_LiH_energy_traditional) ** 2)

var_CS = sum(variance_CS) / (n - 1)
var_LBCS = sum(variance_LBCS) / (n - 1)
var_APS = sum(variance_APS) / (n - 1)
var_traditional = sum(variance_traditional) / (n - 1)

print('LiH variance CS = ', var_CS)
print('LiH variance LBCS = ', var_LBCS)
print('LiH variance APS = ', var_APS)
print('LiH variance traditional = ', var_traditional)
```

```
LiH variance CS =  0.034596840359755784
LiH variance LBCS =  0.016602696085670984
LiH variance APS =  0.0016603026356630662
LiH variance traditional =  0.13200055652163223
```

It can be seen that the APS algorithm has the lowest sample variance, followed by the LBCS algorithm, then the CS algorithm, and finally the item-by-item measurement method. Accordingly, once the number of Hamiltonian terms increases, the classical-shadow-based algorithms are more accurate and more stable at the same cost than the item-by-item measurement method; among them, the APS algorithm is the most stable.

It is worth mentioning that a 12-qubit setting still cannot show the significant advantages of the classical shadow algorithm over some existing algorithms; its advantages are better demonstrated in large-scale systems with more qubits [1].

## Conclusion

This tutorial discusses how to use the classical-shadow-based algorithms to estimate an observable's expectation value and shows how to use the shadow functions in Paddle Quantum. It can be seen that the improved algorithms based on the classical shadow give good estimates of the observable's expectation value: compared with the item-by-item measurement method, when the number of samples is the same, their estimates are more accurate and the algorithms are more stable. In the large-scale qubit scenario, the number of samples required by the classical shadow method grows only by a constant in some tasks, so its role in the NISQ (noisy intermediate-scale quantum) era will continue to be explored.

_______

## References

[1] Huang, Hsin-Yuan, R. Kueng, and J. Preskill. "Predicting many properties of a quantum system from very few measurements." [Nature Physics (2020): 1-8.](https://www.nature.com/articles/s41567-020-0932-7?proof=t)

[2] Hadfield, Charles, et al. "Measurements of quantum Hamiltonians with locally-biased classical shadows." [arXiv preprint arXiv:2006.15788 (2020).](https://arxiv.org/abs/2006.15788)

[3] Hadfield, Charles. "Adaptive Pauli Shadows for Energy Estimation." [arXiv preprint arXiv:2105.12207 (2021).](https://arxiv.org/abs/2105.12207)
\section{Introduction} \subsection{Motivation} Many of the existing asymptotic minimaxity results for estimating regression functions are predicated on the assumption that certain smoothness conditions hold, which can rarely be satisfied or verified when confronted with real data. This creates a disconnect between theory and practice, limiting the scope of many of the theoretical results. For example, in nonparametric regression involving multiple predictors, the assumption of {\em isotropic smoothness} can be unnecessarily restrictive. A more realistic scenario is when the function exerts different degrees of smoothness in different directions and areas, with possible discontinuities that allow further flexibility. This work is motivated by the desire to evaluate the theoretical performance of Bayesian forests, one of the workhorses of Bayesian machine learning, in such broad scenarios. Bayesian trees and their ensembles have achieved notable empirical success in statistics and machine learning \citep{chipman1998bayesian,denison1998bayesian,chipman2010bart}. Relative to other Bayesian machine learning alternatives, tree-based methods require comparatively less tuning and can be scaled to higher dimensions \citep{lakshminarayanan2013top,bleich2014variable,he2019xbart}. The popularity of Bayesian forests such as Bayesian additive regression trees (BART) \citep{chipman2010bart} is growing rapidly in many areas including causal inference \citep{hill2011bayesian,hahn2020bayesian}, mean-variance function estimation \citep{pratola2020heteroscedastic}, smooth function estimation \citep{linero2018bayesian-jrssb}, variable selection \citep{bleich2014variable,linero2018bayesian-jasa}, interaction detection \citep{du2019interaction}, survival analysis \citep{sparapani2016nonparametric} and time series \citep{taddy2011dynamic}, to name a few.
Despite the remarkable success in empirical studies, theoretical properties of Bayesian forests were unavailable for a long time, with the first studies emerging only very recently \citep{rockova2020posterior,linero2018bayesian-jrssb,rockova2019theory,castillo2021uncertainty}. Although these pioneering findings explain why tree-based methods perform very well, they are limited to isotropic regression function surfaces, which exhibit the same level of smoothness in every direction. Isotropy is one of the archetypal assumptions in theoretical studies, but it can be restrictive in real-world applications. This assumption is particularly unattractive in higher dimensions, where the function can behave very poorly in certain directions. However, the plentiful empirical evidence suggests that Bayesian forests adapt to more intricate smoothness situations. For example, Figure~\ref{fig:bart} shows that BART successfully adapts to a piecewise smooth function or a Doppler-type function. This successful performance beyond isotropy has at least three explanations: (i) tree methods are based on top-down recursive partitioning, where splits occur more often in areas where the function is locally uneven or bumpy, making the procedure spatially adaptive; (ii) the choice of coordinates for splits is data-driven, so the domain is divided more often in directions in which the function is less smooth; and (iii) tree-based learners are piecewise constant and, as such, are expected to adapt to discontinuous functions by detecting smoothness boundaries and jumps. These considerations naturally create an expectation that Bayesian forests achieve optimal estimation properties in more complex function classes {\em without any prior modification.} \begin{figure}[t!]
\centering \begin{subfigure}[b]{5.5in} \includegraphics[width=5.5in]{BARTfullwTitle-1.png} \caption{Piecewise smooth function estimation } \end{subfigure} \begin{subfigure}[b]{5.5in} \includegraphics[width=5.5in]{BARTfullwTitle-2.png} \caption{Doppler-type function estimation } \end{subfigure} \caption{Function estimation in nonparametric regression with complicated smoothness using Bayesian CART and BART.} \label{fig:bart} \end{figure} \subsection{Our contribution} The main goal of this paper is to study optimality and posterior contraction of Bayesian forests under relaxed smoothness assumptions. More specifically, we introduce a class of functions whose domain has been cleaved into hyper-rectangles where each rectangular piece has its own anisotropic smoothness (with the same harmonic mean). We allow for possible discontinuities at the boundaries of the pieces. We call this new class of functions {\em piecewise heterogeneous anisotropic functions} (see Definitions~\ref{def:hol}--\ref{def:genhol} in Section~\ref{sec:function}). We establish approximation theory for this general class which blends anisotropy with spatial inhomogeneity and which, to the best of our knowledge, has not yet been pursued in the literature. Our results complement the body of existing work on piecewise isotropic smoothness classes \citep[e.g.,][]{candes2000curvelets,candes2004new,le2005sparse,petersen2018optimal,imaizumi2019deep}. Our function class subsumes the usual (homogeneous) anisotropic space for which adaptive procedures exist with optimal convergence rate guarantees, including the dyadic CART of \citet{donoho1997cart}. We refer to \citet{barron1999risk}, \citet{neumann1997wavelet}, \citet{hoffman2002random}, \citet{lepski2015adaptive}, and references therein for a more complete list. There are also adaptive Bayesian procedures for anisotropic function estimation with desired asymptotic properties \citep[e.g.,][]{bhattacharya2014anisotropic,shen2015adaptive}. 
There appear to be {\em no} theoretical results on adaptation in the more general case of piecewise heterogeneous anisotropic smoothness. Indeed, existing theoretical studies for discontinuous piecewise smooth classes impose the isotropy assumption \citep[e.g.,][]{candes2000curvelets,candes2004new,le2005sparse,petersen2018optimal,imaizumi2019deep} and the convergence rates in spatially adaptive estimation depend on global smoothness parameters \citep[e.g.,][]{pintore2006spatially,liu2010data,wang2013smoothing,tibshirani2014adaptive}. In this respect, our study appears to be the first theoretical investigation of piecewise anisotropic function classes. The majority of frequentist/Bayesian methods for anisotropic function estimation rely on multiple scaling (bandwidth) parameters, one for each direction. As noted by \citet{bhattacharya2014anisotropic}, selecting optimal scaling parameters in a frequentist way can be computationally difficult, as adaptation in anisotropic spaces presents several challenges \citep{lepski1999adaptive}. The Bayesian paradigm provides an effective remedy by assigning priors over these unknown parameters; examples include generalized Gaussian process priors and spline basis representations \citep{bhattacharya2014anisotropic,shen2015adaptive}. Although these priors enjoy elegant theoretical guarantees in typical anisotropic spaces, it is unclear whether they can adapt to piecewise heterogeneous anisotropic spaces without substantial modification. Bayesian forests, on the other hand, are expected to work in these more complex scenarios without any additional scaling parameters. Their approximability is controlled merely by the depth of a tree and the orientation of its branches, and no prior modifications should be required to achieve optimal performance.
Moreover, computation with Gaussian processes can be quite costly \citep{banerjee2013efficient,liu2020gaussian}, while Bayesian forests are more scalable and faster than their competitors. In the context of regression or classification, Bayesian forests often rely on observed covariate values for splits in recursive partitioning \citep{chipman1998bayesian,denison1998bayesian,chipman2010bart}. This facilitates theoretical investigation under the fixed regression design. In the context of nonparametric Gaussian regression, \citet{rockova2020posterior} and \citet{rockova2019theory} investigated posterior contraction for BART based on this conventional manner of partitioning. The dyadic CART \citep{donoho1997cart}, on the other hand, splits at dyadic midpoints of the domain and can achieve optimal performance as well \citep{castillo2021uncertainty}. We generalize the dyadic CART by introducing the notion of split-nets, which form a collection of candidate split-points that are not necessarily observed covariate values and/or dyadic midpoints. Our findings show that optimality can be achieved with split-nets which are sufficiently evenly distributed. By allowing the split-points to occur beyond observed values, we show that Bayesian forests fit into the general recipe of posterior contraction theory \citep{ghosal2000convergence,ghosal2007convergence}, which applies to other statistical setups such as density estimation or regression/classification with random design. Asymptotic minimaxity is often used to evaluate optimality of statistical procedures. \citet{yang2015minimax} derived the minimax rates of sparse function estimation in high dimensions, but their results are restricted to the isotropic case. In fixed (low) dimensions, minimax rates over anisotropic function spaces have been extensively studied in the literature \citep{ibragimov2013statistical,nussbaum1985spline,birge1986estimating}.
If the true function only depends on a subset of coordinates, the minimax rate is improved and determined by smoothness parameters of active coordinates \citep{hoffman2002random}. However, to the best of our knowledge, there are no available studies on minimax rates over piecewise anisotropic function spaces like ours. While there exist results on piecewise isotropic classes \citep[e.g.,][]{imaizumi2019deep}, even the simpler fixed-dimensional setup without sparsity has {\em not} been studied for piecewise anisotropic classes. Focusing on Gaussian nonparametric regression, we derive the minimax lower bound for our piecewise heterogeneous anisotropic spaces under the high-dimensional scenario. This result verifies that our obtained contraction rates for Bayesian forests are indeed minimax-optimal, up to a logarithmic factor. We summarize the contribution of this paper as follows. \begin{itemize} \item {\bf Approximation theory}: The true function should be approximable by tree-based learners in order to establish the optimal rate of posterior contraction. Approximation theory for piecewise heterogeneous anisotropic classes is much more intricate when there are discontinuities and heterogeneity. We establish such approximation theory here under suitable regularity conditions (with smoothness up to $1$ due to the limitation of piecewise constant learners). \item {\bf Posterior contraction}: For functions belonging to piecewise heterogeneous anisotropic spaces, posterior contraction of Bayesian forests is established under the high dimensional setup with a Dirichlet sparse prior. The derived rates consist of the risk of variable selection uncertainty and the risk of function estimation, similar to isotropic cases \citep{yang2015minimax,rockova2020posterior}. \item {\bf Minimax optimality}: Minimax rates in high-dimensional spaces have been unavailable even for simple anisotropic classes. 
For Gaussian nonparametric regression with high-dimensional inputs, we formally derive the minimax lower bound over piecewise heterogeneous anisotropic spaces. This certifies that our obtained contraction rate for Bayesian forests is optimal, up to a logarithmic factor. \item {\bf Applications beyond regression}: Unlike the asymptotic studies of the traditional tree priors \citep{rockova2020posterior,rockova2019theory}, our findings show that splits for recursive partitioning do not necessarily have to be at observed covariate values. This implies that our proof technique extends beyond fixed-design regression to other estimation problems such as density estimation or regression/classification with random design. \end{itemize} \subsection{Preview and outline of the paper} The main results of this study begin to appear in Section~\ref{sec:app} after extensive preliminary steps. Before going into the preparatory phase, we provide here a preview of our main results. Let us focus on a fixed-design regression setup, \begin{align} Y_i=f_0(x_i)+\varepsilon_i, \quad\varepsilon_i\sim \text{N}(0,\sigma_0^2), \quad i=1,\dots,n, \label{eqn:fixreg} \end{align} with a response $Y_i\in\mathbb R$ and a covariate $x_i\in[0,1]^p$, where $f_0:[0,1]^p\mapsto \mathbb R$ and $\sigma_0^2<\infty$. Assume that $f_0$ depends only on $d$ variables among the $p$ coordinates. Assume further that $f_0$ is a piecewise heterogeneous anisotropic function with a global smoothness harmonic mean $\bar\alpha\in(0,1]$ (see Definitions~\ref{def:hol}--\ref{def:sparse} for a more precise definition). Assigning BART priors to $f_0$, the posterior contraction rate is obtained as $\sqrt{(d\log p)/n}+(\log n)^c n^{-\bar\alpha/(2\bar\alpha+d)}$ for some $c>0$ (Theorem~\ref{thm:nonreg}). It is also shown that this rate is minimax-optimal up to a log factor (Theorem~\ref{thm:minimax}). The same contraction rates are also achieved in other statistical setups (Theorems~\ref{thm:nonregrd}--\ref{thm:binary}).
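To make the data-generating model \eqref{eqn:fixreg} concrete, it can be simulated in a few lines. The sketch below is purely illustrative: this particular $f_0$ is our own toy choice of a $1$-sparse, piecewise Lipschitz (discontinuous) truth in $p=50$ dimensions, not a function analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma0 = 200, 50, 0.1

def f0(x):
    # A 1-sparse truth: depends only on coordinate 0, with a jump at 1/2
    # (piecewise Lipschitz, so the harmonic-mean smoothness is 1)
    return np.where(x[:, 0] <= 0.5, np.sin(4.0 * x[:, 0]), 2.0 + x[:, 0] ** 2)

X = rng.uniform(size=(n, p))                 # fixed design on [0, 1]^p
Y = f0(X) + sigma0 * rng.normal(size=n)      # Gaussian errors as in the model above
print(X.shape, Y.shape)                      # (200, 50) (200,)
```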
For the additive true function, the rate has an additive form (Theorem~\ref{thm:addreg}). The rest of this paper is organized as follows. In Section~\ref{sec:background}, we describe the background of function spaces and Bayesian forests. The tree priors on functions in high-dimensional scenarios are specified in Section~\ref{sec:prior}. Section~\ref{sec:appgen} sheds light on the approximation theory for our function spaces. In Section~\ref{sec:nonreg}, we study posterior contraction of Bayesian forests and their minimax optimality in nonparametric regression with a fixed design. The section also includes a numerical study that shows the outstanding performance of BART over other methods, such as random forests and deep neural networks, which are believed to work well for discontinuous functions or functions with complicated smoothness. Posterior contraction properties in other statistical models such as density estimation and binary classification are investigated in Section~\ref{sec:furapp}. An example of additive regression is also considered in Section~\ref{sec:furapp} to emphasize a theoretical advantage of Bayesian forests over single tree models. Section~\ref{sec:disc} concludes with a discussion. All technical proofs are collected in the Appendix. \section{Modus operandi}\label{sec:background} \subsection{Notation and terminology} Although the main focus of this paper is BART for regression in \eqref{eqn:fixreg}, we work with a general statistical experiment $P_f$ indexed by a measurable function $f:[0,1]^p\mapsto\mathbb R$ for some $p>0$, which will be modeled by Bayesian forests. This allows us to incorporate other statistical setups, such as density estimation, into our theoretical framework. Each statistical model will be specified for our examples in Sections~\ref{sec:nonreg}--\ref{sec:furapp}. We observe $n$ observations with the true function denoted by $f_0$ and assume that $p$ possibly increases with the sample size $n$.
The notations $\mathbb E_0$ and $\mathbb P_0$ denote the expectation and probability operators under the true model with $f_0$. For sequences $a_n$ and $b_n$, we write $a_n\lesssim b_n$ (or $b_n\gtrsim a_n$ equivalently) if $a_n\le C b_n$ for some constant $C>0$, and $a_n\asymp b_n$ means $a_n\lesssim b_n\lesssim a_n$. We also write $a_n\ll b_n$ (or $b_n\gg a_n$ equivalently) if $a_n/b_n\rightarrow 0$ as $n\rightarrow\infty$. For a subspace $E$ of the Euclidean space, $\mathcal C(E)$ denotes a class of continuous functions $f:E\mapsto \mathbb R$. For a given measure $\mu$ and a measurable function $f$, we denote by $\lVert f \rVert_{v,\mu}=(\int |f|^v d\mu)^{1/v}$ the $L_v(\mu)$-norm, $1\le v<\infty$. We particularly denote by $\mathcal L_2(\mu)$ the linear space of real valued functions equipped with inner product $\langle f,g \rangle_\mu=\int f g d\mu$ and norm $\lVert f \rVert_{2,\mu}=\langle f,f \rangle_\mu^{1/2}$. The support of a measure $\mu$ is denoted by ${\rm supp}(\mu)$. The supremum norm of a function $f$ is denoted by $\lVert f \rVert_\infty$. For convenience, we write $\lVert f \rVert_v$ for the $L_v$-norm with the Lebesgue measure on a unit hypercube. For a given vector $u$, the notations $\lVert u \rVert_v$ and $\lVert u \rVert_\infty$ represent the $\ell_v$-norms, $1\le v<\infty$, and the maximum-norm, respectively. For a semimetric space $(\mathcal F,\rho)$ endowed with a semimetric $\rho$, the expressions $D(\epsilon,\mathcal F, \rho)$ and $N(\epsilon,\mathcal F, \rho)$ denote the $\epsilon$-packing and $\epsilon$-covering numbers of $\mathcal F$, respectively. For a subset $S\subseteq\{1,\dots, p\}$ and $x=(x_1,\dots, x_p)^\top\in\mathbb R^p$, let $x_S=(x_j,j\in S)\in\mathbb R^{|S|}$ be the subvector of entries chosen by $S$. A $p$-dimensional hyper-rectangle $\Psi\subseteq[0,1]^p$ with any $p>0$ is simply called a {\em box}. Boxes can be closed (of the form $\prod_{j=1}^p[a_j,b_j]$) or half-closed (of the form $\prod_{j=1}^p(a_j,b_j]$) depending on the context.
A partition $\mathfrak Y=(\Psi_1,\dots,\Psi_J)$ of $[0,1]^p$, consisting of $J$ disjoint boxes $\Psi_r\subseteq[0,1]^p$, $r=1,\dots,J$, is called a {\em box partition}. For a $p$-dimensional Cartesian product $E\subseteq \mathbb R^p$ with any $p>0$, we denote the $j$th projection mapping of $E$ by $[E]_j=\{x_j\in\mathbb R: (x_1,\dots,x_p)^\top\in E\}$. The length and interior of an interval $I\subseteq\mathbb R$ are denoted by $\mathsf{len}(I)$ and $\mathsf{int}(I)$, respectively. \subsection{Heterogeneous anisotropic function spaces with sparsity} \label{sec:function} In this subsection, we introduce our function spaces with heterogeneous smoothness and sparsity in high dimensions. The first assumption is that the true regression function $f_0:[0,1]^p\mapsto \mathbb R$ is $d$-sparse, i.e., it depends on a small subset of $d$ variables. This means that there exist a function $h_0:[0,1]^d\mapsto\mathbb R$ and a subset $S_0\subseteq\{1,\dots, p\}$ with $|S_0|=d$, such that $f_0(x)=h_0(x_{S_0})$ for any $x\in[0,1]^p$. For example, suppose the true function is defined as $f_0(x_1,x_2)=\sin(x_1)$ on $[0,1]^2$ with $p=2$. This function can be completely expressed by the one-dimensional function $h_0(x_1)=\sin(x_1)$ on $[0,1]$, and hence is 1-sparse by definition. For now we focus on the function $h_0$ on the low-dimensional domain $[0,1]^d$; the complete characterization of $f_0$ will be discussed shortly. We assume that $[0,1]^d$ is partitioned into many boxes and that $h_0$ is H\"older continuous with possibly different smoothness in each box. Moreover, the smoothness inside each box is anisotropic, i.e., different in each coordinate. Focusing on a single box, we first define an {\em anisotropic H\"older space} in the usual sense.
\begin{definition}[Anisotropic H\"older space] \label{def:hol} For a smoothness parameter $\alpha=(\alpha_1,\dots,\alpha_d)^\top\in(0,1]^d$, a box $\Psi\subseteq[0,1]^d$, and a H\"older coefficient $\lambda<\infty$, we denote by $\mathcal H_\lambda^{\alpha,d}(\Psi)$ an anisotropic $\alpha$-H\"older space on $\Psi$, i.e., \begin{align*} \mathcal H_\lambda^{\alpha,d}(\Psi) =\left\{ h:\Psi \mapsto \mathbb R ; ~ |h(x)-h(y)|\le \lambda\sum_{j=1}^d |x_j-y_j|^{\alpha_j} ,~ x,y\in\Psi\right\}. \end{align*} \end{definition} Note that the definition above imposes a restriction $\alpha\in(0,1]^d$. Although one can generalize this definition to smoother classes \citep[e.g.][]{bhattacharya2014anisotropic}, we do not consider such extensions here since step function estimators cannot be optimal in classes smoother than Lipschitz. As discussed above, our targeted function class is not necessarily globally anisotropic over the entire domain $[0,1]^d$. Instead, we assume that $h_0$ has different anisotropic smoothness on several disjoint boxes of the domain with the same harmonic mean (an important assumption for obtaining the minimax rate). To be more precise, we define a set of $R$-tuples for smoothness parameters, \begin{align*} \mathcal A_{\bar\alpha}^{R,d} = \left\{(\alpha_1,\dots,\alpha_R): \alpha_r=(\alpha_{r1},\dots,\alpha_{rd})^\top\in(0,1]^d, ~ \bar\alpha^{-1}=d^{-1}\sum_{j=1}^d \alpha_{rj}^{-1},~r=1,\dots, R \right\}. \end{align*} We assume that anisotropic smoothness of $h_0$ is specified on an unknown underlying box partition $\mathfrak X=(\Xi_1,\dots,\Xi_R)$ of $[0,1]^d$ with $R\ge1$ boxes. If $R=1$, we write $\mathfrak X=([0,1]^d)$ with $\Xi_1=[0,1]^d$. The function space is formed by agglomerating anisotropic H\"older spaces for all boxes. We emphasize that the resulting function space is not necessarily continuous, which provides a lot more flexibility relative to the conventional H\"{o}lderian class. 
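As a concrete sanity check, the membership condition of Definition~\ref{def:hol} and the harmonic-mean constraint defining $\mathcal A_{\bar\alpha}^{R,d}$ can be verified numerically. The sketch below uses the example function $h(x)=x_1^{1/2}+x_2$, which is our own illustrative choice: it lies in $\mathcal H_1^{\alpha,2}([0,1]^2)$ with $\alpha=(1/2,1)$, whose harmonic mean is $\bar\alpha=2/3$.

```python
import itertools
import random

def is_holder(h, alpha, lam, samples, tol=1e-12):
    """Check |h(x)-h(y)| <= lam * sum_j |x_j - y_j|**alpha_j on all sample pairs."""
    for x, y in itertools.combinations(samples, 2):
        bound = lam * sum(abs(xj - yj) ** aj for xj, yj, aj in zip(x, y, alpha))
        if abs(h(x) - h(y)) > bound + tol:
            return False
    return True

def harmonic_mean(alpha):
    """bar-alpha defined through 1/bar-alpha = d^{-1} * sum_j 1/alpha_j."""
    return len(alpha) / sum(1.0 / a for a in alpha)

random.seed(0)
alpha = (0.5, 1.0)                       # anisotropic: rougher in x1 than in x2
h = lambda x: x[0] ** 0.5 + x[1]         # illustrative example function (assumed)
pts = [(random.random(), random.random()) for _ in range(200)]
print(is_holder(h, alpha, lam=1.0, samples=pts))  # True
print(harmonic_mean(alpha))                       # 0.6666666666666666
```

The check is only necessary, not sufficient (it tests finitely many pairs), but a discontinuous function fails it immediately, in line with the role that jumps play in the piecewise class defined next.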
Considering that smoothness parameters can vary across boxes and functions can be discontinuous at their boundaries, we call this new class a {\em piecewise heterogeneous anisotropic H\"older space}. We define these functions formally below. \begin{definition}[Piecewise heterogeneous anisotropic H\"older space] \label{def:genhol} Consider a box partition $\mathfrak X=(\Xi_1,\dots,\Xi_R)$ of $[0,1]^d$ with boxes $\Xi_r\subseteq[0,1]^d$ and a smoothness parameter $A_{\bar\alpha}=(\alpha_r)_{r=1}^R\in\mathcal A_{\bar\alpha}^{R,d}$ for some $\bar\alpha\in(0,1]$. We define a piecewise heterogeneous anisotropic H\"older space as \begin{align*} \mathcal H_\lambda^{A_{\bar\alpha},d}(\mathfrak X) =\left\{ h:[0,1]^d \mapsto \mathbb R ; ~ h|_{\Xi_r} \in \mathcal H_\lambda^{\alpha_r,d}(\Xi_r) ,~r=1,\dots,R\right\}. \end{align*} \end{definition} \begin{figure}[t!] \centering \resizebox{2in}{!}{ \begin{tikzpicture} \draw (0,0) rectangle (7,7); \draw (0,2.5) -- (7,2.5); \draw (2,2.5) -- (2,7); \draw (4.5,2.5) -- (4.5,0); \draw (2,5) -- (7,5); \node at (1,5) {$\Xi_1$}; \node at (4.5,6) {$\Xi_2$}; \node at (4.5,3.75) {$\Xi_3$}; \node at (2.25,1.25) {$\Xi_4$}; \node at (5.75,1.25) {$\Xi_5$}; \node [rotate=270,blue] at (0.25,5) {\footnotesize$\alpha_{12}$}; \draw [->,blue] (0.25,5.35) to (0.25,5.75); \draw [->,blue] (0.25,4.65) to (0.25,4.25); \node [rotate=270,blue] at (4.75,1.25) {\footnotesize$\alpha_{52}$}; \draw [->,blue] (4.75,1.6) to (4.75,2); \draw [->,blue] (4.75,0.9) to (4.75,0.5); \node [rotate=270,blue] at (0.25,1.25) {\footnotesize$\alpha_{42}$}; \draw [->,blue] (0.25,1.6) to (0.25,2); \draw [->,blue] (0.25,0.9) to (0.25,0.5); \node [rotate=270,blue] at (2.25,6) {\footnotesize$\alpha_{22}$}; \draw [->,blue] (2.25,6.35) to (2.25,6.75); \draw [->,blue] (2.25,5.65) to (2.25,5.25); \node [rotate=270,blue] at (2.25,3.75) {\footnotesize$\alpha_{32}$}; \draw [->,blue] (2.25,4.1) to (2.25,4.5); \draw [->,blue] (2.25,3.4) to (2.25,3); \node [blue] at (4.5,2.75) 
{\footnotesize$\alpha_{31}$}; \draw [->,blue] (4.85,2.75) to (5.25,2.75); \draw [->,blue] (4.15,2.75) to (3.75,2.75); \node [blue] at (4.5,5.25) {\footnotesize$\alpha_{21}$}; \draw [->,blue] (4.85,5.25) to (5.25,5.25); \draw [->,blue] (4.15,5.25) to (3.75,5.25); \node [blue] at (2.25,0.25) {\footnotesize$\alpha_{41}$}; \draw [->,blue] (2.6,0.25) to (3,0.25); \draw [->,blue] (1.9,0.25) to (1.5,0.25); \node [blue] at (5.75,0.25) {\footnotesize$\alpha_{51}$}; \draw [->,blue] (6.1,0.25) to (6.5,0.25); \draw [->,blue] (5.4,0.25) to (5,0.25); \node [blue] at (1,2.75) {\footnotesize$\alpha_{11}$}; \draw [->,blue] (1.35,2.75) to (1.75,2.75); \draw [->,blue] (0.65,2.75) to (0.25,2.75); \end{tikzpicture} } \caption{A graphical illustration of a piecewise heterogeneous anisotropic H\"older space with five boxes. Each piece has its own smoothness parameter, but the harmonic mean is assumed to be the same.} \label{fig:anisohol} \end{figure} A graphical illustration of the piecewise heterogeneous anisotropic H\"older spaces is given in Figure~\ref{fig:anisohol}. Clearly, Definition~\ref{def:genhol} subsumes the anisotropic H\"older space in Definition~\ref{def:hol} with $R=1$. According to Definition~\ref{def:genhol}, any $h\in\mathcal H_\lambda^{A_{\bar\alpha},d}(\mathfrak X)$ is anisotropic on each $\Xi_r$ with a smoothness parameter $\alpha_r\in(0,1]^d$ and the same harmonic mean $\bar\alpha$ for all $\Xi_r$. We again emphasize that discontinuities are allowed at the boundaries of boxes $\Xi_r$, $r=1,\dots,R$. Definition~\ref{def:genhol} does not impose a specific structure on the partition $\mathfrak X$ other than a box partition. However, we will later see that depending on the approximation metric, our approximation theory will require it to be a tree-based recursive structure defined in the next section (see Figure~\ref{fig:partition} below). 
Nonetheless, since every box partition can be extended to the required form by adding more splits, this discrepancy is not practically an issue. We refer the reader to Section~\ref{sec:partition} and Section~\ref{sec:dennet} for more discussion. \begin{remark} We compare Definition~\ref{def:genhol} with piecewise smooth function spaces widely investigated in the literature. Approximation rates for piecewise smooth functions with smooth jump curves/surfaces have been extensively studied in two dimensions \citep[e.g.,][]{candes2000curvelets,candes2004new,guo2007optimally} as well as in higher dimensions \citep{chandrasekaran2008representation,petersen2018optimal,imaizumi2019deep}. All these studies deal with smooth functions with smooth jump curves/surfaces under the isotropy assumption. On the other hand, our definition allows different anisotropic smoothness parameters for the boxes in a box partition, and hence gains additional flexibility. Our jump surfaces, however, are restricted to hyper-planes parallel to the coordinates. \end{remark} Note that Definition~\ref{def:genhol} is for the mapping $h_0$ from the lower dimensional domain $[0,1]^d$ while the true function $f_0$ maps the entire $[0,1]^p$ to $\mathbb R$. We now characterize a {\em sparse} elaboration of Definition~\ref{def:genhol} for the mapping $f_0:[0,1]^p\mapsto \mathbb R$. For any $S\subseteq\{1,\dots,p\}$, we denote with $W_S^p:\mathcal C(\mathbb R^{|S|})\mapsto \mathcal C(\mathbb R^p)$ the map that sends $h\in\mathcal C(\mathbb R^{|S|})$ to $W_S^p h : x\mapsto h(x_S)$. Similarly to \citet{yang2015minimax} for the isotropic cases, we now formalize $d$-sparse function spaces as follows.
\begin{definition}[Sparse function space] For the space $\mathcal H_\lambda^{A_{\bar\alpha},d}(\mathfrak X)$ in Definition~\ref{def:genhol}, we define a $d$-sparse piecewise heterogeneous anisotropic H\"older space as \begin{align*} \Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)=\bigcup_{S\subseteq\{1,\dots,p\}: |S|=d} W_S^p\big(\mathcal H_\lambda^{A_{\bar\alpha},d}(\mathfrak X) \big). \end{align*} \label{def:sparse} \end{definition} For an unknown smoothness parameter $A_{\bar\alpha}=(\alpha_r)_{r=1}^R\in\mathcal A_{\bar\alpha}^{R,d}$ (with possibly decreasing $\bar\alpha$) and model components $R$, $d$, $p$, and $\lambda$ (which are possibly increasing with $n$), the true function $f_0$ is assumed to belong to the class $\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)$, which allows for discontinuities, or to its continuous variant $\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)\cap \mathcal C([0,1]^p)$.\footnote{Although $\bar\alpha$, $R$, $d$, $p$, and $\lambda$ can depend on $n$ rather than being fixed constants, we suppress the subscript $n$ for the sake of notational simplicity.} This means that there exist a function $h_0:[0,1]^d\mapsto\mathbb R$ and a subset $S_0\subseteq\{1,\dots,p\}$ with $|S_0|=d$ such that $f_0=W_{S_0}^p h_0$. The continuous variant $\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)\cap \mathcal C([0,1]^p)$ achieves approximability under more relaxed assumptions (see Theorem~\ref{thm:approx2} below). The two spaces are identical if $R=1$. Note that $\mathfrak X$ is the box partition of the $d$-dimensional cube $[0,1]^d$. Considering the domain $[0,1]^p$ of $f_0$, it will be convenient to extend $\mathfrak X$ to the corresponding box partition of the $p$-dimensional cube $[0,1]^p$.
To this end, we extend each $\Xi_r$ to the $p$-dimensional box $\Xi_r^\ast=\{x\in[0,1]^p: x_{S_0}\in \Xi_r ,x_{S_0^c}\in[0,1]^{p-d} \} \subseteq[0,1]^p$ using the true sparsity index $S_0$; that is, $\Xi_r$ is the projection of $\Xi_r^\ast$ onto the coordinates in $S_0$. The boxes $\Xi_r^\ast$ then constitute the box partition $\mathfrak X^\ast=(\Xi_1^\ast,\dots,\Xi_R^\ast)$ of $[0,1]^p$. It should be noted that $\mathfrak X^\ast$ depends on the unknown sparsity index $S_0$ of the true function $f_0$. Observe also that our definition gives rise to $\mathfrak X^\ast=([0,1]^p)$ with $\Xi_1^\ast=[0,1]^p$ if $R=1$. Apart from the notion of sparsity for functions, we also introduce sparsity of box partitions as follows. \begin{figure}[t!] \centering \begin{subfigure}[b]{2in} \resizebox{2in}{!}{ \begin{tikzpicture} \draw [-stealth, dashed] (0,0,5) -- (0,0,6) node [left] {\Large$1$}; \draw [-stealth, dashed] (5,0,0) -- (6,0,0) node [above right] {\Large$2$}; \draw [-stealth, dashed] (0,5,0) -- (0,6,0) node [above left] {\Large$3$}; \draw (0,0,0) -- (5,0,0) -- (5,5,0) -- (0,5,0) -- (0,0,0) ; \draw (5,5,5)-- (0,5,5)-- (0,0,5)-- (5,0,5)-- (5,5,5); \draw (0,0,0)-- (0,0,5); \draw (0,5,0)-- (0,5,5); \draw (5,0,0)-- (5,0,5); \draw (5,5,0)-- (5,5,5); \draw [dashed,blue] (2.5,0,0)-- (2.5,0,5)-- (2.5,5,5)-- (2.5,5,0)-- (2.5,0,0); \draw [dashed,blue] (2.5,0,2)-- (2.5,5,2)-- (5,5,2)-- (5,0,2)-- (2.5,0,2); \draw [dashed,blue] (2.5,0,3)-- (2.5,5,3)-- (0,5,3)-- (0,0,3)-- (2.5,0,3); \end{tikzpicture} } \caption{$\{1,2\}$-chopped partition} \label{fig:sppar1} \end{subfigure} \begin{subfigure}[b]{2in} \resizebox{2in}{!}{ \begin{tikzpicture} \draw [-stealth, dashed] (0,0,5) -- (0,0,6) node [left] {\Large$1$}; \draw [-stealth, dashed] (5,0,0) -- (6,0,0) node [above right] {\Large$2$}; \draw [-stealth, dashed] (0,5,0) -- (0,6,0) node [above left] {\Large$3$}; \draw (0,0,0) -- (5,0,0) -- (5,5,0) -- (0,5,0) -- (0,0,0) ; \draw (5,5,5)-- (0,5,5)-- (0,0,5)-- (5,0,5)-- (5,5,5); \draw (0,0,0)-- 
(0,0,5); \draw (0,5,0)-- (0,5,5); \draw (5,0,0)-- (5,0,5); \draw (5,5,0)-- (5,5,5); \draw [dashed,blue] (2.5,0,0)-- (2.5,0,5)-- (2.5,5,5)-- (2.5,5,0)-- (2.5,0,0); \draw [dashed,blue] (1.2,0,0)-- (1.2,0,5)-- (1.2,5,5)-- (1.2,5,0)-- (1.2,0,0); \draw [dashed,blue] (4,0,0)-- (4,0,5)-- (4,5,5)-- (4,5,0)-- (4,0,0); \end{tikzpicture} } \caption{$\{2\}$-chopped partition} \label{fig:sppar2} \end{subfigure} \caption{Examples of sparse partitions in three dimensions.} \label{fig:sppar} \end{figure} \begin{definition}[Sparse partition] Consider a box partition $\mathfrak Y=(\Psi_1,\dots,\Psi_J)$ of $[0,1]^p$ with boxes $\Psi_r\subseteq [0,1]^p$, $r=1,\dots,J$.\footnote{ We often write $\mathfrak Y=(\Psi_r)_r$ to denote an arbitrary box partition of $[0,1]^p$ with boxes $\Psi_r\subseteq[0,1]^p$, $r=1,2,\dots$, and write $\Psi\subseteq[0,1]^p$ to denote an arbitrary $p$-dimensional box. The notations $\mathfrak X=(\Xi_1,\dots,\Xi_R)$ and $\mathfrak X^\ast=(\Xi_1^\ast,\dots,\Xi_R^\ast)$ are used only for the box partitions associated with the piecewise heterogeneous anisotropic spaces in Definitions~\ref{def:genhol}--\ref{def:sparse}. } For a subset $S\subseteq\{1,\dots,p\}$, the partition $\mathfrak Y$ is called $S$-chopped if $\max_{j \in S}\mathsf{len}([\Psi_r]_j)<1$ and $\min_{j\notin S}\mathsf{len}([\Psi_r]_j)=1$ for every $r=1,\dots,J$. \label{def:sppartition} \end{definition} A graphical illustration of sparse partitions is provided in Figure~\ref{fig:sppar}. According to Definition~\ref{def:sppartition}, the extended box partition $\mathfrak X^\ast$ is $S$-chopped for some $S\subseteq S_0$. Observe that $\mathfrak X^\ast$ is not always $S_0$-chopped, since $\mathfrak X$ may not have been split in some coordinates. For example, if $f_0(x_1,x_2,x_3)=h_0(x_1,x_3)=\sin(x_1)\cos(x_3)\mathbbm 1(0\le x_1\le 0.5)\mathbbm 1(0\le x_3\le 1)$, then $S_0=\{1,3\}$ but $\mathfrak X^\ast=([0,0.5]\times[0,1]^2,(0.5,1]\times[0,1]^2)$ is $\{1\}$-chopped.
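The chopping condition of Definition~\ref{def:sppartition} is easy to check mechanically. The following sketch, which encodes a box as a tuple of per-coordinate intervals (an encoding chosen here purely for illustration), reproduces the example just given.

```python
def is_chopped(partition, S):
    """S-chopped (Definition): every box has side-length < 1 in all coordinates
    of S and side-length exactly 1 in all remaining coordinates."""
    for box in partition:
        for j, (a, b) in enumerate(box, start=1):
            length = b - a
            if j in S and not length < 1:
                return False
            if j not in S and length != 1:
                return False
    return True

# The partition from the text: [0,1]^3 split only at x1 = 0.5.
Xstar = [((0.0, 0.5), (0.0, 1.0), (0.0, 1.0)),
         ((0.5, 1.0), (0.0, 1.0), (0.0, 1.0))]
print(is_chopped(Xstar, {1}))      # True: {1}-chopped
print(is_chopped(Xstar, {1, 3}))   # False: no box is chopped in x3
```

Closedness of the interval endpoints is ignored here; only the side-lengths matter for Definition~\ref{def:sppartition}.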
In particular, $\mathfrak X^\ast$ is $\varnothing$-chopped if $R=1$ no matter what $S_0$ is. From this it is clear that the sparsity of $\mathfrak X^\ast$ is not the same as the sparsity of $f_0$. In what follows, we use the notation $S(\mathfrak X^\ast)\subseteq S_0$ to denote the sparsity of $\mathfrak X^\ast$; that is, $\mathfrak X^\ast$ is $S(\mathfrak X^\ast)$-chopped. \subsection{Tree-based partitions} \label{sec:partition} In this work, for estimators of the true function $f_0$, we focus on piecewise constant learners, i.e., step functions that are constant on each piece of a box partition of $[0,1]^p$. A precise description of piecewise constant learners requires an underlying partitioning rule that produces a partition for these step functions. In tree-structured models, the idea is based on recursively applying binary splitting rules to split the domain $[0,1]^p$. Here we shed light on this mechanism to construct tree-based partitions, while deferring a complete description of the induced step functions to Section~\ref{sec:bayesforest}. For a given box $\Psi\subseteq[0,1]^p$, choose a {\em splitting coordinate} $j\in\{1,\dots, p\}$ and a {\em split-point} $\tau_j\in{\mathsf{int}}([\Psi]_j)$. The pair $(j,\tau_j)$ then dichotomizes $\Psi$ along the $j$th coordinate into two boxes: $\{ x\in\Psi: x_j \le \tau_j \}$ and $\{ x\in\Psi:x_j > \tau_j \}$, where $x_j$ is the $j$th entry of $x$. Starting from the root node $[0,1]^p$, the procedure is iterated $K-1$ times in a top-down manner by picking one box for a split each time. This generates $K$ disjoint boxes $\Psi_1,\dots,\Psi_K$, called {\em terminal nodes}, which constitute a tree-shaped partition of $[0,1]^p$, called a {\em tree partition}. We call this iterative procedure the {\em binary tree partitioning}.
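The binary tree partitioning procedure above can be sketched directly; as before, the encoding of boxes as tuples of per-coordinate intervals is an illustrative choice of ours.

```python
def split_box(box, j, tau):
    """Dichotomize a box along coordinate j at an interior split-point tau."""
    a, b = box[j]
    assert a < tau < b, "split-point must lie in the interior of [box]_j"
    left, right = list(box), list(box)
    left[j], right[j] = (a, tau), (tau, b)
    return tuple(left), tuple(right)

def binary_tree_partition(p, splits):
    """Start from the root node [0,1]^p and apply K-1 splits, each given as
    (index of the box to split, splitting coordinate, split-point)."""
    boxes = [tuple((0.0, 1.0) for _ in range(p))]
    for idx, j, tau in splits:
        boxes.extend(split_box(boxes.pop(idx), j, tau))
    return boxes

# Two splits produce K = 3 terminal nodes in [0,1]^2.
part = binary_tree_partition(2, [(0, 0, 0.5), (1, 1, 0.3)])
print(len(part))   # 3
```

The resulting terminal nodes are disjoint and cover the domain, so their volumes always sum to one.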
We will further refer to the resulting tree partitions as {\em flexible tree partitions} to emphasize that splits can occur anywhere in the domain $[0,1]^p$ (not necessarily at dyadic midpoints or observed covariate values). According to Definition~\ref{def:sppartition}, we say that a flexible tree partition is $S$-chopped if splitting coordinates $j$ are restricted to a subset $S\subseteq\{1,\dots,p\}$. Note that while flexible tree partitions are always box partitions, the reverse is not generally true; see Figure~\ref{fig:partition}. \begin{figure}[t!] \centering \begin{subfigure}[b]{1.8in} \resizebox{1.8in}{!}{\quad \begin{tikzpicture} \draw (-33.5,0) rectangle (-26.5,7); \draw (-31.5,7) -- (-31.5,5) -- (-28.5,5) -- (-28.5,0); \draw (-33.5,2) -- (-28.5,2); \draw (-31.5,2) -- (-31.5,5); \draw (-26.5,5) -- (-28.5,5); \end{tikzpicture}\quad } \caption{A non-tree box partition.} \label{fig:partition1} \end{subfigure} \begin{subfigure}[b]{1.8in} \resizebox{1.8in}{!}{\quad \begin{tikzpicture} \draw (-33.5,0) rectangle (-26.5,7); \draw (-30.5,7) -- (-30.5,5) -- (-28,5) -- (-28,0); \draw (-33.5,2.5) -- (-30.5,2.5); \draw (-30.5,0) -- (-30.5,5); \draw (-26.5,5) -- (-28,5); \end{tikzpicture}\quad } \caption{A tree partition.} \label{fig:partition2} \end{subfigure} \caption{Examples of non-tree box partitions and tree partitions.} \label{fig:partition} \end{figure} Although the binary tree partitioning allows splits to occur anywhere in the domain, Bayesian tree models usually take advantage of priors that choose split-points from a predetermined discrete set. For example, in regression with continuous covariates, observed covariate values are typically used for split-points \citep{chipman1998bayesian,denison1998bayesian,chipman2010bart}. Following this convention, \citet{rockova2020posterior} and \citet{rockova2019theory} investigated posterior contraction of BART in Gaussian nonparametric regression with fixed covariates.
Here we relax this restriction while keeping split-points chosen from a discrete set. To this end, we define a discrete collection of locations where splits can occur, which we call a split-net. \begin{definition}[Split-net] For an integer sequence $b_n$, a split-net $\mathcal Z=\{z_i\in[0,1]^p,~i=1,\dots,b_n\}$ is a set of $b_n$ points $z_i=(z_{i1},\dots,z_{ip})^\top\in[0,1]^p$ at which possible splits occur along coordinates. \label{def:split-net} \end{definition} For a given split-net $\mathcal Z$, we call each point $z_i=(z_{i1},\dots,z_{ip})^\top$ a {\em split-candidate}. For a given splitting coordinate $j$ and a split-net $\mathcal Z$, a split-point will be chosen from $[\mathcal Z]_j\cap\mathsf{int}([\Psi]_j)$ to dichotomize a box $\Psi$. Note that $[\mathcal Z]_j=\{z_{ij}\in[0,1],i=1,\dots,b_n\}$ may have fewer elements than $\mathcal Z$ due to duplication. We denote by $b_j(\mathcal Z)$ the cardinality of $[\mathcal Z]_j$, i.e., the number of unique values in the $b_n$-tuple $(z_{1j},\dots,z_{b_n j})$. Then we have $\max_{1\le j\le p} b_j(\mathcal Z)\le b_n$ by definition. For example, consider the regular (equidistant) grid system illustrated in Figure~\ref{fig:spn1}, where $b_j(\mathcal Z)=b_n^{1/p}<b_n$, $j=1,\dots,p$. This simplest split-net will be further discussed in Section~\ref{sec:grid}. It is also possible to construct a split-net such that $b_j(\mathcal Z)=b_n$, $j=1,\dots, p$, as shown in Figure~\ref{fig:spn2}. As noted above, another typical example of $\mathcal Z$ is the set of observed covariate values in fixed-design nonparametric regression with $b_n=n$ (supposing that all $x_i$ are distinct). This specific example will be discussed in Section~\ref{sec:fixeddesign}. Our definition of split-nets yields additional flexibility in situations where no deterministic covariate values are available, such as density estimation or nonparametric regression with random covariates.
A subset of the observed covariate values can also be used in a fixed-design regression setup. In assigning a prior over tree partitions, we will assume that splits in the binary partitioning rule occur only at the points in $\mathcal Z$; that is, for every splitting box $\Psi\subseteq[0,1]^p$ with a splitting coordinate $j$, a split-point $\tau_j$ is chosen such that $\tau_j\in[\mathcal Z]_j\cap\mathsf{int}([\Psi]_j)$. Since a split is restricted to the interior of a given interval, some split-candidates may have already been eliminated in the previous steps of the splitting procedure (see Figure~\ref{fig:spn1}). Clearly, a tree partition constructed by $\mathcal Z$ is an instance of flexible tree partitions, but the reverse is not the case. To distinguish the two more clearly, we make the following definition. \begin{figure}[t!] \centering \begin{subfigure}[b]{2in} \includegraphics[width=\linewidth]{fig-spn1-eps-converted-to.pdf}\vspace*{-0.1in} \caption{Regular grid system} \label{fig:spn1} \end{subfigure} \begin{subfigure}[b]{2in} \includegraphics[width=\linewidth]{fig-spn2-eps-converted-to.pdf}\vspace*{-0.1in} \caption{Split-net without duplication} \label{fig:spn2} \end{subfigure} \caption{Examples of the split-net with $b_n=25$ in two dimensions. For the regular grid in (a), one can easily see that $b_j(\mathcal Z)=5$, $j=1,2$; hence, initial splits eliminate the possibility of other splits. The split-candidates of the split-net in (b) are unique in every coordinate, so $b_j(\mathcal Z)=b_n$, $j=1,2$. 
} \label{fig:spn} \end{figure} \begin{definition}[$\mathcal Z$-tree partition] For a given split-net $\mathcal Z$, a flexible tree partition $\mathcal T=(\Omega_1,\dots,\Omega_K)$ of $[0,1]^p$ with boxes $\Omega_k\subseteq[0,1]^p$, $k=1,\dots,K$, is called a $\mathcal Z$-tree partition if every split occurs at a point $z_i\in\mathcal Z$.\footnote{ The notation $\mathcal T=(\Omega_k)_k$ is used only for the $\mathcal Z$-tree partitions with a split-net $\mathcal Z$, with some suitable subscript and/or superscript if required. We denote flexible tree partitions by $\mathfrak Y=(\Psi_k)_k$, as for general box partitions. } \end{definition} \iffalse \begin{figure}[t!] \centering \resizebox{3.2in}{!}{ \begin{tikzpicture} \tikzstyle{block} = [rectangle, draw, minimum width=5em, text centered, rounded corners, minimum height=4em] \node[block] at (-1.5,5) {$\begin{array}{c} {\mathcal Z}\text{-tree} \\ \text{Partitions} \end{array}$}; \node[block] at (1.75,5) {$\begin{array}{c} \text{Flexible Tree} \\ \text{Partitions} \end{array}$}; \node[block] at (5,5) {$\begin{array}{c} \text{Box} \\ \text{Partitions} \end{array}$}; \node at (3.5,5) {$\subseteq$}; \node at (0,5) {$\subseteq$}; \end{tikzpicture} } \caption{The relationship among the three types of partitions.} \label{fig:partirel} \end{figure} \fi In summary, we have the following relationship among the three types of partitions: $\{\text{$\mathcal Z$-tree partitions}\}\subseteq\{\text{Flexible tree partitions}\}\subseteq\{\text{Box partitions}\}$. Similar to flexible tree partitions, $\mathcal Z$-tree partitions can be $S$-chopped for a subset $S\subseteq\{1,\dots,p\}$ no matter what $\mathcal Z$ is employed. Since we aim at sparse estimation in high-dimensional setups, we are mostly interested in $S$-chopped $\mathcal Z$-tree partitions for some low-dimensional $S$. In what follows, we denote by $\mathscr T_{S,K,\mathcal Z}$ the set of all $S$-chopped $\mathcal Z$-tree partitions with $K$ boxes.
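The coordinatewise cardinalities $b_j(\mathcal Z)$ behind Figure~\ref{fig:spn} can be computed directly. The two nets below, a regular $5\times 5$ grid and a net with no duplicated coordinate values, are illustrative constructions of ours with $b_n=25$.

```python
import random

def proj_cardinality(Z, j):
    """b_j(Z): the number of unique j-th coordinate values in the split-net."""
    return len({z[j] for z in Z})

# (a) Regular 5x5 grid: b_n = 25 split-candidates but only 5 values per coordinate.
grid = [(i / 6, k / 6) for i in range(1, 6) for k in range(1, 6)]
# (b) A net whose coordinate values are unique in every dimension: b_j(Z) = b_n.
rng = random.Random(1)
xs = rng.sample(range(1, 100), 25)
ys = rng.sample(range(1, 100), 25)
no_dup = [(x / 100, y / 100) for x, y in zip(xs, ys)]

print([proj_cardinality(grid, j) for j in (0, 1)])     # [5, 5]
print([proj_cardinality(no_dup, j) for j in (0, 1)])   # [25, 25]
```

This makes concrete why initial splits in the regular grid eliminate many later split-candidates, whereas the duplication-free net never does.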
\begin{remark} The definition of a $\mathcal Z$-tree partition is introduced to restrict possible splits to a discrete set. This means that we assign a discrete prior on the tree topologies (see Section~\ref{sec:prior}). One may instead assign a prior on the topology of flexible tree partitions, in which case a split-net $\mathcal Z$ is not needed. For regression problems, most of the recent BART procedures deploy a discrete set of split-candidates in their prior constructions using the observed covariate values. We aim to generalize this convention while incorporating it into our framework. A discrete prior has an advantage in that it is invariant to a transformation of predictor variables \citep{chipman1998bayesian}. This paper considers only discrete tree priors based on a given split-net $\mathcal Z$; continuous priors on flexible tree partitions are not considered. \end{remark} \subsection{Bayesian trees and forests} \label{sec:bayesforest} We now describe our piecewise constant learners using $\mathcal Z$-tree partitions. While single tree learners have received some attention \citep{chipman1998bayesian, denison1998bayesian}, it is widely accepted that additive aggregations of small trees are much more effective for prediction \citep{chipman2010bart}. Noting that single trees are a special case of tree ensembles (forests), we will focus on forests throughout the rest of the paper. We consider a fixed number $T$ of trees. For a given split-net $\mathcal Z$ and for each $t\leq T$, we denote with $\mathcal T^t=(\Omega_1^t,\dots,\Omega_{K^t}^t)$ a $\mathcal Z$-tree partition of size $K^t$ and with $\beta^t=(\beta_1^t,\dots,\beta_{K^t}^t)^\top\in \mathbb R^{K^t}$ the heights of the step function, called the {\em step-heights}.
An additive tree-based learner is then fully described by a tree ensemble $\mathcal E =(\mathcal T^1,\dots, \mathcal T^T)$ and terminal node parameters $B=({\beta^{1\top}},\dots,{\beta^{T\top}})^\top\in \mathbb R^{\sum_{t=1}^T K^t}$ through \begin{align} f_{\mathcal E, B}(x)=\sum_{t=1}^T \sum_{k=1}^{K^t} \beta_k^t \mathbbm 1(x\in\Omega_k^t). \label{eqn:treelearner} \end{align} That is, $f_{\mathcal E, B}$ is constant on the boxes constructed by overlapping $\mathcal Z$-tree partitions $\mathcal T^1,\dots,\mathcal T^T$. \citet{chipman2010bart} recommended the choice $T=200$, which was seen to provide good empirical results. For a given ensemble $\mathcal E$, we henceforth define $\mathcal F_{\mathcal E}=\{f_{\mathcal E, B}: B\in\mathbb R^{\sum_{t=1}^T K^t}\}$ to be the set of functions in \eqref{eqn:treelearner}. If $\mathcal E$ consists of a single tree $\mathcal T$, we instead write $\mathcal F_{\mathcal T}$ to denote $\mathcal F_{\mathcal E}$. Our objective is to characterize the posterior asymptotic properties of the tree learners in \eqref{eqn:treelearner} in estimating the true function $f_0$ belonging to $\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)$ or $\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)\cap \mathcal C([0,1]^p)$. This goal requires two key properties of the procedure. First, appropriate prior distributions should be assigned to the tree learners $f_{\mathcal E, B}$ in \eqref{eqn:treelearner} so that the induced posterior can achieve the desired asymptotic properties. Second, there should exist a piecewise constant tree learner approximating $f_0$ with a suitable approximation error matched to our target rate. The following two sections aim to elucidate these in detail.
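Evaluating the additive learner in \eqref{eqn:treelearner} amounts to summing one step-height per tree. The sketch below is an illustrative implementation using the same interval-tuple box encoding as before; for simplicity it tests membership with closed intervals and relies on the disjointness of each tree partition to resolve boundary ties.

```python
def in_box(x, box):
    """Closed-interval membership test (boundary ties resolved by first match)."""
    return all(a <= xj <= b for xj, (a, b) in zip(x, box))

def forest_predict(x, ensemble, heights):
    """f_{E,B}(x): sum over trees t and terminal nodes k of beta^t_k 1(x in Omega^t_k)."""
    total = 0.0
    for tree, beta in zip(ensemble, heights):
        for box, b in zip(tree, beta):
            if in_box(x, box):
                total += b
                break          # boxes within one tree partition are disjoint
    return total

# Two one-dimensional stumps with splits at 0.5 and 0.3 (illustrative values).
tree1 = [((0.0, 0.5),), ((0.5, 1.0),)]
tree2 = [((0.0, 0.3),), ((0.3, 1.0),)]
B = [(1.0, 2.0), (10.0, 20.0)]
print(forest_predict((0.4,), [tree1, tree2], B))   # 21.0
```

The prediction is constant on each cell of the overlap of the two partitions, here the intervals cut by $\{0.3, 0.5\}$, matching the description below \eqref{eqn:treelearner}.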
\section{Tree and forest priors in high dimensions} \label{sec:prior} \subsection{Priors over tree topologies with sparsity} \label{sec:treeprior} Conventional tree priors \citep{chipman1998bayesian,denison1998bayesian} are not designed for high-dimensional data with a sparse underlying structure. Prior modifications are thus required for trees to meet the demands of high-dimensional applications \citep{linero2018bayesian-jasa,linero2018bayesian-jrssb,rockova2020posterior}. \citet{rockova2020posterior} adopted a spike-and-slab prior for BART to achieve adaptability to unknown sparsity levels, but the computation of the posterior distribution is much more challenging than in the original BART algorithm due to the nature of the point-mass prior. \citet{linero2018bayesian-jasa} and \citet{linero2018bayesian-jrssb} considered a sparse Dirichlet prior on splitting coordinates for a computationally feasible algorithm, while achieving the theoretical optimality in the high-dimensional scenario. In this paper, we deploy the sparse Dirichlet prior developed by \citet{linero2018bayesian-jasa} for ease of computation of the posterior. Unlike the original tree priors, the BART model with the sparse Dirichlet prior chooses a splitting coordinate $j$ from a proportion vector $\eta=(\eta_1,\dots,\eta_p)^\top$ belonging to the $p$-dimensional simplex $\mathbb S^p=\{(x_1,\dots,x_p)^\top\in\mathbb R^p: \sum_{j=1}^p x_j=1, x_j\ge 0,j=1,\dots,p\}$. A proportion vector $\eta$ has a Dirichlet prior with $\zeta>0$ and $\xi>1$, \begin{align} \eta=(\eta_1,\dots,\eta_p)^\top\sim \text{Dir}(\zeta/p^{\xi},\dots,\zeta/p^{\xi}). \label{eqn:dir} \end{align} The requirement $\xi>1$ is needed for technical reasons. The prior imposes sparsity on the splitting variables (we refer the reader to Figure~2 of \citet{linero2018bayesian-jasa}). Given a proportion vector $\eta$, the BART prior is assigned as in \citet{chipman2010bart} with a minor modification.
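A draw from the prior in \eqref{eqn:dir} can be sketched via the standard Gamma representation of the Dirichlet distribution; the parameter values below are illustrative, not taken from the text.

```python
import random

def sparse_dirichlet(p, zeta=1.0, xi=1.5, rng=random):
    """eta ~ Dir(zeta/p^xi, ..., zeta/p^xi), drawn by normalizing Gamma variates.
    A small concentration zeta/p^xi pushes most mass onto a few coordinates."""
    shape = zeta / p ** xi
    g = [rng.gammavariate(shape, 1.0) for _ in range(p)]
    s = sum(g)
    return [x / s for x in g]

random.seed(7)
eta = sparse_dirichlet(p=20)
print(abs(sum(eta) - 1.0) < 1e-9)   # True: eta lies in the simplex
```

For large $p$ the common concentration parameter $\zeta/p^{\xi}$ is tiny, so a typical draw places nearly all of its mass on very few coordinates, which is exactly how this prior encodes sparsity of the splitting variables.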
Assuming an independent product prior for $\mathcal E$, i.e., $\Pi(\mathcal E)=\prod_{t=1}^T\Pi(\mathcal T^t)$, a Bayesian CART prior \citep{chipman1998bayesian} is assigned to each $\mathcal T^t$. The procedure begins with the root node $[0,1]^p$ of depth $\ell=0$, where the depth of a node means the number of edges along the path from the root node down to that node. For each $\ell=0,1,2,\dots$, each node of depth $\ell$ is split with prior probability $\nu^{\ell+1}$ for $\nu\in(0,1/2)$. If a node corresponding to a box $\Omega$ is split, a splitting coordinate $j$ is drawn from the proportion vector $\eta$ and a split-point $\tau_j$ is chosen randomly from $[\mathcal Z]_j\cap\mathsf{int}([\Omega]_j)$ for a given $\mathcal Z$. The procedure repeats until all nodes are terminal. The original CART prior proposed by \citet{chipman1998bayesian} uses a splitting probability that decays polynomially. \citet{rockova2019theory} showed that this decay may not be fast enough, and suggested using an exponentially decaying probability, as we do here. This modification gives rise to the desirable exponential tail property of tree sizes. \citet{linero2018bayesian-jrssb} handled this issue by assigning a prior on the number $T$ of trees. Since we want to fix $T$ as in the practical implementation of BART, we use the exponentially decaying prior probability for splits. \subsection{Prior on step-heights} \label{sec:betaprior} To complete the prior on the sparse function space, what remains to be specified is the prior on step-heights $B$ in \eqref{eqn:treelearner}. Given $K^1,\dots,K^T$ induced by $\mathcal E$, \citet{chipman2010bart} suggested using a Gaussian prior on $B$: \begin{align*} d\Pi(B| K^1,\dots, K^T) = \prod_{t=1}^T \prod_{k=1}^{K^t} \phi(\beta_k^t ; 0,1/T), \end{align*} where $\phi(\, \cdot\, ;\mu,\sigma^2)$ is the Gaussian density with mean $\mu$ and variance $\sigma^2$.
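To make the role of the scaling $1/T$ explicit, note that any $x$ lies in exactly one terminal node of each tree, so that, conditionally on the ensemble $\mathcal E$, the prior variance of the aggregate prediction remains constant in $T$:
\begin{align*}
{\rm Var}\big(f_{\mathcal E, B}(x)\big)
={\rm Var}\bigg(\sum_{t=1}^T \beta_{k_t(x)}^t\bigg)
=\sum_{t=1}^T \frac{1}{T}=1,
\end{align*}
where $k_t(x)$ denotes the index of the terminal node of $\mathcal T^t$ containing $x$ (a notation used only for this display).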
The variance $1/T$ shrinks step-heights toward zero, limiting the effect of individual components by keeping them small enough for large $T$. This choice is preferred in view of the practical performance, but any zero-mean multivariate Gaussian prior on $B$ gives rise to the same optimal properties as long as the eigenvalues of the covariance matrix are bounded below and above. Throughout the paper, we put a Gaussian prior on the step-heights $B$ in most cases. From the computational point of view, this choice is certainly appealing in Gaussian nonparametric regression due to its semi-conjugacy. For theoretical purposes, a prior with thicker, but still exponentially decaying, tails, such as the Laplace distribution, can easily replace the Gaussian prior for the same optimality under relaxed conditions. Although such a prior may loosen the restriction on $\lVert f_0\rVert_\infty$ \citep{rockova2019semi}, we mostly consider normal priors throughout the paper, even for non-Gaussian models for the sake of simplicity. We consider non-Gaussian priors only when required for theoretical purposes; see, for example, a truncated prior for regression with random design in Section~\ref{sec:furapp}. \section{Approximating the true function} \label{sec:appgen} Recall that tree learners $f_{\mathcal E, B}$ in \eqref{eqn:treelearner} are piecewise constant, whereas the true function $f_0$ does not have to be. This will not be an issue as long as there exists a tree learner which can approximate $f_0$ sufficiently well. In this section, we establish the approximation theory for tree ensembles in the context of our targeted function spaces. For isotropic classes, it is well-known that balanced $k$-d trees \citep{bentley1979multidimensional} give rise to rate-optimal approximations under mild regularity conditions \citep{rockova2020posterior}.
This is not necessarily the case for our general setup where smoothness may vary over the domain and where cycling repeatedly through the coordinates (as is done in the $k$-d tree) may not be enough to capture localized features of $f_0$. In this section, we generalize the notion of $k$-d trees and show that there exists a good partitioning scheme for piecewise heterogeneous anisotropic classes. Although our primary interest lies in additive tree aggregations in \eqref{eqn:treelearner}, we show that a single deep tree can approximate $f_0$ well. We therefore consider only single trees $\mathcal T$ and suppress the superscript $t$ throughout this section. \subsection{Split-nets for approximation} \label{sec:splitnet} Approximation properties of tree-based estimators are driven by the density and regularity of a chosen split-net. Roughly speaking, a good approximation requires that a split-net have two properties: (i) it should be dense enough so that the boundaries of the box partition $\mathfrak X^\ast=(\Xi_1^\ast,\dots,\Xi_R^\ast)$, extended from $\mathfrak X=(\Xi_1,\dots,\Xi_R)$, can be detected by a $\mathcal Z$-tree partition with a minimal error; and (ii) it should be regular enough so that there exists a $\mathcal Z$-tree partition that captures local/global features of $f_0$ on each $\Xi_r^\ast$. We elucidate these two properties below. \subsubsection{Dense split-nets: Global approximability} \label{sec:dennet} Recall that the underlying partition $\mathfrak X^\ast=(\Xi_1^\ast,\dots, \Xi_R^\ast)$ for the true function is unknown. From the sheer flexibility of binary tree partitioning, we expect that the boundaries can be detected well enough by a $\mathcal Z$-tree partition if $\mathfrak X^\ast$ is a flexible tree partition. If the prior rewards partitions that are sufficiently close to $\mathfrak X^\ast$, Bayesian CART (or BART) is expected to adapt to unknown $\mathfrak X^\ast$ without much loss of efficiency.
We examine when this adaptivity can be achieved in more detail below. \begin{figure}[t!] \centering \resizebox{4.5in}{!}{ \begin{tikzpicture} \draw (-15.25,0) rectangle (-8.25,7); \draw (-13.25,7) -- (-13.25,0); \draw (-13.25,3) -- (-8.25,3); \node at (-14.25,3.5) {\Large$\Psi_1^1$}; \node at (-10.75,5) {\Large$\Psi_2^1$}; \node at (-10.75,1.5) {\Large$\Psi_3^1$}; \draw (-7.5,0) rectangle (-0.5,7); \draw [dashed] (-4.75,7) -- (-4.75,0); \draw [dashed] (-4.75,4) -- (-0.5,4); \node at (-6,3.5) {\Large$\Psi_1^2$}; \node at (-2.75,5.25) {\Large$\Psi_2^2$}; \node at (-2.75,2.25) {\Large$\Psi_3^2$}; \draw (1,0) rectangle (8,7); \draw (3,7) -- (3,0); \draw (3,3) -- (8,3); \draw (1,0) rectangle (8,7); \draw [dashed] (3.75,7) -- (3.75,0); \draw [dashed] (3.75,4) -- (8,4); \draw [stealth-stealth,blue] (7.75,4) -- (7.75,3); \node [blue] at (6.5,3.5) {\Large$\Upsilon(\mathfrak Y^1,\mathfrak Y^2)$}; \node at (0.25,3.5) {\LARGE$\Rightarrow$}; \node at (-11.75,7.5) {\Large$\mathfrak Y^1$}; \node at (-4,7.5) {\Large$\mathfrak Y^2$}; \end{tikzpicture} } \caption{A two-dimensional example of the Hausdorff-type divergence in Definition~\ref{def:haus}. The divergence is the largest Hausdorff distance between matched boxes under the best matching of the two partitions.} \label{fig:diver} \end{figure} The ability to detect $\mathfrak X^\ast$ is thus closely tied to the density of the split-net $\mathcal Z$; it should be dense enough so that a $\mathcal Z$-tree partition can be constructed that is sufficiently close to $\mathfrak X^\ast$. Therefore, we need a tool to measure the closeness between two partitions. To this end, below we introduce a Hausdorff-type divergence; see Figure~\ref{fig:diver} for an illustration. 
\begin{definition}[Hausdorff-type divergence] For any two box partitions $\mathfrak Y^1=(\Psi_1^1,\dots,\Psi_J^1)$ and $\mathfrak Y^2=(\Psi_1^2,\dots,\Psi_J^2)$ with the same number $J$ of boxes, we define a divergence between $\mathfrak Y^1$ and $\mathfrak Y^2$ as \begin{align*} \Upsilon(\mathfrak Y^1,\mathfrak Y^2) =\min_{(\pi(1)\dots\pi(J))\in P_\pi[J]}\, \max_{1\le r\le J} \, {\rm Haus}(\Psi_r^1,\Psi_{\pi(r)}^2) , \end{align*} where $P_\pi[J]$ denotes the set of all permutations $(\pi(1)\dots\pi(J))$ of $\{1,\dots,J\}$ and ${\rm Haus}(\cdot,\cdot)$ is the Hausdorff distance. \label{def:haus} \end{definition} The permutation in Definition \ref{def:haus} makes the specification immune to the ordering of boxes. We want the split-net $\mathcal Z$ to produce a $\mathcal Z$-tree partition $\mathcal T$ such that $\Upsilon(\mathfrak X^\ast,\mathcal T)$ is smaller than some threshold. Section~\ref{sec:app} establishes how small these thresholds should be so that the tree learner is close to $f_0$ (for various approximation metrics). The following definition will be useful in characterizing the details. \begin{definition}[Dense split-net] For a given subset $S\subseteq\{1,\dots,p\}$ and an integer $J\ge1$, consider an $S$-chopped partition $\mathfrak Y=(\Psi_1,\dots,\Psi_J)$ of $[0,1]^p$ with boxes $\Psi_r\subseteq [0,1]^p$, $r=1,\dots,J$. For any given $c_n\ge0$, a split-net $\mathcal Z=\{z_i\in[0,1]^p,i=1,\dots, b_n\}$ is said to be $(\mathfrak Y, c_n)$-dense if there exists an $S$-chopped $\mathcal Z$-tree partition $\mathcal T=(\Omega_1,\dots,\Omega_J)$ of $[0,1]^p$ such that $\Upsilon(\mathfrak Y,\mathcal T)\le c_n$. \label{def:den-net} \end{definition} In Section~\ref{sec:app}, the approximation theory will require that $\mathcal Z$ be $(\mathfrak X^\ast, c_n)$-dense for some suitable $c_n\ge0$. Note that the ideal case $c_n=0$ can be achieved only when $\mathfrak X^\ast$ is a $\mathcal Z$-tree partition. 
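To make Definition~\ref{def:haus} concrete, the following minimal Python sketch (purely illustrative; the helper names are ours and not part of the paper's methodology) evaluates $\Upsilon$ for small partitions by brute force over box matchings. It exploits the fact that the Euclidean distance from a point to a box is convex, so the directed distance $\sup_{x\in A}\mathrm{dist}(x,B)$ is attained at a corner of $A$.

```python
from itertools import permutations, product

def box_hausdorff(A, B):
    """Euclidean Hausdorff distance between two axis-aligned boxes.

    A box is a tuple of (lo, hi) intervals, one per coordinate.  The
    directed distance sup_{x in A} dist(x, B) is convex in x and hence
    attained at a corner of A, so checking the corners suffices.
    """
    def directed(P, Q):
        return max(
            sum(max(lo - c, 0.0, c - hi) ** 2
                for c, (lo, hi) in zip(corner, Q)) ** 0.5
            for corner in product(*P)  # all corners of P
        )
    return max(directed(A, B), directed(B, A))

def divergence(part1, part2):
    """Hausdorff-type divergence: minimize over box matchings the
    worst Hausdorff distance between matched boxes."""
    J = len(part1)
    return min(
        max(box_hausdorff(part1[r], part2[pi[r]]) for r in range(J))
        for pi in permutations(range(J))
    )
```

For instance, for the one-dimensional partitions $([0,0.5],[0.5,1])$ and $([0,0.6],[0.6,1])$ the optimal matching pairs the overlapping boxes and gives $\Upsilon=0.1$; the divergence is zero exactly when the two partitions coincide up to relabeling of the boxes.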
This condition is obviously satisfied in the case $R=1$: if $J=1$, i.e., $\mathfrak Y=([0,1]^p)$, then $\Upsilon(\mathfrak Y,\mathcal T)=0$ for $\mathcal T=([0,1]^p)$, and hence every split-net $\mathcal Z$ is $(([0,1]^p),0)$-dense. In most other situations, however, the condition is very restrictive. We will see in Theorem~\ref{thm:approx2} that in many cases it suffices that $c_n$ tend to zero at a suitable rate. This means that $\mathfrak X^\ast$ should be at least a flexible tree partition, but not necessarily a $\mathcal Z$-tree partition. If $\mathfrak X^\ast$ is a box partition but not a flexible tree partition, we can redefine $\mathfrak X^\ast$ by adding more splits to make it a flexible tree partition. For example, the non-tree box partition in Figure~\ref{fig:partition} can be extended to a tree partition with a single extra split. In Section~\ref{sec:exmspnet}, we present some examples of dense split-nets. Dense split-nets are nested: a $(\mathfrak Y, c_n)$-dense split-net is also $(\mathfrak Y, \tilde c_n)$-dense for every $\tilde c_n\ge c_n$. We are interested in the smallest possible $c_n$. In particular, every split-net $\mathcal Z$ is $(\mathfrak Y, 1)$-dense for any box partition $\mathfrak Y$. \subsubsection{Regular split-nets: Local approximability} \label{sec:regular} Beyond closely tracking smoothness boundaries, good tree partitions should be able to capture the local/global smoothness features of $f_0$. In other words, there should exist a $\mathcal Z$-tree partition that achieves an optimal approximation error determined by our target rate. In Section~\ref{sec:dennet}, we focused on the {\em global} approximability of the underlying partitions, which requires split-nets to be suitably dense. Now, we focus on {\em local} approximability. 
Assume that $\mathfrak X^\ast$ can be approximated well (as discussed in the previous section) by an $S(\mathfrak X^\ast)$-chopped $\mathcal Z$-tree partition $\mathcal T^\ast=(\Omega_1^\ast,\dots,\Omega_R^\ast)$.\footnote{ The notation $\mathcal T^\ast=(\Omega_1^\ast,\dots,\Omega_R^\ast)$ with an asterisk is only used to denote an $S(\mathfrak X^\ast)$-chopped $\mathcal Z$-tree partition approximating $\mathfrak X^\ast=(\Xi_1^\ast,\dots,\Xi_R^\ast)$. } More specifically, for a $(\mathfrak X^\ast, c_n)$-dense split-net $\mathcal Z$ with a given $c_n>0$, $\mathcal T^\ast$ is defined as ${\text{argmin}}_{\mathcal T \in\mathscr T_{S(\mathfrak X^\ast),R,\mathcal Z}}\Upsilon(\mathfrak X^\ast,\mathcal T)$. We now focus on local approximability inside each box $\Omega_r^\ast$. Ideally, one would like to construct a sub-tree partition of this local box that balances out the approximation errors in all coordinates. Therefore, we first need to devise a splitting scheme that achieves this balancing condition. The regularity of split-nets can then be spelled out relative to this scheme. We now zoom in on a single box $\Omega_r^\ast$. Recall that the true function $f_0$ has anisotropic smoothness on each $\Xi_r^\ast$. Intuitively, denser subdivisions are required for less smooth coordinates to capture the local features. Allowing splits to occur more often in certain directions, we define below the {\em anisotropic $k$-d tree}, which achieves the desired approximation error for anisotropic smoothness. The definition requires the notion of {\em midpoint-splits}, defined as follows. For a given box $\Psi$ and a splitting coordinate $j$, a midpoint-split picks the $\lceil \tilde b_j(\mathcal Z,\Psi)/2 \rceil$th split-candidate in $[\mathcal Z]_j\cap {\mathsf{int}}([\Psi]_j)$ as the split-point $\tau_j$, where $\tilde b_j(\mathcal Z,\Psi)$ is the cardinality of $[\mathcal Z]_j\cap {\mathsf{int}}([\Psi]_j)$. 
\begin{definition}[Anisotropic $k$-d tree] Consider a smoothness vector $\alpha=(\alpha_1,\dots,\alpha_d)^\top\in(0,1]^d$, a box $\Psi\subseteq[0,1]^p$, a split-net $\mathcal Z=\{z_i\in[0,1]^p,~i=1,\dots,b_n\}$, an integer $L >0$, and an index set $S=\{s_1,\dots, s_d\}\subseteq\{1,\dots,p\}$ with $|S|=d$. We define the {\em anisotropic $k$-d tree} $AKD(\Psi;\mathcal Z,\alpha,L,S)$ as the iterative splitting procedure that partitions $\Psi$ into disjoint boxes as follows. \begin{enumerate} \item Start from the root node by setting $\Omega_1^\circ=\Psi$ and set $l_j=0$, $j=1,\dots, d$. \item \label{sp:2} For splits at iteration $1+\sum_{j=1}^d l_j$, choose the index $j$ minimizing $l_j\alpha_j$. If the minimum is attained by multiple indices, choose the smallest such $j$. \item For all boxes $\Omega_k^\circ$, $k=1,\dots, 2^{\sum_{j=1}^d l_j}$, at the current iteration, do the midpoint-splits with the given $\mathcal Z$ and the splitting coordinate $s_j$ chosen by $j$. Relabel the generated new boxes as $\Omega_k^\circ$, $k=1,\dots, 2^{1+\sum_{j=1}^d l_j}$, and then increase $l_j$ by one for the chosen $j$. \item Repeat 2--3 until either $\sum_{j=1}^d l_j=L$ or the midpoint-split is no longer available. Return $(l_1,\dots,l_d)^\top$ and $\mathcal T^\circ=(\Omega_1^\circ,\dots, \Omega_{2^{L^\circ}}^\circ)$, where $L^\circ = \sum_{j=1}^d l_j$. \end{enumerate} \label{def:sp} \end{definition} \begin{figure}[t!] \centering \includegraphics[width=2.5in]{fig1-eps-converted-to.pdf}\vspace*{-0.1in} \caption{A realization of the anisotropic $k$-d tree with smoothness parameters $\alpha_1=0.25$ (for the horizontal axis) and $\alpha_2=0.5$ (for the vertical axis), and some box $\Psi$ (the shaded box) that is a subspace of $[0,1]^2$ (the outer square). Since $2\alpha_1=\alpha_2$, the box $\Psi$ is split along the less smooth horizontal coordinate twice as often as along the vertical coordinate. 
} \label{fig:akd} \end{figure} Note that the anisotropic $k$-d tree construction depends on the smoothness, which is unknown. It is not meant as a practical estimator; we use it in the technical proofs only to show that a good tree approximator exists. One possible realization of the anisotropic $k$-d tree generating process is given in Figure~\ref{fig:akd}. Observe that $AKD(\Psi;\mathcal Z,\alpha,L,S)$ returns a tree partition $\mathcal T^\circ=(\Omega_1^\circ,\dots,\Omega_{2^{L^\circ}}^\circ)$ of $\Psi$ and a vector $(l_1,\dots,l_d)^\top$ such that $L^\circ=\sum_{j=1}^d l_j\le L$.\footnote{ The notation $\mathcal T^\circ = (\Omega_k^\circ)_k$ with a circle is used only for tree partitions of some box $\Psi\subseteq[0,1]^p$, returned by the anisotropic $k$-d trees, with some suitable subscript if required. } Although these returned items clearly depend on the inputs of the anisotropic $k$-d tree procedure (i.e., $\Psi$, $\mathcal Z$, $\alpha$, $L$, and $S$), we suppress this dependence throughout the paper. Each $l_j$ counts how many times the $j$th coordinate has been used for splitting. The procedure is designed so that every $l_j$ is approximately proportional to $\alpha_j^{-1}$ after enough iterations. The number of resulting subdivisions along the $j$th coordinate is thus close to $2^{C/\alpha_j}$ for every $j$, for some $C>0$. In the proof of Theorem~\ref{thm:approx2} below, one can see that this matching is indeed optimal and minimizes the induced bias. To play the role of a `sieve' for approximation, $\Psi$ needs to be subdivided sufficiently finely to capture the global/local behavior of a function. The threshold $L$ determines the resolution of the returned tree partition $\mathcal T^\circ=(\Omega_1^\circ,\dots,\Omega_{2^{L^\circ}}^\circ)$. For a good approximation, we are particularly interested in the situation where ${L^\circ}=L$, i.e., the resulting tree has the desired depth. If ${L^\circ}<L$ due to insufficient split-candidates, the resolution may not be good enough. 
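The coordinate-selection rule in steps 2--3 of Definition~\ref{def:sp} is easy to simulate. The following Python sketch (illustrative only; it tracks the counters $l_j$ but not the actual split-points) reproduces the schedule:

```python
def akd_schedule(alpha, L):
    """Counters (l_1, ..., l_d) after L coordinate choices of the
    anisotropic k-d tree: at each round, pick j minimizing l_j * alpha_j,
    breaking ties by the smallest index j (steps 2-3 of the definition)."""
    l = [0] * len(alpha)
    for _ in range(L):
        j = min(range(len(alpha)), key=lambda k: (l[k] * alpha[k], k))
        l[j] += 1
    return l
```

With $\alpha=(0.25,0.5)$ as in Figure~\ref{fig:akd}, six rounds give $(l_1,l_2)=(4,2)$: the less smooth first coordinate is selected twice as often, in line with $l_j\propto\alpha_j^{-1}$.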
Now, we can define the regularity of a split-net on $\Psi\subseteq[0,1]^p$ using $\mathcal T^\circ$. The desirable situation is when all splits occur nearly at the centers of the boxes, so that for any given $j\in S$, the lengths $\mathsf{len}([\Omega_k^\circ]_j)$, $k=1,\dots,2^L$, are well balanced. The evenness of the returned partition is solely determined by the regularity of the split-net $\mathcal Z$. Intuitively, the split-net should be sufficiently regularly distributed to give rise to such a partition, in which case we say the split-net is regular. We make this definition technically precise below; it will serve as a basis for approximating the function classes. See \citet{verma2009spatial} for a related regularity condition. \begin{definition}[Regular split-net] For a given box $\Psi\subseteq[0,1]^p$, a smoothness vector $\alpha\in(0,1]^d$, an integer $L>0$, and an index set $S=\{s_1,\dots,s_d\}\subseteq\{1,\dots,p\}$, we say that a split-net $\mathcal Z$ is $(\Psi,\alpha,L, S)$-regular if $\mathcal T^\circ = (\Omega_1^\circ,\dots,\Omega_{2^{L^\circ}}^\circ)$ and $ (l_1,\dots,l_d)^\top$ returned by $AKD(\Psi;\mathcal Z,\alpha,L,S)$ satisfy $L^\circ=L$ and $\max_k\mathsf{len}([\Omega_k^\circ]_{s_j})\lesssim \mathsf{len}([\Psi]_{s_j}) 2^{-l_j}$ for every $j=1,\dots, d$. \label{def:reg-net} \end{definition} The condition $\max_k\mathsf{len}([\Omega_k^\circ]_{s_j})\lesssim \mathsf{len}([\Psi]_{s_j}) 2^{-l_j}$ is the key to obtaining optimal approximation results. In the ideal case where all splits occur exactly at the center, this condition is trivially satisfied as $\max_k\mathsf{len}([\Omega_k^\circ]_{s_j})= \mathsf{len}([\Psi]_{s_j}) 2^{-l_j}$. The inequality provides much more flexibility: the condition is satisfied in all but very extreme situations. See Section~\ref{sec:exmspnet} for examples of regular split-nets. Similar to dense split-nets, regular split-nets also have nested properties. 
If a split-net $\mathcal Z$ is $(\Psi,\alpha,L, S)$-regular for some $\Psi$, $\alpha$, $L$, and $S$, then it is also $(\Psi, \alpha,\tilde L, S)$-regular for any $\tilde L\le L$. This is easily seen by noting that the latter is determined by a subtree of the fully grown tree for the former. We are particularly interested in the largest possible $L$. \begin{remark} \label{rmk:reg} Since regular split-nets require the desired depth, i.e., $L^\circ=L$, it is of interest to see which $L$ achieves this precondition. Consider a box $\Psi\subseteq[0,1]^p$ and a split-net $\mathcal Z=\{z_i\in[0,1]^p,~i=1,\dots,b_n\}$. If there are no ties in $\mathcal Z$ for any coordinate, i.e., $b_j(\mathcal Z)=b_n$, $j=1,\dots,p$, it is easily checked that any integer $L\le\lfloor\log_2(\tilde b_j(\mathcal Z,\Psi)+1)\rfloor$ gives rise to $L^\circ=L$ with the anisotropic $k$-d tree. (Observe that all $\tilde b_j(\mathcal Z,\Psi)$ are identical in this case.) If there are ties, $L$ may need to be much smaller to achieve $L^\circ=L$, and a tight upper bound may not be available in the general case. \end{remark} \subsection{Approximation theory} \label{sec:app} Our goal is to establish asymptotic properties of the posterior distribution. This requires that tree learners be able to approximate functions in the spaces $\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)$ and $\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)\cap \mathcal C([0,1]^p)$ appropriately. Here, we establish the approximation properties for these sparse function spaces. Recall that a split-net $\mathcal Z$ is required to be suitably dense and regular. First, a split-net $\mathcal Z$ should be $(\mathfrak X^\ast, c_n)$-dense for some suitable $c_n$. The boundaries of $\mathfrak X^\ast=(\Xi_1^\ast,\dots, \Xi_R^\ast)$ should thus be detected well by the binary tree partitioning rule. 
Since $\mathfrak X^\ast$ is approximated by a $\mathcal Z$-tree partition with a given $\mathcal Z$, the underlying partition $\mathfrak X^\ast$ should be at least a flexible tree partition, but a stronger result is obtained if $\mathfrak X^\ast$ is a $\mathcal Z$-tree partition (see Theorem~\ref{thm:approx2} below). Denoting by $\mathcal T^\ast=(\Omega_1^\ast,\dots,\Omega_R^\ast)$ the $S(\mathfrak X^\ast)$-chopped $\mathcal Z$-tree partition approximating $\mathfrak X^\ast$, each box $\Omega_r^\ast$ should be appropriately subdivided to capture the local/global nature of the true function on $\Xi_r^\ast$. (If $R=1$, we write $\mathcal T^\ast=\mathfrak X^\ast=([0,1]^p)$ with $\Omega_1^\ast=[0,1]^p$.) Hence, for a smoothness parameter $A_{\bar\alpha}\in\mathcal A_{\bar\alpha}^{R,d}$ and $L_0$ specified below, $\mathcal Z$ should also be $(\Omega_r^\ast,\alpha_r,L_0,S_0)$-regular, $r=1,\dots,R$. The integer $L_0$ is chosen below such that the approximation error is balanced with our target rate. Let $\mathcal T_r^\circ=(\Omega_{r1}^\circ,\dots, \Omega_{r2^{L_0}}^\circ)$ be the tree partition of $\Omega_r^\ast$ returned by $AKD(\Omega_r^\ast;\mathcal Z,\alpha_r,L_0,S_0)$, $r=1,\dots,R$. Then, the approximating partition $\widehat{\mathcal T}$ is formed by agglomerating all sub-tree partitions $\mathcal T_r^\circ$, leading to an $S_0$-chopped $\mathcal Z$-tree partition $\widehat{\mathcal T}=(\Omega_{11}^\circ,\dots, \Omega_{12^{L_0}}^\circ,\dots, \Omega_{R1}^\circ,\dots, \Omega_{R2^{L_0}}^\circ)$. A graphical illustration of constructing $\widehat{\mathcal T}$ is given in Figure~\ref{fig:apptree}. \begin{figure}[t!] 
\centering \resizebox{1.5in}{!}{ \begin{tikzpicture} \draw[opacity=0] (0,0) rectangle (6,6); \node at (2,4.45) {\large$\Xi_1^\ast$}; \node at (4.8,4.45) {\large$\Xi_2^\ast$}; \node at (1.6,1.55) {\large$\Xi_3^\ast$}; \node at (4.4,1.55) {\large$\Xi_4^\ast$}; \end{tikzpicture} } \hspace{-1.5in}\includegraphics[width=1.5in]{fig-T1-eps-converted-to.pdf} \resizebox{1.5in}{!}{ \begin{tikzpicture} \draw[opacity=0] (0,0) rectangle (6,6); \node at (2,4.45) {\large$\Omega_1^\ast$}; \node at (4.8,4.45) {\large$\Omega_2^\ast$}; \node at (1.6,1.55) {\large$\Omega_3^\ast$}; \node at (4.4,1.55) {\large$\Omega_4^\ast$}; \end{tikzpicture} } \hspace{-1.5in}\includegraphics[width=1.5in]{fig-T2-eps-converted-to.pdf} \resizebox{1.5in}{!}{ \begin{tikzpicture} \draw[opacity=0] (0,0) rectangle (6,6); \end{tikzpicture} } \hspace{-1.5in}\includegraphics[width=1.5in]{fig-T3-eps-converted-to.pdf} \caption{An example of constructing $\widehat{\mathcal T}$. First, $\mathfrak X^\ast=(\Xi_1^\ast,\dots,\Xi_4^\ast)$ is approximated by $\mathcal T^\ast=(\Omega_1^\ast,\dots,\Omega_4^\ast)$. Then, each $\Omega_r^\ast$ is subdivided by the anisotropic $k$-d tree, producing $\mathcal T_r^\circ$, a constituent of $\widehat{\mathcal T}$ displayed in the rightmost panel.} \label{fig:apptree} \end{figure} The strongest approximation results relative to the supremum norm for $\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)$ are of particular interest. Due to the possible discontinuity or heterogeneity at the unknown boundaries of $\mathfrak X^\ast$, however, such results cannot in general be obtained except in the case $R=1$. As the following theorem shows, the conditions can be relaxed if we opt for weaker metrics, which often suffice in many statistical setups. For example, in our examples of Gaussian nonparametric regression in Section~\ref{sec:nonregcon}, we only need an approximation rate in the $L_2$- or empirical $L_2$-sense. 
The approximation results for the continuous variant $\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)\cap \mathcal C([0,1]^p)$ require even milder conditions. \begin{theorem}[Approximation theory] For a split-net $\mathcal Z$ that is $(\mathfrak X^\ast,c_n)$-dense for $c_n>0$ specified below, let $\mathcal T^\ast=(\Omega_1^\ast,\dots,\Omega_R^\ast)={\text{argmin}}_{\mathcal T \in\mathscr T_{S(\mathfrak X^\ast),R,\mathcal Z}}\Upsilon(\mathfrak X^\ast,\mathcal T)$. For every $r=1,\dots, R$, assume that $\mathcal Z$ is $(\Omega_r^\ast,\alpha_r,L_0, S_0)$-regular for a smoothness parameter $A_{\bar\alpha}\in \mathcal A_{\bar\alpha}^{R,d}$ and an integer sequence $L_0>0$ such that $2^{L_0}\asymp ( \lambda^2 d^2 n/(R\log n))^{d/(2\bar\alpha+d)}$. Let $\bar\epsilon_n = (\lambda d)^{d/(2\bar\alpha+d)}(({R\log n})/{n})^{\bar\alpha/(2\bar\alpha+d)}$ and construct the $S_0$-chopped $\mathcal Z$-tree partition $\widehat{\mathcal T}$ as above. Then, the following assertions hold. \begin{enumerate}[label=\rm(\roman*)] \item \label{lmm0:sup2} If $c_n^{\min_{r,j}\alpha_{rj}}\lesssim \bar\epsilon_n/(\lambda |S(\mathfrak X^\ast)|)$, then for every $f_0\in\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)\cap \mathcal C([0,1]^p)$, there exists $\hat f_0\in\mathcal F_{\widehat{\mathcal T}}$ such that $\lVert f_0 - \hat f_0\rVert_\infty\lesssim \bar \epsilon_n$. \item \label{lmm0:Leb1} Fix $v\ge 1$. If \begin{align} c_n \lesssim \left(\frac{\bar\epsilon_n}{\lVert f_0\rVert_\infty}\right)^v \frac{ \min_{r,j}\mathsf{len}([\Xi_r^\ast]_j)}{|S(\mathfrak X^\ast)|}, \label{eqn:cncond1} \end{align} then for every $f_0\in\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)$, there exists $\hat f_0\in\mathcal F_{\widehat{\mathcal T}}$ such that $\lVert f_0 - \hat f_0\rVert_{v}\lesssim \bar \epsilon_n$. 
If \eqref{eqn:cncond1} is not satisfied but \begin{align} c_n^{1+v\min_{r,j}\alpha_{rj}} \lesssim \left(\frac{\bar\epsilon_n }{\lambda}\right)^v \frac{\min_{r,j}\mathsf{len}([\Xi_r^\ast]_j)}{|S(\mathfrak X^\ast)|^{v+1}}, \label{eqn:cncond2} \end{align} then for every $f_0\in\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)\cap \mathcal C([0,1]^p)$, there exists $\hat f_0\in\mathcal F_{\widehat{\mathcal T}}$ such that $\lVert f_0 - \hat f_0\rVert_{v}\lesssim \bar \epsilon_n$. \item \label{lmm0:emp1} Fix $v\ge1$. For every $f_0\in \Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)$ and $c_n=1$, there exists $\hat f_0\in\mathcal F_{\widehat{\mathcal T}}$ such that $\lVert f_0 - \hat f_0\rVert_{v,P_{\mathcal Z}}\lesssim \bar \epsilon_n$, where $P_{\mathcal Z}(\cdot)=b_n^{-1} \sum_{i=1}^{b_n} \delta_{z_i}(\cdot)$. \end{enumerate} \label{thm:approx2} \end{theorem} \begin{proof} See Section~\ref{sec:prooflmm1} in the Appendix. \end{proof} The assertion in \ref{lmm0:sup2} is given for the supremum norm, which is the most comprehensive and the one most universally used in statistical estimation problems; it comes, however, at the price of the continuity restriction. The assertion in \ref{lmm0:Leb1} is with respect to the $L_v$-norm, $v\ge 1$, which is useful in many statistical setups. The result allows for discontinuity of $f_0$ if \eqref{eqn:cncond1} is satisfied. Notice that \eqref{eqn:cncond2} is not always milder than \eqref{eqn:cncond1} despite the continuity restriction; the form of \ref{lmm0:Leb1} is adopted for the sake of simplicity. One may easily see that \eqref{eqn:cncond2} is milder than \eqref{eqn:cncond1} only if $\lambda|S(\mathfrak X^\ast)|c_n^{\min_{r,j}\alpha_{rj}}\lesssim \lVert f_0\rVert_\infty$. This is often satisfied since we are usually interested in a $c_n$ that decreases polynomially in $n$. 
The assertion in \ref{lmm0:emp1} is particularly useful in regression setups with $\mathcal Z$ chosen by fixed covariates (see Section~\ref{sec:fixeddesign}). Note that \ref{lmm0:emp1} only requires the regularity of a split-net $\mathcal Z$, as all split-nets are $(\mathfrak X^\ast,1)$-dense. Observe that $|S(\mathfrak X^\ast)|=0$ if $R=1$. In this particular case, the conditions for $c_n$ in \ref{lmm0:sup2} and \ref{lmm0:Leb1} are automatically satisfied since all split-nets are $(([0,1]^p),0)$-dense. The assertions then only require a split-net to be suitably regular. This fact certainly makes sense because there are no boundaries to be detected if $R=1$. Note also that $\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)=\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)\cap \mathcal C([0,1]^p)$ if $R=1$, and hence \ref{lmm0:sup2} is the strongest in this case. If $R>1$, the conditions in \ref{lmm0:sup2} and \ref{lmm0:Leb1} are not trivially satisfied. The conditions are oracle-type conditions since they depend on unknown model components, e.g., $A_{\bar\alpha}\in\mathcal A_{\bar\alpha}^{R,d}$, $\lambda$, and $|S(\mathfrak X^\ast)|$. More practical conditions can be obtained by plugging in reasonable bounds of the unknown components. For example, we cannot hope for better than $\bar\epsilon_n\gtrsim (\lambda d R (\log n )/n)^{1/3} $ due to the fundamental limitation of piecewise constant learners. For the procedure to be consistent, we need the necessary conditions $d/\bar\alpha\ll \log n$ and $\lambda^{\bar\alpha/d}R\ll n$ (see the rate in \eqref{eqn:rate} below). We can also assume that $\min_{r,j}\mathsf{len}([\Xi_r^\ast]_j)$ is bounded away from zero or decreases sufficiently slowly. Putting everything together, the conditions in \ref{lmm0:sup2} and \ref{lmm0:Leb1} can be easily satisfied if $c_n$ is a decreasing polynomial in $n$ with a suitable exponent (a suitable decreasing order of $\min_{r,j}\alpha_{rj}$ is required for \ref{lmm0:sup2}). 
Note again that \ref{lmm0:emp1} does not require a decreasing $c_n$. No upper bounds for $b_n$ and $b_j(\mathcal Z)$ are imposed in Theorem~\ref{thm:approx2}; the approximation results are more easily achieved with larger values of $b_j(\mathcal Z)$, $j=1,\dots, p$. However, values increasing too fast may harm the contraction rate as they escalate the model complexity. In Section~\ref{sec:nonreg}, we will see that our main results on the optimal posterior contraction require that $\max_{1\le j\le p}\log b_j(\mathcal Z)\lesssim \log n$. Hence, we are ultimately interested in split-nets with well-balanced $b_j(\mathcal Z)$, $j=1,\dots, p$. \subsection{Examples of split-nets for approximation} \label{sec:exmspnet} Although the notion of dense and regular split-nets is crucial in characterizing the approximation theory in Section~\ref{sec:app}, it remains unclear how to obtain such a good split-net in practice. We will show that the two split-nets in Figure~\ref{fig:spn} are suitably dense and regular, and hence fulfill the requirements of Theorem~\ref{thm:approx2}. Throughout this section, $\mathcal T^\ast=(\Omega_1^\ast,\dots,\Omega_R^\ast)$ is defined as $\mathcal T^\ast={\text{argmin}}_{\mathcal T \in\mathscr T_{S(\mathfrak X^\ast),R,\mathcal Z}}\Upsilon(\mathfrak X^\ast,\mathcal T)$ for a given $\mathcal Z$. \subsubsection{Regular grid} \label{sec:grid} We first consider a regular grid $\mathcal Z = \{(i-1/2)/b_n^{1/p},i=1,\dots,b_n^{1/p}\}^p$ for $b_n$ such that $b_n^{1/p}$ is an integer. This is the simplest example of a split-net in the sense of Definition~\ref{def:split-net}. A two-dimensional example is illustrated in Figure~\ref{fig:spn1}. The following lemma shows that, with an appropriately chosen $b_n$, a regular grid is suitably dense and regular under mild conditions. \begin{lemma}[Regular grid] Consider a regular grid $\mathcal Z$ with $b_n=n^{cp}$ for a constant $c\ge1$. 
If $\min_{r,j}\mathsf{len}([\Xi_r^\ast]_j)\gg n^{-c}$ and $\lambda d \mathbbm 1(R>1) /\min_{r,j}\mathsf{len}([\Xi_r^\ast]_j)^{\bar\alpha/d+1/2}\lesssim n^{c \bar\alpha /d+(c-1)/2}\sqrt{R \log n}$, then $\mathcal Z$ is $(\mathfrak X^\ast, c_n)$-dense and $(\Omega_r^\ast,\alpha_r,L_0,S_0)$-regular for $r=1,\dots,R$, where $c_n=n^{-c}\mathbbm 1(R>1)$. \label{lmm:grid2} \end{lemma} \begin{proof} See Section~\ref{sec:prooflmm2} in the Appendix. \end{proof} The second condition simplifies to $\lambda d\mathbbm 1(R>1) /\sqrt{\min_{r,j}\mathsf{len}([\Xi_r^\ast]_j)}\lesssim n^{(c-1)/2}\sqrt{R\log n}$ as $\bar\alpha\rightarrow 0$. We pay particular attention to the case $R=1$, i.e., $\mathfrak X^\ast=([0,1]^p)$, where the conditions are trivially satisfied. In this case, we obtain the strongest result in \ref{lmm0:sup2} of Theorem~\ref{thm:approx2} (recall that $\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)=\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)\cap \mathcal C([0,1]^p)$ if $R=1$). For the case $R>1$, the conditions are very mild if $c$ is large enough, given the necessary conditions $d\ll \log n$ and $\lambda^{\bar\alpha/d}R\ll n$ for consistent estimation. The choice $c=1$ can even be sufficient under stronger boundedness conditions. Since $c_n$ is a decreasing polynomial in $n$ with our choice of $b_n$, the assertions in \ref{lmm0:sup2} and \ref{lmm0:Leb1} of Theorem~\ref{thm:approx2} hold with a large enough $c$. Note that \ref{lmm0:emp1} of Theorem~\ref{thm:approx2} also holds trivially with this $\mathcal Z$. Since $\max_{1\le j\le p}\log b_j(\mathcal Z)=p^{-1}\log b_n\lesssim \log n$, a regular grid satisfies the condition for the optimal posterior contraction specified in Section~\ref{sec:nonreg}. This nice property makes a regular grid very appealing for practical use given its simplicity, and there is not much benefit in considering more complicated split-nets. 
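As a quick sanity check on the construction of Section~\ref{sec:grid} (an illustrative Python sketch under our own naming, not code accompanying the paper), the grid $\mathcal Z=\{(i-1/2)/b_n^{1/p}:i=1,\dots,b_n^{1/p}\}^p$ indeed has $b_j(\mathcal Z)=b_n^{1/p}$ distinct values in every coordinate, so that $\max_j\log b_j(\mathcal Z)=p^{-1}\log b_n$:

```python
from itertools import product

def regular_grid(b_n, p):
    """Regular grid split-net {(i - 1/2)/m : i = 1..m}^p with m = b_n^(1/p)."""
    m = round(b_n ** (1.0 / p))
    if m ** p != b_n:
        raise ValueError("b_n^(1/p) must be an integer")
    axis = [(i - 0.5) / m for i in range(1, m + 1)]
    return [pt for pt in product(axis, repeat=p)]

Z = regular_grid(16, 2)  # b_n = 16, p = 2, hence m = 4 points per axis
```

Every point lies in the open cube $(0,1)^p$, and each coordinate projection has exactly $m=b_n^{1/p}$ distinct values.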
The only exception is a set of fixed design points commonly used in the literature on BART \citep{chipman2010bart,rockova2020posterior}. A regular grid can easily be extended to an irregular rectangular grid with boxes of different sizes. If every mesh-size of this irregular checkerboard is asymptotically proportional to $1/b_n^{1/p}$, the above results still hold with minor modification. This extension is particularly interesting in a regression setup where the distribution of covariates is explicitly available. For example, it allows us to use the quantiles for grid points, which is a natural way to generate a weakly balanced system \citep{castillo2021uncertainty}. \subsubsection{Fixed design points} \label{sec:fixeddesign} Now we focus on a fixed design regression setup, where observed covariate values are readily available. Using fixed design points is particularly appealing in that \ref{lmm0:emp1} of Theorem~\ref{thm:approx2} (coupled with this split-net) gives an approximation error relative to the empirical probability measure as soon as the split-net is suitably regular. The strategy is conventional in the literature on Bayesian CART and BART \citep{chipman1998bayesian,denison1998bayesian,chipman2010bart}. Suppose that a split-net $\mathcal Z=\{z_i\in[0,1]^p ,~i=1,\dots,n\}$ consists of the observed covariate values in a regression setup. We need to assume that the design points are sufficiently evenly distributed in the unknown set $S_0$. Thus we make the following assumption on $\mathcal Z$. \begin{enumerate}[leftmargin=2.0\parindent,label=\rm(F)] \item \label{asm:fixeddesign} For every $\alpha\in(0,1]^d$ and every box $\Psi\subseteq[0,1]^p$ with $n P_{\mathcal Z}(\Psi)\gg 1$, $\mathcal Z$ is $(\Psi,\alpha,L,S_0)$-regular with $L = \lfloor\log_2 (c n P_{\mathcal Z}(\Psi))\rfloor$ for some constant $c>0$. \end{enumerate} The condition $n P_{\mathcal Z}(\Psi)\gg 1$ means that the number of split-candidates contained in $\Psi$ grows. 
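Assumption~\ref{asm:fixeddesign} ties the admissible depth to the empirical mass of a box. The following illustrative Python helper (our own sketch; the constant $c$ is hypothetical and defaults to $1$) computes the budget $L=\lfloor\log_2(cnP_{\mathcal Z}(\Psi))\rfloor$ directly from the design points:

```python
import math

def depth_budget(Z, box, c=1.0):
    """Depth budget L = floor(log2(c * n * P_Z(box))) from Assumption (F),
    where P_Z is the empirical measure of the design points Z and a box is
    a list of (lo, hi) intervals, one per coordinate."""
    inside = sum(
        all(lo <= z[j] <= hi for j, (lo, hi) in enumerate(box)) for z in Z
    )
    if inside == 0:
        return 0
    return max(0, math.floor(math.log2(c * inside)))  # n * P_Z(box) = inside
```

For an $8\times 8$ grid of design points, the lower-left quadrant $[0,0.5]^2$ contains $25$ points, giving $L=\lfloor\log_2 25\rfloor=4$.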
If $\mathcal Z$ is well balanced in $S_0$ and there are no ties so that splits can occur $n P_{\mathcal Z}(\Psi)$ times, then $\mathcal Z$ is $(\Psi,\alpha,L,S_0)$-regular for $L=\lfloor\log_2 (n P_{\mathcal Z}(\Psi)+1)\rfloor$ (see Remark~\ref{rmk:reg}). Our requirement is thus milder. \begin{lemma}[Fixed design points] Consider fixed design points $\mathcal Z=\{z_i,i=1,\dots,n\}$ with Assumption in \ref{asm:fixeddesign}. If $\lambda d \lesssim (n/R)^{\bar\alpha/d} \sqrt{\log n}$, $\min_r P_{\mathcal Z}(\Xi_r^\ast)\gtrsim R^{-1}$, and $R\ll n$, then $\mathcal Z$ is $(\Omega_r^\ast,\alpha_r,L_0,S_0)$-regular for $r=1,\dots,R$. \label{lmm:fixed2} \end{lemma} \begin{proof} See Section~\ref{sec:prooflmm4} in the Appendix. \end{proof} Since $n P_{\mathcal Z}(\Xi_r^\ast)$ is the number of split-candidates in $\Xi_r^\ast$, the condition $\min_r P_{\mathcal Z}(\Xi_r^\ast)\gtrsim R^{-1}$ implies that the number of split-candidates should be well balanced among the $R$ boxes. Our condition $\lambda d \lesssim (n/R)^{\bar\alpha/d} \sqrt{\log n}$ slightly relaxes the condition $\lambda d \lesssim \sqrt{\log n}$ of Theorem~4.1 in \citet{rockova2020posterior} (for the case of global isotropy). The two conditions are comparable if we take $\bar\alpha\rightarrow 0$ from the practical perspective. We see that \ref{lmm0:emp1} of Theorem~\ref{thm:approx2} directly follows from this lemma. Since design points are used as $\mathcal Z$, the term $\lVert f_0 - \hat f_0\rVert_{v,P_{\mathcal Z}}$ is translated into the approximation error relative to the empirical probability measure. In regression setups, this fact makes fixed design points much more attractive than the other split-nets in the previous sections. We also note that the requirement $\max_{1\le j\le p}\log b_j(\mathcal Z)\lesssim \log n$ for the optimal posterior contraction is trivially satisfied in this case. 
\section{BART in nonparametric regression} \label{sec:nonreg} \subsection{Posterior contraction rates} \label{sec:nonregcon} BART is an archetypal example of Bayesian forests \citep{chipman1998bayesian,denison1998bayesian,chipman2010bart}. For fixed design Gaussian nonparametric regression, \citet{rockova2020posterior} and \citet{rockova2019theory} established $L_2$ rate-optimal posterior contraction of BART for high-dimensional isotropic regression functions. Our investigation goes beyond these studies in three aspects: (i) we treat the variance parameter $\sigma^2$ as unknown with a prior; (ii) we consider both fixed and random regression designs; and, most importantly, (iii) the true function is assumed to lie in the piecewise heterogeneous anisotropic space introduced earlier. The last point significantly enlarges the optimality scope of BART. We deal with fixed and random designs separately. This section focuses on the fixed design case, while the random design case is considered in Section~\ref{sec:nonran}. The fixed design regression model is written as \begin{align} Y_i=f_0(x_i)+\varepsilon_i,\quad \varepsilon_i\sim \text{N}(0,\sigma_0^2),\quad i=1,\dots,n, \label{eqn:modelreg} \end{align} where $x_i=(x_{i1},\dots,x_{ip})^\top\in[0,1]^p$, $i=1,\dots, n$, are fixed. The observations are independent but not identically distributed, and hence the asymptotic results are established under the product measure for the $n$ observations. The general theory of posterior contraction requires exponentially powerful tests relative to a semimetric under this product measure \citep{ghosal2017fundamentals}. In nonparametric regression with fixed design, such good tests can be constructed directly for the empirical $L_2$-distance even when the error variance is unknown \citep{salomond2018testing}. We also refer to \citet{ning2020bayesian} and \citet{jeong2021unified} for the construction of relevant tests with respect to the R\'enyi divergence. 
The general theory also requires desirable properties of the prior. We show that the tree priors in Section~\ref{sec:prior} satisfy those conditions. We impose the following assumptions on the true parameters $f_0$ and $\sigma_0^2$. \begin{enumerate}[leftmargin=2.0\parindent,label=\rm(A\arabic*)] \item \label{asm:truef} For $d>0$, $\lambda>0$, $R>0$, $\mathfrak X=(\Xi_1,\dots,\Xi_R)$, and $A_{\bar\alpha}\in\mathcal A_{\bar\alpha}^{R,d}$ with $\bar\alpha\in(0,1]$, the true function satisfies $f_0\in\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)$ or $f_0\in\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)\cap \mathcal C([0,1]^p)$. \item \label{asm:dp} It is assumed that $d$, $p$, $\lambda$, $R$, and $\bar\alpha$ satisfy $\epsilon_n\ll 1$, where \begin{align} \epsilon_n=\sqrt{\frac{d\log p}{n}}+(\lambda d)^{d/(2\bar\alpha+d)}\left(\frac{R\log n}{n}\right)^{\bar\alpha/(2\bar\alpha+d)}. \label{eqn:rate} \end{align} \item \label{asm:fsup} The true function satisfies $\lVert f_0 \rVert_\infty\lesssim \sqrt{\log n}$. \item \label{asm:sig} The true variance parameter satisfies $\sigma_0^2\in[C_0^{-1},C_0]$ for some sufficiently large $C_0>1$. \end{enumerate} Assumption \ref{asm:truef} means that the true regression function $f_0$ lies in a sparse piecewise heterogeneous anisotropic space. If the continuity assumption is further imposed, the approximation results in Theorem~\ref{thm:approx2} are obtained under milder conditions. Assumption \ref{asm:dp} is required to make our target rate $\epsilon_n$ tend to zero. The boundedness condition in \ref{asm:fsup} is made to guarantee sufficient prior concentration under the normal prior on the step-heights specified in \ref{pri:normal} below. Although the Gaussian prior can be replaced by a thick-tailed prior \citep[e.g.,][]{rockova2019semi}, we only consider the Gaussian prior to leverage its semi-conjugacy. Assumption \ref{asm:sig} allows one to assign a standard prior to $\sigma^2$, e.g., an inverse gamma distribution. 
It is also important to choose a suitable split-net so that Theorem~\ref{thm:approx2} can be deployed. For regression with fixed design, we need an approximation result with respect to the empirical $L_2$-norm $\lVert\cdot\rVert_n$ defined as $\lVert f \rVert_n^2=n^{-1}\sum_{i=1}^n |f(x_i)|^2$. We make the following assumptions on the split-net $\mathcal Z$. Below the notation $\mathsf{dep}$ means the depth of a node, the number of nodes along the path from the root node down to that node. \begin{enumerate}[leftmargin=2.0\parindent,label=\rm(A\arabic*),resume] \item \label{asm:spnet0} The split-net $\mathcal Z$ satisfies $\max_{1\le j\le p}\log b_j(\mathcal Z)\lesssim\log n$. \item \label{asm:spnet} The split-net $\mathcal Z$ is suitably dense and regular to construct a $\mathcal Z$-tree partition $\widehat {\mathcal T}$ such that there exists $\hat f_0\in\mathcal F_{\widehat {\mathcal T}}$ satisfying $\lVert f_0-\hat f_0 \rVert_n\lesssim \bar\epsilon_n$ by Theorem~\ref{thm:approx2}. \item \label{asm:spnet1} The $\mathcal Z$-tree partition $\mathcal T^\ast=(\Omega_1^\ast,\dots,\Omega_R^\ast)$ approximating $\mathfrak X^\ast$ satisfies $\max_r\mathsf{dep}(\Omega_r^\ast)\lesssim \log n$. \end{enumerate} Assumption \ref{asm:spnet0} is required for a suitable bound of the entropy and a good prior concentration (see Lemma~\ref{lmm:priorcon}). Assumption \ref{asm:spnet} provides the desired approximation error with respect to the $\lVert\cdot\rVert_n$-distance. Due to Theorem~\ref{thm:approx2} and Lemma~\ref{lmm:fixed2}, using fixed design points as $\mathcal Z$ is of particular interest, as $\lVert\cdot\rVert_{2,P_{\mathcal Z}}$ is equivalent to the empirical $L_2$-norm $\lVert\cdot\rVert_n$ in this case. Assumption \ref{asm:spnet1} is a technical requirement which is certainly mild. This condition is trivially satisfied if $R$ is bounded. Lastly, a careful prior specification is required to obtain the optimal posterior contraction. 
We consider the following prior distributions discussed in Section~\ref{sec:prior}. \begin{enumerate}[leftmargin=2.0\parindent,label=\rm(P\arabic*)] \item \label{pri:tree} For a fixed $T>0$, each tree $\mathcal T^t$, $t=1,\dots,T$, is independently assigned a tree prior with Dirichlet sparsity. \item \label{pri:normal} The step-heights $B$ are assigned a normal prior with a zero-mean and a covariance matrix whose eigenvalues are bounded below and above. \item \label{pri:invgam} The variance parameter $\sigma^2$ is assigned an inverse gamma prior. \end{enumerate} Under the above assumptions and priors, the following theorem formalizes the posterior contraction rate of model \eqref{eqn:modelreg}. \begin{theorem}[Nonparametric regression, fixed design] Consider model \eqref{eqn:modelreg} with Assumptions \ref{asm:truef}--\ref{asm:spnet1} and the prior assigned through \ref{pri:tree}--\ref{pri:invgam}. Then, there exists a constant $M>0$ such that for $\epsilon_n$ in \eqref{eqn:rate}, \begin{align*} \mathbb E_0 \Pi\Big\{(f,\sigma^2):\lVert f-f_0\rVert_n+|\sigma^2-\sigma_0^2|>M \epsilon_n \,\big|\, Y_1,\dots,Y_n\Big\}\rightarrow 0. \end{align*} \label{thm:nonreg} \end{theorem} \begin{proof} See Section~\ref{sec:proofthm1-2} in Appendix. \end{proof} Intuitively, the rate in \eqref{eqn:rate} resembles a near-minimax rate of estimation of high-dimensional anisotropic functions. {The first part in \eqref{eqn:rate} is the near-minimax risk of the penalty for not knowing the subset $S_0$ \citep{raskutti2011minimax}.} The second part in \eqref{eqn:rate} is incurred by anisotropic regression function estimation. Although $\lambda$ and $R$ can be a polynomial in $n$ with a suitably small power to satisfy $\epsilon_n\rightarrow 0$, a particularly interesting case is when both are at most $\log^c n$ for some $c>0$. The second term then corresponds to the near-minimax rate of anisotropic function estimation \citep{hoffman2002random}. 
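As a rough numerical illustration, the two components of the rate $\epsilon_n$ in \eqref{eqn:rate} can be evaluated directly. The sketch below is ours; the function name and all parameter values are illustrative choices, not quantities taken from the paper:

```python
import math

# eps_n = sqrt(d log p / n) + (lambda*d)^{d/(2a+d)} * (R log n / n)^{a/(2a+d)},
# where a denotes the harmonic-mean smoothness (written \bar\alpha in the text).
def eps_n(n, d, p, lam, R, a):
    dim_part = math.sqrt(d * math.log(p) / n)  # cost of not knowing the subset S_0
    fun_part = (lam * d) ** (d / (2 * a + d)) \
        * (R * math.log(n) / n) ** (a / (2 * a + d))
    return dim_part + fun_part

# The rate vanishes as n grows, as required by Assumption (A2) ...
r1 = eps_n(10**3, 2, 100, 1.0, 4, 0.5)
r2 = eps_n(10**6, 2, 100, 1.0, 4, 0.5)
print(r1, r2)

# ... and is slower for rougher functions (smaller a).
print(eps_n(10**6, 2, 100, 1.0, 4, 0.2), eps_n(10**6, 2, 100, 1.0, 4, 0.9))
```

The second component dominates unless $p$ grows very quickly, which matches the discussion of the two parts of the rate above.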
Whether or not the rate in \eqref{eqn:rate} is in fact the actual (near) minimax rate remains to be established. The answer to this question is provided in the following subsection, where we formally derive the minimax lower bound with respect to the $L_2$-risk. \begin{remark} In isotropic regression using BART, \citet{rockova2020posterior} assumed that the first part of the rate in \eqref{eqn:rate} is dominated by the second part, whereby the resulting rate is simplified such that it only depends on the risk of function estimation. Since this restriction is not required, we keep the rate in the form of \eqref{eqn:rate}. \end{remark} \subsection{Minimax lower bounds} \label{sec:minimax} In Section \ref{sec:nonregcon}, we established the posterior contraction rate of BART under relaxed smoothness assumptions. Although the rate in \eqref{eqn:rate} consists of two logical components (a penalty for variable selection uncertainty and a rate of anisotropic function estimation), it is not guaranteed that the {\em whole rate} is (nearly) minimax optimal. While the minimax rates in high-dimensional {\em isotropic} function estimation were studied exhaustively in \citet{yang2015minimax}, extensions to (piecewise) {\em anisotropic} functions {\em have not} been obtained in the literature. We fill this gap by deriving the minimax lower bound in our general smoothness setup. These results will certify that the rates obtained in Section~\ref{sec:nonregcon} are indeed minimax optimal (with respect to the $L_2$-risk) up to a logarithmic factor. To deploy the conventional minimax theory, we consider the model with random design given by \begin{align} Y_i=f_0(X_i)+\varepsilon_i,\quad X_i\sim Q,\quad\varepsilon_i\sim \text{N}(0,\sigma_0^2),\quad i=1,\dots,n, \label{eqn:modelregrd} \end{align} where $X_i=(X_{i1},\dots,X_{ip})$, $i=1,\dots,n$, are $p$-dimensional random covariates and $Q$ is a probability measure such that ${\rm supp}(Q)\subseteq [0,1]^p$. 
We assume (without loss of generality) that $\sigma_0^2$ is fixed to $1$. To obtain a lower bound of the minimax rate, we use the Le Cam equation \citep{birge1993rates,wong1995probability,barron1999risk}. The density $q$ of $Q$ is now assumed to satisfy the following assumption, under which the $L_2(Q)$-norm can be replaced by the $L_2$-norm. \begin{enumerate}[leftmargin=2.0\parindent,label=\rm(M),start=8] \item \label{asm:density} There exist constants $0<\underline q\le\overline q<\infty$ such that the density $q$ satisfies $\underline q\le \inf_{x} q(x)\le\sup_{x} q(x)\le \overline q$. \end{enumerate} We define the minimax risk for any function space $\mathcal F\subseteq L_2(Q)$ as \begin{align*} r_n^2 (\mathcal F,Q) = \inf_{\hat f \in\mathcal B_n} \sup_{f_0\in \mathcal F}\mathbb E_{f_0,Q}\lVert \hat f-f_0 \rVert_{2,Q}^2, \end{align*} where $\mathcal B_n$ is the space of all $L_2(Q)$-measurable function estimators and $\mathbb E_{f,Q}$ is the expectation operator under the model with $f$ and $Q$. The Le Cam equation requires suitable upper and lower bounds of the metric entropy of the target function space. We thus define the bounded function space $\overline{\Gamma}^{A_{\bar\alpha},d,p}_{\lambda,M}(\mathfrak X)=\{f\in{\Gamma}^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X) : \lVert f \rVert_\infty\le M\lambda\}$ for any $M>0$. Since our contraction rate is the same for both ${\Gamma}^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)$ and ${\Gamma}^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)\cap {\mathcal C}([0,1]^p)$, we aim to construct a lower bound of $r_n \big(\overline{\Gamma}^{A_{\bar\alpha},d,p}_{\lambda,M}(\mathfrak X)\cap {\mathcal C}([0,1]^p),Q\big)$ close enough to $\epsilon_n$. \begin{theorem}[Minimax lower bound] Consider model \eqref{eqn:modelregrd} for $\sigma_0^2=1$ with Assumption \ref{asm:density}. 
For $d>0$, $\lambda>0$, $R>0$, a partition $\mathfrak X=(\Xi_1,\dots,\Xi_R)$ of $[0,1]^d$, and a smoothness parameter $A_{\bar\alpha}\in \mathcal A_{\bar\alpha}^{R,d}$ for $\bar\alpha\in(0,1]$ such that $\log\mathsf{len}([\Xi_r]_j)\gtrsim -1/\alpha_{rj}$, $1\le r\le R$, $1\le j\le d$, there exists $M_d>0$ depending only on $d$ such that \begin{align*} r_n \big(\overline{\Gamma}^{A_{\bar\alpha},d,p}_{\lambda,M}(\mathfrak X)\cap {\mathcal C}([0,1]^p),Q\big)\ge M_{d}\gamma_n, \end{align*} where $\gamma_n = \sqrt{n^{-1}\log\binom{p}{d}}+(\lambda^{d/\bar\alpha}/n)^{\bar\alpha/(2\bar\alpha+d)}$, if $\gamma_n\ll 1/d$. \label{thm:minimax} \end{theorem} \begin{proof} See Section~\ref{sec:proofthm3} in Appendix. \end{proof} Since $M_d$ can depend on $d$, the correct interpretation of the result is with a bounded $d$. Also, our contraction rate $\epsilon_n$ is derived under the condition $\lVert f_0\rVert_\infty\lesssim \sqrt{\log n}$, and hence we assume that $\lambda\lesssim \sqrt{\log n}$ to match the two spaces. One can easily verify that the condition $\log\mathsf{len}([\Xi_r]_j)\gtrsim -1/\alpha_{rj}$, $1\le r\le R$, $1\le j\le d$, leads to the restriction $\log R\lesssim d/\bar\alpha$, by which the term $R$ is removed from our rate $\epsilon_n$ in \eqref{eqn:rate}. Putting the bounds together, $\epsilon_n$ matches $\gamma_n$ up to a logarithmic factor. Even if $d$ is increasing, recall that we must have $d/\bar\alpha\ll \log n$ to guarantee consistent estimation. Therefore, our rate $\epsilon_n$ matches the lower bound $\gamma_n$ up to a logarithmic factor even for increasing $d$, implying that the rate $\epsilon_n$ is near-minimax optimal. \subsection{Numerical study} In this section, we provide a numerical study that shows the successful performance of BART for model \eqref{eqn:modelreg} with a piecewise smooth function. 
For competitors we consider the random forest and deep neural network (DNN) models with the rectified linear unit (ReLU) activation functions, which are believed to work well for discontinuous or complicated smooth functions. In particular, the recent literature has reported that DNN models are instrumental in adapting to complicated function classes with the guaranteed optimal properties \citep[e.g.,][]{petersen2018optimal,imaizumi2019deep,schmidt2020nonparametric,hayakawa2020minimax}. The random forest is expected to perform similarly to BART, as it is also based on the additive tree structure. Our numerical study shows that BART outperforms these competitors in estimating piecewise smooth classes. The test datasets are generated from model \eqref{eqn:modelreg} with the true function $f_0:[0,1]^p\mapsto \mathbb R$, \begin{align} f_0(x_1,\dots,x_p) = 1+\frac{1}{p}\Bigg\{\sum_{j=1}^p(-1)^j\mathbbm 1(x_j\ge 1/2)\Bigg\}\Bigg\{\sum_{j=1}^p (x_j-1/2)^2\Bigg\}, \label{eqn:piecefun} \end{align} with given $p$ and $\sigma_0^2$. The function $f_0$ is discontinuous on $2^p$ pieces of $[0,1]^p$. For a fair comparison to the other methods, we do not use the Dirichlet sparse prior in \eqref{eqn:dir}. Instead, we assign a uniform prior that corresponds to the Dirichlet prior with concentration parameter $1$, with a priori assumption that all predictor variables contribute equally to the observations. Hence, $p$ must be much smaller than $n$. For given predictor variables $X_i$ generated uniformly on $[0,1]^p$, the response variable $Y_i$ is generated from model \eqref{eqn:modelreg}, $i=1,\dots,n$. With the sample size $n=10^3$, we consider the simulation settings $p\in\{2,10,20,50\}$ and $\sigma_0^2\in\{0.05^2,0.5^2\}$. We fit BART with 200 trees using the R package \texttt{BART}. The random forest is fitted by the \texttt{randomForest} package with 200 trees and the maximal node size $5$ or $50$ for each tree. The DNN models are trained by TensorFlow with the Keras interface. 
We consider two DNN models with 2 and 4 hidden layers with $(64,32)$ and $(256,128,64,32)$ hidden units. All hidden units take the ReLU activation function with dropout of rate $0.3$ for regularization. \begin{figure}[t!] \centering \includegraphics[width=6in]{sim-eps-converted-to.pdf} \caption{RMSPEs obtained from 100 replicated datasets for each simulation setting. RF1 stands for the random forest of 200 trees with maximal node size 5 for each tree. RF2 stands for the random forest of 200 trees with maximal node size 50 for each tree. NN1 stands for the DNN model with two hidden layers and $(64,32)$ hidden units. NN2 stands for the DNN model with four hidden layers and $(256,128,64,32)$ hidden units. } \label{fig:sim} \end{figure} Figure~\ref{fig:sim} shows the root mean squared prediction error (RMSPE) obtained by the five methods. The RMSPEs are calculated with randomly drawn out-of-samples in 100 replicated datasets for each simulation setting. The results show that BART clearly outperforms the other learning algorithms for the piecewise smooth function in \eqref{eqn:piecefun}. In particular, the DNN models often fall behind in the competition, while requiring a long computation time. We surmise that this is because the training procedure of DNN models is susceptible to local modes; it is very challenging to fully optimize complicated DNN models. We also tested many other tuning parameter setups and network structures for the random forest and the DNN, but found no improvement. \section{Further applications} \label{sec:furapp} Section~\ref{sec:nonreg} establishes the posterior contraction rate of BART for the nonparametric regression model and justifies its near-minimax optimality. 
Since our approximation theory only requires very general conditions on a split-net, the results can be extended to statistical models beyond nonparametric regression with fixed design. In this section, we consider other applications such as nonparametric regression with random design, density estimation, and nonparametric binary classification. Moreover, since the technical results in Section~\ref{sec:nonreg} hold even with the single tree model ($T=1$), one can find no theoretical advantages of BART over Bayesian CART. A theoretical advantage of BART can be recognized if the true function has an additive structure \citep{linero2018bayesian-jrssb,rockova2020posterior}. Such an extension is also considered in this section. \subsection{Nonparametric regression with random design} \label{sec:nonran} Theorem~\ref{thm:nonreg} quantifies the posterior contraction rate of nonparametric regression with fixed design where the predictor variables are not random variables. Now we consider a random design regression in \eqref{eqn:modelregrd}. The random design assumption is often necessary, for example, in measurement error models \citep{tuo2015efficient} or causal inference models \citep{hahn2020bayesian,ray2020semiparametric}. Here we establish the posterior contraction rate of BART for the random design model in \eqref{eqn:modelregrd}. It is important to note that fixed design points in Section~\ref{sec:fixeddesign} can not be used for a split-net, as the procedure is not truly Bayesian if the prior is dependent on the data. Instead, a regular grid in Section~\ref{sec:grid} can be useful. We consider model \eqref{eqn:modelregrd} for $Q$ a probability measure that satisfies ${\rm supp}(Q)\subseteq [0,1]^p$ with a bounded density. Unlike model \eqref{eqn:modelreg}, model \eqref{eqn:modelregrd} is independent and identically distributed. 
The well-known fact that exponentially powerful tests exist with respect to the Hellinger metric $\rho_{\rm H}(\cdot,\cdot)$ allows one to establish the contraction rate for the corresponding metric \citep{ghosal2000convergence}. However, in normal models, the Hellinger distance is matched to the $L_2$-type metric only when $\lVert f\rVert_\infty$ and $|\log\sigma^2|$ are bounded in the entire parameter space, not only for the true values \citep[e.g.,][]{xie2018adaptive}. Unlike in Theorem~\ref{thm:nonreg}, this restriction requires that $f_0$ be uniformly bounded and a prior be appropriately truncated. Note also that we need a good approximation error with respect to the integrated $L_2$-norm. Below we summarize the required modifications of \ref{asm:fsup}, \ref{asm:spnet}, \ref{pri:normal}, and \ref{pri:invgam}. \begin{enumerate}[leftmargin=2.0\parindent,label=\rm(A\arabic*)] \item[\mylabel{asm:fsup2}{\rm(A3{$^\ast$})}] The true function $f_0$ satisfies $\lVert f_0 \rVert_\infty\le C_0^\ast$ for some sufficiently large $C_0^\ast>0$. \item[\mylabel{asm:spnet2}{\rm(A6{$^\ast$})}] The split-net $\mathcal Z$ is suitably dense and regular to construct a $\mathcal Z$-tree partition $\widehat {\mathcal T}$ such that there exists $\hat f_0\in\mathcal F_{\widehat {\mathcal T}}$ satisfying $\lVert f_0-\hat f_0 \rVert_2\lesssim \bar\epsilon_n$ by Theorem~\ref{thm:approx2}. \item[\mylabel{pri:truncb}{\rm(P2{$^\ast$})}] A prior on the compact support $[-\overline C_1,\overline C_1]$ is assigned to the step-heights $B$ for some $\overline C_1>C_0^\ast$. \item[\mylabel{pri:trucsig}{\rm(P3{$^\ast$})}] A prior on the compact support $[\overline C_2^{-1},\overline C_2]$ is assigned to $\sigma^2$ for some $\overline C_2>C_0$. \end{enumerate} Assumption \ref{asm:spnet2} requires good approximability with respect to the $L_2$-norm. Due to Theorem~\ref{thm:approx2}, a regular grid in Section~\ref{sec:grid} can be useful to meet this requirement. 
We wrap up this section with a theorem that formalizes the posterior contraction of BART for model~\eqref{eqn:modelregrd}. \begin{theorem}[Nonparametric regression, random design] Consider model \eqref{eqn:modelregrd} with Assumptions \ref{asm:truef}, \ref{asm:dp}, \ref{asm:fsup2}, \ref{asm:sig}, \ref{asm:spnet0}, \ref{asm:spnet2}, and \ref{asm:spnet1}, and the prior assigned through \ref{pri:tree}, \ref{pri:truncb}, and \ref{pri:trucsig}. Then, there exists a constant $M>0$ such that for $\epsilon_n$ in \eqref{eqn:rate}, \begin{align*} \mathbb E_0 \Pi\Big\{(f,\sigma^2):\lVert f-f_0\rVert_{2,Q}+|\sigma^2-\sigma_0^2|>M \epsilon_n \,\big|\, (X_1,Y_1),\dots,(X_n,Y_n)\Big\}\rightarrow 0. \end{align*} \label{thm:nonregrd} \end{theorem} \begin{proof} See Section~\ref{sec:proofthm4-5} in Appendix. \end{proof} \subsection{Density estimation} For some probability measure $P$ that satisfies ${\rm supp}(P)\subseteq [0,1]^p$, suppose $n$ independent observations $X_i$, $i=1,\dots,n$, are drawn from $P$, i.e., \begin{align} X_i \sim P,\quad i=1,\dots, n. \label{eqn:density} \end{align} Assume that $P$ is absolutely continuous with respect to the Lebesgue measure with the true density $p_0$. We assign a prior on $p_f$ indexed by $f$ such that $p_f=\exp(f)/\int_{[0,1]^p} \exp(f(x))dx$ with $f$ assigned the forest priors in Section~\ref{sec:prior}. We write $f_0=\log p_0$ while assuming \ref{asm:truef}--\ref{asm:fsup}. We leverage the existence of an exponentially powerful test for the Hellinger metric $\rho_{\rm H}(\cdot,\cdot)$. Due to the relationship between Hellinger balls and supremum-norm balls in density estimation with the exponential link, we need an approximation result with respect to the supremum-norm. This is obtained by \ref{lmm0:sup2} of Theorem~\ref{thm:approx2} with an appropriate split-net, but requires the continuity restriction on the true function. We make the following assumptions to satisfy this requirement. 
\begin{enumerate}[leftmargin=2.0\parindent,label=\rm(A\arabic*)] \item [\mylabel{asm:truef2}{\rm(A1{$^{\ddag}$})}] For $d>0$, $\lambda>0$, $R>0$, $\mathfrak X=(\Xi_1,\dots,\Xi_R)$, and $A_{\bar\alpha}\in\mathcal A_{\bar\alpha}^{R,d}$ with $\bar\alpha\in(0,1]$, the true function satisfies $f_0\in\Gamma^{A_{\bar\alpha},d,p}_{\lambda}(\mathfrak X)\cap \mathcal C([0,1]^p)$. \item[\mylabel{asm:spnet3}{\rm(A6{$^{\ddag}$})}] The split-net $\mathcal Z$ is suitably dense and regular to construct a $\mathcal Z$-tree partition $\widehat {\mathcal T}$ such that there exists $\hat f_0\in\mathcal F_{\widehat {\mathcal T}}$ satisfying $\lVert f_0-\hat f_0 \rVert_{\infty}\lesssim \bar\epsilon_n$ by Theorem~\ref{thm:approx2}. \end{enumerate} We assign the tree prior with Dirichlet sparsity and a normal prior on the step-heights. Under suitable assumptions, the following theorem provides the posterior contraction rate for $p_f$ with respect to the Hellinger distance. \begin{theorem}[Density estimation] Consider model \eqref{eqn:density} with Assumptions \ref{asm:truef2}, \ref{asm:dp}--\ref{asm:fsup}, \ref{asm:spnet0}, \ref{asm:spnet3}, and \ref{asm:spnet1}, and the prior assigned through \ref{pri:tree}--\ref{pri:normal}. Then, there exists a constant $M>0$ such that for $\epsilon_n$ in \eqref{eqn:rate}, \begin{align*} \mathbb E_0 \Pi\Big\{f:\rho_{\rm H}(p_f,p_0)>M \epsilon_n \,\big|\, X_1,\dots,X_n\Big\}\rightarrow 0. \end{align*} \label{thm:density} \end{theorem} \begin{proof} See Section~\ref{sec:proofthm4-5} in Appendix. \end{proof} As mentioned in Section~\ref{sec:nonregcon}, the normal prior in \ref{pri:normal} is not necessary and a heavy-tailed prior can relax the assumption on $\lVert f \rVert_\infty$. Since normal priors are not conjugate to the model likelihood in the density estimation example, there is no clear benefit of adopting \ref{pri:normal} anymore. This is also the case in the example of binary classification given in the next subsection. 
Nevertheless, we employ \ref{pri:normal} for the sake of simplicity. \subsection{Nonparametric binary classification} For a binary response $Y_i\in\{0,1\}$ and a random covariate $X_i\in\mathbb R^p$, assume that we have $n$ independent observations $(X_1,Y_1),\dots,(X_n,Y_n)$ from the binary classification model: \begin{align} \mathbb P_0(Y_i=1|X_i=x) = \varphi_0(x), \quad X_i \sim Q,\quad i=1,\dots, n, \label{eqn:binary} \end{align} for some $\varphi_0: [0,1]^p\mapsto [0,1]$ and some probability measure $Q$ such that ${\rm supp}(Q)\subseteq [0,1]^p$ with a bounded density. We thus consider a binary classification problem with random design. We parameterize the probability function using the logistic link function $H:\mathbb R\mapsto [0,1]$ such that $\varphi_f=H(f)$ for $f$ on which the forest priors in Section~\ref{sec:prior} are assigned. For the true function $\varphi_0$, we write $f_0=H^{-1}(\varphi_0)$ while assuming \ref{asm:truef}--\ref{asm:fsup} as in the density estimation problem. In the proof, it will be shown that the Hellinger metric is bounded by the $L_2(Q)$-distance in this example, and hence \ref{asm:spnet2} is assumed. Similar to Section~\ref{sec:nonran}, fixed design points are not available for a split-net, but a regular grid in Section~\ref{sec:grid} can be useful. The following theorem formalizes the posterior contraction rate with respect to the $L_2(Q)$-distance. \begin{theorem}[Binary classification] Consider model \eqref{eqn:binary} with Assumptions \ref{asm:truef}--\ref{asm:fsup}, \ref{asm:spnet0}, \ref{asm:spnet2}, and \ref{asm:spnet1}, and the prior assigned through \ref{pri:tree}--\ref{pri:normal}. Then, there exists a constant $M>0$ such that for $\epsilon_n$ in \eqref{eqn:rate}, \begin{align*} \mathbb E_0 \Pi\Big\{f:\lVert H(f)- H (f_0)\rVert_{2,Q}>M \epsilon_n \,\big|\, (X_1,Y_1),\dots,(X_n,Y_n)\Big\}\rightarrow 0. \end{align*} \label{thm:binary} \end{theorem} \begin{proof} See Section~\ref{sec:proofthm4-5} in Appendix. 
\end{proof} \subsection{Additive nonparametric regression} Thus far we have considered statistical models with the true function $f_0$ belonging to the piecewise heterogeneous anisotropic H\"older space with sparsity. Since Theorems~\ref{thm:nonreg}--\ref{thm:binary} hold even with the single tree model ($T=1$), the empirical success of BART is not well explained by the previous examples, though the empirical performance of BART should be attributed to its fast mixing to some extent. However, \citet{linero2018bayesian-jrssb} and \citet{rockova2020posterior} observed that BART optimally adapts to a larger class of additive functions which single tree models do not adapt to. In this section, we consider additive nonparametric regression to show theoretical advantages of BART over Bayesian CART. We consider the nonparametric regression model with fixed design in \eqref{eqn:modelreg}, but the true function $f_0$ is assumed to have an additive structure with $T_0$ components, $f_0=\sum_{t=1}^{T_0} f_{0t}$, where each $f_{0t}$ belongs to the piecewise heterogeneous anisotropic H\"older space with sparsity. We also need suitable conditions on a split-net $\mathcal Z$ such that the approximation theory works for every additive component. We thus make the following modifications of the conditions used in Section~\ref{sec:nonregcon}. In what follows, the subscript or superscript $t$ stands for additive component-specific extensions of the model elements used in Section~\ref{sec:nonregcon}. 
\begin{enumerate}[leftmargin=2.0\parindent,label=\rm(A\arabic*)] \item [\mylabel{asm:truefadd}{\rm(A1{$^{\mathsection}$})}] For $d_t>0$, $\lambda_t>0$, $R_t>0$, $\mathfrak X_t=(\Xi_{t1},\dots,\Xi_{tR})$, and $A_{t,\bar\alpha_t}\in\mathcal A_{\bar\alpha_t}^{R_t,d_t}$ with $\bar\alpha_t\in(0,1]$, $1\le t\le T_0$, the true function satisfies $f_0=\sum_{t=1}^{T_0} f_{0t}$ for $f_{0t}\in\Gamma^{A_{t,\bar\alpha_t},d_t,p}_{\lambda_t}(\mathfrak X_t)$ or $f_{0t}\in\Gamma^{A_{t,\bar\alpha_t},d_t,p}_{\lambda_t}(\mathfrak X_t)\cap \mathcal C([0,1]^p)$. \item [\mylabel{asm:dpadd}{\rm(A2{$^{\mathsection}$})}] It is assumed that $d_t$, $p_t$, $\lambda_t$, $R_t$, and $\bar\alpha_t$ satisfy $\epsilon_{t,n}\ll 1$, where $\epsilon_{t,n}=\sqrt{(d_t\log p)/{n}}+(\lambda_t d_t)^{d_t/(2\bar\alpha_t+d_t)}\left(({R_t\log n})/{n}\right)^{\bar\alpha_t/(2\bar\alpha_t+d_t)}$. \item [\mylabel{asm:spnetadd}{\rm(A6{$^{\mathsection}$})}] The split-net $\mathcal Z$ is suitably dense and regular to construct a $\mathcal Z$-tree partition $\widehat {\mathcal T}^t$ such that for $\bar\epsilon_{t,n}=(\lambda_t d_t)^{d_t/(2\bar\alpha_t+d_t)}\left(({R_t\log n})/{n}\right)^{\bar\alpha_t/(2\bar\alpha_t+d_t)}$, there exists $\hat f_{0t}\in\mathcal F_{\widehat {\mathcal T}^t}$ satisfying $\lVert f_{0t}-\hat f_{0t} \rVert_n\lesssim \bar\epsilon_{t,n}$ by Theorem~\ref{thm:approx2}, $1\le t\le T_0$. \item [\mylabel{asm:spnet1add}{\rm(A7{$^{\mathsection}$})}] The $\mathcal Z$-tree partition $\mathcal T_t^\ast=(\Omega_{t1}^\ast,\dots,\Omega_{tR}^\ast)$ approximating $\mathfrak X_t^\ast$ satisfies $\max_r\mathsf{dep}(\Omega_{tr}^\ast)\lesssim \log n$, $1\le t\le T_0$. \end{enumerate} These simply mean that the assumptions in Section~\ref{sec:nonregcon} hold for every additive component $f_{0t}$. It is worth noting that we do not need to modify the prior distribution for additive regression, which makes BART very appealing in that the procedure truly adapts to the unknown true function. 
This fact is due to the use of the Dirichlet prior in \eqref{eqn:dir}; the spike-and-slab prior does not yield such a nice property \citep{rockova2020posterior}. The next theorem provides the posterior contraction rate for the additive regression model. \begin{theorem}[Additive nonparametric regression] Consider model \eqref{eqn:modelreg} with Assumptions \ref{asm:truefadd}--\ref{asm:dpadd}, \ref{asm:fsup}--\ref{asm:spnet0}, and \ref{asm:spnetadd}--\ref{asm:spnet1add} and the prior assigned through \ref{pri:tree}--\ref{pri:invgam}. If $T_0\le T$, there exists a constant $M>0$ such that for $\epsilon_n^\ast=\sqrt{\sum_{t=1}^{T_0}\epsilon_{t,n}^2}$, \begin{align*} \mathbb E_0 \Pi\Big\{(f,\sigma^2):\lVert f-f_0\rVert_n+|\sigma^2-\sigma_0^2|>M \epsilon_n^\ast \,\big|\, Y_1,\dots,Y_n\Big\}\rightarrow 0. \end{align*} \label{thm:addreg} \end{theorem} \begin{proof} See Section~\ref{sec:proofthm4-5} in Appendix. \end{proof} Theorem~\ref{thm:addreg} shows that the posterior contraction rate for additive regression is the sum of the rates for the additive components. If the function space is reduced to a high-dimensional isotropic class, then our rate $\epsilon_n^\ast$ matches the minimax rate for high-dimensional additive regression \citep{yang2015minimax}. We strongly believe that $\epsilon_n^\ast$ is indeed near-minimax optimal and that this can be formally justified by combining the proof technique of our Theorem~\ref{thm:minimax} and the tools for additive scenarios developed in \citet{yang2015minimax}. Considering the length of the paper, however, we do not pursue this direction in this study. \section{Discussion} \label{sec:disc} In this paper, we have enlarged the scope of theoretical understanding of Bayesian forests in the context of function estimation by considering relaxed smoothness assumptions. We introduced a new class of piecewise anisotropic sparse functions, which form a blend of anisotropy and spatial inhomogeneity. 
We have derived the minimax rate of estimation of these functions in high-dimensional regression setups, extending existing results obtained earlier {\em only} for isotropic functions. Next, we have formalized that Bayesian forests attain the near-optimal posterior concentration rate for these general function classes without any need for prior modification. Our results apply to a general class of estimation problems including nonparametric regression with a fixed and random design, binary classification, and density estimation.
null
null
\section{Introduction} \label{sec:intro} Classical scalar fields coupled to quantum matter play an important role in various settings in cosmology. They are used to study the creation of seed perturbations for structure formation, reheating processes, particle production and the creation of baryon asymmetry. In these treatments it is almost invariably assumed that the scalar field evolves in some classical, possibly quantum-corrected, but fixed effective potential. One rarely accounts for the backreaction of the non-equilibrium quanta that may be created during the dynamical process. However, such quanta may be produced copiously during out-of-equilibrium phase transitions~\cite{Traschen:1990sw,Amin:2014eta} by parametric resonance~\cite{Kofman:1994rk,kofman:1997yn,Greene:1997fu,Braden:2010wd} or by spinodal instability~\cite{Calzetta:1989bj,Guth:1985ya,Weinberg:1987vp,Braden:2010wd,Dufaux:2006ee,Fairbairn:2018bsw,Markkanen:2015xuw}, and they could significantly affect the evolution of the system~\cite{Boyanovsky:1992vi,Boyanovsky:1993pf,Arrizabalaga:2004iw,Arrizabalaga:2005tf}. In this paper we study the effects of quantum backreaction on the scalar field evolution using two-particle irreducible (2PI) effective action methods. A crucial step in the rigorous analysis of the problem is performing a consistent renormalization of the equations of motion derived from the 2PI effective action. This is a highly non-trivial task, because in any finite truncation of the 2PI expansion, a number of auxiliary vertex and self-energy functions appear that require setting up consistent renormalization conditions~\cite{Berges:2005hc}. In this paper we carefully go through the renormalization of our model using the method of cancellation of the sub-divergences~\cite{Fejos:2007ec,Arai:2012sh,Pilaftsis:2013xna,Pilaftsis:2017enx}. 
We emphasize that while the renormalization counterterms are constants, the divergences that get subtracted, and hence also the vacuum state of the system, depend on the infrared physics, such as temperature, or even the shape of the non-equilibrium particle spectrum. To be specific, we study a simple $\lambda \phi^4$-model with a spontaneous symmetry breaking tree-level potential. We work in the Hartree approximation and perform the auxiliary renormalizations using the $\overbar{\mathrm{MS}}$ subtraction scheme. The renormalized equations of motion and the 2PI effective action are, however, scale independent and completely specified in terms of physical parameters. We present explicit results for the vacuum and finite temperature effective potentials as well as for the vacuum potential in the presence of non-equilibrium fluctuations. We stress that in the non-equilibrium case the effective potential can only be constructed \emph{a posteriori} and it is not in general a useful quantity for solving the equations of motion. With our renormalized equations we can follow in real time how the potential energy of the classical field is transferred into quantum fluctuations by the non-perturbative processes. We identify a strong parametric resonance, even though our self-coupled system is too complicated to admit a comprehensive analytical stability analysis. We also show that due to backreaction from spinodal instability the field can pass through a potential barrier even when starting with less energy than the initial barrier height. We further follow the full thermal history of a system that starts with pure potential energy, until it is fully thermalized with nearly all of its energy stored in thermal plasma. Finally, we show that at the initial stages of reheating the quantum system is highly coherent, but the coherence is gradually erased by interactions as the system thermalizes. This paper is organized as follows. 
In section~\cref{sec:2PI} we review the 2PI effective action techniques and introduce our truncation scheme, the Hartree approximation. In section~\cref{sec:renormalization-main} we show how to self-consistently renormalize the 2PI equations of motion and express them in terms of physical quantities. We also study both resummed vacuum and thermal effective potentials in the Hartree case and compare them with other approximations. In section~\cref{sec:wigner} we write our equations of motion in the Wigner space in terms of moment functions following references~\cite{Herranen:2008di,Herranen:2010mh}, and also complement the equations with phenomenological friction terms. Section~\cref{sec:results} is dedicated to numerical results. We compute the evolution of various quantities, such as the classical field, particle number and coherence functions using the fully coupled 2PI equations. Finally, section~\cref{sec:conc} contains our conclusions. \begin{figure}[t] \begin{center} \includegraphics[scale=0.95]{ctp.pdf} \end{center} \caption{The Keldysh contour in the complex time plane, running from some initial time to an arbitrary future time and back again.} \label{fig:ctp} \end{figure} \section{2PI effective action and equations of motion} \label{sec:2PI} We will study the non-equilibrium dynamics of a scalar field theory with the potential $V(\phi) = -\frac{1}{2}\mu^2\phi^2 + \frac{1}{4!}\lambda\phi^4$ using the two-particle irreducible (2PI) effective action technique of non-equilibrium quantum field theory~\cite{Cornwall:1974vz,Berges:2004yj}. 
The 2PI effective action for this theory is \begin{equation} \Gamma_{\rm 2PI} [\varphi, {\Delta}] = \mathcal{S}[\varphi] - \frac{\mathrm{i}}{2}\mathrm{Tr}_{\mathcal{C}}\bigl[ \ln ({\Delta})\bigr] + \frac{\mathrm{i}}{2}\mathrm{Tr}_{\mathcal{C}}\bigl[{\Delta}_0^{-1}{\Delta}\bigr] + \Gamma_2[\varphi, {\Delta}], \label{2PI-effective-action} \end{equation} where $\varphi(x)$ is the classical field and ${\Delta}(x,y)$ is the classical connected two-point function and the trace contains integration over the Keldysh contour~\cite{Keldysh:1964ud} $\mathcal{C}$ of figure \ref{fig:ctp}. Moving to a real-time representation the classical action can be written as $\mathcal{S}[\varphi] = \sum_{a=\pm } a\delta^{ab}\mathcal{S}[\varphi_b]$, where $a$ and $b$ indicate the branch on the complex time-contour, and \begin{equation} \mathcal{S}[\varphi_b] = \int {\rm d}^4x \bigg[ \frac{1}{2}(\partial_\mu\varphi_b)^2 + \frac{1}{2}\mu^2\varphi_b^2 - \frac{1}{4!}\lambda \varphi_b^4 \bigg]. \label{eq:classical-effective-action} \end{equation} Similarly, the inverse classical propagator is given by \begin{equation} \mathrm{i}{\Delta}_{0,ab}^{-1}(x,y;\varphi) = - \bigg( \Box_x - \mu^2 + \frac{1}{2}\lambda \varphi_a^2\bigg)\delta^{(4)}(x-y)\delta_{ab}. \label{eq:free-inverse-propagator} \end{equation} Finally, $\Gamma_2$ consists of all 2PI vacuum graphs with lines corresponding to the full propagator ${\Delta}$ and interactions inferred from the shifted action \begin{equation} S_{\rm int}\bigl[\varphi,\phi_q\bigr] = - \sum_{a=\pm} a\delta^{ab} \int {\rm d}^4x\bigg(\frac{1}{3!}\lambda\varphi_b \phi_{qb}^3 + \frac{1}{4!}\lambda\phi_{qb}^4\bigg), \label{eq:shifted-action} \end{equation} where $\phi = \varphi + \phi_q$ and $\phi_q$ is the quantum field. 
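As a quick numerical consistency check of these sign conventions (a minimal sketch with illustrative parameter values $\mu^2 = 1$, $\lambda = 0.5$; the function names are ours, and the $\Box_x$ term is omitted since only the local part is compared), the local, field-dependent part of the inverse classical propagator should equal the second field derivative of the non-derivative part of the classical Lagrangian density:

```python
def L_pot(phi, mu2=1.0, lam=0.5):
    """Non-derivative part of the classical Lagrangian density:
    mu^2 phi^2 / 2 - lam phi^4 / 4!."""
    return 0.5 * mu2 * phi**2 - lam * phi**4 / 24.0

def inv_prop_local(phi, mu2=1.0, lam=0.5):
    """Local part of i Delta_0^{-1}, i.e. -(-mu^2 + lam phi^2 / 2)."""
    return mu2 - 0.5 * lam * phi**2

# Second phi-derivative of L_pot via a central finite difference
phi0, h = 0.7, 1e-4
d2L = (L_pot(phi0 + h) - 2 * L_pot(phi0) + L_pot(phi0 - h)) / h**2
assert abs(d2L - inv_prop_local(phi0)) < 1e-6
```

The agreement fixes the relative signs of the $\mu^2$ and $\lambda$ terms between the action and the inverse propagator.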
The stationarity conditions of $\Gamma_{\rm 2PI}$ will give the equations of motion for the one- and two-point functions: \begin{equation} \frac{\delta \Gamma_{\rm 2PI}}{\delta \varphi_a}=0 \qquad {\rm and} \qquad \frac{\delta \Gamma_{\rm 2PI}}{\delta \Delta_{ab}}=0. \label{eq:stationarity} \end{equation} When the classical solution to the latter equation, parametrized in terms of $\varphi$, is reinserted back into the effective action, we \emph{formally} recover the 1PI action $\hat \Gamma_{\rm 1PI}[\varphi ] = \Gamma_{\rm 2PI}[\varphi,\Delta[\varphi]]$. In the full dynamical case the two equations are, however, strongly coupled and should be solved simultaneously, as we will do in our study. For the classical field we have $\varphi_+(x) = \varphi_-(x)$, so we may drop the branch index and find: \begin{equation} \bigg[ \Box_x - \mu^2 + \frac{1}{6}\lambda\varphi^2(x) + \frac{1}{2} \lambda{\Delta}(x,x)\bigg] \varphi(x) = \frac{\delta \Gamma_2}{\delta \varphi(x)}. \label{eq:eom-for-one-point-function} \end{equation} We have also left the branch indices out of the local correlation function ${\Delta}(x,x)$, which is the same for all components of the two-point function ${\Delta}^{ab}(x,y)$. The stationarity condition for ${\Delta}^{ab}(x,y)$ leads to the Schwinger--Dyson equation \begin{equation} \Big[ \Box_x - \mu^2 + \sfrac{1}{2}\lambda \varphi^2(x)\Big]\mathrm{i}{\Delta}^{ac}(x,y) = a\delta^{ac} \delta^{(4)}(x-y) + b\hspace{-.2em}\int \hspace{-.2em} \mathrm{d}^4z \, \Pi^{ab}(x,z){\Delta}^{bc}(z,y), \label{eq:eom-for-two-point-function} \end{equation} where summation over $b$ is implied and the self-energy function is given by \begin{equation} \Pi^{ab}(x,y) = 2\mathrm{i}ab \, \frac{\delta \Gamma_2[\varphi, {\Delta}]}{\delta {\Delta}^{ba}(y,x)} = a\delta^{ab}\delta^{(4)}(x-y)\Pi_{\mathrm{sg}}(x) + \Pi_{\mathrm{nsg}}^{ab}(x,y). \end{equation} To proceed we also have to specify an approximation for the interaction term $\Gamma_2$. 
\begin{figure}[t] \begin{fmffile}{diagram} \begin{equation*} \hspace{-1em} \parbox{17mm} { \begin{fmfgraph*}(70,40) \fmfleft{i} \fmfright{o} \fmf{phantom}{i,v,v,o} \fmf{plain}{v,v} \fmf{plain,left=90}{v,v} \fmfv{label=\scriptsize{$\sim\!\lRidx{0}\!+\delta_\lambda^\idx{0}$},label.angle=-90,label.dist=.5w}{v} \end{fmfgraph*} } \hspace{0.7em} + \hspace{1.3em} \parbox{20mm} { \begin{fmfgraph*}(40,40) \fmfleft{v1} \fmf{plain,left,tension=.3}{v1,v2,v1} \fmf{plain}{v1,v2} \fmfright{v2} \fmf{phantom}{v1,v3,v2} \fmfv{label=\scriptsize{$\sim\!\left(\lRidx{1}\!+\delta_\lambda^\idx{1}\right)^2$},label.angle=-90,label.dist=.8w}{v3} \end{fmfgraph*} } \hspace{-0.2em} + \hspace{1.4em} \parbox{20mm} { \begin{fmfgraph*}(40,40) \fmfleft{v1} \fmf{plain,left,tension=.3}{v1,v2,v1} \fmf{plain,left=0.4}{v1,v2,v1} \fmfright{v2} \fmf{phantom}{v1,v3,v2} \fmfv{label=\scriptsize{$\sim\!\left(\lRidx{0}\!+\delta_\lambda^\idx{0}\right)^2$},label.angle=-90,label.dist=.8w}{v3} \end{fmfgraph*} } \hspace{-0.2em} + \hspace{1.4em} \parbox{20mm} { \begin{fmfgraph*}(40,40) \fmfforce{(0w,0.5h)}{v1} \fmfforce{(1.0w,0.5h)}{v2} \fmfforce{(.5w,1h)}{v3} \fmfforce{(0.07w,0.25h)}{v4} \fmfforce{(0.93w,0.25h)}{v5} \fmfforce{(0.5w,0.5h)}{v6} \fmf{plain,left,tension=.3}{v1,v2,v1} \fmf{plain}{v6,v3} \fmf{plain}{v6,v4} \fmf{plain}{v6,v5} \fmfv{label=\scriptsize{$\sim\!\left(\lRidx{1}\!+\delta_\lambda^\idx{1}\right)^4$},label.angle=-90,label.dist=.8w}{v6} \end{fmfgraph*} } \hspace{-0.2em} + \hspace{1.4em} \parbox{20mm} { \begin{fmfgraph*}(40,40) \fmfforce{(0w,0.5h)}{v1} \fmfforce{(1.0w,0.5h)}{v2} \fmfforce{(.5w,0h)}{v3} \fmfforce{(0.2w,0.9h)}{v4} \fmfforce{(0.8w,0.9h)}{v5} \fmf{plain,left,tension=.3}{v1,v2,v1} \fmf{plain}{v3,v4} \fmf{plain}{v3,v5} \fmfforce{(0.5w,0.5h)}{v6} \fmfv{label=\scriptsize{$\sim\!\left(\lRidx{0}\!+\delta_\lambda^\idx{0}\right)\left(\lRidx{1}\!+\delta_\lambda^\idx{1}\right)^2$},label.angle=-90,label.dist=.8w}{v6} \end{fmfgraph*} } \hspace{-0.2em} +\hspace{0.4em} \cdots \end{equation*} 
\end{fmffile} \vspace{2.5em} \caption{The first few terms contributing to $\Gamma_2$, including their precise coupling constant dependences.} \label{fig:gamma2-expansion} \end{figure} \subsection{Hartree approximation} \label{sec:hartree} The first few terms contributing to $\Gamma_2$, arising from the action~\cref{eq:shifted-action}, are shown in figure~\cref{fig:gamma2-expansion} (the role of the indices in the couplings is related to renormalization and will be explained in the next section). In this paper we work in the Hartree approximation, which includes only the first term in the series, given by \begin{equation} \Gamma^{\rm H}_2 = -\frac{\lambda}{8} \int \mathrm{d}^4x \, {\Delta}^2(x,x). \end{equation} In this case the self-energy has only a singular or local part: \begin{equation} \Pi_{\mathrm{sg}}(x) = -\frac{\mathrm{i}\lambda}{2} {\Delta}(x,x), \end{equation} while $\Pi_{\mathrm{nsg}}^{ab}(x,y) = 0$. Obviously $\delta \Gamma^{\rm H}_2/\delta\varphi = 0$ as well, so there is no contribution to equation~\cref{eq:eom-for-one-point-function} in the Hartree approximation. We can now write the non-renormalized equations of motion compactly as \begin{subequations} \begin{align} \bigg[ \Box_x - \mu^2 + \frac{1}{6}\lambda\varphi^2(x) + \frac{1}{2} \lambda{\Delta}(x,x)\bigg] \varphi(x) &= 0, \label{eq:eom-for-1pt-fun} \\ \bigg[ \Box_x - \mu^2 + \frac{1}{2}\lambda \varphi^2(x) + \frac{1}{2} \lambda{\Delta}(x,x) \bigg]\mathrm{i}{\Delta}^{ab}(x,y) &= a\delta^{ab} \delta^{(4)}(x-y). \label{eq:eom-for-2pt-fun} \end{align} \end{subequations} Eventually we will move to the Wigner space defined in section~\cref{sec:wigner} and solve these equations numerically in some example cases for homogeneous systems, but before we can do that, we have to address the divergences in ${\Delta}^{ab}$ and in particular in the local correlation function ${\Delta}(x,x)$. 
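The relation between the Hartree truncation and its self-energy can be checked in one line: for the purely local term the functional derivative in the definition of $\Pi^{ab}$ reduces to an ordinary derivative of the integrand. A minimal numerical sketch (illustrative value $\lambda = 0.5$; not code from the paper):

```python
def Gamma2_H(Delta, lam=0.5):
    """Hartree truncation of Gamma_2 (local integrand): -(lam/8) Delta(x,x)^2."""
    return -lam / 8.0 * Delta**2

def Pi_sg(Delta, lam=0.5, h=1e-6):
    """Pi_sg = 2i dGamma_2/dDelta, evaluated by a central difference."""
    return 2j * (Gamma2_H(Delta + h, lam) - Gamma2_H(Delta - h, lam)) / (2 * h)

# Expect the singular self-energy Pi_sg = -(i lam / 2) Delta(x,x)
Delta0 = 0.3
assert abs(Pi_sg(Delta0) - (-0.5j * 0.5 * Delta0)) < 1e-9
```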
\section{Renormalization} \label{sec:renormalization-main} Systematic renormalization in the context of the 2PI expansion was thoroughly discussed in reference~\cite{Berges:2005hc}. Here we use the method introduced in reference~\cite{Fejos:2007ec}, and later used in references~\cite{Arai:2012sh,Pilaftsis:2017enx}, and we include also a connection to physical parameters. The key issue is that any finite order truncation of $\Gamma_2[\varphi,{\Delta}]$ leads to an approximation for $\hat \Gamma_{\rm 1PI}[\varphi]$ that contains infinite resummations of 1PI diagrams and the associated counterterms. This gives rise to a number of {\em auxiliary} $n$-point functions which need independent renormalization conditions. These conditions can be defined by requiring that all sub-divergences cancel~\cite{Fejos:2007ec}, but one needs to introduce a different renormalized parameter for each different operator. To be precise, all $n$-point functions can be classified in terms of the number of classical fields that connect to them, and all functions that are connected also to propagator lines are auxiliary. Below we shall first renormalize the auxiliary $n$-point functions in the $\widebar{\rm MS}$-scheme and show that the resulting 1PI action is independent of the renormalization scale. We start by defining the renormalized fields, propagators, couplings and masses: \begin{equation} \begin{aligned} \phi &\equiv Z^{1/2}_\idx{2} \phi_{\rm R}, &\hspace{5em} \lambda &\equiv \lRidx{I} + \delta\lambda^\idx{I}, \\ {\Delta} &\equiv Z_\idx{0} {\Delta}_{\rm R}, &\hspace{5em} \mu^2 &\equiv \mu^2_{{\rm R}\idx{I}} - \delta\mu^2_\idx{I}, \end{aligned} \label{eq:ren-cond} \end{equation} where the index, ${\rm I}=0,1,2,4$, follows the power of the classical field associated with the $n$-point function. 
Written in terms of the renormalized quantities, the 2PI effective action becomes: \begin{equation} \begin{split} \Gamma_{\rm 2PI} [{\varphi_{\rm R}}, {\Delta}_{\rm R}] &= \mathcal{S}[{\varphi_{\rm R}}] - \frac{\mathrm{i}}{2}\mathrm{Tr}_{\mathcal{C}}\bigl[\ln( Z_\idx{0}{\Delta}_{\rm R}) \bigr] + \frac{\mathrm{i}}{2}\mathrm{Tr}_{\mathcal{C}}\left[{\Delta}_{0\rm R}^{-1}{\Delta}_{\rm R}\right] \\ &+ \delta\mathcal{S}[{\varphi_{\rm R}}] + \frac{\mathrm{i}}{2}\mathrm{Tr}_{\mathcal{C}}\bigl[\delta {\Delta}_{0}^{-1}{\Delta}_{\rm R}\bigr] + \Gamma_2\Bigl[{\varphi_{\rm R}}, {\Delta}_{\rm R};\lRidx{I} + \delta_\lambda^\idx{I}\Bigr], \label{2PI-effective-action_R} \end{split} \end{equation} where $\mathcal{S}[{\varphi_{\rm R}}]$ is the same as in equation~\cref{eq:classical-effective-action} with $\varphi \rightarrow {\varphi_{\rm R}}$, $\mu^2 \rightarrow \mu^2_{\mathrm{R}\idx{2}}$ and $\lambda \rightarrow \lRidx{4}$, and ${\Delta}_{0 \rm R}^{-1}$ is the same as in equation~\cref{eq:free-inverse-propagator} with $\varphi \rightarrow {\varphi_{\rm R}}$, $\mu^2 \rightarrow \mu^2_{\mathrm{R}\idx{0}}$ and $\lambda \rightarrow \lRidx{2}$. 
Moreover we defined the classical counterterm action \begin{equation} \delta \mathcal{S}[\varphi_{{\rm R}b}] \equiv \int {\rm d}^4x \Bigg[ \frac{\delta^\idx{2}_\varphi}{2}(\partial_\mu\varphi_{{\rm R}b})^2 - \frac{1}{2}\delta_\mu^\idx{2}\varphi_{{\rm R}b}^2 - \frac{1}{4!}\delta_\lambda^\idx{4} \varphi_{{\rm R}b}^4 \Bigg] \label{eq:classical-effective-action_dR} \end{equation} and the inverse classical counterterm propagator \begin{equation} \mathrm{i}\delta{\Delta}_{0,ab}^{-1}(x,y;{\varphi_{\rm R}}) \equiv - \bigg( \delta_\varphi^\idx{0}\Box_x + \delta_\mu^\idx{0} + \frac{1}{2}\delta_\lambda^\idx{2} \varphi_{{\rm R}a}^2\bigg)\delta^{(4)}(x-y)\delta_{ab}, \label{eq:free-inverse-propagator_dR} \end{equation} where $\delta_\varphi^\idx{I} \equiv Z_\idx{I} - 1 $ and the other effective counterterms are defined as: \begin{subequations} \label{eq:effcts} \begin{align} \delta_\lambda^\idx{0} &\equiv Z^2_\idx{0} \big(\lRidx{0}+ \delta \lambda^\idx{0}\big) - \lRidx{0},\\ \delta_\lambda^\idx{2} &\equiv Z_\idx{0}Z_\idx{2} \big(\lRidx{2}+ \delta \lambda^\idx{2}\big) - \lRidx{2},\\ \delta_\lambda^\idx{4} &\equiv Z^2_\idx{2} \big(\lRidx{4}+ \delta \lambda^\idx{4}\big) - \lRidx{4},\\ \delta_\mu^\idx{I} &\equiv Z_\idx{I} \bigl(-\mu^2_{{\rm R}\idx{I}} + \delta \mu^2_\idx{I}\bigr) + \mu^2_{{\rm R}\idx{I}}. \end{align} \end{subequations} Also in the interaction term in~\eqref{2PI-effective-action_R} the renormalized couplings in the combination $\lRidx{I} + \delta^\idx{I}_{\lambda}$ follow the power of the classical field in the interaction term~\cref{eq:shifted-action}, rewritten in terms of the renormalized quantities. 
The renormalized equations of motion can now be derived from the renormalized effective action, or more directly from~\eqref{eq:eom-for-1pt-fun} and~\eqref{eq:eom-for-2pt-fun}, by writing the non-renormalized quantities in terms of the renormalized ones: \begin{subequations} \begin{align} \biggl[ Z_\idx{2}\Box_x - \mu^2_{{\rm R}\idx{2}} + \delta_\mu^\idx{2} + \frac{1}{6}\Bigl(\lRidx{4}+\delta_\lambda^\idx{4}\Bigr)\varphi_{\rm R}^2 + \frac{1}{2} \Bigl(\lRidx{2}+\delta_\lambda^\idx{2}\Bigr){\Delta}_{\rm R}\bigg] {\varphi_{\rm R}} &= 0, \label{eq:eom-for-one-point-function-hartree-R} \\ \hspace{-.4em}\biggl[ Z_\idx{0}\Box_x - \mu^2_{{\rm R}\idx{0}} + \delta_\mu^\idx{0} + \frac{1}{2}\Bigl(\lRidx{2}+\delta_\lambda^\idx{2}\Bigr) \varphi_{\rm R}^2 + \frac{1}{2} \Bigl(\lRidx{0}+\delta_\lambda^\idx{0}\Bigr){\Delta}_{\rm R} \bigg]\mathrm{i}{\Delta}_{\rm R}^{ab}(x,y) &= a\delta^{ab}\delta^{(4)}. \label{eq:eom-for-two-point-function-hartree-R} \end{align} \end{subequations} Here we suppressed the arguments in the local functions ${\varphi_{\rm R}}(x)$ and ${\Delta}(x,x)$, as well as in $\delta^{(4)}(x-y)$, for brevity. We now proceed to determine the various counterterms appearing in these equations and in the end find the renormalized equations of motion that include the effects of quantum corrections. \paragraph{Auxiliary renormalization conditions.} Because the operator acting on $\Delta^{ab}_{\rm R}$ in~\cref{eq:eom-for-two-point-function-hartree-R} is independent of branch indices, we can concentrate on the time-ordered component $\Delta_{\mathrm{R}}^{11}$ of the two-point function. We choose the mass-shell renormalization condition in the vacuum configuration $\varphi_{\mathrm{R}} = v_{\mathrm{R}}$, which simultaneously minimizes the effective action. 
That is, we set \begin{equation} \mathrm{i}\bigl({\Delta}^{11}_{\rm R}\bigr)^{-1} = p^2 - m_{\mathrm{R}}^2, \quad \frac{\rm d}{{\rm d}p^2}\mathrm{i}\bigl({\Delta}^{11}_{\rm R}\bigr)^{-1} = 1, \quad {\rm and} \quad \frac{\delta\Gamma_{\rm 2PI}}{\delta \varphi_{\mathrm{R}}}\Big|_{\varphi_{\mathrm{R}}=v_{\mathrm{R}}} = 0. \label{eq:intermediate-ren-conditions} \end{equation} These conditions imply that $Z_\idx{0} = 1$ in the Hartree approximation, and in our current scheme we can also set $Z_\idx{2}=1$ (see footnote~\cref{fnote:physical-parameters} below). The renormalization conditions~\cref{eq:intermediate-ren-conditions} then become: \begin{subequations} \begin{align} - \mu^2_{{\rm R}\idx{2}} + \delta_\mu^\idx{2} + \frac{1}{6}\Bigl(\lRidx{4}+\delta_\lambda^\idx{4}\Bigr)v_{\rm R}^2 + \frac{1}{2} \Bigl(\lRidx{2}+\delta_\lambda^\idx{2}\Bigr){\Delta}_{\rm R}(v_{\rm R}) &= 0, \label{eq:eom-phi-R} \\ - \mu^2_{{\rm R}\idx{0}} + \delta_\mu^\idx{0} + \frac{1}{2}\Bigl(\lRidx{2}+\delta_\lambda^\idx{2}\Bigr) v_{\rm R}^2 + \frac{1}{2} \Bigl(\lRidx{0}+\delta_\lambda^\idx{0}\Bigr){\Delta}_{\rm R}(v_{\rm R}) &= m_{\mathrm{R}}^2, \label{eq:eom-delta-R} \end{align} \end{subequations} where ${\Delta}_{\rm R}(v_{\rm R})$ refers to the still divergent local correlator computed at the renormalization point. It should be noted that ${\Delta}^{ab}_{\rm R}$ is an auxiliary function and the parameter $m^2_{\mathrm{R}}$ is not yet related to any physical one. Similarly, none of the couplings are yet related to observables, and there is a considerable amount of freedom in their definition. We will choose the following conditions:% \footnote{These choices are partly specific for the Hartree approximation, where the self-energy $\Pi^{ab}$ is proportional to the local correlation function. 
In any higher order 2PI truncation $\lRidx{0}$ and $\lRidx{2}$ would need to be renormalized separately.} \begin{subequations} \label{eq:rencons} \begin{align} \delta_\lambda^\idx{0} &= \delta_\lambda^\idx{2}, \label{eq:rencons-1} \\[.5em] -\mu^2_{{\rm R}\idx{0}} + \delta_\mu^\idx{0} &= -\mu^2_{{\rm R}\idx{2}} + \delta_\mu^\idx{2}, \label{eq:rencons-2} \\ \lRidx{4} &= \lRidx{2} - \frac{1}{3}\delta_\lambda^\idx{4} + \delta_\lambda^\idx{2}. \label{eq:rencons-3} \end{align} \end{subequations} Because $Z_\idx{0,2}=1$ here, equation~\cref{eq:rencons}, together with~\cref{eq:ren-cond,eq:effcts} implies that $\lRidx{0} = \lRidx{2}$. Equation~\cref{eq:rencons-2} is less restrictive: it merely states that both renormalized mass terms are related to the same bare mass term. Conditions~\cref{eq:rencons-2,eq:rencons-3} still allow us to choose $\delta_\mu^\idx{2}$ and $\delta_\lambda^\idx{4}$ such that $m_{\rm R}^2$ and $\lRidx{4}$ can be matched to a physical mass parameter and a physical coupling. Using the conditions~\cref{eq:rencons} and equation~\cref{eq:eom-delta-R} we can write equation~\cref{eq:eom-phi-R} simply as \begin{equation} m_{\rm R}^2 - \frac{1}{3}\lRidx{4}v_{\rm R}^2 = 0. \end{equation} That is, we are able to keep the tree-level relation between the coupling $\lRidx{4}$, the background field $v_{\rm R}$ and the mass parameter $m_{\rm R}^2$. \paragraph{Cancelling the sub-divergences.} In order to proceed, we need to find out the divergence structure of the local correlation function. Using dimensional regularization we can write \begin{equation} {\Delta}_{\rm R}(v_{\rm R}) = Q^\epsilon \int \frac{{\rm d}^dp}{(2\uppi )^d}\,{\Delta}_{\rm R}^{11} (p) = -\frac{m_{\mathrm{R}}^2}{16\uppi^2}\biggl[\frac{2}{\overline{\epsilon}} + 1 - \ln\biggl(\frac{m^2_{\mathrm{R}}}{Q^2}\biggr)\biggr], \end{equation} where $\epsilon \equiv 4-d$ and $2/{\overline\epsilon} \equiv 2/\epsilon - \gamma_{\mathrm{E}} + \ln(4\uppi)$ and $Q$ is an arbitrary renormalization scale. 
We now separate ${\Delta}_{\rm R}$ into divergent and finite parts as follows: \begin{equation} {\Delta}_{\rm R}(v_{\rm R}) \equiv m_{\mathrm{R}}^2 {\Delta}_{\overline\epsilon} + {\Delta}_{\rm F0}\bigl(m_{\rm R}^2,Q\bigr), \label{eq:division-R} \end{equation} where ${\Delta}_{\overline\epsilon} \equiv -1/\bigl(8\uppi^2\overline\epsilon\bigr)$. In what follows we will suppress the $Q$-dependence of the function $\Delta_{\rm F0}$ for brevity. Next we insert decomposition~\cref{eq:division-R} back into equation~\cref{eq:eom-delta-R}, use relations~\cref{eq:rencons} and let the leading-order terms define the renormalized mass independently from the terms containing divergences or counterterms. In this way we get two equations out of equation~\cref{eq:eom-delta-R}: \begin{align} m_{\rm R}^2 &\equiv -\mu_{{\rm R}\idx{2}}^2 + \frac{1}{2}\lRidx{2} \Bigl[ v_{\rm R}^2 + {\Delta}_{\rm F0}\bigl(m_{\rm R}^2\bigr)\Bigr], \label{eq:tree-level-def} \\ 0 &= \delta_\mu^\idx{2} + \frac{1}{2}\delta_\lambda^\idx{2}\Bigl[ v_{\rm R}^2 + {\Delta}_{\rm F0}\bigl(m_{\rm R}^2\bigr)\Bigr] + \frac{1}{2} \Bigl(\lRidx{2}+\delta_\lambda^\idx{2}\Bigr)m_{\rm R}^2{\Delta}_{\overline\epsilon}. \label{eq:loop-level-eq} \end{align} Using definition~\cref{eq:tree-level-def} again in equation~\cref{eq:loop-level-eq} and rearranging we get \begin{equation} \phantom{H} \delta_\mu^\idx{2} - \frac{1}{2} \Bigl(\lRidx{2}+\delta_\lambda^\idx{2}\Bigr)\mu_{{\rm R}\idx{2}}^2{\Delta}_{\overline\epsilon} + \frac{1}{2}\Bigl[ v_{\rm R}^2 + {\Delta}_{\rm F0}\bigl(m_{\rm R}^2\bigr)\Bigr] \biggl[ \delta_\lambda^\idx{2} + \frac{1}{2} \Bigl(\lRidx{2}+\delta_\lambda^\idx{2}\Bigr)\lRidx{2}{\Delta}_{\overline\epsilon}\biggr] = 0. \label{eq:loop-level-rearr} \end{equation} This equation has a consistent solution where the leading constant terms and the terms multiplying the sub-divergence combination $v_{\rm R}^2+\Delta_{\rm F0}$ cancel independently. 
This gives two constraint equations, \begin{subequations} \label{eq:last-ren-eqs} \begin{align} \delta_\lambda^\idx{2} + \frac{1}{2} \Bigl(\lRidx{2}+\delta_\lambda^\idx{2}\Bigr)\lRidx{2}{\Delta}_{\overline\epsilon} &= 0, \label{eq:last-ren-eqs-1} \\ \delta_\mu^\idx{2} - \frac{1}{2} \Bigl(\lRidx{2}+\delta_\lambda^\idx{2}\Bigr)\mu_{{\rm R}\idx{2}}^2{\Delta}_{\overline\epsilon} &= 0, \label{eq:last-ren-eqs-2} \end{align} \end{subequations} from which we can finally solve the counterterms $\delta_\lambda^\idx{2}$ and $\delta_\mu^\idx{2}$: \begin{equation} \delta_\lambda^\idx{2} = - \frac{\sfrac{1}{2}\bigl(\lRidx{2}\bigr)^2{\Delta}_{\overline\epsilon}}{1+\sfrac{1}{2}\lRidx{2}{\Delta}_{\overline\epsilon}} \qquad {\rm and} \qquad \delta_\mu^\idx{2} = \mu^2_{{\rm R}\idx{2}}\frac{\sfrac{1}{2}\lRidx{2}{\Delta}_{\overline\epsilon}}{1+\sfrac{1}{2}\lRidx{2}{\Delta}_{\overline\epsilon}}. \label{eq:auxiliary-cts} \end{equation} \paragraph{Scale dependence.} The scale dependence of the auxiliary couplings and the mass parameters can now be worked out as usual by requiring that the bare parameters do not run: $\partial_Q\bigl[Q^\epsilon\bigl(\lRidx{2} + \delta_\lambda^\idx{2}\bigr)\bigr] = 0$ and $\partial_Q\bigl[Q^\epsilon\bigl( \mu^2_{{\rm R}\idx{I}} - \delta_\mu^\idx{I}\bigr)\bigr] = 0$. Using equation~\cref{eq:auxiliary-cts} one then immediately finds: \begin{equation} Q\frac{\partial\lRidx{2}}{\partial Q}= \frac{\bigl(\lRidx{2}\bigr)^2}{16\uppi^2} \qquad {\rm and} \qquad Q\frac{\partial\mu^2_{{\rm R}\idx{I}}}{\partial Q} = \frac{\lRidx{2}\mu^2_{{\rm R}\idx{2}}}{16\uppi^2}. \label{eq:running-equations} \end{equation} The latter equation applies to both mass parameters, assuming that $\delta_\mu^\idx{0}$ and $\delta_\mu^\idx{2}$ differ by at most a finite and $Q$-independent term, which is the case in the Hartree approximation. 
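That the counterterm solutions above indeed satisfy both constraint equations can be verified by direct substitution. A minimal numerical sketch (arbitrary illustrative values standing in for $\lRidx{2}$, $\mu^2_{{\rm R}\idx{2}}$ and the regulator factor ${\Delta}_{\overline\epsilon}$; function names are ours):

```python
def counterterms(lam2, mu2, De):
    """Counterterm solutions: d_lam2 and d_mu2 as rational functions of
    the coupling lam2, mass mu2 and the regulator factor De."""
    denom = 1.0 + 0.5 * lam2 * De
    d_lam2 = -0.5 * lam2**2 * De / denom
    d_mu2 = mu2 * 0.5 * lam2 * De / denom
    return d_lam2, d_mu2

# Check both constraint equations for arbitrary test values
lam2, mu2, De = 0.5, 1.3, -0.7
d_lam2, d_mu2 = counterterms(lam2, mu2, De)
assert abs(d_lam2 + 0.5 * (lam2 + d_lam2) * lam2 * De) < 1e-12
assert abs(d_mu2 - 0.5 * (lam2 + d_lam2) * mu2 * De) < 1e-12
```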
Equations~\cref{eq:running-equations} can be easily integrated: \begin{equation} \lRidx{2}(Q) = \frac{\lambda^\idx{2}_{{\rm R}}(Q_0)}{1+\frac{\lambda^\idx{2}_{{\rm R}}(Q_0)}{32\uppi^2}\ln\Bigl(\frac{Q^2_0}{Q^2}\Bigr)} \qquad {\rm and} \qquad \mu^2_{{\rm R}\idx{I}}(Q) = \frac{\mu^2_{{\rm R}\idx{I}}(Q_0)} {1+\frac{\lambda^\idx{2}_{{\rm R}}(Q_0)}{32\uppi^2}\ln\Bigl(\frac{Q^2_0}{Q^2}\Bigr)}. \label{eq:running-parameters} \end{equation} Recall that, as a result of equation~\cref{eq:rencons-1}, $\lRidx{0}=\lRidx{2}$. On the other hand, the coupling $\lRidx{4}$ does not run at all; indeed, to keep $\lRidx{4}$ finite, we must have $\delta_\lambda^\idx{4} = 3\delta_\lambda^\idx{2}$ up to finite terms according to equation~\cref{eq:rencons-3}, which implies \begin{equation} Q\frac{\partial\lRidx{4}}{\partial Q}= 0 \qquad \Rightarrow \qquad \lRidx{4} = \rm constant. \label{eq:running-equations-four} \end{equation} We shall see below that $\lRidx{4}$ can be further fixed by some physical condition. \subsection{Renormalized equations of motion} \label{sec:renormalized-eom} It is essential to treat the local correlation function correctly away from the renormalization point in the equations of motion~\cref{eq:eom-for-one-point-function-hartree-R,eq:eom-for-two-point-function-hartree-R}. Analogously to~\cref{eq:tree-level-def}, we first define a leading-order mass function that contains all finite terms in equation~\cref{eq:eom-for-two-point-function-hartree-R}: \begin{equation} m^2({\varphi_{\rm R}},{\Delta}_{\rm F}) \equiv - \mu^2_{{\rm R}\idx{2}} + \frac{1}{2}\lRidx{2}\Bigl(\varphi_{\rm R}^2 + {\Delta}_{\rm F}\Bigr). \label{eq:effective-mass-full} \end{equation} Here ${\Delta}_{\rm F}$ is the finite part of the local correlation function, which must be defined analogously to equation~\cref{eq:division-R}: \begin{equation} {\Delta}_{\rm R} \equiv m^2({\varphi_{\rm R}},{\Delta}_{\rm F}) {\Delta}_{\overline\epsilon} + {\Delta}_{\rm F}. 
\label{eq:division-full} \end{equation} Note that both the finite part and the divergence contain non-trivial contributions from both the vacuum and the non-equilibrium fluctuations. Using definitions~\cref{eq:effective-mass-full} and~\cref{eq:division-full} we can write equation~\cref{eq:eom-for-two-point-function-hartree-R} as follows: \begin{equation} \begin{split} \phantom{H} \biggl[\Box_x + m^2({\varphi_{\rm R}},{\Delta}_{\rm F}) &+ \delta_\mu^\idx{2} + \frac{1}{2}\delta_\lambda^\idx{2}\Bigl( \varphi_{\rm R}^2 + {\Delta}_{\rm F}\Bigr) \\ &+ \frac{1}{2} \Bigl(\lRidx{2}+\delta_\lambda^\idx{2}\Bigr)m^2({\varphi_{\rm R}},{\Delta}_{\rm F}){\Delta}_{\overline\epsilon}\biggr]\mathrm{i}{\Delta}^{ab}_{\rm R}(x,y) = a\delta^{ab}\delta^{(4)}(x-y). \label{eq:loop-level-eq-full} \end{split} \end{equation} Using definition~\cref{eq:effective-mass-full} again one can show that all divergent terms in equation~\cref{eq:loop-level-eq-full} arrange as in equation~\cref{eq:loop-level-rearr} and cancel as a result of the renormalization conditions~\cref{eq:last-ren-eqs}. Then, noting that $\lRidx{4}+\delta_\lambda^\idx{4} = -2\lRidx{4} + {\mathcal O}(\epsilon )$, the same manipulations can be done also in equation~\cref{eq:eom-for-one-point-function-hartree-R}. This results in the final renormalized equations of motion \begin{subequations} \begin{align} \Big[ \Box_x + m^2({\varphi_{\rm R}},{\Delta}_{\rm F}) \Big] {\varphi_{\rm R}} &= \frac{1}{3}\lRidx{4}\varphi_{\rm R}^3, \label{eq:eom-for-phi-R} \\ \phantom{H} \Big[ \Box_x + m^2({\varphi_{\rm R}},{\Delta}_{\rm F}) \Big]\mathrm{i}{\Delta}_{\rm R}^{ab}(x,y) &= a\delta^{ab}\delta^{(4)}(x-y). 
\label{eq:eom-for-delta-R} \end{align} \end{subequations} These equations appear deceptively simple: when written for the Wightman function $\Delta^\lt_{\rm R} = \Delta^{+-}_{\rm R}$, equation~\cref{eq:eom-for-delta-R} takes the form of a wave equation with a time-dependent mass and, as we shall see in the next section, equation~\cref{eq:eom-for-phi-R} describes the motion of the one-point function in a quantum corrected effective potential including backreaction from non-equilibrium modes. However, despite their apparent simplicity, the equations are strongly coupled through the local correlator in the gap equation~\cref{eq:effective-mass-full} for the mass term. \subsection{Effective potential and physical parameters} Let us now consider the {\em adiabatic limit} of the evolution equations, where $\Delta_{\rm F}$ is given purely by vacuum fluctuations with no physical excitations. In this case definition~\cref{eq:effective-mass-full} reduces to \begin{equation} \overbar m^2({\varphi_{\rm R}}) \equiv - \mu^2_{{\rm R}\idx{2}} + \frac{1}{2}\lRidx{2}\Bigl[\varphi_{\rm R}^2 + {\Delta}_{\rm F0}\bigl(\overbar m^2({\varphi_{\rm R}})\bigr)\Bigr], \label{eq:effective-mass-vacuum} \end{equation} and correspondingly \begin{equation} {\overbar {\Delta}}_{\rm R}({\varphi_{\rm R}}) \equiv \overbar m^2({\varphi_{\rm R}}) {\Delta}_{\overline\epsilon} + {\Delta}_{{\rm F0}}\bigl(\overbar m^2({\varphi_{\rm R}})\bigr). \label{eq:division-bg} \end{equation} Note that $\overbar m^2({\varphi_{\rm R}})$ and $\overbar\Delta_{\mathrm{R}}$ differ from definitions~\cref{eq:effective-mass-full} and~\cref{eq:division-full} only through a different value of the background field ${\varphi_{\rm R}}$.
Using the equation of motion~\eqref{eq:eom-for-two-point-function-hartree-R} in the renormalized 2PI action~\eqref{2PI-effective-action_R} we can write down the 1PI effective potential in the Hartree approximation as follows: \begin{equation} V_{\rm H}({\varphi_{\rm R}}) = -\frac{1}{V}\, \Gamma_{\rm 2PI}^{\rm H} \bigl({\varphi_{\rm R}}, {\overbar {\Delta}}\bigr) = V_0({\varphi_{\rm R}}) + \frac{\mathrm{i}}{2V} \, {\rm Tr}\Bigl[\ln\bigl({\overbar{\Delta}}_{\rm R}\bigr)\Bigr] -\frac{1}{8}\Bigl(\lRidx{2}+\delta_\lambda^\idx{2}\Bigr){\overbar{\Delta}}^2_{\rm R}({\varphi_{\rm R}}), \label{2PI-effective-potential} \end{equation} where $V$ is the space-time volume and the tree-level effective potential is \begin{equation} V_0({\varphi_{\rm R}}) = \frac{1}{2}\Bigl(-\mu^2_{{\rm R}\idx{2}} + \delta_\mu^\idx{2}\Bigr)\varphi_{\rm R}^2 +\frac{1}{4!}\Bigl(\lRidx{4} + \delta_\lambda^\idx{4}\Bigr)\varphi_{\rm R}^4 = -\frac{\lRidx{4}}{12}\varphi_{\rm R}^4, \end{equation} where in the last step we dropped all terms of order $\epsilon$. Writing \(\mathrm{i}{\rm Tr}\Bigl[\ln\bigl({\overbar{\Delta}}_{\rm R}\bigr)\Bigr] = V \int \mathrm{d}\overbar{m}^2 \,{\overbar{\Delta}}_{\rm R}\) and using equation~\cref{eq:division-bg} one finds that the divergences cancel between the two last terms in equation~\cref{2PI-effective-potential} and the finite part of ${\rm Tr}\Bigl[\ln\bigl( {\overbar{\Delta}}_{\rm R}\bigr)\Bigr]$ creates the one-loop correction to the effective potential. After a little algebra one finds the result: \begin{equation} V_{\rm H}({\varphi_{\rm R}}) = -\frac{\lRidx{4}}{12}{\varphi_{\rm R}}^4 + \frac{\overbar m^4({\varphi_{\rm R}})}{2\lRidx{2}} -\frac{\overbar m^4({\varphi_{\rm R}})}{64\uppi^2}\biggl[\ln\biggl(\frac{\overbar m^2({\varphi_{\rm R}})}{Q^2}\biggr) - \frac{1}{2} \biggr]. \label{2PI-effective-potential-final} \end{equation} This is the vacuum effective potential in the Hartree approximation, found for example in reference~\cite{AmelinoCamelia:1992nc}. 
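The vacuum gap equation~\cref{eq:effective-mass-vacuum} and the potential~\cref{2PI-effective-potential-final} are straightforward to evaluate numerically. The sketch below is illustrative only: the explicit form of ${\Delta}_{\rm F0}$ is our assumption, chosen so that its $m^2$-derivative reproduces the logarithm $\ln(m^2/Q^2)/(16\uppi^2)$ implicit in the running, and the parameter values mirror the benchmark $\lambda_{\rm R}=1$, $Q=200$ GeV used later in the text.

```python
import math

PI2 = math.pi ** 2

def Delta_F0(m2, Q):
    # Finite vacuum part of the local correlator. The explicit form is an
    # assumption, fixed here so that d(Delta_F0)/d(m2) = ln(m2/Q^2)/(16 pi^2).
    return m2 / (16.0 * PI2) * (math.log(m2 / Q**2) - 1.0)

def mbar2(phi, mu2, lam, Q, tol=1e-10):
    """Fixed-point solution of the vacuum gap equation
    m2 = -mu2 + (lam/2) * (phi^2 + Delta_F0(m2)), valid where m2 > 0."""
    m2 = max(0.5 * lam * phi**2 - mu2, 1e-8)  # tree-level seed
    for _ in range(200):
        m2_new = -mu2 + 0.5 * lam * (phi**2 + Delta_F0(m2, Q))
        if abs(m2_new - m2) < tol:
            break
        m2 = m2_new
    return m2

def V_H(phi, mu2, lam2, lam4, Q):
    """Vacuum Hartree potential of eq. (2PI-effective-potential-final)."""
    m2 = mbar2(phi, mu2, lam2, Q)
    return (-lam4 / 12.0 * phi**4
            + m2**2 / (2.0 * lam2)
            - m2**2 / (64.0 * PI2) * (math.log(m2 / Q**2) - 0.5))
```

Here `lam2` and `lam4` stand for $\lRidx{2}(Q)$ and $\lRidx{4}$, which must be passed separately since they differ in general. The fixed-point iteration converges rapidly because its contraction factor, of order $\lambda\ln(\overbar m^2/Q^2)/(32\uppi^2)$, is small for these parameters.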
Despite the apparent $Q$-dependence, $V_{\mathrm{H}}({\varphi_{\rm R}})$ is in fact scale-independent. Indeed, one can first show from definition~\cref{eq:effective-mass-vacuum}, using~\cref{eq:running-equations}, that $\partial_Q\overbar m^2({\varphi_{\rm R}})=0$. Then by a direct differentiation and using equations~\cref{eq:running-equations} and~\cref{eq:running-equations-four} one finds that $\partial_Q V_{\mathrm{H}}({\varphi_{\rm R}})=0$. \paragraph{Physical parameters.} Differentiating~\cref{eq:effective-mass-vacuum} with respect to~${\varphi_{\rm R}}$ one can first derive the identity \begin{equation} \frac{\partial \overbar m^2}{\partial {\varphi_{\rm R}}}\biggl[1-\frac{\lRidx{2}}{32\uppi^2}\ln\biggl(\frac{\overbar m^2}{Q^2}\biggr)\biggr] = \lRidx{2}{\varphi_{\rm R}}. \label{eq:vacuum-mass-eq} \end{equation} Using~\cref{eq:vacuum-mass-eq} it is simple to show that the first derivative of the potential can be written as \begin{equation} \frac{\partial V_{\rm H}}{\partial {\varphi_{\rm R}}} = -\frac{1}{3}\lRidx{4}\varphi_{\rm R}^3 + \overbar m^2({\varphi_{\rm R}}){\varphi_{\rm R}}. \label{eq:vacuum-split} \end{equation} Comparing equation~\cref{eq:vacuum-split} with equation~\cref{eq:eom-for-phi-R} we can see that in the case of pure vacuum fluctuations the equation of motion can be written as $\partial_t^2 {\varphi_{\rm R}} + \partial V_{\rm H}/\partial{\varphi_{\rm R}} = 0$. Differentiating equation~\cref{eq:vacuum-split} once more with respect to ${\varphi_{\rm R}}$ one finds \begin{equation} \frac{\partial^2 V_{\rm H}}{\partial \varphi_{\rm R}^2} = \overbar m^2({\varphi_{\rm R}}) + \Bigl[ \lRidx{2}\bigl(\overbar m^2({\varphi_{\rm R}})\bigr) - \lRidx{4} \Bigr]\varphi_{\rm R}^2. 
\label{eq:second-derivative-of-potential} \end{equation} Because $\overbar m^2(v_{\rm R}) = m_{\mathrm{R}}^2$, we now see that the on-shell mass parameter $m_{\mathrm{R}}$ of the auxiliary propagator can be identified with a physical mass,% \footnote{\label{fnote:physical-parameters} In fact these relations imply that $m_{\mathrm{R}}$ corresponds to a mass defined at $p^2=0$, but in the Hartree case this is the same as the physical pole mass. Going beyond Hartree approximation, one can still make $m_{\rm R}$ agree with the physical on-shell mass using the remaining freedom in definitions~\cref{eq:rencons} and in the definition of the wave-function counterterm $Z_\idx{2}$, which allow adding finite parts to $\delta_\varphi^\idx{2}$, $\delta_\mu^\idx{2}$ and $\delta_\lambda^\idx{4}$.} if we at the same time define \begin{equation} \lRidx{2}\bigl(m_{\mathrm{R}}^2\bigr) \equiv \lRidx{4}. \label{eq:fix-lambda} \end{equation} This is the choice of parameters we shall use in the rest of this paper. So far we have defined the counterterm $\delta_\lambda^\idx{4}$ only up to a finite constant. This, and other remaining freedom in choosing the counterterms (see footnote~\cref{fnote:physical-parameters}) would allow us to further match $\lRidx{4}$ to some observable. Given that $\lRidx{4}$ does not run, equations~\cref{eq:fix-lambda,eq:second-derivative-of-potential} are enough to fix the parameters of the theory. Going beyond the Hartree approximation would lead to more complicated calculations and relations between the auxiliary parameters, but would not change the derivation conceptually. \subsection{Finite temperature effective potential} \label{sec:effpot} In our derivation in section~\cref{sec:renormalized-eom} we did not specify the finite part of the local correlation function $\Delta_{\rm F}$, and in what follows we will compute it numerically from the equations of motion. Before that it is useful to make one more observation concerning thermal corrections. 
Indeed, we can include thermal corrections by a simple generalization of equations~\cref{eq:effective-mass-vacuum,eq:division-bg}: \begin{equation} \overbar m^2({\varphi_{\rm R}},T) \equiv - \mu^2_{{\rm R}\idx{2}} + \frac{1}{2}\lRidx{2}\Bigl[\varphi_{\rm R}^2 + \overbar{\Delta}_{\rm F}({\varphi_{\rm R}},T)\Bigr], \label{eq:effective-mass-thermal} \end{equation} with $\overbar{\Delta}_{\rm R}({\varphi_{\rm R}},T) \equiv \overbar m^2({\varphi_{\rm R}},T) {\Delta}_{\overline\epsilon} + \overbar{\Delta}_{\rm F}({\varphi_{\rm R}},T)$ and \begin{equation} \overbar{\Delta}_{\rm F}({\varphi_{\rm R}},T) \equiv {\Delta}_{\rm F0}\bigl(\overbar m^2({\varphi_{\rm R}},T)\bigr) + T^2{\mathcal I}\bigl(\overbar m^2({\varphi_{\rm R}},T)/T^2\bigr), \label{eq:division-bg-T} \end{equation} where ${\mathcal I}\bigl(x\bigr) = 2\partial_{x}{\mathcal J}\bigl(x\bigr)$ and ${\mathcal J}\bigl(x\bigr)$ is the dimensionless bosonic one-loop thermal integral \begin{equation} {\mathcal J}(x) \equiv \frac{1}{2\uppi^2}\,{\rm Re} \int_0^{\infty} {\rm d}y \, y^2\ln\Bigl(1-\mathrm{e}^{-\sqrt{y^2+x-{\mathrm i}\varepsilon}}\Bigr). \end{equation} Here the infinitesimal imaginary part ${\mathrm i}\varepsilon$ defines the correct branch of the logarithm for a negative $\overbar m^2$. With these definitions one can go through the analysis of the previous paragraph and show that the equation of motion of the homogeneous field is now $\partial_t^2 {\varphi_{\rm R}} + \partial V_{\rm HT}({\varphi_{\rm R}},T)/\partial{\varphi_{\rm R}} = 0$, where $V_{\rm HT}({\varphi_{\rm R}},T)$ is the thermally corrected, scale independent effective potential in the Hartree approximation: \begin{equation} V_{\rm HT}({\varphi_{\rm R}},T) = V_{\rm H}\bigl({\varphi_{\rm R}},\overbar m_T^2\bigr) - \frac{1}{2}\overbar m_T^2T^2{\mathcal I}\bigl(\overbar m^2_T/T^2\bigr) + T^4{\mathcal J}\bigl(\overbar m^2_T/T^2\bigr), \label{eq:finite-T-pot} \end{equation} where $\overbar m^2_T \equiv \overbar m^2({\varphi_{\rm R}},T)$. 
Note that in the 2PI approach also the vacuum part $V_{\rm H}\bigl({\varphi_{\rm R}},\overbar m_T^2\bigr)$ of the potential is computed with the thermally corrected mass, which is the solution to equations~\cref{eq:effective-mass-thermal,eq:division-bg-T}. It is worth mentioning that in each special case considered above, from the vacuum renormalization~\cref{eq:division-R} to the quantum corrected effective action with~\cref{eq:division-bg-T} and without~\cref{eq:division-bg} thermal corrections, and finally to the general case~\cref{eq:division-full}, the divergence that gets removed by counterterms is different and depends on the value of the background field, the temperature and the particle distribution. This is an unavoidable feature of the 2PI equations with a finite order truncation. However, in all cases the counterterms themselves remain the same, uniquely defined constants. \paragraph{Comparison to ring-resummed potentials.} Equations~\cref{eq:effective-mass-thermal,eq:division-bg-T} and~\cref{eq:finite-T-pot} give a consistent generalization of the Parwani resummation method~\cite{Parwani:1991gq} to super-daisy level. Here thermal corrections affect all modes on equal footing, whereas it can be argued~\cite{Arnold:1992rz} that only the long wavelength modes should be screened by the short wavelength modes in a thermal plasma. The advantage of the potential~\cref{eq:finite-T-pot} is that it provides a consistently renormalized, smooth continuation between the non-relativistic and relativistic regimes. In all other ring-resummed potentials this behaviour has to be put in by hand. 
\begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{thermal_mass_functions.pdf} \hspace{4mm}\includegraphics[width=0.45\textwidth]{thermal_potential_differences.pdf}\hspace{5mm} \end{center} \caption{Left: the thermal mass function computed from the full gap equation~\cref{eq:effective-mass-thermal} (Hartree), in the high-temperature approximation (denoted HTE) and in the one-loop approximation (denoted 1-loop). Here we used $T_0 = 356.4$ GeV, corresponding to $m^2({\varphi_{\rm R}},T)=0$ in the high temperature approximation. Right: ratios between several one-loop approximations and the Hartree result for the thermal correction to the vacuum potential defined as $\delta V_\mathrm{X}({\varphi_{\rm R}},Q,T) \equiv V_\mathrm{X}({\varphi_{\rm R}},Q,T)-V_\mathrm{X}({\varphi_{\rm R}},Q)$, with ${\varphi_{\rm R}} = 250$ GeV. In both panels the values $\lambda_{\rm R} = 1$, $m_{\rm R} = 100$ GeV and $Q = 200$ GeV were used.} \label{fig:masses-and-potentials} \end{figure} In the left panel of figure~\cref{fig:masses-and-potentials} we show an example of a finite Hartree-resummed thermal mass function (solid blue line), and compare it to different one-loop approximations $m_0^2({\varphi_{\rm R}},T)$, defined by setting $\overbar \Delta_{\rm F} \rightarrow \frac{1}{12}T^2$ (high-temperature expansion, HTE) and $\overbar \Delta_{\rm F} \rightarrow T^2 \mathcal{I}\bigl(m^2_0({\varphi_{\rm R}})/T^2\bigr)$ (complete one-loop integral) in~\cref{eq:division-bg-T}. In the right panel we show the ratio of thermal corrections from different one-loop ring-resummed potentials to that of the Hartree potential, using both the HTE and the complete one-loop thermal mass functions. (The Hartree result is of course $Q$-independent.) 
To be precise, we define: \begin{equation} V_{1\text{-loop}}(\varphi,T) \equiv -\frac{1}{2}\mu_{\rm R}^2\varphi^2 + \frac{1}{4!}\lambda_{\rm R}\varphi^4 +\frac{m_0^4(\varphi)}{64\uppi^2}\biggl[\ln\biggl(\frac{m_0^2(\varphi)}{Q^2}\biggr) - \frac{3}{2} \biggr] + T^4\mathcal{J}\biggl(\frac{m_0^2(\varphi)}{T^2}\biggr), \label{eq:VCW} \end{equation} where $m_0^2(\varphi) = -\mu_{\rm R}^2 + \frac{1}{2}\lambda_{\rm R}\varphi^2$. In the Parwani approximation~\cite{Parwani:1991gq} one replaces $m_0^2(\varphi)$ with the lowest order thermal mass% \footnote{To be precise, one obtains $m_0^2({\varphi_{\rm R}},T)$ as a lowest order iterative solution, replacing $\overbar m^2({\varphi_{\rm R}},T) \rightarrow m_0^2({\varphi_{\rm R}})$ in the right hand side of equation~\cref{eq:effective-mass-thermal}. Also note that in the thermal Hartree potential~\cref{eq:finite-T-pot} it is not $V_{\rm H}$, but the sum of the first two terms that reduces to the Parwani-resummed vacuum potential when $\overbar m^2 \rightarrow m_0^2$.} $m_0^2(\varphi,T)$ in~\cref{eq:VCW}, and in the ring approximation~\cite{Carrington:1991hz,Arnold:1992rz}, where only the zero-mode is dressed by thermal corrections, one finds: \begin{equation} V_{\rm ring}(\varphi,T) \equiv V_{1\text{-loop}}(\varphi,T) + \frac{T}{12\uppi} \mathrm{Re} \Big(m_0^3(\varphi) - m_0^3(\varphi,T)\Big). \end{equation} It is curious that the Hartree result is closest to the one-loop result, which uses the vacuum mass in the loop corrections. In particular, the failure of both the Parwani and ring approximations in the non-relativistic limit with the HTE-mass term is evident and not surprising, because the resummation of thermal corrections was not consistently implemented in these potentials in this limit. \section{Wigner space equations} \label{sec:wigner} We now proceed to solving the general equations~\cref{eq:eom-for-delta-R,eq:eom-for-phi-R} for homogeneous non-equilibrium systems.
Of these, equation~\cref{eq:eom-for-phi-R} is already in the desired form once we assume that the field ${\varphi_{\rm R}}$ is homogeneous, but equation~\cref{eq:eom-for-delta-R} for the correlation function will be easier to handle in the mixed representation. Because of the homogeneity an ordinary Fourier transformation is sufficient for the spatial coordinates, but for the time variable we need the Wigner transformation: \begin{equation} {\Delta}_{{\rm R}\bm{k}}^{ab}(k_0,t) = \int{\rm d}r_0 \, {\Delta}_{{\rm R}\bm{k}}^{ab}\biggl(t+\frac{r_0}{2},t-\frac{r_0}{2}\biggr)\mathrm{e}^{\mathrm{i}k_0r_0}, \end{equation} where $t \equiv \frac{1}{2}(x_0+y_0)$ and $r_0 \equiv x_0-y_0$. Because all correlation functions ${\Delta}^{ab}(x,y)$ have the same local limit, it suffices to consider the equation for the lesser Wightman function ${\Delta}^{+-}\equiv {\Delta}^\lt$. Starting from equation~\cref{eq:eom-for-delta-R}, we find that in the Wigner representation it satisfies the equation \begin{equation} \left[\sfrac{1}{4}\partial_t^2-k^2-{\mathrm i}k_0\partial_t + \mathrm{e}^{-\frac{\mathrm{i}}{2}{\partial_t^m}\partial_{k_0}}{m^2}\big({\varphi_{\rm R}}, {\Delta}_{\rm R}\big)\right] {\Delta}_{{\rm R}\bm{k}}^\lt(k_0,t) = 0. \label{eq:wigner-equation-lt} \end{equation} Here the index $m$ in the derivative $\partial_t^m$ signals that the time-derivative acts only on the mass function and not on the propagator. Equation~\cref{eq:wigner-equation-lt} is still equivalent to~\cref{eq:eom-for-delta-R} and highly complicated because of the infinite tower of \(t\)- and \(k_0\)-derivatives involved. It can be recast into a simpler form by introducing a moment expansion. Following reference~\cite{Herranen:2008di} we first introduce the moment functions: \begin{equation} \rho_{n\bm{k}}(t) = \int\frac{\mathrm{d}k_0}{2\uppi}\,k_0^n\,{\Delta}^<_{\mathrm{R}\bm{k}}(k_0,t).
\label{eq:moment} \end{equation} Then taking the real and imaginary parts of equation~\cref{eq:wigner-equation-lt} integrated over $k_0$, and the imaginary part of the same equation integrated over $k_0$ and weighted by $k_0$, we get three equations coupling the moments $\rho_{n\bm{k}}$ with $n=0,1,2$ to the field equation for a homogeneous field ${\varphi_{\rm R}}(t)$: \begin{subequations} \label{eq:moments} \begin{align} \frac{1}{4}\partial_t^2\rho_{0\bm{k}} - \rho_{2\bm{k}} + \omega^2_{\bm k}(t) \,\rho_{0\bm{k}} &= 0, \label{eq:moments-rho0} \\[.2em] \partial_{t}\rho_{1\bm{k}} &= 0, \label{eq:moments-rho1} \\[.2em] \partial_{t}\rho_{2\bm{k}} -\frac{1}{2}\Bigl[\partial_t \bigl(m^2_{\rm eff}(t)\bigr) \Bigr] \rho_{0\bm{k}} &= 0, \label{eq:moments-rho2} \\ \phantom{\frac{1}{2}}\Bigl[\partial_t^2 + m^2_{\rm eff}(t)\Bigr]{\varphi_{\rm R}} &= \frac{1}{3}\lRidx{4}\varphi_{\rm R}^3. \label{eq:moments-phi} \end{align} \end{subequations} We used the shorthand $m^2_{\rm eff}(t)\equiv m^2({\varphi_{\rm R}},{\Delta}_{\rm R})$ for the mass defined in~\cref{eq:division-full,eq:effective-mass-full} and defined $\omega^2_{\bm{k}}(t) \equiv \bm{k}^2+m^2_{\rm eff}(t)$.
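A quick consistency check of the moment equations is that, for a constant mass, the configuration $\rho_{0\bm k} = 1/(2\omega_{\bm k})$, $\rho_{2\bm k} = \omega_{\bm k}/2$ is stationary. The sketch below integrates a single $\bm k$-mode with an externally prescribed mass function; it is illustrative only, since in the full problem all modes are coupled self-consistently through the gap equation.

```python
import math

def evolve_mode(k, m2_of_t, dm2dt_of_t, rho0, drho0, rho2, t0, t1, dt=1e-4):
    """Midpoint (RK2) integration of the single-mode moment equations
    for (rho0, d rho0/dt, rho2), with a prescribed mass function m2_of_t
    and its time derivative dm2dt_of_t."""
    def rhs(t, y):
        r0, dr0, r2 = y
        w2 = k * k + m2_of_t(t)            # omega_k^2(t)
        return (dr0,
                4.0 * (r2 - w2 * r0),      # from (1/4) rho0'' = rho2 - w^2 rho0
                0.5 * dm2dt_of_t(t) * r0)  # rho2' = (1/2) (m^2)' rho0
    t, y = t0, (rho0, drho0, rho2)
    while t < t1 - 1e-12:
        f1 = rhs(t, y)
        ymid = tuple(yi + 0.5 * dt * fi for yi, fi in zip(y, f1))
        f2 = rhs(t + 0.5 * dt, ymid)
        y = tuple(yi + dt * fi for yi, fi in zip(y, f2))
        t += dt
    return y
```

For a constant positive $m^2$ the right-hand sides vanish identically on the vacuum configuration, so the integrator should preserve it to machine precision.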
The gap equation~\cref{eq:effective-mass-full}, including the out-of-equilibrium (or thermal) modes, can be written explicitly as \begin{equation} m^2_{\rm eff}(t) = - \mu^2_{{\rm R}\idx{2}} + \frac{1}{2}\lRidx{2}\Biggl\{\varphi_{\rm R}^2 + \Delta_{\rm F0}\bigl(m^2_{\rm eff}(t)\bigr) + \int_{\bm k} \nolimits \Biggl[ \rho_{0\bm{k}}(t) - \frac{\theta\bigl(\omega_{\bm k}^2(t)\bigr)}{2\omega_{{\bm k}}(t)}\Biggr] \Biggr\}, \label{eq:meff-gap-equation} \end{equation} where we defined \(\int_{\bm k} \equiv \frac{1}{2\uppi^2}\int_0^{\infty} {\rm d}|{\bm k}| |{\bm k}|^2\), and the Heaviside theta-function $\theta\bigl(\omega_{\bm k}^2(t)\bigr)$ ensures that the vacuum does not contain the unstable spinodal modes.% \footnote{Spinodal modes are the unstable modes that appear when the effective mass function is negative. We define them explicitly in equation~\cref{eq:spinodal_modes} below. Note that the vacuum energy integral in the spinodal region, computed with the Heaviside function, is identical to the integral computed taking the absolute value of the mass squared function and integrating over all momenta.} If $\rho_{0\bm{k}}(t)$ is identified with a thermal distribution (including the vacuum part), equation~\cref{eq:meff-gap-equation} clearly reduces to~\cref{eq:effective-mass-thermal}. After discretizing the momentum variable, equations~\cref{eq:moments-rho2,eq:moments-rho0,eq:moments-phi,eq:meff-gap-equation} can be written as a closed set of coupled first order differential equations, which incorporate backreaction from the non-equilibrium modes into the evolution of the homogeneous field ${\varphi_{\rm R}}$. The gap equation~\cref{eq:meff-gap-equation} must be solved first, at each entry to the evolution routine, after which the solution is advanced using~\cref{eq:moments-rho2,eq:moments-rho0,eq:moments-phi}.
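Schematically, one fixed-point sweep of the discretized gap equation can look as follows. This is a sketch under stated assumptions: the explicit form of $\Delta_{\rm F0}$, the default scale $Q=200$ GeV, and the crude switching-off of the vacuum part for spinodal modes are our illustrative choices, not the production implementation.

```python
import math

PI2 = math.pi ** 2

def delta_F0(m2, Q=200.0):
    # Assumed finite vacuum part (the text fixes only its derivative); for
    # m2 <= 0 we simply switch it off -- a crude handling of the spinodal region.
    return 0.0 if m2 <= 0.0 else m2 / (16.0 * PI2) * (math.log(m2 / Q**2) - 1.0)

def gap_m2(phi, rho0, ks, dk, mu2, lam, m2_guess, tol=1e-8):
    """Fixed-point solve of the discretized gap equation, with rho0[i] the
    zeroth moment on the momentum grid ks[i] (uniform spacing dk)."""
    m2 = m2_guess
    for _ in range(100):
        fluct = 0.0
        for k, r0 in zip(ks, rho0):
            w2 = k * k + m2
            # theta-function: no vacuum subtraction for spinodal modes
            vac = 1.0 / (2.0 * math.sqrt(w2)) if w2 > 0.0 else 0.0
            fluct += k * k * (r0 - vac) * dk / (2.0 * PI2)
        m2_new = -mu2 + 0.5 * lam * (phi**2 + delta_F0(m2) + fluct)
        if abs(m2_new - m2) < tol:
            return m2_new
        m2 = m2_new
    return m2
```

With the moments initialized to the vacuum values, the fluctuation integral cancels mode by mode and the solver reduces to the vacuum gap equation, which provides a useful consistency check.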
In practice one must introduce a UV-cutoff for the magnitude of the momentum $|\bm {k}|$, but results should not depend on its precise value, because all the non-trivial physics arises from gradient terms acting in the infrared region. We have verified that this is indeed the case in our numerical examples. \paragraph{Friction.} Our main goal is to study the dynamical evolution of ${\varphi_{\rm R}}$ including the backreaction from the modes excited during the zero-crossings (parametric resonance) and from the unstable modes (spinodal, or tachyonic, instability). We would also like to study dissipative interactions in our system. To do this correctly, we should go beyond the Hartree approximation. This would in principle be a straightforward but very laborious task. Instead, we will add phenomenological friction terms to our equations. Following references~\cite{Herranen:2008di} and~\cite{Herranen:2010mh} we generalize equations~\cref{eq:moments-rho0,eq:moments-rho1,eq:moments-rho2} as follows: \begin{subequations} \label{eq:moments-fric} \begin{align} \frac{1}{4}\partial_t^2\rho_{0\bm{k}} - \rho_{2\bm{k}} + \omega_{\bm{k}}^2(t) \rho_{0\bm{k}} &= -{c_1}\partial_t \rho_{0\bm{k}}, \label{eq:moments-rho0-fric} \\ \partial_t \rho_{1\bm{k}} &= -{c_2}\bigl(\delta\rho_{1\bm{k}} - \delta\rho_{1\bm{k}}^{\mathrm{eq}}\bigr), \label{eq:moments-rho1-fric} \\ \partial_{t}\rho_{2\bm{k}} -\frac{1}{2}\Bigl[\partial_t \bigl(m^2_{\rm eff}(t)\bigr) \Bigr] \rho_{0\bm{k}} &= -{c_2}\bigl(\delta\rho_{2\bm{k}} - \delta\rho_{2\bm{k}}^{\mathrm{eq}}\bigr), \label{eq:moments-rho2-fric} \end{align} \end{subequations} where $\delta \rho_{n\bm{k}} \equiv \rho_{n\bm{k}} - \rho_{n\bm{k}}^{\rm vac}$ with $\rho_{n\bm{k}}^{\rm vac}$ being the vacuum moments defined in equations~\cref{eq:non-coherent-vacuum} below, and the explicit forms for the equilibrium distributions $\rho_{n{\bm k}}^{{\rm eq}}$ have to be provided externally depending on the problem.
Collision integrals could be computed accurately in the context of the cQPA formalism following reference~\cite{Herranen:2010mh}, but here we are only interested in qualitative effects, for which the above phenomenological approach is sufficient. Even then the coefficients $c_{i}$ could be some momentum dependent functions, but for simplicity we will assume that they are constants. Note that $\rho_{n\bm k}$ and $\rho_{n{\bm k}}^{{\rm eq}}$ in general have different vacuum distributions due to different respective solutions to mass gap equations. \paragraph{Number densities and coherence function.} We can get a better understanding of the physical meaning of the moments by comparing them with the spectral cQPA solutions found in reference~\cite{Herranen:2008di}. As explained in section 4.2 of reference~\cite{Herranen:2008di}, the moments are in a one-to-one correspondence with the cQPA mass-shell functions $f^m_{{\bm k}\pm}$ and the coherence function $f^c_{\bm k}$. The former can be further related to the particle and antiparticle number densities $n_{\bm k}$ and $\overbar n_{\bm k}$, so that one eventually finds~\cite{Herranen:2008di,Herranen:2010mh}: \begin{subequations} \label{f-rho_HOM} \begin{align} n_{\bm{k}} &= \frac{1}{\omega_{\bm k}}\rho_{2\bm{k}} + \rho_{1\bm{k}}, \label{f-rho_HOM-n}\\ \overbar{n}_{\bm{k}} &= \frac{1}{\omega_{\bm k}}\rho_{2\bm{k}} - \rho_{1\bm{k}} - 1, \label{f-rho_HOM-nbar}\\ f^{c\pm}_{\bm k} &= \omega_{\bm k}\rho_{0\bm{k}} - \frac{1}{\omega_{\bm k}}\rho_{2\bm{k}} \pm \frac{\mathrm{i}}{2}\partial_t\rho_{0\bm{k}}. \label{f-rho_HOM-fc} \end{align} \end{subequations} The coherence functions $f^{c\pm}_{\bm k}$ measure the degree of quantum coherence, or squeezing, between particle-antiparticle pairs with opposite 3-momenta~\cite{Fidler:2011yq}. A {\em non-coherent} vacuum state must then be defined as a state with no squeezing in addition to having no particles. 
This corresponds to setting $n_{\bm k} = \overbar n_{\bm k} = f^{c\pm}_{\bm k} \equiv 0$, which is equivalent to: \begin{equation} \rho_{0\bm k}^{\rm vac} = \frac{\Theta_{\bm k}}{2\omega_{\bm k}}, \qquad \partial_t\rho_{0\bm k}^{\rm vac}=0, \qquad \rho_{1\bm k}^{\rm vac} = -\frac{1}{2} \quad \mathrm{and} \quad \rho_{2\bm k}^{\rm vac} = \frac{\omega_{\bm k}}{2}\Theta_{\bm k}, \label{eq:non-coherent-vacuum} \end{equation} where $\Theta_{\bm k} \equiv \theta\bigl(\omega_{\bm k}^2(t)\bigr)$. Because we are assuming that ${\varphi_{\rm R}}$ is a real scalar field we also have $\overbar n_{\bm k} = n_{\bm k}$, which implies that $\rho_{1\bm k} = -1/2$ at all times, so that the equation for $\rho_{1\bm{k}}$ is actually redundant. This is indeed a consistent solution even with the friction terms included. \section{Numerical results} \label{sec:results} We shall now solve the coupled dynamical equations~\cref{eq:moments-rho0-fric,eq:moments-rho1-fric,eq:moments-rho2-fric,eq:meff-gap-equation,eq:moments-phi} in a few examples chosen to illustrate the rich physics of the strongly coupled system including the quantum backreaction. We will recover some known results and find new phenomena associated with spinodal and resonant particle production at phase transitions. We will show that a strong spinodal instability can cause a quantum-assisted barrier penetration without tunneling, and we emphasize the difficulty of giving any sensible definition for the effective potential in a non-equilibrium system. Eventually, we will follow the full thermalization process of a scalar field starting at rest in the vacuum potential until the end, when the energy in the field is almost completely transformed into thermal fluctuations.% \footnote{Let us make a note on units: in section~\cref{sec:effpot}, when discussing the thermal effective potentials, we gave the mass parameter a value characteristic for the electroweak phase transition, $m_{\rm R} = 100$ GeV.
Below we continue to use the same value as a benchmark, and we shall be measuring all dimensionful quantities in GeV units. In particular, we measure time in units of GeV$^{-1}$, although we suppress the time units in all plots. However, in all examples that we consider below, the physical mass $m_{\rm R}$ is the only mass scale in the problem. Thus, all results are in fact valid as such for an arbitrary mass value, provided one rescales all dimensionful parameters by a suitable power of $m_{\rm R}/$GeV.} \subsection{Particle production and reheating via parametric resonance} \label{sec:parmetric-resoanance} We first consider a case where the field starts from a relatively large value and oscillates several times between positive and negative field values. Because we are also interested in the spinodal instability, we consider a tree-level potential with a negative mass term. As physical parameters we use $m_{\mathrm{R}} = 100$ GeV and $\lambda_{\rm R} = 1$, from which $\mu_{\rm R\idx{2}}^2(Q_0)$ can be solved using~\cref{eq:tree-level-def}, while the running couplings and masses are defined in~\cref{eq:running-parameters}. We compute the initial value for the effective mass function $m^2({\varphi_{\rm R}},{\Delta}_{\rm R})$ using the vacuum Hartree approximation~\cref{eq:effective-mass-vacuum}. We used running parameters everywhere in our calculations. This served as a useful consistency check, since all final results must be (and indeed were) scale independent. In this example we also set the friction terms to zero, $c_i=0$. The essential results for this run are shown in figures~\cref{fig:figure4,fig:figure5}. In the left panel of figure~\cref{fig:figure4} we show the evolution of the classical field ${\varphi_{\rm R}}$, which here displays an orderly oscillation pattern with a slowly decaying amplitude.
The middle panel shows the evolution of the fluctuations in the zeroth moment integrated over the 3-momentum, which is the non-equilibrium contribution to the local correlation function: $\int_{\bm k}\delta \rho_{0\bm{k}} = \int_{\bm k}\bigl(\rho_{0\bm{k}} - \rho_{0\bm{k}}^{{\rm vac}}\bigr) \equiv \delta \Delta_{\rm F}(t,t)$. The rapid increase of $\delta \Delta_{\rm F}(t,t)$ at early times is caused by two non-perturbative processes, the spinodal instability and the parametric resonance. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\textwidth]{Figs/Figure4.pdf} \end{center} \caption{Shown is the evolution of the classical field as a function of time (left), evolution of the integrated non-equilibrium part of the local correlation function (middle), and the effective mass function $m^2({\varphi_{\rm R}},{\Delta}_{\rm R})$ (right). We used $\lambda_{\rm R} = 1$, $m_{\rm R}=100$ GeV, $\varphi_{\mathrm{R,in}}=300$ GeV and $\partial_t \varphi_{\rm R,in}=0$. The moment functions were initialized to the non-coherent vacuum values~\cref{eq:non-coherent-vacuum}. We also assumed no friction, setting $c_i$ to zero.} \label{fig:figure4} \end{figure} \paragraph{Spinodal instability.} The presence of a spinodal instability is manifest in the right panel of figure~\cref{fig:figure4}, where the effective mass term $m^2({\varphi_{\rm R}},{\Delta}_{\rm R})$ is seen to become periodically negative in the region $t\lesssim 0.25$. Indeed, whenever the mass-function is negative, all ${\bm k}$-modes satisfying \begin{equation} {\bm k}^2 + m^2({\varphi_{\rm R}},{\Delta}_{\rm R}) < 0 \label{eq:spinodal_modes} \end{equation} are unstable and can grow exponentially. This is the \emph{spinodal} or \emph{tachyonic} instability. One might then be tempted to associate the growth in fluctuations in the period $t \lesssim 0.25$ fully to the spinodal instability. 
If this were true, the excited modes should satisfy the condition~\cref{eq:spinodal_modes}, which here translates to $|{\bm k}| \lesssim 60$ GeV. However, from figure~\cref{fig:figure5} we see that this is not the case. The fast production of modes is clearly visible in the upper panels, which show the integrated particle number (left) and the integrated modulus of the coherence functions (right). But from the lower panels, showing time-momentum heat plots of the same quantities, we see that the excited modes are concentrated on a frequency band which lies entirely above the spinodal region~\cref{eq:spinodal_modes}. \paragraph{Parametric resonance.} While our equations are highly non-linear and strongly self-coupled, it is apparent that the structures seen in the heat plots in figure~\cref{fig:figure5} correspond to Mathieu instabilities associated with parametric resonance, familiar from the studies of inflationary reheating~\cite{kofman:1997yn}. If we identify the mass squared of the mode function in the Mathieu equation with our mass function $m^2({\varphi_{\rm R}},{\Delta}_{\rm R})$, and follow the analysis of section V in reference~\cite{kofman:1997yn}, we can (very roughly) estimate the Mathieu equation $q$-parameter in our case to be \begin{equation} q \sim 2 \frac{\Delta m^2_{\rm eff}}{(2\uppi \nu)^2} \approx 2, \label{eq:q-equation} \end{equation} where $\Delta m^2_{\rm eff} \approx 2\times 10^4$ GeV$^2$ is the instantaneous amplitude and $\nu \approx 21$ GeV is the local frequency of oscillations of the effective mass term $m^2({\varphi_{\rm R}},{\Delta}_{\rm R})$, shown in figure~\cref{fig:figure4}. The value of the $q$-parameter, which remains roughly the same throughout the calculation, suggests an intermediate resonance between the narrow and broad regimes. Similarly, the expected position of the first resonance band can be roughly estimated as \begin{equation} |\bm{k}|_{\rm rb}\sim \frac{\uppi \nu}{\sqrt[4]{2}} \approx 60 \,\mathrm{GeV}.
\end{equation} This result, and the expected width of the resonance~\cite{kofman:1997yn} $\smash\Delta |\bm{k}| \sim |\bm{k}|_{\rm rb} \approx 60$ GeV, are in qualitative agreement with our results. In figure~\cref{fig:figure5} we can even observe a second, much narrower band below the first one, which dominates the particle production at $t \approx 1$. While this is again in agreement with the qualitative expectations, its interpretation via Mathieu equation methods becomes even more tenuous. At late times $t \gtrsim 0.3$ the shape of the growth pattern fits well with the standard picture~\cite{kofman:1997yn}, but in the spinodal region the resonant production appears to be more efficient than usual: at spinodal zero-crossings the resonant production, which normally shows a period of anti-correlation (as it indeed does at later times also in our example), is here always positively correlated. While individual growth bursts are not enhanced, this positive correlation leads to particularly strong particle production. \begin{figure}[t] \begin{center} \includegraphics[width=0.95\textwidth]{Figs/Figure5.pdf} \end{center} \caption{Shown is the evolution of the integrated number density (top left) and the absolute value of the integrated coherence function $\bigl|f_{\bm k}^{\rm c\pm}\bigr|$ (top right), defined in equations~\cref{f-rho_HOM}, for the same parameters as in figure~\cref{fig:figure4}. The bottom row shows the heat plots in the momentum and time variables for the unintegrated distributions multiplied by the phase space factors: $\frac{{\bm k}^2}{2\uppi^2}n_{\bm k}$ (lower left) and $\frac{{\bm k}^2}{2\uppi^2}\bigl|f^{\rm c\pm}_{\bm k}\bigr|$ (lower right).} \label{fig:figure5} \end{figure} \begin{figure}[t!]
\begin{center} \includegraphics[width=0.85\textwidth]{Figs/Figure6.pdf} \hspace{1em} \caption{The upper left panel shows the time evolution of ${\varphi_{\rm R}}$ (in units GeV) and the lower left panel that of the effective mass function $m^2({\varphi_{\rm R}},{\Delta}_{\rm R})$ (in units GeV$^2$) in the case of a strong spinodal instability. In the right panel we show the time-evolution of the instantaneous effective potential~\cref{eq:V1PI} (dashed black line), embedded in a plot of the vacuum Hartree potential (dashed red line). The colored dots mark selected times at which the instantaneous potential was evaluated, as indicated in the left panels. The solid blue line shows the instantaneous value of the non-equilibrium vacuum potential~\cref{eq:effpot}.} \label{fig:wwo2pt} \end{center} \end{figure} Because we did not include interactions in this run, the fluctuation band structure remains stable at all times. The system also remains highly coherent, as is evident both from the increase of the integrated coherence function and the stability of the heat plot of the coherence function shown in the right panels of figure~\cref{fig:figure5}. \subsection{Strong spinodal instability} \label{sec:out-of-eq-potential} In the above analysis we made little reference to the effective potential. Indeed, the one-particle irreducible effective action is not a very useful quantity in an out-of-equilibrium setting and it can even be defined only \emph{after} the equations of motion have been solved. Even then one cannot define it universally, but only as a quantity evaluated locally in time. We will now study this question in the case of a very strong spinodal instability. To be specific, we still use the values $m_{\mathrm{R}} = 100$ GeV, $\lambda_{\rm R} = 1$ and $\partial_t \varphi_{\rm R,in} = 0$, but we take $\varphi_{\rm R,in} = 243.5$ GeV and also include friction. 
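As a quick cross-check, the order-of-magnitude Mathieu estimates quoted in the previous subsection follow from one-line arithmetic. The sketch below is our own consistency check (not part of the paper's numerical code), using the values quoted in the text:

```python
import math

# Values quoted in the text (GeV units).
dm2_eff = 2.0e4   # instantaneous amplitude of the effective mass term, GeV^2
nu = 21.0         # local oscillation frequency of m^2, GeV

# Mathieu q-parameter: q ~ 2 * dm2_eff / (2*pi*nu)^2, of order 2.
q = 2.0 * dm2_eff / (2.0 * math.pi * nu) ** 2

# Expected position of the first resonance band: |k|_rb ~ pi*nu / 2^(1/4),
# a few tens of GeV, consistent with the quoted ~60 GeV.
k_rb = math.pi * nu / 2.0 ** 0.25

print(f"q      ~ {q:.2f}")
print(f"|k|_rb ~ {k_rb:.0f} GeV")
```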
We assume that collisions drive the system to the vacuum state, i.e.\ we take $\delta \rho_{{n\bm k}}^{\rm eq} \equiv 0$, and we specify the coefficients to be $c_{1,2} = 0.6$ GeV.% \footnote{Although we gave the friction terms only in a qualitative form, we can provide an estimate for the magnitude of the $c_i$-coefficients. From equations~\cref{eq:moments-fric} it is clear that $c_i$ have the dimensions of mass. The lowest order contribution to the collision integrals arises at the second order in coupling in the 2PI expansion. Hence the naïve scale of the coefficients $c_i$ is given by $\bigl(\frac{\lambda}{4\uppi}\bigr)^2m$, which for $\lambda_{\rm R} = 1$ and $m_{\mathrm{R}} = 100$ GeV gives $c_i \simeq 0.6$ GeV.} In this case the initial potential energy of the field is lower than the peak of the vacuum potential at ${\varphi_{\rm R}} = 0$. This can be seen in the right panel of figure~\cref{fig:wwo2pt}, where we plot the Hartree-resummed vacuum potential (red dashed line) and indicate the initial field value by the black dot. Obviously, if the potential was held fixed, the field would simply oscillate around the positive minimum with a decaying amplitude. However, when backreaction is included, the picture changes dramatically. The actual field evolution is shown in the upper left panel of figure~\cref{fig:wwo2pt}. Curiously, the field stays around the positive minimum during only one oscillation cycle, after which it apparently passes through the potential barrier, spending a rather long time near the middle of the potential with the effective mass function close to zero. Of course what happens is that in the first passage of the field into the spinodal region, an explosive creation of fluctuations takes place. This is clearly demonstrated in figure~\cref{fig:kdistributions}, which shows the integrated fluctuations in the moment functions (upper panels) and the associated heat plots in the time-momentum plane (lower panels). 
These fluctuations absorb a large amount of entropy, which decreases the \emph{free energy} in the system and lowers the barrier between the minima, allowing the field to pass to the negative side. The key point is not to confuse the total internal energy of the system with the free energy, which may vary strongly depending on the entropy production. \begin{figure}[t] \begin{center} \includegraphics[width=0.95\textwidth]{Figs/Figure7.pdf} \caption{The upper panels: shown are the integrated non-equilibrium fluctuations of the moment functions, $\int_{\bm{k}}\nolimits\delta\rho_{0,2\bm{k}}$. The colored dots have the same interpretation as in figure~\cref{fig:wwo2pt}. The lower panels: heat plots showing the momentum distributions $\frac{1}{2\uppi^2}{\bm k}^2\delta\rho_{n\bm{k}}$ corresponding to the upper panels. The left panels show the zeroth moment $n=0$ and the right panels the second moment $n=2$.} \label{fig:kdistributions} \end{center} \end{figure} \paragraph{Non-equilibrium effective potentials.} While the effective potential cannot be defined \emph{a priori}, it is illustrative to construct it \emph{a posteriori} as a time-dependent potential that reproduces the equation of motion~\cref{eq:moments-phi} at all times. This potential can be constructed as the definite integral \begin{equation} V_{\rm 1PI}(t;{\varphi_{\rm R}}) \equiv \int_{t_{\mathrm{in}}}^t \biggl[ -\frac{1}{3}\lambda^\idx{2}_{\rm R}\varphi_{\rm R}^3 + m^2({\varphi_{\rm R}},{\Delta}_{\rm R}){\varphi_{\rm R}}\biggr](\partial_{\tilde{t}} {\varphi_{\rm R}})\mathrm{d}\tilde{t}, \label{eq:V1PI} \end{equation} where ${\varphi_{\rm R}}$ and ${\Delta}_{\rm R}$ are the solutions of the equations of motion. We show this potential as the dashed black line in figure~\cref{fig:wwo2pt}. After the crossing to the negative side, the shape of the potential function settles and the field oscillates around the negative minimum with a decaying amplitude. 
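Numerically, the definite integral in equation~\cref{eq:V1PI} can be accumulated along the solution with a simple trapezoid rule. The following sketch is our own illustration (the array inputs are hypothetical placeholders for solver output, not the paper's actual code); as a sanity check, for constant $m^2$ and vanishing cubic coupling the construction must return the exact potential difference $\frac{1}{2}m^2(\varphi^2 - \varphi_{\rm in}^2)$ along any trajectory:

```python
import math

def v1pi(times, phi, phidot, m2, lam=1.0):
    """Accumulate V_1PI(t) = ∫ [-(lam/3) φ³ + m²(t) φ] φ̇ dt̃ by the trapezoid rule."""
    integrand = [(-lam / 3.0 * p ** 3 + m * p) * pd
                 for p, pd, m in zip(phi, phidot, m2)]
    V = [0.0]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        V.append(V[-1] + 0.5 * (integrand[i - 1] + integrand[i]) * dt)
    return V

# Sanity check with a toy trajectory φ(t) = cos(t), constant m² = 1, lam = 0.
N = 2001
times = [i / (N - 1) for i in range(N)]
phi = [math.cos(t) for t in times]
phidot = [-math.sin(t) for t in times]
m2 = [1.0] * N
V = v1pi(times, phi, phidot, m2, lam=0.0)
exact = 0.5 * (math.cos(1.0) ** 2 - 1.0)   # ½ m² (φ(1)² - φ(0)²)
print(V[-1], exact)
```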
We stress that $V_{\rm 1PI}$ is only useful for the visualization and interpretation of results. As was already mentioned in section~\cref{sec:effpot}, in any finite truncation the renormalized 2PI vacuum becomes dependent on the IR-physics. Another interesting potential function is then the equivalent of the vacuum Hartree potential in the presence of fluctuations. This potential is defined as \begin{equation} V_{\rm H{\Delta}}({\varphi_{\rm R}},{\Delta}_{\rm R}) \equiv V_{\rm H}({\varphi_{\rm R}},{\Delta}_{\rm R}) - \frac{1}{2}m^2({\varphi_{\rm R}},{\Delta}_{\rm R})\int_{\bm k}\nolimits \delta \rho_{0\bm{k}}, \label{eq:effpot} \end{equation} where $V_{\rm H}({\varphi_{\rm R}},{\Delta}_{\rm R})$ is the 2PI vacuum potential~\cref{2PI-effective-potential-final} evaluated by replacing the vacuum mass function $\overbar m^2({\varphi_{\rm R}})$ with the general mass function $m^2({\varphi_{\rm R}},{\Delta}_{\rm R})$. Note that the integral term over the fluctuations of the zeroth moment is a part of the vacuum Hartree potential, similarly to the case with the thermal potential~\cref{eq:finite-T-pot}. The potential~\cref{eq:effpot} is shown with the blue solid line in the right panel of figure~\cref{fig:wwo2pt}. It represents changes in the 2PI Hartree vacuum energy including the backreaction effects, and like the instantaneous $V_{\rm 1PI}$-potential, its barrier around ${\varphi_{\rm R}}=0$ is temporarily lowered by the backreaction. This example demonstrates that the final stages of a phase transition may involve very complicated quantum dynamics, where classical expectations and constraints do not hold. We conclude this subsection by stressing the difference between the fluctuation spectra in the present case, shown in the lower panels of figure~\cref{fig:kdistributions}, and those in the parametric resonance case shown in figure~\cref{fig:figure5}. 
Even though we used the same mass and coupling parameters, essentially all fluctuations are here created by the spinodal instability. Indeed, they occupy a region in the phase space which is consistent with the instability constraint~\cref{eq:spinodal_modes}, continues all the way to zero momentum and lies entirely below the parametric resonance band. \begin{figure}[t] \begin{center} \includegraphics[width=0.85\textwidth]{Figs/Extra_figure1.pdf}\hspace{1em} \end{center} \caption{Shown is the time-evolution of the classical field (left panel) and that of the total energy in the fluctuations and the classical field (right panel). ${\mathcal H}_\varphi(t)$ is the energy in the classical field and ${\mathcal H}_{\Delta}(t)$ is the energy in the fluctuations. The physical parameters and the specific form of the collision integrals used in this run are described in the text.} \label{fig:figure8} \end{figure} \subsection{Self-thermalization} \label{sec:self-thermalization} As our final example we study thermalization of the scalar field energy in a self-interacting system. We use the same physical parameters and initial conditions as in section~\cref{sec:parmetric-resoanance} but include collision terms with the friction coefficients $c_{0,1}=0.6$ GeV, and assume that the collisions drive the system to thermal equilibrium, i.e.\ we take $\delta \rho_{n\bm{k}}^{\rm eq} \equiv \delta \rho_{n\bm{k}}^{\rm th}$. With rigorously computed collision terms the thermal state would emerge automatically as an attractor solution, but in our phenomenological approach we need to give a definition for the instantaneous temperature. 
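One concrete way to define such a temperature, which we adopt below, is to match the instantaneous fluctuation energy to that of a Bose--Einstein distribution and invert the relation for $T$ numerically. A minimal bisection sketch (our own illustration, not the paper's code; for simplicity we assume a massless dispersion $\omega_{\bm k} = |{\bm k}|$, for which the thermal energy density is exactly $\uppi^2 T^4/30$):

```python
import math

def thermal_energy_density(T, n=4000):
    """(1/2π²) ∫₀^∞ dk k³/(e^{k/T} - 1), massless Bose gas, Simpson's rule."""
    kmax = 50.0 * T            # integrand is negligible beyond ~50 T
    h = kmax / n
    def f(k):
        return 0.0 if k == 0.0 else k ** 3 / math.expm1(k / T)
    s = f(0.0) + f(kmax)
    s += 4.0 * sum(f((2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0 / (2.0 * math.pi ** 2)

def equivalent_temperature(energy, Tlo=1.0, Thi=1.0e3, tol=1.0e-8):
    """Bisect for T such that the thermal energy density matches `energy`."""
    while Thi - Tlo > tol * Thi:
        Tmid = 0.5 * (Tlo + Thi)
        if thermal_energy_density(Tmid) < energy:
            Tlo = Tmid
        else:
            Thi = Tmid
    return 0.5 * (Tlo + Thi)

# Check against the exact massless result H_Δ = π² T⁴ / 30.
T_true = 144.9
energy = math.pi ** 2 / 30.0 * T_true ** 4
T_match = equivalent_temperature(energy)
print(f"T ≈ {T_match:.1f} GeV")
```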
In thermal equilibrium a general moment can be written as \begin{equation} \rho_{n{\bm k}}^{\mathrm{th}} = \frac{1}{2}\,\omega_{\bm k}^{n-1} \Bigl[ n_\mathrm{BE}(\omega_{\bm k}) + (-1)^n \bigl( 1 + n_\mathrm{BE}(\omega_{\bm k}) \bigr) \Bigr], \end{equation} where $n_{\mathrm{BE}}(k_0) = (\mathrm{e}^{k_0/T} - 1)^{-1}$ is the Bose--Einstein distribution function. In particular, \begin{equation} \delta\rho_{0{\bm k}}^{\mathrm{th}} = \frac{1}{\omega_{\bm k}}n_\mathrm{BE}(\omega_{\bm k}) \quad \mathrm{and} \quad \delta\rho_{2{\bm k}}^{\mathrm{th}} = \omega_{\bm k}n_\mathrm{BE}(\omega_{\bm k}), \end{equation} while $\delta \rho_{1\bm{k}}^{\rm th} = 0$. We define the equivalent temperature $T = T(t)$ by requiring that the thermal state has the same energy as is stored in the fluctuations: \begin{equation} {\mathcal H}_\Delta(t) \equiv \int_{\bm k}\nolimits \delta \rho_{2\bm{k}}(t) \equiv \int_{\bm k} \nolimits \omega_{\bm k} n_{\rm BE}(\omega_{\bm k}). \label{eq:equivalent-temperature} \end{equation} In all these equations $\omega_{\bm k}^2 = {\bm k}^2 + m^2({\varphi_{\rm R}},{\Delta}_{\rm R})$ is a function of time. The energy stored in the classical field is \begin{equation} {\mathcal H}_\varphi(t) \equiv \frac{1}{2}\bigl(\partial_t\varphi_{\rm R}(t)\bigr)^2 + V_{\rm H{\Delta}}({\varphi_{\rm R}}(t),{\Delta}_{\rm R}(t)). \label{eq:Hvarphi} \end{equation} With our definitions of the temperature and the collision integrals the total energy ${\mathcal H} = \mathcal{H}_\varphi + \mathcal{H}_\Delta$ should be conserved, and we checked that this is indeed the case to a high accuracy in our calculations. For more details on this, and on the numerical setup in general, see appendix~\cref{sec:impl}. \begin{figure}[t] \begin{center} \includegraphics[width=0.95\textwidth]{Figs/Figure9.pdf} \end{center} \caption{Shown are the evolution of the number density (left) and the modulus of the coherence functions (right). 
In the upper panels the quantities are integrated over momentum. We used the same parameters as in figure~\cref{fig:figure5}, except for non-zero friction coefficients $c_i=0.6$ GeV in the collision integrals with thermal equilibrium solutions.} \label{fig:figure9} \end{figure} \paragraph{Spinodal slowing.} In the left panel of figure~\cref{fig:figure8} we show the evolution of the classical field ${\varphi_{\rm R}}$. Initially ${\varphi_{\rm R}}$ evolves as in the collisionless case, oscillating with a nearly constant frequency and a large amplitude, but around $t\sim 2$ the frequency starts to decrease until it reaches a minimum around $t\sim 3$. After this the field gets trapped around the positive minimum while the oscillation frequency increases again. This \emph{spinodal slowing} effect was already seen in connection with the barrier crossing in section~\cref{sec:out-of-eq-potential}. The role of the spinodal modes is revealed in the inset in the left panel of figure~\cref{fig:temperature-and-w}, which shows that the effective mass term $m^2({\varphi_{\rm R}},{\Delta}_{\rm R})$ repeatedly becomes negative in this region. In the right panel of figure~\cref{fig:figure8} we show the energy components ${\mathcal H}_\varphi$ and $\mathcal{H}_\Delta$. Initially all energy is stored in the classical field, but the fraction of energy in the fluctuations increases until the system is {\em reheated}, with almost all of the energy contained in the fluctuations. \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{Figs/thermal_distribution_rho2} \hspace{1em} \includegraphics[width=0.45\textwidth]{Figs/thermal_distribution_fca} \end{center} \caption{Shown are the momentum distributions $\frac{\bm{k}^2}{2\uppi^2}\delta\rho_{2\bm k}$ (left) and $\frac{\bm{k}^2}{2\uppi^2}\bigl|f^{\rm c\pm}_{\bm k}\bigr|$ (right) for three different times: $t=0.2$ (solid blue lines), $t=1.3$ (red dotted lines), and $t=6$ (black dashed lines). 
Also shown in the left plot is the weighted thermal distribution $\frac{\bm{k}^2}{2\uppi^2}\omega_{\bm k}n_{\rm BE}(\omega_{\bm k})$ for the equivalent temperature $T(t=6)=144.9$ GeV (black dotted line).} \label{fig:profiles_with_insets} \end{figure} \paragraph{Mode transfer and decoherence.} In figure~\cref{fig:figure9} we again show the evolution of the number density and coherence functions, including both the integrated quantities and the time-momentum heat plots. There are striking but expected differences between these plots and the corresponding non-interacting results shown in figure~\cref{fig:figure5}. First, the number density stops growing already at $t \sim 1$ and eventually starts to decrease for $t\gtrsim 2$. As is seen from figure~\cref{fig:figure8}, fluctuations dominate the total energy already for $t \gtrsim 1$, and the subsequent decrease of particle number results from a transfer of modes to higher energies. The thermalization process should also lead to decoherence, and this is indeed clearly visible in the upper right panel of figure~\cref{fig:figure9}, which shows the integrated function $\bigl|f^{c\pm}_{\bm k}\bigr|$. From the heat plots we see that particle production gets progressively less efficient and moves to smaller frequencies, as less and less energy is left in the classical field. From the heat plot in the lower right panel we see that coherence is erased throughout the phase space at late times. \paragraph{Thermalization.} In figure~\cref{fig:profiles_with_insets} we show the $|\bm k|$-distributions of $\delta \rho_{2\bm{k}}$ (left panel) and the coherence function $\bigl|f^{c\pm}_{\bm{k}}\bigr|$ (right panel) weighted by the phase space factor, for selected times during the evolution. At a relatively early time, $t=0.2$, the distributions shown in solid blue still display a clear parametric resonance band structure. 
At a later time, $t=1.3$ (red dotted lines), the resonant spectrum is already much more complex, apparently with contributions from many narrow bands. Also a significant mode-transfer to the thermal region has already taken place. Indeed, from the main plot in the left panel of figure~\cref{fig:temperature-and-w} we see that the equivalent temperature at $t=1.3$ is roughly 140 GeV, and since the field is relatively light ($\langle m^2_{\rm eff} \rangle^{1/2}/T \lesssim 1$, with $\langle m^2_{\rm eff} \rangle$ the local average of the oscillating effective mass function), the expected maximum of the thermal spectrum is located at $\langle |{\bm k}|\rangle \approx 3T \approx 400$ GeV. At the end of the simulation, $t=6$ (black dashed curve), the system has essentially thermalized. Almost all energy is in the fluctuations and very little particle production activity remains. The particle number in the resonance bands is small and the coherence is almost vanishing everywhere, in particular in the thermal region. The fluctuations in the equivalent temperature likewise retain only a small residual amplitude. For the final time we also plotted (black dotted line in the left panel of figure~\cref{fig:profiles_with_insets}) the equivalent thermal spectrum $\frac{\bm{k}^2}{2\uppi^2}\omega_{\bm k}n_{\rm BE}(\omega_{\bm k})$ with $T = 144.9$ GeV, corresponding to the equivalent temperature at $t=6$. The close agreement between the actual and thermal distributions shows that the system has indeed thermalized to a very high accuracy. \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{Figs/Figure10a.pdf} \includegraphics[width=0.45\textwidth]{Figs/Figure10b.pdf} \end{center} \caption{In the left panel we show the equivalent temperature defined through equation~\cref{eq:equivalent-temperature} as a function of time. The inset shows the parameter $x_{\rm eff} \equiv {\rm sgn}\bigl(m^2_{\rm eff}\bigr)\bigl|m^2_{\rm eff}\bigr|^{1/2}/T$. 
In the right panel we show the EOS-parameter of the system defined in equation~\cref{eq:EOS}. The black arrows indicate the limiting cases of vacuum ($w=-1$) and kinetic ($w=1$) energy dominance as well as matter ($w=0$) and radiation ($w=1/3$) EOSs, shown by horizontal lines. In all graphs shown the red arrow points to the region of maximal spinodal slowing.} \label{fig:temperature-and-w} \end{figure} \paragraph{Equation of state.} Let us finally study the evolution of the equation of state (EOS) in the system. The EOS-parameter is defined as \begin{equation} w \equiv \frac{\mathcal P}{\mathcal H}, \label{eq:EOS} \end{equation} where ${\mathcal H} = {\mathcal H}_\varphi + {\mathcal H}_{\Delta}$ is the total energy and the total pressure ${\mathcal P} = {\mathcal P}_\varphi + {\mathcal P}_{\Delta}$ is similarly the sum of the pressures in the classical field and in the fluctuations. The former is given by \begin{equation} {\mathcal P}_\varphi = \frac{1}{2}(\partial_t{\varphi_{\rm R}})^2 - V_{\rm H{\Delta}}({\varphi_{\rm R}},{\Delta}_{\rm R}), \label{eq:pressure-in-fields} \end{equation} where $V_{\rm H{\Delta}}$ was defined in~\cref{eq:effpot}. The pressure contained in the fluctuations can be computed as the spatial component of the energy-momentum tensor~\cite{Herranen:2008di}, and it can be written in terms of the moment functions as follows: \begin{equation} {\mathcal P}_{\Delta}({\varphi_{\rm R}},{\Delta}_{\rm R}) = \int_{\bm k} \nolimits \biggl[ \delta \rho_{2\bm k}(t) + \biggl( \frac{1}{3}{\bm k}^2 - \omega_{\bm k}^2 \biggr) \delta \rho_{0\bm k}(t) \biggr]. \label{eq:PcQPA} \end{equation} It is easy to see that in the thermal limit the pressure~\cref{eq:PcQPA} reduces to the negative of the thermal part of the effective potential in the Hartree approximation: ${\mathcal P}_{\Delta} = -T^4 {\mathcal J}\bigl( \overbar m^2_T/T^2\bigr)$. We plot the EOS-parameter $w$ in the right panel of figure~\cref{fig:temperature-and-w}. 
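Two thermal-integral estimates used in this section can be verified with a short numerical integration (our own check, not the paper's code): the maximum of the weighted thermal spectrum $\frac{{\bm k}^2}{2\uppi^2}\omega_{\bm k} n_{\rm BE}$ for a light field lies near $|{\bm k}| \approx 2.8\,T$, and a purely thermal boson gas with $m/T = 0.6$ has an EOS-parameter slightly below the radiation value $1/3$:

```python
import math

def n_be(E):
    """Bose-Einstein occupation at dimensionless energy E = ω/T."""
    return 1.0 / math.expm1(E)

# 1) Peak of the massless weighted spectrum y³/(e^y - 1), y = |k|/T.
ys = [0.001 * i for i in range(1, 10001)]          # scan y in (0, 10]
peak_y = max(ys, key=lambda y: y ** 3 * n_be(y))
print(f"spectrum peak at |k| ≈ {peak_y:.2f} T")     # close to 3T

# 2) EOS-parameter of a thermal boson gas with x = m/T = 0.6:
#    ρ/T⁴ ∝ ∫ dy y² E n_BE(E),  P/T⁴ ∝ (1/3) ∫ dy y⁴/E n_BE(E),  E = √(y²+x²);
#    common prefactors cancel in the ratio.  Trapezoid rule on a fine grid.
x = 0.6
h, ymax = 0.002, 60.0
n = int(ymax / h)
rho = pres = 0.0
for i in range(1, n + 1):
    y = i * h
    E = math.hypot(y, x)
    w_t = 1.0 if i < n else 0.5        # trapezoid end weight (integrand(0) = 0)
    rho += w_t * y ** 2 * E * n_be(E)
    pres += w_t * y ** 4 / E * n_be(E) / 3.0
w_eos = pres / rho
print(f"w(x=0.6) ≈ {w_eos:.3f}")        # slightly below 1/3
```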
The EOS-parameter starts from $w=-1$ and initially oscillates between $w=-1$, corresponding to total vacuum energy dominance, and $w=1$, corresponding to kinetic energy dominance (kination) in the classical field sector. However, as the energy is moved out from the field and the system thermalizes, the EOS-parameter moves to the band $0 < w < 1/3$ corresponding to normal matter. From the inset of the left panel we see that the average value $\langle |x_{\rm eff}|\rangle = \langle |m^2_{\rm eff}|^{1/2}/T\rangle \approx 0.6$ at late times. This indicates that the reheated thermal plasma is almost relativistic, and indeed the EOS-parameter asymptotes close to $w=1/3$ at late times. (In a purely thermal plasma with $x_{\rm eff}=0.6$ one would get $w\approx 0.315$.) The periodic deviation below this value seen in figure~\cref{fig:temperature-and-w} is due to the field contributions to energy and pressure. \section{Conclusions} \label{sec:conc} We have studied the non-equilibrium evolution of a system consisting of a classical scalar field coupled to the two-point function describing quantum fluctuations. We derived renormalized evolution equations for the system using 2PI methods in the Hartree approximation. We derived the effective potential for this system in vacuum and in thermal equilibrium and compared the latter with the known one-loop-resummed effective potentials. We showed that the Parwani-resummed thermal potential~\cite{Parwani:1991gq} is closest in spirit to the Hartree-resummed effective potential. We showed that in a non-equilibrium situation the 2PI method, in any finite truncation, leads to an effective vacuum potential (the vacuum state) that depends on the infrared physics. Indeed, even though the renormalization procedure provides unique and constant counterterms, the split of the system into divergent and non-divergent parts depends on the IR-physics. 
We wrote our renormalized evolution equations as a set of coupled moment-equations for the correlation function and a field equation for the one-point function in the mixed representation and included phenomenological collision integrals describing friction. We used this system to study the non-perturbative particle production and spinodal instability at the end of phase transitions. We found that quantum backreaction can have significant effects on the evolution of the system and addressed the problems in trying to define any practical effective potential for such dynamical systems. In particular we were able to follow the full thermal history of a self-interacting system starting from a cold initial state where all energy in the system was stored in the classical potential, until the end, when the system was reheated and thermalized and the field settled at the minimum of the thermal (Hartree) effective potential. In this work we assumed that the quantum system lived in the Minkowski space-time. Generalization to an expanding FLRW space-time is straightforward by a simple transform to conformal coordinates~\cite{Jukkala:2021cys}. Moreover, in many realistic systems the time scales involved in the phase transition are much shorter than the Hubble time. In those cases our results are representative of the physics as such. Also, we used only a phenomenological form for the collision integrals. It would be interesting to derive more realistic collision terms using the methods developed in~\cite{Herranen:2010mh,Fidler:2011yq}. It would also be interesting to couple the scalar field to other quantum fields. This should be straightforward by combining the current results with the quantum transport equations for fermions developed in~\cite{Jukkala:2019slc}. In this way one should be able to study reheating at the end of inflation in a realistic setup. \section*{Acknowledgements} \label{sec:ack} This work was supported by the Academy of Finland grant 318319. 
OK was in addition supported by a grant from the Magnus Ehrnrooth Foundation. We wish to thank Alexandre Alvarez, Amitayus Banik, Haye Hinrichsen, Sami Nurmi, Werner Porod and Anna Tokareva for discussions and comments on the manuscript.
Kozłowice is a village in the administrative district of Gmina Gorzów Śląski, within Olesno County, Opole Voivodeship, in south-western Poland. It lies approximately south-west of Gorzów Śląski, north of Olesno, and north-east of the regional capital Opole. References Villages in Olesno County
Q: How to install full Ubuntu on a USB drive Okay. I have booted from the same USB drive with the toram option. I have unmounted the partition on the USB drive through the terminal, basically following this answer: Can Ubuntu be installed to the pendrive it was booted from? I am now in the installation stage, but I can't figure out the partitions I have to choose in order to install it on only the USB drive, as I'm running Windows 10 as my main operating system on my laptop. Please help
EDIT 1: Like so?

A: Set the USB partition to type ext4, and set the mount point to "/". You could optionally delete the existing partition first, then create two partitions on the jump drive: a 1G swap partition, and the root ("/") partition that uses the rest of the space. The swap space is nice to have but not really essential.
Q: error xlPrimary not defined in Python win32com I keep getting errors where xlCategory, xlValue and xlPrimary are not recognised in my python script. I am trying to label the axes of my graph and was successfully doing so yesterday with this code: chart = excel.Charts.Add() chart.Name = "Chart Title" chart.ChartType = -4169 #xlXYScatter chart.SetSourceData(firstSheet.Range("$A:$B")) series = chart.SeriesCollection(1) series.Name = "Series Name" chart.Axes(win32com.client.constants.xlCategory).HasTitle = True chart.Axes(win32com.client.constants.xlCategory).AxisTitle.Caption = "x Axis" chart.Axes(win32com.client.constants.xlValue).HasTitle = True chart.Axes(win32com.client.constants.xlValue).AxisTitle.Caption = "y Axis" This produced the following error: Traceback (most recent call last): File "<pyshell#5>", line 1, in <module> startGraphBuild() File "C:\Python33\InCAS_Study_Analysis\VMDvsMODVMDG.py", line 33, in startGraphBuild chart.Axes(win32com.client.constants.xlCategory).HasTitle = True File "C:\Python33\lib\site-packages\win32com\client\__init__.py", line 170, in __getattr__ raise AttributeError(a) AttributeError: xlCategory So I tried this from this stackoverflow question changing axis labels in excel 2007 charts using python win32com: pAxis = chart.Axes(AxisGroup = xlPrimary) xAxis = pAxis(1) yAxis = pAxis(2) xAxis.HasTitle = True yAxis.HasTitle = True xAxis.AxisTitle.Caption = "VMD" yAxis.AxisTitle.Caption = "MOD VMD" But this produced the following error: Traceback (most recent call last): File "<pyshell#3>", line 1, in <module> startGraphBuild() File "C:\Python33\InCAS_Study_Analysis\VMDvsMODVMDG.py", line 37, in startGraphBuild pAxis = chart.Axes(AxisGroup = xlPrimary) NameError: global name 'xlPrimary' is not defined Has anyone else experienced this? Since it was working yesterday I have tried restarting everything, uninstalling and reinstalling pyWin but these haven't worked. I am using Python 3.3 and Excel 2010. A: The constants are defined. 
However, they will only be loaded if you have created the COM Type Library for the COM objects of interest. There are several ways to do that (my self-answer to Accessing enumaration constants in Excel COM using Python and win32com has some links you'll find useful). But basically try this:
Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Portable Python >>> import win32com
Portable Python >>> win32com.__gen_path__ # path to COM typelib generated by win32com
'C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\gen_py\\2.7'
Now try with Dispatch:
Portable Python >>> from win32com import client
Portable Python >>> xl=client.Dispatch('Excel.Application')
Portable Python >>> client.constants.xlPrimary
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "G:\Portable Python 2.7.5.1\App\lib\site-packages\win32com\client\__init__.py", in __getattr__
    raise AttributeError(a)
AttributeError: xlPrimary
Now use EnsureDispatch from gencache:
Portable Python >>> xl=client.gencache.EnsureDispatch('Excel.Application')
Portable Python >>> client.constants.xlPrimary
1
Portable Python >>>
You only need to use EnsureDispatch once, since once the Type library has been created, even Dispatch will load the constants. If you need to clear the cache for whatever reason (this wasn't easy to find), you can remove the gen_py folder; its path can be found from win32com.__gen_path__.

A: The main reason for this attribute error is that your COM server has shifted from late binding (dynamic) to early binding (static).

* In Late Binding, whenever a method is called, the object is queried for the method and, if the query succeeds, the call can be made.
* In Early Binding, the information of the object model is determined in advance from type information supplied by the object call. Early binding makes use of MakePy. Also, early binding is case sensitive.
There are two ways to fix this issue:

* Use the dynamic module to force your code to work in a late-bound oriented way. Example use: "win32com.client.dynamic.Dispatch()" instead of "win32com.client.Dispatch()"
* Use correctly-cased (CamelCase) keywords for the early-bound oriented way. Example use: "excel.Visible()" instead of "excel.VISIBLE()" or "excel.visible()"

If you want to use variables without case-sensitivity issues, you should delete the gen_py folder and use win32com.client.Dispatch()
Green Revolution: A Multi-pronged Approach Reacting to Tsegaye's original article, Yemane T. raised important issues such as pollution, declining soil quality, and human health costs and risks associated with excessive use of fertilizer, pesticides and other chemicals. And lately, Getachew Mequanent, one of the intellectual godfathers who provide a consistent conceptual smoke screen for the ruling party, tried to remind us that, though not explicitly or officially declared, the government's agricultural-led industrialization development strategy (ADLI) "has as a core objective the promotion of green revolution technologies." It is not so much Tsegaye and Yemane that spurred me to put my computer ink and paper together; I share most of their legitimate ideas and insights, which, I believe, emanate from their genuine concern about the inexplicably pervasive economic hardships and social deprivations in our country. Though there are also some truths in Getachew's assertions, they are candy-coated with conceptual convulsions (or an excess of statistical legerdemain, if you choose) that are the hallmarks of both write-ups. For instance, based on his 1998 agricultural input application data in North Gondar, the farmers used a total of 49,000 quintals of fertilizers between 1994/95 and 1996/97, an adoption rate which he trumpets as "very high." Yet, as they say, the devil is in the details, and if we disaggregate the total figure into three agricultural production periods, the farmers actually consumed on average 16,333 quintals of fertilizers per annum (or less than 6kg/household), which is an extremely low level of adoption, especially if we take into account the fact that more than 300,000 agricultural households inhabit the North Gondar administrative zone. 
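The disaggregation above is simple to verify (a quintal is 100 kg; the 49,000-quintal total and the 300,000-household figure are the ones quoted in the text):

```python
total_quintals = 49_000      # fertilizer consumed in North Gondar, 1994/95-1996/97
years = 3                    # three agricultural production periods
households = 300_000         # agricultural households in the zone (quoted figure)
kg_per_quintal = 100

per_year = total_quintals / years                          # quintals per annum
kg_per_household = per_year * kg_per_quintal / households  # kg per household
print(f"{per_year:,.0f} quintals/year ≈ {kg_per_household:.1f} kg/household")
```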
Moreover, even if we assumed that 6kg/household (still much less than the national average of 16kg/hectare in 1999) is a "high rate" of adoption, we still run the risk of making an unwarranted generalization about the whole country based on the findings of an empirical investigation conducted in one specific zone. In fact, according to FAO, the high adoption rates were observed in the three provinces of central Ethiopia, namely, Shoa, Arsi and Gojjam, whose peasants together consumed 75 percent of the total fertilizer input in 1994/95. Now there could be several explanations as to why the level of fertilizer consumption in Ethiopia is not only very low but it is also concentrated in small pockets in the central part of the country. One possible reason is the existence of fast and affordable transportation facilities in these areas. Central Ethiopia has better road transportation networks linked to the capital, with an average road density of over 40 kilometres per 1000 square kilometre area (which is higher than the national average of 30 km/1000 sq km area). So, conceivably, farmers in remote and mountainous regions like North Gondar Administrative Zone (with an average road density of 21 km/1000 sq km area and several hundred kilometres away from Addis compared with Arsi or Shoa) are very likely to be discouraged by the punitive costs of fertilizers and other farm inputs as traders would charge higher prices to redress the high costs associated with transportation. So the green revolution is not only about production-boosting inputs but also about the availability of fast, convenient and modern transportation networks to supply these inputs to farmers in a timely and affordable manner.
Fertilizer for Fertility
Another important factor that determines the rate of adoption and the success of new agricultural technologies in Ethiopia is household income. 
Without the financial capacity to purchase and apply new technologies, farmers in Ethiopia will remain impoverished for the foreseeable future. Even extending farm credits, as evidenced by the outcome of the government's rural farm-input credit scheme implemented over the past two decades, will not make the slightest dent in Ethiopia's dreadful rural poverty. As I tried to argue in my previous article, "What shall we do with the tools?", we must devise ways and means of subsidizing agricultural production in our country by building the capacity of the farmer at household level. So far the fertilizer market in Ethiopia has only enriched those rapacious fertilizer-importing-and-distributing individuals, private firms, if any, and party parastatals that have been literally sucking the blood of the poor farmer, perpetuating his hardships as he remains bogged down in an inescapable trap of taking farm credit and paying the debt thereof without any prospect of achieving economic independence and self-reliance. Many questions spring to mind when we raise the issue of providing subsidies to our farmers until they accumulate enough assets and become economically and financially independent. Considering our limited resources, can we give subsidies to all Ethiopian farmers, who currently number more than 68 million across 12 million rural households? If we cannot include the entire farming community in our support programme due to resource constraints, how many should be included? From which regions? What should be the criteria by which we identify and pick eligible recipients? And, most importantly, for how long do we need to sustain such support to our farmers; that is, when is the right time to press the exit button?
These are crucial questions of practical significance whose answers require the joint contributions and participation of statisticians, economists, high-ranking government officials, agricultural experts and other stakeholders, including local residents, administrators and kebele sheriffs. It is a veritable rigmarole which involves a lot of money, time and manpower. It requires diligence, a sense of commitment, and greater transparency. It takes the effort, big and small, of each and every patriotic Ethiopian who has the vision to break the long-standing bond between famine and Ethiopianity. Regions or provinces may be selected based on their agricultural potential, such as soil fertility, the availability of suitable land that supports the efficient utilization of agricultural technologies, the amount of annual precipitation, or the availability of rivers that can be developed for irrigation. Our criteria for the selection of target areas/districts need to give greater weight to economic efficiency than to egalitarian considerations, which are impossible to attain. Thus, (resentful) residents in less productive regions which do not participate in the support scheme would benefit from higher-quality food produced in more productive regions and supplied at lower prices. Moreover, the government can also support less fortunate regions—those that do not have a comparative advantage in food production—by creating off-farm employment opportunities, such as through the expansion of labour-intensive, small- and medium-sized manufacturing firms that specialize in the production and distribution of low-technology household consumption goods like soap and footwear (what Tsegaye and Getachew call industrial decentralization). Once we agree on how to select our regions, the next task will be selecting eligible households, which certainly will be the most complex and controversial work to do.
It is here that I hope to make my "groundbreaking" contribution towards solving the age-old food insecurity puzzle in our country. I call my criterion "fertilizer for fertility": target the youngest, most productive rural couples or households with as few children as possible (say, three or fewer). The choice of young couples with fewer children has several advantages. To begin with, it serves as a powerful instrument in managing Ethiopia's population explosion, which has been growing much faster than the country's ability to produce enough food. Combating such a Malthusian crisis (higher population growth with slow agricultural expansion) requires targeting the younger generation, which is more likely to be open and easy to edify about the long-term risks of having more children. People naturally respond to incentives, and more enthusiastically to financial incentives. Therefore, by providing fertilizer subsidies to such young rural couples, the government literally achieves two objectives simultaneously: increasing food production through its green revolution and selling its family planning programmes. Whether the couple will stick to abstinence, prophylactics or any other family planning stratagem depends on many factors, including the level of their edification, the community they live in, the availability of an effective health system, the role of religious and traditional leaders in promoting such programmes, etc. Knowing that the fertilizer subsidies they receive are conditional on controlling the number of children they would like to have, the target rural couples develop a sense of self-restraint which they willingly impose on themselves in exchange for free fertilizer that would increase their farm output, with room for surplus produce that could be cashed and saved for any eventuality.
There is no doubt that the existence of such incentives would encourage competition among peer groups for greater prudence in managing their family sizes.

Efficient Agricultural Financial System

It is more than a surprise that a government that boasts of promoting the vital interests of the farming and pastoral communities in Ethiopia has not established a specialized agricultural bank that effectively addresses the special needs and circumstances of the country's troubled agricultural sector. Even though the Development Bank of Ethiopia (DBE) has been assigned the responsibility of supporting the country's development efforts by providing loans to agricultural and industrial investors, its impact on modernizing Ethiopia's agriculture has been hardly noticeable. For instance, as of June 2007, the latest report available online, the DBE had extended a total loan portfolio of about 5.9 billion Birr, of which only 30 percent, close to 1.8 billion Birr, was channelled to cooperatives, individuals and public enterprises investing in agriculture; the remaining 70 percent went to finance industrial and other non-agricultural business activities. Moreover, apart from its meagre capital base, for a country with a huge rural population of some 65 million, the DBE cannot be expected to discharge its duties and functions effectively with only 33 branches, most of which are concentrated in major towns and cities. For an overwhelmingly agrarian economy like ours, having one agricultural bank branch for every constituency/district is not a banking extravaganza. Several studies have shown that there is a strong positive relationship between an efficient financial/banking system and strong economic growth, including agricultural growth, even though the direction of causation is often disputed. Despite such theoretical and empirical foundations for a modern financial sector, Ethiopia's financial system remains among the least developed in the world.
One important measure of the degree of maturity of a country's financial system is the total credit advanced by commercial banks and other financial institutions to the private sector as a percentage of its gross domestic product (private sector credit/GDP). According to the World Bank, as of 2008 this ratio was 18% for Ethiopia, which compares, for instance, with Sudan (10.9%), Eritrea (18.4%), Djibouti (27.3%), Kenya (30%), Egypt (42.9%), China (108.3%), South Africa (145.2%), the United States (193.7%) and Cyprus (257.3%). We may be in a better position than the Sudan in "financial sector innovation," but we should not feel complacent about that, and we need to improve the availability of credit to individuals and businesses, and particularly to the hitherto forgotten vast agrarian regions of the country. One important area of intervention is for the government to sell up to half of its stake in the Commercial Bank of Ethiopia to Ethiopians at home or abroad (not to foreigners!) and use the proceeds thereof to support poor rural households by extending credit services at lower interest rates. Where there is profit, the power of greed works miraculously, and it is for this reason that private banks in Ethiopia, non-existent during the socialist Derg regime, have flourished rapidly since 1995, controlling 53% of the banking market in terms of branch network and 33% in terms of capital as of 2008. The government would help the economy if it withdrew its interfering hands from profit-making areas and focused on risky and so far unsuccessful ventures like Micro Finance Institutions (MFIs). These institutions, which claim to be working for those on the social periphery but some of which charge interest rates as high as 30% to the poorest of the poor, cannot be in the service of the needy.
The government would practically prove its populist agenda if it relaxed its grip on profit-oriented financial enterprises such as the CBE and concentrated on sectors serving socially disadvantaged citizens. Of course there are government-sponsored regional MFIs operating in Tigray, Amhara, Oromia, etc., but as of July 2010 these regional MFIs had been able to reach only 1.7 million poor Ethiopians, which is insufficient given the fact that some 40% of Ethiopians, most of them rural, live in absolute poverty. Still another significant factor that should complement our green revolution campaign is a comprehensive rural electrification programme. As we all know, Ethiopia loses some 150,000-200,000 hectares of its forest cover through deforestation on an annual basis. Without any doubt, much of this deforestation occurs due to increased population pressure, which in turn translates into increased demand for cultivable land and fuel. So programmes that aim at electrifying rural Ethiopia not only help reduce the dramatic rate of deforestation and desertification threatening the country but also help contain its rapid population growth. It is also possible that households with electricity will tend to have a smaller number of children, will have access to radio and community TV services (increased awareness of issues that matter to them), and that their children will receive a better education, among other benefits. Moreover, the availability of electricity contributes to economic growth and development by providing carbon-clean energy for irrigation and other projects in rural neighbourhoods. While little or no access to rural electricity should be a matter of national urgency, we have been hearing about Ethiopia's plans to sell electric energy abroad. But why would Ethiopia plan to export electricity to the Sudan and other neighbouring countries when such services are badly needed at home? Is that economically, environmentally and politically sensible?
Do not get me wrong; I am one of those proud Ethiopians who encourage the government to push on with its Gilgel Gibe III and other similar mega projects. But I do not see any economic, social or political rationale for exporting electricity at a moment when more than 85% of Ethiopians are groping their way in complete darkness, and increasing deforestation and desertification are endangering the ecological balance of the nation. To put things in perspective: as of 2008, the share of Ethiopia's population with access to electricity was 15% (rural 2%), while the corresponding figure for the Sudan was 30% (rural 19%). On a per capita basis, an Ethiopian consumes 3.8 watts per person, compared with Somalia (3.48), Eritrea (5.91), Sudan (10.4) and Kenya (14.9). So, given the low level of electrification in (rural) Ethiopia, giving priority to domestic concerns would appear more sensible than providing light to a neighbouring country with a (relatively) higher rate of electrification. But the government insists on selling electricity to the Sudan and other neighbouring countries, and in doing so, I fear, it will add another item to a pool of long-standing accusations that the regime does not have Ethiopia's national interests at heart.

'Green Politics'

Peace, democracy and freedom are vital ingredients for any society, not only for their own sake but also to effectively attain its social and development goals. Paul Collier, the renowned maestro on development matters in Africa and other struggling nations, has offered useful insights on this subject in his celebrated book 'The Bottom Billion' (a euphemism for much of destitute Sub-Saharan Africa). Analyzing the causes of poverty in these countries, Collier identifies four principal traps: the conflict trap, the resource trap (e.g. 'blood diamond'), the bad governance trap and the trap of being landlocked, all of which have significant implications for Ethiopia.
For instance, because of the myopic decisions of our leaders in Asmara and Addis Ababa to separate and antagonize the sisterly peoples of Ethiopia and Eritrea, the citizens of both countries are suffering a lot economically, psychologically and politically. While these countries should have been working together to improve the lives and livelihoods of the poor under their jurisdictions through increased economic integration and cooperation, allowing greater mobility of people, goods, ideas and capital across their borders, both governments have chosen to keep their people in worsening poverty by wasting millions of dollars on military operations and hate campaigns. If our leaders were far-sighted and worked to promote peace (domestically and externally), the nearly one billion dollars Ethiopia spends on port services annually could be used to develop joint irrigation schemes to ensure food self-sufficiency in both countries. But, confoundingly, they have preferred to send soldiers rather than goods across their borders. Without both internal and external peace, Ethiopia will never be able to solve its chronic food insecurity problem, let alone join the club of middle-income countries in the coming two decades. Either we democratize, improve our governance institutions, and make peace with our people and our neighbours, or we keep this country in eternal poverty and political instability. Worse yet, judicious Ethiopians who recognize the critical importance and necessity of better governance, democracy, freedom and peace for development are labelled by the government as "terrorists." At this crucial juncture, by ruling out any chance for a peaceful democratic transition in Ethiopia and for restoring peace with its neighbours, none other than Meles and company are responsible for plunging this nation into looming violence and conflict, which will further deepen our poverty and misery.
Stimulating Demand

So far we have been focusing on the supply side, increasing production through a green revolution. Yet subsidized production is not an end in itself; there must be enough demand to absorb the surplus produce obtained through the application of technologically intensive methods so that farmers can enjoy the fruits of their labour. With sufficient demand for their produce, farmers will be shielded from the disastrous consequences of price collapses in the wake of bumper harvests, as happened in southern Ethiopia in 2001 following a record maize harvest, when farmers had to dump their maize like garbage. Thus sufficient demand presupposes the existence of quality roads, strong and expanding urban markets, and an efficient agricultural banking sector adapted to the unique needs and circumstances of Ethiopia's backward agricultural sector. It also presupposes the existence of an economically capable urban class with access to decent job opportunities that guarantee adequate remuneration. And creating enough demand requires tackling the country's rampant unemployment problem, especially in urban and agriculturally unsustainable areas. Though reliable data on unemployment in Ethiopia are hard to come by, it does not take an economic guru to recognize the severity of the problem. So tackling an Ethiopian social hydra such as unemployment requires strong government intervention through the expansion of credit and public-sector investment in the short run, and tackling the country's population explosion (which translates into an unsustainable increase in the labour force) in the long run. Of course, deficit hawks and free-market aficionados (e.g. the IMF) do not want the government to assume active roles by spending on public projects to fight joblessness and other serious social ills.
But Ethiopia is a poor country and cannot afford to indulge in the luxury of 'low' inflation while supporting an expensive welfare state policy on the model of rich Europe, the United States and Canada. The European Union, for example, requires its member countries to keep annual inflation below 3 percent. Such policies—while good at maintaining low and stable price levels—discourage national governments from undertaking important public investments that create jobs for their desperate citizens, and the result can be huge unemployment (as high as 20 percent in Spain, for instance). But since most EU member states are wealthy enough to provide acceptable unemployment benefits and other social security payments to the unemployed, they are to some extent successful in ensuring social cohesion and political stability, and their leaders do not live in permanent paranoia, unlike our leaders, who are constantly scared of the unemployed youth. Thus public policies designed to reduce unemployment, besides their role in creating sufficient demand for our green revolution, will also relieve our leaders of painful paranoia, provided such policies focus on long-term economic gains rather than short-term political advantages such as the pork-barrel expenditures undertaken between 2005 and 2010, whose only tangible effects were soaring inflation and massive macroeconomic dislocations. The trade-off between tackling unemployment and running the risk of sliding into uncontrollable and socially destabilizing price hikes should not escape the attention of our policy makers.
To sum up, a green revolution in Ethiopia needs a comprehensive strategic intervention led by the government. In addition to the traditional approach based on the intensification of farm inputs like fertilizer and pesticides, it requires the expansion of fast and durable road and other transportation networks; the establishment of an efficient agricultural banking sector; the launch of an innovative population control policy; the design and implementation of a comprehensive rural electrification programme; the existence of a peaceful and healthy society; making peace at home and with our neighbours; and, most importantly, a strong (and democratic) government with the goal of maintaining social cohesion and economic security by acting as the guardian of those on the social periphery, where the market has little or no interest in meeting their special needs and circumstances.

For comments: bishangary@yahoo.com