\section{Introduction}
Multi-view multi-person 3D pose estimation aims to localize 3D skeleton joints for each person instance in a scene from multi-view camera inputs.
It is a fundamental task that benefits many real-world applications (such as surveillance, sportscast, gaming and mixed reality) and
is mainly tackled by reconstruction-based~\cite{dong2019fast,huang2020end,chen2020multi} and volumetric~\cite{Tu2020} approaches in previous literature, as shown in Fig.~\ref{fig:compare_to_others} (a) and (b).
The former first estimates 2D poses in each view independently
and then aggregates them and reconstructs their 3D counterparts via triangulation or a 3D pictorial structure model.
The volumetric approach~\cite{Tu2020} builds a 3D feature volume through heatmap estimation and 2D-to-3D un-projection at first, based on which instance localization and 3D pose estimation are performed for each person instance individually.
Though notably accurate, the above paradigms are inefficient
due to their heavy reliance on intermediate tasks.
Moreover, they estimate 3D pose for each person separately, making the computation cost grow linearly with the number of persons.
Aiming for a simpler and more efficient pipeline, we ask: is it possible to \textit{directly} regress 3D poses from multi-view images without relying on any intermediate task?
Though conceptually attractive, adopting such a direct mapping paradigm is highly non-trivial as it remains unclear how to perform skeleton joints detection and association for multiple persons within a single stage.
{In this work, we address these challenges by developing a novel \textbf{M}ulti-\textbf{v}iew \textbf{P}ose transformer (MvP) model which significantly simplifies multi-person 3D pose estimation.}
Specifically, MvP represents each skeleton joint as a learnable positional embedding, named \textit{joint query}, which is fed into the model and mapped directly into the final 3D pose estimate (Fig.~\ref{fig:compare_to_others} (c)). A specifically designed attention mechanism fuses multi-view information
and globally reasons over the joint predictions to assign them to the corresponding person instances.
We develop a novel hierarchical query embedding scheme to represent the multi-person joint queries. It shares joint embedding across different persons and introduces person-level query embedding to
help the model in learning both person-level and joint-level priors. Benefiting from exploiting the person-joint relation, the model can more accurately localize the 3D joints.
Further, we propose to update the joint queries with input-dependent scene-level information (\emph{i.e.}, globally pooled image features from multi-view inputs) such that the learnt joint queries can adapt to the target scene with better generalization performance.
To effectively fuse the multi-view information, we propose a geometrically-guided projective attention mechanism. Instead of applying full attention to densely aggregate features {across spaces and views},
{it projects the estimated 3D joint into 2D anchor points for different views, and then selectively fuses the multi-view local features near to these anchors to precisely refine the 3D joint location.}
Moreover, we propose to encode the camera rays into the multi-view feature representations via a novel RayConv operation to integrate multi-view positional information into the projective attention.
In this way, the strong multi-view geometrical priors can be exploited by projective attention to obtain more accurate 3D pose estimation.
Comprehensive experiments on the 3D pose benchmarks Panoptic~\cite{joo2015panoptic}, Shelf, and Campus~\cite{belagiannis20143d} demonstrate the effectiveness of MvP.
{Notably, it obtains 92.3\% AP$_{25}$ on the challenging Panoptic dataset, improving upon the previous best approach VoxelPose~\cite{Tu2020} by 9.8\%, while achieving nearly $2\times$ speed up.}
Moreover, the design ethos of our MvP can be easily extended to more complex tasks\textemdash we show that a simple body mesh branch with the SMPL representation~\cite{loper2015smpl}, trained on top of a pre-trained MvP, can achieve competitive qualitative results.
Our contributions are summarized as follows:
1) We strive for simplicity in addressing the challenging multi-view multi-person 3D pose estimation problem
by casting it as a \textbf{direct regression problem} and accordingly develop a novel Multi-view Pose transformer (MvP) model, which achieves state-of-the-art results on the challenging Panoptic benchmark.
2) Different from query embedding designs in most transformer models, we propose a more tailored and concise hierarchical joint query embedding scheme to enable the model to effectively encode person-joint relation.
Additionally, we mitigate the commonly faced generalization issue by a simple query adaptation strategy.
3) We propose a novel projective attention module along with a RayConv operation for fusing multi-view information effectively, which we believe can also inspire model designs for other multi-view 3D tasks.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/fig1.pdf}
\caption{Difference between our method and others for multi-view multi-person 3D pose estimation. Existing methods adopt complex multi-stage pipelines that are either (a) reconstruction-based or (b) volumetric representation based, which incur heavy computation burden. (c) Our method solves this task as a \textbf{direct regression problem} without relying on any intermediate task by a novel Multi-view Pose Transformer, and largely simplifies the pipeline and boosts the efficiency.}
\label{fig:compare_to_others}
\end{figure}
\section{Related Works}
\paragraph{3D Human Pose Estimation}
3D pose estimation from monocular inputs~\cite{martinez2017simple,mehta2017vnect,zhou2017towards,popa2017deep,sun2018integral,nie2019spm,zhang2020inference,gong2021poseaug,zhang2021bmp} is an ill-posed problem as multiple 3D predictions may result in the same 2D projection.
To alleviate such projective ambiguities, multi-view methods have been explored.
Research works on single-person scenes use either multi-view geometry~\cite{Hartley2003MVG} for feature fusion~\cite{qiu2019cross,he2020epipolar} and triangulation~\cite{iskakov2019learnable,remelli2020lightweight}, or pictorial structure models for fast and robust 3D pose reconstruction~\cite{pavlakos2017harvesting,qiu2019cross}, achieving promising results.
However, the task becomes more challenging in multi-person scenes.
Current approaches mainly exploit a multi-stage pipeline for multi-person tasks, including reconstruction-based~\cite{dong2019fast,chen2020multi,huang2020end,kadkhodamohammadi2021generalizable,lin2021multi} and volumetric~\cite{Tu2020} paradigms.
Despite their notable accuracy, these methods suffer expensive computation cost from the intermediate tasks, such as cross-view matching and heatmap back-projection.
Moreover, the total computation cost grows linearly with the number of persons in the scene, making them hardly scalable for larger scenes.
Different from all previous approaches that rely on a multi-stage pipeline with computation redundancy, our method views multi-person 3D pose estimation as a \textbf{direct regression problem} based on a novel Multi-view Pose transformer model, enabling an intermediate-task-free, single-stage solution.
\paragraph{Attention and Transformers}
Driven by the recent success in natural language processing, there has been growing interest in exploring Transformers for computer vision tasks, such as image recognition~\cite{dosovitskiy2020} and generation~\cite{jiang2021transgan}, as well as more complicated object detection~\cite{carion2020end,zhu2020deformabledetr} and video instance segmentation~\cite{wang2020end}.
However, multi-person 3D pose estimation has not been explored along this direction.
In this study, we propose a novel Multi-view Pose Transformer architecture with a joint query embedding scheme and a projective attention module to regress 3D skeleton joints from multi-view images directly, delivering a simplified and effective pipeline.
\section{Multi-view Pose Transformer (MvP)}
To build a direct multi-person 3D pose estimation framework from multi-view images, we introduce a novel \textbf{M}ulti-\textbf{v}iew \textbf{P}ose transformer (MvP). MvP takes in the multi-view feature representations, and transforms them into groups of 3D joint locations directly (Fig.~\ref{fig:overview_and_pattn} (a)), delivering multi-person 3D pose results, with the following carefully designed query embedding and attention schemes for detecting and grouping the skeleton joints.
\subsection{Joint Query Embedding Scheme}
\label{sec:joint_query}
Inspired by transformers~\cite{vaswani2017attention}, MvP represents each skeleton joint as a learnable positional embedding, which is fed into the transformer decoder and mapped into final 3D joint location by jointly attending to other joints and the multi-view information (Fig.~\ref{fig:overview_and_pattn} (a)).
The learnt embeddings encode \emph{a priori} knowledge about the skeleton joints, and we name them \emph{joint queries}. MvP develops the following concise query embedding scheme.
\paragraph{Hierarchical Query Embeddings}
The most straightforward way for designing joint query embeddings is to maintain a learnable query vector for each joint per person. However, we empirically find this scheme does not work well, likely because such a naive strategy cannot share the joint-level knowledge between different persons.
To tackle this problem, we develop a hierarchical query embedding scheme to explicitly encode the person-joint relation for better generalization to different scenes.
The hierarchical embedding offers joint-level information sharing across different persons
and reduces the learnable parameters, helping the model to learn useful knowledge from the training data, and thus generalize better.
Concretely, instead of using the set of independent joint queries $\{\textbf{q}_{m}\}^{M}_{m=1} \subset \mathbb{R}^C$, we employ a set of person level queries $\{\textbf{h}_{n}\}^{N}_{n=1} \subset \mathbb{R}^C$, and a set of joint level queries $\{\textbf{l}_{j}\}^{J}_{j=1} \subset \mathbb{R}^C$ to represent different persons and different skeleton joints, where $C$ denotes the feature dimension, $N$ is the number of persons, $J$ is the number of joints per person, and $M=NJ$.
Then the query of joint $j$ of person $n$ can be hierarchically formulated as
\begin{equation}
\textbf{q}_{n}^{j}=\textbf{h}_{n}+\textbf{l}_{j}.
\label{eqn:query_add}
\end{equation}
With such a hierarchical embedding scheme, the number of learnable query embedding parameters is reduced from $NJC$ to $(N+J)C$.
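The hierarchical scheme above can be sketched in a few lines; this is a minimal illustration (not the authors' code), with toy embedding values and illustrative dimensions $N$, $J$, $C$:

```python
# Hierarchical joint queries: all N persons share the same J joint-level
# embeddings, q_n^j = h_n + l_j, so only (N + J) * C parameters are learned
# instead of N * J * C.
N, J, C = 3, 4, 8                          # persons, joints per person, channels

# toy person-level (h) and joint-level (l) embeddings
H = [[float(n) for _ in range(C)] for n in range(N)]
L = [[float(j) / 10 for _ in range(C)] for j in range(J)]

def hierarchical_queries(person_emb, joint_emb):
    """Form the M = N * J joint queries q_n^j = h_n + l_j."""
    return [[[h + l for h, l in zip(hn, lj)] for lj in joint_emb]
            for hn in person_emb]

Q = hierarchical_queries(H, L)             # N x J x C queries
```

With $N=3$, $J=4$, $C=8$, this uses $(3+4)\cdot 8 = 56$ embedding parameters instead of $3\cdot 4\cdot 8 = 96$.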
\paragraph{Input-dependent Query Adaptation}
In the above, the learned joint query embeddings are shared for all the input images, independent of their contents, and thus may not generalize well to novel target data.
{To address this limitation, we propose to augment the joint queries with input-dependent scene-level information in both model training and deployment, such that the learnt joint queries can be adaptive to the target data and generalize better.}
Concretely, we augment the above joint queries with a globally pooled feature vector $\textbf{g}\in \mathbb{R}^C$ from the multi-view image feature representations:
\begin{equation}
\begin{split}
\textbf{q}_{n}^{j}&=\textbf{g}+\textbf{h}_{n}+\textbf{l}_{j}.
\end{split}
\end{equation}
Here $\textbf{g}=\mathrm{Concat}(\mathrm{Pool}(\textbf{Z}_1),\ldots, \mathrm{Pool}(\textbf{Z}_V))\textbf{W}^g$, where $\textbf{Z}_v$ denotes image feature from $v$-th view and $V$ is the total number of camera views; $\mathrm{Concat}$ and $\mathrm{Pool}$ denote concatenation and pooling operations, and $\textbf{W}^g$ is a learnable linear weight.
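A toy sketch of this query adaptation, under simplifying assumptions: mean pooling over a small set of feature vectors per view, and an illustrative weight matrix `W_g` standing in for the learnable $\textbf{W}^g$:

```python
# Input-dependent query adaptation: pool each view's features, concatenate,
# project to a C-dim scene feature g, then add g to every hierarchical query.
def scene_feature(view_feats, W_g):
    """g = Concat(Pool(Z_1), ..., Pool(Z_V)) W_g, with mean pooling."""
    pooled = []
    for Z in view_feats:                       # Z: list of C-dim feature vectors
        C = len(Z[0])
        pooled += [sum(z[c] for z in Z) / len(Z) for c in range(C)]
    C_out = len(W_g[0])
    return [sum(p * w[c] for p, w in zip(pooled, W_g)) for c in range(C_out)]

view_feats = [[[1.0, 2.0], [3.0, 4.0]],        # view 1, two spatial locations
              [[5.0, 6.0], [7.0, 8.0]]]        # view 2
W_g = [[1.0, 0.0], [0.0, 1.0],                 # toy (V*C) x C weight
       [1.0, 0.0], [0.0, 1.0]]
g = scene_feature(view_feats, W_g)             # scene-level feature
q = [gc + hc + lc for gc, hc, lc in zip(g, [0.1, 0.1], [0.2, 0.2])]
```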
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/fig2.pdf}
\caption{
(a) Overview of the proposed MvP model.
Upon the multi-view image features from several convolution layers, it deploys a transformer decoder with a stack of decoder layers to map the input joint queries and the multi-view features to 3D poses directly.
(b) The projective attention of MvP projects 3D skeleton joints to anchor points (the green dots) on different views and samples deformable points (the red dots) surrounding these anchors to aggregate local contextual features via learned weights (the brighter color density means larger weights).
}
\label{fig:overview_and_pattn}
\end{figure}
\subsection{Projective Attention for Multi-view Feature Fusion}
\label{sec:projective_attn}
It is crucial to aggregate complementary multi-view information to transform the joint embeddings into accurate 3D joint locations.
We consider the dot product attention mechanism of transformers~\cite{vaswani2017attention} to fuse the multi-view image features.
However, naively applying such dot product attention densely over all spatial locations and camera views will incur enormous computation cost.
Moreover, such dense attention is difficult to optimize and delivers poor performance empirically since it does not exploit any 3D geometric knowledge.
Therefore, we propose a geometrically-guided multi-view projective attention scheme, named projective attention.
{The core idea is to take the 2D projection of the estimated 3D joint location as the anchor point in each view, and only fuse the local features near those projected 2D locations from different views.}
Motivated by the deformable convolution~\cite{dai2017deformable,zhu2019deformable}, we adopt an adaptive deformable sampling strategy to gather the localized context information in each camera view, as shown in Fig.~\ref{fig:overview_and_pattn} (b). Other local attention operations~\cite{zhao2020exploring,wu2020lite,wu2019pay} can also be adopted as an alternative. Formally, given joint query feature $\textbf{q}$ and 3D joint position $\textbf{y}$, the projective attention is defined as
\begin{equation}
\begin{split}
\mathrm{PAttention}(\textbf{q},\textbf{y}, \{\textbf{Z}_{v}\}^{V}_{v=1})&=\mathrm{Concat}(\textbf{f}_1, \textbf{f}_2,\ldots, \textbf{f}_V)\textbf{W}^P, \\
\text{where}~\textbf{f}_v&=\sum_{k=1}^K \textbf{a}(k) \cdot \textbf{Z}_v \big(\Pi(\textbf{y},\textbf{C}_v)+\Delta \textbf{p}(k)\big)\textbf{W}^f.
\end{split}
\label{eqn:p_attention}
\end{equation}
Here the view-specific feature $\textbf{f}_v$ is obtained by aggregating features from $K$ discrete offset sampling points around an anchor point $\textbf{p}=\Pi(\textbf{y},\textbf{C}_v)$, located by projecting the current 3D joint location $\textbf{y}$ to 2D, where $\Pi: \mathbb{R}^3 \to \mathbb{R}^2$ denotes perspective projection~\cite{Hartley2003MVG} and $\textbf{C}_v$ the corresponding camera parameters.
$\textbf{W}^P$ and $\textbf{W}^f$ are learnable linear weights.
The attention weight $\textbf{a}$ and the offset to the projected anchor point $\Delta \textbf{p}$ are
estimated from
the fusion of query feature $\textbf{q}$ and the view-dependent feature at the projected anchor point $\textbf{Z}_v(\textbf{p})$, \emph{i.e.}, $\textbf{a}=\mathrm{Softmax}((\textbf{q}+\textbf{Z}_v(\textbf{p}))\textbf{W}^a)$ and $\Delta \textbf{p}=(\textbf{q}+\textbf{Z}_v(\textbf{p}))\textbf{W}^p$, where $\textbf{W}^a$ and $\textbf{W}^p$ are learnable linear weights.
If the projected location or the offset location is fractional, we use bilinear interpolation to obtain the corresponding feature $\textbf{Z}_v (\textbf{p})$ or $\textbf{Z}_v (\textbf{p}+\Delta \textbf{p}(k))$.
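The bilinear sampling step can be written out concretely; this is a generic sketch, with the feature map represented as a toy nested list indexed as `Z[row][col]`:

```python
import math

def bilinear_sample(Z, u, v):
    """Bilinearly interpolate feature map Z at fractional column u, row v."""
    u0, v0 = int(math.floor(u)), int(math.floor(v))
    u1, v1 = u0 + 1, v0 + 1
    au, av = u - u0, v - v0                    # fractional parts
    C = len(Z[0][0])
    return [(1 - au) * (1 - av) * Z[v0][u0][c] + au * (1 - av) * Z[v0][u1][c]
            + (1 - au) * av * Z[v1][u0][c] + au * av * Z[v1][u1][c]
            for c in range(C)]

Z = [[[0.0], [1.0]],
     [[2.0], [3.0]]]
f = bilinear_sample(Z, 0.5, 0.5)               # midpoint of the four features
```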
The projective attention incorporates two geometrical cues, \emph{i.e.}, the corresponding 2D spatial locations across views from the 3D to 2D projection and the deformed neighborhood of the anchors from the learned offsets to gather view-adaptive contextual information.
Unlike naive attention where the query feature densely interacts with the multi-view key features across all the spatial locations,
{the projective attention is more selective for the interaction between the query and each view\textemdash only the features from locations near to the projected anchors are aggregated, and thus is much more efficient.}
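A toy single-view sketch of this projective attention, under strong simplifying assumptions: the 3D point is already in camera coordinates, the intrinsics reduce to a focal length with principal point $(c_x, c_y)$, features live on integer pixels, and the attention logits are supplied directly instead of being predicted from $(\textbf{q}+\textbf{Z}_v(\textbf{p}))\textbf{W}^a$:

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def project(y, cam):
    """Toy perspective projection onto the image plane."""
    X, Y, Z = y
    f, cx, cy = cam
    return (f * X / Z + cx, f * Y / Z + cy)

def projective_attention_view(feat, y, cam, offsets, logits):
    """f_v = sum_k a(k) * Z_v(project(y) + offset_k) for one camera view."""
    u, v = project(y, cam)
    a = softmax(logits)                        # attention weights
    samples = [feat.get((round(u + du), round(v + dv)), 0.0)
               for du, dv in offsets]
    return sum(w * s for w, s in zip(a, samples))

feat = {(1, 1): 2.0, (2, 1): 4.0}              # toy sparse feature map
f_v = projective_attention_view(feat, (1.0, 1.0, 1.0), (1.0, 0.0, 0.0),
                                [(0, 0), (1, 0)], [0.0, 0.0])
```

With equal logits, the two sampled features are averaged; the full model instead concatenates all $V$ per-view features and applies $\textbf{W}^P$.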
\paragraph{Encoding Multi-view Positional Information with RayConv}
The positional encoding~\cite{vaswani2017attention} is an important component of the transformer, which provides positional information of the input sequence.
However, a simple per-view 2D positional encoding scheme cannot encode the multi-view geometrical information.
To tackle this limitation,
we propose to encode the camera ray directions {that represent positional information in 3D space} into the multi-view feature representations.
Concretely, the camera ray direction $\textbf{R}_v$, generated with the view-specific camera parameters,
is concatenated channel-wisely to the corresponding image feature representation $\textbf{Z}_v$. Then a standard convolution is applied to obtain the updated feature representation $\hat{\textbf{Z}}_v$, with the view-dependent geometric information:
\begin{equation}
\hat{\textbf{Z}}_v=\mathrm{Conv}(\mathrm{Concat}(\textbf{Z}_v, \textbf{R}_v)).
\end{equation}
{We name this operation \emph{RayConv}. The resulting feature representation $\hat{\textbf{Z}}_v$ is used in the projective attention by replacing ${\textbf{Z}}_v$ in Eqn.~\eqref{eqn:p_attention}.
Such drop-in replacement
introduces negligible computation, while injecting strong multi-view geometrical prior to augment the projective attention scheme,
thus helping more precisely predict the refined 3D joint position.
}
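A minimal sketch of RayConv with a $1\times 1$ kernel (a simplification; a real implementation would apply a standard $k\times k$ convolution over full feature maps): per-pixel ray-direction channels are concatenated to the feature channels, then linearly mixed by a toy weight `W` of shape $(C_z + C_r) \times C_{out}$:

```python
def rayconv_1x1(Z, R, W):
    """Z_hat = Conv(Concat(Z, R)) with a 1x1 kernel over an H x W grid."""
    out = []
    for zrow, rrow in zip(Z, R):
        row = []
        for z, r in zip(zrow, rrow):
            x = z + r                          # channel-wise concatenation
            row.append([sum(xk * W[k][c] for k, xk in enumerate(x))
                        for c in range(len(W[0]))])
        out.append(row)
    return out

# 1x1 grid, 1 feature channel + 2 ray channels, 1 output channel
Z_hat = rayconv_1x1([[[1.0]]], [[[2.0, 3.0]]], [[1.0], [1.0], [1.0]])
```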
\subsection{Architecture}
\label{sec:arch}
Our overall architecture (Fig.~\ref{fig:overview_and_pattn} (a)) is pleasantly simple.
It adopts a convolution neural network, designed for 2D pose estimation~\cite{xiao2018simple}, to obtain high-resolution image features $\{\textbf{Z}_{v}\}^{V}_{v=1}$ from multi-view inputs $\{\textbf{I}_{v}\}^{V}_{v=1}$.
The features are then fed into the transformer decoder consisting of multiple decoder layers to predict the 3D joint locations.
Each layer conducts a self-attention to perform pair-wise interaction between all the joints from all the persons in the scene; a projective attention to selectively gather the complementary multi-view information; {and a feed-forward regression to predict the 3D joint positions and their confidence scores.
Specifically, the transformer decoder applies a \textit{multi-layer progressive regression scheme}, \emph{i.e.}, each decoder layer outputs 3D joint offsets to refine the input 3D joint positions from previous layer.}
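The progressive regression across decoder layers can be sketched as follows; the offset functions here are toy stand-ins for the per-layer feed-forward regression heads:

```python
def progressive_regression(y0, offset_heads):
    """Each layer refines the estimate: y_l = y_{l-1} + offset_l(y_{l-1}).
    Returns the sequence of estimates y_0, y_1, ..., y_L."""
    ys = [y0]
    for head in offset_heads:
        prev = ys[-1]
        ys.append([p + d for p, d in zip(prev, head(prev))])
    return ys

# toy heads: each moves the estimate halfway toward a fixed target joint
target = [2.0, 4.0, 6.0]
heads = [lambda y: [(t - yi) * 0.5 for t, yi in zip(target, y)]] * 3
ys = progressive_regression([0.0, 0.0, 0.0], heads)
```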
\paragraph{Extending to Body Mesh Recovery}
MvP learns skeleton joint feature representations and is extendable to recovering human meshes with a parametric body mesh model~\cite{loper2015smpl}. Specifically, after average-pooling the joint features into a per-person feature, a feed-forward network predicts the corresponding body mesh represented by the parametric SMPL model~\cite{loper2015smpl}.
Similar to the joint location prediction, the SMPL parameters follow the multi-layer progressive regression scheme.
\subsection{Training}
\label{sec:training}
MvP infers a fixed set of $M$ joint locations for $N$ different persons, {where $M=NJ$}.
The main training challenge is how to associate the skeleton joints correctly for different person instances. Unlike the post-hoc grouping of detected skeleton joints as in bottom-up pose estimation methods~\cite{papandreou2018personlab,kreiss2019pifpaf}, MvP learns to directly predict the multi-joint 3D human pose in a group-wise fashion as shown in Fig.~\ref{fig:overview_and_pattn} (a). This is achieved by a grouped matching strategy during model training.
\paragraph{Grouped Matching}
Given the predicted joint positions $\{\textbf{y}_m\}^{M}_{m=1} \subset \mathbb{R}^{3}$ and associated confidence scores $\{s_m\} ^{M}_{m=1}$,
{we group every consecutive $J$-joint predictions into per-person pose estimation $\{\textbf{Y}_n\} ^{N}_{n=1} \subset \mathbb{R}^{J\times 3}$, and average their corresponding confidence scores to obtain the per-person confidence scores $\{p_n\} ^{N}_{n=1}$.} The same grouping strategy is used during inference.
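This grouping step is a simple reshaping of the prediction set; a minimal sketch with toy values:

```python
def group_joints(joints, scores, J):
    """Split M = N * J joint predictions into N consecutive groups of J,
    and average per-joint confidences into per-person confidences."""
    persons = [joints[i:i + J] for i in range(0, len(joints), J)]
    confs = [sum(scores[i:i + J]) / J for i in range(0, len(scores), J)]
    return persons, confs

joints = [[0.0, 0.0, 1.0], [0.1, 0.0, 1.0],    # person 1 (J = 2 joints)
          [1.0, 1.0, 2.0], [1.1, 1.0, 2.0]]    # person 2
scores = [0.9, 0.7, 0.8, 0.6]
persons, confs = group_joints(joints, scores, J=2)
```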
The ground truth set $\textbf{Y}^*$ of 3D poses of different person instances
is smaller than the prediction set of size $N$, so we pad it to size $N$ with the empty element $\varnothing$. We then find a bipartite matching between the prediction set and the ground truth set by searching for the permutation $\hat{\sigma} \in \aleph_N$ that achieves the lowest matching cost:
\begin{equation}
\hat{\sigma}=\argmin_{\sigma \in \aleph_N }\sum_{n=1}^N \mathcal{L}_\mathrm{match}(\textbf{Y}^*_n,\textbf{Y}_{\sigma(n)}).
\end{equation}
We consider both the regressed 3D joint position and confidence score for the matching cost:
\begin{equation}
\mathcal{L}_\mathrm{match}(\textbf{Y}^*_n,\textbf{Y}_{\sigma(n)})= -p_{\sigma(n)}+\mathcal{L}_1(\textbf{Y}^*_n,\textbf{Y}_{\sigma(n)}),
\end{equation}
where $\textbf{Y}^*_n\neq\varnothing$, and $\mathcal{L}_1$ computes the $L_1$ loss error.
Following~\cite{carion2020end,sutskever2014sequence}, we employ the Hungarian algorithm~\cite{kuhn1955hungarian} to compute the optimal assignment $\hat{\sigma}$ with the above matching cost.
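For small $N$ the optimal assignment can be found by brute force over permutations, which illustrates the matching objective; the paper uses the Hungarian algorithm for the same objective at scale. Ground-truth entries set to `None` stand for the padded empty element $\varnothing$ and contribute no cost:

```python
import itertools

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def match(gt, preds, confs):
    """Permutation sigma minimising sum_n [-p_sigma(n) + L1(Y*_n, Y_sigma(n))]."""
    N = len(preds)
    best, best_cost = None, float('inf')
    for sigma in itertools.permutations(range(N)):
        cost = sum(-confs[sigma[n]] + l1(gt[n], preds[sigma[n]])
                   for n in range(N) if gt[n] is not None)
        if cost < best_cost:
            best, best_cost = sigma, cost
    return best, best_cost

gt = [[0.0, 0.0], [10.0, 10.0], None]          # two real persons + padding
preds = [[10.0, 10.0], [0.0, 0.0], [5.0, 5.0]]
confs = [0.9, 0.9, 0.1]
sigma, cost = match(gt, preds, confs)
```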
\paragraph{Objective Function}
We compute the \textit{Hungarian loss} with the obtained optimal assignment $\hat{\sigma}$:
\begin{equation}
\mathcal{L}_\mathrm{Hungarian}(\textbf{Y}^*,\textbf{Y})=\sum_{n=1}^N \left[ \mathcal{L}_\mathrm{conf}(\textbf{Y}^*_n, p_{\hat{\sigma}(n)}) + \mathds{1}_{\{\textbf{Y}^*_n\neq \varnothing\}}\lambda \mathcal{L}_\mathrm{pose}(\textbf{Y}^*_n,\textbf{Y}_{\hat{\sigma}(n)}) \right].
\end{equation}
Here $\mathcal{L}_\mathrm{conf}$ and $\mathcal{L}_\mathrm{pose}$ are losses for confidence score and pose regression, respectively. $\lambda$ balances the two loss terms. We use focal loss~\cite{lin2017focal} for confidence prediction which adaptively balances the positive and negative samples.
{For pose regression, we compute $L_1$ loss for 3D joints and their projected 2D joints in different views.
}
{To learn multi-layer progressive regression, the above matching and loss are applied for each decoder layer. The total loss is thus $\mathcal{L}_\mathrm{total}=\sum_{l=1}^L \mathcal{L}^{l}_\mathrm{Hungarian}$, where $\mathcal{L}^{l}_\mathrm{Hungarian}$ denotes loss of the $l$-th decoder layer and $L$ is the number of decoder layers. When extending MvP to body mesh recovery, we apply $L_1$ loss for 3D joints from the SMPL model and their 2D projections, as well as an adversarial loss following HMR~\cite{hmrKanazawa17,jiang2020coherent,zhang2021bmp} due to lack of GT SMPL parameters.}
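A toy sketch of the per-layer loss, with binary cross-entropy standing in for the focal loss used in the paper and the indicator term written out explicitly; the total training loss sums this quantity over all $L$ decoder layers:

```python
import math

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def hungarian_loss(gt, preds, confs, sigma, lam=1.0):
    """sum_n [L_conf + 1{Y*_n != empty} * lambda * L_pose] under assignment sigma.
    gt entries set to None denote the padded empty element (target conf = 0)."""
    loss = 0.0
    for n, g in enumerate(gt):
        p = confs[sigma[n]]
        t = 0.0 if g is None else 1.0          # confidence target
        loss += -(t * math.log(p) + (1 - t) * math.log(1 - p))
        if g is not None:                      # pose loss only for real persons
            loss += lam * l1(g, preds[sigma[n]])
    return loss

loss = hungarian_loss([[1.0], None], [[1.0], [0.0]], [0.5, 0.5], (0, 1))
```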
\section{Experiments}
\label{experiments}
In this section, we aim to answer the following questions.
1) Can MvP provide both efficient and accurate
multi-person 3D pose estimation?
2) How does the proposed attention mechanism help fuse multi-view information for multi-person skeleton joints?
3) How does each individual design choice affect model performance?
To this end, we conduct extensive experiments on several benchmark datasets.
\begin{table}[b]
\renewcommand{\tabcolsep}{4pt}
\centering
\small
\caption{Result on the Panoptic dataset. MvP is more accurate and faster than VoxelPose. }
\begin{tabular}{cccccccc}
\toprule
Methods & AP$_{25}$ & AP$_{50}$ & AP$_{100}$ & AP$_{150}$ & Recall$_{@500}$ & MPJPE[mm] & Time[ms] \\
\midrule
VoxelPose~\cite{Tu2020} & 84.0 & 96.4 & \textbf{97.5} & \textbf{97.8} & 98.1 & 17.8 & 320\\
MvP (Ours) &\textbf{92.3} & \textbf{96.6} & \textbf{97.5} & 97.7 & \textbf{98.2} & \textbf{15.8} & \textbf{170}\\
\bottomrule
\end{tabular}
\label{panoptic_main}
\end{table}
\paragraph{Datasets}
\textit{Panoptic}~\cite{joo2017panoptic} is a large-scale benchmark with 3D skeleton joint annotations. It captures daily social activities in an indoor environment.
We conduct extensive experiments on Panoptic to evaluate and analyze our approach.
Following VoxelPose~\cite{Tu2020}, we use the same data sequences except `160906\_band3' in the training set due to broken images. Unless otherwise stated, we use five HD cameras (3, 6, 12, 13, 23) in our experiments. All results reported in the experiments follow the same data setup. We use Average Precision (AP) and Recall~\cite{Tu2020}, as well as Mean Per Joint Position Error (MPJPE) as evaluation metrics.
\textit{Shelf} and \textit{Campus}~\cite{belagiannis20143d} are two multi-person datasets capturing indoor and outdoor environments, respectively. We split them into training and testing sets following~\cite{belagiannis20143d,dong2019fast,Tu2020}. We report Percentage of Correct Parts (PCP) for these two datasets.
\paragraph{Implementation Details}
Following VoxelPose~\cite{Tu2020}, we adopt a pose estimation model~\cite{xiao2018simple} built upon ResNet-50~\cite{he2016deep} for multi-view image feature extraction.
Unless otherwise stated, we use a stack of six transformer decoder layers. The model is trained for 40 epochs with the Adam optimizer and a learning rate of $10^{-4}$. During inference, a confidence threshold of 0.1 is used to filter out redundant predictions. Please refer to the supplementary for more implementation details.
\subsection{Main Results}
\begin{wrapfigure}{r}{0.5\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{figures/inf_time_num_person.pdf}
\caption{Inference time versus the number of person instances. Benefiting from its direct inference framework, MvP maintains almost constant inference time regardless of the number of persons.}
\label{ablation:inf_time_vs_person_num}
\end{wrapfigure}
\paragraph{Panoptic}
We first evaluate our MvP model on the challenging Panoptic dataset and compare it with the state-of-the-art VoxelPose model~\cite{Tu2020}.
As shown in Table~\ref{panoptic_main}, our MvP achieves 92.3 AP$_{25}$, improving upon VoxelPose by 9.8\%, and achieves a much lower MPJPE (15.8 \textit{vs.} 17.8).
Moreover, MvP only requires 170ms to process a multi-view input, about $2\times$ faster than VoxelPose\footnote{We report the average per-sample inference time in milliseconds on the Panoptic test set. For all methods, time is measured on a GeForce RTX 2080 Ti GPU and an Intel i7-6900K CPU @ 3.20GHz.}. These results demonstrate both the accuracy and efficiency advantages of MvP from estimating 3D poses of multiple persons in a direct regression paradigm.
To further demonstrate efficiency of MvP, we compare its inference time with VoxelPose's when processing different numbers of person instances.
As shown in Fig.~\ref{ablation:inf_time_vs_person_num}, the inference time of VoxelPose grows linearly with the number of persons in the scene due to its per-person regression paradigm. In contrast, MvP keeps a constant inference time regardless of the number of instances in the scene. Notably, MvP takes only 185ms to process scenes even with 100 person instances (the blue line), demonstrating its great potential to handle crowded scenarios.
\paragraph{Shelf and Campus}
We further compare our MvP with state-of-the-art approaches on the Shelf and Campus datasets.
The reconstruction-based methods~\cite{belagiannis20153d,ershadi2018multiple,dong2019fast} use a 3D pictorial model~\cite{belagiannis20153d,dong2019fast} or a conditional random field~\cite{ershadi2018multiple} within a multi-stage paradigm, and the volumetric approach VoxelPose~\cite{Tu2020} relies heavily on computationally intensive intermediate tasks. As shown in Table~\ref{shelf_campus}, our MvP achieves the best performance for all actors on the Shelf dataset. Moreover, it obtains a result on the Campus dataset comparable to VoxelPose~\cite{Tu2020} without relying on any intermediate task. These results further confirm the effectiveness of MvP for estimating 3D poses of multiple persons directly.
\begin{table}[h]
\centering
\caption{Results (in PCP) on Shelf and Campus datasets.}
\renewcommand{\tabcolsep}{3.5pt}
\small
\begin{tabular}{c|cccc|cccc}
\toprule
\multirow{2}{*}{Methods} & \multicolumn{4}{c|}{Shelf} & \multicolumn{4}{c}{Campus} \\ \cmidrule{2-9}
& Actor 1 & Actor 2 & Actor 3 & Average & Actor 1 & Actor 2 & Actor 3 & Average \\ \midrule
Belagiannis \textit{et al.}~\cite{belagiannis20153d} & 75.3 & 69.7 & 87.6 & 77.5 & 93.5 & 75.7 & 84.4 & 84.5 \\
Ershadi \textit{et al.}~\cite{ershadi2018multiple} & 93.3 & 75.9 & 94.8 & 88.0 & 94.2 & 92.9 & 84.6 & 90.6 \\
Dong \textit{et al.}~\cite{dong2019fast} & 98.8 & 94.1 & \textbf{97.8} & 96.9 & 97.6 & 93.3 & 98.0 & 96.3 \\
VoxelPose~\cite{Tu2020} & \textbf{99.3} & 94.1 & 97.6 & 97.0 & 97.6 & {93.8} & \textbf{98.8} & \textbf{96.7} \\
MvP (Ours) & \textbf{99.3} & \textbf{95.1} & \textbf{97.8} & \textbf{97.4} & \textbf{98.2} & \textbf{94.1} & 97.4 & 96.6 \\ \bottomrule
\end{tabular}
\label{shelf_campus}
\end{table}
\subsection{Visualization}
\paragraph{3D Pose and Body Mesh Estimation}
We visualize some 3D pose estimations of MvP on the challenging Panoptic dataset in Fig.~\ref{fig:exp_vis}. It can be observed that MvP is robust to large pose deformation (the 1st example) and severe occlusion (the 2nd example), and achieves geometrically plausible results w.r.t. different viewpoints (the rightmost column). Moreover, MvP is extendable to body mesh recovery and achieves fairly good reconstruction results (the 2nd and 4th rows). All these results verify both the effectiveness and extendability of MvP.
Please see supplementary for more examples.
\paragraph{Attention Mechanism}
We visualize the projective attention and the self-attention in Fig.~\ref{fig:exp_vis_attention}. Benefiting from the 3D-to-2D projection, the projective attention can accurately locate the skeleton joint in each camera view (the green point) based on the current estimated 3D joint location. We observe it learns to gather adaptive local context information (the red points) with the deformable sampling operation. For instance, when regressing the 3D position of mid-hip (the 1st example), the projective attention selectively attends to informative joints such as the left and right hips as well as thorax, which offers sufficient contextual information
for accurate estimation. We also visualize the self-attention, which learns pair-wise interaction between all the skeleton joints in the scene. From the 3D plot in Fig.~\ref{fig:exp_vis_attention}, we can observe a certain skeleton joint mainly attends to other joints of the same person instance (more opaque). It also attends to joints from other person instances, but with less attention (more transparent). This phenomenon is reasonable as the skeleton joints of a human body are strongly correlated to each other, \emph{e.g.}, with certain pose priors and bone length.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figures/poseformer-results.pdf}
\caption{
Example 3D pose estimations from Panoptic dataset. The left four columns show the multi-view inputs and the corresponding body mesh estimations. The rightmost column shows the estimated 3D poses from two different viewpoints.
Best viewed in color.
}
\label{fig:exp_vis}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figures/attention_vis.pdf}
\caption{Visualization of projective attention and self-attention on example skeleton joints. The attention weights are obtained from the $4$-th decoder layer of a trained model.
\textit{Projective attention} (in the cropped image triplets): the green points denote the projected anchor points in each camera view, and the red points denote the offset spatial locations, with brighter color for stronger attention.
\textit{Self-attention} (in the 3D skeleton plots): example skeleton joint (green) to all the other skeleton joints (red) in the scene. The color density indicates attention weight. Best viewed in color and $2\times$ zoom.}
\label{fig:exp_vis_attention}
\end{figure}
\subsection{Ablation}
\begin{table}[t]
\renewcommand{\tabcolsep}{2pt}
\small
\begin{subtable}[!t]{0.32\textwidth}
\centering
\begin{tabular}{cccccccc}
\toprule
\textit{RConv} & AP$_{25}$ & AP$_{100}$ & MPJPE \\
\midrule
w/ & 92.3 & 97.5 & 15.8\\
\ \ w/o & 87.5 & 96.2 & 17.4\\
\bottomrule
\end{tabular}
\caption{The effect of \emph{RayConv}. w/o means removing RayConv.
}
\label{ablation:rayconv}
\end{subtable}
\hspace{\fill}
\begin{subtable}[!t]{0.32\textwidth}
\centering
\begin{tabular}{cccccccc}
\toprule
\textit{Query} & AP$_{25}$ & AP$_{100}$ & MPJPE \\
\midrule
Per-joint & 67.4 & 84.7 & 41.2\\
Hier. & 82.5 & 93.2 & 19.5\\
Hier.+ad. & 92.3 & 97.5 & 15.8\\
\bottomrule
\end{tabular}
\caption{Different joint query embedding schemes.}
\label{ablation:query_embed}
\end{subtable}
\hspace{\fill}
\begin{subtable}[!t]{0.32\textwidth}
\centering
\begin{tabular}{cccccccc}
\toprule
\textit{Thr.} & AP$_{25}$ & AP$_{100}$ & MPJPE \\
\midrule
0.0 & 93.1 & 98.5 & 16.3\\
0.1 & 92.3 & 97.5 & 15.8\\
0.2 & 91.1 & 96.2 & 15.5\\
0.4 & 89.2 & 93.7 & 15.0\\
\bottomrule
\end{tabular}
\caption{Different confidence thresholds during evaluation.}
\label{ablation:conf_threshold}
\end{subtable}
\hspace{\fill}
\begin{subtable}[b]{0.32\textwidth}
\centering
\begin{tabular}{cccccccc}
\toprule
\textit{Dec.} & AP$_{25}$ & AP$_{100}$ & MPJPE \\
\midrule
2 & 6.3 & 92.5 & 49.6\\
3 & 63.4 & 95.6 & 22.8\\
4 & 86.8 & 96.8& 17.5\\
5 & 91.8 & 97.6 & 16.2\\
6 & 92.3 & 97.5 & 15.8\\
7 & 92.0 & 97.5 & 15.9\\
\bottomrule
\end{tabular}
\caption{Number of decoder layers.
}
\label{ablation:decoder_layer}
\end{subtable}
\hspace{\fill}
\begin{subtable}[b]{0.32\textwidth}
\centering
\begin{tabular}{cccccccc}
\toprule
\textit{Cam.} & AP$_{25}$ & AP$_{100}$ & MPJPE \\
\midrule
1 & 4.7 & 61.0 & 93.8\\
2 & 37.7 & 93.0 & 34.8\\
3 & 71.8 & 95.1 & 21.1\\
4 & 84.1 & 96.7 & 19.3\\
5 & 92.3 & 97.5 & 15.8\\
\bottomrule
\end{tabular}
\caption{Number of camera views.
}
\label{ablation:camera_view}
\end{subtable}
\hspace{\fill}
\begin{subtable}[b]{0.32\textwidth}
\centering
\begin{tabular}{cccccccc}
\toprule
$K$ & AP$_{25}$ & AP$_{100}$ & MPJPE \\
\midrule
1 & 88.6 & 96.3 & 18.2\\
2 & 89.3 & 97.5 & 17.4\\
4 & 92.3 & 97.7 & 15.8\\
8 & 84.4 & 91.1 & 20.3\\
\bottomrule
\end{tabular}
\caption{Number of deformable points $K$.
}
\label{ablation:deform_points}
\end{subtable}
\hspace{\fill}
\caption{Ablations on Panoptic. In (b), \textit{Hier.} denotes the hierarchical query embedding scheme, and \textit{Hier.+ad.} means further adding the adaptation strategy. Please see the supplement for more ablations.}
\label{ablations}
\end{table}
\paragraph{Importance of RayConv} MvP introduces RayConv to encode multi-view geometric information, \emph{i.e.}, camera ray directions, into the image feature representations.
As shown in Table~\ref{ablation:rayconv}, removing RayConv degrades performance significantly: a 4.8 decrease in AP$_{25}$ and a 1.6 increase in MPJPE. This indicates that the multi-view geometric information is important for the model to precisely localize the skeleton joints in 3D space. Without RayConv, the transformer decoder cannot accurately capture positional information in 3D space, resulting in a performance drop.
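The idea behind RayConv can be illustrated with a small sketch: under a pinhole camera model, each pixel is back-projected to a unit ray in world coordinates, and the three ray components are appended to the feature map as extra channels. The function names and the NumPy formulation below are ours for illustration, not from the released code:

```python
import numpy as np

def ray_directions(K_inv, R, h, w):
    # Unit ray direction (in world coordinates) for every pixel of an
    # h-by-w image under a pinhole camera model.
    # K_inv: 3x3 inverse intrinsics; R: 3x3 world-to-camera rotation.
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    rays = R.T @ (K_inv @ pix)          # back-project pixels, rotate to world
    rays /= np.linalg.norm(rays, axis=0, keepdims=True)
    return rays.T.reshape(h, w, 3)

def add_ray_channels(features, rays):
    # Concatenate ray directions to an h x w x c feature map (c + 3 channels).
    return np.concatenate([features, rays], axis=-1)
```

The decoder then sees features whose channels carry per-pixel 3D direction information, which is what allows it to reason about 3D position.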
\paragraph{Importance of Hierarchical Query Embedding}
As shown in Table~\ref{ablation:query_embed}, compared with the straightforward, unstructured per-joint query embedding scheme, the proposed hierarchical query embedding boosts performance sharply: a 15.1 increase in AP$_{25}$ and a 21.7 decrease in MPJPE.
This advantageous performance clearly verifies that introducing person-level queries to collaborate with joint-level queries better exploits the structural information of the human body and helps the model localize the joints more accurately.
On top of the hierarchical query embedding scheme, adding the query adaptation strategy further improves performance significantly, reaching an AP$_{25}$ of 92.3 and an MPJPE of 15.8. This shows the proposed approach effectively adapts the query embeddings to the target scene, and such adaptation is indeed beneficial for the generalization of MvP to novel scenes.
\paragraph{Different Model Designs} We also examine the effects of varying the following designs of the MvP model to better understand them.
\textbf{Confidence Threshold } During inference, a confidence threshold is used to filter out low-confidence, erroneous pose predictions and obtain the final result. A higher confidence threshold selects predictions more restrictively.
As shown in Table~\ref{ablation:conf_threshold}, a higher confidence threshold yields a lower MPJPE, as it selects more accurate predictions; however, it also filters out some true positive predictions and thus reduces the average precision.
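This filtering step amounts to a one-line selection. A minimal sketch (the names \texttt{poses} and \texttt{scores} are illustrative, not from the released code):

```python
def filter_predictions(poses, scores, threshold=0.1):
    # Keep only pose predictions whose confidence exceeds the threshold;
    # raising the threshold trades recall (AP) for precision (MPJPE).
    return [p for p, s in zip(poses, scores) if s > threshold]
```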
\textbf{Number of Decoder Layers } Decoder layers refine the pose estimation, so stacking more of them gives better performance (Table~\ref{ablation:decoder_layer}). For instance, the MPJPE is as high as 49.6 when using only two decoder layers, but it is reduced to 22.8 with three. This clearly shows that the progressive refinement strategy of our MvP model is effective. However, the benefit of additional decoder layers diminishes once the number of layers is large enough, implying the model has reached its capacity.
\textbf{Number of Camera Views } Multi-view inputs provide complementary information, which is extremely useful for handling challenging factors in 3D pose estimation such as occlusion. We vary the number of camera views to examine whether MvP can effectively fuse and leverage multi-view information to continuously improve pose estimation quality (Table~\ref{ablation:camera_view}).
As expected, with more camera views the 3D pose estimation accuracy monotonically increases, demonstrating the capacity of MvP in fusing multi-view information.
\textbf{Number of Deformable Sampling Points } Table~\ref{ablation:deform_points} shows the effect of the number of deformable sampling points $K$ used in the projective attention. With only one deformable point, MvP already achieves a respectable result, \emph{i.e.}, 88.6 in AP$_{25}$ and 18.2 in MPJPE. Using more sampling points further improves the performance, demonstrating that the projective attention effectively aggregates information from useful locations. The model performs best at $K=4$. Further increasing $K$ to 8 degrades performance, likely because too many deformable points introduce redundant information and make the model harder to optimize.
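The sampling step in projective attention can be sketched as follows: features at the $K$ offset locations around a projected anchor are aggregated with attention weights. We use nearest-neighbour lookup and hand-given offsets and weights purely for illustration; the actual model learns offsets and weights and samples with bilinear interpolation:

```python
import numpy as np

def projective_attention_sample(feat_map, anchor, offsets, weights):
    # Weighted aggregation of features at K offset locations around a
    # projected anchor point (x, y). feat_map: H x W x C; offsets: K pairs
    # (dx, dy); weights: K attention weights. Nearest-neighbour sketch only.
    h, w, c = feat_map.shape
    out = np.zeros(c)
    for (dx, dy), a in zip(offsets, weights):
        x = int(np.clip(round(anchor[0] + dx), 0, w - 1))
        y = int(np.clip(round(anchor[1] + dy), 0, h - 1))
        out += a * feat_map[y, x]
    return out
```

In the full model this aggregation is repeated per camera view and the per-view results are fused before regressing the joint position.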
\section{Conclusion}
\label{sec:conclusion}
We introduced a direct and efficient model, named Multi-view Pose transformer (MvP), to address the challenging multi-view multi-person 3D human pose estimation problem. Unlike existing methods that rely on tedious intermediate tasks, MvP substantially simplifies the pipeline into direct regression by carefully designing a transformer-like model architecture with a novel hierarchical joint query embedding scheme and a projective attention mechanism. We conducted extensive experiments to verify its superior performance and speed over well-established baselines.
We empirically found that MvP needs sufficient data for training since it learns the 3D geometry implicitly. In the future, we will study how to enhance the data efficiency of MvP by leveraging strategies such as self-supervised pre-training or by exploring more advanced approaches. Similar to prior works, we also found that MvP suffers a performance drop in cross-camera generalization, that is, when generalizing to novel camera views. We will explore approaches such as disentangling camera parameters and multi-view feature learning to improve this aspect.
Besides, we will explore large-scale applications of MvP and further extend it to other relevant tasks. Thanks to its efficiency, MvP should scale to very crowded scenes with many persons. Moreover, the MvP framework is general and thus extensible to other 3D modeling tasks such as dense mesh recovery of common objects.
{\small
\bibliographystyle{plain}
\section{Introduction}
Answer Set Programming (ASP) \cite{GelK14} has become a dominant language in the knowledge representation community
\cite{McIlraith11,kowalski2014}
because it offers elegant and effective solutions not
only to classical Artificial Intelligence problems but
also to many challenging application problems. Thanks to its
simplicity and clarity in both informal and formal
semantics, Answer Set Programming provides a ``natural"
modeling of many problems. At the same time,
the fully declarative nature of ASP also removes a
major barrier to teaching logic programming, as the procedural features of
classical logic programming systems such as PROLOG are
regarded as a source of misconceptions in students' learning
of Logic Programming \cite{mendelsohn1990programming}.
ASP has been taught to undergraduate students in the Artificial Intelligence course at Texas Tech for more
than a decade. We believe ASP has become mature enough
to serve as a language for introducing programming and problem solving to high school students. We have offered
many sessions to
students at New Deal High School and a three-week ASP course to high school students in the TexPREP program (http://www.math.ttu.edu/texprep/).
In our teaching practice, we found that ASP is well accepted by the students, who were able to focus on
problem solving instead of the language itself. The students were able to write programs to answer questions about relationships (e.g., parent, ancestor) among family members and to find solutions to Sudoku problems.
However, we encountered major issues while using existing tools: installing the tools
on computers in a lab or at home is complex, and the existing tools are sensitive to the local settings of a computer. As a result, the flow of teaching
was often interrupted by problems associated with the tools.
The strong technical support needed to
manage and use the tools
is prohibitive for teaching ASP to general undergraduate
students or K-12 students.
During our teaching practice, we also found the need for
a more vivid presentation of the results of a logic program
(beyond querying the program or obtaining its answer sets).
We also noted observations in the literature that multimedia and
visualization play a positive role in promoting students'
learning \cite{guzdial2001use,clark2009rethinking}.
To overcome the issues related to
software tool management and use,
we have designed and built an online development
environment for Answer Set Programming.
The environment provides an editor for users
to edit their programs, an online file system to
store and retrieve their programs, and a few simple
buttons that allow querying the program inside the
editor or obtaining the answer
sets of the program. The environment
uses SPARC \cite{BalaiGZ13} as the ASP language.
SPARC is designed to further facilitate the teaching of logic programming by introducing sorts (or types), which simplify the difficult programming concept of {\em domain variables} in classical ASP systems such as Clingo \cite{gebser2011potassco} and help programmers identify errors early thanks to sort information. Initial experiments of teaching SPARC to high school students are promising \cite{reyes2016using}.
To promote students' interest and learning,
our environment also
introduces predicates
for students to present their
solutions to problems in
a more visually
straightforward and
exciting manner (instead of
the answer sets, which are
simply sets of literals). The URL for the online environment
is \textit{http://goo.gl/ukSZET}.
The rest of the paper is organized as follows.
Section~\ref{sec:sparc} recalls SPARC. The design
and implementation of the online environment
are presented in Section~\ref{sec:online}. The design
and rendering of the drawing and animation predicates
are presented in Section~\ref{sec:drawing}. The paper
is concluded by Section~\ref{sec:discussion}.
\section{Answer Set Programming Language -- SPARC}
\label{sec:sparc}
SPARC is an Answer Set Programming language which allows for the explicit representation of sorts. A SPARC program consists of three sections: {\em sorts}, {\em predicates} and {\em rules}.
We will use the map coloring problem as an example to illustrate
SPARC: can the USA map be colored using red, green and blue
such that no two neighboring states have the same color?
The first step is to identify the objects and their sorts in the problem.
For example, the three colors are important and they form the
sort of color for this problem. In SPARC syntax, we use
$\#color=\{red, green, blue\}$ to represent the
objects and their sort.
The sorts section of the SPARC program is
\begin{verbatim}
sorts
#color = {red,green,blue}.
#state = {texas, colorado, newMexico, ......}.
\end{verbatim}
The next step is to identify relations in the problem and declare in
the predicates section the sorts of the parameters of the predicates
corresponding to the relations. The predicates section of the program is
\begin{verbatim}
predicates
neighbor(#state, #state).
ofColor(#state, #color).
\end{verbatim}
The last step is to identify the knowledge needed in the problem
and translate it into rules. The rules section of a SPARC program
consists of rules in the typical ASP syntax.
The rules section of a SPARC program will include the following.
\begin{verbatim}
rules
neighbor(texas, colorado).
neighbor(S1, S2) :- neighbor(S2, S1).
ofColor(S, red) | ofColor(S, green) | ofColor(S, blue).
:- ofColor(S1, C), ofColor(S2, C), neighbor(S1, S2), S1 != S2.
\end{verbatim}
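What the solver computes for this program can be mimicked, for a tiny instance, by brute-force enumeration in Python. This is purely an illustration of the semantics of the solutions; an ASP solver searches far more efficiently:

```python
from itertools import product

def color_map(states, neighbors, colors=("red", "green", "blue")):
    # Enumerate every color assignment in which no two neighboring states
    # share a color, i.e. the same solutions the ASP program describes.
    symmetric = set(neighbors) | {(b, a) for a, b in neighbors}
    solutions = []
    for assignment in product(colors, repeat=len(states)):
        coloring = dict(zip(states, assignment))
        if all(coloring[a] != coloring[b] for a, b in symmetric):
            solutions.append(coloring)
    return solutions
```

Each returned dictionary corresponds to one answer set of the SPARC program, with {\tt ofColor(S, C)} atoms encoded as key-value pairs.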
\section{Online Development Environment Design and Implementation}
\label{sec:online}
\subsection{Environment Design}
The principle of the design is that the environment,
with the simplest possible interface,
should provide full support for teaching
Answer Set Programming, from writing a program to
obtaining its answer sets.
The design of the interface is shown in Figure~\ref{fig:onlineSPARC}. It
consists of 3 components: 1) the editor to edit a program, 2) the file navigation
system and 3) the operations over the program.
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{onlineSPARC.png}
\caption{\label{fig:onlineSPARC} User Interface of the System (the red numbers indicate the areas/components in the interface)}
\end{figure}
One can edit a SPARC program directly inside the editor, which
has syntax highlighting (area 1). The file inside the editor
can be saved by clicking the ``Save" button (2.4). Files and folders
are displayed in area 2.1; the user can traverse them with the mouse,
as in the file system of a typical operating system.
Files can be deleted and renamed. To create a folder or
a file, one clicks the ``New" button (2.3). The panel showing files/folders can be toggled
by clicking the ``Directory" button (2.2), giving users more space
for the editing or result area (4).
To query the program inside the editor, one types a query (a conjunction of literals) in the text box (3.1) and then presses the ``Submit"
button (3.1).
The answer to the query is shown in area 4.
For a ground query (i.e., a query without
variables), the answer is {\em yes} if every literal in the query is in every
answer set of the program, {\em no} if the complement ($p$ and $\neg p$, where $p$ is an atom, are complements) of some literal is in every answer set
of the program, and {\em unknown} otherwise. An answer to a query with variables is a set of ground terms for the variables in the query such that
the answer to the query obtained by replacing the variables with the corresponding ground terms is yes. Formal definitions of queries and answers to queries can be found in Section~2.2 of \cite{GelK14}.
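The yes/no/unknown semantics for ground queries can be sketched in a few lines of Python. This is an illustration only: literals are plain strings, the complement of \texttt{p} is written \texttt{-p}, and the actual environment delegates this reasoning to the SPARC solver:

```python
def answer_query(query, answer_sets):
    # "yes" if every literal of the query is in every answer set;
    # "no" if the complement of some literal is in every answer set;
    # "unknown" otherwise.
    def complement(lit):
        return lit[1:] if lit.startswith("-") else "-" + lit
    if all(all(l in s for l in query) for s in answer_sets):
        return "yes"
    if any(all(complement(l) in s for s in answer_sets) for l in query):
        return "no"
    return "unknown"
```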
To see the answer sets of a program, one clicks the ``Get Answer Sets" button (3.2).
When the ``Execute" button (3.3) is clicked, the drawing and animation atoms
in the answer set of the program are rendered in the display area (4). (For now,
when there is more than one answer set, the environment reports an error.)
A user can access the full interface discussed above
only after logging in, and logs out by clicking the ``Logout" button (5).
Without login, the interface is much simpler, with all file-navigation
functionality hidden. Such an interface is convenient
for testing or for a quick demo of a SPARC program.
\subsection{Implementation}
The architecture of the online environment follows
that of a typical web application. It consists of a front
end component and a back end component.
The front end provides the user interface and sends
users' requests to the back end; the back end
fulfills each request and returns results, if needed,
to the front end. After receiving the results
from the back end, the front end updates the
interface correspondingly (e.g., displays query
answers in the result area). Details about the components
and their interactions are given below.
\medskip\noindent {\bf Front End}. The front end is implemented in HTML and
JavaScript. The editor uses
ACE, a code editor written in JavaScript that can be embedded
in any web page ({\tt https://ace.c9.io/}).
The panel for file/folder navigation is based on JavaScript
code by Yuez.me.
\medskip\noindent {\bf Back End and Interactions between the Front End and the Back End}. The back end is mainly implemented in PHP and is hosted
on the server side. It has three components:
1) file system management, 2) the inference engine and 3) drawing/animation rendering.
The {\bf file system management} component
uses a database to manage the files
and folders of all users of the environment. The ER diagram of the
system is shown below:
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.7\textwidth]{ER-Diagram.png}
\end{center}
\caption{The ER diagram for file/folder management. Most names have a straightforward meaning. The Folderurl and Fileurl above
refer to the full path of the folder/file in the file system.}
\end{figure}
The SPARC files are saved in the server file system, not in a database table; sharing is managed through the sharing information in the relevant database tables.
Our implementation uses the MySQL database system.
The file management system receives requests from the front end, such as creating a new file/folder, deleting a file, saving a file, and retrieving the files and folders.
It then updates the tables and the local file
system correspondingly and returns the needed results to
the front end. After the front end receives the results, it
updates the graphical user interface if needed (e.g., displays
the program returned from the back end inside the
editor).
The {\bf inference engine} receives requests to answer a query or to obtain all answer sets of a program. It calls the SPARC solver
\cite{BalaiGZ13} to find all answer sets and then, based on them,
returns the requested information to the front end.
After the front end receives the response from the back end, it
shows the result in the display area of the web page.
Details of the design and implementation of
{\bf drawing/animation rendering} can be found in Section~\ref{sec:drawingImplementation}.
\section{Drawing and Animation Design and Implementation}
\label{sec:drawing}
\subsection{Drawing and Animation Design}
To allow programmers to create drawings and animations using SPARC, we simply design two predicates, called {\em display predicates}: one for drawing and one for animation.
The atoms formed with these predicates are called {\em display atoms}. To use them
in a SPARC program, a programmer
needs to include the predefined
sorts (e.g., sorts of colors,
fonts and numbers) and the corresponding predicate declarations.
In the following,
we focus only on the atoms and their
use for drawing and animation.
\medskip
\noindent {\bf Drawing}.
A {\em drawing predicate} is of the form {\tt draw($c$)},
where $c$ is called a {\em drawing command}.
Intuitively, an atom of this predicate draws text and graphics as instructed by the command $c$. By drawing a picture, we mean that a {\em shape} is drawn with a {\em style}. A {\em shape} is either text or a geometric line or curve. A {\em style} specifies the physical visual properties of the shape it is
applied to; visual properties include color, thickness, and font. For modularity, we introduce {\em style names}, which are labels that can be associated with different styles so that a style may be reused without being redefined. A drawing is completed by placing the styled shape at a certain position on the {\em canvas}, which is simply the display board. Note that the origin of the coordinate system is at the top left corner of the canvas.
Here is an example of drawing a red line from point $(0,0)$ to $(2,2)$. First, we introduce a style name
{\tt redline} and associate it with the red color by the {\em style command} {\tt line\_color(redline, red)}. With this style defined, we then draw the red line by the {\em shape command} {\tt draw\_line(redline, 0, 0, 2, 2)}. Style commands and shape commands together form all drawing commands.
The SPARC program rules to draw the given line are

\noindent {\tt draw(line\_color(redline, red)).}\\
{\tt draw(draw\_line(redline, 0, 0, 2, 2)).}
The style commands of our system include the following:
\begin{itemize}
\item {\tt linewidth(sn, t)} specifies that lines drawn with style name {\tt sn} should be drawn with line thickness {\tt t};
\item {\tt textfont(sn, fs, ff)} specifies that text drawn with style name {\tt sn} should be drawn with font size {\tt fs} and font family {\tt ff};
\item {\tt linecap(sn, c)} specifies that lines drawn with style name {\tt sn} should be drawn with capping {\tt c}, such as an arrowhead;
\item {\tt textalign(sn, al)} specifies that text drawn with style name {\tt sn} should be drawn with alignment {\tt al} on the page;
\item {\tt line\_color(sn, c)} specifies that lines drawn with style name {\tt sn} should be drawn with color {\tt c};
\item {\tt textcolor(sn, c)} specifies that text drawn with style name {\tt sn} should be drawn with color {\tt c}.
\end{itemize}
The shape commands include the following:
\begin{itemize}
\item {\tt draw\_line(sn, xs, ys, xe, ye)} draws a line
from starting point {\tt (xs, ys)} to ending point
{\tt (xe, ye)} with style name {\tt sn};
\item {\tt draw\_quad\_curve(sn, xs, ys, bx, by, xe, ye)}
draws a quadratic Bezier curve, with style name
{\tt sn}, from the starting point {\tt (xs, ys)}
to the end point {\tt (xe, ye)} using the control point {\tt (bx, by)};
\item {\tt draw\_bezier\_curve(sn, xs, ys, b1x, b1y, b2x, b2y, xe, ye)} draws a cubic Bezier curve, using
style name {\tt sn}, from the starting point {\tt (xs, ys)} to the end point {\tt (xe, ye)} using the control points {\tt (b1x, b1y)} and {\tt (b2x, b2y)};
\item {\tt draw\_arc\_curve(sn, xs, ys, r, sa, se)} draws
an arc, using style name {\tt sn}, centered at
{\tt (xs, ys)} with radius {\tt r}, starting at angle {\tt sa} and ending at angle {\tt se} going in the clockwise direction;
\item {\tt draw\_text(sn, x, xs, ys)} prints the value of {\tt x} as text
at point {\tt (xs, ys)} using
style name {\tt sn}.
\end{itemize}
\medskip \noindent {\bf Animation}.
A {\em frame}, a basic concept in animation, is defined as a drawing.
When a sequence of frames, whose contents are normally related, is shown on the screen in rapid succession (usually 24, 25, 30, or 60 frames per second), the illusion of fluid motion is created. To design an animation, a designer
specifies the drawing for each frame. Since the order of frames matters, we give each frame a value equal to its index in the sequence of frames. We introduce the {\em animate predicate} {\tt animate($c, i$)},
which indicates a desire to draw a picture at the $i^{th}$ frame using drawing command $c$; $i$ starts from $0$.
The frames are shown on the screen at a rate of 60 frames per second: the $i^{th}$ frame is shown at time $i \cdot 1/60$ seconds
from the start of the animation, for a duration of $1/60$ of a second.
As an example, consider an animation in which a red box (with side length of 10 pixels) moves from the point $(1,70)$ to $(200, 70)$.
We create 200 frames with the box (whose bottom left corner is) at point $(i+1, 70)$ in the $i^{th}$ frame.
Let the variable {\tt I} be of a sort called frame, defined from 0 to 199.
In every frame {\tt I}, we specify the style {\tt redline}:

\noindent {\tt animate(line\_color(redline, red), I).}
To make a box at the $I^{th}$ frame, we need to draw the box's four sides using the style associated with style name {\tt redline}. The following describes the four sides of a box at any frame: bottom - $(I+1,70)$ to $(I+1+10, 70)$, left - $(I+1, 70)$ to $(I+1, 60)$, top - $(I+1, 60)$ to $(I+1+10, 60)$ and right - $(I+1+10, 60)$ to $(I+1+10, 70)$. Hence we have the rules
\noindent {\tt \small animate(draw\_line(redline,I+1,70,I+11,70),I).}
\noindent {\tt \small animate(draw\_line(redline,I+1,70,I+1,60),I).}
\noindent {\tt \small animate(draw\_line(redline,I+1,60,I+11,60),I).}
\noindent {\tt \small animate(draw\_line(redline,I+11,60,I+11,70),I).}
Note that a drawing atom produces the intended drawing in every frame,
creating a static drawing, whereas an animate atom produces a drawing only in the specified frame.
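To check the geometry and timing of the example above, the following Python sketch (ours, purely illustrative and not part of the SPARC system; the function names are hypothetical) computes the four line segments produced by the {\tt animate} rules for a given frame, and the time at which that frame is shown at 60 frames per second.

```python
def box_sides(i):
    """Return the four sides (start, end) of the red box at frame i.

    The box's left edge is at x = i + 1; y grows downward on the canvas,
    so the bottom of the box is at y = 70 and the top at y = 60.
    """
    x = i + 1
    return {
        "bottom": ((x, 70), (x + 10, 70)),
        "left":   ((x, 70), (x, 60)),
        "top":    ((x, 60), (x + 10, 60)),
        "right":  ((x + 10, 60), (x + 10, 70)),
    }

def show_time(i, fps=60):
    """Time in seconds, from the start of the animation, at which frame i is shown."""
    return i / fps
```

For instance, at frame 0 the bottom side runs from $(1,70)$ to $(11,70)$, matching the first {\tt animate(draw\_line(...))} rule, and frame 60 is shown exactly one second into the animation.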
\subsection{Algorithm and Implementation}
\label{sec:drawingImplementation}
The input to the main algorithm is a
SPARC program $P$. The output is an HTML5 program segment containing a canvas
element, which will be rendered by a Web browser.
A key part of our
algorithm is to render the display atoms (specified in the answer set of $P$) using canvas methods.
The HTML5 canvas element is used to draw graphics via scripting in JavaScript. In the following, we use an example to demonstrate how
a drawing command is implemented by JavaScript code using canvas methods.
Consider again

\noindent {\tt draw(line\_color(redline, red)).}\\
{\tt draw(draw\_line(redline, 0, 0, 2, 2)).}

\noindent When we render the shape command {\tt draw\_line}, we need
to know the meaning of the style name {\tt redline}.
From the style command {\tt line\_color},
we know it denotes the color {\tt red}.
We first create an object {\tt ctx} for a given
canvas (simply identified by a name) on which
we would like to render the display atoms.
The object offers methods to render graphics in the canvas. We then use the following JavaScript code to implement the
shape command to draw a line from (0,0) to (2,2): \\
{\tt ctx.beginPath();}\\
{\tt ctx.moveTo(0,0);}\\
{\tt ctx.lineTo(2,2);}\\
{\tt ctx.stroke();}
To make the line red, we insert the following
JavaScript statement before the {\tt ctx.stroke()} in
the code above: \\
{\tt ctx.strokeStyle="red";}

The meanings of the canvas methods in the code above are straightforward,
so we do not explain them further.
Now we are in a position to present the algorithm.
\medskip\noindent Algorithm:
\begin{itemize}
\item Input: a SPARC program $P$ with display predicates.
\item Output: an HTML program segment which allows the rendering of the display atoms in the answer set of $P$ in a Web browser.
\item Steps:
\begin{enumerate}
\item Call SPARC solver to obtain an answer set $S$ of $P$.
\item Let $script$ be an array of empty strings. $script[i]$ will hold the JavaScript statements to render the graphics for $i^{th}$ frame.
\item For each display atom $a$ in $S$,
\begin{itemize}
\item If any error is found in $a$, present an error message to the user detailing the incorrect usage of the atom.
\item If $a$ contains a shape command, let its style name be $sn$ and find all style commands defining $sn$. Translate the style commands into the corresponding JavaScript code $P_s$ that modifies the styling of the canvas pen, and translate the shape command into JavaScript code $P_r$ that renders the shape. Let $P_d$ be the proper combination of $P_s$ and $P_r$ to
render $a$.
\begin{itemize}
\item if $a$ is a drawing atom, append $P_d$
to $script[i]$ for every frame $i$ of the animation.
\item if $a$ is an animation atom, let $i$ be the frame referred to in $a$. Append $P_d$ to $script[i]$.
\end{itemize}
\end{itemize}
\item Formulate the output program segment $O$ as follows:
\begin{itemize}
\item add, to $O$, the canvas element {\tt <canvas id="myCanvas" width="500" height="500"> </canvas>}.
\item add, to $O$, the script element {\tt <script> </script>} whose content includes
\begin{itemize}
\item the JavaScript code to associate the drawings in this script element with the canvas element above.
\item an array $drawings$ initialized by the content of $script$ array.
\item JavaScript code that executes the statements in $drawings[i]$ when the time to show frame $i$ arrives.
\end{itemize}
\end{itemize}
\end{enumerate}
\end{itemize}
\noindent End of algorithm.
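To make the main loop of the algorithm concrete, here is a simplified Python sketch (ours, not part of the implementation; the tuple representation of display atoms and the function names are hypothetical, and only the {\tt line\_color} and {\tt draw\_line} commands are handled). It shows how drawing atoms are appended to every frame's script while animate atoms go only to their own frame.

```python
NUM_FRAMES = 200  # assumed animation length for this sketch

def style_js(style_cmds, sn):
    """JavaScript statements implementing the style commands defining style name sn."""
    js = []
    for cmd, name, value in style_cmds:
        if name == sn and cmd == "line_color":
            js.append(f'ctx.strokeStyle="{value}";')
    return js

def shape_js(cmd, args):
    """JavaScript statements rendering a single shape command."""
    if cmd == "draw_line":
        xs, ys, xe, ye = args
        return ["ctx.beginPath();", f"ctx.moveTo({xs},{ys});",
                f"ctx.lineTo({xe},{ye});", "ctx.stroke();"]
    return []

def build_scripts(display_atoms, style_cmds):
    """Build script[i]: the JavaScript statements rendering frame i."""
    script = [[] for _ in range(NUM_FRAMES)]
    for atom in display_atoms:
        kind, (cmd, sn, *args) = atom[0], atom[1]
        code = style_js(style_cmds, sn) + shape_js(cmd, args)
        if kind == "draw":          # drawing atom: rendered in every frame
            for frame in script:
                frame.extend(code)
        else:                       # animate atom: rendered only in frame atom[2]
            script[atom[2]].extend(code)
    return script
```

For example, a {\tt draw(draw\_line(redline, 0, 0, 2, 2))} atom contributes its statements to every entry of {\tt script}, whereas {\tt animate(draw\_line(redline, 1, 70, 11, 70), 3)} contributes only to {\tt script[3]}.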
\medskip\noindent {\bf Implementation}. The ``Execute'' button in the web page (front end) of the online SPARC environment lets programmers render the display atoms in the answer set of their programs.
The Java program implementing the algorithm
above resides on the server side. When the ``Execute'' button is clicked,
the programmer's SPARC program is
sent to the server side and the algorithm is
invoked on it. The output
(i.e., the canvas
and script elements) of the
algorithm is sent back to the front end,
where JavaScript catches
the output and inserts it into the result
display area of the web page (see Figure~\ref{fig:onlineSPARC}). The
browser then automatically renders the
updated web page, and the drawing or animation
is displayed as a result.
Example SPARC programs with drawing and animation can be found at {\tt https://goo.gl/nLD4LD}.
\section{Discussion and Related Work}
\label{sec:discussion}
As ASP has been applied to more and more problems, the
importance of ASP software development tools has been
recognized by the community.
Some integrated development environment (IDE) tools, e.g., APE \cite{sureshkumar2007ape}, ASPIDE \cite{FebbraroRR11}, iGROM \cite{iGROM} and SeaLion \cite{oetsch2013sealion},
have previously been developed.
They provide a graphical user interface for users to carry out a sequence
of tasks from editing an ASP program to debugging it, easing
the use of ASP significantly. However, the target audience of these tools
is experienced software developers. Compared with the existing environments, ours is online, self-contained (i.e., fully
independent of the users' local computers) and
provides a very simple interface, focusing on teaching only.
The interface is operable by anyone who is able to use a typical
web site and traverse a local file system.
As for drawing and animation, our work is based on
the work of Cliffe et al.
\cite{cliffe2008aspviz}. They were the first to introduce, to ASP, a design of display predicates and to render drawings and animations, using the program ASPviz. Our drawing commands are similar to theirs.
The syntax of their animation atoms is not clear from their paper. It seems
(from examples on GitHub at {\tt goo.gl/kgUzJK}, accessed on 4/30/2017)
that multiple answer sets may be needed to produce an animation.
In our work, we use a design where programmers
are allowed to draw at any frame (specifying a range of frames) and
the real-time difference between two neighboring frames is 1/60 second.
Another clear difference is that our implementation is online while
theirs is standalone software. A more recent
system, Kara, standalone software by Kloimullner et al. \cite{kloimullner2013kara}, deals with drawing only. Another system,
ARVis \cite{ambroz2013arvis}, offers methods
to visualize the relations between answer sets
of a given program.
We also note an online environment for IDP (which is a knowledge representation paradigm close to ASP) by Dasseville and Janssens \cite{dasseville2015web}.
It also utilizes a very simple interface for the IDP system and allows
drawing and animation using IDP through IDPD3 (a library to visualize
models of logic theories) by Lapauw et al. \cite{lapauw2015visualising}.
In addition to drawing and animation, IDPD3 allows users' interaction
with the IDP program (although in a limited manner in its current
implementation), which is absent from most other systems including
ours.
Our environment is also different from the online IDP environment in
that ours targets ASP and offers an online file system.
Both DLV and Clingo offer online environments ({\tt http://asptut.gibbi.com/} and
{\tt http://potassco.sourceforge.net/clingo.html}, respectively)
which provide an editor and a window showing the output of
the execution of the dlv and clingo commands, but provide no other functionality. We also note SWISH ({\tt http://lpsdemo.interprolog.com}), which offers
an online environment for Prolog and the more
recent computer language Logic-based Production
Systems \cite{kowalski2016programming}.
A unique functionality of our online environment is querying a program.
It allows instructors to teach general students the
basics of Logic Programming without first
introducing the full concept of answer sets.
When we previously reached out to a
local high school, we needed an experienced student
to communicate with the school lab several times before the
final installation of the software on their computers could be completed. A carefully
drafted document was prepared for students to install the software
on their own computers. There were still unexpected issues
during labs or when students used or installed the software at home.
These difficulties made it almost impossible to reach out to
the high school with success.
With the availability of our online environment, we only
need to focus on the teaching content of ASP, without
worrying about technical support. We hope that our environment, and other online environments for knowledge representation systems, will
expand the teaching of knowledge representation to
a much wider audience in the future. The drawing
and animation features are new to the online
environment and were not tested in high school
teaching. We used the drawing and animation
in a senior-year course -- special topics in AI --
in Spring 2017.
Students demonstrated interest in drawing and animation,
and they were able to produce interesting
animations. We also noted that ASP solvers can be
very slow in producing
the answer set of an animation program
when the ground program is big.
In the future, it will be interesting to conduct a
more rigorous evaluation of the online environment.
\section{Acknowledgments}
The authors were partially supported by the National Science Foundation (grant \#CNS-1359359).
We thank Evgenii Balai, Mbathio Diagne, Michael Degraw, Peter Lee, Maede Rayatidamavandi, Crisel Suarez, Edward Wertz and Shao-Lon Yeh
for their contribution to the implementation of the environment.
We thank Michael Gelfond and Yinan Zhang for their input and help.
\bibliographystyle{splncs03}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 2,997
|
Extreme 2 helmed by Dan Cheresh, a new M32 owner, was tied with Ryan Devos's XS Energy after day one for the lead after seven races that saw four different winners and lead changes throughout every lap. However, on day two Ryan Devos and his crew of Morgan Larson, Garth Ellingham, Mark Buckley, and Will Ryan showed that they were a step ahead winning the final four races and taking the first event title. Convexity, helmed by Don Wilson, was second while REV, skippered by Rick Devos, rounded out the podium in third.
Morgan has all the fun!
Is it THAT DeVos family?
Think it was only Ryan.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 8,168
|
Aiken for another win
EAST LONDON, SOUTH AFRICA - FEBRUARY 16: Thomas Aiken during day 4 of the Africa Open at East London Golf Club on February 16, 2014 in East London, South Africa. EDITOR'S NOTE: For free editorial use. Not available for sale. No commercial usage. (Photo by Petri Oeschger/Sunshine Tour/Gallo Images)
Thomas Aiken last won the European Tour in 2014 and will be salivating at the thought of a fourth title when he begins the Portugal Masters on Thursday.
At 73rd in the Race to Dubai he'll need good results to make the Finals Series, but his recent share of 33rd place at the British Masters will go a long way to boosting the confidence.
Also on his side is a good track record at Victoria Clube de Golfe, host of this week's €2-million tournament, the penultimate event before the regular season ends.
Last year Aiken shot a first round of 65 at the seaside Portuguese course on his way to a share of 25th place, and a year prior he shared 12th place after the tournament was reduced to 36 holes due to intense rainstorms.
The 33-year-old won his PGA Tour card and has spent most of the year competing in the USA. His hold on 73rd in the Race to Dubai is impressive, considering he's only played six events on the European Tour this season.
He returns needing one last push.
Also in the mix is Justin Walters, who was sole second the Portugal Masters in 2013, one shot behind winner David Lynn.
Walters ranks 98th in the Race to Dubai and also needs to play his way into the Finals Series, which includes the Nedbank Golf Challenge for the first time this year.
The South African also tied for 33rd at the British Masters last week and his fond memories of Portugal will be a useful tool in his hunt for glory.
Posted in News Tagged Africa Open, David Lynn, European Tour, Justin walters, Portugal Masters, Race to Dubai, Thomas Aiken, Victoria Clube de Golfe
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 4,604
|
.PHONY: clean build build-js test bundle dist-ubuntu install-user
UNAME := $(shell uname -a)
OPTS := --pedantic
all: clean build build-js test bundle
clean:
stack clean $(NIX)
# rm www/app.js
build:
stack build $(OPTS)
build-js:
cd www && npm install && npm run setup && npm run build
test:
- mkdir var/
stack test $(NIX) health:unit --test-arguments --color
bundle:
rm -rf bundle
mkdir bundle
mkdir bundle/www
cp `stack path --local-install-root`/bin/health bundle/health-server
cp `stack path --local-install-root`/bin/orizentic bundle/orizentic
strip bundle/health-server
strip bundle/orizentic
cp www/index.html bundle/www/index.html
cp www/index.js bundle/www/index.js
cp -r www/static/ bundle/www/static
dist-ubuntu:
rm -rf dist/
mkdir -p dist/opt/health/bin
cp bundle/health-server dist/opt/health/bin
cp bundle/orizentic dist/opt/health/bin
mkdir -p dist/opt/health/www
cp -r bundle/www dist/opt/health/www
mkdir -p dist/opt/health/etc/
cp etc/health-example.yml dist/opt/health/
mkdir -p dist/lib/systemd/system
cp etc/health-example.service dist/opt/health/health-example.service
cd dist && fpm -f -s dir -t deb -n health -v `git describe --abbrev=4 HEAD` opt lib
rm -rf dist/opt lib
install-user:
-systemctl --user stop health
cp bundle/health-server ~/.local/bin
cp bundle/orizentic ~/.local/bin
mkdir -p ~/.local/share/health/
mkdir -p ~/.local/share/health/www
cp -r bundle/www/* ~/.local/share/health/www
systemctl --user daemon-reload
systemctl --user enable health
systemctl --user start health
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 9,604
|
namespace _01.School
{
using System;
public class Student
{
private const int MaxNumber = 99999;
private const int MinNumber = 10000;
private int number;
private string name;
public Student(string name, int number)
{
this.Name = name;
this.Number = number;
}
public string Name
{
get
{
return this.name;
}
set
{
if (string.IsNullOrEmpty(value))
{
throw new ArgumentException("The name could not be null or empty!");
}
this.name = value;
}
}
public int Number
{
get
{
return this.number;
}
set
{
if (value < Student.MinNumber || value > Student.MaxNumber)
{
throw new ArgumentOutOfRangeException("The number of student" + this.Name +
"should be between 10000 and 99999!");
}
this.number = value;
}
}
public override bool Equals(object obj)
{
var other = obj as Student;
if (other == null)
{
throw new ArgumentException("The student for comparison have to be object of class Student!");
}
return this.Number.Equals(other.Number);
}
public override int GetHashCode()
{
return this.Number.GetHashCode();
}
}
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 7,249
|
Number Plate Listings / C / CF
The AAA 1 series of plates, however, did not last forever. By 1950, some more popular areas were running out of numbers again. In order to avoid this situation, a reversed series was introduced. This meant that instead of AAA 1, they began working backwards from 1 AAA. Although, with the increasing popularity of cars, even this began to be an issue towards 1960. A new series was then introduced, a 4 number scheme from 1 A to 9999 YY.
\section{Introduction}
Bright quasars at $z \approx 6$ are very luminous and rare objects that
can be detected out to huge cosmological distances in very large
area surveys like the Sloan Digital Sky Survey \citep{fan04}. Their
estimated space density is $\approx 2.2 \cdot 10^{-9} (Mpc/h)^{-3}$
\citep{fan04}, that is, about one object per $200$ deg$^2$ of sky,
assuming a depth of $\Delta z = 1$ centered at $z=6.1$ under the third
year WMAP cosmology \citep{WMAP3}. Their luminosity is thought to be
due to accretion onto a super-massive black hole \citep[e.g.,
see][]{hop05}.
A common expectation is that the luminous high-z quasars sit at the
center of the biggest proto-clusters at that time. Some observational
evidence of over-densities of galaxies in two deep HST-ACS fields
containing a bright z=6 quasar has been claimed \citep{sti05,zhe06},
but it is unclear whether this is true in general. In fact an ACS
image only probes a long and narrow field of view of about $6 \times 6
\times 320 (Mpc/h)^3$ in the redshift range $[5.6:6.6]$, so a
significant number of detections may come from galaxies unrelated to
the environment of the host halo of the bright quasar.
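The pencil-beam geometry quoted above is easy to check numerically. The sketch below is a rough illustration (not code from the paper); it integrates the comoving radial depth of the slice $z \in [5.6:6.6]$ using the WMAP3-like parameters adopted later in the text:

```python
import math

# Rough illustration (not from the paper): comoving radial depth of the
# redshift slice z in [5.6, 6.6] for a flat cosmology with Omega_m = 0.26
# and Omega_Lambda = 0.74. Working in Mpc/h units, c/H0 = 2997.9 Mpc/h.

C_OVER_H0 = 2997.9  # Mpc/h

def E(z, om=0.26, ol=0.74):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for flat LCDM."""
    return math.sqrt(om * (1.0 + z) ** 3 + ol)

def comoving_depth(z1, z2, steps=1000):
    """Comoving distance between z1 and z2 (trapezoidal integration)."""
    dz = (z2 - z1) / steps
    total = 0.0
    for i in range(steps):
        za, zb = z1 + i * dz, z1 + (i + 1) * dz
        total += 0.5 * dz * (1.0 / E(za) + 1.0 / E(zb))
    return C_OVER_H0 * total  # Mpc/h

print(f"{comoving_depth(5.6, 6.6):.0f} Mpc/h")  # of order the ~320 Mpc/h quoted
```

The result, roughly $310$ Mpc/h, confirms the order of magnitude of the ACS pencil-beam depth quoted above.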
Numerical simulations to address the formation of bright quasars are
extremely challenging given their low number density. A huge
simulation cube with edge of $\gtrsim 700~Mpc/h$ is required just to
expect, on average, one such object in the simulation box. A major
computational investment, like the Millennium run \citep{MR05}, is
required to resolve at high redshift ($z \gtrsim 20$) virialized halos
on this volume and to follow their merging history down to $z \approx
6$. Even assuming that the simulation volume is big enough that there
is the expectation to find halos hosting bright quasars, how can these
halos be identified? In principle two alternatives, not mutually
exclusive, appear plausible: either the super-massive black holes
are hosted in the most massive halos with the corresponding number
density of SDSS quasars or these black holes have grown from the first
PopIII Intermediate Mass Black Hole seeds, therefore representing the
descendants of the rarest density peaks that hosted first stars.
The first scenario implies that the $m_{BH}-\sigma$ relation
\citep{fer00,geb00} is already in place at high redshift
\citep{vol03,hop05,dimatteo05}. In that case multigrid simulations can
be carried out to follow in detail the growth of the supermassive
black hole (e.g., see \citealt{li06}). In the second scenario the
quasars progenitors would be traced back to the first PopIII stars
created in the universe within $\approx 10^6 M_{\sun}$ mass halos
virialized at $z \approx 50$ \citep{bro04,abel02}. These PopIII stars
are very massive $M>100M_{\sun}$, so after a short life of a few
million years explode and may leave intermediate mass black holes,
plausible seeds for the super-massive black holes observed at lower
redshift. Of course the two scenarios can be consistent with each
other if the first perturbations to collapse are also the most massive
at $z=6$. This seems to be implied, e.g. in \citet{MR05}, where the
bright quasar candidate in the simulation is traced back to one of the
18 collapsed halos at $z=16.7$.
In this paper we explore the link between the first PopIII halos
collapsed in a simulation box and the most massive halos at lower
redshifts to gain insight on the scenarios of bright quasar
formation. This is a numerically challenging problem as the dynamical
range of masses involved is very large: a simulation volume of $5
\cdot 10^{8} (Mpc/h)^{3}$ has a mass of about $3.3 \cdot 10^{19}
M_{\sun}/h$, that is more than $10^{13}$ times the mass of a PopIII
dark matter halo. We have adopted an original approach to the problem,
broadly inspired by the tree method by \citet{col94}. We first
simulate at relatively low resolution the evolution of a simulation
volume down to $z=0$. Then, starting from the density fluctuations
field in the initial conditions of the numerical simulation, we
compute analytically the redshift distribution of the oldest PopIII
halo collapsed within each single grid cell. The information is then
used as input for a Monte Carlo code to sample for each particle of
the simulation the collapse redshift of the first PopIII progenitor
dark matter halo. The formation time of the oldest PopIII remnant
within the most massive halos identified at $z \approx 6$ is finally
compared with that of the oldest PopIII star sampled over the whole
simulation volume and the implications for the growth of supermassive
black holes are discussed.
Our approach is tuned to investigate the formation and the subsequent
remnant distribution of the first, rare density peaks that hosted
PopIII stars at $z \gtrsim 30$. With this respect our study has a
similar goal to \citet{ree05}, with the important difference that we
search for the first PopIII star in the complete simulation box and
not by means of progressive refinements around substructures that
probe only a small fraction of the total box volume. As our method is
tuned at finding very rare fluctuations, it is not easily applied to
the significantly more common $3 \sigma$ peaks with mass
$\approx 10^{6}M_{\sun}$ that collapse at $z\approx 20$ and that might
constitute the majority of PopIII stars, if these are terminated by
chemical feedback at $z \lesssim 20$ \citep[e.g. see,][]{gre06} and
not by photo-dissociation of molecular hydrogen at $z \gtrsim 25$
\citep{hai00}.
This paper is organized as follows. In Sec.~\ref{sec:num} we present
the details of the numerical simulations carried out. In
Sec.~\ref{sec:first_halos} we analyze the numerical results focusing
on the merging history of the first PopIII halos formed in the
simulation box. In Sec.~\ref{sec:end_of_fs} we review when the first
stars epoch ends, while in Sec.~\ref{sec:BHgrow} we discuss the
implications of the PopIII distribution that we find for the build-up
of supermassive black hole population at $z \lesssim 6$. We conclude
in Sec.~\ref{sec:conc}.
\section{Numerical Methods} \label{sec:num}
\subsection{N-body simulations}
The numerical simulations presented in this paper have been carried
out using the public version of the PM-Tree code Gadget-2
\citep{spr05}. Our standard choice is to adopt a cosmology based on
the third year WMAP data \citep{WMAP3}: $\Omega_{\Lambda}=0.74$,
$\Omega_{m}=0.26$, $H_0=70~km/s/Mpc$, where $\Omega_m$ is the total
matter density in units of the critical density ($\rho_{c}= 3H_0^2/(8
\pi G)$) with $H_0$ being the Hubble constant (parameterized as $H_0 =
100 h~ km/s/Mpc$) and $G$ Newton's gravitational constant
\citep{peebles}. $\Omega_{\Lambda}$ is the dark energy density. As
for $\sigma_8$, the root mean squared mass fluctuation in a sphere of
radius $8Mpc/h$ extrapolated at $z=0$ using linear theory, we consider
both $\sigma_8=0.9$ and $\sigma_8=0.75$, focusing in particular on the
higher value that provides a better match to the observed clustering
properties of galaxies \citep{evr07}.
The initial conditions have been generated using a code based on the
Grafic algorithm \citep{bert01}. An initial uniform lattice is
perturbed using a discrete realization of a Gaussian random field
sampled in real space and then convolved in Fourier space with a
$\Lambda CDM$ transfer function computed using the fit by \citet{eis99}
and assuming a scale invariant long-wave spectral index ($n=1$). The
initial density field is saved for later reprocessing through the
first light Monte Carlo code (see Sec.~\ref{sec:MC}). The particles
velocities and displacements are then evolved to the desired starting
redshift ($z_{start}=65.67$, i.e. $a_{start}=0.015$) using the
Zel'dovich approximation and the evolution is followed using Gadget-2
\citep{spr05}. Dark matter halos are identified in the simulations
snapshots using the HOP halo finder \citep{eis98}.
To find the optimal trade-off between mass resolution and box size,
both critical parameters to establish a connection between PopIII
halos and the most massive halos identified at $z=6$, we resort to
simulations (see Tab.~\ref{tab:sim}) with three different box sizes,
all simulated with $N=512^3$ particles:
\begin{itemize}
\item[(i)] A ``large'' box size of edge $720~Mpc/h$ that is large
enough to contain on average about one bright high-z quasar. The mass
resolution is $3.7 \cdot 10^{12} M_{\sun}/h$ (corresponding to a halo
of 20 particles).
\item[(ii)] A ``medium'' box size of edge $512~Mpc/h$ that represents
a compromise between a slightly higher mass resolution than $(i)$ and
a still reasonably large simulation volume.
\item[(iii)] A ``small'' box size of edge $60~Mpc/h$. While this box
size is too small to host a bright $z \approx 6$ quasar, its volume
is still larger than that of deep surveys like the UDF
\citep{beck06}, that spans a volume about $20$ times smaller than
this box in the redshift interval $z \in [5.6:6.6]$ (the typical
redshift uncertainty for $i$-dropouts). Halos down to about $3 \cdot
10^{9} M_{\sun}/h$ can be identified in this box. The analysis of
the results from this simulation will show the fundamental role
played by the large volume employed for simulations (i) and (ii).
\end{itemize}
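As a quick sanity check on these box-size choices (a back-of-the-envelope sketch, not from the paper), the expected number of bright $z \approx 6$ quasars per box follows directly from the space density quoted in the Introduction:

```python
# Back-of-the-envelope check (illustrative): expected number of bright
# z ~ 6 quasars in each simulation box, using the space density
# ~2.2e-9 (Mpc/h)^-3 quoted in the Introduction.
n_qso = 2.2e-9  # (Mpc/h)^-3

for name, edge in [("large", 720.0), ("medium", 512.0), ("small", 60.0)]:
    print(f"{name} box ({edge:.0f} Mpc/h): {n_qso * edge**3:.2g} expected quasars")
```

Only the large box reaches an expectation of order unity (about $0.8$ objects), consistent with the statement in item (i).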
\subsection{Monte Carlo code for first light sources}\label{sec:MC}
Given the initial density fluctuations field on the simulation grid,
where a cell has a mass of order $10^{10}- 10^{11} M_{\sun}/h$, our
goal is to estimate the redshift of the first virialized perturbation
within each cell at the mass scale of early PopIII dark matter halos
(i.e. $\lesssim 10^6 M_{\sun}$, see e.g. \citealt{bro04}). For this we
resort to an analytical treatment based on a linear approximation for
structure formation.
The initial conditions for a N-body simulation in a box of size $L$
with $N$ particles and a single particle mass $m_p$ define a Gaussian
random field $\delta {\rho}$ for the N cells (associated to the
location of the N particles) of the simulation grid. This density
field is usually generated by convolving white noise with the transfer
function associated to the power spectrum of the density perturbations
(e.g., see \citealt{bert01}) and is used to obtain the initial
velocity and positions displacements for the particles (e.g. see
Eq.~5.115 in \citealt{peebles}). The density fluctuation in each cell
has a contribution from different uncorrelated frequencies in the
power spectrum. When the initial conditions for an N-body simulation
are generated, the power spectrum has an upper cutoff around the
Nyquist frequency for the grid used (i.e. around the frequency
associated to the average inter-particle distance) and a lower cutoff
at the frequency associated to the box size (if periodic boundary
conditions are enforced). A higher resolution version of the initial
density field can be obtained by simply increasing the grid size and
adding the density perturbations associated to the power spectrum
between the old and the new cutoff frequencies.
In linear approximation one can use the field $\delta \rho$ to obtain
the redshift of virialization of a structure of mass $M_h>m_p$ at a
given position $\vec{x}$ in the grid. To do this one averages the
field $\delta \rho$ using a spherical window centered at $\vec{x}$
with a radius such that the window encloses a mass $M_h$ and computes
assuming linear growth the redshift at which the average density
within the window reaches $\delta \rho = 1.69$ (in units where the
average density of the box is 1). In fact, for a spherical collapse
model, when $\delta \rho = 1.69$ in linear theory, then the halo has
reached virial equilibrium under the full non-linear dynamics. This
concept is at the base of the various proposed methods for computing
analytically the mass function of dark matter halos (e.g., see
\citealt{PS,bond,she99}).
We apply this idea to estimate the formation rate and the location in
the simulation volume of dark matter halos at a mass scale below the
single particle mass used in the simulation. A straightforward
implementation consists in generating first the density field
associated to the N-body simulation, and then to refine at higher
resolution the field by means of a constrained realization of the
initial conditions used in the N-body run (e.g. see
\citealt{bert01}). This provides exact and complete information on the
whole density field, but the price to pay is the execution of very
large Fast Fourier Transforms on the refined grid. If the goal is to
compute density fluctuations down to a mass of $\approx 10^6
M_{\sun}/h$ over a box of edge $720 Mpc/h$, a grid of $29184^3$ is
needed, which would require about $100$ TB of RAM, that is well beyond
the current memory capabilities of the largest supercomputers.
A shortcut is however available, if one trades information for
numerical complexity. Given a realized numerical simulation, we are in
fact not interested in getting a detailed picture of the dynamics at
sub-grid resolution, but only in identifying for each grid point the
redshift of virialization of its first progenitor at a given sub-grid
mass scale. For example, given a simulation with single particle mass
of $10^{10} M_{\sun}/h$ our aim is to quantify the redshift of
virialization of the \emph{first} dark matter halo of mass $10^6
M_{\sun}/h$ within the volume associated to the $10^{10} M_{\sun}/h$
particle. In that case, if we were to have the full sub-grid
information we would search for the maximum realized value of the
density within the $10^4$ sub-grid cells of mass $10^6 M_{\sun}/h$
that constitute our $10^{10} M_{\sun}/h$ single particle cell. As the
density fluctuation field is a Gaussian random field, the density in
sub-grid cells will be a Gaussian centered at the density of the
parent cell and with variance given by integration of the power
spectrum of density fluctuations truncated between the Niquist
frequency of the parent cell and that of the sub-cells.
Therefore, for a single cell of mass $m_p$, the redshift of collapse
of a sub-grid progenitor at a mass scale $m_{fs}$ can be obtained
simply by sampling from the probability distribution of the maximum of
the sub-grid fluctuations of the $k=m_p/m_{fs}$ sub-cells of mass
$m_{fs}$ that are within a cell of mass $m_p$.
The probability distribution for the maximum of these
fluctuations is available in analytic form when the field is Gaussian,
as in the case considered here.
In fact, given a probability distribution $p(x)$ with cumulative
distribution (partition) function $P(x)$:
\begin{equation}
P(x)=\int_{- \infty}^x p(a) da,
\end{equation}
the probability distribution $q(m,k)$ for the maximum $m$ of $k$
random numbers extracted from $p(x)$ ($m=Max(x_1,...,x_k)$) is the
derivative of the partition function for $m$, which in turn is simply
the $k$-th power of the partition function for $x_i$,
i.e. $P(x)$. Therefore we have:
\begin{equation} \label{eq:maxpdf}
q(m,k) = k \cdot p(m) \cdot P(m)^{k-1}.
\end{equation}
Eq.~(\ref{eq:maxpdf}) has a simple interpretation: the probability
that the maximum of $k$ random numbers lies in the interval
$[m,m+\delta m]$ is given by the probability of sampling one of the $k$
numbers exactly in that interval and all the other numbers below $m$.
With the aid of Eq.~(\ref{eq:maxpdf}) we can sample the distribution
of the maximum of the additional sub-grid density fluctuations that
need to be considered in order to probe the mass scale of PopIII
halos. The variance $\sigma_{fs}$ to be used in $p(x)$ may be computed
from the power spectrum of the density fluctuations by considering an
upper cut-off at the wavelength of one cell size in the initial
conditions. Or, equivalently, if the complete power spectrum of
density fluctuations has variance $\sigma(M_{grid\_cell})$ at the mass
scale of one grid cell and variance $\sigma(M_{PopIII\_halo})$ at the
mass scale of a halo hosting a first star, we set $\sigma_{fs}$ such
that:
\begin{equation} \label{eq:sigma}
\sigma_{fs}^2 = \sigma(M_{PopIII\_halo})^2 - \sigma(M_{grid\_cell})^2.
\end{equation}
Therefore our recipe for estimating the age of the earliest progenitor
formed in each cell is the following:
\begin{itemize}
\item[(i)] Starting from the initial density fluctuation field on the
grid used to initialize the N-body run compute the mass refinement
factor $k$ to go from the mass of a single particle (i.e. the mass
within one grid cell) to that of a PopIII star halo
($k=M_{grid\_cell}/M_{PopIII\_halo}$).
\item[(ii)] Given the power spectrum of the density fluctuations,
$M_{grid\_cell}$ and $M_{PopIII\_halo}$ compute $\sigma_{fs}$.
\item[(iii)] Extract one random number $r$ from $q(m,k)$ (see
Eq.~\ref{eq:maxpdf}) where $p(m)$ is a Gaussian distribution with zero
mean and variance $\sigma_{fs}$.
\item[(iv)] Sum $r$ to the value of the density field in the cell to
obtain $(\delta \rho_{fs} / \rho)_{max}$ in the cell. From the value
of $(\delta \rho_{fs}/ \rho)_{max}$ it is then straightforward to
compute the non-linear redshift for that perturbation, i.e. the
redshift $z_{nl}$ when the linear density contrast reaches a value
$(\delta \rho_{fs} / \rho)_{max} (z_{nl}) = 1.69$.
\end{itemize}
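A minimal sketch of steps (ii)--(iv) for a single cell follows (all numerical inputs are illustrative; the linear growth factor is approximated as Einstein--de Sitter, $D \propto 1/(1+z)$ for densities linearly extrapolated to $z=0$, which is adequate at the high redshifts of interest):

```python
import math
import random
from statistics import NormalDist

DELTA_C = 1.69  # spherical-collapse threshold in linear theory

def sigma_fs(sigma_popIII, sigma_cell):
    """Step (ii): Eq. (sigma), variance of the additional sub-grid power."""
    return math.sqrt(sigma_popIII ** 2 - sigma_cell ** 2)

def sample_max(sigma, k, rng):
    """Step (iii): one draw from q(m,k) via inverse-CDF sampling."""
    u = rng.random()
    return NormalDist(0.0, sigma).inv_cdf(math.exp(math.log(u) / k))

def first_popIII_redshift(delta_cell, sigma_popIII, sigma_cell, k, rng):
    """Step (iv): redshift at which the linear contrast reaches 1.69,
    assuming EdS growth for densities extrapolated to z = 0."""
    r = sample_max(sigma_fs(sigma_popIII, sigma_cell), k, rng)
    delta_max = delta_cell + r
    return delta_max / DELTA_C - 1.0

rng = random.Random(42)
# Illustrative cell: mild overdensity, refinement factor k = 57^3.
z_nl = first_popIII_redshift(delta_cell=2.0, sigma_popIII=18.0,
                             sigma_cell=3.0, k=57 ** 3, rng=rng)
print(f"first PopIII progenitor of this cell collapses at z ~ {z_nl:.0f}")
```

With these illustrative variances the sampled collapse redshift falls in the $z \sim 40$--$50$ range discussed in the following sections.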
The particles of the simulation now carry the additional information
of the redshift at which their \emph{first} PopIII star dark matter
halo progenitor has collapsed in linear theory (a proxy for the
redshift of actual virialization). Once halos have been identified in
simulation snapshots, the redshift of the earliest PopIII progenitor
within the halo is easily obtained. It is similarly easy to identify
in a snapshot what is the environment in which the particles with the
oldest progenitors live. This procedure is robust with respect to
variations of the simulation resolution, as long as the focus is on
rare density peaks, with an average occupation number per simulation
cell (i.e. particle) much smaller than unity. Numerical tests are
presented in Appendix~\ref{sec:app}.
This method has two main advantages:
\begin{enumerate}
\item[(i)] It allows the use of relatively inexpensive ``low resolution''
simulations to identify the largest objects at low redshift ($z
\lesssim 6$). In fact if we are interested in identifying the most
massive halos at $z \approx 6$ as host halos for quasar candidates a
mass resolution of $\approx 10^{11} M_{\sun}/h$ is sufficient (e.g. in
\citealt{MR05} the mass of the largest halo at $z=6.2$ is $3.9 \cdot
10^{12} M_{\sun}/h$ for a simulation volume of $(500 Mpc/h)^3$).
\item[(ii)] For a given numerical simulation, several Monte Carlo
realizations can be generated to gather robust statistical constraints
on the properties of dark matter halos hosting first light sources as
well as the spatial distributions of the first halo remnants in halos
at lower redshift.
\end{enumerate}
However our method has the drawback that it cannot be easily extended
to the investigation of the detailed merging history at the sub-grid
level, as only the virialization time of the earliest progenitor of
each particle at a given mass scale is provided. In addition, the
identification of the first virialized PopIII halos is expressed in
terms of the halos with the highest $z_{nl}$. We are therefore
neglecting the non-linear evolution and the environmental dependences
on the dynamics of the dark matter collapse, such as tidal forces,
therefore missing the precise redshift at which a PopIII halo
virializes. These are limitations that we need to accept, as the
non-linear evolution could be followed over the whole box only at the
price of running a simulation prohibitively intensive in CPU and
memory resources, with at least $10^3$ times more particles than in the
Millennium Run \citep{MR05}. This appears unfeasible for the time
being, even considering next generation dedicated supercomputers, like
the GrapeDR \citep{mak05}.
\section{The fate of the first PopIII halos} \label{sec:first_halos}
\subsection{Analytical considerations}\label{sec:est}
The general picture for the connection between first halos and the
most massive halos at $z \approx 6$ can be obtained using analytical
considerations, that will be later confirmed in Sec.~\ref{sec:MCres}
by the results of our numerical investigation.
Following the choice for our large box simulation, we consider a
volume of $(720 Mpc/h)^3$ of mass $M_{Box}$, large enough to host a
bright $z \approx 6$ quasar. We estimate from the Press-Schechter
formalism (see also the masses of the $z=6$ halos in our ``large'' box
simulation in Sec.~\ref{sec:MCres}) that the most massive halo at $z
\approx 6$ has a mass (that we call $M_{qh}$) of about $10^{12} -
10^{13} M_{\sun}/h$ (see also \citealt{MR05}). Since the most massive
halo is the first at its mass scale to be formed, through the use of
Eq.~\ref{eq:maxpdf} we can obtain the distribution of its initial
density fluctuation (see Fig.~\ref{fig:pdf}). If we assume $M_{qh} =
M_{Box}/180^3 = 4.3 \cdot 10^{12} M_{\sun}/h$ (in agreement with
\citealt{MR05}), this halo is expected to have originated from a
density fluctuation in the range $[5:5.7] \sigma(M_{qh})$ at $90 \%$
of confidence level. We now consider the volume initially occupied by
the mass $M_{qh}$ and we compute from the primordial power spectrum
the variance $\sigma_{fs}$ of density perturbations at mass scale of a
PopIII halo ($M_{fs} = 10^6 M_{\sun}/h = 1/160^3 M_{qh}$) considering
only contributions from wavelengths at a scale below the volume
enclosed by $M_{qh}$ (see Eq.~\ref{eq:sigma}). We obtain
$\sigma_{fs}=4.85 \sigma(M_{qh})$. From Eq.~\ref{eq:maxpdf} follows
that the maximum of $160^3$ Gaussian random numbers with variance
$\sigma_{fs}$ is distributed in the range $[23.4:27.0] \sigma(M_{qh})
$ at $90\%$ of confidence level. Combining the two $90\%$ confidence
level intervals, this means that the first PopIII progenitor of a
bright quasar originated from a perturbation in the range $[28.4:32.7]
\sigma(M_{qh})$. If we consider instead a random sub-cell among the
$160^3$, the probability that the maximum sub-grid perturbation is
smaller than $32.7 \sigma(M_{qh})$ is only $0.99995$, so several
hundreds of the $180^3$ cells among the whole simulation volume are
expected to have a PopIII progenitor formed before that of the most
massive $z=6$ halo. In fact from integration of Eq.~\ref{eq:maxpdf},
the sigma peak associated to the \emph{first} star in the box is
expected to be greater than $35.5 \sigma(M_{qh}) $ at 99.99 \% of
confidence level (and in the interval $[36.2:38.8] \sigma(M_{qh})$ at
$90\%$ of confidence level). Therefore the rarity of the earliest
PopIII progenitor of the most massive halo at $z=6$ is about $1.5
\sigma_{fs}$ less than that of the \emph{first} PopIII star formed in
the simulation volume. In terms of formation redshift, the
\emph{first} PopIII star dark matter halo in the simulation volume
virializes in the redshift interval $z \in [49:53]$, while the
earliest PopIII progenitor of the QSO halo is formed at $z \in
[38:44]$ (both intervals at 90\% of confidence level and computed for
$\sigma_8 = 0.9$).
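These confidence intervals can be checked numerically by inverting the cumulative distribution of the maximum, $P_{\rm max}(m) = P(m)^k$. The short sketch below (an independent check, not code from the paper) reproduces the large-box numbers with $\sigma_{fs} = 4.85\,\sigma(M_{qh})$ and $k = 160^3$:

```python
import math
from statistics import NormalDist

def max_quantile(q, sigma, k):
    """m such that the maximum of k N(0, sigma^2) deviates is below m
    with probability q, i.e. the inverse of P_max(m) = Phi(m/sigma)^k."""
    return NormalDist(0.0, sigma).inv_cdf(math.exp(math.log(q) / k))

sigma_fs, k = 4.85, 160 ** 3
lo = max_quantile(0.05, sigma_fs, k)
hi = max_quantile(0.95, sigma_fs, k)
print(f"90% interval: [{lo:.1f}:{hi:.1f}] sigma(M_qh)")  # -> [23.4:27.0]
```

The same function with the small-box parameters ($\sigma_{fs} = 3.66$, $k = 94^3$) reproduces the $[16.4:19.4]\,\sigma(M_{qh})$ interval quoted below.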
The picture changes quite
significantly if we consider a smaller box size. E.g. in our $S1$
simulation (see Tab.~\ref{tab:sim}) with a volume of $(60 Mpc/h)^3$ a
perturbation on a mass scale $M_{qh} = 7 \cdot 10^{11} M_{\sun}/h
\approx M_{Box}/27^3$ is expected to be the most massive at $z \approx
6$. Such a halo derives at $90 \%$ of confidence level from a
fluctuation $[3.6:4.6] \sigma(M_{qh})$. If we further assume $M_{fs} =
8.5 \cdot 10^5 M_{\sun}/h = 1/94^3 M_{qh}$, we have a sub-grid
variance $\sigma_{fs}=3.66~\sigma(M_{qh})$, so that the maximum of the $94^3$ random
first light perturbation in a cell of mass $M_{qh}$ is distributed in
the range $[16.4:19.4] \sigma(M_{qh})$ at $90\%$ of confidence
level. By combining the two intervals as above, we expect that the
first PopIII progenitor of the most massive $z=6$ halo derives from a
$[20.0:24.0] \sigma(M_{qh})$ peak. The \emph{first} PopIII star
derives instead from a $[23.8:26.1] \sigma(M_{qh})$ peak (always
$90\%$ of confidence level). At variance with the larger box, here the
correlation between the most massive halo at $z=6$ and the
\emph{first} PopIII star in the simulation is expected to be stronger and the
most massive halo is likely to have as progenitor one of the first
10--100 PopIII stars.
From these simple analytical estimates it is clear that the most
massive and rarest structures collapsed around $z \approx 6$ do not
descend from the rarest sigma peaks at the first light mass scale in
the simulation volume, when the simulation box represents a
significant fraction of the Hubble volume. Conversely the black holes
remnants of the \emph{first} PopIII stars in the universe do not
provide the seeds for super-massive black holes within the most
massive halos at $z \lesssim 6$. The descendants of first PopIII stars
are instead expected to be found at the center of a variety of halos,
as we quantify in the next Section by means of N-body simulations.
\subsection{Simulations Results}\label{sec:MCres}
In constructing the $z=6$ halo catalogs we adopt the following
parameters for the HOP halo finder \citep{eis98}. The local density
around each particle is constructed using a 16 particles smoothing
kernel. For the regrouping algorithm we use: $\delta_{peak} = 240$,
$\delta_{saddle} = 180$, $\delta_{outer} = 100$ and a minimum group
size of $20$ particles. In the large simulation box (run $L1$ in
Tab~\ref{tab:sim}) we identify $47$ halos with 20 particles or more
and the most massive halo ($37$ particles) has a mass of $6.9 \cdot
10^{12} M_{\sun}/h$. In the medium simulation box (run $M1$ in
Tab~\ref{tab:sim}) the higher mass resolution allows us to identify
$694$ halos with at least 20 particles and the most massive halo has
$92$ particles for a total mass of $6.1 \cdot 10^{12} M_{\sun}/h$,
consistent with the results from the larger box. Finally in the small
box simulations (runs $S1$ and $S2$ in Tab~\ref{tab:sim}) there are
14972 halos with at least $100$ particles in $S1$ ($\sigma_8 = 0.9$)
and 7531 halos with at least $100$ particles in $S2$ ($\sigma_8 =
0.75$). The most massive halo has a mass of $2.4\cdot 10^{12}
M_{\sun}/h$ in $S1$ and of $7.1\cdot 10^{11} M_{\sun}/h $ in $S2$. The
$z=6$ halo mass distribution for these two simulations is well
described (with displacements within $\approx 15\%$) by a
\citet{she99} mass function.
The link between the halos identified in the snapshots and the first
light sources is established using the Monte Carlo method described in
Sec.~\ref{sec:MC}. For the large box we consider a refinement factor
$k=57^3$ to move from the single particle mass of the simulation to a
typical PopIII halo mass, so that $M_{fs} = 1.0 \cdot 10^{6}
M_{\sun}/h$. For the first 10 most massive halos at $z=6$ we show in
Fig.~\ref{fig:card720} the distribution of the redshift at which the
oldest progenitor crosses the virialization density contrast threshold
in linear theory ($\delta \rho /\rho = 1.69$) and the distribution of
the ranking of the collapse time computed over all the PopIII
progenitors of the simulation particles. The collapse rank of the
first PopIII progenitor of the most massive $z=6$ halo is in the
interval $[474:45075]$ at $90\%$ of confidence level, with median
$8535$. The corresponding virialization redshifts are in the interval
$[39.3:44.1]$ with median $41.1$. For comparison the \emph{first}
PopIII halo in the box virializes in the redshift interval
$[49.0:52.7]$ with median $50.3$; the 100th first light in the box
collapses in the redshift range $[45.5:45.8]$ with median
$45.7$. These results from the combined N-body simulation and Monte
Carlo code are in excellent quantitative agreement with the analytical
estimates of Sec.~\ref{sec:est} and confirm that in a large simulation
box the most massive halos at $z=6$ do not derive from the rarest
sigma peaks at the first light mass scale. This result is robust with
respect to the adopted typical mass for PopIII halos. In
Fig.~\ref{fig:mass} we show the results obtained by changing the mass
of the halos hosting the first stars considering larger halos ($M_{fs}
= 3.4 \cdot 10^6 M_{\sun}/h$ with $k=38^3$) and smaller halos ($M_{fs}
= 3.0 \cdot 10^5 M_{\sun}/h$ with $k=85^3$). The formation redshift
varies as the first halos are formed earlier when they are less
massive, but the relative ranking between the first PopIII halo in the
box and the first PopIII progenitor of the most massive structures at
$z=6$ remains similar. In passing we note that our distribution of the
formation redshift for the first $3 \cdot 10^{5} M_{\sun}$ progenitor
of the most massive $z=6$ halo (formed at $z \approx 46$) is in
agreement with the results by \citet{ree05}, obtained by means of
N-body simulations with adaptive refinements. However this halo is not
the first one formed in the simulation box as we find that the first
structure on this mass scale is formed at $z \gtrsim 55$ (see
Fig.~\ref{fig:mass}).
The results are similar for the medium box, which has a volume that is
only three times smaller than the large one (see
Fig.~\ref{fig:card512}). The refinement factor used here is $k=40^3$,
which gives $M_{fs} = 1.0 \cdot 10^{6} M_{\sun}/h$. The \emph{first}
PopIII halo in the box virializes in the redshift range $[48.3:52.4]$
with median $49.7$, while the oldest PopIII progenitor of the most
massive halo virializes in the redshift range $[38.5:43.2]$ (median
$40.4$) and has a collapse ranking in $[574:37499]$ at $90\%$ of
confidence level, with median $7261$.
The picture changes significantly (see Fig.~\ref{fig:card60}) for the
small box that has a volume more than $10^3$ times smaller than the
large box. Here we use a refinement factor $k=5^3$, which leads to
$M_{fs} = 8.6 \cdot 10^{5} M_{\sun}/h$. The collapse rank of the first
light progenitor of the most massive $z=6$ halo is in the range
$[1:103]$ at the $90 \%$ confidence level with median $13$. The
correlation between the \emph{first} PopIII star and the most massive
structure at $z=6$ is therefore strong due to the small volume of the
box. This means that \emph{locally} the oldest remnants of first stars
are expected to be within the largest collapsed structures.
From the medium box size numerical simulation we have also
characterized the fraction of \emph{first} PopIII remnants that end up
in identified halos at $z=0$. If we consider one of the first 100
first light halos collapsed in the box, there is an average
probability of $0.72$ of finding its remnant in a halo identified at
$z = 0$ with more than 100 particles (that is of mass above $6.7 \cdot
10^{12} M_{\sun}/h$). The median distribution for the mass of a halo
hosting one of the remnants of these first light sources is $\approx 3
\cdot 10^{13} M_{\sun}/h$. At $95\%$ of confidence level the remnants
are hosted by a halo of mass less than $3.6\cdot 10^{14}
M_{\sun}/h$. For comparison, the most massive halo in the simulation
has a mass of $ 4.4 \cdot 10^{16} M_{\sun}/h$ and there are about
$15000$ halos more massive than $ 3 \cdot 10^{12} M_{\sun}/h$. This
is a consequence of the poor correlation between first PopIII halos
and most massive halos at low redshift.
Finally, combining the results from all our three simulation boxes, we
construct in Fig.~\ref{fig:sfr} the PopIII star formation rate at high
$z$. The total number of PopIII halos that have virialized grows with
cosmic time, reaching a number density of $\approx 0.1 (Mpc/h)^{-3}$
at $z\approx 30$. In our small box simulation, this means that the
average density of PopIII halos is $\approx 10^{-3}$ per grid
cell. Therefore there is a very small probability of having two
collapsed halos within the same cell, an event that would not be
captured in our model.
\section{When does the first stars epoch end?} \label{sec:end_of_fs}
In Sec.~\ref{sec:first_halos} we show that the most massive halos at
$z=6$ have first light progenitors that have been formed when already
several thousands of other PopIII stars existed. Are these progenitors
still entitled to be called \emph{first stars}? That is, when does
the \emph{first stars} epoch end? Here we review the question adopting
two different definitions to characterize the transition from the
first to the second generation of stars, namely (i) a threshold for
the transition given by the destruction of molecular hydrogen and (ii)
a metallicity based threshold.
\subsection{Molecular Hydrogen destruction} \label{sec:h2}
One criterion for the end of the first light epoch can be based on the
destruction of Molecular Hydrogen in the ISM due to photons in the
Lyman-Werner ($[11.15:13.6] eV$) energy range emitted by PopIII
stars. $H_2$ is in fact needed for the cooling of the gas in dark matter
halos of mass $\approx 10^6 M_{\sun}$ \citep[e.g. see][]{bro04}. The flux in
the Lyman-Werner band is about $7.5\%$ of the ionizing flux (i.e. with
an energy range above $13.6 eV$). A PopIII star is expected to emit a
total of about $7.6 \cdot 10^{61}$ photons per solar mass
\citep{sti04}, so if we assume $300 M_{\sun}$ as a typical mass we
have about $1.7 \cdot 10^{63}$ $H_2$-destroying photons emitted over
the stellar lifetime. Only a fraction $\approx 0.15$ of these photons
can effectively destroy an $H_2$ molecule, as the most probable
outcome of absorbing a Lyman-Werner photon is a decay to a
highly excited vibrational level that later returns to the
ground state, with the re-emitted photons falling below the $11.15$
eV threshold \citep{shu82,glo01}. Therefore we estimate that $\approx
2.5 \cdot 10^{62}$ $H_2$ molecules will be destroyed by a PopIII
star. Given the neutral hydrogen number density $6.2 \cdot 10^{66}
Mpc^{-3}$ this means that a PopIII star destroys $H_2$ over a volume
$4 \cdot 10^{-5} Mpc^{3} /\xi $, where $\xi$ is the ratio of molecular
to atomic hydrogen. Assuming a primordial molecular hydrogen fraction
$\xi \approx 10^{-6}$ (e.g. see \citealt{peebles}), we obtain that a
PopIII star has the energy to destroy primordial $H_2$ in a volume of
$\approx 14 (Mpc/h)^3$. This number is in broad agreement with
detailed radiative transfer simulations by \citet{joh07b}. From
Fig.~\ref{fig:sfr}, it is immediately clear that by $z\approx 30$ the
PopIII number density has reached the critical level of $0.1
(Mpc/h)^{-3}$ and therefore around that epoch the radiation background
destroys all the primordial $H_2$. Once all the primordial $H_2$ has
been cleared the universe becomes transparent in the Lyman Werner
bands and the new $H_2$ formed during the collapse of gas clouds is
dissociated by the background radiation. In fact, assuming that the
abundance of $H_2$ formed during collapse is $\xi_{coll}\approx 5
\cdot 10^{-4}$ (e.g. see \citealt{hai00}), this means that a
collapsing $10^6M_{\sun}$ halo produces about $1.3\cdot 10^{59}$ $H_2$
molecules, a negligible number with respect to the $2.5\cdot 10^{62}$
$H_2$ that are destroyed. Our simple estimate therefore suggests that
around $z\approx 30$ the star formation rate of PopIII stars in
$10^{6} M_{\sun}$ halos is greatly suppressed and proceeds in a
self-regulated fashion where only a fraction $\approx 10^{-3}$ of the
collapsing halos are actually able to cool and lead to the formation
of massive PopIII stars. Eventually the Lyman Werner background is
maintained by PopIII stars formed in more massive halos ($M \approx
10^8 M_{\sun}$), cooled by atomic hydrogen, and, at later times, by
PopII stars.
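As a sanity check, the photon bookkeeping of this subsection can be reproduced in a few lines (a rough sketch; the value $h = 0.7$ and the rounding are our assumptions):

```python
# Order-of-magnitude budget for H2 photo-dissociation by one PopIII star,
# reproducing the numbers quoted in the text (h = 0.7 assumed).
h = 0.7
N_ion = 7.6e61 * 300          # ionizing photons for a 300 Msun star
N_LW = 0.075 * N_ion          # Lyman-Werner band: 7.5% of the ionizing flux
N_destroyed = 0.15 * N_LW     # only ~15% of absorptions dissociate H2

n_H = 6.2e66                  # neutral hydrogen number density [Mpc^-3]
xi = 1e-6                     # primordial molecular fraction H2/H
V = N_destroyed / (n_H * xi)  # cleared volume in Mpc^3
V_h = V * h**3                # in (Mpc/h)^3
print(f"N_LW = {N_LW:.2e}, destroyed = {N_destroyed:.2e}, V = {V_h:.1f} (Mpc/h)^3")
```

The $\approx 1.7\cdot 10^{63}$ Lyman-Werner photon count and the $\approx 14\,(Mpc/h)^3$ cleared volume quoted above come out directly.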
Inspired by these ideas we set the end of the \emph{primordial} epoch
for PopIII formation at the point where the primordial $H_2$ has been
destroyed, that is around $z\approx 30$. Of course this is only an
order of magnitude estimate and to fully address the feedback due to
photo-dissociating Lyman Werner photons realistic radiative transfer
cosmological simulations are needed, which may even lead to positive
feedback \citep[e.g. see][]{ric01}. In particular our estimate does
not take into account the effects of self-shielding and the fact that
the formation timescale for $H_2$ during the halo collapse may be
faster than the timescale for photo-dissociation by the background
radiation. Thus it is possible that the PopIII star formation rate at
$z \lesssim 30$ is not suppressed as much as predicted by our
argument. However our estimate seems to be in broad agreement with the
more realistic model by \citet{hai00} that predicts the onset of a
significant negative feedback at $z \gtrsim 25$, depending on the
assumed efficiency of Lyman Werner photon production.
\subsection{Metal enrichment}
Another possibility for ending the first star epoch is based on an ISM
metallicity threshold. However, in this case a clear transition epoch
is missing (e.g. see \citealt{sca03,fur05}). This is because metal
enrichment, driven by stellar winds whose typical velocities are many
orders of magnitude lower than the speed of light, is mainly a local
process. Therefore pockets of primordial gas may exist in regions of
space that have experienced relatively little star formation, such as
voids, even when the average metallicity in the universe is above the
critical threshold assumed to define the end of the PopIII era.
In any case this definition provides a longer duration for the first
star era. In fact to enrich the local metallicity above the $Z =
10^{-4} Z_{\sun}$ threshold, relevant for stopping PopIII formation by
chemical feedback (see \citealt{bro04}), one $300 M_{\sun}$ SN must
explode for every $\approx 2\cdot 10^{8} M_{\sun}$ total mass volume
(DM+baryons), assuming on average a PopIII mass of $300 M_{\sun}$ with yield
$0.2$. For a Milky Way like halo, this means that about 3000 first-star
SNe are needed to enrich the IGM to the critical
metallicity. According to this definition, the PopIII epoch would
end in a significant fraction of the total simulated volume around
$z \approx 20$, with the collapse of dark matter halos
originating from $3\sigma$ peaks at the $10^{6} M_{\sun}$ mass scale
\citep[e.g., see][]{mad01}, if the suppression in the PopIII star
formation rate due to lack of $H_2$ cooling is neglected. A further
caveat is that very massive stars may end up directly in Intermediate
Mass Black Holes without releasing the produced metals in the IGM
\citep{meg02,san02}.
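The bookkeeping behind the ``one SN per $\approx 2\cdot 10^8 M_{\sun}$'' estimate can be checked in the same spirit (a sketch; $Z_{\sun}\approx 0.02$ as a mass fraction, a baryon fraction of $0.16$ and a Milky-Way-like halo mass of $\approx 6\cdot 10^{11} M_{\sun}$ are our assumptions, not values given in the text):

```python
# Rough check of the chemical-feedback bookkeeping quoted in the text.
M_star = 300.0                # PopIII stellar mass [Msun]
yield_Z = 0.2                 # metal yield (fraction of stellar mass)
Z_crit = 1e-4 * 0.02          # critical metallicity as a mass fraction (Z_sun ~ 0.02 assumed)

M_metals = yield_Z * M_star               # ~60 Msun of metals per SN
M_gas = M_metals / Z_crit                 # gas mass enriched to Z_crit
M_total = M_gas / 0.16                    # total (DM + baryons), baryon fraction 0.16 assumed
print(f"one SN per {M_total:.1e} Msun of total mass")

n_SNe = 6e11 / M_total                    # Milky-Way-like halo (~6e11 Msun assumed)
print(f"SNe needed for a MW-like halo: {n_SNe:.0f}")
```

This recovers one SN per $\approx 2\cdot 10^8 M_{\sun}$ and a few thousand SNe for a Milky-Way-like halo, in line with the estimate above.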
\section{Growth of the PopIII black hole seeds} \label{sec:BHgrow}
From our investigation it is clear that, before the first PopIII
progenitor of the most massive halo at $z=6$ is born, several
thousand intermediate-mass ($m_{BH}\approx 10^2 M_{\sun}$) black
hole seeds are planted by PopIII stars formed in a cosmic volume that
will on average host a bright $z=6$ quasar. This result does not allow
us to establish an immediate correlation between the very first PopIII
stars created in the universe and the bright $z=6$ quasars, but
neither does it exclude such a link, as the formation epoch for the
quasar seed is still at very high redshift ($z \gtrsim 40$), when
radiative feedback from other PopIII stars already formed is unlikely
to affect the formation and evolution of the seed (see
Sec.~\ref{sec:h2}). Here we investigate with a simple merger tree code
what is the fate of the black holes seeds formed up to the formation
time of the quasar seed and what are the implications for the observed
quasar luminosity function.
We assume Eddington accretion for the BH seeds, so that the evolution
of the BH mass is given by:
\begin{equation}
m_{BH} = m_{0} \exp{\left [(t-t_0)/t_{sal} \right ]},
\end{equation}
where $m_0$ is the mass at formation time $t_0$ and $t_{sal}$ is the
Salpeter time \citep{sal64}:
\begin{equation} \label{eq:accr}
t_{sal} = \frac{\epsilon ~m_{BH} ~c^2}{(1-\epsilon)L_{Edd}} = 4.507 \cdot
10^{8} yr \frac{\epsilon}{(1-\epsilon)},
\end{equation}
where $\epsilon$ is the radiative efficiency.
Using Eq.~\ref{eq:accr} we can immediately see that a difference of
$\Delta t \lesssim 2 \cdot 10^7 yr$, that is of $\Delta z \approx 10$
at $z=40$, in the formation epoch of the BH seed of a bright quasar is
not too important in terms of the final mass that can be accreted by
$z=6$, as this corresponds to about half an $e$-folding time. Assuming $\epsilon
= 0.1$ until $z=6.4$, the highest redshift in the SDSS quasar sample
\citep{fan04}, we obtain a ratio of final to initial mass
$m_{BH}/m_0 = 2.62 \cdot 10^7$ for $z=50$ and $m_{BH}/m_0 = 1.78
\cdot 10^7$ for $z=40$. Therefore in both cases there has been enough
time to build up a $z\approx 6$ supermassive black hole with mass
$m_{BH} \gtrsim 10^9$ starting from a PopIII remnant.
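This growth factor can be cross-checked with a short calculation (a rough sketch: cosmic ages use a matter-dominated approximation with assumed $h=0.7$, $\Omega_m=0.3$; the exact cosmology used in the text gives somewhat larger ratios of the same order of magnitude):

```python
import math

# Eddington-limited growth between z ~ 50 and z = 6.4.
eps = 0.1                                  # radiative efficiency
t_sal = 4.507e8 * eps / (1.0 - eps)        # Salpeter time [yr], Eq. (accr)

# Matter-dominated age approximation t(z) ~ (2/3) H0^-1 Om^-1/2 (1+z)^-3/2,
# with assumed h = 0.7, Om = 0.3.
inv_H0_yr = 9.78e9 / 0.7                   # 1/H0 in years for h = 0.7
def age(z, Om=0.3):
    return (2.0 / 3.0) * inv_H0_yr / math.sqrt(Om) * (1.0 + z) ** -1.5

dt = age(6.4) - age(50.0)                  # time available for accretion
ratio = math.exp(dt / t_sal)               # final-to-initial BH mass
print(f"t_sal = {t_sal:.2e} yr, m_BH/m_0 = {ratio:.1e}")
```

Even in this crude approximation the mass ratio is $\sim 10^7$, confirming that an intermediate-mass seed can reach the supermassive range by $z\approx 6$.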
This estimate however highlights that only a minor fraction of the
PopIII BH seeds formed before $z=40$ can accrete mass with high
efficiency, otherwise the number density of supermassive black holes
at low redshift would greatly exceed the observational
constraints. The first BH seeds in the box are distant from each
other, so they evolve in relative isolation, without the possibility of
merging among themselves. Therefore other mechanisms must be responsible for
quenching accretion of the first BH seeds. Interestingly if we were to
assume that accretion periods are Poisson distributed in time for each
seed, we would not be able to explain the observed power law
distribution of BH masses at $z\lesssim 6$ around the high mass end. A
Poisson distribution would in fact give too little scatter around the
median value and a sharp (faster than exponential) decay of the
displacements from the mean accreted mass. An exponential distribution
of the accretion efficiency is instead required to match the observed
BH mass function. In addition, it is necessary to assume that the duty
cycle of the BH accretion is roughly proportional to the mass of the
halo it resides in. This is sensible, since an accretion model
unrelated to the hosting halo mass may lead to the unphysical result
of accreting more mass than the total baryon mass available
in that halo. In fact, a BH seed formed at $z\approx 40$ is within a
halo of median mass $\approx 10^{11} M_{\sun}/h$ at $z=6$ and a few
percent of the seeds may be in halos with mass below $\approx 10^{9}
M_{\sun}/h$ at that redshift.
To explore this possibility we follow the merging history of PopIII
halos formed at $z=40$ by means of a merger-tree code based on
\citet{lac93}. We implement a BH growth based on Eq.~\ref{eq:accr},
but at each step of the tree we limit the BH mass to $m_{BH} \leq
\eta~ m_{bar}$, where $m_{bar}$ is the total baryon mass of the halo
that hosts the BH. The results are reported in fig.~\ref{fig:accr}. If
the BH growth is not constrained (or only mildly constrained), then a
significant fraction of the seeds grows above $10^{10} M_{\sun}$,
which would result in an unrealistic number density of supermassive
black holes at $z=6$. However, if $\eta \approx 6 \cdot 10^{-3}$ (like
in \citealt{yoo04}; see also \citealt{wyi03}), then we obtain an
expected mass for the BH powering bright $z=6$ quasar of $\approx 5
\cdot 10^{9} M_{\sun}$, which is in agreement with the observational
constraints from SDSS quasars \citep{fan04}. By fitting a power law
function to the BH mass function in the range $[0.055:0.2]\cdot
10^{10} M_{\sun}$ we obtain a slope $\alpha \approx -2.6$, while the
slope is $\alpha \approx -3.7$ in the mass range $[0.2:1.0] \cdot
10^{10} M_{\sun}$, a value that is consistent within the $1 \sigma$
error bar with the slope of the bright end of the quasar luminosity
function measured by \citet{fan04}.
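A toy, single-halo illustration of this cap (not the merger-tree calculation itself; the host halo mass, baryon fraction, seed mass, efficiency and growth time are our assumed values, chosen only for illustration):

```python
import math

# Capping m_BH at a fraction eta of the host halo baryon mass.
eta = 6e-3                                  # cap from the text
f_b = 0.16                                  # assumed cosmic baryon fraction
M_halo = 5e12                               # assumed z=6 quasar host halo mass [Msun]
m_bar = f_b * M_halo

t_sal = 4.507e8 * 0.09 / 0.91               # Salpeter time for eps = 0.09 [yr]
m_uncapped = 100.0 * math.exp(8.5e8 / t_sal)  # 100 Msun seed, ~8.5e8 yr of growth
m_capped = min(m_uncapped, eta * m_bar)
print(f"uncapped: {m_uncapped:.1e} Msun -> capped: {m_capped:.1e} Msun")
```

With these assumptions the uncapped Eddington growth overshoots, while the cap pins the final mass near $5\cdot 10^9 M_{\sun}$, as quoted above.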
Another effect that may help to avoid an overproduction of bright quasars
is the suppression of the early growth of the PopIII BH seeds
for the first $\approx 10^8 yr$ after formation, that is for about $2
t_{sal}$ \citep{joh07}. In fact the radiation from a PopIII star may
evacuate most of the gas from its host halo, so that the subsequent BH
growth is quenched until a merger provides a new gas reservoir to
enable growth at near Eddington rate \citep{joh07}. Also the BH seeds
situated in more massive halos would probably be more likely to
replenish their gas supply earlier.
\section{Conclusion} \label{sec:conc}
In this paper we investigate the link between the first PopIII halos
collapsed in a simulation box and the most massive structures at $z
\approx 6$, with the aim of establishing the relationship between the
first intermediate mass black holes created in the universe and the
super-massive black holes that power the emission of bright $z=6$
quasars. We show that almost no correlation is present between the
sites of formation of the first few hundred $10^6 M_{\sun}/h$ halos
and the most massive halos at $z \lesssim 6$ when the simulation box
has an edge of several hundred $Mpc$. Here the PopIII progenitors
(halos of mass $M_{fs} \approx 10^6 M_{\sun}$) of massive halos at $z
\lesssim 6$ formed from density peaks that are $\approx 1.5
\sigma(M_{fs})$ more common than that of the \emph{first} PopIII star
in the $(512 Mpc/h)^3$ simulation box. These halos virialize around
$z_{nl} \approx 40$, to be compared with $z_{nl} \gtrsim 48$ of the
\emph{first} PopIII halo.
This result has important consequences. We show that, if bright
quasars and supermassive black holes live in the most massive halos at
$z\approx 6$, then their progenitors at the $10^6 M_{\sun}$ mass scale
are well within the PopIII era, regardless of the PopIII termination
mechanism. On the other hand, if the $m_{BH}/\sigma$ relationship is
already in place at $z=6$, then bright quasars are not linked to the
remnants of the very first intermediate mass black holes (IMBHs) born
in the universe, as their IMBH progenitors form when several
thousand PopIII stars have already been created within the typical volume
that hosts a bright $z=6$ quasar. The IMBH seeds planted by these very
first PopIII stars have sufficient time to grow up to $m_{BH} \in
[0.2:1] \cdot 10^{10} M_{\sun}$ by $z=6$ if we assume Eddington
accretion with radiative efficiency $\epsilon \lesssim 0.1$. Instead,
quenching of the BH accretion is required for the seeds of those
PopIII stars that will not end up in massive halos at $z=6$, otherwise
the number density of supermassive black holes would greatly exceed
the observational constraints. One way to obtain growth consistent
with the observations is to limit the accreted mass at a fraction
$\eta \approx 6 \cdot 10^{-3}$ of the total baryon halo mass. This
gives a slope of the BH mass function $\alpha = -3.7$ in the BH mass
range $m_{BH} \in [0.2:1] \cdot 10^{10} M_{\sun}$, which is within the
$1 \sigma$ uncertainty of the slope of the bright end of the $z=6$
quasar luminosity function ($\alpha \approx -3.5$) measured by
\citet{fan04}.
Another important point highlighted by this study is that rich
clusters do not preferentially host the remnants of the first PopIII
stars. In fact the remnants of the first 100 PopIII stars in our
medium sized simulation box (volume of $(512 Mpc/h)^3$) end up at
$z=0$ in halos that have a median mass of $3 \cdot 10^{13}
M_{\sun}/h$. This suggests caution in interpreting the results from
studies that select a specific volume of the simulation box, like a
rich cluster, and then progressively refine smaller and smaller
regions with the aim of hunting for the first light sources formed in the
whole simulation \citep[see e.g., ][]{ree05}. Only by considering
refinements over the complete volume of the box can the rarity and the
formation ranking of these progenitors be correctly evaluated.
\acknowledgements
We thank Mike Santos for sharing his code to generate the initial
conditions and for a number of useful and interesting discussions. We
are grateful to the referee for constructive suggestions. This work
was supported in part by NASA JWST IDS grant NAG5-12458 and by
STScI-DDRF award D0001.82365. This material is based in part upon work
supported by the National Science Foundation under the following NSF
programs: Partnerships for Advanced Computational Infrastructure,
Distributed Terascale facility (DTF) and Terascale Extensions:
enhancements to the Extensible Terascale Facility - Grant AST060032T.
From all of us here at All Mountain Technologies, we wish you a very happy and safe holiday season! Below is a little snippet of what we've been up to this winter season.
Our new website and the latest features offer helpful navigation for both prospective and current clients. We hope that our site provides a solid introduction to All Mountain Tech. Our new "client connect" tab allows our current clients to easily access links to useful tools that always-on™ provides as a service. This page will also be a good place for clients to keep up with new events previewed on our events calendar, see below. Looking into the new year, we have exciting plans for new technology educational classes and community events that will always be posted to our calendar on the client connect tab.
We are excited to begin our monthly All Mountain Tech Team Introductions! Starting with the new year, we will feature an AMT team member with details of what they're all about, their certifications, and their educational background. Until then, here is a short introduction to three of our rotating office pups, Penelope, Bear, and BB.
If a different version of rkt is required than what ships with CoreOS, a
oneshot systemd unit can be used to download and install an alternate version
on boot.
The following unit will use curl to download rkt, its signature, and the CoreOS
app signing key. The downloaded rkt is then verified with its signature, and
extracted to /opt/rkt.
```
[Unit]
Description=rkt installer
Requires=network-online.target
After=network-online.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/mkdir -p /opt/rkt
ExecStart=/usr/bin/curl --silent -L -o /opt/rkt.tar.gz <rkt-url>
ExecStart=/usr/bin/curl --silent -L -o /opt/rkt.tar.gz.sig <rkt-sig-url>
ExecStart=/usr/bin/curl --silent -L -o /opt/coreos-app-signing-key.gpg https://coreos.com/dist/pubkeys/app-signing-pubkey.gpg
ExecStart=/usr/bin/gpg --keyring /tmp/gpg-keyring --no-default-keyring --import /opt/coreos-app-signing-key.gpg
ExecStart=/usr/bin/gpg --keyring /tmp/gpg-keyring --no-default-keyring --verify /opt/rkt.tar.gz.sig /opt/rkt.tar.gz
ExecStart=/usr/bin/tar --strip-components=1 -xf /opt/rkt.tar.gz -C /opt/rkt
```
The URLs in this unit must be filled in before the unit is installed. Valid
URLs can be found on [rkt's releases page][rkt-releases].
This unit should be installed with either [ignition][ignition] or a [cloud config][cloud-config].
Other units being added can then contain a `After=rkt-install.service` (or
whatever the service was named) to delay their running until rkt has been
installed.
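For example (assuming the installer unit above was saved as `rkt-install.service`; the service below is purely hypothetical), a dependent unit could look like:

```
[Unit]
Description=Example service that waits for rkt
Requires=rkt-install.service
After=rkt-install.service

[Service]
ExecStart=/opt/rkt/rkt version
```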
[rkt-releases]: https://github.com/coreos/rkt/releases
[ignition]: https://coreos.com/ignition/docs/latest/
[cloud-config]: https://coreos.com/os/docs/latest/cloud-config.html
# Minimum Sample Size Required Calculator – Estimating a Population Proportion

Instructions: This calculator finds the minimum sample size required to estimate a population proportion ($p$) within a specified margin of error. Please type the significance level ($\alpha$) and the required margin of error ($E$), along with an estimate of the population proportion if one exists, and the solver will find the minimum sample size required:

- Required Margin of Error ($E$)
- Estimate of pop. proportion (leave empty if none)
- Significance level ($\alpha$)

## Minimum Required Sample Size for a Set Maximum Error

Some more information about the minimum required sample size, so you can better use the results delivered by this solver: in general terms, the larger the sample size $n$, the more precise an estimate of a population parameter can be obtained via a confidence interval. In this specific case, use the formula for the margin of error of a confidence interval for a population proportion $p$:

$$E = z_c \sqrt{\frac{\hat p(1-\hat p)}{n}}$$

It can be observed from the above formula that if the sample size $n$ (which is in the denominator) increases, the margin of error $E$ decreases, provided that the critical value $z_c$ and $\hat p$ do not change. The formula for the required sample size is therefore obtained by taking the above equation and solving for $n$.

If you want a confidence interval for the mean instead, please use this confidence interval calculator. This sample size calculator is for the population proportion; if you are dealing with a population mean, you should use our minimum required sample size calculator for the population mean.

In case you have any suggestion, or if you would like to report a broken solver/calculator, please do not hesitate to contact us.
The life of a GlobalGrandparent is full of highs and lows, and so many mixed emotions. We're excited to be on one of the high points at the moment, as we've just booked to visit our family 'down under' next month. It's been 15 months since we've seen our granddaughters, which is the longest we've been apart from family in all our years of expat / global life, and we are now literally counting the sleeps until we see them! The long journey is more daunting than ever, and becoming harder with age, but we thank our lucky stars that we're able to do it – physically and financially.
Having our granddaughters using GlobalGrandparents means we're much more involved in what's been happening with them in Oz, so we'll be arriving more 'in the know', which is great. We're also starting to plan the trip via our shared family calendar which makes things so much easier, and is increasing the excitement levels for the girls as well. We haven't implemented the 'sleeps until next visit' feature on GlobalGrandparents yet, but it's coming soon!
Are you a GlobalGranpdarent? We'd love to hear from you – please send me an email – details below!
\section{Introduction}
One of the simplest conceivable models in quantum field theory is a free boson confined to a one-dimensional ``box''. This typical textbook example, used to elucidate several basic notions, ranging from second quantization, renormalization, quantum vacuum phenomena, etc., can develop remarkably intricate dynamics once fields are allowed to interact. Equally complex behaviors arise when the underlying background is non-trivial (as in the case of stationary or explicitly time-dependent backgrounds, or in the presence of external potentials). This added level of complexity has made the ``simple'' scalar field setup an active playground for investigating quantum field dynamics in both perturbative and non-perturbative regimes. Novel results have been achieved not only in theoretical aspects but also in concrete experimental settings. Thanks to the advancement in the use of confining optical traps, it is now possible to accurately control the interaction strength of atoms at very low temperatures and the boundary conditions that arise due to the confinement of the particles in space~\cite{Lewenstein:2012,Zohar:2016}.
One-dimensional confined bosons in the presence of a rotating barrier provide a natural (scalar field) setup of the type described above. The complexity in the dynamics then manifests in the form of non-trivial one-loop effects and the occurrence of persistent currents. Usually, persistent currents appear when a quantum field is subject to the influence of a gauge flux with superimposed periodicity, as prescribed by the Aharonov-Bohm effect. A similar phenomenon can occur, although subtly, in stationary setups, as in the presence of rotation, even in the absence of external gauge fields. While rotation does indeed give rise to (artificial) gauge degrees of freedom, the requirement of periodicity rules out the possibility that any physical effect may emerge from such synthetic gauge fields. The obvious reason for this is the gauge invariance that prevents, in such a setting, a laboratory observer from detecting any effect of rotation, similarly to a co-rotating observer, making rotation-induced gauge fields \textit{fake}. The situation is different when, in conjunction with rotation, periodicity is modified by physical boundary conditions (in other words, when the barrier rotates - something that can be implemented by the presence of impurities or by a discontinuity, or cut, along the ring). The presence of the barrier breaks the gauge invariance and allows the laboratory observer to detect rotation (differently from what happens for a co-rotating observer that, by definition, does not detect rotation). Then, rotation-induced, \textit{artificial} gauge fields acquire a physical dimension and may lead to the appearance of a persistent current that is in principle observable. Concrete examples of this include the creation of Josephson junctions on a toroidal Bose-Einstein condensate \cite{Ryu:2007}, a toroidal Bose-Einstein condensate stirred by a rotating optical barrier \cite{Wright:2013}, or spinor (${}^{87}$Rb) condensates \cite{Beattie:2013}. 
Ref.~\cite{Cominotti:2014} has thoroughly analyzed such a scenario for an interacting one-dimensional quantum fluid modeled by a non-relativistic scalar field with a delta-function barrier. The analysis of Ref.~\cite{Cominotti:2014} has shown that the presence of the barrier deforms the ideal sawtooth profile of the current to a smeared one, tending to a sinusoid in the strong barrier limit. A corollary of these results with interesting experimental implications for cold atoms on mesoscopic rings is that the current-amplitude reaches an optimal regime, i.e., a maximum, for barriers of any (finite) height, and it is only slightly deformed by the presence of impurities for a large range of interaction strength.
In the present work, we plan to analyze how quantum effects deform the ground state in the presence of rotation from a different perspective, i.e., within a slightly different setting and following a different approach. First of all, while we will focus on bosons confined on a rotating ring, we will model the effect of the barrier directly through externally-controlled, \textit{free} boundary conditions, rather than adding a delta-function potential with varying strength. While the barrier affects the ground state of the system even in the absence of rotation, a persistent current occurs only when rotation is switched on simultaneously. Here, rather than looking at the current, we will focus on understanding how the ground state is deformed. Secondly, the role of interactions is incorporated in our treatment in a different and somewhat simpler way by imposing a constraint in field space, analogously to what is done in the context of nonlinear sigma models \cite{Zinn-justin}. Finally, our approach will be based on a direct computation of the one-loop effective action and performed using zeta-function regularisation. Extremizing the effective action will give us the properties of the ground state. This approach may also be of interest \textit{per se}. A general discussion of the formalism can be found, for instance, in Ref.~\cite{Toms} for the case of scalar fields in static backgrounds: we will generalize some of those results here to the case of stationary backgrounds.
\section{The model setup}
\label{sec2}
The focus of our work is a non-relativistic complex Schr\"odinger field $\Phi$,
\begin{eqnarray}
\Phi = (\phi_1 + i \phi_2)/\sqrt{2},~~~~~~~~~~ \phi_k \in \mathbb{R}
\label{eq2}
\end{eqnarray}
whose dynamics is determined by the action
\begin{eqnarray}
S_0 = \int d t \int d x &&\left\{
{i\over 2} \left(\Phi^\dagger \dot{\Phi}- \Phi \dot{\Phi}^\dagger\right)
- {1\over 2m R^2}\left|{\partial \Phi\over \partial \varphi} \right|^2
- V(x)\left|{\Phi} \right|^2 \right.\nonumber\\
&&
\left.+ {i\Omega \over 2R} \left(\Phi^\dagger {\Phi'} - \Phi {\Phi'}^\dagger\right)
-{m\over 2}\Omega^2 \left|{\Phi} \right|^2
\right\}.
\label{eq1}
\end{eqnarray}
The above action is obtained from the one at zero rotation $\Omega=0$, after performing a change of coordinates to pass from the ($\Omega=0$) co-rotating frame to the laboratory frame, $\left(t_0, \varphi_0\right) \rightarrow \left(t,\varphi\right)$, where the index $0$ indicates the coordinates at $\Omega=0$ (we have defined $x=R \varphi$):
\begin{eqnarray}
{t} &=& t_0,~~~~~~~~~~~~~{\varphi} = \varphi_0 + \Omega t_0,
\label{eq3}
\end{eqnarray}
and
\begin{eqnarray}
{\partial \over \partial t_0} &=& {\partial \over \partial t} + \Omega {\partial \over \partial \varphi},~~~~~~~~~~~~~ {\partial \over \partial \varphi_0} = {\partial \over \partial \varphi},
\label{eq5}
\end{eqnarray}
along with the following unitary transformation
\begin{eqnarray}
\Phi \to e^{+ i {m\over 2}\Omega^2 t}\Phi.
\end{eqnarray}
We should first of all notice that rotation in (\ref{eq1}) appears as a constant gauge field, $A_\varphi = m \Omega R$. Furthermore, upon rescaling of the angular velocity in terms of a new angular velocity $\Omega_C$,
\begin{eqnarray}
\Omega = {\Omega_C\over m R},
\end{eqnarray}
we see that the above action is the field-theoretical equivalent of the free part of the Hamiltonian used in Ref.~\cite{Cominotti:2014} (in other words, $\Omega_C$ coincides with the angular velocity of Ref.~\cite{Cominotti:2014}).
The external potential $V(x)$ can be chosen to incorporate a chemical potential or to be a generic function of the spatial coordinate, as in the case of a confining potential. While it is relatively easy to add an additional external electromagnetic field, we will not explore this possibility here.
\subsection{Normal modes}
Before introducing interactions in the model and proceeding with the computation of the effective action and the current, it is instructive to focus on the non-interacting rotating case, for which the calculation can be carried out straightforwardly. In the following we set $V(x)=0$. The equation of motion for the field $\Phi$ (and similarly for $\Phi^\dagger$) is, in the absence of any potential,
\begin{eqnarray}
i {\partial \Phi \over \partial t} + i {\Omega\over R} {\partial \Phi\over \partial \varphi} + {1\over \rho}{\partial^2 \Phi \over \partial \varphi^2} - {m\Omega^2\over {2}} \Phi = 0\,,
\label{eom}
\end{eqnarray}
where we have introduced the length scale $\rho$,
\begin{eqnarray}
\rho = {2} m R^2.
\end{eqnarray}
Thus, the time-independent Schr\"odinger equation yields the eigenfunctions $f_p(\varphi)$, which satisfy
\begin{eqnarray}
{1\over \rho}{\partial^2f_p(\varphi) \over \partial \varphi^2}
+ i {\Omega\over R} {\partial f_p(\varphi) \over \partial \varphi}
= \left({1\over 2}m\Omega^2- \lambda_p\right) f_p(\varphi),
\end{eqnarray}
whose solution can be written as
\begin{eqnarray}
f_p(\varphi) = N_p e^{-i {\rho\Omega\over 2 R}\varphi} \sin\left(
{\varphi \Delta}
\right),
\end{eqnarray}
where we have defined
\begin{eqnarray}
\Delta^2 = \lambda_p \rho\,,
\end{eqnarray}
and imposed the condition $f_p(0)=0$. Imposing also the condition $f_p(2 \pi)=0$ gives
\begin{eqnarray}
\Delta = {p\over 2},~~~ p \in \mathbb{N}
\end{eqnarray}
from which we can read out the eigenvalues
\begin{eqnarray}
\lambda_p = {p^2\over 4\rho}\,.
\label{lambdas}
\end{eqnarray}
Here, we keep the positive values of $p$ in order not to duplicate the solutions. If different boundary conditions are imposed, the eigenvalues change, and $p\in \mathbb{Z}$ may have to be included, as in the case of periodic boundary conditions. For completeness, we should mention that, compatibly with the topology of the background, one can require the solution to have an additional dependence on an arbitrary phase $\Xi$,
\begin{eqnarray}
f_p(\varphi) = f_p(\varphi +2 \pi) e^{i\Xi}.
\end{eqnarray}
The above condition leads to the following constraint for the phase $\Xi$:
\begin{eqnarray}
\sin\left({\varphi \Delta }\right)
= e^{-i \left({\rho\Omega\pi\over R} -\Xi\right)} \sin\left(
{(\varphi +2 \pi)\Delta}\right).
\end{eqnarray}
In what follows we set $\Xi=0$.
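The Dirichlet spectrum (\ref{lambdas}) is easy to check numerically. The Python sketch below (illustrative only; it uses the parameter values $mR=0.3$, $R=1$ adopted later in the numerical section) diagonalizes a finite-difference discretization of $-(1/\rho)\,\partial^2_\varphi$ on $[0,2\pi]$ with Dirichlet ends and compares the lowest eigenvalues with $\lambda_p = p^2/(4\rho)$:

```python
import numpy as np

# Illustrative parameters only (same values as the numerical section): m R = 0.3, R = 1.
m, R = 0.3, 1.0
rho = 2.0 * m * R**2                       # rho = 2 m R^2

# Finite-difference matrix for -(1/rho) d^2/dphi^2 on [0, 2 pi] with Dirichlet ends.
N = 400                                    # number of interior grid points
h = 2.0 * np.pi / (N + 1)
A = (np.diag(2.0 * np.ones(N))
     + np.diag(-np.ones(N - 1), 1)
     + np.diag(-np.ones(N - 1), -1)) / (rho * h**2)

numeric = np.sort(np.linalg.eigvalsh(A))[:5]
exact = np.array([p**2 / (4.0 * rho) for p in range(1, 6)])   # lambda_p = p^2/(4 rho)
rel_err = np.max(np.abs(numeric - exact) / exact)
```

The agreement improves as $O(h^2)$ with the grid spacing, as expected for a second-order stencil.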
\subsection{Free fields, Effective action}
The one-loop effective action can be obtained passing to Euclidean time, $t\to - i \tau$, and integrating over quantum fluctuation $\delta \Phi$, where $\Phi = \bar\Phi+\delta \Phi$, and $\bar\Phi$ is a background field. Starting from Eq.~(\ref{eq1}), we obtain the Euclideanized effective action
\begin{eqnarray}
\hspace{-0.1cm}\Gamma &=& \int_0^\beta d\tau \int d x\;
\bar\Phi^\dagger
\left[
{\partial \over \partial \tau}
- {1\over \rho} {\partial^2 \over \partial \varphi^2} - {i\Omega \over R} {\partial \over \partial \varphi}+{m\over 2}\Omega^2
\right]\bar\Phi + \delta \Gamma\,,\nonumber
\end{eqnarray}
where
\begin{eqnarray}
\delta \Gamma
= \log \det \left(
{\partial \over \partial \tau} - {1\over \rho}{\partial^2 \over \partial \varphi^2} - i {\Omega\over R} {\partial \over \partial \varphi}+{m\over 2}\Omega^2
\right).
\label{oneloopdet}
\end{eqnarray}
From the above expression, it is evident that the one-loop contribution to the effective action does not depend on the background field $\bar \Phi$. This implies that the effective equations for $\bar\Phi$ will depend only on the background part of the action and not on $\delta \Gamma$. It is important to stress that the background field equation must be equipped with some boundary conditions at the edges of the interval $\left[0,~2\pi\right]$; thus, different solutions for $\bar\Phi$ will arise for different boundary conditions. The interacting case is different, as $\delta \Gamma$ develops a dependence on the background fields.
Assuming periodic boundary conditions in Euclidean time, the complete eigenfunctions will have the form
\begin{eqnarray}
e^{-i \omega_n \tau} f_p(\varphi)\,,
\end{eqnarray}
with the frequencies given by
$$
\omega_n=2 \pi n/\beta,~~~~~~~~~~~~~n\in\mathbb{Z}.
$$
The quantity $\beta$ can be thought of as the inverse temperature or as the size of the Euclidean box, which is sent to infinity at the end of the calculation (zero temperature limit). The eigenvalues of the full differential operator in (\ref{oneloopdet}) then become
\begin{eqnarray}
{\mathcal E}_{np} =
i \omega_n
+ \lambda_p.
\end{eqnarray}
We use zeta-regularization along with the results of the preceding sub-section and express the one-loop effective action in terms of the following \textit{generalized} zeta function,
\begin{eqnarray}
\zeta(s) = \sum_{n=-\infty}^\infty \sum_{p} {\mathcal E}_{np}^{-s}\,,
\label{zetasum}
\end{eqnarray}
as
\begin{eqnarray}
\delta \Gamma &=& 2\pi R\beta\left( \zeta(0) \log \ell - \zeta'(0)\right),
\label{effacczt}
\end{eqnarray}
with $\ell$ indicating a renormalization scale with dimension of length (see Refs.~\cite{Kirsten,elizalde94} for an introduction to spectral zeta functions and zeta-regularization). The advantage of this approach is that it reduces the problem to the computation of the analytically continued values at $s=0$ of $\zeta(s)$ and its derivative. This is customarily done by finding an (integral) representation for the series (\ref{zetasum}) for which the analytical continuation can be carried out. In the present case, we will limit our discussion to the zero temperature limit; it is therefore convenient to rearrange the zeta function by separating out the zero temperature contribution,
\begin{eqnarray}
\zeta(s) = \sum_{p>0} \lambda_p^{-s} + \sigma(s),
\label{zeta0T}
\end{eqnarray}
where the second term,
\begin{eqnarray}
\sigma(s) = {\beta^s\over \Gamma(s)} \sum_{n=1}^\infty \sum_{\lambda_p>0} {e^{-n\beta \lambda_p} \over n^{1-s}},
\end{eqnarray}
encodes the finite temperature corrections. It is easy to show that this term does not contribute to the effective action in the $T\to 0$ limit:
\begin{eqnarray}
\lim_{T\to 0} \sigma'(0) = - \lim_{\beta\to \infty} \sum_p \ln\left(1-e^{-\beta \lambda_p}\right) = 0.
\end{eqnarray}
Thus, only the first term in (\ref{zeta0T}) contributes to the effective action for vanishing temperature. For $\Omega=0$, the first term corresponds to the zero-point energy or Casimir term. This is nothing but the extension to the stationary case of what can be found in Ref.~\cite{Toms}. In the absence of rotation, this term only produces a constant shift in the effective action, and it does not contribute to the current. In the presence of rotation, $\Omega\neq 0$, the story is similar in the non-interacting regime. The vacuum energy contribution can be expressed (as may be expected) in terms of Riemann zeta functions:
it is straightforward to see that the zero-point energy contribution is related to the analytically continued values at $s=0$ of the function
\begin{eqnarray}
\xi(s) = (4\rho)^{s}\sum_{p=1}^{\infty} p^{-2s} = (4\rho)^{s} \zeta_R(2s),
\end{eqnarray}
with $\zeta_R(s)$ representing the Riemann zeta function. At this point, the analytic continuation is trivial and, following relation (\ref{effacczt}), yields
\begin{eqnarray}
\delta \Gamma =
2 \pi R \beta {1\over 2} \log\left({16\pi^2 \rho\over \ell}\right).
\end{eqnarray}
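The continuation behind this result can be checked with arbitrary-precision arithmetic: since $\xi(0)=\zeta_R(0)=-1/2$ and $\xi'(0)=\zeta_R(0)\log(4\rho)+2\zeta_R'(0)$, with $\zeta_R'(0)=-\frac{1}{2}\log 2\pi$, relation (\ref{effacczt}) applied to $\xi(s)$ reproduces the closed form above. The Python sketch below (illustrative parameter values) verifies this with mpmath:

```python
import mpmath as mp

# Illustrative values only: rho = 2 m R^2 with m R = 0.3, R = 1; ell is arbitrary.
rho, ell, R, beta = mp.mpf('0.6'), mp.mpf('0.5'), mp.mpf('1.0'), mp.mpf('10.0')

# xi(s) = (4 rho)^s zeta_R(2 s); its value and derivative at s = 0
xi = lambda s: mp.power(4 * rho, s) * mp.zeta(2 * s)
xi0 = xi(mp.mpf(0))            # = zeta_R(0) = -1/2
xi0p = mp.diff(xi, 0)          # = log(4 rho) zeta_R(0) + 2 zeta_R'(0)

# delta Gamma = 2 pi R beta (xi(0) log(ell) - xi'(0))  vs the closed form above
dGamma = 2 * mp.pi * R * beta * (xi0 * mp.log(ell) - xi0p)
closed = mp.pi * R * beta * mp.log(16 * mp.pi**2 * rho / ell)
```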
Some remarks are in order. First of all, we notice that there is no dependence on the angular velocity. This is specific to the (Dirichlet) boundary conditions that we have imposed. Changing the boundary conditions to Robin, for instance, will shift the eigenvalues by a quantity depending on $\Omega$, thus reintroducing the angular velocity in the eigenvalues. The analytical continuation in this more general case can be carried out using the Chowla-Selberg formula \cite{ChowlaSelberg,FlachiTanaka}. However, what is more interesting is that imposing Dirichlet boundary conditions leads to an $\Omega$-independent vacuum energy, and therefore implies the vanishing of the persistent current; this is not surprising: Dirichlet boundary conditions correspond to a vanishing flux through the boundary. This agrees with what is found in \cite{Cominotti:2014}.
\section{Interacting non-relativistic problem}
The problem becomes more interesting when interactions are included. The simplest way to incorporate interactions in this model is by enforcing a constraint on the dynamical fields. This can be implemented by a Lagrange multiplier $\lambda( x)$,
\begin{eqnarray}
S_\lambda &=& S_0 - \int d t \int d x \lambda( x)\left(\left|\Phi\right|^2 - z^2 \right),
\label{Slambda}
\end{eqnarray}
where $S_0$ is given by (\ref{eq1}) and $z$ is a constant. Requiring the Lagrange multiplier to extremize the effective action enforces the constraint:
\begin{eqnarray}
0 = {\delta S_\lambda \over \delta \lambda}= \left|\Phi\right|^2 - z^2.
\end{eqnarray}
The constraint results in an interaction between the fields, since any change in an individual degree of freedom (e.g., $\phi_1$) is reflected in the other degrees of freedom (e.g., $\phi_2$), which need to change to enforce the constraint, i.e., to keep the length $\left|\Phi\right|^2$ constant (and equal to $z^2$). We should remark that the constrained theory we consider here is different from the \textit{Lieb-Liniger} model considered in \cite{Cominotti:2014}. In our formulation, $z$ represents the coupling constant, and the limit $z \to 0$, $\lambda \to 0$ returns the non-interacting, dilute limit.
In the interacting case, our first step is also to compute the effective action. Proceeding in a similar way as in the preceding section, we obtain the following expression for the Euclidean effective action to one-loop:
\begin{eqnarray}
\Gamma =\! \int_0^\beta\!\!\! d\tau \int\! d x\;
\bar\Phi^\dagger
\left[
{\partial \over \partial \tau}
- {1\over \rho} \mathcal{D}^2
\right]\bar\Phi
+\lambda( x)\left(\left|\Phi\right|^2 - z^2 \right)
+ \delta \Gamma\,,
\label{effact}~~~~~~~~~
\end{eqnarray}
where
\begin{eqnarray}
\delta \Gamma
= \log \det \left(
{\partial \over \partial \tau}
- {1\over \rho} \mathcal{D}^2
+\lambda(x)
\right),
\label{oneloopdet2}
\end{eqnarray}
and where we have defined the following covariant derivative
\begin{eqnarray}
\mathcal{D} = {\partial \over \partial \varphi} +i {\rho \over 2 R} \Omega.
\end{eqnarray}
\subsection{One-loop effective action in the presence of rotation and Lagrange multiplier}
The difference with respect to the non-interacting case lies in the presence of the Lagrange multiplier $\lambda$. Extremization of the effective action with respect to this term controls the dependence on the background field $\bar \Phi$ through the effective field equations. The presence of an \textit{a priori} unknown function in the determinant prevents us from proceeding as in the previous section. There are several ways to bypass the problem; here, we will follow an approach (see, for example, Ref.~\cite{NinoTakahiro}) that consists in expressing the effective action in terms of the heat-kernel of the differential operator in (\ref{oneloopdet2}), from which a derivative expansion can be obtained.
Thus, the first step is to re-express the determinant in (\ref{oneloopdet2}) as
\begin{eqnarray}
\delta \Gamma
= - \lim_{s\to 0} {d\over ds}
{1\over \Gamma(s)} \sum_{n=-\infty}^{\infty}
\int_0^\infty {dt \over t^{1-s}} e^{-i\omega_n t} \mbox{Tr}\; e^{- {t\over \rho}D}
\label{oneloopdet3}
\end{eqnarray}
where the differential operator $D$ is defined as
\begin{eqnarray}
D =
-\mathcal{D}^2
+\rho\lambda(x).
\label{Diff}
\end{eqnarray}
A chemical potential or an external potential can be straightforwardly included in the present treatment. To make the physics more transparent, we proceed by rescaling the integration variable $t = {\rho u}$ leading to
\begin{eqnarray}
\delta \Gamma
= - \lim_{s\to 0} {d\over ds}
{ \rho^{s}\over \Gamma(s)} \sum_{n=-\infty}^{\infty}
\int_0^\infty {du \over u^{1-s}} e^{-i \varpi_n u} \mbox{Tr}\; e^{-{u} D}\,,~~~~~~~~~
\label{oneloopdet4}
\end{eqnarray}
where
$$
\varpi_n = {2 \pi n \over \eta},~~~~~~~~~~~~~\eta = {\beta \over \rho}.
$$
Notice that $\eta$ is the ratio of two length scales and is thus dimensionless, and so is $\varpi_n$. The change of the integration variable essentially corresponds to a rescaling of the inverse temperature $\beta$. Expressing the determinant as in (\ref{oneloopdet4}) has the effect of rescaling the temperature by a factor proportional to $\rho$. This illustrates that the usual small-$t$ (high temperature) heat-kernel asymptotics (see Refs.~\cite{Actor:1987,Toms:1992dq}) occurs in this case for small values of the parameter $\eta$. If we write the functional trace in the above integral in terms of the eigenvalues $\xi_p$ of the operator $D$, we have
\begin{eqnarray}
\mbox{Tr}\; e^{-{u} D} = \sum_{\xi_p>0} e^{- \xi_p u}.
\end{eqnarray}
Under the assumption that the eigenvalues are non-negative, the integrand is exponentially suppressed at large $u$. We will return to the validity of this assumption; in what follows, we adopt a small-argument approximation for the kernel in the above integral,
\begin{eqnarray}
\mbox{Tr}\; e^{-{u} D} \approx K(u) =
{\sqrt{1 \over 2\pi u}} \sum_{k\in \mathbb{N}} a_{k} {u}^{k} + \mbox{boundary terms},~~~~~~~\label{hkexpansion}
\label{hkexp}
\end{eqnarray}
with the coefficients $a_{k}\equiv a_{k}\left(\lambda\right)$ depending on powers and derivatives of $\lambda$. In order to write the bulk equation for the dynamical fields, we only need the bulk part. Boundary contributions will be considered later.
Putting everything together allows us to express the one-loop effective action as
\begin{eqnarray}
\delta \Gamma
= - \lim_{s\to 0} {d\over ds}
{ \rho^{s}\over \Gamma(s)} \sum_{n=-\infty}^{\infty}
\int_0^\infty {du \over u^{1-s}} e^{-i \varpi_n u} K(u).~~~~~~~~~
\label{oneloopdet5}
\end{eqnarray}
In a region of the complex $s$-plane where the above expression converges, we can swap the summation over $n$ with the integral and re-express the sum using the identity
\begin{eqnarray}
\sum_{n=-\infty}^\infty \exp\left(-i \varpi_n u \right)
=
\eta \sum_{n=-\infty}^\infty \delta\left(u -\eta n\right).
\end{eqnarray}
This allows us to write
\begin{eqnarray}
\delta \Gamma
= - \lim_{s\to 0} {d\over ds}
{ \eta \rho^{s}\over \Gamma(s)} \sum_{n=-\infty}^{\infty}
\int_0^\infty {du \over u^{1-s}} \delta\left(u -\eta n\right) K(u)\theta_{reg}(u).~~~~~~~~~
\label{oneloopdet6}
\end{eqnarray}
Formula (\ref{oneloopdet6}) is just a formal re-writing of (\ref{oneloopdet2}) and returns the correct non-interacting ($\lambda\to 0$) limit discussed in the previous section. Also, in order to perform the integration over $u$, we first introduce a regularized step function $\theta_{reg}(u) \to \theta(u)$ that returns the ordinary step function in the limit where the regularization is removed (the details of the regularization are unimportant, as will become clear shortly). This step is necessary to keep the step function (and the integrand) continuous at $u=0$. Carrying out the integral requires the assumption of continuity at the origin, leading to
\begin{eqnarray}
\delta \Gamma
&=&
- \lim_{s\to 0} \lim_{n\to 0}{d\over ds}
{\left({\rho\eta}\right)^{s}\over \Gamma(s)}
{n^{s-1}} K\left(\eta n\right) \theta_{reg}(\eta n)\nonumber\\
&&- \lim_{s\to 0} {d\over ds}
{\left({\rho \eta}\right)^{s}\over \Gamma(s)} \sum_{n=1}^{\infty}
{n^{s-1}} K\left(\eta n\right).
\label{oneloopdet7}
\end{eqnarray}
In the above expression, we have kept the regularized $\theta$-function only for the $n=0$ term in the sum, while removing the regularization for the $n\geq 1$ contributions. We may notice that the first term in the heat-kernel expansion does not contribute to the effective action, being independent of $\lambda$ or $\bar\Phi$; we then arrive at
\begin{eqnarray}
\delta \Gamma
&=&
-\lim_{n\to 0}\left[{a_0 \over n^{3/2}}+{\eta} {a_1\over \sqrt{n}} \right]{\theta_{reg}(0) \over \sqrt{2\pi \eta}}
\nonumber\\
&&- {1 \over \sqrt{2\pi \eta}} \lim_{s\to 0} {d\over ds}
{\left({\rho \eta}\right)^{s}\over \Gamma(s)}
\sum_{k\in \mathbb{N}} \zeta_R\left({3\over 2}-{k}-s\right)
\eta^{k}a_{k}\,, \nonumber
\label{oneloopdet8}
\end{eqnarray}
from which we get
\begin{eqnarray}
\delta \Gamma
&=&
-\sqrt{\eta\over 2\pi} {a_1}\lim_{n\to 0} {\theta_{reg}(0) \over \sqrt{n}}
-
{1\over \sqrt{2\pi\eta}}\sum_{k=1}^\infty \zeta_R(3/2-k) \eta^k a_k,~~~~~~~~~~
\label{oneloopdet9}
\end{eqnarray}
We have dropped the term proportional to $a_0$ since it does not depend on the background fields or the Lagrange multiplier and disappears from the equations of motion. Physically, this term corresponds to a renormalization of the vacuum energy. The other term, proportional to the $a_1$ heat-kernel coefficient, which diverges in the limit $n \to 0$, corresponds to a renormalization of the (inverse) coupling $z$ in the classical action. More importantly, we should observe that the expansion appears in powers of $\eta \propto (T m R^2)^{-1}$. {Thus, the expression (\ref{oneloopdet9}) can be safely used in the limit of high temperature and small mass or size, or in the limit of small temperature and large mass or size.}
We should remark that within this approximation we are ignoring large-$t$ contributions to the heat-kernel. In principle, these terms should yield infrared-sensitive logarithms that repair infrared divergences. It is possible to include such terms using a more elaborate regularization scheme; here, however, we simply ignore them. {Based on dimensional analysis, one may conclude that a derivative expansion of the effective action takes the same form (\ref{oneloopdet9}) even beyond the range of validity discussed above.
This, along with the \textit{assumption} that the ground state is not rapidly varying, allows us to ignore higher-order derivatives. While it is physically reasonable, certainly in a non-relativistic context, to assume that a rapidly varying background is not the ground state, the results of Ref.~\cite{Cominotti:2014} clearly show that this is the case for the present problem.}
The advantage of the present approach lies in the expansion (\ref{hkexpansion}). The coefficients $a_k$ are integrals of local quantities that can be obtained from the knowledge of the differential operator $D$ (see any of the books in Refs.~\cite{Avramidi,ParkerToms,Gilkey} for an in-depth introduction). For any operator of the form $\Theta = g^{\mu\nu}\nabla_\mu \nabla_\nu + f(x)$, where $\nabla_\mu$ is any covariant derivative that may include gauge potentials and $f = f(x)$ is any regular function (in general, $f(x)$ is an operator that does not contain derivatives), the coefficients can be found in any of the references \cite{Avramidi,ParkerToms,Gilkey}. In the present case, the metric tensor is trivial, the spin structure absent, and $f(x) \to \lambda(x)$, leading to the following expressions for the first four coefficients that are relevant to our case:
\begin{align}
&a_0= \beta \int dx\, 1 \nonumber
\\
&a_1=\beta \int dx \left(- \lambda \right)\nonumber
\\
&a_2=\beta \int dx \left(\frac{1}{2}\lambda^2-\frac{1}{6} {\mathcal D}^2 \lambda \right) \nonumber
\\
&a_3=\beta \int dx \left(-\frac{1}{6}\lambda^3 +\frac{1}{12}\left( {\mathcal D}\lambda \right)^2+\frac{1}{6} \lambda{{\mathcal D}^2\lambda}-\frac{1}{60}{{\mathcal D}^4\lambda}\right).
\nonumber
\end{align}
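A quick consistency check of the constant-$\lambda$ parts of these coefficients: for homogeneous $\lambda$ the derivative terms drop, and the integrands reduce to the Taylor coefficients $(-\lambda)^k/k!$ of $e^{-\lambda u}$. The sympy sketch below confirms this; it does not test the derivative terms.

```python
import sympy as sp

u, lam = sp.symbols('u lambda')

# For constant lambda: Tr e^{-u D} carries a factor e^{-lambda u}, whose Taylor
# coefficients in u must match the constant-lambda parts of a_0 ... a_3.
series = sp.series(sp.exp(-lam * u), u, 0, 4).removeO()
coeffs = [sp.expand(series.coeff(u, k)) for k in range(4)]
expected = [sp.Integer(1), -lam, lam**2 / 2, -lam**3 / 6]   # 1, -lam, lam^2/2, -lam^3/6
match = all(sp.simplify(c - e) == 0 for c, e in zip(coeffs, expected))
```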
\subsection{Effective equations and boundary conditions}
Relations (\ref{effact}), (\ref{oneloopdet9}), along with the above explicit form of the coefficients yield an explicit expression for the effective action from which the equation for the background fields, $\bar\Phi$ and its conjugate, and for the Lagrange multiplier $\lambda$ can be obtained. Here, we will truncate the derivative expansion to order $k=3$ (i.e., including up to the coefficient $a_3$), which allows us to obtain the following system of nonlinear coupled differential equations\footnote{The second order equations prior to the first-order reduction are
\begin{eqnarray}
0&=& {1\over \rho}{\partial^2 \bar\Phi \over \partial \varphi^2}
+ i {\Omega\over R} {\partial \bar\Phi\over \partial \varphi}
- \left({m\Omega^2\over 2} +\lambda(\varphi)\right) \bar\Phi \nonumber\\
0&=& {d^2 \lambda(\varphi) \over d\varphi^2} -{3} \lambda^2 + {\mathcal M} \lambda
+ \mathcal{U},
\nonumber
\end{eqnarray}
}:
\begin{eqnarray}
X_1' &=& X_2 \\
X_2' &=& {m\rho\Omega^2\over 2} X_1 +\rho Z_1 X_1 + {\rho \Omega\over R}Y_2\\
Y_1' &=& Y_2 \\
Y_2' &=& {m\rho\Omega^2\over 2} Y_1 +\rho Z_1 Y_1 - {\rho \Omega\over R}X_2 \\
Z_1' &=& Z_2 \\
Z_2' &=& 3 Z_1^2 - \mathcal M Z_1 - \mathcal U
\end{eqnarray}
where we have defined ($z_{ren}$ is the renormalized coupling)
\begin{eqnarray}
X_1 &=& \Re \bar \Phi,~~~
Y_1 = \Im \bar \Phi,~~~
Z_1 = \lambda,~~~\\
X_2 &=& \Re \bar \Phi',~~~
Y_2 = \Im \bar \Phi',~~~
Z_2 = \lambda',~~~
\end{eqnarray}
\begin{figure*}[t!]
\begin{center}
\begin{tabular}{ccc}
$\Omega=0$&$\Omega=0.5$&$\Omega=1$\\
\includegraphics[scale=0.42,trim={1cm 3.3cm 1.3cm 3cm},clip]{Omega0Mod.png}&
\includegraphics[scale=0.42,trim={1.2cm 3.3cm 1.3cm 3cm},clip]{Omega05Mod.png}&
\includegraphics[scale=0.42,trim={1.2cm 3.3cm 1cm 3cm},clip]{Omega1Mod.png}\\
\includegraphics[scale=0.42,trim={1cm 3.5cm 1.3cm 4cm},clip]{Omega0Lambda.png}&
\includegraphics[scale=0.42,trim={1.2cm 3.5cm 1.3cm 4cm},clip]{Omega05Lambda.png}&
\includegraphics[scale=0.42,trim={1.2cm 3.5cm 1cm 4cm},clip]{Omega1Lambda.png}\\
$\varphi$&$\varphi$&$\varphi$
\end{tabular}
\put(-520,30){$\Phi^2$}
\put(-513,-30){$\lambda$}
\end{center}
\caption{The figure shows the numerical solutions for $\Phi^2$ and $\lambda$ for illustrative values of the angular velocity $\Omega$. We have selected the parameters as follows: $m\times R = 0.3$, $\beta/ R= 10$, and $z=0.1$. The curves correspond to the following solutions: for $\Omega=0$, orange $\Rightarrow (\bar \Phi_{\varphi=0}=1.121,~ \bar \Phi'_{\varphi=0}=0.95 , ~ \lambda'_{\varphi=0}=-1.66)$, cyan $\Rightarrow (0.001,0.60,-1.19)$, black $\Rightarrow (0.001,0.44,-0.76)$;
for $\Omega=0.5$, orange $\Rightarrow (1.011, 0.96, -1.57)$, cyan $\Rightarrow (0.001,0.60,-1.23)$, black $\Rightarrow (0.001,0.44,-0.80)$;
for $\Omega=1$, orange $\Rightarrow (1.001, 0.58, -1.37)$, cyan $\Rightarrow (0.001,0.61,-1.37)$, black $\Rightarrow (0.001,0.47,-0.92)$.}
\label{figure}
\end{figure*}
\begin{eqnarray}
\mathcal U &=&
{\pi \rho^3 \Omega^2\over 3 R^2 \beta}{\zeta_R(3/2)\over \zeta_R(5/2)}
-{6 \rho^2\over \beta^2}{\zeta_R(1/2)\over \zeta_R(-3/2)}\nonumber\\
&&
-{6\sqrt{2\pi\rho^5\over \beta^5}}{\left(X_1^2+Y_1^2-z_{ren}^2\right)\over \zeta_R(-3/2)}
-{1\over 10}\left({\rho\Omega\over 2R}\right)^4\,,
\\
\mathcal M &=& \left(8\pi {\zeta_R(3/2)\over \zeta_R(5/2)}{\rho\over \beta}
- 3\left({\rho \Omega\over 2R}\right)^2\right)\,.
\end{eqnarray}
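The first-order system above can be integrated with standard ODE machinery. The Python sketch below is a minimal illustration, not the full shooting search described next: the initial data are hypothetical (in the spirit of the figure caption, with $\lambda(0)$ rescaled to unity), and the constraint term is taken as $|\bar\Phi|^2=X_1^2+Y_1^2$; it integrates a short segment from $\varphi=0$:

```python
import numpy as np
from mpmath import zeta
from scipy.integrate import solve_ivp

# Illustrative parameters (as in the figure): m R = 0.3, beta/R = 10, R = 1, z = 0.1.
m, R, beta, Omega, z_ren = 0.3, 1.0, 10.0, 0.5, 0.1
rho = 2.0 * m * R**2
z32, z52, z12, zm32 = (float(zeta(s)) for s in (1.5, 2.5, 0.5, -1.5))

M = 8.0 * np.pi * (z32 / z52) * rho / beta - 3.0 * (rho * Omega / (2 * R))**2

def rhs(phi, s):
    X1, X2, Y1, Y2, Z1, Z2 = s
    # Constraint term taken as |Phi|^2 = X1^2 + Y1^2 (assumption; see text).
    U = (np.pi * rho**3 * Omega**2 / (3 * R**2 * beta) * z32 / z52
         - 6.0 * rho**2 / beta**2 * z12 / zm32
         - 6.0 * np.sqrt(2.0 * np.pi * rho**5 / beta**5)
           * (X1**2 + Y1**2 - z_ren**2) / zm32
         - (rho * Omega / (2 * R))**4 / 10.0)
    return [X2,
            m * rho * Omega**2 / 2 * X1 + rho * Z1 * X1 + rho * Omega / R * Y2,
            Y2,
            m * rho * Omega**2 / 2 * Y1 + rho * Z1 * Y1 - rho * Omega / R * X2,
            Z2,
            3.0 * Z1**2 - M * Z1 - U]

# Hypothetical initial data: (Re Phi, Re Phi', Im Phi, Im Phi', lambda, lambda') at phi = 0.
s0 = [1.011, 0.96, 0.0, 0.0, 1.0, -1.57]
sol = solve_ivp(rhs, (0.0, 1.0), s0, rtol=1e-8, atol=1e-10)
```

The full calculation repeats such integrations over a grid of boundary data, retaining only solutions that meet the periodicity requirements described below.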
As anticipated in the introduction, we are interested in the response of the system (in particular of its ground state) to changes in the boundary conditions. Thus, we enforce the boundary conditions directly on the bulk solutions and see how these will change when the boundary conditions change. We should also notice that since we have introduced interactions as a constraint in field space through a Lagrange multiplier, quantum effects enter in the effective equation for $\lambda$ that, through the nonlinear structure of the effective equations (i.e., the coupling between the background fields $\bar\Phi$ and $\bar\Phi^\dagger$, and $\lambda$), affects the ground state.
The numerical calculation is carried out in \textit{Python}, and the equations are solved by fixing the boundary conditions on the left, at $\varphi=0$, and shooting to the right. The right-hand boundary is regulated by shifting it by an increasingly small amount $\epsilon$ until the solution satisfies the imposed requirements. All plots shown refer to $\epsilon = 10^{-2}$. The boundary values of the real and imaginary parts of $\bar \Phi$ at $\varphi=0$---i.e. $\Re \bar \Phi_{\varphi=0}$ and $\Im \bar \Phi_{\varphi=0}$---and $\lambda_{\varphi=0}$, along with their derivatives, have been varied within the following intervals: $10^{-3} \leq \Re \bar \Phi_{\varphi=0} \leq 1.4$, $10^{-3} \leq \Im \bar \Phi_{\varphi=0} \leq 1.4$, $-10^{-3} \leq \Re \bar \Phi'_{\varphi=0} \leq 1.1$, $-10^{-3} \leq \Im \bar \Phi'_{\varphi=0} \leq 1.1$, $-1.90\leq \lambda'_{\varphi=0} \leq 0.81$. Also, we have re-scaled the value of $\lambda$ at $\varphi=0$ to unity. With the boundary values on the right fixed, we have numerically searched for solutions that satisfy continuity and periodicity or anti-periodicity for the real and imaginary parts (leading to a periodic modulus square $\Phi^2$) with a tolerance of $1\%$, and repeated the numerical search at increments of $10^{-2}$ on all boundary values. We should note here that solutions that do not satisfy these added constraints are still valid, despite being non-periodic or discontinuous at the boundary. The values of the physical parameters have been set as follows: $m\times R=0.3$, $\beta/R = 10$, and $z=0.1$. In the numerical simulations we have set $R=1$. {The rationale behind this choice was to keep both the mass and the temperature small.
Notice that this choice of parameters requires the additional assumption that the solution is not rapidly varying; in other words, rapidly varying solutions are excluded from the spectrum of possible ones.} Although we do not report them here, we have explored other parameter sets, which have led to similar numerical solutions.
Some illustrative results of the numerical calculation are given in Fig.~\ref{figure} for several values of the angular velocity, $\Omega=0, 0.5, 1$. We have also explored the vicinity of each of these values (e.g., for $\Omega=0$, we have checked $\Omega=0.1, 0.2, 0.3$, etc.) without finding any significant deformation in the numerical solutions.
Several remarks are in order. First of all, one should not confuse the variety of solutions with excited states. Each solution corresponds to a specific choice of boundary conditions and is unique; so, despite different solutions leading, in principle, to different values of the action, no transition between them occurs as long as the boundary conditions are kept fixed. This may be an interesting point, as the boundary conditions can, in principle, be controlled by optical traps. Secondly, some of our solutions clearly show a behavior similar to those of Ref.~\cite{Cominotti:2014}: solutions are peaked at $\varphi=\pi$ and descend smoothly in both directions towards the boundaries. Third, we find \textit{out-of-phase} solutions with larger amplitude, peaked near $\varphi=\pi/2$; such solutions join continuously at the boundaries and are dephased with respect to the solutions peaked at $\varphi=\pi$ (again, these are not higher energy solutions, but simply solutions obeying different boundary conditions). It is interesting to notice the similarity between the amount of dephasing and the detuning of the boundary conditions: if the boundary conditions deviate from those producing solutions symmetric with respect to the center of the interval, then the resulting solution acquires a phase. While from the mathematical point of view this is a trivial observation (i.e., it also happens for plane waves), in the present case it suggests a way to measure a deviation from specified boundary conditions. The numerical profiles of the Lagrange multiplier (which have no correspondence in Ref.~\cite{Cominotti:2014}) correlate with those of the amplitude and show a peak coinciding with that of the amplitude.
\section{Conclusions}
In this work, we have studied a system of (free and interacting) confined non-relativistic bosons in one dimension in the presence of rotation.
Confinement in the angular direction occurs due to boundary conditions that can be physically implemented by optical trapping or impurities. Boundary conditions essentially mimic the presence of a barrier and prevent the possibility of gauging away the synthetic gauge field associated with rotation, making the combination of rotation and barrier an intriguing way to alter the properties of the system. In this work, we have studied how the ground state (i.e., the extremal of the one-loop effective action) changes when the rotation or the boundary conditions are changed. After discussing the free case, we have considered the much more complicated problem of interacting fields. Here, we have introduced interactions as a constraint in field space, which slightly simplifies our treatment and the computation of the one-loop effective action. The latter is carried out by using an approach based on heat-kernels. This method, adapted to the stationary case discussed here, allowed us to obtain an expansion in terms of the background fields (and their derivatives), assuming these were generic spatially varying functions (and a particular dimensionless combination of the physical parameters to be small). This approach proves to be rather valuable for dealing with the case of general boundary conditions or, in other words, with any barrier properties that induce an inhomogeneous ground state. Furthermore, although in the numerical calculations we have kept the temperature small, the results also include (within the validity of our approximations) finite temperature effects and can be extended straightforwardly to finite density. The method itself can be a helpful complement to fully non-perturbative numerical calculations.
The machinery developed here has been ultimately implemented numerically, and it allowed us to explore the ground state solution for varying boundary conditions. We have found three classes of solutions, two of which are compatible with the behavior of Ref.~\cite{Cominotti:2014}, presenting a maximum at the center of the interval and symmetrically descending towards the boundaries, where the background field profile attains a minimum. We also found a third type of solution with a similar profile but dephased and with a higher amplitude. Such dephased solutions also reach a minimum, close to $\varphi = 2\pi/3$, and can be mapped into center-symmetric solutions by a translation; that is, such solutions are topologically equivalent. This gives further support to the argument of Ref.~\cite{Cominotti:2014} that the presence and type of impurities only minimally deform the properties of the system (in this case, its ground state).
\acknowledgements
AF acknowledges the support of the Japan Society for the Promotion of Science Grant-in-Aid for Scientific Research KAKENHI (Grants No. 18K03626 and No. 21K03540). VV has been partially supported by the H2020 programme and by the Secretary of Universities and Research of the Government of Catalonia through a Marie Sk{\l}odowska-Curie COFUND fellowship -- Beatriu de Pin{\'o}s programme no. 801370. VV would like to thank Laura Bonavera and Joaqu{\'i}n Gonz{\'a}lez-Nuevo for a useful discussion.
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 8,953
|
{"url":"https:\/\/codegolf.stackexchange.com\/questions\/133754\/one-oeis-after-another?page=9&tab=votes","text":"# One OEIS after another\n\nAs of 13\/03\/2018 16:45 UTC, the winner is answer #345, by Scrooble. This means the contest is officially over, but feel free to continue posting answers, just so long as they follow the rules.\n\nAs well, just a quick shout out to the top three answerers in terms of numbers of answers:\n\n3. Hyper Neutrino - 26 answers\n\nThis is an answer chaining question that uses sequences from OEIS, and the length of the previous submission.\n\nThis answer chaining question will work in the following way:\n\n\u2022 I will post the first answer. All other solutions must stem from that.\n\u2022 The next user (let's call them userA) will find the OEIS sequence in which its index number (see below) is the same as the length of my code.\n\u2022 Using the sequence, they must then code, in an unused language, a program that takes an integer as input, n, and outputs the nth number in that sequence.\n\u2022 Next, they post their solution after mine, and a new user (userB) must repeat the same thing.\n\nThe nth term of a sequence is the term n times after the first, working with the first value being the first value given on its OEIS page. In this question, we will use 0-indexing for these sequences. For example, with A000242 and n = 3, the correct result would be 25.\n\n## However!\n\nThis is not a , so shortest code doesn't matter. But the length of your code does still have an impact. To prevent the duplication of sequences, your bytecount must be unique. This means that no other program submitted here can be the same length in bytes as yours.\n\nIf there isn't a sequence for then length of the last post, then the sequence for your post is the lowest unused sequence. 
This means that the sequences used also have to be unique, and that the sequence cannot be the same as your bytecount.\n\nAfter an answer has been posted and no new answers have been posted for more than a week, the answer before the last posted (the one who didn't break the chain) will win.\n\n## Input and Output\n\nGeneric input and output rules apply. Input must be an integer or a string representation of an integer and output must be the correct value in the sequence.\n\n## Formatting\n\n# N. language, length, [sequence](link)\n\ncode\n\n*anything else*\n\n\n## Rules\n\n\u2022 You must wait for at least 1 hour before posting an answer, after having posted.\n\u2022 You may not post twice (or more) in a row.\n\u2022 The index number of a sequence is the number after the A part, and with leading zeros removed (e.g. for A000040 the index number is 40)\n\u2022 You can assume that neither the input nor the required output will be outside your languages numerical range, but please don't abuse this by choosing a language that can only use the number 1, for example.\n\u2022 If the length of your submission is greater than 65536 characters long, please provide a link to a way to access the code (pastebin for example).\n\u2022 n will never be larger than 1000, or be out of bounds for the sequence, simply to prevent accuracy discrepancies from stopping a language from competing.\n\u2022 Every 150 (valid) answers, the number of times a language may be used increases. So after 150 solutions have been posted, every language may be used twice (with all previous answers counting towards this). For instance, when 150 answers have been posted, Python 3 may be used twice, but due to the fact that it has already been used once, this means it can only be used once more until 300 answers have been posted.\n\u2022 Please be helpful and post a link to the next sequence to be used. This isn't required, but is a recommendation.\n\u2022 Different versions of languages, e.g. 
Python 2 and Python 3 are different languages. As a general rule, if the different versions are both available on Try It Online, they are different languages, but keep in mind that this is a general rule and not a rigid answer.\n\u2022 It is not banned, but please try not to copy the code from the OEIS page, and actually try to solve it.\n\u2022 Hardcoding is only allowed if the sequence is finite. Please note that the answer that prompted this (#40) is the exception to the rule. A few answers early in the chain hardcode, but these can be ignored, as there is no good in deleting the chain up to, say, #100.
\u2022 Comments are not for extended discussion; this conversation has been moved to chat. \u2013\u00a0Dennis Oct 31 '17 at 2:49\n\u2022 Is it OK if a program would need a better floating-point accuracy for the builtin float\/double type in order to produce values for larger n? \u2013\u00a0NieDzejkob Nov 21 '17 at 15:15\n\u2022 @Giuseppe No, as you're generating the numbers by doing the maths, rather than just placing them into an array\/string \u2013\u00a0caird coinheringaahing Dec 15 '17 at 22:14\n\u2022 @cairdcoinheringaahing In my opinion that's hardcoding the gamma constant. It doesn't work \"in theory\" for larger numbers. \u2013\u00a0user202729 Dec 22 '17 at 12:44\n\u2022 Chat room \u2013\u00a0user202729 Dec 22 '17 at 12:45\n\n# 209. 
APL (Dyalog), 100 bytes, A000075\n\n{\ndims \u2190 1 + 2 * \u2375\ntable \u2190 \u2218.{+\/2 3\u00d7\u237a \u2375*2}\u2368 \u2373dims\nuniq \u2190 \u222a (dims*2)\u2374 table\n+\/ (\u00d7\u00d7(2*\u2375)\u2265\u22a2) uniq\n}\n\nusing \u2395IO\u21900.\n\nTry it online!\n\nNext sequence\n\n\u2022 Ugh. I was about to test this. I think this is the theme of this challenge. \u2013\u00a0NieDzejkob Oct 10 '17 at 12:48\n\u2022 @NieDzejkob yea, was hit the same way a few times. they should have set some way to reserve spots \u2013\u00a0Uriel Oct 10 '17 at 12:49\n\u2022 Yeah, I've seen some other [answer-chaining] question do that. \u2013\u00a0NieDzejkob Oct 10 '17 at 12:53\n\n# 213. Befunge-98 (PyFunge), 117 bytes, A000079\n\nv v2<\n* \\\n\\ - did you know that comments in befunge start and end with nothing?\n1\n>&1\\>:|\n>$.@\n\nTry it online!\n\nNext sequence!\n\n\u2022 Wait, I thought it was one of these selfreferential sequences like \"OEIS sequences that contain their own index number\" \u2013 NieDzejkob Oct 10 '17 at 13:57\n\u2022 lol no meta sequences pls \u2013 totallyhuman Oct 10 '17 at 13:58\n\n# 214. Julia 0.4, 383 bytes, A000117\n\nBecause of sequence relationships, I've implemented A000013 four times now.\n\nfunction EulerPhi(n)\nx = 0\nfor i = 1:n\nif gcd(i,n) == 1\nx = x + 1\nend\nend\nreturn x\nend\n\nfunction A000013(n)\nif n <= 1\nreturn 1\nend\nx = 0\nfor d = 1:n\nif n\/d == n\u00f7d\nx = x + (EulerPhi(2*d) * 2^(n\/d))\/(2*n)\nend\nend\nreturn x\nend\n\nfunction A000011(n)\nreturn (A000013(n) + 2^(n\u00f72))\/2\nend\n\nfunction A000117(n)\nreturn A000011(2*n)\nend\n\nNext Sequence\n\nTry it online!\n\n\u2022 ...Dammit, all these n-nacci sequences and I can't answer. ;-; 37 minutes left... \u2013 totallyhuman Oct 10 '17 at 14:14\n\u2022 @icrieverytim I was well on my way with a Funciton tetranacci when you posted yours. Serves you right. \u2013 KSmarts Oct 10 '17 at 14:19\n\n# 212. JavaScript (SpiderMonkey), 79 bytes, A000078\n\na = n => n < 3 ? 0 : n == 3 ?
1 : a(--n) + a(--n) + a(--n) + a(--n) \/\/ balloon\n\nTry it online!\n\nNext sequence.\n\n\u2022 @Mr.Xcoder he copypasted the codegolf submission from TIO to edit it later. Common practice if you don't want to get ninjad \u2013 NieDzejkob Oct 10 '17 at 13:50\n\u2022 Give a guy some time. :P \u2013 totallyhuman Oct 10 '17 at 13:50\n\u2022 I was just looking at the code snippet, and it looks like Groovy has already been used twice. \u2013 KSmarts Oct 10 '17 at 19:05\n\u2022 @KSmarts Oops... \u2013 totallyhuman Oct 10 '17 at 19:13\n\n# 219. Maxima, 127 bytes, A000702\n\nload(\"ratpow\")$\np(n):=num_partitions(n);\nq(n):=ratp_dense_coeffs( product(1+x^(2*k-1),k,1,n) ,x)[n];\na(n):=(p(n+2)+3*q(n+3))\/2;\n\nNext Sequence\n\nTry it online!\n\nThis uses the first formula on the OEIS page, with a built-in for p and the generating function for q. It's basically the same as the Mathematica code.\n\n# 220. Visual Basic .NET (.NET Core), 107 bytes, A000127\n\nFunction A000172(n As Integer) As Integer\nn += 1\nReturn(n^4 - 6*n^3 + 23*n^2 - 18*n + 24) \/ 24\nEnd Function\n\nTry it online!\n\nNext Sequence\n\nNote that the implementation of VB.NET on TIO does not have BigIntegers (I tried it and it wants a .dll), so I am relying on the rule: \"You can assume that neither the input nor the required output will be outside your languages numerical range, but please don't abuse this by choosing a language that can only use the number 1, for example.\"\n\n\u2022 Maple and Mathematica for the next sequence seem to be using a formula not mentioned in the formula section. \u2013\u00a0NieDzejkob Oct 11 '17 at 14:52\n\u2022 Why do so many of the sequences' formula sections say G.f. A(x) = Sum_{n>=1} a(n)*x^n = x \/ Product_{n>=1} (1-x^n)^a(n)? That's a definition of the generating function. It's not helpful. \u2013\u00a0KSmarts Oct 11 '17 at 15:00\n\u2022 @KSmarts The first equation does nothing but implicitly saying that the first value is zero and introducing the name a, but the second is unique to this sequence. 
It allows to incrementally compute the series. If you see it many times maybe that is because A000081 appears so often... \u2013\u00a0Christian Sievers Oct 11 '17 at 17:38\n\n# 234. Rust, 512 bytes, A000141\n\nfn a(n: i64) -> i64 {\nlet mut ret = 0;\n\nfor a in !n..-!n {\nfor b in !n..-!n {\nfor c in !n..-!n {\nfor d in !n..-!n {\nfor e in !n..-!n {\nfor f in !n..-!n {\nif a * a + b * b + c * c + d * d + e * e + f * f == n {\nret += 1;\n}\n}\n}\n}\n}\n}\n}\n\nreturn ret;\n}\n\n\nTry it online!\n\nNext sequence.\n\nNice byte count, eh?\n\n\u2022 I know how to code the next sequence, but idk what language to do it in :( \u2013\u00a0Husnain Raza Oct 23 '17 at 12:08\n\u2022 @HusnainRaza I posted a bruteforce solution with a broken equivalence checker in Squirrel (the embeddability of Lua, typing of Python and the syntax of C) in The Nineteenth Byte, feel free to salvage stuff. \u2013\u00a0NieDzejkob Oct 23 '17 at 12:31\n\u2022 @HusnainRaza I have discovered a truly marvelous demonstration of this proposition that this margin is too narrow to contain. \u2013\u00a0KSmarts Oct 23 '17 at 14:14\n\u2022 @NieDzejkob But the equivalence checking is the hard part, isn't it? \u2013\u00a0KSmarts Oct 23 '17 at 14:17\n\u2022 @KSmarts yes, but the point is an easy to use language with the boilerplate serving at least as a good syntax example. \u2013\u00a0NieDzejkob Oct 23 '17 at 14:37\n\n# 238. LOLCODE, 429 bytes, A000089\n\nHAI 1.2\n\nI HAS A INPUT\nGIMMEH INPUT\nINPUT IS NOW A NUMBR\nINPUT R SUM OF INPUT AN 1\n\nI HAS A COUNT ITZ 0\nI HAS A INDEX ITZ 1\nI HAS A TEMP ITZ 0\n\nIM IN YR LOOP\nTEMP R PRODUKT OF INDEX AN INDEX\nTEMP R SUM OF TEMP AN 1\nTEMP R MOD OF TEMP AN INPUT\n\nBOTH SAEM TEMP AN 0\nO RLY?\nYA RLY\nCOUNT R SUM OF COUNT AN 1\nOIC\n\nBOTH SAEM INDEX AN INPUT\nO RLY?\nYA RLY\nGTFO\nOIC\n\nINDEX R SUM OF INDEX AN 1\nIM OUTTA YR LOOP\n\nVISIBLE COUNT\nKTHXBYE\n\n\nNext Sequence\n\nTry it online!\n\nThis just directly counts the solutions. 
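For reference, the direct count that the LOLCODE program performs — shift the 0-indexed input up by one, then count the x in 1..n with x*x+1 divisible by n — can be sketched in a few lines of Python (a hypothetical illustration only, not an entry in the chain, since the chain requires an unused language):

```python
# Hypothetical transcription of answer #238's approach (not a chain entry):
# A000089 counts the solutions of x^2 + 1 == 0 (mod n).
def a000089(n):
    n += 1  # the challenge uses 0-indexing, so shift to the sequence's n
    # count x in 1..n whose square plus one is divisible by n
    return sum(1 for x in range(1, n + 1) if (x * x + 1) % n == 0)
```

With the challenge's 0-indexing, a000089(0) is the term for n = 1.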
None of that Legendre symbol nonsense.\n\n\u2022 Mathematics is never nonsense. \u2013\u00a0user202729 Oct 27 '17 at 15:10\n\u2022 @user202729 That code looks like nonsense to me :P \u2013\u00a0caird coinheringaahing Oct 27 '17 at 15:10\n\u2022 @user202729 I just meant that it seems to over-complicate the problem. Finding the divisors of n and computing the product of a modified Legendre symbol of those divisors is not simpler than just counting, imo. \u2013\u00a0KSmarts Oct 27 '17 at 15:32\n\u2022 @user202729 Actually... \u2013\u00a0KSmarts Oct 27 '17 at 16:33\n\n# 240. CHICKEN Scheme, 256 bytes, A000346\n\n(define (factorial x)\n(cond\n((< x 2) 1)\n(else (* x (factorial (- x 1))))))\n(define (choose n k)\n(cond\n((= n 0) 1)(else\n(\/ (factorial n) (* (factorial k) (factorial (- n k)))))))\n(define (a000346 n)\n(- (expt 2 (+ 1 (* 2 n))) (choose (+ 1 (* n 2)) (+ n 1))))\n\n\nTry it online!\n\nNext sequence\n\n\u2022 2*>U2XmX\u00b9>c- crying +1 though lol, despite making mine no longer matter. \u2013\u00a0Magic Octopus Urn Oct 27 '17 at 19:18\n\u2022 Nice bytecount! \u2013\u00a0NieDzejkob Oct 27 '17 at 19:19\n\u2022 I hope I will finish my Trefunge program before someone else finds an easier language for this sequence... \u2013\u00a0NieDzejkob Oct 27 '17 at 20:39\n\u2022 @NieDzejkob I found a bunch of easier languages. Go to tio.run, deselect \"recreational,\" and you'll get a whole list of them! \u2013\u00a0KSmarts Oct 27 '17 at 23:08\n\u2022 @KSmarts actually, this doesn't work. Haskell still shows up on the list. \u2013\u00a0NieDzejkob Oct 28 '17 at 11:12\n\n# 153. 
Emojicode, 1344 bytes, A000136\n\n\ud83d\udc0b\ud83c\udf68\ud83c\udf47\n\ud83d\udc74 Find the index of e in the list by comparing the elements with c\n\ud83d\udc16\ud83d\udd0ee Element c\ud83c\udf47Element Element\u27a1\ud83d\udc4c\ud83c\udf49\u27a1\ud83c\udf6c\ud83d\ude82\ud83c\udf47\n\ud83d\udd02i\u23e90\ud83d\udc14\ud83d\udc15\ud83c\udf47 \ud83c\udf4a\ud83c\udf6dc e\ud83c\udf7a\ud83d\udc3d\ud83d\udc15i\ud83c\udf47\ud83c\udf4ei\ud83c\udf49\ud83c\udf49\n\ud83c\udf4e\u26a1\n\ud83c\udf49\n\ud83c\udf49\n\n\ud83d\udc07\u262f\ud83c\udf47\n\ud83d\udc74 Checks whether a permutation is a possible way to fold stamps\n\ud83d\udc07\ud83d\udc16\ud83c\udd71p\ud83c\udf68\ud83d\udc1a\ud83d\ude82\u27a1\ud83d\udc4c\ud83c\udf47\n\ud83d\udd02k\u23e90\u2796\ud83d\udc14p 1\ud83c\udf47\n\ud83d\udd02r\u23ed\ud83d\udeaek 2\u2796\ud83d\udc14p 1 2\ud83c\udf47\n\ud83c\udf66c\ud83c\udf47a\ud83d\ude82b\ud83d\ude82\u27a1\ud83d\udc4c\ud83c\udf4e\ud83d\ude1ba b\ud83c\udf49\n\ud83c\udf66A\ud83c\udf7a\ud83d\udd0ep k c\n\ud83c\udf66B\ud83c\udf7a\ud83d\udd0ep r c\n\ud83c\udf66C\ud83c\udf7a\ud83d\udd0ep\u2795k 1 c\n\ud83c\udf66D\ud83c\udf7a\ud83d\udd0ep\u2795r 1 c\n\ud83c\udf4a\ud83c\udf89\ud83c\udf89\ud83c\udf89\ud83c\udf8a\ud83c\udf8a\u25c0A B\u25c0B C\u25c0C D\ud83c\udf8a\ud83c\udf8a\u25c0B C\u25c0C D\u25c0D A\ud83c\udf8a\ud83c\udf8a\u25c0C D\u25c0D A\u25c0A B\ud83c\udf8a\ud83c\udf8a\u25c0D A\u25c0A B\u25c0B C\ud83c\udf47\ud83c\udf4e\ud83d\udc4e\ud83c\udf49\n\ud83c\udf49\n\ud83c\udf49\n\ud83c\udf4e\ud83d\udc4d\n\ud83c\udf49\n\n\ud83d\udc74 Iterates over the permutations\n\ud83d\udc07\ud83d\udc16\ud83c\udd70n\ud83d\ude82\u27a1\ud83d\ude82\ud83c\udf47\n\ud83c\udf66a\ud83d\udd37\ud83c\udf68\ud83d\udc1a\ud83d\ude82\ud83d\udc27n\n\ud83c\udf66c\ud83d\udd37\ud83c\udf68\ud83d\udc1a\ud83d\ude82\ud83d\udc27n\n\ud83d\udd02i\u23e90 n\ud83c\udf47\n\ud83d\udc37a i i\n\ud83d\udc37c i 0\n\ud83c\udf49\n\n\ud83c\udf6ei 0\n\ud83c\udf6er 1\n\ud83d\udd01\u25c0i 
n\ud83c\udf47\n\ud83c\udf4a\u25c0\ud83c\udf7a\ud83d\udc3dc i i\ud83c\udf47\n\ud83c\udf6ej 0\n\ud83c\udf4a\ud83d\ude1b\ud83d\udeaei 2 1\ud83c\udf47\ud83c\udf6ej\ud83c\udf7a\ud83d\udc3dc i\ud83c\udf49\n\ud83c\udf66t\ud83c\udf7a\ud83d\udc3da j\n\ud83d\udc37a j\ud83c\udf7a\ud83d\udc3da i\n\ud83d\udc37a i t\n\ud83c\udf4a\ud83c\udf69\ud83c\udd71\u262fa\ud83c\udf47\ud83c\udf6e\u2795r 1\ud83c\udf49\n\ud83d\udc37c i\u27951\ud83c\udf7a\ud83d\udc3dc i\n\ud83c\udf6ei 0\n\ud83c\udf49\ud83c\udf53\ud83c\udf47\n\ud83d\udc37c i 0\n\ud83c\udf6e\u2795i 1\n\ud83c\udf49\n\ud83c\udf49\n\ud83c\udf4er\n\ud83c\udf49\n\ud83c\udf49\n\n\ud83c\udfc1\ud83c\udf47\n\ud83c\udf4a\ud83c\udf66i\ud83d\ude82\ud83d\udd37\ud83d\udd21\ud83d\ude2f\ud83d\udd24\ud83d\udd2410\ud83c\udf47\n\ud83d\ude00\ud83d\udd21\ud83c\udf69\ud83c\udd70\u262f\u27951 i 10\n\ud83c\udf49\n\ud83c\udf49\n\n\nTry it online!\n\nNext sequence!\n\nI was going to use something esoteric, but then I changed my mind. Pretty self explanatory, I don't think an explanation is necessary.\n\nNote: for inputs larger than 7, you have to specify a heap size larger than 512 MB during the compilation of the Real-Time Engine. For example, use cmake .. -DheapSize=1073741824 for 1 GB. If you run out of heap space, the VM segfaults, probably because someone does not check for NULL after malloc.\n\nEdit: make this faster and less memory-hungry. Now I wonder whether the garbage collector is working properly\n\nEdit almost 2 months later: this code had to work around a bug in the implementation of the \u23e9 type. Now that this bug is fixed and TIO updated, I had to offset the range parameters again. I also managed to add some comments, all while keeping the bytecount the same.\n\n\u2022 It segfaults for n=7? oookay. \u2013\u00a0Giuseppe Sep 12 '17 at 0:10\n\u2022 ... which makes this answer invalid. \u2013\u00a0pppery Sep 12 '17 at 1:26\n\u2022 @ppperry \"fixed\" \u2013\u00a0NieDzejkob Sep 12 '17 at 5:36\n\u2022 This isn't at all self-explanatory. 
\u2013\u00a0Peter Taylor Sep 12 '17 at 10:31\n\u2022 @INCOMING Emoji (the stack based language) is definitely esoteric, but Emojicode is object oriented and stuff. Believe it or not, but there is real software written in Emojicode, ~~in contrast to Haskell~~. \u2013\u00a0NieDzejkob Sep 12 '17 at 12:44\n\n# 249. PicoLisp, 171 bytes, A000904\n\n(de a000904 (n)\n(cond\n((= n 0) 0)\n((= n 1) 3)\n((= n 2) 13)\n(T\n(+ (* (+ n 2) (a000904 (- n 1)))\n(* (+ n 3) (a000904 (- n 2)))\n(a000904 (- n 3)))\n)\n)\n)\n\n\nTry it online!\n\nnext sequence\n\nWoo! eleventh page!\n\n\u2022 Eleventh? I don't see deleted posts (except my own) and I'm on the 9th... \u2013\u00a0NieDzejkob Nov 7 '17 at 19:02\n\u2022 @NieDzejkob yeah, the eleventh page if you include all deleted posts. \u2013\u00a0Giuseppe Nov 7 '17 at 19:37\n\u2022 \"Eleventh page\" is not as important as \"301 answers\". If the 301 number is not counting deleted posts, then \"all languages available again\". I was actually quite disappointed to see that 301 is counting-deleted. \u2013\u00a0user202729 Nov 8 '17 at 11:07\n\u2022 Yay! No formula! \u2013\u00a0user202729 Nov 8 '17 at 11:08\n\u2022 @user202729 From Wolfram Mathworld: \"The number of self-complementary graphs on n nodes can be obtained from the \"graph polynomial\" P_n(x) derived form P\u00f3lya's enumeration theorem used to enumerate the numbers of simple graphs as P_n(-1).\" This is the method that the given Mathematica code uses, because of course Mathematica has a built-in for that. \u2013\u00a0KSmarts Nov 8 '17 at 14:28\n\n# 157. C (gcc), 376 bytes, A002466\n\nint f(int n)\n{\nint i = 0, s[100] = {0};\ns[0] = 1;\ns[1] = 1;\ns[2] = 2;\n\nwhile (i < 97) {\nif ((i % 5) == 3) {\ns[3+i] = s[i] + s[2+i];\n}\nelse if ((i % 5) == 4) {\ns[3+i] = s[1+i] + s[2+i];\n}\nelse {\ns[3+i] = s[i] + s[1+i] + s[2+i];\n}\ni++;\n}\n\nreturn s[n];\n}\n\n\nIt's like an obnoxious Fibonacci! 
0-indexed.\n\nNext sequence\n\nTry it online!\n\nThe function for this sequence said this:\n\na(1) = a(2) = 1, a(3) = 2, a(5*k+2) = a(5*k+1) + a(5*k-1), a(5*k+3) = a(5*k+2) + a(5*k+1), a(5*k+b) = a(5*k+b-1) + a(5*k+b-2) + a(5*k+b-3) for b=-1,0,1\n\n\nIt's completely unhelpful. However, I noticed that the integers in the sequence were usually the previous three terms added together, but more obnoxious. Here's my work:\n\n1. 1, 1, 2 -> 4 (ok)\n2. 1, 2, 4 -> 7 (ok)\n3. 2, 4, 7 -> 13 (ok)\n4. 4, 7, 13 -> 17 (no) sum does not include second value\n5. 7, 13, 17 -> 30 (no) sum does not include first value\n6. 13, 17, 30 -> 60 (ok)\n7. 17, 30, 60 -> 107 (ok)\n8. 30, 60, 107 -> 197 (ok)\n9. 60, 107, 197 -> 257 (no) sum does not include second value\n10. 107, 197, 257 -> 454 (no) sum does not include first value\n11. 197, 257, 454 -> 908 (ok)\n12. 257, 454, 908 -> 1619 (ok)\n13. 454, 908, 1619 -> 2981 (ok)\n14. 908, 1619, 2981 -> 3889 (no) sum does not include second value\n15. 1619, 2981, 3889 -> 6870 (no) sum does not include first value\n\n\nSo for each five terms, the last two would be calculated differently. The modulo's and if-statements in the while loop of the code handle that.\n\n\u2022 Isn't next-seq a PPCG challenge lol \u2013\u00a0HyperNeutrino Sep 18 '17 at 23:54\n\u2022 It's a casewise formula. E.g. a(5*k+2) = a(5*k+1) + a(5*k-1) says that if n % 5 == 2 then a(n) = a(n-1) + a(n-3). \u2013\u00a0Peter Taylor Sep 19 '17 at 6:56\n\u2022 And this code doesn't calculate it correctly. Input 10 should give 197 and gives 137. \u2013\u00a0Peter Taylor Sep 19 '17 at 7:00\n\u2022 Guessing the formula was a very bad idea... should I edit to correct this or make it a new answer? After all, the order is already ruined... @PeterTaylor what do you think? \u2013\u00a0NieDzejkob Sep 19 '17 at 8:45\n\u2022 There's also the problem of segfaulting for n > 100 - keep only the last 3 numbers or allocate this array dynamically \u2013\u00a0NieDzejkob Sep 19 '17 at 9:38\n\n# 255. 
Perl 5, 454 bytes, A000098\n\nsub part {\nmy $S;\nif ($_[1]==0) { $S = 1 }\nelsif ($_[1] < $_[0]) { $S = 0 }\nelse { $S = part($_[0],$_[1]-$_[0])+part($_[0]+1,$_[1]) }\n$S;\n}\n\nsub partsum {\nmy @a = (0..$_[0]);\nmy $S = 0;\nfor my $i (@a) {\n$S += part(1,$i);\n}\n$S;\n}\n\nsub A000097 {\nmy @a = (0..$_[0]\/\/2);\nmy $S = 0;\nfor my $i (@a) {\n$S += partsum($_[0]-2*$i);\n}\n$S;\n}\n\nsub A000098 {\nmy @a = (0..$_[0]\/\/3);\nmy $S = 0;\nfor my $i (@a) {\n$S += A000097($_[0]-3*$i);\n}\n$S;\n}\n\nNext Sequence\n\nTry it online!\n\nThere's a lot of redundant subroutine calls, so memoization would speed this up a lot.\n\n# 257. TI-NSpire CX Basic, 144 bytes, A000562\n\nDefine a562(n)=\nFunc\n:If n=4 Then\n: Return 9\n:Else\n: Return ((27)\/(8))*n^(4)-((135)\/(4))*n^(3)+((921)\/(8))*n^(2)-((539)\/(4))*n\n:EndIf\n:EndFunc\n\nNext sequence\n\nUses Plouffe's conjecture mentioned in the OEIS entry; note that this is different than TI-Nspire CAS Basic, as they are two different models of calculator. Next sequence hopefully is easy. This allows the language to be used again before the 300th.\n\n\u2022 Ugh, the next sequence is gross. \u2013 Engineer Toast Nov 20 '17 at 21:25\n\u2022 What if the conjecture is false? Don't you need to prove it? \u2013 user202729 Nov 21 '17 at 9:55\n\u2022 The formula a(n) = 27\/8n^4 - 135\/4n^3 + 921\/8n^2 - 539\/4n, n>4 does not seem to be labeled as a conjecture. \u2013 NieDzejkob Nov 21 '17 at 10:39\n\u2022 @user202729 alternatively, he can verify the results for n up to a thousand. \u2013 NieDzejkob Nov 21 '17 at 10:41\n\n# 263. Visual Basic .NET (.NET Core), 493 bytes, A000091\n\nIt is a multiplicative function. Using a simple prime checking function. Now VB .NET was used twice. 
Function IsPrime(p As Integer) As Boolean\nFor i = 2 To p - 1\nIf p Mod i = 0 Then Return False\nNext\nReturn True\nEnd Function\n\nFunction A000091(n As Integer) As Integer\nn += 1\nIf n = 1 Then Return 1\nIf n Mod 9 = 0 Then Return 0\nDim ans As Integer = 1\nIf n Mod 2 = 0 Then ans *= 2\nIf n Mod 3 = 0 Then ans *= 2\nWhile n Mod 2 = 0\nn \/= 2\nEnd While\nFor i = 5 To n\nIf IsPrime(i) And n Mod i = 0 Then\nIf i Mod 3 = 1 Then ans *= 2 Else Return 0\nEnd If\nNext\nReturn ans\nEnd Function\n\nTry it online!\n\nNext sequence\n\nI chose a simple sequence. The smallest unused sequence until now is A000099.\n\n\u2022 As far as I can tell, A000064 has been used \u2013 caird coinheringaahing Nov 24 '17 at 17:21\n\u2022 @cairdcoinheringaahing Thanks, fixed. (I sorted the snippet by sequence number and there are no A000064. Did something get wrong?) \u2013 Colera Su Nov 24 '17 at 23:43\n\n# 215. MATL, 95 bytes, A000383\n\n'this is just a random string for padding aagghhhh need at least 80 chars!'xT6Y\"i:\"t-5:0)sh]GQ)\n\nTry it online!\n\nNext sequence\n\n'...'x string and delete from stack\nT6Y\" push 6 ones\ni:\" iterate user input number of times\nt-5:0)sh dup, index last 6, sum, horzcat\n] end loop\nGQ) copy user input, increment, index (return)\n\n\u2022 Sorry @NieDzejkob :( but you couldn't have answered anyway, your hour wasn't up! \u2013 Giuseppe Oct 10 '17 at 14:41\n\u2022 Yeah, right. And you ninjad me by 4 minutes anyway. I just didn't notice your answer and then after posting mine, the page reloaded. \u2013 NieDzejkob Oct 10 '17 at 14:44\n\n# 264. PowerShell, 102 bytes, A000493\n\nfunction A000493($n) {return [math]::floor([math]::sin($n))}\n\nTry it online!\n\nNext sequence.\n\n\u2022 We've already had 64 \u2013\u00a0caird coinheringaahing Nov 24 '17 at 17:20\n\u2022 ...I looked at the last answer and it said 64. >.> Padding, hang on. \u2013\u00a0totallyhuman Nov 24 '17 at 17:21\n\u2022 Actually, there was 95. 
However, it seems that the MATL submission missed the sequence number in the title, so it isn't shown in the snippet. \u2013\u00a0Colera Su Nov 24 '17 at 23:46\n\u2022 @totallyhuman well, we got the shit together :P \u2013\u00a0caird coinheringaahing Nov 25 '17 at 0:12\n\u2022 I just noticed that the bytecount of this answer is a palindrome of the bytecount of the other PowerShell answer :) \u2013\u00a0NieDzejkob Jan 3 '18 at 14:55\n\n# 270. Husk, 103 bytes, A000118\n\n\u2190\u2192-1+1-1+1-1+1-1+1-1+1-1+1-1+1-1+1-1+1-1+1-1+1-1+1-1+1-1+1-1+1-1+1+*1=\u20700*\u00b1\u2070\u2260*8\u03a3u\u2020\u03a0\u1e56p\u2070*<1%4\u2070*32\u03a3u\u2020\u03a0\u1e56p\/4\u2070\n\n\nTry it online!\n\nNext Sequence.\n\nIgnore the madness that precedes the actual code, it\u2019s just fluff to make this 103 bytes.\n\n### How it works\n\nHusk does not have a \u201cdivisors\u201d built-in, so (\u1e0a has been implemented) this was actually fun to solve.\n\n\u2022 First off, it computes the divisors of N using prime factorization (prime factors, powerset, product of each, deduplicate) and sum them up. Then, it multiplies the result by 8.\n\n\u2022 After that, it computes the sum of N \/ 4\u2019s divisors, and multiplies it by 32 if N is divisible by 4, or evaluates the whole thing to 0 otherwise.\n\n\u2022 Takes the absolute difference between the above terms.\n\n\u2022 For handling the special case of 0, this just multiplies our result by the sign of N, and adds 1 iff N is 0.\n\nTo put it simply, it calculates (N==0 ? 1 : 0)+sgn(N)(8\u03c3(N)+32\u03c3(N\/4)(N%4==0 ? 1:0)).\n\n\u2022 I was just about to post a solution in Actually... \u2013\u00a0NieDzejkob Nov 30 '17 at 22:00\n\u2022 @NieDzejkob Oh sorry for ninja\u2019ing again. At least it was nonetrivial for me, Husk doesn\u2019t have a divisor built-in so... \u2013\u00a0Mr. Xcoder Nov 30 '17 at 22:02\n\u2022 \u1e0a is divisors in Husk \u2013\u00a0H.PWiz Nov 30 '17 at 23:55\n\u2022 @H.PWiz It has been added recently right? \u2013\u00a0Mr. 
Xcoder Dec 1 '17 at 6:12\n\u2022 @HusnainRaza When I said that I will solve it, I meant I will solve it. Don't push me. \u2013\u00a0user202729 Dec 7 '17 at 13:46\n\n# 135. Husk, 29 bytes, A000277\n\n*1*1*1*1*1*1*1+5-*2\u230a\u221a+5*4\u2070*3\u2070\n\nTry it online!\n\nNext Sequence.\n\n# 144. Chez Scheme, 68 bytes, A000120\n\n(define(e n)(cond((< n 1)0)((+(e(fx\/ n 2))(remainder n 2)))))\n\nTry it online!\n\nNext sequence\n\n\u2022 +100 rep if someone does the next one in Beatnik \u2013\u00a0NieDzejkob Sep 7 '17 at 16:22\n\n# 273. Ohm v2, 123 bytes, A000129\n\n2\u00ac1+\u00b3\u207f1 2\u00ac-\u00b3\u207f-2\u00acd\/\u00a6\n\nTry it online! (note the trailing spaces)\n\nNext Sequence.\n\nI didn't want to nearly-kill the challenge again, so I chose A000123, which isn't that hard.\n\nImplements a(n) = ((1+\u221a2)^n - (1-\u221a2)^n) \/ (2\u221a2), rounded to the nearest integer (because there are small floating-point inaccuracies).\n\n\u2022 I was working on 6502 asm for 272 when this was posted \u2013\u00a0Tornado547 Dec 7 '17 at 20:22\n\u2022 @Tornado547 I feel ya. It's tough to get an answer in when the sequences are so simple, but you'll have lots of chances! \u2013\u00a0Giuseppe Dec 7 '17 at 20:38\n\n# 276. MATLAB, 188 bytes, A000113\n\nn=input('')+1;\nF=factor(n);\npsi = @(x) x*prod(1+1.\/unique(F))-(x<2);\ni=sum(F==2);\nj=sum(F==3);\nif i > 5\ni = 3;\nelse\ni = floor(i.\/2);\nend\nif j > 1\nj = 1;\nelse\nj = 0;\nend\npsi(n) .\/ (2^i*3^j)\n\nTry it online!\n\nNext Sequence\n\nAs this is a carefully written polyglot with Octave, it will run on TIO.\n\nImplements the formula with Dedekind's psi function listed in the notes.\n\nOddly, there's a dead link for transformation groups on this OEIS wiki page and I'm not entirely sure how to count transformation groups!\n\n# 266. 
UCBLogo, 2572 bytes, A000636\n\n; ------ helper function ------ (unrelated to the problem)\n\n; minimum of two numbers\nto min :x :y\noutput ifelse :x<:y [:x] [:y]\nend\n\n; clone an array\nto clone :arr\nlocalmake \"ans (array (count :arr) (first :arr))\nlocalmake \"offset -1+first :arr\nrepeat count :arr [\nsetitem (#+:offset) :ans (item (#+:offset) :arr)\n]\noutput :ans\nend\n\n; ------ coefficient list manipulators ------\n\n; apply (map binary function)\nto ap :func :x :y\nlocalmake \"ans arz min count :x count :y\nfor [i 0 [-1+count :ans]] [\nsetitem :i :ans (invoke :func item :i :x item :i :y)\n]\noutput :ans\nend\n\n; zero-indexing zero array : f(x) = 0\nto arz :n\nlocalmake \"ans (array :n 0)\nrepeat :n [setitem #-1 :ans 0]\noutput :ans\nend\n\n; polynomial multiplication\nto convolve :x :y [:newsize (count :x)+(count :y)-1]\nlocalmake \"ans arz :newsize\nfor [i 0 [-1+count :x]] [\nfor [j 0 [min -1+count :y :newsize-:i-1] 1] [\nsetitem :i+:j :ans (item :i+:j :ans) + (item :i :x) * (item :j :y)\n]\n]\noutput :ans\nend\n\n; given arr = coefficients of f(x), calculate factor * f(x^n)\nto extend :arr :n [:factor 1]\nlocalmake \"ans arz (-1+count :arr)*:n+1\nrepeat count :arr [\nsetitem (#-1)*:n :ans (:factor*item #-1 :arr)\n]\noutput :ans\nend\n\n; calculate :const + :factor * x * p(x)\nto shift :p :factor :const [:size 1+count :p]\nlocalmake \"ans (array :size 0)\nsetitem 0 :ans :const ; 1+...\nrepeat :size-1 [\nsetitem # :ans (item #-1 :p)*:factor\n]\noutput :ans\n\nend\n\n; calculate multiplication inverse (1\/p(x))\nto inverse :p [:n (count :p)]\nlocalmake \"one arz :n\nsetitem 0 :one 1\n\nlocalmake \"p_1 clone :p\nsetitem 0 :p_1 (-1+item 0 :p_1) ; p_1(x) = p(x) - 1\n\nlocalmake \"q (array 1 0)\nsetitem 0 :q (1\/item 0 :p) ; q(x) = 1\/p(0) (coefficient of x^0)\n\nrepeat :n [\nmake \"q ap \"difference :one (convolve :p_1 :q #)\n]\n\noutput :q\nend\n\n; ------ define functions ------\n\n; calculate r(x) first n coefficients\nto r :n\nlocalmake \"ans 
{1}@0\nrepeat :n [\nmake \"ans (shift (ap \"sum ap \"sum\nconvolve convolve :ans :ans :ans ; r[x]^3\nconvolve :ans (extend :ans 2 3) ; 3*r[x]*r[x^2]\n(extend :ans 3 2) ; 2 r[x^3]\n) 1\/6 1 #)\n]\noutput :ans\nend\n\n; calculate R(x) first n coefficients\nto BigR :n\nlocalmake \"rn r :n\noutput (extend\nap \"sum\nconvolve :rn :rn ; r[x]^2\nextend :rn 2 ; r[x^2]\n1 0.5) ; \/2\nend\n\n; main function\nto main :n\nlocalmake \"Rx bigR :n+1\nlocalmake \"inv_1_xRx inverse shift :Rx -1 1 ; 1 - x*R[x]\noutput item :n+1 (extend (ap \"sum\n:inv_1_xRx\nconvolve\n(shift :Rx 1 1) ; 1 + x*R[x]\n(extend :inv_1_xRx 2) ; 1\/(1 - x^2 * R[x^2])\n) 1 0.5)\nend\n\n\nTry it online! (at Anarchy golf performance checker)\n\nPaste the code there, and then append print main 10 at the end.\n\nAlthough Logo has been used twice, UCBLogo only once and FMSLogo only once. In other word, programming languages that has multiple versions tend to have more advantage in this challenge. Next time it will probably be Elica.\n\nNext sequence.\n\n\u2022 Probably the next answer is \"using the recurrence formula in the Mathematica code\". I don't like that, if anyone do that please also include formula proof. \u2013\u00a0user202729 Nov 30 '17 at 15:19\n\u2022 Sorry for \"just using the formula\"... but I promise I will add mathematical explanation. \u2013\u00a0user202729 Nov 30 '17 at 16:00\n\n# 275. 
Whispers, 113 bytes, A001021\n\n> Input\n> 12\n>> 2*1\n>> Output 3\nRather than add a bunch of spaces,\nI think it'd be better to actually have\nwords.\n\n\nTry it online!\n\nNext sequence\n\n## How it works\n\nThe parser automatically removes lines that don't match any of the valid regexes, meaning that the code that actually gets executes is\n\n> Input\n> 12\n>> 2*1\n>> Output 3\n\n\nWhich is executed non-linearly way (as most Whispers programs are), and runs as follows:\n\nOutput ((12) * (Input))\n\n\nWhere * is exponentiation (raising x to the power y)\n\n\u2022 What are the rules on using languages newer than the challenge? \u2013\u00a0KSmarts Dec 7 '17 at 20:28\n\u2022 @KSmarts Just so long as it isn't specifically designed for this challenge (or violate any other standard loophole), it's the same as posting an answer with a newer language on any other question. \u2013\u00a0caird coinheringaahing Dec 7 '17 at 21:01\n\u2022 What is this sorcery? \u2013\u00a0NieDzejkob Dec 7 '17 at 21:29\n\u2022 Does this still work if you change acually to actually and remove the duplicate !? \u2013\u00a0Esolanging Fruit Dec 8 '17 at 4:17\n\u2022 This language reminds me of LLVM IR... \u2013\u00a0NieDzejkob Dec 9 '17 at 14:22\n\n# 279. 
Fortran (GFortran), 2852 bytes, A005692\n\nprogram beans\n\ninteger :: n, first_move, last_move, new_move1, new_move2, turn_count, win_accomplished, cycleq\n\ninteger, dimension(270,1000000) :: current_paths, next_paths\ninteger :: next_path_count, current_path_count, path_iter, move_iter, cycle_iter\n\nfirst_move = 2*n+1\nturn_count = 0\nwin_accomplished = 0\n\ncurrent_paths(1,1)=first_move\ncurrent_path_count=1\n\ndo while(.true.)\n\nturn_count = turn_count + 1 !current path length\n\nif (first_move == 1) then\nturn_count = 1\nexit\nend if\n\nnext_path_count = 1\n\ndo path_iter=1,current_path_count\n\ncycleq = 0\nlast_move = current_paths(turn_count,path_iter)\n!print *,'prince'\n!print *,last_move\n!print *,'prince'\n\nif (mod(last_move,2)==1) then\nif (last_move>1) then\nif (mod(turn_count,2) == 0) then\ncycle\nend if\nend if\nend if\n\nif (last_move==1) then\nif (mod(turn_count,2) == 0) then\nwin_accomplished = 1\nend if\n\nelse if (mod(last_move,2)==0) then\nnew_move1 = last_move\/2\n\ndo cycle_iter = 1,turn_count\nif (current_paths(cycle_iter,path_iter) == new_move1) then\ncycleq=1\nend if\nend do\n\nif (cycleq == 0) then\ndo cycle_iter=1,turn_count\nnext_paths(cycle_iter,next_path_count) = current_paths(cycle_iter,path_iter)\nend do\nnext_paths(turn_count+1,next_path_count) = new_move1\nnext_path_count = next_path_count + 1\nend if\n\nelse\nnew_move1 = 3*last_move + 1\nnew_move2 = 3*last_move - 1\n\ncycleq=0\ndo cycle_iter = 1,turn_count\nif (current_paths(cycle_iter,path_iter) == new_move1) then\ncycleq=1\nend if\nend do\n\nif (cycleq == 0) then\ndo cycle_iter=1,turn_count\nnext_paths(cycle_iter,next_path_count) = current_paths(cycle_iter,path_iter)\nend do\nnext_paths(turn_count+1,next_path_count) = new_move1\nnext_path_count = next_path_count + 1\nend if\n\ncycleq=0\ndo cycle_iter = 1,turn_count\nif (current_paths(cycle_iter,path_iter) == new_move2) then\ncycleq=1\nend if\nend do\n\nif (cycleq == 0) then\ndo 
cycle_iter=1,turn_count\nnext_paths(cycle_iter,next_path_count) = current_paths(cycle_iter,path_iter)\nend do\nnext_paths(turn_count+1,next_path_count) = new_move2\nnext_path_count = next_path_count + 1\nend if\n\nend if\n\nend do\n\nif (win_accomplished==1) then\nexit\nend if\n\ndo path_iter=1,next_path_count-1\ndo move_iter=1,turn_count+1\ncurrent_paths(move_iter,path_iter) = next_paths(move_iter,path_iter)\nnext_paths(move_iter,path_iter) = 0\nend do\nend do\n\ncurrent_path_count = next_path_count-1\n\nend do\n\nprint *,turn_count-1\n\nend program beans\n\n\nTry it online!\n\nNext Sequence\n\nI originally wrote a similar program in Python. It properly compiled in Pyon but I wanted to see if I could write it in Fortran first and save Pyon\/Python for harder sequences.\n\nThe program relies on the fact that if Jack moves to an odd number then the Giant gains control and will be able to use a never-lose strategy. It starts with a the Giant's fist move [n] then enumerates all the possible games until the first win is found which must be the shortest one since moves are added to each possible game pathway once per iteration of the main loop. If Jack loses in one particular game or if the Giant takes control then that particular game pathway is abandoned which helps save on memory and time.\n\nThe first term in the sequence doesn't make any sense to me since it isn't a win for Jack but I hard coded it in anyway.\n\nThe paper OEIS references helped me understand the sequence. They suggest an algorithm to us on a computer (they calculated the first 525 terms of the sequence by hand) which I didn't use because it seemed a little more difficult to implement but it would probably be faster and much less memory intensive.\n\nBecause of memory limitations, the program won't actually work on all the terms but is fine for terms up to at least 119 (n=109). Note that the two arrays declared at the top have 270x10^6 elements. 
This allows in the algorithm for 10^6 branches of possible game paths of length 270 moves. None of the first 1000 terms are bigger than 263 so this is still valid for the challenge.\n\nThe next sequence (terms in the continued fraction of Eulers constant) looks pretty hard, but maybe that's just because I don't know how to do it of the top of my head. I promise I didn't engineer my byte-count to pick this, it's just what I ended up with.\n\nEdit: It seems calculating the terms doesn't seem too hard once you have the actual constant (which is irrational [edit: apparently this is actually not known]). Here are some resources:\n\n\u2022 119 (n=109) Huh? \u2013\u00a0user202729 Dec 15 '17 at 6:06\n\u2022 NOTE There has already been a deleted answer that hardcodes. Don't do this again if you don't want downvotes. \u2013\u00a0user202729 Dec 15 '17 at 6:11\n\u2022 @user202729 Haha sorry to disappoint you. I know what a continued fraction is just didn\u2019t know how to calculate it off the top of my head. I agree the next one isn\u2019t that hard \u2013\u00a0dylnan Dec 15 '17 at 13:14\n\u2022 @Giuseppe it kind of hardcodes the sequence to get a number from which you can compute the sequence. Don't you think that's a little pointless? \u2013\u00a0NieDzejkob Dec 15 '17 at 21:22\n\u2022 numberworld.org\/y-cruncher\/internals\/formulas.html \u2013\u00a0politza Dec 16 '17 at 8:56\n\n# 283. Ohm v2, 384 bytes, A000126\n\n\u00b34+\u00fd\u00b3-2-\n\n\nTry it online!\n\nNext Sequence. (This one is trivial, the hexagonal numbers!)\n\nIf this were code golf, I could solve this in 6 bytes: 4+\u00fda\u2039\u2039.\n\nThis one was really just Fibonacci(N + 4) - N - 2.\n\n\u2022 Please let me do the next one in Hexagony! \u2013\u00a0NieDzejkob Dec 24 '17 at 14:41\n\u2022 Wait, I still need to wait 45 minutes... I'll try anyway \u2013\u00a0NieDzejkob Dec 24 '17 at 14:41\n\n# 287. 
Scratch 2, 7437 Bytes, A001118\n\nTry it online\n\nHere is the text version of the full program.\n\nBelow is the portion of the program that represents the code blocks in the image. The full text is much longer, dealing with things irrelevant to this challenge.\n\n \"scripts\": [[275.8,\n75.2,\n[[\"procDef\", \"Power %n %n\", [\"Base\", \"exponent\"], [1, 1], false],\n[\"setVar:to:\", \"presult\", \"1\"],\n[\"doRepeat\",\n[\"getParam\", \"exponent\", \"r\"],\n[[\"setVar:to:\", \"presult\", [\"*\", [\"readVariable\", \"presult\"], [\"getParam\", \"Base\", \"r\"]]]]]]],\n[539.1,\n73.2,\n[[\"procDef\", \"Binomial %n %n\", [\"a\", \"b\"], [1, 1], false],\n[\"call\", \"FactLoop %n\", [\"getParam\", \"a\", \"r\"]],\n[\"call\", \"FactLoop %n\", [\"getParam\", \"b\", \"r\"]],\n[\"call\", \"FactLoop %n\", [\"-\", [\"getParam\", \"a\", \"r\"], [\"getParam\", \"b\", \"r\"]]],\n[286.7,\n244,\n[[\"procDef\", \"FactLoop %n\", [\"n\"], [1], false],\n[\"setVar:to:\", \"count\", [\"getParam\", \"n\", \"r\"]],\n[\"setVar:to:\", \"result\", \"1\"],\n[\"doUntil\",\n[\"changeVar:by:\", \"count\", -1]]]]],\n[44,\n71,\n[[\"whenGreenFlag\"],\n[\"setVar:to:\", \"sum\", 0],\n[\"setVar:to:\", \"i\", 0],\n[\"doRepeat\",\n5,\n[[\"call\", \"Power %n %n\", -1, [\"readVariable\", \"i\"]],\n[\"call\", \"Binomial %n %n\", 5, [\"readVariable\", \"i\"]],\n[\"changeVar:by:\", \"i\", 1]]],\n\n\nNext Sequence: Inverse Moebius transformation of triangular numbers\n\n\u2022 +1 for using Scratch and for giving us an easy sequence. Solved in Swift 4 :) \u2013\u00a0Mr. Xcoder Dec 25 '17 at 20:01\n\u2022 Scratch is verbose. (depends on how is the number of bytes counted) \u2013\u00a0user202729 Dec 26 '17 at 14:02\n\n# 290. 
Hy, 337 bytes, A000149\n\n(import decimal)\n(setv ctx (.getcontext decimal))\n(setv ctx.prec 500)\n(setv iterations 2800)\n\n(defn factorial [n]\n(reduce *\n(range 1 (inc n))\n1))\n\n(defn exp [n series-terms]\n(sum\n(list-comp\n(\/ (decimal.Decimal (** n k)) (factorial k))\n(k (range series-terms)))))\n\n(defn A000149 [n]\n(int (exp n iterations)))\n\n\nNext sequence--here's a nice easy one so you can break out the obscure esolangs. ;^)\n\nWe implement the exponential function by taking partial sums of its MacLaurin series: e^n = Sum[k=0..inf] n^k\/k!. For values far away from 0, the series converges more slowly, so we need to calculate a lot of terms to get an accurate result. Trial and error showed that 2800 terms was sufficient for n=1000. The decimal module provides arbitrary-precision arithmetic; 500 significant digits gets us up to n=1000 easily.\n\n# 292. Gaia, 132 bytes, A000130\n\n\u1ecd0D\u00a61C1=\n\u2505f\u2191\u00a6\u03a32\u00f7\n\n\nTry it online!\n\nNext sequence.\n\nThe next one should not be too hard (there is a n\u00e4ive approach to generate all the integer partitions, and for each partition P check if P is of length 5 and that each element in P is a square :P). I have solved this in Pyth first, then got a much more efficient technique, which I then ported to Gaia.\n\n## How it works?\n\n\u1ecd0D\u00a61C1= | Helper function.\n\n\u1ecd | Compute the incremental differences (deltas).\n0D\u00a6 | For each, calculate the absolute difference to 0 (absolute value).\n1C | Count the 1's.\n1= | Return 1 if the result is equal to 1, 0 otherwise.\n\n\u2505f\u2191\u00a6\u03a32\u00f7 | Main function. Let N be the input.\n\n\u2505 | The integers in the range 1 ... 
N.\nf | Compute the permutations.\n\u2191\u00a6 | For each permutation, apply the above helper function called monadically.\n\u03a3 | Sum.\n2\u00f7 | Halve and implicitly display the result.\n\n\u2022 This one has useful and easy to calculate g.f., but note that (1)\u00b2 and (-1)\u00b2 are different ways. You can implement FFT if you like. \u2013\u00a0user202729 Dec 31 '17 at 13:51\n\n# 298. dc, 1008 bytes, A001460\n\n# Define a factorial macro f; expects the accumulator in register a and the loop number\n# (> 0) on the stack\n[\nd # Duplicate the loop number\nla * sa # Load accumulator, multiply by loop number, store back in accumulator\n1 - # Decrement loop number\nd 0<f # Duplicate; if greater than 0, call the macro again\n] sf\n\n# Define a wrapper macro F; expects the argument on the stack, and can handle 0\n[\n1 sa # Store 1 in the accumulator register\nd 0<f # Duplicate argument; if greater than 0, call macro f\ns_ # This leaves a 0 on the stack; store in register _ to get rid of it\nla # Load accumulator onto stack\n] sF\n\n# Main program\n? sn # Read input and store in register n\nln 5 * lF x # Load n, multiply by 5, take the factorial\nln 2 * lF x # Load n, multiply by 2, take the factorial\nln lF x 3 ^ # Load n, take the factorial, raise to the 3rd power\n* # Multiply the last two results\n\/ # Divide the first result by the product\np # Print\n\n\nTry it online!\n\nNext sequence.\n\n\u2022 The next one seems to be doable in Unefunge... \u2013\u00a0NieDzejkob Jan 3 '18 at 15:51\n\u2022 The next one is a lot easier if you have a built-ins for finding lowest common multiple and simplifying fractions. The series is the numerator of the simplified fraction Sum(LCM(1...n)\/n) \/ LCM(1...n). 
\u2013\u00a0Engineer Toast Jan 3 '18 at 16:18","date":"2020-01-19 16:57:39","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.35921892523765564, \"perplexity\": 6407.624861973478}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-05\/segments\/1579250594662.6\/warc\/CC-MAIN-20200119151736-20200119175736-00465.warc.gz\"}"}
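The fraction in Engineer Toast's comment is easy to sanity-check with exact rational arithmetic; the sketch below (an illustrative aside in Python, not a golfed submission, and `harmonic_numerator` is a name chosen here) reduces Sum(LCM(1...n)/k) / LCM(1...n) and takes its numerator:

```python
from fractions import Fraction
from math import lcm

def harmonic_numerator(n):
    # numerator of (sum_{k=1..n} LCM(1..n)/k) / LCM(1..n) in lowest terms;
    # each LCM(1..n)//k is exact because k divides LCM(1..n)
    L = lcm(*range(1, n + 1))
    total = sum(L // k for k in range(1, n + 1))
    return Fraction(total, L).numerator

# first few terms: 1, 3, 11, 25, 137, 49, ...
print([harmonic_numerator(n) for n in range(1, 7)])
```

Since `Fraction` reduces automatically, this is also the numerator of the n-th harmonic number in lowest terms.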
Fountainhead, The (1949)
"Do you want to stand alone against the whole world?"
A visionary young architect (Gary Cooper) refuses to compromise his artistic integrity at any cost.
Gary Cooper Films
King Vidor Films
Non-Conformists
Patricia Neal Films
Raymond Massey Films
King Vidor's adaptation of Ayn Rand's bestselling novel is just as stilted as its source material. Rand wrote her own screenplay, and, like her central hero, refused to compromise the integrity of her philosophical vision; as a result, the characters are — as noted by DVD Savant — simply "walking ideas and arguments", and the film itself comes across as "a presentation of a radical social philosophy using a soap opera format". With that said, some believe there's more to The Fountainhead than meets the eye; Savant himself refers to it as an "emotionally powerful piece of cinematic insanity, a movie that bears careful watching."
I've seen the film twice now, and must admit I find it difficult to take seriously — while it's nothing if not sincere, it fails to involve viewers on anything more than a superficial level, and the didactic dialogue is an enormous distraction. Cooper's infamous, lengthy courtroom speech in the final section of the film is frustrating rather than satisfying, given that his logic is hopelessly skewed; ultimately, it's hard to root for this staunchly selfish man, who considers his own needs more important than everyone else's.
Patricia Neal — stilted, but undeniably beautiful in her first major film role
Robert Burks's stark cinematography
Yes, for its status as an over-the-top cult favorite.
FilmCritic.com Capsule Review
DVD Beaver Review
Time Out Capsule Review
Posted on July 20th, 2007 by admin
One Response to "Fountainhead, The (1949)"
David Csontos, on October 10th, 2007 at 9:39 am Said:
A must – if often for the wrong reasons.
I've not read the 752-page Rand novel – it's one of those I've always meant to get to…but actually don't really want to. Since Rand herself streamlined the book into a screenplay, I suppose in some sense I don't really have to.
So classy and yet so clearly deranged, 'The Fountainhead' is a woman's picture at odds with the world of ideas – and the clash of one against the other makes for ping-pong excitement. Oddly, I don't find the film stilted at all; even if the characters are often nothing more than mouthpieces, director Vidor infuses the proceedings with such utter conviction, it's almost easy to believe that people could talk this way. Almost.
With its theme of "the individual against the collective", the film cozies up to any number of others: 'Rebel Without a Cause', Ionesco's play (later, the film) 'Rhinoceros', Cooper's own 'High Noon' three years later, etc. But here the theme is a 'VERY IMPORTANT' one – as witness the often elephantine production design, the remarkably evocative b&w camerawork, and the often bombastic Max Steiner score. (If the film weren't huffing and puffing so much on its own, it would have trouble breathing under the music.)
Ultimately, the performances are the real thrust of this phallus-centric film. Neal is playing a woman driven nearly insane by the 'fact' that "…beauty and genius and greatness have no chance – [in] the world of the mob." Her initial sexual encounter with Cooper has the torrential abandonment of the wildest of one-night-stands – yet the very thing she wants becomes the very thing she fears: "I'll do anything to escape from you," she tells Cooper just before kissing him passionately.
What's perhaps most satisfying as a fantasy is the fact that, in Cooper, Neal has found not only a genius and staunch iconoclast but someone who's HOT as well (so rare, as we all know!). Note the moment when the quarry foreman greets Neal:
Foreman: Let me show you around, Miss Francon. This is the best great granite in the whole state of Connecticut. Why, last month we shipped-
Neal: [sharply; but then 'sharply' is her keynote throughout] Who is that man?
This, of course, leads to the kind of intellectual, dinner party repartee endemic to Rand's mind:
Neal: I wish I'd never seen – your building. It's the things that we admire or want that enslave us and I'm not easy to bring into submission.
Cooper: That depends upon the strength of your adversary…Miss Francon.
Hubba-hubba!
Neal and Cooper were rarely better than they are here. Watch Cooper's face in particular throughout; though he often says little (until the lengthy courtroom speech he apparently pleaded with Rand to shorten), his face is constantly responding to what's said to him.
[Though they are somewhat dwarfed here by comparison, special mention should be made of Raymond Massey – who, two years earlier, also played a 'rebound marriage partner' in Joan Crawford's stunning film, 'Possessed'; Robert Douglas, in a role that would echo somewhat in Addison DeWitt in the following year's 'All About Eve'; and Kent Smith, whose character seems to design every building in New York City not commissioned to Cooper.]
As for the film itself, it's too easy to simply see it as a camp classic. Its sheer audacity and endlessly quotable dialogue at times render it such, but this is one film fanatics can savor on various levels through various viewings.
« Bonjour Tristesse (1958) French Lieutenant's Woman, The (1981) »
require File.join(File.dirname(__FILE__), 'base')

require 'rufus/doric'


class Audit < Rufus::Doric::Model
  db :doric
  doric_type :audits
  open
  _id_field { "audit_#{name}" }
  property :name
end

class Participant < Rufus::Doric::Model
  db :doric
  doric_type :participants
  _id_field { "participant_#{name}" }
  property :name
end

class UtOpenModelTest < Test::Unit::TestCase

  def setup
    Rufus::Doric.db('doric').delete('.')
    Rufus::Doric.db('doric').put('.')
    Rufus::Doric.db('doric').http.cache.clear
      # CouchDB feeds the same etags for views, even after a db has
      # been deleted and put back, so we have to do that 'forgetting'
  end

  #def teardown
  #end

  def test_openness

    a = Audit.new(:name => 'nada')
    a.save!

    assert_equal nil, a.clerk
    assert_equal nil, a.whatever

    a = Audit.all.first
    a.clerk = 'Wilfried'
    a.save!

    assert_equal 'Wilfried', a.clerk
  end

  def test_remove

    a = Audit.new(:name => 'berlusconi')
    a.h['country'] = 'x'
    a.remove(:country)

    assert_equal %w[ doric_type name ], a.h.keys.sort
  end

  def test_remove_on_a_closed_model

    Participant.new(:name => 'Joseph').save!
    pa = Participant.all.first

    assert_raise RuntimeError do
      pa.remove(:nationality)
    end
  end
end
\section[]{Introduction}
Deciphering star formation histories of early-type galaxies is
important for understanding their formation and evolution.
While early-type galaxies were traditionally considered to
be simple single stellar populations, it is now clear that their star
formation histories must be more complicated. Both absorption
linestrengths \citep[e.g.][]{trager00} and UV-optical colours
\citep{yi05,kaviraj07} indicate that a significant portion of
early-type galaxies have recently formed stars ($\approx30\%$
from the UV-optical colours). Reassuringly, molecular gas, the raw material for star formation, is
also found in early-type
galaxies \citep{henkel97}. Two recent surveys give detection rates of 28\% for
E/S0s in the SAURON sample \citep{combes07} and 78\% for S0s in a
volume-limited sample \citep{sage06}.
NGC~4550 is an unusual galaxy (see Table 1), containing two coplanar
counter-rotating
stellar exponential discs with nearly identical scale lengths \citep*{rubin92,rix92}. However, integral field unit data shows that one disc is
thicker and has a higher velocity dispersion than the other
\citep{kenney00, emsellem04}. Schwarzschild modelling by \citet{cappellari07} confirms the difference in scale heights and additionally shows that the two discs have equal mass within the SAURON field-of-view. The presence of these two discs with different scale
heights may cause the failure of some bulge-disc
decompositions. \citet{rix92} find a bulge to disc ratio of only 0.19
while \citet{scorza98} report a ratio of 5. The
S\'ersic fit of \citet{ferrarese06} gives a best-fit S\'ersic shape
parameter of 1.7,
demonstrating that the surface brightness profile is not dominated by
a de Vaucouleurs bulge. With an effective
$(\rm B - \rm V)$ colour of 0.890 mag, NGC~4550 is very likely a red-sequence
galaxy, which would make it one of a very small population of disc-dominated
red-sequence galaxies.
Two main scenarios exist for explaining the counter-rotating stellar discs in
NGC~4550. \citet{rubin92} first suggested that accretion of
counter-rotating gas and subsequent star formation could create a
second disc rotating as observed. This gas accretion scenario has been
investigated by \citet{thakar98}, who find it difficult to
create extended, exponential gas discs in their
simulations. However, recent observations of NGC~5719
show an extended counter-rotating and star-forming gas disc clearly
accreted from external gas \citep{vergani07}, demonstrating the feasibility
of this origin. The second scenario invokes a
coplanar major merger of two counter-rotating disc galaxies. While
requiring a precise alignment of the two discs to avoid excessive
heating, this merger scenario has been shown to produce resultant
galaxies resembling NGC~4550 \citep{puerari01}.
The ionised gas in NGC~4550 co-rotates with the thicker disc
\citep{rubin92, sarzi06}, not with the thinner, colder disc. An irregular dust
distribution is also observed
within the central 20\arcsec~in diameter. The dust is distributed in
clumpy arcs and is stronger on the northern half of the galaxy
\citep{wiklind01}. H{\small{I}} observations have not detected neutral gas in
NGC~4550, the strictest upper limit being $7 \times 10^{7}$ $\rm M_{\sun}$ from
\citet{duprie96}. Molecular gas was first discovered in NGC~4550 by
\citet{wiklind01} using the IRAM 30m telescope. They reported a small
mass of molecular gas, $1.3 \times 10^{7}$ $\rm M_{\sun}$, noting that the
distribution of the molecular gas is likely to resemble that of the dust,
due to a strong asymmetry in the observed CO emission line toward
positive relative velocities.
Here we present interferometric observations of the CO emission in NGC~4550,
deriving the extent, kinematics and total mass of the molecular gas.
To investigate what the kinematics of the gas reveal
about the formation history of this unusual galaxy, we run a major merger
simulation (importantly including gas) and describe the features of its
evolution, comparing the results to observations. We also analyse various star formation indicators to
determine whether the molecular gas in NGC~4550 alerts us to another
early-type galaxy with ongoing star formation, or whether the
molecular gas instead appears to be stable.
\begin{table}
\caption{Basic properties of NGC~4550. The left and middle columns
list the different quantities and their values; the right column
lists the corresponding references.}
\begin{tabular}{@{}llr}
\hline
Quantity & Value & Ref. \\
\hline
R.A. (J2000.0) & 12$^{h}$ 35$^{m}$ 30.6$^{s}$ & 1\\
Dec. (J2000.0) & +12$^{\circ}$ 13$\arcmin$ 15$\arcsec$ & 1\\
Heliocentric velocity & $435$ km s$^{-1}$ & 2\\
Distance & $15.5$ Mpc & 3\\
Scale & $1\arcsec = 75$~pc & 3\\
Type & SB0 & 4\\
Corrected apparent B mag & $12.31$ & 5\\
Corrected absolute B mag & $-18.64$ & 5\\
(B-V)$_{e}$ & 0.89 & 5\\
L$_{\mathrm{B}}$ & $4.3\times 10^{9}$ L$_{\sun}$ & 6\\
L$_{\mathrm{FIR}}$ & $5.8\times10^{7}$ L$_{\sun}$ & 6\\
L$_{\mathrm{FIR}}$/L$_{\mathrm{B}}$ & $1.4\times10^{-2}$ & 6\\
L$_{\mathrm{FIR}}$/M$_{\mathrm{H}_{2}}$ & $8.0$
L$_{\sun}$/M$_{\sun}$ & 6\\
\hline
\end{tabular}
\label{tab:basic}
References: (1) NED; (2) Derived from SAURON stellar kinematic data
\citep{emsellem04}; (3) \citet{mei07};
(4) \citet{devaucouleurs91}; (5) HyperLEDA;
(6) Derived quantity using data from NED, IRAS \citep{moshir90} and this paper.
\end{table}
\section[]{Plateau de Bure observations}
\subsection[]{Observations}
\begin{table}
\caption{CO observation calibrators}
\begin{tabular}{@{}lc}
\hline
Type & Calibrators\\
\hline
Bandpass & 3C273\\
Phase & 3C273, 1156+295\\
Flux & 3C273, 1156+295, 0528+134, 3C84 \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\begin{center}
\rotatebox{270}{\includegraphics[width=6.5cm]{figures/chmap.ps}}
\caption{Channel maps of NGC~4550. The channels are
20 km s$^{-1}$ wide and contours are plotted at -3 (dashed), 3, 6,
and 9 $\sigma$ with $\sigma =$ 2.77 mJy beam$^{-1}$. The number in
the top left corner of each frame is the central velocity of that
frame, relative to the observed central velocity of
NGC~4550 ($V_{\mathrm{sys}}=435$ km s$^{-1}$, determined using
optical absorption lines). The synthesized beam is shown in the
bottom left corner of each frame. The cross represents the centre of
the galaxy as given by 2MASS.\label{fig:chmap10}}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\rotatebox{270}{\includegraphics[width=7cm]
{figures/new-sum.ps}}
\hspace{5mm}
\rotatebox{270}{\includegraphics[width=7cm]
{figures/new-velo.ps}}\\
\caption{ {\em Left:} CO(1-0) integrated intensity map of NGC~4550. {\em
Right:} CO(1-0) mean velocity map relative to the systemic
velocity of 435 km s$^{-1}$ (see Table 1). The synthesized beam is shown in the
bottom-left corner of each map. The cross represents the centre of
the galaxy as given by 2MASS. \label{fig:meanvelo}}
\end{center}
\end{figure*}
NGC~4550 was observed in the $^{12}$CO(1-0) line with the
IRAM Plateau de Bure Interferometer (PdBI) on August 10, 2007. The
observations were taken in the D configuration with 5 antennas and used the new
generation dual-polarization receivers at 3~mm. The spectral
correlators were centred at 115.12~GHz, the transition frequency of
CO(1-0) roughly corrected for the heliocentric velocity of NGC~4550. The
correlator configuration used four 320~MHz-wide (834 km s$^{-1}$)
units with a frequency resolution of 2.5~MHz (6.6 km s$^{-1}$),
covering a total usable bandwidth of 950~MHz (2475 km s$^{-1}$). The
correlator was regularly calibrated by a noise source inserted in the
IF system.
We obtained visibilities with series of thirty 45 s integrations on
source, followed by three $45$ s phase and amplitude calibrations on
each of the two phase calibrators (see Table 2). To flux calibrate, we
set the flux density of 3C273 to the expected value (15 Jy), then
checked that the fluxes found for the other flux calibrators were
reasonable. The uncertainty in our flux calibration is $\approx20$\%.
The data were reduced with the Grenoble Image and Line Data Analysis
System (\textsc{gildas}) software packages \textsc{clic} and \textsc{mapping} \citep{GL}.
We calibrated the data using the standard pipeline in \textsc{clic}.
After calibration, we used \textsc{mapping} to
create data cubes centred at 115.104~GHz (corrected more precisely for
a systemic velocity of 435 km s$^{-1}$) with velocity planes separated by 20 km
s$^{-1}$. Natural weighting was used. The primary beam size is $44\arcsec$ for
CO(1-0) and we chose the spatial dimensions of the datacube to be
about twice the diameter of the primary beam,
$90\arcsec\times90\arcsec$. The synthesized beam is $4\farcs7 \times
4\farcs0$, so we chose spatial pixels of
$1\arcsec\times1\arcsec$. Continuum subtraction was unnecessary (see
below). The dirty beam has small
side lobes that necessitated cleaning the datacube. The cleaning was
done using the H\"ogbom method \citep{hogbom74}; we stopped cleaning
in each velocity plane after the brightest
residual pixel had a value lower than the rms noise of the
uncleaned datacube.
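The H\"ogbom loop just described is simple enough to sketch. The following Python fragment is purely illustrative (function name, gain value and array shapes are assumptions, not the \textsc{mapping} implementation): it repeatedly subtracts a scaled, shifted dirty beam at the brightest residual pixel until the peak falls below a threshold, here playing the role of the rms noise of the uncleaned plane.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, threshold=0.0):
    """Minimal Hogbom CLEAN sketch: find the brightest residual pixel,
    add a scaled delta component to the model, subtract the scaled and
    shifted PSF from the residual map, and stop once the brightest
    residual drops below `threshold`."""
    residual = dirty.astype(float).copy()
    model = np.zeros_like(residual)
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    while True:
        y, x = np.unravel_index(np.argmax(residual), residual.shape)
        peak = residual[y, x]
        if peak <= threshold:
            break
        model[y, x] += gain * peak
        # subtract the PSF centred on (y, x), clipped to the map edges
        for dy in range(psf.shape[0]):
            for dx in range(psf.shape[1]):
                iy, ix = y + dy - cy, x + dx - cx
                if 0 <= iy < residual.shape[0] and 0 <= ix < residual.shape[1]:
                    residual[iy, ix] -= gain * peak * psf[dy, dx]
    return model, residual
```

A small loop gain trades speed for stability; the stopping rule mirrors the one used in the text (stop when the brightest residual is below the rms noise).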
To constrain the continuum emission, we selected frequencies at least 40
km s$^{-1}$ away from the lowest and highest velocity channels with any
line emission in the cleaned datacube (-80 and 80 km s$^{-1}$,
respectively). The very edges of the bandwidth were also
avoided, resulting in a 683 MHz wide continuum window. We mapped
this data with the same spatial parameters as the line data. No
emission could be seen in the map with an rms noise of 0.54 mJy beam$^{-1}$,
giving a 3$\sigma$ upper limit of 1.62 mJy assuming a point source.
\subsection[]{Results}
The channel maps (Fig.~\ref{fig:chmap10}) show CO(1-0) emission
around the galaxy centre. The emission is just over
the 3$\sigma$ level, with stronger emission in the
20, 40 and 60 km s$^{-1}$ channels. The channel velocity values are
all given with respect to systemic velocity. In the channel maps, a mild
velocity gradient is observed, with more of the emission at positive
relative velocities to the north of the galaxy centre and more of the
emission at negative relative velocities to the south. The 60 km
s$^{-1}$ channel is an exception with a peak slightly south
of the galaxy centre, although the emission clearly also extends to the north.
To pick up more extended emission, we created an integrated intensity
map and a mean velocity field map (Fig.~\ref{fig:meanvelo}) using the
smoothed-mask
method. To make these maps, we first created a smoothed cube by
smoothing with a 2D circular Gaussian spatially (FWHM of 4\arcsec~or
about one synthesized beam width). Then we Hanning
smoothed by 3 channels in velocity. The moments were computed
only using pixels in the original cube that corresponded to pixels
above 3$\sigma$ in the smoothed cube.
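The smoothed-mask construction just described can be sketched as follows. This is an illustrative reimplementation, not the actual reduction script: the function name, argument names and cube ordering (velocity, y, x) are our assumptions.

```python
# Illustrative sketch of the smoothed-mask moment method: compute moments from
# the original cube only where a spatially and spectrally smoothed copy of the
# cube exceeds nsigma times the rms noise.
import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_mask_moments(cube, velocities, rms, fwhm_pix=4.0, nsigma=3.0):
    """Return moment-0 and moment-1 maps from cube of shape (n_vel, ny, nx)."""
    # 2D circular Gaussian smoothing in each velocity plane (FWHM -> sigma).
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    smooth = np.array([gaussian_filter(plane, sigma) for plane in cube])
    # Hanning smooth over 3 channels in velocity (weights 0.25, 0.5, 0.25).
    kernel = np.array([0.25, 0.5, 0.25])
    smooth = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 0, smooth)
    # Keep original-cube pixels whose smoothed counterparts lie above the cut.
    masked = np.where(smooth > nsigma * rms, cube, 0.0)
    mom0 = masked.sum(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        mom1 = (masked * velocities[:, None, None]).sum(axis=0) / mom0
    return mom0, mom1
```

Applied to a synthetic cube containing a single Gaussian line, the mask suppresses empty pixels while the intensity-weighted mean velocity recovers the injected line centre at the line position.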
The integrated intensity map shows that the emission is limited to the
central 750 pc (10\arcsec) in diameter and is slightly stronger in the north.
The mean velocity map shows a north-south
gradient, as expected from the shift observed in the channel
maps. Assuming the velocity gradient indicates rotation around
the centre of the galaxy, the molecular gas rotates like the observed
ionized gas and the thicker disc. Spatially integrating the
entire datacube over the region with
observed emission results in the spectrum shown in
Fig.~\ref{fig:spec}. This spectrum shows that the emission is present
in all channels between $-80$ and 80 km s$^{-1}$, but is strictly
limited to these channels.
While the lopsidedness of the CO emission was discovered
by \citet{wiklind01}, the degree of asymmetry was exaggerated due to
their use of an incorrect systemic velocity. The
systemic velocity listed in NASA Extragalactic Database (NED) for NGC~4550 and that used by
\citet{wiklind01} is $381 \pm 9$ km s$^{-1}$. This value is based on
the H{\small{I}} measurement of \citet{peterson79}. More recent and more
sensitive H{\small{I}} observations have not detected H{\small{I}} (DuPrie
\& Schneider, 1996; Morganti et al., in preparation), making the
\citet{peterson79} value very questionable. In addition, recent
absorption line studies seem to be converging to a value of around 435~km~s$^{-1}$ (\citet{rubin97} find 434~km~s$^{-1}$, \citet{uzc00} list
$458 \pm 41$~km~s$^{-1}$, \citet{wegner03} find $437 \pm
15$~km~s$^{-1}$ and the velocity in the central pixel of the SAURON
velocity map is $435$~km~s$^{-1}$). We have adopted this value for our
work. Then, the
flux from positive relative velocities is 1.64
Jy km s$^{-1}$, around twice as much as the flux from negative
relative velocities of 0.91 Jy~km~s$^{-1}$. The ionised gas
does not show the same bias as the molecular gas, with fairly
symmetric emission in both space and velocity. In addition, the
extent of the ionised gas is much greater than that observed in the
molecular gas (see Fig.~\ref{fig:ionized}). As shown in
\citet{wiklind01}, however, the dust favours the north, in agreement with the molecular gas bias.
Integrating the spectrum over the velocity range with observed
emission, i.e. from $-80$ to 80 km s$^{-1}$, we obtain a total CO
flux of 2.77 Jy km s$^{-1}$.
The total flux is used to compute the total molecular
hydrogen mass using the formula $M(\mathrm{H}_2) = (1.22 \times 10^4
\mathrm{M}_{\sun})D^2 \times S_{\mathrm{CO}}$, where D is the distance
measured in Mpc and $S_{\mathrm{CO}}$ is the total CO(1-0) flux. This
formula comes from using the standard CO to H$_{2}$ conversion
ratio $N(\mathrm{H}_2)/I(\mathrm{CO}) = 3 \times 10^{20}$ cm$^{-2}$,
where $N(\mathrm{H}_{2})$ is the column density of H$_{2}$ and
$I(\mathrm{CO})$ is the CO(1-0) intensity in K km s$^{-1}$. We note
that the actual conversion ratio for NGC~4550 is unknown and can be
expected to vary by a factor of at least two
\citep[e.g.][]{leroy07}. However, using the specified conversion
ratio, the
total H$_{2}$ mass detected by our interferometric observations is
$8.1\times10^{6}$ M$_{\sun}$.
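As a worked check of this arithmetic (the distance is not restated above; $D \approx 15.5$ Mpc is the Virgo Cluster value implied by the quoted flux and mass, not an independent measurement):

```latex
M(\mathrm{H}_2) = (1.22 \times 10^{4}\,\mathrm{M}_{\sun})
                  \times (15.5)^{2} \times 2.77
                \approx 8.1 \times 10^{6}\,\mathrm{M}_{\sun}.
```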
Using the same value of $N(\mathrm{H}_2)/I(\mathrm{CO})$ and the same
distance to NGC~4550, the single-dish measurement of \citet{wiklind01}
gives a molecular mass of $1.4 \times 10^{7}$ $\rm M_{\sun}$, slightly
less than twice the value we obtain. However, they integrate over a
much larger velocity range (225 to 535 km s$^{-1}$), likely including
much noise as positive emission (our velocity range is only 355 to 515
km s$^{-1}$). Over a similar velocity range, our estimates for the
molecular mass in NGC~4550 would be in suitable agreement, considering
calibration uncertainties ($\approx20$\%). Reflecting the calibration uncertainty and
the much larger uncertainty in the conversion ratio, we report the
molecular mass in NGC~4550 as $1\times 10^{7}$ $\rm M_{\sun}$,
which is the value we will use for the rest of the paper.
\begin{figure}
\begin{center}
\rotatebox{270}{\includegraphics[width=7cm]
{figures/specxm.ps}}
\caption{Spectrum of the spatial region with observed CO(1-0)
emission in NGC~4550.\label{fig:spec} }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\rotatebox{0}{\includegraphics[width=7cm]{figures/hb-co.ps}}
\caption{CO(1-0) contours over the H$\beta$~ emission map of NGC~4550 from SAURON
\citep{sarzi06}. The H$\beta$~
emission is clearly more extended than the CO. CO(1-0)
contours are at 0.25, 0.75, 1.25 and 1.75 Jy beam$^{-1}$ km s$^{-1}$.\label{fig:ionized} }
\end{center}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=16.4cm,clip]{figures/cins.ps}
\caption{Particle plots (x-y projection) of six snapshots of the
simulation: the face-on views of the gas (left, blue) and stars
(right, red) are plotted for each epoch, T=250 to
1500 Myr in steps of 250 Myr. The relative orbital motion is
counter-clockwise in this picture, where the prograde galaxy is labeled ``P'' and the
retrograde one ``R''. The merger is complete at the 3rd snapshot
(T= 750 Myr), with some gaseous tidal dwarfs remaining
thereafter. All boxes are 80~kpc wide. }
\label{pplot}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[angle=-90,width=16.4cm,clip]{figures/cont2D-20-25.ps}
\caption{The edge-on (x-z) stellar density (left), gas density
contours (middle) and gas velocity field (right) are plotted
for two times after the merger, T=1000 and 1250~Myr. It is easy
to see that the settling of the gas occurs during this epoch: the
gas density contours flatten and the gas velocity field becomes
regular. The boxes are all 10~kpc
wide, the wedges at the right of the velocity maps indicate the amplitude
of the projected velocities in unit of 100~km~s$^{-1}$. All later
snapshots are similar to the T=1250 Myr one, with a regular
rotation velocity map for the gas.}
\label{isovels}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[angle=-90,width=16.4cm,clip]{figures/rotcur-20-25.ps}
\caption{Velocity evolution during the simulation. For the same epochs as Figure
6, the left panel shows the stellar density (solid black line), mean
line-of-sight velocity (black dashed line) and velocity dispersion
(dot-dashed red line) profiles. The central panel shows the mean
stellar velocity profile of the resulting system again (solid black
line), as well as those of the prograde (solid red line) and
retrograde (solid blue line) galaxies individually. The right panel
is as the central panel but for the gas velocities. All the profiles
were extracted from a 4.5~kpc thick horizontal slice taken from
the edge-on projection shown in Figure 6. All boxes are 30~kpc wide.}
\label{rotcurv}
\end{figure*}
\section[]{Numerical Simulation}
While both the gas accretion and coplanar major merger scenarios seem
to be able to produce at least the general properties of the two
stellar discs in NGC~4550, we would also like to know how the gas in
this galaxy came to rotate like the thicker disc. Gas rotating
like the thick disc is actually in disagreement with the
predictions of the gas accretion scenario, in which the thin disc is
formed from the acquired cold gas \citep{kenney00}. Any remaining gas would then be
expected to rotate like the thin disc. To see what the merger scenario
predicts for the gas, we have performed a numerical simulation,
described below.
\subsection[]{Method and initial conditions}
Our merger simulation for NGC~4550 has been run with the same
TREE-SPH technique
as described in Di Matteo et al. (2007). The two merging galaxies
are initially of the same type (Sb) and mass (each has a total mass
$2.4 \times 10^{11}$ M$_\odot$ including dark matter).
The bulges and halos are modelled as Plummer spheres, of masses 1.15
and $17.25 \times 10^{10}$ M$_\odot$, and characteristic radii 1 and
12~kpc, respectively. The stellar discs have masses of $4.6
\times 10^{10}$ M$_\odot$ and radial scale lengths of 5 kpc. The
gas to stellar disc mass ratio is 0.2.
The initial separation between the two galaxies is 100 kpc. We choose
an orbit for the galaxies such that they would have a parabolic
encounter if they were both point masses. If the galaxies followed the
parabolic orbit, the first pericentre would be 8~kpc and
their relative velocity at this point would be 707 km s$^{-1}$. Of
course, the model galaxies do not follow this keplerian
orbit. Instead, given the strong dynamical friction,
merging occurs rapidly. The orbital angular momentum is initially
oriented in the positive z-axis as is the spin of one of the two
galaxies (hereafter called the prograde galaxy). The other galaxy has
negative spin (and will be called the retrograde galaxy).
The total number of particles is 240 000, with 120 000 in each galaxy,
divided between 40 000 in the gas, 40 000 in the stars and 40 000 in the
dark matter halo. We use the Schmidt law and hybrid particles to take
star formation into account as in Di Matteo et al. (2007).
In the following, when we refer to the ``gas component'', this means
the sum of the gas particles and the very young stars formed since the merger.
\subsection{Analysis and Results}
Figure \ref{pplot} shows the gas and stellar particles for six
snapshots from 250 to 1500~Myr. The simulation was run until T=4000
Myr (with a timestep of 0.5 Myr), but the evolution after 1500~Myr is
only secular relaxation of the merger remnant.
With the spin of one galaxy aligned with the orbital motion and the
spin of the other anti-aligned, the interaction is strongly asymmetric
despite the
equal mass of the galaxies. The prograde galaxy develops long tidal
tails, while the retrograde is less heavily influenced by the interaction. As the
merger evolves, the central regions become dominated by the stars from
the retrograde galaxy as it remains more compact. However, the gas behaves
differently due to its collisional nature. While two counter-rotating
systems of stars can coexist, one of the gas systems must
dominate. Initially, the gas from the prograde galaxy expands in tidal
tails and even tidal dwarfs. However, this gas then falls back and
settles in the disc of the merger remnant due to its dissipative
nature. Since the orbital angular momentum is positive, the prograde
gas increases its angular momentum and the retrograde gas loses some
of its angular momentum in the encounter. This results in the gas
rotating with positive angular momentum after the gas from the two
discs has interacted and fully settled down.
Figure \ref{isovels} displays both the edge-on gas density and
velocity map for two time steps: it explicitly reveals how the gas settles
into a thin disc while re-ordering its kinematics to prograde
rotation. More quantitatively, Figure \ref{rotcurv} shows the global
rotation curves along an edge-on projection, splitting the
rotational profiles according to the various components, the gas and
stars in the two galaxies. The rotation curves make it very clear that
the stars in the retrograde galaxy retain their direction of rotation,
dominating the total rotation curve in the centre of the galaxy. The
stars from the prograde galaxy show a lower amplitude of rotation, as
they have been strongly heated in the encounter.
The merger has thus created a disc galaxy with one dynamically hotter disc of
stars, one dynamically cooler counter-rotating disc of stars and a gaseous
component rotating like the thick disc. As the alignment of the
orbital angular momentum with the prograde galaxy causes both the
heating of this galaxy and the settling of the gas into prograde
rotation, we predict that in such systems the gas will always be in
corotation with the
most perturbed stellar system, i.e. with the hotter disc.
NGC~4550 clearly exhibits this phenomenon with both the molecular and
ionised gas rotating like the thicker stellar disc.
However, in the present simulation with perfectly coplanar galaxies, we note that there is not much heating in the perpendicular direction. The thickness of the stars with positive angular momentum (those that would observationally be considered in the prograde disc) is comparable to the thickness of the stars with negative angular momentum. While the lack of a bona fide thicker disc limits the direct comparison to NGC~4550, we expect more resonant heating in the perpendicular direction if the interaction were not perfectly coplanar. Earlier simulations show that the angle between the discs' orientation and their orbit must be significant before the remnant takes on an elliptical morphology \citep{bournaud05}, leaving room for disc thickening before disc destruction. We will explore the possible parameters of low-inclination mergers in a future work.
As angular momentum exchange strongly drives the evolution of the
merging galaxies, it is interesting to follow the angular momentum
exchange between all components present, including the dark matter.
Figure \ref{angmom} shows clearly that the largest angular momentum
exchanges occur during the violent merging epoch between
400 and 800 Myr. Afterwards, all momenta evolve more slowly.
The main feature is the loss of all the initially positive
orbital angular momentum in each component, which is then transferred
almost exclusively to the dark matter, except for a tiny fraction
transferred to
the internal spin of the gas and stars.
This exchange is seen in the total global momentum (black full curve)
in each panel of Figure \ref{angmom}. It is particularly intriguing to
note that both dark haloes acquire a large prograde rotation, an
effect even more marked for the halo of the prograde galaxy.
\begin{figure*}
\centering
\includegraphics[angle=-90,width=15.5cm,clip]{figures/angmom50.ps}
\caption{Angular momentum evolution during the simulation.
The top left panel shows the sum of all components, while the other three
are split into the dark matter halo, the stars and
the gas, as indicated above each panel. The wiggles in the total
angular momentum line in the upper left box reflect noise in the
simulation of order 2-3\%. The colour scheme is as for
middle and right panels of Figure 7, i.e. red for the prograde galaxy
and blue for the retrograde galaxy. The solid lines indicate the
orbital angular momenta, while the dashed lines indicate the internal
spin momenta. The angular momentum is in units of $2.3 \times 10^{11}$ M$_\odot$
kpc km s$^{-1}$; note that the scale is different in each plot. }
\label{angmom}
\end{figure*}
Our simulation agrees with the results of Puerari \& Pfenniger
(2001) who first showed that a coplanar major merger
between two disc galaxies could limit the heating effect and create a
NGC~4550 look-alike. In their simulation, they consider either a
parabolic or a circular orbit, assuming that the relative energy of
the two galaxies coming from infinity has already been absorbed by
some outer matter. Both orbits avoid excessive heating, but only
the parabolic orbit produces the spectacular counter-rotation
observed in NGC~4550. However, their simulations are limited in how
they investigate the gas: they include gas in either the prograde
galaxy or the retrograde galaxy, but do not consider the case where
gas is present in both galaxies as we have done. Of course, the final
distribution of the gas
within the merger remnant will depend on its initial distribution in
the two progenitor galaxies.
\citet{dimatteo08} have also studied dynamical mechanisms
to produce counter-rotating systems, through the merging of
a non-rotating early-type galaxy with a gas-rich spiral. In these
types of mergers, the orbital momentum is transferred to the rotation
of the early-type galaxy and to the outer parts of the spiral, while
the centre of the spiral (gas and stars)
keeps its retrograde momentum. The counter rotation is
then spatially marked (centre versus the outer parts), which is a
different pattern than that observed in NGC~4550, where gas and stars are
counter-rotating at the same location. The nature, amount and extent
of the observed counter-rotation can thus help to determine
the merging history of the system.
\section[]{Star Formation}
The CO detected in NGC~4550 indicates the potential for star
formation, as stars originate in cold molecular gas. Using the Toomre
density criterion, Q, with a conservatively cold velocity dispersion of
6~km~s$^{-1}$ and an epicyclic frequency measured from the rotation of
the ionised gas \citep{sarzi06}, we calculate a threshold density
(Q=1) of 285 $\rm M_{\sun}$~
pc$^{-2}$. Determining a gas surface
density for NGC~4550 from the CO data is difficult given the lack of
resolution, but assuming a uniform-density, symmetric
5\arcsec-radius disc, we obtain a surface density of only 18.3 $\rm M_{\sun}$~ pc$^{-2}$ for
the molecular gas, far below the threshold. Even decreasing the radius
to 2\arcsec, about the minimum believable extent of the molecular gas,
gives a surface density of only 115 $\rm M_{\sun}$~ pc$^{-2}$, still below
the star formation threshold density. However, if the molecular gas
is significantly more clumpy than a uniform density disc, then there
may be isolated regions where the critical density is reached. We
therefore do not immediately rule out the possibility of star
formation in NGC~4550, instead looking for signs in other star
formation tracers.
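For reference, the threshold quoted above corresponds to setting $Q=1$ in the standard gaseous Toomre criterion. The epicyclic frequency below is the value implied by the quoted numbers (taking $G \approx 4.3 \times 10^{-3}$ pc M$_{\sun}^{-1}$ km$^{2}$ s$^{-2}$), not an independent measurement:

```latex
\Sigma_{\mathrm{crit}} = \frac{\kappa\,\sigma_{\mathrm{gas}}}{\pi G}
\quad\Longrightarrow\quad
\kappa = \frac{\pi G\,\Sigma_{\mathrm{crit}}}{\sigma_{\mathrm{gas}}}
       \approx \frac{\pi \times 4.3\times10^{-3} \times 285}{6}
       \approx 0.64~\mathrm{km\,s^{-1}\,pc^{-1}}.
```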
Far-infrared (FIR) emission is often used as a tracer of star
formation, as young OB stars efficiently heat their surrounding
dust. In late-type galaxies where the ultraviolet (UV) and visible
radiation is dominated by young stars, the FIR emission is a very good
tracer of star formation. Its use in earlier-type galaxies, however,
is more disputable, as the radiation field of old hot stars should also
contribute to the dust heating. Yet the relations between other
tracers of star formation such as H$\alpha$ \citep[e.g.][]{kewley02} and radio
continuum \citep[e.g.][]{gavazzi86} hold puzzlingly constant with
morphology (at least within spirals), indicating that it is dust proximate to star forming
regions which dominates the FIR emission. Converting the IRAS 60 and
100 $\mu$m fluxes into a star formation rate (SFR) following the prescription of
\citet{kewley02}, we derive a FIR-based SFR of $\approx0.017$ $\rm M_{\sun}$~
yr$^{-1}$ for NGC~4550.
\begin{figure}
\begin{center}
\rotatebox{0}{\includegraphics[width=6cm]
{figures/NGC4550_surface_nuv.eps}}
\caption{NUV and V surface brightness profiles and NUV-V colour along
the major axis in NGC~4550. The vertical dashed line indicates the
effective radius while the horizontal dashed line indicates the blue
limit to the colours the uv-upturn phenomenon can produce,
NUV-V=5.2. \label{fig:nuv-v}}
\end{center}
\end{figure}
As many early-type galaxies without star formation also emit in the
FIR (an active
galactic nucleus or infrared cirrus can also contribute), we do not
want to rely on the FIR emission alone to ascertain ongoing star
formation in NGC~4550. Instead, we look to the UV, where young stars
are particularly bright. The first diagnostic we consider is the NUV-V
colour. Using optical data from the MDM 1.3m telescope
(Falc\'on-Barroso et al. in preparation) and {\it GALEX} UV data
(Jeong et al. in preparation), we have produced a plot of the NUV-V colour
along the major axis of NGC~4550
(Fig.~\ref{fig:nuv-v}). NGC~4550 has a central NUV-V colour of around
5, becoming bluer with increasing radius. While old stars can also
emit in the UV, producing the UV-upturn phenomenon seen in old
galaxies \citep[e.g.][see O'Connell 1999 for a review]{burstein88}, the bluest NUV-V colour this effect
produces is NUV-V$=$5.2 \citep{yi05}. Thus, the NUV-V colour in NGC~4550 is
more likely from young stars. Furthermore, the
decreasing gradient (bluer colour with increasing radius) is not seen
in UV-upturn galaxies, reinforcing the idea that the NUV emission
comes from young stars.
\begin{figure*}
\begin{center}
\rotatebox{0}{\includegraphics[width=6cm]
{figures/age.eps}}
\rotatebox{0}{\includegraphics[width=6cm]
{figures/massfraction.eps}}\\
\caption{
{\em Left:} Two-component fit map of the age
of the young component (in Gyr). {\em Right:}
Two-component fit map of the mass fraction of the young
component. Note that the mass fraction is relevant to
each individual pixel, thus a certain mass fraction in the central
pixels represents a larger mass of young stars than the same mass
fraction in outer pixels. \label{fig:uv} }
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\rotatebox{0}{\includegraphics[width=14cm]{figures/NGC4550_2comfit.eps}}
\caption{Integrated V, NUV and FUV fluxes for the central region of
NGC~4550. The central region is defined by an ellipse
of 5\arcsec$\times$3\arcsec, PA=0$^{\circ}$. The spectral energy
distributions (SEDs) of the best-fit young and old components are shown,
along with the SED of the sum of these components. We have also
shown the integrated
colours of NGC~4552 (strong UV-upturn galaxy) and M32
(intermediate-aged compact elliptical) normalized in the V-band for
comparison. {\em Inset:}
$\chi^{2}$ contours of the two-component stellar
population fit to the central integrated colours of NGC 4550,
showing the age-mass fraction degeneracy. The best
fit is marked with an 'x'.\label{fig:chicont} }
\end{center}
\end{figure*}
To investigate this possibility, we have used the method
described in \citet{jeong07} to fit a two-stage star formation history to the
UV-optical colours of NGC~4550. An old population is fixed at an
age of 10~Gyr with a composite metallicity, while a younger component at
solar metallicity is allowed to vary in age ($0.001$ Gyr $<$ {\it
t}$_{\mathrm{YC}}$ $< 10$~Gyr) and mass fraction ($10^{-4}<$
{\it f}$_{\mathrm{YC}} < 1$).
Figure~\ref{fig:uv} shows the results of these
fits, pixel by pixel. The central region appears to host a young stellar population
(fitted age around 100~Myr), although the mass fraction is very low at only
$\sim$0.01\%. To increase the signal-to-noise and thus the reliability of
our model fits, we computed integrated colours for the central
region where the CO emission is detected, using an ellipse with a major
axis of $5\arcsec$, a minor axis of $3\arcsec$ and a position angle of
0$^{\circ}$. The colours from this aperture give a best-fit age of
280~Myr with $5.89 \times
10^{7}$~M$_{\sun}$ of young stars. The fit to the spectral energy distribution (SED)
of this central region of NGC~4550 is shown in
Figure~\ref{fig:chicont}. As seen in the figure, NGC~4550 is slightly
bluer in NUV-V than the strong UV-upturn galaxy NGC~4552 and much bluer than
the intermediate age compact elliptical M32, indicating that young
stars must be responsible for the blue NUV-V
colour of NGC~4550. Figure~\ref{fig:chicont} also displays an
inset of the chi-square contours for the fit. As can be seen, an
age-mass degeneracy is present, with larger mass of
slightly older stars (but only up to 500~Myr) or a smaller mass of even younger stars also able to fit
the colours.
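The grid search behind such a two-component fit can be sketched as follows. This is a toy illustration: the model SEDs (\texttt{young\_sed}, \texttt{OLD\_SED}) and the band set are invented placeholders, not the stellar-population models of \citet{jeong07}; only the fitting logic (a $\chi^{2}$ grid over the age and mass fraction of the young component) mirrors the procedure described above.

```python
# Toy two-component stellar-population fit: chi^2 grid over the age and
# mass fraction of a young component added to a fixed old population.
import numpy as np

# Placeholder per-unit-mass SEDs in (FUV, NUV, V); invented for illustration.
OLD_SED = np.array([0.01, 0.05, 1.00])  # 10 Gyr population: UV-faint

def young_sed(age_gyr):
    # Young populations are UV-bright and fade with age (toy power law).
    return np.array([5.0, 3.0, 1.0]) * age_gyr ** -0.7

def chi2(obs, age_gyr, f_young, sigma=0.02):
    model = (1.0 - f_young) * OLD_SED + f_young * young_sed(age_gyr)
    return float(np.sum(((obs - model) / sigma) ** 2))

def grid_fit(obs, ages, fractions):
    """Return the (age, mass fraction) pair minimising chi^2 on the grid."""
    grid = np.array([[chi2(obs, a, f) for f in fractions] for a in ages])
    i, j = np.unravel_index(np.argmin(grid), grid.shape)
    return ages[i], fractions[j], grid
```

Because a larger mass of slightly older stars and a smaller mass of younger stars produce similar colours, the $\chi^{2}$ surface shows an extended valley rather than a single sharp minimum: the age--mass-fraction degeneracy discussed in the text.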
Absorption linestrengths also give an indication of a young or
intermediate age population in NGC~4550. While absorption
linestrengths are not sensitive to the very youngest stars,
they can easily reveal stellar populations 1~Gyr or older.
The age-sensitive linestrength H$\beta$ (measured on the Lick/IDS
system) is relatively
high in NGC~4550, at a value of 1.99 $\rm \AA$ integrated over an
effective radius (R$_{e}$) and
2.14 $\rm \AA$ integrated over R$_{e}$/8 \citep{kuntschner06}, reaching a
value of 2.2 $\rm \AA$
in the very centre (Maier et al., in preparation). Observations with the
Multi-Pupil Fiber Spectrograph (MPFS) measured
lower H$\beta$ linestrengths, however these were not corrected for the
contamination from the H$\beta$ emission line \citep{afanasiev02}. These
linestrength values are hard to reproduce with only old ($> 5$ Gyr)
stars and more likely indicate a young or intermediate age
population, perhaps dating from the time of the merger. As with the
UV, there is again an age-mass fraction
degeneracy, with a smaller fraction of young stars producing an
analogous effect to a larger fraction of intermediate-age stars.
Unfortunately, neither the UV nor the absorption linestrengths can
conclude anything about ongoing star formation in NGC~4550, while the
FIR only gives an upper limit. To investigate ongoing star formation
further, we look at optical emission line ratios and polyaromatic
hydrocarbon (PAH) emission. Particularly low ratios of
[O$\:${\small III}]~ to H$\beta$~ emission lines [log([O$\:${\small III}]/H$\beta) < -0.2$] indicate
current star formation ($<20$ Myr) as the source of ionisation
\citep{ho97}. Data from SAURON indicate that log([O$\:${\small
III}]/H$\beta)$ is between
0.15 and 0.5 for NGC~4550 \citep{sarzi06}. While gas could be star
forming at these
high values, it is more probable that shocks, post-asymptotic giant branch stars or an AGN are the
dominant source of ionisation (Sarzi et al. in preparation). The greater physical extent of the ionised gas
compared to the molecular gas (see Fig.~\ref{fig:ionized}) also supports the idea that star
formation is not the primary ionisation source. Similarly, although the PAH
emission detected in NGC~4550 \citep{bressan06} is suggestive of star
formation, the PAH spectrum lacks the distinctive line ratios seen in
star-forming regions. This different spectrum may just reflect a very low star
formation rate \citep{galliano08}, but it may also indicate another
source of PAH excitation altogether \citep{smith07}. Thus, with the
available data, we cannot say more than that the star formation rate in
NGC~4550 is less than about $0.02$ M$_{\sun}$ yr$^{-1}$, as
indicated by the FIR emission.
At this rate, the amount of molecular gas we detect in NGC~4550 can
fuel star formation for up to 350~Myr. The molecular gas is likely
related to the young population (280~Myr) found in the UV, although whether the
star formation has proceeded continuously or in a more
episodic fashion since then is impossible to say.
Depending on the interpretation of the
linestrengths, the star formation period may be of an even longer
duration (extending back to more than $1$~Gyr) or the moderately high
linestrengths may reflect a stellar population originating in the
merger that formed NGC~4550.
\section[]{Conclusions}
We detect a very small amount ($1\times 10^{7}$ M$_{\sun}$) of
molecular gas in the centre of the counter-rotating red disc galaxy
NGC~4550. The CO(1-0) emission is limited to the central 750~pc
(10\arcsec) and is asymmetrically distributed, stronger at positive
relative velocities north of the galaxy centre. The molecular gas
co-rotates with the thick stellar disc and the ionised gas,
counter-rotating with respect to the thin stellar disc.
Our simulation of the merger of two counter-rotating coplanar disc
galaxies shows that the main features of NGC~4550 can naturally be explained with
such a scenario. The interaction heats the prograde disc more than its
retrograde companion and the gas component ends up aligned with the total angular momentum (dominated by the orbital angular momentum), and thus with the prograde disc. Thus, the gas is predicted to always rotate like the dynamically hotter disc, as observed in NGC~4550. As the gas
accretion scenario does not provide a natural explanation for the
corotation of the gas and thick disc, the merger scenario appears the
more likely.
Both the UV and optical-linestrength data indicate that NGC~4550
cannot be made up of a purely old stellar population. The best-fit
two-population model to the UV-optical photometry in the region with
observed molecular gas gives a young population of 280 Myr and mass of
$5.89 \times 10^7$ $\rm M_{\sun}$. The optical H$\beta$ linestrength
value requires a population at least younger than 5~Gyr. Ongoing star
formation in the observed molecular gas is possible if there are
locally dense regions that exceed the critical density. The FIR
emission gives an upper limit on the current star formation rate of
$0.02$ M$_{\sun}$ yr$^{-1}$. This low star formation rate combined
with the small amount of molecular gas present suggest that we are
either witnessing a weak period in an extended bursty star formation
episode, or a more continuous very low-level star formation episode
that will last about another 350~Myr.
\section*{Acknowledgments}
The authors acknowledge receipt of a Daiwa Anglo-Japanese Foundation
small grant and a Royal Society International Joint Project grant, 2007/R2-IJP,
which facilitated this work.
We would like to thank Philippe Salom\'e
for help with the reduction of the Plateau de Bure data. We are also
grateful to the SAURON Team for providing SAURON data, including as
yet unpublished MDM and GALEX images. LMY acknowledges
support from grant NSF AST-0507432 and would like to thank the Oxford
Astrophysics Department for its hospitality during sabbatical work.
MB acknowledges support from NASA through {\it GALEX} Guest Investigator
program GALEXGI04-0000-0109. This work was supported by grant No. R01-2006-000-10716-0 from the
Basic Research Program of the Korea Science and Engineering Foundation
to SKY.
Based on observations carried out with the IRAM Plateau de Bure
Interferometer. IRAM is supported by INSU/CNRS (France), MPG
(Germany) and IGN (Spain). Also based on observations carried out with
the NASA {\it GALEX}. {\it GALEX} is operated for NASA by the
California Institute of Technology under NASA contract NAS5-98034.
The NASA/IPAC Extragalactic Database (NED) is operated by the Jet
Propulsion Laboratory, California Institute of Technology, under
contract with the National Aeronautics and Space Administration. This
research made use of HyperLEDA: http://leda.univ-lyon1.fr.
Q: Product images not loading in PWA setup, Magento 2.3
I have done a PWA setup on my local machine and I am facing an issue with product and category images.
Image paths:
src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAQAAAAFCAQAAADIpIVQAAAADklEQVR42mNkgAJGIhgAALQABsHyMOcAAAAASUVORK5CYII="
src="/img/resize/300?url=%2Fmedia%2Fcatalog%2Fproduct%2Fv%2Fs%2Fvsk12-la_main_3.jpg"
A: Try fixing this issue by copying the contents of vendor/magento/sample-data-media-venia/catalog to pub/media/catalog. Hope this helps.
Reference :
https://github.com/magento-research/pwa-studio/issues/413
A: Open this file
/var/www/html/magento231/pwa-studio/packages/venia-concept/.env
Uncomment these lines and change the paths from
# MAGENTO_BACKEND_MEDIA_PATH_PRODUCT="/media/catalog/product"
# MAGENTO_BACKEND_MEDIA_PATH_CATEGORY="/media/catalog/category"
To
MAGENTO_BACKEND_MEDIA_PATH_PRODUCT="/pub/media/catalog/product"
MAGENTO_BACKEND_MEDIA_PATH_CATEGORY="/pub/media/catalog/category"
Discussion in 'Development' started by MrDienns, Dec 1, 2017.
Shout out to a user called Hex_27 for creating a simple but effective lootbox plugin for us! Don't worry, it's not like EA's lootbox system. The plugin allows us to spawn different types of lootboxes in different areas & regions. It basically means free goodies for everyone!
Kind of him. Keep up the good work!
What sort of loot will be inside these boxes?
Probably anything, but I expect it to mostly be just loot you can get from killing monsters and doing quests. This basically encourages people to go adventuring without losing too much gold and exp etc.
That's pretty cool. If people want to see Dyescape for its content other than questing and grinding, they can be rewarded with little gifts. Will it re-spawn in the same place every time though?
Nope, it is randomised to encourage adventuring even more. This also means that if you pass a house on the side of the road it could contain goodies or something alike. Every cave, every mountain etc. This makes the map feel way more interactive and even bigger! Every little thing has its meaning. But, if you find one chest in a cave in a mountain, the chances of it respawning in that region again are high. Not on the exact same spot, but it will definitely be around there somewhere.
I guess I'll quickly fly in on the lootbox questions (we're probably better off renaming it to treasure chests instead at this point).
Correct. The lootboxes we are currently speaking of are simply treasure chests spawned in the world which contain (not fully random, it's always configured somewhere) loot. The amount of loot differs and depends on how lucky you are.
It's not completely random. It's random enough to make sure that it's always interesting wondering where the lootbox for a specific area will spawn, but it's not completely random as in the lootbox can spawn anywhere. Anywhere as in, in the middle of a cave, bottom of the ocean or even on top of a mountain. The content team configures a bunch of specific points for a specific area where a specific chest can spawn. For example, in the sea acre farms we may choose to spawn 1 (which is quite low, but for the sake of explaining this) treasure chest. Then, we can configure something like 15 spawn points. The chest will then randomly spawn in one of these spawn points and will respawn after an x amount of time after being looted. This is done to prevent chests from spawning in such random locations nobody will ever find them, and it helps us balance things out more as we can configure where specific types of chests (type of loot) should spawn.
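The spawn scheme described above — a configured pool of points per area, one chest active at a time, a respawn cooldown after looting — could be sketched roughly like this. All names here are illustrative, not Dyescape's actual plugin code:

```python
import random

class TreasureChestSpawner:
    """One chest per area: pick a random point from the configured
    pool, and re-pick only after the chest is looted plus a cooldown."""

    def __init__(self, spawn_points, respawn_delay=600):
        self.spawn_points = list(spawn_points)  # e.g. 15 configured points
        self.respawn_delay = respawn_delay      # seconds until respawn
        self.active_point = None
        self.respawn_at = 0.0

    def tick(self, now):
        # (Re)spawn once the cooldown from the last looting has passed.
        if self.active_point is None and now >= self.respawn_at:
            self.active_point = random.choice(self.spawn_points)
        return self.active_point

    def loot(self, now):
        # Chest is emptied; schedule a fresh spawn elsewhere in the area.
        self.active_point = None
        self.respawn_at = now + self.respawn_delay
```

Keeping the pool hand-configured, rather than fully random, is what prevents chests from landing somewhere nobody will ever find them.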
Q: How can I simplify this LINQ code? Basically I have a multiselect list box in MVC, and when the user changes the selection it posts back and should update the model. The code below works, but I am wondering whether I can put it in a single foreach loop, or if there is a better way of updating the selection. Note: there is a many-to-many relationship between artist and artist type.
foreach (var artistTtype in this._db.ArtistTypes.ToList().Where(artistTtype => artist.ArtistTypes.Contains(artistTtype)))
{
artist.ArtistTypes.Remove(artistTtype);
}
foreach (var artistTtype in this._db.ArtistTypes.ToList().Where(artisttype => vm.SelectedIds.Contains(artisttype.ArtistTypeID)))
{
artist.ArtistTypes.Add(artistTtype);
}
A: This for adding (just use AddRange):
artist.ArtistTypes.AddRange(this._db.ArtistTypes
.Where(artisttype => vm.SelectedIds.Contains(artisttype.ArtistTypeID)));
This for removing (use ForEach):
this._db.ArtistTypes
.Where(artistTtype => artist.ArtistTypes.Contains(artistTtype)).ToList()
.ForEach(x=>artist.ArtistTypes.Remove(x));
EDIT:
you can always set
artist.ArtistTypes = this._db.ArtistTypes
.Where(artisttype => vm.SelectedIds.Contains(artisttype.ArtistTypeID)).ToList();
this will set ArtistTypes to what you want, you don't need to delete then add.
A: I see two "fixes":
1) You don't need to care about what's inside the list, since you're going to update the list of selections you can start from scratch, so the removal part becomes
artist.ArtistTypes.Clear();
2) Now you fill the list again. ToList() should not be needed since you're performing a .Where() to get the data, and you can leverage LINQ's lazy evaluation so you'll only read the data you actually use. You can also split the lines for increased readability (it doesn't matter: until you enumerate with foreach() the DB will not actually be hit).
//note that the .ToList() is gone
var query = this._db.ArtistTypes.Where(artisttype => vm.SelectedIds.Contains(artisttype.ArtistTypeID));
foreach (var artistTtype in query)
{
artist.ArtistTypes.Add(artistTtype);
}
2b) (UNTESTED, off the top of my head) Another way of implementing the comparison you do is through a custom IEqualityComparer, switching to .Intersect() method. This is way more solid since if your keys change in the model you only have to change the comparer.
// I'm making up "ArtistType", fix according to your actual code
class ArtistTypeEqualityComparer : IEqualityComparer<ArtistType>
{
public bool Equals(ArtistType x, ArtistType y)
{
if (ArtistType.ReferenceEquals(x, null)) return false;
if (ArtistType.ReferenceEquals(y, null)) return false;
if (ArtistType.ReferenceEquals(x, y)) return true;
return x.ArtistTypeId.Equals(y.ArtistTypeId);
}
public int GetHashCode(ArtistType obj)
{
return obj.ArtistTypeId.GetHashCode();
}
}
// And then the "add" part simplifies
artist.ArtistTypes.AddRange(this._db.ArtistTypes.Intersect(vm.SelectedIds.Select(x => new ArtistType { ArtistTypeId = x }), new ArtistTypeEqualityComparer()));
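For comparison, the same sync-by-key idea — replacing the selection wholesale instead of diffing, and intersecting on the id (which is what the custom comparer achieves) — is compact in a language with built-in sets. A hypothetical Python sketch, with illustrative names:

```python
def sync_artist_types(all_types, selected_ids):
    """Return the artist's new type list: every known type whose id is
    in the current selection. Equivalent to Clear() + filtered re-add,
    or to Intersect with an id-keyed equality comparer."""
    wanted = set(selected_ids)
    return [t for t in all_types if t["id"] in wanted]
```

The key point carries over to the C# version: equality is decided by one key, so changing the model's key means changing exactly one place.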
\section{Introduction}
\label{sec:intro}
The study of flux compactifications in string theory has been pursued intensively in recent years \cite{generalreviews}.
One important motivation is the possibility to stabilize the massless moduli at a minimum of the potential induced by the fluxes.
The simplest scenarios for this mechanism are provided by type IIB and type IIA \neq1 orientifolds with $p$-form fluxes turned on \cite{generalreviews}.
In IIA compactifications the mixture of NSNS and RR fluxes generates a superpotential that depends on all closed string
moduli, allowing them to be stabilized without invoking non-perturbative effects \cite{Grimm, Derendinger, vz1, DeWolfe, cfi}.
Moreover, in the IIA setup it is natural to add the so-called geometric $f$-fluxes that determine the isometry algebra of
the internal space \cite{Derendinger, vz1, cfi}. The case of nilpotent algebras was studied in \cite{gmpt, andriot, caviezel} and an example with
internal $\mathfrak{su(2)^2}$ was spelled out in \cite{af}.
To recover T-duality between IIA and IIB compactifications, it is necessary to introduce new parameters referred to as
non-geometric fluxes \cite{stw1, stw2, wecht}.
The original observation is that applying a T-duality to NSNS $\bar H$-fluxes leads to geometric $f$-fluxes \cite{glmw, kstt}.
Further T-dualities give rise to generalized $Q$ and $R$-fluxes \cite{stw1}. The $Q$'s are called non-geometric because the emerging
background after two T-dualities can be described locally but not globally. The third T-duality is formal, evidence for the $R$-fluxes
comes rather from T-duality at the level of the effective superpotential \cite{stw1}. Moreover, the $Q$ and $R$-fluxes logically
extend \cite{stw1, dabholkar} the set of structure constants of the gauge algebra, generated by isometries and shifts of the $B$ field,
that is known to contain the geometric and NSNS fluxes \cite{ss, km}.
In this article we consider type IIB orientifolds with O3/O7-planes in which only NSNS $\bar H$ and non-geometric $Q$-fluxes are invariant
under the orientifold action. These fluxes together induce a superpotential that depends on all closed string moduli.
One advantage of working with IIB is that the $Q$-fluxes by themselves appear as the structure constants
of a subalgebra of the full gauge algebra. However, one must keep in mind that the $\bar H$ and $Q$ in IIB map into all kinds of
fluxes in type IIA with O6-planes, and into non-geometric $R$ plus geometric $f$ in IIB with O9/O5-planes. Similar examples
with generalized fluxes have been considered by several authors \cite{stw1, acfi, stw2, vz2, benmachiche, tasinato,ihl, palti, camara}.
Our guiding principle is precisely the classification of the subalgebras satisfied by the non-geometric $Q$-fluxes.
We will discuss a simplified scheme with additional symmetries in order to reduce the number of fluxes.
Concretely, we study compactification on $\msm{({\rm T}^2 \times {\rm T}^2 \times {\rm T}^2)/({\mathbb Z}_2 \times {\mathbb Z}_2)}$,
and further impose invariance under exchange of the internal ${\rm T}^2$'s. In this way we obtain the same model with moduli $(S,T,U)$
proposed in \cite{stw1} and generalized in \cite{acfi}. We have classified the allowed subalgebras of the $Q$-fluxes of
the $(S,T,U)$-model. There are five inequivalent classes, namely $\mathfrak{so(4)}$, $\mathfrak{so(3,1)}$, $\mathfrak{su(2)+u(1)^3}$,
$\mathfrak{iso(3)}$ and the nilpotent algebra denoted $n(3.5)$ in \cite{gmpt}. The non-semisimple solutions are contractions of
$\mathfrak{so(4)}$ consistent with the symmetries. A compelling byproduct is that each subalgebra yields a characteristic flux-induced
superpotential. The corresponding 12-dimensional gauge algebras can be easily identified after a convenient change of basis.
We are mostly interested in discovering supersymmetric flux backgrounds with non-geometric fluxes switched on,
and all moduli stabilized. To this end we work exclusively with the \deq4 effective action.
We widen the search of vacua of \cite{stw2} in several respects. A key difference is that in most cases
we can solve the F-flat conditions analytically and can therefore derive explicit expressions for the moduli vevs in terms
of the fluxes. The computations are facilitated by using a transformed complex structure \mbox{${\mathcal Z}=\msm{(\alpha U + \beta)/(\gamma U + \delta)}$},
invariant under the modular group $SL(2,{\mathbb Z})_U$. The independent non-geometric fluxes are precisely parametrized by
$\Gamma=\genfrac{(}{)}{0pt}{}{\alpha \, \beta}{\gamma \, \delta}$. The parametrization of NSNS and RR fluxes is also dictated by $\Gamma$.
By exploiting the variable ${\mathcal Z}$ we can effectively factor out the vacuum degeneracy due to modular transformations.
There is a further vacuum degeneracy originating from special constant translations in the axions ${\rm Re \,} S$ and ${\rm Re \,} T$. We argue that vacua connected
by this type of translations are identical because the full background including the RR fluxes is invariant under such axionic shifts.
In our analysis the values of the flux-induced $C_4$ and $C_8$ RR tadpoles are treated as variables.
Cancelling these tadpoles in general requires adding D-branes besides the orientifold planes. These D-branes are also constrained
by cancellation of Freed-Witten anomalies \cite{cfi, vz2}.
In our concrete setup, D3-branes and unmagnetized D7-branes wrapping an internal ${\rm T}^4$ are free of anomalies and can be
included. However, such D-branes do not give rise to charged chiral matter.
By treating the flux tadpoles as variables we can deduce in particular that the vacua
found in \cite{stw2}, having O3-planes and no O7/D7 sources, can only arise when the $Q$-subalgebra is the compact $\mathfrak{so(4)}$.
For completeness we study the supersymmetric ${\rm AdS}_4$ minima due to the fluxes of all compatible $Q$-subalgebras, including the
non-compact $\mathfrak{so(3,1)}$. In general, such vacua exist in all cases but unusual types of sources might be needed to cancel
the tadpoles. Interestingly, in models based on semisimple subalgebras we find that there can exist more than one vacuum for some
combinations of fluxes.
It is well known that supersymmetric or no-scale Minkowski vacua in IIB orientifolds with RR and NSNS fluxes require sources of
negative RR charge such as O3-planes or wrapped D7-branes \cite{gkp}. However, working with the effective \deq4 formalism we find that O3-planes
and/or D7-branes can be bypassed in fully stabilized supersymmetric ${\rm AdS}_4$ vacua, provided specific non-geometric fluxes are turned on.
It is conceivable that such vacua only occur in the effective theory and will not survive after lifting to a full string background.
Helpful hints in this direction can come from our results relating properties of the vacua with the gauge algebra.
It might well be that only models built on certain algebras can be lifted to full backgrounds.
The newly proposed formulation of non-geometric fluxes based on compactification on doubled twisted tori suggests that the gauge algebra has to be
compact or admit a discrete cocompact subgroup \cite{reid, prezas}. It is also feasible that the
recent description of non-geometric fluxes in the context of generalized geometry \cite{minasian} could be applied to deduce the generalized flux
configurations which allow supersymmetric vacua. A discussion of these issues is beyond our present scope.
We now outline the paper. In section \ref{sec:gen} we review the properties of the fluxes and write down the flux-induced effective
quantities needed to investigate the vacua. The classification of the $Q$-subalgebras is carried out in section \ref{sec:alg}, where we also obtain the
parametrization of the non-geometric and NSNS fluxes that is crucial in the subsequent analysis. In section \ref{sec:newvars} we introduce
the transformed complex structure ${\mathcal Z}$ motivated by modular invariance. Using this variable then points to the efficient parametrization of the
RR fluxes given in the appendix. In the end we are able to derive very compact expressions for the flux-induced superpotential and tadpoles according to
the particular $Q$-subalgebra. In section \ref{sec:vac} we solve the F-flat conditions and collect the results that distinguish the vacua with moduli stabilized.
The salient features of these vacua are discussed in section \ref{sec:lands}. Section \ref{sec:end} is devoted to some final comments.
\section{Generalities}
\label{sec:gen}
In this section we outline our notation to describe the non-geometric fluxes introduced in \cite{stw1}. To be specific we will work in
the context of toroidal orientifolds with O3/O7-planes. We will discuss the case of generic untwisted moduli, and also the simpler
isotropic model considered in \cite{stw1}.
\subsection{Fluxes}
\label{ssec:fluxes}
The starting point is a type IIB string compactification on a six-torus ${\rm T}^6$ whose basis of
1-forms is denoted $\eta^a$. Moreover, we assume the factorized geometry
\begin{equation}
{\rm T}^6={\rm T}^2 \times {\rm T}^2 \times {\rm T}^2 \,\,:\,\,
(\eta^{1}\,,\,\eta^{2}) \,\,\times\,\,( \eta^{3}\,,\,\eta^{4} )
\,\,\times\,\,( \eta^{5}\,,\,\eta^{6} ) \ .
\label{factorus}
\end{equation}
As in \cite{stw1}, we will use greek indices $\alpha,\beta,\gamma$ for horizontal
$\,``-"$ $x$-like directions $(\eta^{1},\eta^{3},\eta^{5})$ and latin indices $i,j,k$
for vertical $\,``|"$ $y$-like directions $(\eta^{2},\eta^{4},\eta^{6})$ in the 2-tori.
The ${\mathbb Z}_2$ orientifold involution denoted $\sigma$ acts as
\begin{equation}
\sigma \ : \ ( \eta^{1}\,,\,\eta^{2}\,,\,\eta^{3}\,,\,\eta^{4}\,,\,\eta^{5}\,,\,\eta^{6} )
\ \rightarrow \
( -\eta^{1}\,,\,-\eta^{2}\,,\,-\eta^{3}\,,\,-\eta^{4}\,,\,-\eta^{5}\,,\,-\eta^{6} ) \ .
\label{osigma}
\end{equation}
There are 64 O3-planes located at the fixed points of $\sigma$.
We further impose a ${\mathbb Z}_2 \times {\mathbb Z}_2$ orbifold symmetry with generators acting as
\begin{eqnarray}
\label{orbifold1}
\theta_1 & : & ( \eta^{1}\,,\,\eta^{2}\,,\,\eta^{3}\,,\,\eta^{4}\,,\,\eta^{5}\,,\,\eta^{6} )
\ \rightarrow \
(\eta^{1}\,,\,\eta^{2}\,,\,-\eta^{3}\,,\,-\eta^{4}\,,\,-\eta^{5}\,,\,-\eta^{6} ) \ , \\[2mm]
\theta_2 & : & (\eta^{1}\,,\,\eta^{2}\,,\,\eta^{3}\,,\,\eta^{4}\,,\,\eta^{5}\,,\,\eta^{6} )
\ \rightarrow \ (-\eta^{1}\,,\,-\eta^{2}\,,\,\eta^{3}\,,\,\eta^{4}\,,\,-\eta^{5}\,,\,-\eta^{6} )
\ .
\nonumber
\end{eqnarray}
Clearly, there is another order-two element $\theta_3 = \theta_1 \theta_2$.
Under this ${\mathbb Z}_{2} \times {\mathbb Z}_{2}$ orbifold group, only 3-forms with one leg in each 2-torus survive.
This also occurs in the compactification with an extra ${\mathbb Z}_{3}$ cyclic permutation of the three 2-tori
that was studied in \cite{stw1, stw2}. In that case there are only O3-planes and two geometric
moduli, namely the overall K\"ahler and complex structure parameters.
In contrast, in our setup, the full symmetry group ${\mathbb Z}_2^3$ includes additional orientifold actions
$\sigma \theta_I$ that have fixed 4-tori and lead to \mbox{O$7_I$-planes}, $I=1,2,3$.
Another difference is that in principle we have one K\"ahler and one complex structure parameter for each
2-torus ${\rm T}_I^2$.
The K\"ahler form and the holomorphic 3-form that encode the geometric moduli of the internal space
can be written in a basis of invariant forms that also enters in the description
of background fluxes. Under the ${\mathbb Z}_2 \times {\mathbb Z}_2$ orbifold action the invariant 3-forms are just
\begin{equation}
\begin{array}{lclclcl}
\alpha_{0}=\eta^{135} & \quad ; \quad & \alpha_{1}=\eta^{235} & \quad ; \quad &
\alpha_{2}=\eta^{451} & \quad ; \quad & \alpha_{3}=\eta^{613} \ , \\
\beta^{0}=\eta^{246} & \quad ; \quad & \beta^{1}=\eta^{146} & \quad ; \quad &
\beta^{2}=\eta^{362} & \quad ; \quad & \beta^{3}=\eta^{524} \ .
\end{array}
\label{basisab}
\end{equation}
Here, e.g., $\eta^{135}= \eta^1 \wedge \eta^3 \wedge \eta^5$.
Clearly, these forms are all odd under the orientifold involution $\sigma$.
On the other hand, the invariant 2-forms and their dual 4-forms are
\begin{equation}
\begin{array}{lclcl}
\omega_{1}=\eta^{12} & \quad ; \quad & \omega_{2}=\eta^{34} & \quad ; \quad &
\omega_{3}=\eta^{56} \ , \\
\tilde{\omega}^{1}=\eta^{3456} & \quad ; \quad & \tilde{\omega}^{2}=\eta^{1256} & \quad ; \quad &
\tilde{\omega}^{3}=\eta^{1234} \ .
\end{array}
\label{inv2form}
\end{equation}
These forms are even under $\sigma$.
We choose the orientation and normalization
\begin{equation}
\int_{{\rm M}_6} \!\! \eta^{123456}={\mathcal V}_6 \ .
\label{normal1}
\end{equation}
The positive constant ${\mathcal V}_6$ gives the volume of the internal space that we generically denote ${\rm M}_6$.
Notice that the basis satisfies
\begin{equation}
\int_{{\rm M}_6} \!\! \alpha_{0} \wedge \beta^{0}= -{\mathcal V}_6 \quad , \quad
\int_{{\rm M}_6} \!\! \alpha_{I} \wedge \beta^{J}=
\int_{{\rm M}_6}\!\! \omega_{I} \wedge \tilde{\omega}^{J}= {\mathcal V}_6 \delta_{I}^{J} \quad \, , \quad \, I,J=1,2,3.
\label{normal2}
\end{equation}
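As a quick sanity check on the pairings in (\ref{normal2}): each integral is just the sign of the permutation reordering the six wedged indices into $\eta^{123456}$, in units where ${\mathcal V}_6=1$. A short numerical sketch, with the basis triples taken from (\ref{basisab}):

```python
def perm_sign(idx):
    """Sign of the permutation sorting idx; 0 if an index repeats
    (a repeated index makes the wedge product vanish)."""
    idx = list(idx)
    if len(set(idx)) != len(idx):
        return 0
    sign = 1
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] > idx[j]:
                sign = -sign
    return sign

# Index triples of the invariant 3-forms (eq. basisab)
alpha = {0: (1, 3, 5), 1: (2, 3, 5), 2: (4, 5, 1), 3: (6, 1, 3)}
beta  = {0: (2, 4, 6), 1: (1, 4, 6), 2: (3, 6, 2), 3: (5, 2, 4)}

def pairing(a, b):
    """int_{M6} a ^ b with V6 = 1: reorder a+b into eta^{123456}."""
    return perm_sign(a + b)
```

This reproduces $\int\alpha_0\wedge\beta^0=-1$, $\int\alpha_I\wedge\beta^J=\delta_I^J$, and $\int\omega_I\wedge\tilde\omega^J=\delta_I^J$.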
The ${\mathbb Z}_{2} \times {\mathbb Z}_{2}$ orbifold symmetry restricts the period matrix $\tau^{ij}$ to be diagonal.
Then, up to normalization, the holomorphic 3-form is given by
\begin{equation}
\label{holoexpan}
\Omega= (\eta^1 + \tau_1 \eta^2) \wedge (\eta^3 + \tau_2 \eta^4) \wedge (\eta^5 + \tau_3 \eta^6)
=\alpha_{0} + \tau_{K} \,\alpha_{K} + \beta^{K}\,\frac{\tau_1 \tau_{2} \tau_{3}}{ \tau_{K}} +
\beta^{0}\,\tau_1 \tau_{2} \tau_{3} \ ,
\end{equation}
with the $H^{3}({\rm M}_6,{\mathbb Z})$ basis displayed in (\ref{basisab}).
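The coefficient pattern in (\ref{holoexpan}) can likewise be verified by brute-force expansion of the triple wedge product; a sketch with random numerical $\tau_I$:

```python
import random
random.seed(7)

def sign3(i, j, k):
    """Sign of sorting the index triple; 0 on a repeated index."""
    if len({i, j, k}) < 3:
        return 0
    s, t = 1, [i, j, k]
    for a in range(3):
        for b in range(a + 1, 3):
            if t[a] > t[b]:
                s = -s
    return s

tau = [None] + [complex(random.uniform(0.5, 2), random.uniform(0.5, 2))
                for _ in range(3)]

# Expand (eta^1 + tau1 eta^2) ^ (eta^3 + tau2 eta^4) ^ (eta^5 + tau3 eta^6):
# each factor contributes its odd 1-form (coeff 1) or its even one (coeff tau_I).
Omega = {}
for c1 in (0, 1):
    for c2 in (0, 1):
        for c3 in (0, 1):
            idx = (1 + c1, 3 + c2, 5 + c3)   # already ascending, sign +1
            Omega[idx] = ((tau[1] if c1 else 1) * (tau[2] if c2 else 1)
                          * (tau[3] if c3 else 1))

def coeff_of(triple):
    """Coefficient of eta^{triple} in Omega, tracking orientation."""
    return sign3(*triple) * Omega.get(tuple(sorted(triple)), 0)
```

The extracted coefficients match $\alpha_0\mapsto 1$, $\alpha_K\mapsto\tau_K$, $\beta^K\mapsto\tau_1\tau_2\tau_3/\tau_K$ and $\beta^0\mapsto\tau_1\tau_2\tau_3$.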
The next step is to switch on background fluxes for the NSNS and RR 3-forms. Since both $H_3$ and $F_3$ are
odd under the orientifold involution, the allowed background fluxes can be expanded as
\begin{eqnarray}
\bar H_3 & = & b_{3} \,\alpha_{0} + b_2^{(I)} \,\alpha_{I} + b_{1}^{(I)} \,\beta^{I} + b_{0} \,\beta^{0} \ ,
\label{H3expan} \\[2mm]
\bar F_3 & = & a_{3} \,\alpha_{0} + a_{2}^{(I)} \,\alpha_{I} +
a_{1}^{(I)} \,\beta^{I} + a_{0} \,\beta^{0} \ .
\label{F3expan}
\end{eqnarray}
All flux coefficients are integers because the integrals
of $\bar H_3$ and $\bar F_3$ over 3-cycles are quantized. To avoid subtleties with exotic orientifold planes
we take all fluxes to be even \cite{frey, kst}.
As argued originally in \cite{glmw, kstt}, applying one T-duality transformation to the NSNS fluxes
can give rise to geometric fluxes $f^a_{bc}$ that correspond to structure constants of the isometry algebra
of the internal space. Performing further T-dualities leads to generalized fluxes denoted $Q_c^{ab}$ and $R^{abc}$ \cite{stw1}.
The $Q_c^{ab}$ are called non-geometric fluxes because the resulting metric after two T-dualities yields a background that is
locally but not globally geometric \cite{stw2, wecht}. Compactifications with $R^{abc}$ fluxes are not even
locally geometric but these fluxes are necessary to maintain T-duality between type IIA and type IIB.
The geometric and the R-fluxes must be even under the orientifold involution and are thus totally absent in type IIB with O3/O7-planes.
On the other hand, the non-geometric fluxes must be odd and are fully permitted.
The main motivation of this work is to study supersymmetric vacua in toroidal type IIB orientifolds
with NSNS, RR and non-geometric $Q$-fluxes turned on. In our construction, the ${\mathbb Z}_2 \times {\mathbb Z}_2$
symmetry only allows 24 components of the flux tensor $Q_c^{ab}$, namely those with one leg on each
2-torus. This set of non-geometric fluxes is displayed in table \ref{tableNonGeometric}. All components
of the tensor $Q$ are integers that we take to be even.
\begin{table}[htb]
\begin{center}\begin{tabular}{|c|c|c|}
\hline
Type & Components & Fluxes \\
\hline
\hline
$Q_{-}^{--} \equiv Q_{\alpha}^{\beta \gamma}$ & $ Q_{1}^{35}\,,\,Q_{3}^{51}\,,\,Q_{5}^{13}$ &
$\tilde{c}_{1}^{\,(1)}\,,\,\tilde{c}_{1}^{\,(2)}\,,\,\tilde{c}_{1}^{\,(3)}$ \\
\hline
\hline
$Q_{|}^{|-} \equiv Q_{k}^{i \beta} $ & $ Q_{4}^{61}\,,\,Q_{6}^{23}\,,\,Q_{2}^{45}$ &
$ \hat{c}_{1}^{\,(1)}\,,\,\hat{c}_{1}^{\,(2)}\,,\,\hat{c}_{1}^{\,(3)}$ \\
\hline
\hline
$Q_{|}^{-|} \equiv Q_{k}^{\alpha j}$ & $ Q_{6}^{14}\,,\,Q_{2}^{36}\,,\,Q_{4}^{52}$ &
$ \check{c}_{1}^{\,(1)}\,,\,\check{c}_{1}^{\,(2)}\,,\,\check{c}_{1}^{\,(3)}$ \\
\hline
\hline
$Q_{|}^{--} \equiv Q_{k}^{\alpha\beta}$ & $Q_{2}^{35}\,,\,Q_{4}^{51}\,,\,Q_{6}^{13}$ &
$ c_{0}^{\,(1)}\,,\,c_{0}^{\,(2)}\,,\,c_{0}^{\,(3)}$\\
\hline
\hline
$Q_{-}^{||} \equiv Q_{\gamma}^{i j}$ & $ Q_{1}^{46}\,,\,Q_{3}^{62}\,,\,Q_{5}^{24}$ &
$ c_{3}^{\,(1)}\,,\,c_{3}^{\,(2)}\,,\,c_{3}^{\,(3)}$ \\
\hline
\hline
$Q_{-}^{|-} \equiv Q_{\gamma}^{i \beta}$ & $Q_{5}^{23}\,,\,Q_{1}^{45}\,,\,Q_{3}^{61}$ &
$\check{c}_{2}^{\,(1)}\,,\,\check{c}_{2}^{\,(2)}\,,\,\check{c}_{2}^{\,(3)}$ \\
\hline
\hline
$Q_{-}^{-|} \equiv Q_{\beta}^{\gamma i}$ & $ Q_{3}^{52}\,,\,Q_{5}^{14}\,,\,Q_{1}^{36}$ &
$\hat{c}_{2}^{\,(1)}\,,\,\hat{c}_{2}^{\,(2)}\,,\,\hat{c}_{2}^{\,(3)}$ \\
\hline
\hline
$Q_{|}^{||} \equiv Q_{k}^{i j}$ & $Q_{2}^{46}\,,\,Q_{4}^{62}\,,\,Q_{6}^{24}$ &
$\tilde{c}_{2}^{\,(1)}\,,\,\tilde{c}_{2}^{\,(2)}\,,\,\tilde{c}_{2}^{\,(3)}$ \\
\hline
\end{tabular}\end{center}
\caption{Non-geometric $Q$-fluxes.}
\label{tableNonGeometric}
\end{table}
\subsection{Effective action}
\label{ssec:action}
The NSNS, RR and non-geometric fluxes induce a potential for the closed string moduli.
We will focus on the untwisted moduli of the toroidal orientifold.
To write explicitly the effective action, recall first that
the axiodilaton and the complex structure moduli are given by
\begin{equation}
S = C_0 + i e^{-\phi} \qquad ; \qquad U_I = \tau_I \quad ; \quad I=1,2,3 \ ,
\label{sumoduli}
\end{equation}
where $C_0$ is the RR 0-form, $\phi$ is the 10-dimensional dilaton and the $\tau_I$ are
the components of the period matrix. The K\"ahler moduli $T_I$ are instead extracted from the expansion
of the complexified K\"ahler 4-form ${\mathcal J}$, i.e. ${\mathcal J}=-\sum T_{I} \, \tilde{\omega}^{I}$. In turn,
the real (axionic) part of ${\mathcal J}$ arises from the RR 4-form $C_4$ whereas the imaginary part is
$e^{-\phi} J\wedge J/2$, where $J$ is the fundamental K\"ahler form. In fact, ${\rm Im \,} T_I$ is basically
the area of the 4-cycle dual to the 4-form $\tilde\omega^I$.
We are interested in compactifications that preserve \neq1 supersymmetry in four dimensions.
In this case we know that the scalar potential can be computed from the K\"ahler potential and
the superpotential.
The K\"ahler potential for the moduli is given by the usual expression
\begin{equation}
K =-\sum_{K=1}^{3}\log\left( -i\,(U_{K}-\bar{U}_{K})\right) - \,\log\left( -i\,(S-\bar{S})\right) -
\sum_{K=1}^{3} \log\left( -i\,(T_{K}-\bar{T}_{K})\right) \ ,
\end{equation}
which is valid to first order in the string and sigma model perturbative expansions.
The NSNS and RR fluxes induce a superpotential only for $S$ and the $U_I$.
In absence of non-geometric fluxes K\"ahler moduli do not enter
in the superpotential and non-perturbative effects such as gaugino condensation are required to get
vacua with all moduli fixed. The $Q$-fluxes generate new couplings involving K\"ahler fields, thereby
opening the possibility to stabilize all types of closed string moduli.
The general superpotential can be computed from \cite{acfi}
\begin{equation}
\label{WInt}
W=\int_{{\rm M}_{6}} \!\!\! \left(G_{3} \,+\,Q\,\mathcal{J} \right) \,\wedge\, \Omega \ ,
\end{equation}
where $G_{3}= \bar F_{3}-\,S\,\bar H_{3}$, and $Q{\mathcal J}$ is a 3-form with components defined by
\begin{equation}
(Q\,\mathcal{J})_{abc}=\frac{1}{2} \,Q_{[a}^{mn}\, \mathcal{J}_{bc]\,mn} \ .
\label{qjcomp}
\end{equation}
Being a 3-form, $Q{\mathcal J}$ can be expanded in the basis (\ref{basisab}). We obtain
\begin{equation}
\label{QJexpan}
Q\,\mathcal{J}=T_{K} \left( c_{3}^{(K)} \,\alpha_{0} - {\mathcal C}_{2}^{(I K)} \,\alpha_{I} - {\mathcal C}_{1}^{(I K)} \,\beta^{I}
+ c_{0}^{(K)} \,\beta^{0} \right) \ ,
\end{equation}
where ${\mathcal C}_1$ and ${\mathcal C}_2$ are the non-geometric flux matrices
\begin{equation}
{\mathcal C}_{1}=\left(
\begin{array}{lll}
-\tilde{c}_{1}^{\,(1)} & \check{c}_{1}^{\,(3)} & \hat{c}_{1}^{\,(2)} \\
\hat{c}_{1}^{\,(3)} & -\tilde{c}_{1}^{\,(2)} & \check{c}_{1}^{\,(1)} \\
\check{c}_{1}^{\,(2)} & \hat{c}_{1}^{\,(1)} & -\tilde{c}_{1}^{\,(3)} \\
\end{array}
\right)
\qquad ,\qquad
{\mathcal C}_{2}=\left(
\begin{array}{lll}
-\tilde{c}_{2}^{\,(1)} & \check{c}_{2}^{\,(3)} & \hat{c}_{2}^{\,(2)} \\
\hat{c}_{2}^{\,(3)} & -\tilde{c}_{2}^{\,(2)} & \check{c}_{2}^{\,(1)} \\
\check{c}_{2}^{\,(2)} & \hat{c}_{2}^{\,(1)} & -\tilde{c}_{2}^{\,(3)} \\
\end{array}
\right) \ .
\label{c1c2mat}
\end{equation}
The expansion for the 3-form $G_3$ that combines the NSNS and the RR fluxes can be read off from (\ref{H3expan}) and (\ref{F3expan}).
Substituting the expansions of the holomorphic 3-form and the background fluxes in (\ref{WInt}) shows that the superpotential takes the form
\begin{equation}
W=P_{1}(U) + P_{2}(U)\,S + \sum_{K=1}^{3} P_{3}^{\,(K)}(U)\,T_{K} \ .
\label{fullW}
\end{equation}
The $P$'s are cubic polynomials in the complex structure moduli given by
\begin{eqnarray}
P_{1}(U) & = & a_{0} -\sum_{K=1}^{3} a_{1}^{\,(K)}\,U_{K} +
\sum_{K=1}^{3} a_{2}^{\,(K)} \frac{U_{1}U_{2}U_{3}}{U_{K}} - a_{3} U_{1}U_{2}U_{3} \ ,
\label{p1gen} \\[2mm]
P_{2}(U) & = & -b_{0} +\sum_{K=1}^{3} b_{1}^{\,(K)}\,U_{K} -
\sum_{K=1}^{3} b_{2}^{\,(K)} \frac{U_{1}U_{2}U_{3}}{U_{K}} + b_{3} U_{1}U_{2}U_{3} \ ,
\label{p2gen} \\[2mm]
P_{3}^{\,(K)}(U) & = & c_{0}^{\,(K)} +\sum_{L=1}^{3} {\mathcal C}_{1}^{\,(L K)}\,U_{L} -
\sum_{L=1}^{3} {\mathcal C}_{2}^{\,(L K)} \frac{U_{1}U_{2}U_{3}}{U_{L}} -c_{3}^{\,(K)} U_{1}U_{2}U_{3} \ .
\label{p3gen}
\end{eqnarray}
The main feature of the flux superpotential is that it depends on all untwisted closed string moduli.
At this point we have a model with seven moduli whose potential depends on forty flux
parameters. Finding vacua in this generic setup is rather cumbersome. For this reason we consider
a simpler configuration in which the fluxes are isotropic. Concretely, we make the Ansatz
\begin{eqnarray}
\tilde{c}_{1}^{\,(I)} \equiv \tilde{c}_{1} \quad ; \!\! &{}&
\hat{c}_{1}^{\,(I)} \equiv \hat{c}_{1} \quad ; \quad
\check{c}_{1}^{\,(I)} \equiv \check{c}_{1} \quad ; \quad
\tilde{c}_{2}^{\,(I)} \equiv \tilde{c}_{2} \quad ; \quad
\hat{c}_{2}^{\,(I)} \equiv \hat{c}_{2} \quad ; \quad
\check{c}_{2}^{\,(I)} \equiv \check{c}_{2} \ ,
\nonumber \\
&{}& b_{1}^{\,(I)}\equiv b_{1} \quad ; \quad
b_{2}^{\,(I)} \equiv b_{2} \quad ; \quad
a_{1}^{\,(I)} \equiv a_{1} \quad ; \quad
a_{2}^{\,(I)} \equiv a_{2} \ .
\label{isofluxes}
\end{eqnarray}
Isotropic fluxes are summarized in tables \ref{tableIsoNSRR} and \ref{tableIsoNon-Geometric}.
\begin{table}[htb]
\begin{center}\begin{tabular}{|c|c|c|c||c|c|c|c|}
\hline
$\bar{F}_{---}$ & $\bar{F}_{|--}$ & $\bar{F}_{-||}$ & $\bar{F}_{|||}$ &
$\bar{H}_{---}$ & $\bar{H}_{|--}$ & $\bar{H}_{-||}$ & $\bar{H}_{|||}$\\
\hline
\hline
$a_{3}$ & $a_{2}$ & $a_{1}$ & $a_{0}$ & $b_{3}$ & $b_{2}$ & $b_{1}$ & $b_{0}$ \\
\hline
\end{tabular}\end{center}
\caption{NS and RR isotropic fluxes. }
\label{tableIsoNSRR}
\end{table}
\begin{table}[htb]
\begin{center}\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$Q_{-}^{--}$ & $Q_{|}^{|-}$ & $Q_{|}^{-|}$ & $Q_{|}^{--}$ & $Q_{-}^{||}$ & $Q_{-}^{|-}$ & $Q_{-}^{-|}$ & $Q_{|}^{||}$\\
\hline
\hline
$\tilde{c}_{1}$ & $ \hat{c}_{1}$ & $ \check{c}_{1}$ & $ c_{0}$ & $c_{3}$ &
$\check{c}_{2}$ & $\hat{c}_{2}$ & $\tilde{c}_{2}$\\
\hline
\end{tabular}\end{center}
\caption{Non-geometric isotropic fluxes.}
\label{tableIsoNon-Geometric}
\end{table}
The Ansatz of isotropic fluxes is compatible with vacua in which the geometric moduli are also isotropic, namely
\begin{equation}
U_{1}=U_{2}=U_{3}\equiv U \quad ; \quad T_{1}=T_{2}=T_{3}\equiv T \ .
\label{isoUT}
\end{equation}
This means that there is only one overall complex structure modulus $U$ and one K\"ahler modulus $T$.
The model also includes the axiodilaton. In this case, the K\"ahler potential and the superpotential
reduce to
\begin{eqnarray}
K & = & -3\,\log\left( -i\,(U-\bar{U})\right) - \,\log\left( -i\,(S-\bar{S})\right) -
3\,\log\left( -i\,(T-\bar{T})\right) \nonumber \\[2mm]
W & = &P_{1}(U) + P_{2}(U)\,S + P_{3}(U)\,T \quad .
\label{kwiso}
\end{eqnarray}
The $P$'s are now cubic polynomials in the single complex structure modulus. They are given by
\begin{eqnarray}
P_{1}(U) & = & a_{0}-3\,a_{1}\,U+3\,a_{2}\,U^{2}-a_{3}\,U^{3} \ ,
\label{P1Iso} \\[2mm]
P_{2}(U) & = & -b_{0}+3\,b_{1}\,U-3\,b_{2}\,U^{2}+b_{3}\,U^{3} \ ,
\label{P2Iso} \\[2mm]
P_{3}(U) & = & 3\, \left( c_{0}+ (\hat{c}_{1}+\check{c}_{1}-\tilde{c}_{1}) \,U -
(\hat{c}_{2}+\check{c}_{2}-\tilde{c}_{2})\,U^{2} - c_{3}\,U^{3} \right) \ .
\label{P3Iso}
\end{eqnarray}
This is the model considered in \cite{stw1,stw2}.
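One can check numerically that, under the Ansatz (\ref{isofluxes}) with $U_1=U_2=U_3=U$, the general polynomial (\ref{p1gen}) collapses to the isotropic cubic (\ref{P1Iso}); the checks for $P_2$ and $P_3$ are identical in structure. A minimal sketch:

```python
import random
random.seed(1)

# Random isotropic RR fluxes and a common complex-structure modulus U
a0, a1, a2, a3 = (random.uniform(-4, 4) for _ in range(4))
U1 = U2 = U3 = U = complex(0.3, 1.7)

# General P1 (eq. p1gen) with a1^(K) = a1 and a2^(K) = a2 for all K;
# the sums U1U2U3/U_K are written out as the symmetric quadratic terms.
P1_general = (a0
              - a1 * (U1 + U2 + U3)
              + a2 * (U2 * U3 + U1 * U3 + U1 * U2)
              - a3 * U1 * U2 * U3)

# Isotropic cubic (eq. P1Iso)
P1_iso = a0 - 3 * a1 * U + 3 * a2 * U**2 - a3 * U**3
```

The two expressions agree to floating-point precision for any $U$, confirming the reduction.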
\subsection{Bianchi identities and tadpoles}
\label{ssec:bianchi}
The NSNS and generalized fluxes that follow from the T-duality chain can be regarded as structure
constants of an extended symmetry algebra of the compactification \cite{stw1, dabholkar}. This algebra includes isometry
generators $Z_{a}$ as well as gauge symmetry generators $X^{a}$, $a=1,\ldots, 6$,
coming from the reduction of the $B$-field on $T^{6}$ with fluxes. We are interested in type IIB with O3/O7-planes where geometric and $R$-fluxes
are forbidden. In this case the algebra is given by
\begin{eqnarray}
\left[ X^{a} , X^{b} \right]&=&Q_{c}^{ab}\,X^{c} \ , \nonumber \\
\left[ Z_{a} , X^{b} \right]&=&Q_{a}^{bc}\,Z_{c} \ , \label{zxalgebra}\\
\left[ Z_{a} , Z_{b} \right]&=&\bar H_{abc}\,X^{c} \ . \nonumber
\end{eqnarray}
Notice that the $X^a$ span a 6-dimensional subalgebra in which the non-geometric $Q_c^{ab}$ are the structure constants.
Computing the Jacobi identities of the full 12-dimensional algebra we obtain the constraints
\begin{equation}
\label{BianchiGen}
\bar H_{x[bc}\,Q^{ax}_{d]} =0 \qquad ; \qquad Q_{x}^{[ab}\,Q^{c]x}_{d}=0 \ .
\end{equation}
In the following we will refer to these identities in the shorthand notation $\bar H Q=0$ and
$Q Q=0$. The constraints on the fluxes can also be interpreted in terms of a nilpotency condition
${\mathcal D}^2=0$ on the operator ${\mathcal D}=H\wedge + Q\cdot$ introduced in \cite{stw2}.
The RR fluxes are also constrained by Bianchi identities of the type ${\mathcal D} \bar F={\mathcal S}$, where ${\mathcal S}$
is a generalized form due to sources that are assumed smeared instead of localized. These Bianchi identities can be
understood as tadpole cancellation conditions on the RR 4-form $C_4$ and $C_8$ that couple to the sources.
The sources are just the orientifold O3/O7-planes and D3/D7-branes that can be present. In the IIB
orientifold that we are considering there is a flux-induced $C_4$ tadpole due to the coupling
\begin{equation}
\int_{{\rm M}_{4} \times {\rm M}_{6}} C_{4} \wedge \bar H_{3} \wedge \bar F_{3} \ .
\label{c4tad}
\end{equation}
There are further $C_4$ tadpoles due to O3-planes and to D3-branes that can also be added.
The total orientifold charge is $-32$, equally distributed among 64 O3-planes located at the fixed points of the
orientifold involution $\sigma$. Each D3-brane has charge $+1$, and if the branes are located in the bulk,
as opposed to at fixed points of ${\mathbb Z}_2^3$, their images must be included. Adding the sources to the flux
tadpole (\ref{c4tad}) leads to the cancellation condition
\begin{equation}
\label{O3tad}
a_{0}\,b_{3} - a_{1}^{(K)}\,b_{2}^{(K)} + a_{2}^{(K)}\,b_{1}^{(K)} - a_{3}\,b_{0}=N_{3} \ ,
\end{equation}
where $N_3=32-N_{\rm D3}$ and $N_{\rm D3}$ is the total number of D3-branes.
The non-geometric and RR fluxes can also combine to produce a tadpole for the RR $C_8$ form. The
contraction $Q \bar F_3$ is a 2-form and the flux-induced tadpole is due to the coupling
\begin{equation}
\int_{{\rm M}_{4} \times {\rm M}_{6}} C_{8} \wedge (Q \bar F_{3}) \ .
\label{c8tad}
\end{equation}
Expanding the 2-form $(Q \bar{F}_{3})$ in the basis of 2-forms $\omega_I$, $I=1,2,3$, yields coefficients
\begin{equation}
( Q \bar{F}_{3})_{I}=a_{0}\,c_{3}^{(I)}+a_{1}^{(K)}\,{\mathcal C}_{2}^{(K I)} - a_{2}^{(K)}\,{\mathcal C}_{1}^{(K I)}-a_{3}\,c_{0}^{(I)}
\quad ; \quad I=1,2,3 \ .
\label{o3d3tad}
\end{equation}
This means that there are induced tadpoles for $C_8$ components of type
$C_8 \sim d{\rm vol}_4 \wedge \widetilde{\omega}^I$, where $d{\rm vol}_4$ is the space-time volume 4-form
and $\widetilde{\omega}^I$ is the 4-form dual to $\omega_I$.
On the other hand, there are also $C_8$ tadpoles due to \mbox{O$7_I$-planes} that
have a total charge $+32$ for each $I$. As discussed before, due to the orbifold
group ${\mathbb Z}_2 \times {\mathbb Z}_2$, there are \mbox{O$7_I$-planes} located at the 4 fixed tori of
$\sigma \theta_I$, where $\theta_I$ are the three order-two elements of ${\mathbb Z}_2 \times {\mathbb Z}_2$.
In the end we find the three tadpole cancellation conditions
\begin{equation}
\label{O7tad}
a_{0}\,c_{3}^{(I)}+a_{1}^{(K)}\,{\mathcal C}_{2}^{(K I)} - a_{2}^{(K)}\,{\mathcal C}_{1}^{(K I)}-a_{3}\,c_{0}^{(I)}
=N_{7_I} \quad ; \quad I=1,2,3 \ ,
\end{equation}
where $N_{7_I}=-32+N_{{\rm D}_{7_I}}$ and $N_{{\rm D}_{7_I}}$ is the number of
\mbox{D$7_I$-branes} that are generically allowed.
In this work we mostly consider isotropic fluxes, so we will again make the Ansatz (\ref{isofluxes}).
The Jacobi identities as well as the tadpole cancellation conditions become simpler. Computing the $QQ=0$ constraints
from (\ref{BianchiGen}) leaves us with
\begin{eqnarray}
\label{BianchiXhatcheck}
\hat{c}_{2}\,\tilde{c}_{1} - \tilde{c}_{1}\,\check{c}_{2} + \check{c}_{1}\,\hat{c}_{2} - c_{0}\,c_{3}=0 \quad & ; & \quad
c_{3}\,\tilde{c}_{1} - \check{c}_{2}^{2} + \tilde{c}_{2}\,\hat{c}_{2} - \hat{c}_{1}\,c_{3}=0 \ , \nonumber \\
c_{3}\,c_{0} - \check{c}_{2}\,\hat{c}_{1} + \tilde{c}_{2}\,\check{c}_{1} - \hat{c}_{1}\,\tilde{c}_{2}=0 \quad & ; & \quad
c_{0}\,\tilde{c}_{2} - \check{c}_{1}^{2} + \tilde{c}_{1}\,\hat{c}_{1} - \hat{c}_{2}\,c_{0}=0 \ ,
\end{eqnarray}
plus one additional copy of each condition with $\check{c}_{i} \leftrightarrow \hat{c}_{i}$.
An important result is that saturating\footnote{This can be done using a computational algebra program such as Singular \cite{singular}
and solving over the real field. In \cite{stw1}, an analogous result is obtained by manipulating this set of polynomial
constraints by hand.} this ideal with respect to the conditions $\check{c}_{i} \not= \hat{c}_{i}$
automatically implies that the $\tilde{c}_{i}$ are complex. Therefore, it must be that
\begin{equation}
\check{c}_{1}=\hat{c}_{1} \equiv c_{1} \qquad ; \qquad
\check{c}_{2}=\hat{c}_{2} \equiv c_{2} \ .
\label{oneci}
\end{equation}
The cubic polynomial that couples the complex structure and K\"ahler moduli, cf.\ (\ref{P3Iso}), then reduces to
\begin{equation}
\label{P3Iso2}
P_{3}(U)=3\, \left( c_{0}+ (2\,c_{1}-\tilde{c}_{1}) \,U - (2\,c_{2}-\tilde{c}_{2})\, U^{2} - c_{3}\,U^{3} \right) \ .
\end{equation}
Recall that the non-geometric fluxes are integer parameters.
Upon using (\ref{oneci}), the Jacobi constraints satisfied by the non-geometric fluxes become
\begin{eqnarray}
\label{BianchiC}
c_0 \left(c_2-\tilde{c}_2\right)+ c_1\,(c_1-\tilde{c}_1) &=&0 \ , \nonumber \\
c_2\,(c_2-\tilde{c}_2)+c_3 \left(c_1-\tilde{c}_1\right) &=&0 \ , \\
c_0 c_3-c_1 c_2 &=&0 \ . \nonumber
\end{eqnarray}
This system of equations is easy to solve explicitly. The solution variety has three disconnected
pieces of different dimensions. The first piece has dimension four and is characterized by the fluxes
\begin{equation}
\begin{array}{lclcl}
c_{3}= \lambda_{p}\,k_2 & \quad ; \quad & c_{2}= \lambda_{p}\,k_1 & \quad ; \quad &
\tilde{c}_{1}= \lambda_{q}\,k_2 + \lambda k_{1} \quad ; \\
c_{1}= \lambda_{q}\,k_2 & \quad ; \quad & c_{0}= \lambda_{q}\,k_1 & \quad ; \quad &
\tilde{c}_{2}= \lambda_{p}\,k_1 - \lambda k_{2} \quad .
\end{array}
\label{PieceA}
\end{equation}
Here $\lambda=1$, $(k_1,k_2)$ are two integers, not both zero,
and $(\lambda_p, \lambda_q)$ are rational parameters given by
\begin{equation}
\lambda_{p}=1+\frac{p}{\msm{\rm GCD}(k_{1},k_{2})} \quad ; \quad
\lambda_{q}=1+\frac{q}{\msm{\rm GCD}(k_{1},k_{2})} \ ,
\label{raylambda}
\end{equation}
where $p, q \in {\mathbb Z}$. By convention, ${\rm GCD}(n,0)=|n|$.
With coefficients given by the fluxes (\ref{PieceA}) the polynomial $P_3(U)$ turns out to factorize as
\begin{equation}
P_{3}(U)=3\, (k_1 + k_2 \,U) \,(\lambda_q -\lambda \, U - \lambda_{p}\,U^{2}) \ .
\label{p3fact}
\end{equation}
Notice that we have taken into account that the non-geometric fluxes are integers.
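As an illustrative aside, not part of the original derivation, the parametrization (\ref{PieceA}) and the factorization (\ref{p3fact}) can be verified symbolically, for instance with SymPy; $\lambda$ is kept symbolic here, although the text sets $\lambda=1$ for this piece:

```python
import sympy as sp

k1, k2, lam, lp, lq, U = sp.symbols('k1 k2 lam lambda_p lambda_q U')

# Fluxes of the four-dimensional solution piece, eq. (PieceA)
c3, c2 = lp*k2, lp*k1
c1, c0 = lq*k2, lq*k1
ct1 = lq*k2 + lam*k1      # \tilde c_1
ct2 = lp*k1 - lam*k2      # \tilde c_2

# Jacobi constraints QQ = 0, eq. (BianchiC)
jacobi = [c0*(c2 - ct2) + c1*(c1 - ct1),
          c2*(c2 - ct2) + c3*(c1 - ct1),
          c0*c3 - c1*c2]
assert all(sp.expand(e) == 0 for e in jacobi)

# The cubic P3(U) of eq. (P3Iso2) factorizes as in eq. (p3fact)
P3 = 3*(c0 + (2*c1 - ct1)*U - (2*c2 - ct2)*U**2 - c3*U**3)
assert sp.expand(P3 - 3*(k1 + k2*U)*(lq - lam*U - lp*U**2)) == 0
```

The same check with $\lambda\equiv 0$ covers the lower-dimensional solution pieces as well.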
The second piece of solutions is three-dimensional; the set of fluxes can still be characterized
by (\ref{PieceA}) and $P_3(U)$ by (\ref{p3fact}), but with $\lambda\equiv 0$ and $\lambda_p\equiv 1$.
Finally, the third piece is only two-dimensional, with fluxes and $P_3(U)$ specified by setting
$\lambda\equiv 0$, $\lambda_p\equiv 0$ and $\lambda_q\equiv 1$.
As a byproduct of the above analysis we have isolated the real root of $P_3(U)$ that always exists.
In the next section we will explain how the nature of the remaining two roots is correlated with
the type of algebra fulfilled by the $X^a$ generators. For example, we will see that in the third piece
of solutions with $k_2=0$, the algebra is nilpotent.
Let us now consider the constraints $\bar H Q=0$ that mix non-geometric and NSNS fluxes.
Inserting the isotropic fluxes in (\ref{BianchiGen}), and using (\ref{oneci}), we find
\begin{eqnarray}
\label{BianchiB}
b_2 c_0-b_0 c_2+b_1(c_1- \tilde{c}_1) &=&0 \ , \nonumber \\
b_3 c_0-b_1 c_2+b_2 (c_1-\tilde{c}_1)&=&0 \ , \nonumber \\
b_2 c_1-b_0 c_3-b_1(c_2-\tilde{c}_2) &=&0 \ , \\
b_3 c_1-b_1 c_3-b_2 (c_2-\tilde{c}_2)&=&0 \ . \nonumber
\end{eqnarray}
These conditions restrict the NSNS fluxes $b_A$ that determine the coupling between the
complex structure and the dilaton moduli through the polynomial $P_2(U)$ in (\ref{P2Iso}).
In the next section we will discuss solutions to the full set of constraints that will lead
to specific forms for the polynomials $P_2(U)$ and $P_3(U)$.
The tadpole cancellation relations also become simpler in the isotropic case. In particular,
the three constraints in (\ref{O7tad}), depending on $I$, reduce to just one condition.
Substituting the isotropic Ansatz and (\ref{oneci}) we obtain
\begin{equation}
\label{O3tadIso}
a_{0}\,b_{3}-3\,a_{1}\,b_{2}+3\,a_{2}\,b_{1}-a_{3}\,b_{0}=N_{3} \ ,
\end{equation}
\begin{equation}
\label{O7tadIso}
a_{0}\,c_{3}+a_{1}\,(2\,c_{2} -\tilde{c}_{2})-a_{2}\,(2\,c_{1} -\tilde{c}_{1})-a_{3}\,c_{0}=N_{7} \ .
\end{equation}
These conditions constrain the RR fluxes. We consider the net O3/D3 and O7/D7 charges, $N_3$ and $N_7$,
to be free parameters.
\section{Algebras and fluxes}
\label{sec:alg}
In this section we discuss solutions to the Jacobi identities satisfied by the NSNS and
the non-geometric $Q$ fluxes. The key idea is twofold. First, the generators $X^a$ in (\ref{zxalgebra}) span
a six-dimensional subalgebra whose structure constants are precisely the $Q^{ab}_c$. Second, when these
fluxes are invariant under the ${\mathbb Z}_2^3$ symmetry described in section \ref{ssec:fluxes},
this subalgebra is rather constrained. We expect only a few subalgebras to be allowed and our
strategy is to identify them. In this way we will manage to provide explicit parametrizations
for non-geometric fluxes that satisfy the identity $QQ=0$. Once this is achieved, we will also be able to
find the corresponding NSNS fluxes that fulfill $\bar H Q=0$.
We want to consider in detail the set of isotropic non-geometric fluxes given in
table \ref{tableIsoNon-Geometric} plus the conditions $\check{c}_{1}=\hat{c}_{1} \equiv c_{1}$,
$\check{c}_{2}=\hat{c}_{2} \equiv c_{2}$. In this case the subalgebra simplifies to
\begin{eqnarray}
\left[X^{2I-1}, X^{2J-1}\right] & = & \epsilon_{IJK} \left( \tilde c_1 \, X^{2K-1} + c_0\, X^{2K}\right)
\nonumber \ , \\
\left[ X^{2I-1}, X^{2J}\right] & = & \epsilon_{IJK} \left(c_2 \, X^{2K-1} + c_1\, X^{2K}\right)
\label{subiso} \ , \\
\left[ X^{2I}, X^{2J}\right] & = & \epsilon_{IJK} \left(c_3 \, X^{2K-1} + \tilde c_2 \, X^{2K}\right)
\nonumber \ ,
\end{eqnarray}
where $I,J,K=1,2,3$.
The Jacobi identities of this algebra are given in (\ref{BianchiC}).
To reveal further properties, it is instructive to compute the Cartan-Killing metric, denoted
${\mathcal M}$, with components
\begin{equation}
{\mathcal M}^{ab}= Q_{c}^{ad} \,\,Q_{d}^{bc} \ .
\label{ckmetric}
\end{equation}
For the above algebra of isotropic fluxes we find that the six-dimensional matrix ${\mathcal M}$ is block-diagonal,
namely
\begin{equation}
{\mathcal M} = {\rm diag \,}({\mathcal X}_2, {\mathcal X}_2, {\mathcal X}_2) \ .
\label{mkcblock}
\end{equation}
The $2\times 2$ matrix ${\mathcal X}_2$ turns out to be
\begin{equation}
{\mathcal X}_2=-2\,\left(
\begin{array}{ll}
\qquad \tilde c_1^2 + 2 c_0c_2 + c_1^2 & \tilde c_1 c_2 + c_1 c_2 + c_0 c_3 + c_1 \tilde c_2 \\
\tilde c_1 c_2 + c_1 c_2 + c_0 c_3 + c_1 \tilde c_2 & \qquad \tilde c_2^2 + 2 c_1c_3 + c_2^2 \\
\end{array}
\right)
\ .
\label{mkc2}
\end{equation}
Since ${\mathcal X}_2$ is symmetric, we conclude that ${\mathcal M}$ can have up to two distinct
real eigenvalues, each with multiplicity three.
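As a cross-check, an addition to the text, the block-diagonal form (\ref{mkcblock}) with ${\mathcal X}_2$ as in (\ref{mkc2}) can be reproduced by assembling the structure constants of (\ref{subiso}) and computing ${\mathcal M}^{ab}= Q_{c}^{ad}\,Q_{d}^{bc}$ directly; a sketch in SymPy:

```python
import sympy as sp

c0, c1, c2, c3, ct1, ct2 = sp.symbols('c0 c1 c2 c3 ct1 ct2')

# Structure constants f[a][b][c]: [X^a, X^b] = f[a][b][c] X^c, indices 0..5
# with the pairing X^{2I-1} -> 2I-2 and X^{2I} -> 2I-1, from eq. (subiso).
f = [[[sp.Integer(0)]*6 for _ in range(6)] for _ in range(6)]
for I in range(1, 4):
    for J in range(1, 4):
        for K in range(1, 4):
            e = sp.LeviCivita(I, J, K)
            if e == 0:
                continue
            f[2*I-2][2*J-2][2*K-2] += e*ct1   # [X^{2I-1}, X^{2J-1}]
            f[2*I-2][2*J-2][2*K-1] += e*c0
            f[2*I-2][2*J-1][2*K-2] += e*c2    # [X^{2I-1}, X^{2J}]
            f[2*I-2][2*J-1][2*K-1] += e*c1
            f[2*I-1][2*J-2][2*K-2] += e*c2    # [X^{2I}, X^{2J-1}], by antisymmetry
            f[2*I-1][2*J-2][2*K-1] += e*c1
            f[2*I-1][2*J-1][2*K-2] += e*c3    # [X^{2I}, X^{2J}]
            f[2*I-1][2*J-1][2*K-1] += e*ct2

# Cartan-Killing metric M^{ab} = Q_c^{ad} Q_d^{bc} = tr(ad_a ad_b)
M = sp.Matrix(6, 6, lambda p, q: sum(f[p][d][c]*f[q][c][d]
                                     for c in range(6) for d in range(6)))

# Expected 2x2 block, eq. (mkc2)
X2 = -2*sp.Matrix([
    [ct1**2 + 2*c0*c2 + c1**2, ct1*c2 + c1*c2 + c0*c3 + c1*ct2],
    [ct1*c2 + c1*c2 + c0*c3 + c1*ct2, ct2**2 + 2*c1*c3 + c2**2]])
assert (M - sp.diag(X2, X2, X2)).applyfunc(sp.expand) == sp.zeros(6)
```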
The full 12-dimensional algebra also enjoys distinctive features.
In the isotropic case the remaining algebra commutators involving NSNS fluxes are given by
\begin{eqnarray}
\left[Z_{2I-1}, Z_{2J-1}\right] & = & \epsilon_{IJK} \left(b_3 \, X^{2K-1} + b_2\, X^{2K}\right)
\ , \nonumber \\
\left[ Z_{2I-1}, Z_{2J}\right] & = & \epsilon_{IJK} \left(b_2 \, X^{2K-1} + b_1\, X^{2K}\right)
\ , \label{zzxiso} \\
\left[ Z_{2I}, Z_{2J}\right] & = & \epsilon_{IJK} \left(b_1 \, X^{2K-1} + b_0 \, X^{2K}\right)
\ . \nonumber
\end{eqnarray}
The mixed piece of the algebra is determined by the non-geometric fluxes as
\begin{eqnarray}
\left[Z_{2I-1}, X^{2J-1}\right] & = & \epsilon_{IJK} \left(\tilde c_1 \, Z_{2K-1} + c_2\, Z_{2K}\right)
\ , \nonumber \\
\left[Z_{2I-1}, X^{2J}\right] & = & \epsilon_{IJK} \left(c_2 \, Z_{2K-1} + c_3\, Z_{2K}\right)
\ , \nonumber \\
\left[ Z_{2I}, X^{2J-1}\right] & = & \epsilon_{IJK} \left(c_0 \, Z_{2K-1} + c_1\, Z_{2K}\right)
\ , \label{zxziso} \\
\left[ Z_{2I}, X^{2J}\right] & = & \epsilon_{IJK} \left(c_1 \, Z_{2K-1} + \tilde c_2\, Z_{2K}\right)
\ . \nonumber
\end{eqnarray}
Besides the Jacobi identities purely involving non-geometric fluxes, there are the additional mixed constraints
(\ref{BianchiB}).
Computing the full Cartan-Killing metric, denoted ${\mathcal M}_{12}$, shows that there are no mixed $XZ$ terms. In fact,
the matrix is again block-diagonal
\begin{equation}
{\mathcal M}_{12} = {\rm diag \,}({\mathcal X}_2, {\mathcal X}_2, {\mathcal X}_2, {\mathcal Z}_2, {\mathcal Z}_2, {\mathcal Z}_2) \ ,
\label{fullmkcblock}
\end{equation}
with ${\mathcal X}_2$ shown above. The new $2\times 2$ matrix ${\mathcal Z}_2$ is found to be
\begin{equation}
{\mathcal Z}_2=-4\,\left(
\begin{array}{ll}
\quad b_3\tilde c_1 + 2 b_2c_2 + b_1 c_3 & b_2(c_1+\tilde c_1) +b_1(c_2 + \tilde c_2) \\
b_2(c_1+\tilde c_1) +b_1(c_2 + \tilde c_2) & \quad b_0\tilde c_2 + 2 b_1c_1 + b_2 c_0 \\
\end{array}
\right)
\ .
\label{fullmkc2}
\end{equation}
Here we have simplified using the Jacobi identities (\ref{BianchiB}).
We conclude that the allowed 12-dimensional algebras are such that the Cartan-Killing
matrix can have up to four distinct eigenvalues, each with multiplicity three.
Let us now return to the subalgebra spanned by the $X$ generators and the task of
solving the constraints (\ref{BianchiC}) that arise from the Jacobi identities $QQ=0$.
The idea is to fulfill these constraints by choosing the non-geometric fluxes to be the
structure constants of six-dimensional Lie algebras whose Cartan-Killing matrix has the
simple block-diagonal form (\ref{mkcblock}). To proceed it is convenient to distinguish whether
${\mathcal M}$ is non-degenerate or not, i.e. whether the algebra is semisimple or not.
If \mbox{$\det {\mathcal M} \! \not=\! 0$}, and ${\mathcal M}$ is negative definite, the only possible algebra
is the compact $\mathfrak{so(4)} \sim \mathfrak{su(2)^2}$. On the other hand, the only
non-compact semisimple algebra with the required block structure is $\mathfrak{so(3,1)}$.
When $\det {\mathcal M}\! =\! 0$, the algebra is non-semisimple. To begin, in this class we find
two compatible algebras, namely the direct sum
$\mathfrak{su(2) + u(1)^3}$ and the semi-direct sum $\mathfrak{su(2) \oplus u(1)^3}$,
which is isomorphic to the Euclidean algebra $\mathfrak{iso(3)}$.
The remaining possibility is that the non-semisimple algebra is completely solvable. One example is
the nilpotent $\mathfrak{u(1)^6}$, which we disregard because the non-geometric fluxes vanish identically.
A second non-trivial solvable algebra, which is actually nilpotent, will be discussed shortly.
After classifying the allowed 6-dimensional subalgebras the next step is to find the set of corresponding non-geometric fluxes.
Except for the nilpotent example, all other cases have an $\mathfrak{su(2)}$ factor.
This suggests to make a change of basis from $(X^{2I-1}, X^{2I})$, $I=1,2,3$, to new generators
$(E^I, \widetilde E^I)$ such that basically one type, say $E^I$, spans $\mathfrak{su(2)}$. The
${\mathbb Z}_2^3$ symmetries of the fluxes require that we form combinations that transform in a
definite way. For instance, $E^I$ can only be a combination of $X^{2I-1}$ and $X^{2I}$ with the
same $I$. Furthermore, for isotropic fluxes it is natural to make the same transformation for each $I$.
We will then make the $SL(2, {\mathbb R})$ transformation
\begin{equation}
\left(
\begin{array}{c}
E^I \\
\widetilde{E}^I
\end{array}
\right)
= \frac{1}{|\Gamma |^{2}}
\left(
\begin{array}{cc}
-\alpha & \beta \\
-\gamma &\delta
\end{array}
\right)
\left(
\begin{array}{c}
X^{2I-1} \\
X^{2I}
\end{array}
\right) \ ,
\label{chbasis}
\end{equation}
for all $I=1,2,3$. Here $|\Gamma|=\alpha\delta - \beta\gamma$, and it must be that $|\Gamma|\not=0$.
In the following we will refer to $(\alpha, \beta, \gamma, \delta)$ as the $\Gamma$ parameters.
Substituting in (\ref{subiso}) it is straightforward to obtain the algebra satisfied by the new generators
$E^I$ and $\widetilde E^J$. This algebra will depend on the non-geometric fluxes as well as on the parameters
$(\alpha, \beta, \gamma, \delta)$.
We can then prescribe the commutators to have the standard form for the allowed algebras
found previously. For instance, in the direct product examples we impose $\big[E^I, \widetilde E^J\big]=0$.
In the following sections we will discuss each compatible 6-dimensional algebra in more detail. The goal
is to parametrize the non-geometric fluxes in terms of $(\alpha, \beta, \gamma, \delta)$. By construction these fluxes
will satisfy the Jacobi identities of the algebra. We will
then solve the mixed constraints involving the NSNS fluxes.
The main result will be an explicit factorization of the cubic polynomials $P_3(U)$
and $P_2(U)$ that dictate the couplings among the moduli.
\subsection{Semisimple algebras}
The algebra is semisimple when the Cartan-Killing metric is non-degenerate. This means
$\det {\mathcal M} \not=0$ and hence $\det {\mathcal X}_2 \not= 0$. Now, six-dimensional semisimple algebras
are completely classified. If ${\mathcal M}$ is negative definite the algebra is compact so that
it must be $\mathfrak{so(4) \sim su(2) + su(2)}$. When ${\mathcal M}$ has positive eigenvalues
the algebra is non-compact and it could be $\mathfrak{so(3,1)}$ or $\mathfrak{so(2,2)}$ but
the latter does not fit the required block-diagonal form (\ref{mkcblock}).
\subsubsection{$\mathfrak{so(4) \sim su(2)^2}$}
\label{subsubso4}
The standard commutators of this algebra are
\begin{equation}
\big[E^I, E^J\big]=\epsilon_{IJK} E^K \quad ; \quad
\big[\widetilde E^I, \widetilde E^J\big]=\epsilon_{IJK}\widetilde E^K \quad ; \quad \big[E^I, \widetilde E^J\big]=0 \ .
\label{su2su2}
\end{equation}
After performing the change of basis in (\ref{subiso}) we find that the non-geometric fluxes needed to describe this
algebra can be parametrized as
\begin{equation}
\label{LimC}
\begin{array}{lcl}
c_{0}= \beta\, \delta \, (\beta+\delta) & \quad ; \quad & c_{3}=- \,\alpha\, \gamma \, (\alpha+\gamma) \quad , \\
c_{1} = \beta\, \delta \, (\alpha+\gamma) & \quad ; \quad & c_{2}=- \,\alpha\, \gamma \, (\beta+\delta) \quad , \\
\tilde{c}_{2}= \gamma^{2}\, \beta + \alpha^{2}\,\delta & \quad ; \quad &
\tilde{c}_{1}=- \,(\gamma\, \beta^{2} + \alpha\,\delta^{2}) \quad ,
\end{array}
\end{equation}
provided that $|\Gamma|=(\alpha\delta-\beta\gamma) \not= 0$. It is easy to show that these fluxes
verify the Jacobi identities (\ref{BianchiC}).
What we have done is to trade the six non-geometric fluxes, constrained by two independent conditions,
for the four independent parameters $(\alpha,\beta,\gamma, \delta)$. These parameters are real but the resulting
non-geometric fluxes in (\ref{LimC}) must be integers.
For future purposes we need to determine the cubic polynomial $P_3(U)$ that corresponds to the parametrized
non-geometric fluxes. Substituting in (\ref{P3Iso2}) yields
\begin{equation}
P_3(U)=3(\alpha U + \beta)(\gamma U + \delta)\big[(\alpha+\gamma)U + (\beta+\delta)\big] \ .
\label{p3so4}
\end{equation}
This clearly shows that in this case $P_3$ has three real roots. Moreover, the roots are all
different because $|\Gamma|\not=0$. We will prove that for other algebras $P_3$ has either
complex roots or degenerate real roots. The remarkable conclusion is that $P_3$ has three
different real roots if and only if the algebra of the
non-geometric fluxes is the compact $\mathfrak{so(4) \sim su(2) + su(2)}$.
Alternatively, we may start with the condition that the polynomial has three different real roots
that we can choose to be at $0$, $-1$ and $\infty$ without loss of generality. These roots can then
be moved to arbitrary real locations by a linear fractional transformation
\begin{equation}
{\mathcal Z} = \frac{\alpha U + \beta}{\gamma U + \delta} \ ,
\label{zdef}
\end{equation}
with $(\alpha, \beta, \gamma, \delta) \in {\mathbb R}$ and $|\Gamma|\not=0$. By comparing the roots of $P_3$ in terms of
the fluxes with those in terms of the transformation parameters we rediscover the map (\ref{LimC})
and the associated $\mathfrak{su(2)^2}$ algebra.
In the next sections we will see that the variable ${\mathcal Z}$ introduced above plays a very important physical r\^ole.
We now turn to the Jacobi constraints (\ref{BianchiB}) involving the NSNS fluxes.
Inserting the non-geometric fluxes (\ref{LimC}) we find that the $b_A$ can be completely fixed
by the $\Gamma$ parameters plus two new real variables $({\epsilon }_1, {\epsilon }_2)$ as follows
\begin{eqnarray}
b_{0}&=&-\,(\epsilon_{1}\, \beta^{3} + \epsilon_{2}\, \delta^{3}) \ , \nonumber\\
b_{1}&=& \epsilon_{1}\, \alpha\,\beta^{2} + \epsilon_{2}\,\gamma\, \delta^{2} \ , \label{bso4} \\
b_{2}&=& -\,(\epsilon_{1}\, \alpha^{2}\,\beta + \epsilon_{2}\,\gamma^{2}\, \delta )\ , \nonumber \\
b_{3}&=&\epsilon_{1}\, \alpha^{3} + \epsilon_{2}\, \gamma^{3} \ . \nonumber
\end{eqnarray}
We also need to compute the polynomial $P_2(U)$ that depends on the NSNS fluxes. Substituting the above $b_A$
in (\ref{P2Iso}) yields
\begin{equation}
P_2(U)={\epsilon }_1 (\alpha U + \beta)^3 + {\epsilon }_2(\gamma U + \delta)^3 \ .
\label{p2so4}
\end{equation}
It is easy to show that because $|\Gamma| \not= 0$, $P_2$ has complex roots whenever ${\epsilon }_1{\epsilon }_2\not=0$.
Contrariwise, $P_2$ has a triple real root if either ${\epsilon }_1$ or ${\epsilon }_2$ vanishes.
We may expect that the full 12-dimensional algebra has special properties when $P_2$ has a triple root.
Indeed, inserting the fluxes in (\ref{fullmkc2}) yields $\det {\mathcal Z}_2 = 16 {\epsilon }_1{\epsilon }_2|\Gamma|^6$.
Hence, the full Cartan-Killing matrix ${\mathcal M}_{12}$ happens to be degenerate when ${\epsilon }_1{\epsilon }_2=0$.
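These statements admit a direct symbolic verification, included here as an illustrative addition. The sketch below checks the Jacobi identities for the fluxes (\ref{LimC}) and (\ref{bso4}), the factorizations (\ref{p3so4}) and (\ref{p2so4}), and $\det {\mathcal Z}_2 = 16\, {\epsilon }_1{\epsilon }_2|\Gamma|^6$; the expansion $P_2(U)=b_3U^3-3\,b_2U^2+3\,b_1U-b_0$ is inferred from (\ref{P2Iso}) by matching the quoted factorizations:

```python
import sympy as sp

a, b, g, d, e1, e2, U = sp.symbols('alpha beta gamma delta epsilon1 epsilon2 U')

# Non-geometric fluxes of so(4) ~ su(2)^2, eq. (LimC)
c0, c3 = b*d*(b + d), -a*g*(a + g)
c1, c2 = b*d*(a + g), -a*g*(b + d)
ct2, ct1 = g**2*b + a**2*d, -(g*b**2 + a*d**2)

# NSNS fluxes, eq. (bso4)
b0 = -(e1*b**3 + e2*d**3)
b1 = e1*a*b**2 + e2*g*d**2
b2 = -(e1*a**2*b + e2*g**2*d)
b3 = e1*a**3 + e2*g**3

# Jacobi identities, eqs. (BianchiC) and (BianchiB)
ids = [c0*(c2 - ct2) + c1*(c1 - ct1), c2*(c2 - ct2) + c3*(c1 - ct1), c0*c3 - c1*c2,
       b2*c0 - b0*c2 + b1*(c1 - ct1), b3*c0 - b1*c2 + b2*(c1 - ct1),
       b2*c1 - b0*c3 - b1*(c2 - ct2), b3*c1 - b1*c3 - b2*(c2 - ct2)]
assert all(sp.expand(x) == 0 for x in ids)

# Factorized polynomials, eqs. (p3so4) and (p2so4)
P3 = 3*(c0 + (2*c1 - ct1)*U - (2*c2 - ct2)*U**2 - c3*U**3)
assert sp.expand(P3 - 3*(a*U + b)*(g*U + d)*((a + g)*U + (b + d))) == 0
P2 = b3*U**3 - 3*b2*U**2 + 3*b1*U - b0   # expansion inferred from (P2Iso)
assert sp.expand(P2 - (e1*(a*U + b)**3 + e2*(g*U + d)**3)) == 0

# det Z2 = 16 eps1 eps2 |Gamma|^6, as quoted above, from eq. (fullmkc2)
Z2 = -4*sp.Matrix([
    [b3*ct1 + 2*b2*c2 + b1*c3, b2*(c1 + ct1) + b1*(c2 + ct2)],
    [b2*(c1 + ct1) + b1*(c2 + ct2), b0*ct2 + 2*b1*c1 + b2*c0]])
assert sp.expand(Z2.det() - 16*e1*e2*(a*d - b*g)**6) == 0
```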
To learn more about the full algebra it is convenient to switch from the original $Z_a$ generators to a new
basis $(D_I, \widetilde D_I)$ defined by
\begin{equation}
\left(
\begin{array}{c}
D_I \\
\widetilde D_I
\end{array}
\right)
= \frac{1}{|\Gamma |^{2}}
\left(
\begin{array}{cc}
\delta & \gamma \\
\beta &\alpha
\end{array}
\right)
\left(
\begin{array}{c}
Z_{2I-1} \\
Z_{2I}
\end{array}
\right) \ ,
\label{chzbasis}
\end{equation}
for $I=1,2,3$. It is straightforward to compute the piece of the full algebra
generated by the $(D_I, \widetilde D_I)$. Substituting the parametrized fluxes in
(\ref{zzxiso}) and (\ref{zxziso}) we obtain
\begin{equation}
\begin{array}{lcl}
\big[D_I, D_J\big]=-{\epsilon }_1 \, \epsilon_{IJK} E^K & \quad ; \quad &
\big[\widetilde D_I, \widetilde D_J\big]= -{\epsilon }_2 \, \epsilon_{IJK}\widetilde E^K \quad , \\
\big[E^I, D_J\big]=\epsilon_{IJK} D_K & \quad ; \quad &
\big[\widetilde E^I, \widetilde D_J\big]=\epsilon_{IJK} \widetilde D_K \quad .
\end{array}
\label{morealg}
\end{equation}
All other commutators vanish.
A quick inspection of the whole algebra encoded in (\ref{su2su2}) and (\ref{morealg}) shows that when
either ${\epsilon }_1$, or ${\epsilon }_2$, is zero, the $D_I$, or the $\widetilde D_I$, generate a 3-dimensional invariant Abelian
subalgebra. Moreover, when, say, ${\epsilon }_1=0$ and ${\epsilon }_2\not=0$, the ${\mathcal Z}_2$ block of the
full Cartan-Killing metric has one zero and one non-zero eigenvalue which is negative for ${\epsilon }_2 < 0$
and positive for ${\epsilon }_2 > 0$. The upshot is that when ${\epsilon }_1{\epsilon }_2=0$, the 12-dimensional algebra is
$\mathfrak{iso(3) + g}$, where $\mathfrak{g}$ is either
$\mathfrak{so(4)}$ or $\mathfrak{so(3,1)}$. On the other hand, when ${\epsilon }_1 {\epsilon }_2 < 0$, the algebra is
$\mathfrak{so(4) + so(3,1)}$, whereas for ${\epsilon }_1, {\epsilon }_2 < 0$ it is $\mathfrak{so(4)^2}$, and
for ${\epsilon }_1, {\epsilon }_2 > 0$ it is $\mathfrak{so(3,1)^2}$.
The methods developed in this section will be applied shortly to other subalgebras. In summary, the non-geometric and NSNS
fluxes can be parametrized using auxiliary variables $(\alpha,\beta, \gamma, \delta)$ and $({\epsilon }_1, {\epsilon }_2)$ in such a way that
the Jacobi identities are satisfied and flux-induced superpotential terms are explicitly factorized.
The full 12-dimensional algebras can be simply characterized after the changes of basis (\ref{chbasis}) and (\ref{chzbasis}) are performed.
The auxiliary variables are constrained by the condition that the resulting fluxes be integers.
This issue deserves further explanation. There are two cases, depending on whether the
polynomial $P_2(U)$ has complex roots or not. If it does not, to be concrete we can take $\epsilon_1=0$. From the
structure of the NSNS fluxes in (\ref{bso4}) it is then obvious that, for $\alpha \not=0$, the quotient $\beta/\alpha$ is a rational number.
Going back to the non-geometric fluxes, it can be shown that the ratios $\gamma/\alpha$ and $\delta/\alpha$, as well as $\alpha^3$ and ${\epsilon }_2$,
also belong to $\mathbb{Q}$. If $P_2(U)$ admits complex roots, the generic result is that ${\epsilon }_2/{\epsilon }_1$, $\beta/\alpha$, $\alpha^3$, etc., involve
square roots of rationals. However, when at least one of the parameters $(\alpha,\beta, \gamma, \delta)$ is zero,
all well-defined quotients are again rational numbers.
\subsubsection{$\mathfrak{so(3,1)}$}
This is the Lorentz algebra. We can take $E^I$ to be the angular momentum, and $\widetilde E^J$ to be the
boost generators. Thus, the algebra can be written as
\begin{equation}
\big[E^I, E^J\big]=\epsilon_{IJK} E^K \quad ; \quad
\big[\widetilde E^I, \widetilde E^J\big]=-\epsilon_{IJK} E^K \quad ; \quad \big[E^I, \widetilde E^J\big]=\epsilon_{IJK} \widetilde E^K \ .
\label{s031}
\end{equation}
In this case the non-geometric fluxes that produce the algebra are found to be
\begin{equation}
\label{LimCSO31}
\begin{array}{lcl}
c_{0}=-\beta \,\big(\beta ^2+\delta ^2\big) & \quad \ ; \quad &
c_{3}=\,\alpha \, \big(\alpha ^2+\gamma ^2\big) \quad , \\
c_{1}= -\alpha \,\big(\beta ^2+\delta ^2\big) & \quad ; \quad &
c_{2}=\beta \, \big(\alpha ^2+\gamma ^2\big) \quad , \\
\tilde{c}_{2}= -\beta \, (\alpha ^2-\gamma ^2)-2\,\gamma \, \delta \,\alpha & \quad ; \quad &
\tilde{c}_{1}=\alpha \big(\beta ^2-\delta ^2\big) + 2 \,\beta \, \gamma \, \delta \quad ,
\end{array}
\end{equation}
as long as $|\Gamma| \not= 0$.
Substituting the resulting non-geometric fluxes in (\ref{P3Iso2}) gives the $P_3(U)$ polynomial
\begin{equation}
P_3(U)=-3(\alpha U+\beta)\big[(\alpha U + \beta)^2 + (\gamma U + \delta)^2 \big] \ .
\label{p3so31}
\end{equation}
Since $|\Gamma|\not=0$, $P_3$ always has complex roots. We will see that for non-semisimple algebras
all roots of $P_3$ are real, as for the compact $\mathfrak{so(4)}$. Hence, the important observation now is
that $P_3$ has complex roots if and only if the algebra of the non-geometric fluxes is the non-compact
$\mathfrak{so(3,1)}$.
The Jacobi constraints (\ref{BianchiB}) for the NSNS fluxes can again be solved in terms of the $\Gamma$
parameters plus two real constants that we again denote by $({\epsilon }_1, {\epsilon }_2)$. Concretely,
\begin{eqnarray}
b_{0}&=&-\beta\left(\beta^2 - 3\delta ^2\right) \epsilon _1
-\delta \left(\delta ^2-3 \beta ^2\right) \epsilon _2 \ , \nonumber\\
b_{1}&=& (\alpha \beta^2 - 2 \beta \gamma \delta - \alpha \delta^2) \epsilon _1
+\left(\gamma \delta ^2 - 2 \alpha \delta \beta - \gamma \beta^2\right) \epsilon _2 \ ,
\label{bso31}\\
b_{2}&=& \left(\beta \gamma^2 + 2 \gamma \delta \alpha - \beta \alpha^2\right) \epsilon _1
+\left(\delta \alpha ^2+2 \beta \gamma \alpha - \delta \gamma^2 \right) \epsilon _2 \ , \nonumber \\
b_{3}&=& \alpha\left(\alpha ^2 - 3\gamma ^2\right) \epsilon _1
+\gamma \left(\gamma ^2-3 \alpha ^2\right) \epsilon _2 \ . \nonumber
\end{eqnarray}
These fluxes give rise to
\begin{equation}
P_2(U)=(\gamma U+\delta)^3({\epsilon }_1 {\mathcal Z}^3 - 3{\epsilon }_2 {\mathcal Z}^2 - 3 {\epsilon }_1 {\mathcal Z} + {\epsilon }_2) \ ,
\label{p2so31}
\end{equation}
where ${\mathcal Z}=(\alpha U + \beta)/(\gamma U + \delta)$ as before. Computing the discriminant of the cubic
polynomial in ${\mathcal Z}$ shows that it always has three distinct real roots. Therefore, $P_2$ has three different real roots.
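As with the compact case, these claims can be checked symbolically; this is an illustrative addition. With SymPy's sign convention the discriminant of the cubic in ${\mathcal Z}$ comes out as $108({\epsilon }_1^4+{\epsilon }_2^4)+216\,{\epsilon }_1^2{\epsilon }_2^2$, manifestly positive, which in that convention signals three distinct real roots:

```python
import sympy as sp

a, b, g, d, e1, e2, U, Z = sp.symbols('alpha beta gamma delta epsilon1 epsilon2 U Z')

# Non-geometric fluxes of so(3,1), eq. (LimCSO31)
c0, c3 = -b*(b**2 + d**2), a*(a**2 + g**2)
c1, c2 = -a*(b**2 + d**2), b*(a**2 + g**2)
ct2 = -b*(a**2 - g**2) - 2*g*d*a
ct1 = a*(b**2 - d**2) + 2*b*g*d

# NSNS fluxes, eq. (bso31)
b0 = -b*(b**2 - 3*d**2)*e1 - d*(d**2 - 3*b**2)*e2
b1 = (a*b**2 - 2*b*g*d - a*d**2)*e1 + (g*d**2 - 2*a*d*b - g*b**2)*e2
b2 = (b*g**2 + 2*g*d*a - b*a**2)*e1 + (d*a**2 + 2*b*g*a - d*g**2)*e2
b3 = a*(a**2 - 3*g**2)*e1 + g*(g**2 - 3*a**2)*e2

# Jacobi identities, eqs. (BianchiC) and (BianchiB)
ids = [c0*(c2 - ct2) + c1*(c1 - ct1), c2*(c2 - ct2) + c3*(c1 - ct1), c0*c3 - c1*c2,
       b2*c0 - b0*c2 + b1*(c1 - ct1), b3*c0 - b1*c2 + b2*(c1 - ct1),
       b2*c1 - b0*c3 - b1*(c2 - ct2), b3*c1 - b1*c3 - b2*(c2 - ct2)]
assert all(sp.expand(x) == 0 for x in ids)

# P3 always has complex roots, eq. (p3so31)
P3 = 3*(c0 + (2*c1 - ct1)*U - (2*c2 - ct2)*U**2 - c3*U**3)
assert sp.expand(P3 + 3*(a*U + b)*((a*U + b)**2 + (g*U + d)**2)) == 0

# P2 in the factorized form of eq. (p2so31)
P2 = b3*U**3 - 3*b2*U**2 + 3*b1*U - b0   # expansion inferred from (P2Iso)
P2_fact = (e1*(a*U + b)**3 - 3*e2*(a*U + b)**2*(g*U + d)
           - 3*e1*(a*U + b)*(g*U + d)**2 + e2*(g*U + d)**3)
assert sp.expand(P2 - P2_fact) == 0

# Discriminant of the cubic in Z (SymPy convention: positive <=> 3 real roots)
disc = sp.discriminant(e1*Z**3 - 3*e2*Z**2 - 3*e1*Z + e2, Z)
assert sp.expand(disc - (108*(e1**4 + e2**4) + 216*e1**2*e2**2)) == 0
```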
\subsection{Non-semisimple algebras}
In this case the algebra is the semidirect sum of a semisimple algebra and a solvable invariant subalgebra.
Lack of semisimplicity is detected by imposing $\det {\mathcal M}=0$, which requires $\det {\mathcal X}_2=0$,
where ${\mathcal X}_2$ is given in (\ref{mkc2}).
Combining with the Jacobi identities (\ref{BianchiC}) we deduce that up to isomorphisms there are only
two solutions in which the solvable invariant subalgebra has dimension less than six.
In practice this means that ${\mathcal X}_2$ has only one zero eigenvalue. As expected from the
underlying symmetries, this invariant subalgebra can only have dimension three and be $\mathfrak{u(1)^3}$.
The semisimple piece can only be $\mathfrak{su(2)}$. The two solutions are the direct and semidirect
sum discussed below.
The remaining possibility consistent with the symmetries is for the solvable invariant subalgebra
to have dimension six. The criterion for solvability is that the derived algebra $\mathfrak{[g,g]}$ be
orthogonal to the whole algebra $\mathfrak{g}$ with respect to the Cartan-Killing metric.
In our case this means $Q^{ab}_c {\mathcal M}^{dc}=0$, $\forall a,b,d$. The non-geometric fluxes
further satisfy the Jacobi identities $Q_{x}^{[ab}\,Q^{c]x}_{d}=0$. On the other hand, the stronger
condition for nilpotency is ${\mathcal M}^{dc}=0$. For our algebra of isotropic fluxes given in (\ref{subiso}),
we find that all solvable flux configurations are necessarily nilpotent. The proof can be carried out
using the algebraic package {\it Singular} to manipulate the various ideals.
This result is consistent with the fact that in our model ${\mathcal M}$ is block-diagonal so that
when $\det {\mathcal M}=0$, it has three or six null eigenvalues and in the latter situation ${\mathcal M}$
is identically zero.
One obvious nilpotent algebra is $\mathfrak{u(1)^6}$, but it is uninteresting because the associated fluxes
vanish identically. There is a second solution described in more detail below.
The allowed non-semisimple subalgebras can all be obtained starting from $\mathfrak{su(2)^2}$ and performing
contractions consistent with the underlying symmetries of the isotropic fluxes.
For example, setting $E^{\prime\, I} = E^I$, $\widetilde E^{\prime\, I} = \lambda \widetilde E^I$ in (\ref{su2su2})
and then letting $\lambda \to 0$ obviously gives the direct sum $\mathfrak{su(2)+ u(1)^3}$.
More generically we can take $E^{\prime\, I} = \lambda^a(E^I+ \widetilde E^I)$,
$\widetilde E^{\prime\, I} = \lambda^b(E^I- \widetilde E^I)$, with $a\ge 0$, $b\ge 0$. The limit $a=0$, $b >0$,
$\lambda \to 0$ yields the Euclidean algebra $\mathfrak{iso(3)}$. Letting instead $2b=a >0$ and
contracting gives the nilpotent algebra.
In the coming sections we present the explicit configurations of non-geometric fluxes associated
to the non-semisimple subalgebras. The parametrization of NSNS fluxes is also computed.
Evaluating the full 12-dimensional algebras in each case is straightforward.
\subsubsection{$\mathfrak{su(2)+ u(1)^3}$}
Since the algebra is a direct sum and one factor is Abelian,
the brackets take the simple form
\begin{equation}
\big[E^I, E^J\big]=\epsilon_{IJK} E^K \quad ; \quad
\big[\widetilde E^I, \widetilde E^J\big]=0 \quad ; \quad \big[E^I, \widetilde E^J\big]=0 \ .
\label{su2d}
\end{equation}
Requiring that, upon the change of basis, the algebra (\ref{subiso}) be of this type
yields the following non-geometric fluxes
\begin{equation}
\label{LimCFac}
\begin{array}{lcl}
c_{0}= \beta\, \delta^2 & \quad ; \quad & c_{3}=-\alpha\, \gamma^2\quad , \\
c_{1} = \beta\, \delta \, \gamma & \quad ; \quad & c_{2}= -\alpha\, \gamma \, \delta \quad , \\
\tilde{c}_{2}= \gamma^{2}\, \beta & \quad ; \quad &
\tilde{c}_{1}= -\alpha\,\delta^{2} \quad ,
\end{array}
\end{equation}
assuming $|\Gamma| \not= 0$. These fluxes automatically satisfy the Jacobi identities (\ref{BianchiC}).
They also satisfy the additional condition $c_0 c_2 = c_1 \tilde c_1$ arising from $\det {\mathcal X}_2=0$.
The non-geometric fluxes of the algebra $\mathfrak{su(2)+ u(1)^3}$ lead to the $P_3(U)$ polynomial
\begin{equation}
P_3(U)=3(\alpha U+\beta)(\gamma U + \delta)^2 \ .
\label{p3su2d}
\end{equation}
Evidently, $P_3$ has one single and one double real root.
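The flux configuration (\ref{LimCFac}) can again be verified symbolically, as an illustrative addition: the Jacobi identities, the degeneration $\det {\mathcal X}_2=0$ expected for a non-semisimple algebra, and the single-plus-double root structure of (\ref{p3su2d}) all follow:

```python
import sympy as sp

a, b, g, d, U = sp.symbols('alpha beta gamma delta U')

# Non-geometric fluxes of su(2) + u(1)^3, eq. (LimCFac)
c0, c3 = b*d**2, -a*g**2
c1, c2 = b*d*g, -a*g*d
ct2, ct1 = g**2*b, -a*d**2

# Jacobi identities (BianchiC) and the extra relation c0 c2 = c1 ct1
QQ = [c0*(c2 - ct2) + c1*(c1 - ct1),
      c2*(c2 - ct2) + c3*(c1 - ct1),
      c0*c3 - c1*c2]
assert all(sp.expand(x) == 0 for x in QQ)
assert sp.expand(c0*c2 - c1*ct1) == 0

# det X2 = 0 signals a non-semisimple algebra, eq. (mkc2)
X2 = -2*sp.Matrix([
    [ct1**2 + 2*c0*c2 + c1**2, ct1*c2 + c1*c2 + c0*c3 + c1*ct2],
    [ct1*c2 + c1*c2 + c0*c3 + c1*ct2, ct2**2 + 2*c1*c3 + c2**2]])
assert sp.expand(X2.det()) == 0

# P3 has one single and one double real root, eq. (p3su2d)
P3 = 3*(c0 + (2*c1 - ct1)*U - (2*c2 - ct2)*U**2 - c3*U**3)
assert sp.expand(P3 - 3*(a*U + b)*(g*U + d)**2) == 0
```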
The Jacobi identities $\bar H Q=0$ again fix the NSNS fluxes as in the previous cases.
The solution in terms of the free parameters is given by
\begin{eqnarray}
b_{0}&=&-\,(\epsilon_{1}\, \beta^{3} + \epsilon_{2}\, \delta^{3}) \ , \nonumber\\
b_{1}&=& \epsilon_{1}\, \alpha\,\beta^{2} + \epsilon_{2}\,\gamma\, \delta^{2} \ , \label{bsu2d}\\
b_{2}&=& -\,(\epsilon_{1}\, \alpha^{2}\,\beta + \epsilon_{2}\,\gamma^{2}\, \delta )\ , \nonumber \\
b_{3}&=&\epsilon_{1}\, \alpha^{3} + \epsilon_{2}\, \gamma^{3} \ . \nonumber
\end{eqnarray}
For the associated polynomial $P_2(U)$ we then find
\begin{equation}
P_2(U)={\epsilon }_1 (\alpha U + \beta)^3 + {\epsilon }_2(\gamma U + \delta)^3 \ .
\label{p2su2d}
\end{equation}
As in the compact case, this $P_2$ has complex roots whenever ${\epsilon }_1 {\epsilon }_2 \not= 0$.
\subsubsection{$\mathfrak{su(2)\oplus u(1)^3 \sim iso(3)}$}
According to Levi's theorem, this algebra can in general be characterized as
\begin{equation}
\big[E^I, E^J\big]=\epsilon_{IJK} \big(E^K + \widetilde E^K \big) \quad ; \quad
\big[\widetilde E^I, \widetilde E^J\big]=0 \quad ; \quad \big[E^I, \widetilde E^J\big]=\epsilon_{IJK} \widetilde E^K \ .
\label{su2u13sd}
\end{equation}
The typical form of the Euclidean algebra in three dimensions is recognized
after the isomorphism $(E^I-\widetilde E^I) \to \widehat E^I$.
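Explicitly, in the basis $(\widehat E^I, \widetilde E^I)$ the brackets (\ref{su2u13sd}) become
\[
\big[\widehat E^I, \widehat E^J\big]=\epsilon_{IJK}\, \widehat E^K \quad ; \quad
\big[\widehat E^I, \widetilde E^J\big]=\epsilon_{IJK}\, \widetilde E^K \quad ; \quad
\big[\widetilde E^I, \widetilde E^J\big]=0 \ ,
\]
i.e. an $\mathfrak{su(2)}$ of rotations $\widehat E^I$ acting on the abelian translations $\widetilde E^I$.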
The non-geometric fluxes needed to reproduce the above commutators turn out to be
\begin{equation}
\label{LimCFacSemi}
\begin{array}{lcl}
c_{0}=-\delta ^2\,(\beta-\delta) & \quad \ ; \quad &
c_{3}=\gamma ^2\,(\alpha-\gamma) \quad , \\
c_{1}= -\delta ^2\,(\alpha-\gamma) & \quad ; \quad &
c_{2}=\gamma ^2\,(\beta-\delta) \quad , \\
\tilde{c}_{2}= \gamma ^2\,(\beta+\delta)- 2\,\gamma \, \delta \,\alpha & \quad ; \quad &
\tilde{c}_{1}=-\delta^2\, (\alpha+\gamma) + 2\, \gamma \, \delta \, \beta \quad ,
\end{array}
\end{equation}
for $|\Gamma| \not= 0$. Besides the Jacobi identities these fluxes satisfy $4 c_0c_2=-(c_1-\tilde c_1)^2$,
by virtue of $\det {\mathcal X}_2=0$.
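Indeed, from (\ref{LimCFacSemi}) one computes $c_1 - \tilde c_1 = 2\, \gamma\, \delta\, (\delta - \beta)$ and
$c_0\, c_2 = -\gamma^2 \delta^2 (\beta-\delta)^2$, so that
\[
4\, c_0\, c_2 = -4\, \gamma^2 \delta^2 (\beta - \delta)^2 = -(c_1 - \tilde c_1)^2 \ .
\]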
For the flux configuration of this algebra the $P_3(U)$ polynomial becomes
\begin{equation}
P_3(U)=3(\gamma U + \delta)^2\big[(\gamma-\alpha)U + (\delta-\beta)\big] \ .
\label{p3su2sd}
\end{equation}
As in the direct sum $\mathfrak{su(2) + u(1)^3}$, $P_3$ has one single and one double real root.
The NSNS fluxes can be determined from the Jacobi identities (\ref{BianchiB}).
Introducing again parameters $({\epsilon }_1,{\epsilon }_2)$ leads to
\begin{eqnarray}
b_{0}&=&-\delta^2 \, \left(\beta \, \epsilon _1+\delta \, \epsilon _2\right) \ , \nonumber\\
b_{1}&=& \msm{\frac{1}{3}} \, \delta (\alpha\, \delta + 2\, \beta \,\gamma)\epsilon _1
+ \gamma \, \delta^2 \,\epsilon _2 \ , \label{bsu2sd} \\
b_{2}&=& - \msm{\frac{1}{3}} \gamma (\beta \, \gamma +2 \,\alpha \, \delta ) \epsilon _1
- \gamma^2 \, \delta \, \epsilon _2 \ , \nonumber \\
b_{3}&=& \gamma ^2\, \left(\alpha \, \epsilon _1+\gamma \, \epsilon _2\right) \ . \nonumber
\end{eqnarray}
The companion polynomial $P_2(U)$ of the NSNS fluxes is then fixed as
\begin{equation}
P_2(U)=(\gamma U + \delta)^2\left[{\epsilon }_1(\alpha U + \beta) + {\epsilon }_2(\gamma U + \delta) \right] \ .
\label{p2su2sd}
\end{equation}
Analogous to the non-compact case, this $P_2$ has only real roots, but one of them is degenerate.
\subsubsection{Nilpotent algebra}
To search for flux configurations that generate a nilpotent algebra we impose that the
Cartan-Killing metric vanishes. Now, in our model ${\mathcal M}=0$ implies the much simpler conditions
$\det {\mathcal X}_2=0$ and ${\rm Tr \,} {\mathcal X}_2=0$.
Up to isomorphisms, we find only one non-trivial solution. This is the expected result based on
the known classification of 6-dimensional nilpotent algebras\footnote{
A table and references to the original literature are given in \cite{gmpt}.}.
{}From the 34 isomorphism classes of nilpotent algebras, besides $\mathfrak{u(1)^6}$, only
one is compatible with isotropic fluxes invariant under ${\mathbb Z}_2 \times {\mathbb Z}_2$. The algebra
is 2-step nilpotent and its brackets can be written as
\begin{equation}
\big[E^I, E^J\big]= \epsilon_{IJK} \widetilde E^K \quad ; \quad
\big[\widetilde E^I, \widetilde E^J\big]=0 \quad ; \quad \big[E^I, \widetilde E^J\big]=0 \ .
\label{nilal}
\end{equation}
Up to isomorphisms this is the algebra labelled $n(3.5)$ in Table 4 of \cite{gmpt}.
The change of basis from the original $(X^{2I-1}, X^{2I})$ generators to the $(E^I, \widetilde E^I)$
is still given by (\ref{chbasis}).
Starting from the $X$ commutators in (\ref{subiso}) we can then deduce
fluxes such that the nilpotent algebra (\ref{nilal}) is reproduced. In this way we obtain
\begin{equation}
\label{LimCNilp}
\begin{array}{lcl}
c_{0}= \delta ^3 & \quad ; \quad & c_{3}=- \gamma^3 \quad , \\
c_{1} = \delta^2\, \gamma & \quad ; \quad & c_{2}=- \delta \, \gamma^2 \quad , \\
\tilde{c}_{2}= \delta \, \gamma^{2} & \quad ; \quad &
\tilde{c}_{1}=- \delta^{2}\, \gamma \quad .
\end{array}
\end{equation}
Notice that these fluxes only depend on two independent parameters. This occurs because besides the
Jacobi constraints there are two more conditions $\det {\mathcal X}_2=0$ and ${\rm Tr \,} {\mathcal X}_2=0$.
The non-geometric fluxes of the nilpotent algebra generate the $P_3(U)$ polynomial
\begin{equation}
P_3(U)=3(\gamma U+\delta)^3 \ .
\label{p3nil}
\end{equation}
Clearly, $P_3$ always has one triple real root.
In analogy with all previous examples, the $\bar H Q=0$ Jacobi identities determine the NSNS fluxes in terms of two
additional parameters $({\epsilon }_1, {\epsilon }_2)$. Inserting the non-geometric fluxes of the nilpotent algebra
in (\ref{BianchiB}) readily yields
\begin{eqnarray}
b_{0}&=&-\delta ^2 \left(\delta \,\epsilon _2+\gamma \,\epsilon _1\right) \ , \nonumber\\
b_{1}&=& \gamma \, \delta ^2\, \epsilon _2
-\msm{\frac{1}{3}} \,\delta \left(\delta^2 -2\, \gamma ^2\right) \epsilon _1 \ , \label {bnil}\\
b_{2}&=& - \gamma^2\, \delta \, \epsilon _2
+ \msm{\frac13} \gamma \left(2\, \delta^2 - \gamma^2 \right) \epsilon _1 \ , \nonumber \\
b_{3}&=& \gamma^2 \left(\gamma \, \epsilon _2-\delta \,\epsilon _1\right) \ . \nonumber
\end{eqnarray}
Substituting in (\ref{P2Iso}) we easily obtain the corresponding polynomial
\begin{equation}
P_2(U)=(\gamma U + \delta)^2\left[{\epsilon }_2(\gamma U + \delta) + {\epsilon }_1(\gamma - \delta U) \right] \ .
\label{p2nil}
\end{equation}
As in $\mathfrak{su(2) \oplus u(1)^3}$, this $P_2$ has one single and one double real root.
Without loss of generality we can choose $\alpha=-\delta$ and $\beta=\gamma$ in order to write $P_2$
in terms of the variable ${\mathcal Z}=(\alpha U + \beta)/(\gamma U + \delta)$ as
\begin{equation}
P_2(U)=(\gamma U+\delta)^3({\epsilon }_1 {\mathcal Z} + {\epsilon }_2) \ .
\label{p2nils}
\end{equation}
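To check this, note that with $\alpha=-\delta$ and $\beta=\gamma$,
\[
{\mathcal Z} = \frac{\gamma - \delta\, U}{\gamma\, U + \delta} \quad \Rightarrow \quad
{\epsilon }_1 (\gamma - \delta\, U) = {\epsilon }_1\, {\mathcal Z}\, (\gamma\, U + \delta) \ ,
\]
so that (\ref{p2nil}) indeed factorizes as (\ref{p2nils}). Notice also that this choice gives
$|\Gamma|=\alpha\delta-\beta\gamma=-(\gamma^2+\delta^2) < 0$.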
The advantage of this choice of parameters will become evident when we perform a transformation
from $U$ to ${\mathcal Z}$ in the scalar potential.
\section{New variables and RR fluxes}
\label{sec:newvars}
In type IIB orientifolds, the superpotential depends on the complex structure parameter $U$
through the three cubic polynomials $P_1(U)$, $P_2(U)$ and $P_3(U)$ induced respectively by
RR, NSNS and non-geometric $Q$-fluxes. Our results in the previous section show that
the latter two polynomials can be concisely written as
\begin{equation}
P_2(U)=(\gamma U + \delta)^3 {\mathcal P}_2({\mathcal Z}) \qquad ; \qquad P_3(U)=(\gamma U + \delta)^3 {\mathcal P}_3({\mathcal Z}) \ ,
\label{cp23def}
\end{equation}
where ${\mathcal Z}=(\alpha U+ \beta)/(\gamma U + \delta)$. The real parameters $(\alpha, \beta, \gamma, \delta)$, with
\mbox{$|\Gamma|=(\alpha\delta-\beta\gamma)\not=0$}, encode the non-geometric fluxes. For the NSNS fluxes two additional
real constants $({\epsilon }_1, {\epsilon }_2)$ are needed. As summarized in table \ref{tablecps}, ${\mathcal P}_2({\mathcal Z})$ and
${\mathcal P}_3({\mathcal Z})$ take very specific forms according to the subalgebra of the $Q$-fluxes.
\begin{table}[htb]
\small{
\renewcommand{\arraystretch}{1.15}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$Q$-subalgebra & ${\mathcal P}_3({\mathcal Z})/3$ & ${\mathcal P}_2({\mathcal Z})$ & ${\mathcal P}_1({\mathcal Z})$ \\
\hline
\hline
$\mathfrak{so(4)}$ & ${\mathcal Z}({\mathcal Z}+1)$ & ${\epsilon }_1 {\mathcal Z}^3 + {\epsilon }_2$ & $\xi_3({\epsilon }_1-{\epsilon }_2{\mathcal Z}^3) + 3\xi_7{\mathcal Z}(1-{\mathcal Z})$ \\
\hline
& & & $\xi_3({\epsilon }_1 +3{\epsilon }_2{\mathcal Z}-3{\epsilon }_1{\mathcal Z}^2-{\epsilon }_2{\mathcal Z}^3)$ \\
\raisebox{2.5ex}[0cm][0cm]{$\mathfrak{so(3,1)}$} &
\raisebox{2.5ex}[0cm][0cm]{$-{\mathcal Z}({\mathcal Z}^2+1)$} &
\raisebox{2.5ex}[0cm][0cm]{${\epsilon }_1 {\mathcal Z}^3 -3{\epsilon }_2 {\mathcal Z}^2 - 3 {\epsilon }_1 {\mathcal Z} + {\epsilon }_2$} &
$ + 3\xi_7({\mathcal Z}^2+1)$ \\
\hline
$\mathfrak{su(2)+u(1)^3}$ & ${\mathcal Z}$ & ${\epsilon }_1 {\mathcal Z}^3 + {\epsilon }_2$ & $\xi_3({\epsilon }_1-{\epsilon }_2{\mathcal Z}^3) - 3\xi_7{\mathcal Z}^2$ \\
\hline
$\mathfrak{su(2)\oplus u(1)^3}$ & $1-{\mathcal Z}$ & ${\epsilon }_1 {\mathcal Z} + {\epsilon }_2$ &
$3\lambda_1 {\mathcal Z} + 3\lambda_2 {\mathcal Z}^2 + \lambda_3 {\mathcal Z}^3$ \\
\hline
$\mathfrak{nil}$ & $1$ & ${\epsilon }_1{\mathcal Z} + {\epsilon }_2 $ & $3\lambda_1 {\mathcal Z} + 3\lambda_2 {\mathcal Z}^2 + \lambda_3 {\mathcal Z}^3$ \\
\hline
\end{tabular}
\end{center}
\caption{$Q$-subalgebras and polynomials}
\label{tablecps}
}
\end{table}
A very nice property of the variable ${\mathcal Z}$ is its invariance under the $SL(2,{\mathbb Z})_U$ modular
transformations
\begin{equation}
U^\prime = \frac{k \, U + \ell}{m \, U + n} \quad ; \quad k,\, \ell, \, m , \, n \, \in {\mathbb Z}
\quad ; \quad kn-\ell m=1 \ .
\label{umodt}
\end{equation}
Since this is a symmetry of the compactification, the effective action must be invariant.
The K\"ahler potential, $K=-3\log[-i(U-\bar U)] + \cdots$, clearly transforms as
\begin{equation}
K^\prime = K + 3 \log| m U + n|^2 \ .
\label{kmodt}
\end{equation}
Therefore, the physically important quantity $e^K |W|^2$ is invariant as long as the
superpotential satisfies
\begin{equation}
W^\prime = \frac{W}{(m U + n)^3} \ .
\label{wmodt}
\end{equation}
In order for $W$ to fulfill this property the fluxes must transform in definite patterns.
In fact, it follows that (\ref{wmodt}) holds separately for each of the flux-induced polynomials
$P_i(U)$.
We claim that the fluxes transform under $SL(2,{\mathbb Z})_U$ precisely in such a manner that ${\mathcal Z}^\prime={\mathcal Z}$.
The proof begins by finding how the $Q$-fluxes mix among themselves
from the condition $P_3^\prime=P_3/(m U + n)^3$. For example, under $U^\prime=-1/U$, the non-geometric
fluxes transform as
\begin{equation}
c_0^\prime=-c_3 \quad , \quad c_1^\prime=c_2 \quad , \quad c_2^\prime=-c_1 \quad , \quad c_3^\prime=c_0
\quad , \quad \tilde c_1^{\, \prime}= \tilde c_2 \quad , \quad \tilde c_2^{\, \prime}= -\tilde c_1 \ .
\label{cduals}
\end{equation}
Next we read off the corresponding transformation of
the parameters $(\alpha, \beta, \gamma, \delta)$ that are better thought of as the elements of a matrix $\Gamma$.
The result is
\begin{equation}
\Gamma^\prime=
\left(
\begin{array}{ll}
\alpha^\prime & \beta^\prime \\
\gamma^\prime & \delta^\prime
\end{array} \right)
=\left(
\begin{array}{ll}
\alpha & \beta \\
\gamma & \delta
\end{array} \right) \!\!
\left(
\begin{array}{cc}
n & \!\!\!\! -\ell \\
\!\!\! -m & k
\end{array} \right) \ .
\label{Gmodt}
\end{equation}
It easily follows that ${\mathcal Z}^\prime={\mathcal Z}$. Notice that $|\Gamma^\prime|=|\Gamma|$.
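To verify the invariance, insert (\ref{umodt}) and (\ref{Gmodt}) into ${\mathcal Z}^\prime$. For the numerator,
\[
\alpha^\prime U^\prime + \beta^\prime =
\frac{(\alpha n - \beta m)(k U + \ell) + (\beta k - \alpha \ell)(m U + n)}{m U + n}
= \frac{(kn - \ell m)(\alpha U + \beta)}{m U + n} = \frac{\alpha U + \beta}{m U + n} \ ,
\]
and similarly $\gamma^\prime U^\prime + \delta^\prime = (\gamma U + \delta)/(m U + n)$, so the factors of
$(m U + n)$ cancel in the ratio.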
For the NSNS fluxes we can study the transformation of $P_2$ with coefficients given by the $b_A$.
Alternatively, we may start from $P_2$ written as function of ${\mathcal Z}$ as in (\ref{cp23def}).
The conclusion is that the transformation of the $b_A$ is also determined by $\Gamma^\prime$
together with $({\epsilon }_1^\prime,{\epsilon }_2^\prime)=({\epsilon }_1, {\epsilon }_2)$. This is valid for all
$Q$-subalgebras.
At this point it must be evident that we want to change variables from $U$ to ${\mathcal Z}$. It is
also convenient to trade the axiodilaton $S$ and the K\"ahler modulus $T$ for new fields
defined by
\begin{equation}
{\mathcal S} = S + \xi_s \quad ; \quad {\mathcal T}=T + \xi_t \ ,
\label{csctdef}
\end{equation}
where the shifts $\xi_s$ and $\xi_t$ are some real parameters. The motivation is that such
shifts in the axions ${\rm Re \,} S$ and ${\rm Re \,} T$
can be reabsorbed into RR fluxes as explained in the following.
\subsection{Parametrization of RR fluxes}
\label{ss:rr}
The systematic procedure is to express the RR fluxes $a_A$ in such a way that
their contribution to the superpotential is of the form
\begin{equation}
P_1(U)=(\gamma U + \delta)^3 \widehat {\mathcal P}_1({\mathcal Z}) \ ,
\label{cp1hatdef}
\end{equation}
in complete analogy with (\ref{cp23def}).
To arrive at this factorization we must relate the four RR fluxes $a_A$ to the parameters
$(\alpha, \beta, \gamma, \delta)$ that define ${\mathcal Z}=(\alpha U+ \beta)/(\gamma U + \delta)$, and to four additional independent variables.
Obviously, $\widehat {\mathcal P}_1({\mathcal Z})$ can be expanded in the monomials $(1,{\mathcal Z},{\mathcal Z}^2,{\mathcal Z}^3)$. However,
a more convenient basis contains the already known polynomials ${\mathcal P}_3$ and ${\mathcal P}_2$ that are
generically linearly independent. We still need two independent polynomials and these are taken to be
the duals $\widetilde {\mathcal P}_3$ and $\widetilde {\mathcal P}_2$. The dual $\widetilde {\mathcal P}$ is such that
${\mathcal P} \to \widetilde {\mathcal P}/{\mathcal Z}^3$ when ${\mathcal Z} \to -1/{\mathcal Z}$.
The last two subalgebras in table \ref{tablecps} must be treated slightly differently because
the linear independence of ${\mathcal P}_3$ and ${\mathcal P}_2$ fails for particular values of the NSNS flux parameter
${\epsilon }_1$.
Concretely, we make the expansion
\begin{equation}
\widehat {\mathcal P}_1({\mathcal Z}) = \xi_s {\mathcal P}_2({\mathcal Z}) + \xi_t {\mathcal P}_3({\mathcal Z}) + {\mathcal P}_1({\mathcal Z}) \ .
\label{cp1hatexp}
\end{equation}
In the full superpotential the first two terms in $\widehat {\mathcal P}_1$ will precisely offset the axionic shifts
in the new variables ${\mathcal S}$ and ${\mathcal T}$. Let us now discuss the remaining piece ${\mathcal P}_1({\mathcal Z})$ that also
depends on the $Q$-subalgebra and is displayed in table \ref{tablecps}.
As explained before, for the first three subalgebras in the table we can further choose
\begin{equation}
{\mathcal P}_1({\mathcal Z}) = \xi_7 \widetilde {\mathcal P}_3({\mathcal Z}) - \xi_3 \widetilde {\mathcal P}_2({\mathcal Z}) \ .
\label{cp1defa}
\end{equation}
A motivation for this choice is that the RR tadpoles turn out to depend on the RR fluxes
only through the coefficients $(\xi_3, \xi_7)$.
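As an illustration, for $\mathfrak{so(4)}$ we have ${\mathcal P}_3/3={\mathcal Z}({\mathcal Z}+1)$ and
${\mathcal P}_2={\epsilon }_1 {\mathcal Z}^3 + {\epsilon }_2$, whose duals under ${\mathcal Z} \to -1/{\mathcal Z}$ are
\[
\widetilde {\mathcal P}_3({\mathcal Z}) = 3\, {\mathcal Z}(1-{\mathcal Z}) \quad ; \quad
\widetilde {\mathcal P}_2({\mathcal Z}) = {\epsilon }_2\, {\mathcal Z}^3 - {\epsilon }_1 \ .
\]
Substituting in (\ref{cp1defa}) precisely reproduces the entry
${\mathcal P}_1 = \xi_3({\epsilon }_1-{\epsilon }_2{\mathcal Z}^3) + 3\xi_7{\mathcal Z}(1-{\mathcal Z})$ of table \ref{tablecps}.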
For the last two subalgebras in table \ref{tablecps}, ${\mathcal P}_3$ and ${\mathcal P}_2$ are not independent when
${\epsilon }_1$ takes a particular critical value.
For $\mathfrak{su(2)\oplus u(1)^3}$ this happens when ${\epsilon }_1=-{\epsilon }_2$, whereas for
the nilpotent algebra the critical value is ${\epsilon }_1=0$.
To take into account these possibilities, compensating at the same time
for the axionic shifts, we still make the decomposition (\ref{cp1hatexp}) but with
\begin{equation}
{\mathcal P}_1({\mathcal Z}) = 3\lambda_1 {\mathcal Z} + 3 \lambda_2 {\mathcal Z}^2 + \lambda_3 {\mathcal Z}^3 \ .
\label{cp1defb}
\end{equation}
Away from the critical values of ${\epsilon }_1$ we can take $\lambda_1=0$ because $\xi_s$ and
$\xi_t$ are independent parameters. At the critical value necessarily $\lambda_1 \not=0$
but in this case $\xi_s$ and $\xi_t$ enter in the RR fluxes in only one linearly
independent combination.
The RR tadpoles happen to depend just on the parameters $(\lambda_2,\lambda_3)$.
The next step is to compare the expansion of $P_1(U)$ in $U$ with its factorized form, cf. (\ref{cp1hatdef}) and
(\ref{P1Iso}). In this way we can obtain an explicit parametrization of the RR fluxes $a_A$ in terms of the
variables that determine $\widehat {\mathcal P}_1({\mathcal Z})$, namely $(\xi_s,\xi_t)$ together with $(\xi_3,\xi_7)$ or
$(\lambda_1, \lambda_2, \lambda_3)$, depending on the $Q$-subalgebra.
These results are collected in the appendix. We stress that the $\xi$'s and $\lambda$'s are real
parameters but the emerging RR fluxes must be integers.
A vacuum solution in which the moduli $({\mathcal Z}, {\mathcal S}, {\mathcal T})$ are fixed generically requires specific
values of the non-geometric, NSNS and RR fluxes. These fluxes also generate RR tadpoles that
must be balanced by adding orientifold planes or D-branes. To determine the type of sources
that must be included we need
to evaluate the RR tadpole cancellation conditions using all parametrized fluxes.
Substituting in (\ref{O3tadIso}) and (\ref{O7tadIso}) we arrive at the very compact expressions for
the number of sources $N_3$ and $N_7$ gathered in table \ref{tabletads}.
As advertised before, the RR fluxes only enter either through the parameters $(\xi_3, \xi_7)$ or
$(\lambda_2,\lambda_3)$. The non-geometric and NSNS fluxes only contribute through
$|\Gamma|^3$ and $({\epsilon }_1,{\epsilon }_2)$. We will see that there is also a clear correlation of the
tadpoles with the vevs of the moduli.
\begin{table}[htb]
\small{
\renewcommand{\arraystretch}{1.15}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$Q$-subalgebra & $N_3/|\Gamma|^3$ & $N_7/|\Gamma|^3$ \\
\hline
\hline
$\mathfrak{so(4)}$ & \ \ $({\epsilon }_1^2 + {\epsilon }_2^2)\,\xi_3$ & \, $2 \,\xi_7$ \\
\hline
$\mathfrak{so(3,1)}$ & \!\! $4({\epsilon }_1^2 + {\epsilon }_2^2)\,\xi_3$ & \, $4 \,\xi_7$ \\
\hline
$\mathfrak{su(2)+u(1)^3}$ & \ \ $({\epsilon }_1^2 + {\epsilon }_2^2)\, \xi_3$ & \!\! $\,\xi_7$ \\
\hline
$\mathfrak{su(2)\oplus u(1)^3}$ & $\lambda_2\, {\epsilon }_1-\lambda_3\, {\epsilon }_2$ & $\lambda_2 + \lambda_3$ \\
\hline
$\mathfrak{nil}$ & $\lambda_2 \,{\epsilon }_1 -\lambda_3\,{\epsilon }_2$ & $\lambda_3$ \\
\hline
\end{tabular}
\end{center}
\caption{$Q$-subalgebras and RR tadpoles}
\label{tabletads}
}
\end{table}
Finally, let us remark that, just like $({\epsilon }_1, {\epsilon }_2)$, the $\xi$ and $\lambda$ variables are all invariant
under modular transformations of the complex structure $U$. Indeed, from the explicit parametrization
of the RR fluxes $a_A$ we deduce that their correct behavior under $SL(2,{\mathbb Z})_U$, analogous to
(\ref{cduals}), precisely follows from the transformation of $(\alpha, \beta, \gamma, \delta)$ in (\ref{Gmodt}).
This is of course consistent with the fact that the number of sources $N_3$ and $N_7$ in the tadpoles
are physical quantities that must be modular invariant.
\subsection{Moduli potential in the new variables}
We have just seen how a systematic parametrization of the fluxes has guided us to new moduli fields
denoted $({\mathcal Z}, {\mathcal S}, {\mathcal T})$. As we may expect, the effective action in the transformed variables also takes a form
more suitable for finding vacua. The shifts in the axionic real parts of the axiodilaton and the K\"ahler field
do not affect the K\"ahler potential $K$ whereas in the superpotential $W$ they can be reabsorbed in RR fluxes.
On the other hand, the change from the complex structure $U$ to ${\mathcal Z}$ is the $SL(2,{\mathbb R})$ transformation
$U=(\beta-\delta{\mathcal Z})/(\gamma{\mathcal Z}-\alpha)$ whose effect on $K$ and $W$ is completely analogous to a modular transformation
except for factors of $|\Gamma|=(\alpha\delta-\beta\gamma)$. Combining previous results we obtain $e^K|W|^2 \to e^{\mathcal K}|\cw|^2$,
where the transformed K\"ahler potential ${\mathcal K}$ and superpotential $\cw$ are given by
\begin{eqnarray}
\mathcal{K} & = &-3 \,\log\left( -i\,(\mathcal{T}-\bar{\mathcal{T}})\right) -
\,\log\left( -i\,(\mathcal{S}-\bar{\mathcal{S}})\right) - 3 \,\log\left(- i\,(\mathcal{Z}-\bar{\mathcal{Z}}) \right)
\ , \label{KModular} \\[2mm]
\mathcal{W} & = & |\Gamma|^{3/2} \left[{\mathcal T} \, {\mathcal P}_3({\mathcal Z}) \,+ \, {\mathcal S} \, {\mathcal P}_2({\mathcal Z})
+ {\mathcal P}_1({\mathcal Z}) \right] \ .
\label{WModular}
\end{eqnarray}
The flux-induced polynomials ${\mathcal P}_i({\mathcal Z})$ are displayed in table \ref{tablecps} for each $Q$-subalgebra.
In the effective 4-dimensional action with ${\mathcal N}=1$ supergravity
the functions ${\mathcal K}$ and $\cw$ determine the scalar potential of the moduli according to
\begin{equation}
V = e^{\mathcal K} \left\{ \sum_{\Phi={\mathcal Z},{\mathcal S},{\mathcal T}} {\mathcal K}^{\Phi\bar \Phi} |D_\Phi \cw|^2 - 3|\cw|^2 \right\} \ .
\label{VModular}
\end{equation}
We are interested in supersymmetric minima for which $D_\Phi \cw = \partial_\Phi \cw + \cw \partial_\Phi {\mathcal K} =0$,
for all fields.
\section{Supersymmetric vacua}
\label{sec:vac}
This section is devoted to searching for supersymmetric vacua of the moduli potential induced
by RR, NSNS and non-geometric fluxes together. We will show that by using our new variables
the problem simplifies substantially and analytic solutions are feasible.
Supersymmetric vacua are characterized by the vanishing of the F-terms. In our setup the
conditions are
\begin{eqnarray}
D_{{\mathcal T}}\mathcal{W}&=&\frac{\partial \cw}{\partial {\mathcal T}} +
\frac{3i \cw}{2 {\rm Im \,} {\mathcal T}}=0 \ , \nonumber \\[2mm]
D_{{\mathcal S}}\mathcal{W}&=&\frac{\partial \cw}{\partial {\mathcal S}} +
\frac{i \cw}{2 {\rm Im \,} {\mathcal S}}=0 \ , \label{FFlat} \\[2mm]
D_{{\mathcal Z}}\mathcal{W}&=&\frac{\partial \cw}{\partial {\mathcal Z}} +
\frac{3i \cw}{2 {\rm Im \,} {\mathcal Z}}=0 \ . \nonumber
\end{eqnarray}
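The axionic terms in these conditions follow directly from the K\"ahler potential (\ref{KModular}); for instance,
\[
\partial_{\mathcal T}\, {\mathcal K} = -\frac{3}{{\mathcal T} - \bar {\mathcal T}} = \frac{3i}{2\, {\rm Im \,} {\mathcal T}} \ ,
\]
and analogously for ${\mathcal S}$ and ${\mathcal Z}$.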
The task is to determine whether there are solutions with moduli completely stabilized at vevs
denoted
\begin{equation}
{\mathcal Z}_0 = x_0 + i y_0 \quad ; \quad {\mathcal S}_0 = s_0 + i\sigma_0 \quad ; \quad {\mathcal T}_0 = t_0 + i \mu_0 \ .
\label{vzst}
\end{equation}
The vacua are either Minkowski or AdS because the potential
(\ref{VModular}) at the minimum is given by $V_0 = - 3 e^{{\mathcal K}_0} |\cw_0|^2 \leq 0$.
Besides stabilization, there are further physical requirements. At the minimum
the imaginary part of the axiodilaton, $\sigma_0$, must be positive because it is the inverse
of the string coupling constant $g_s$.
It can be argued that the geometric moduli are subject to similar conditions. The main assumption is
that they arise from the metric of the internal space, which is ${\rm T}^6$ in the absence of fluxes.
In particular, the K\"ahler modulus has ${\rm Im \,} {\mathcal T} = e^{-\phi} A$, where $A$ is
the area of a 4-dimensional subtorus. Hence, $\mu_0 > 0$ is required.
Notice also that the internal volume is measured by $V_{int}=(\mu_0/\sigma_0)^{3/2}$.
For the transformed complex structure ${\mathcal Z}$ it happens that ${\rm Im \,} {\mathcal Z} = |\Gamma| {\rm Im \,} U/|\gamma U + \delta|^2$.
Therefore, necessarily ${\rm Im \,} {\mathcal Z}_0 = y_0 \not= 0$ because for ${\rm Im \,} U_0=0$ the internal space is degenerate.
Without loss of generality we take ${\rm Im \,} U_0$ to be positive.
Another physical issue is whether the moduli take values such that the effective supergravity action
is a reliable approximation to string theory. Specifically, the string coupling $g_s = 1/\sigma_0$ is expected to be small
to justify the exclusion of non-perturbative string effects. Conventionally, there is also a requirement
of large internal volume to disregard corrections in $\alpha^\prime$.
However, in the presence of non-geometric fluxes the internal space might be
a T-fold in which there can exist cycles with sizes related by T-duality \cite{hull, dabholkar}. Thus, for large
volume there could be tiny cycles whose associated winding modes would be light. To date these effects
are not well understood. At any rate, in this work we limit ourselves to finding supersymmetric vacua
of an effective field theory defined by a very precise K\"ahler potential and flux-induced superpotential.
A more detailed discussion of the landscape of vacua is left for section \ref{sec:lands}. We will see that the
moduli can be fixed at small string coupling and small cosmological constant.
In the following we will first consider supersymmetric Minkowski vacua that have
\mbox{$\cw=0$} at the minimum.
In our approach it is straightforward to show that for isotropic fluxes such vacua are disallowed.
We then turn our attention to the richer class of ${\rm AdS}_4$ vacua.
Since superpotential terms adopt very specific forms depending
on the particular subalgebra satisfied by the non-geometric fluxes, we will study
the corresponding vacua case by case.
We will mostly focus on the model associated with the non-geometric fluxes of the compact
$\mathfrak{su(2)^2}$ but will also consider other allowed subalgebras to some extent.
\subsection{Minkowski vacua}
Minkowski solutions with zero cosmological constant require that the potential vanishes. Imposing supersymmetry
further implies that the superpotential must be zero at the minimum $({\mathcal Z}_0, {\mathcal S}_0, {\mathcal T}_0)$.
A key property of the superpotential (\ref{WModular}) is its linearity in ${\mathcal S}$ and ${\mathcal T}$. This implies
in particular that the F-flat conditions $D_{\mathcal S}\cw=0$ and $D_{\mathcal T}\cw=0$, together with $\cw=0$, reduce just to
\begin{equation}
{\mathcal P}_3({\mathcal Z}_0)={\mathcal P}_2({\mathcal Z}_0)={\mathcal P}_1({\mathcal Z}_0)=0 \ .
\label{minkcon}
\end{equation}
The third condition $D_{\mathcal Z}\cw=0$ yields a linear relation between ${\mathcal S}_0$ and ${\mathcal T}_0$ so that not all moduli
can be stabilized. The situation is actually worse because (\ref{minkcon}) cannot be fulfilled appropriately. Indeed,
for the specific polynomials for each subalgebra shown in table \ref{tablecps}, it is evident that ${\mathcal P}_3$ and ${\mathcal P}_2$
can only have a common real root ${\mathcal Z}_0$. But then ${\rm Im \,} U_0={\rm Im \,} {\mathcal Z}_0=0$ and this is inconsistent with a well-defined
internal space.
It must be emphasized that we are assuming that non-geometric fluxes, and their induced ${\mathcal P}_3$, are non-trivial.
Our motivation is to fix the K\"ahler modulus without invoking non-perturbative effects.
If only RR and NSNS fluxes are turned on there do exist physical supersymmetric Minkowski vacua in which only the
axiodilaton and the complex structure are stabilized \cite{kst, dgkt}. In such solutions the RR and NSNS fluxes
must still satisfy a non-linear constraint \cite{dgkt, gray}.
No-go results for supersymmetric Minkowski vacua in the presence of non-geometric fluxes have been obtained previously
\cite{acfi, tasinato, gray}
\footnote{In \cite{tasinato} it is further shown that Minkowski vacua with all moduli
stabilized can exist in more general setups having more complex structure moduli than K\"ahler moduli (in IIB language).}.
In \cite{acfi} the existence was disproved assuming special solutions of the Jacobi identities (\ref{BianchiC}).
We are now extending the proof to all possible non-trivial {\it isotropic} non-geometric fluxes solving these constraints.
\subsection{${\rm AdS}_4$ vacua}
\label{sub:ads}
We now want to solve the supersymmetry conditions when $\cw \not=0$. The three complex
equations $D_\Phi\cw=0$, $\Phi={\mathcal Z},{\mathcal S},{\mathcal T}$, in principle admit solutions with all moduli
fixed at values ${\mathcal Z}_0 = x_0 + i y_0$, ${\mathcal S}_0 = s_0 + i\sigma_0$, and ${\mathcal T}_0 = t_0 + i \mu_0$.
We will also impose the physical requirements $\sigma_0 > 0$, $\mu_0 > 0$ and
${\rm Im \,} U_0 > 0$ which implies $|\Gamma| y_0 > 0$. In general existence of such solutions demands
that the fluxes satisfy some specific properties.
In the ${\rm AdS}_4$ vacua, ${\mathcal P}_2$ and ${\mathcal P}_3$ are necessarily different from zero. Moreover,
combining the equations $D_{\mathcal S}\cw=0$ and ${\rm D}_{\mathcal T}\cw=0$ shows that at the minimum
${\rm Im \,}\left({\mathcal P}_3/{\mathcal P}_2\right)=0$, or equivalently
\begin{equation}
\left({\mathcal P}_3 {\mathcal P}_2^* - {\mathcal P}_3^* {\mathcal P}_2\right)\left|_{0}\right.=0 \ .
\label{parcond}
\end{equation}
{}From this condition we can quickly extract useful information. For example, for the
polynomials of the nilpotent subalgebra we find that ${\epsilon }_1=0$. Similarly, for the semidirect
product $\mathfrak{su(2) \oplus u(1)^3}$, it follows that ${\epsilon }_1=-{\epsilon }_2$. Thus, in these two
cases ${\mathcal P}_2$ and ${\mathcal P}_3$ are forced to be parallel and equation (\ref{parcond}) is inconsequential
for the moduli. Having one equation less means that not all the moduli can be fixed. In fact, what
happens is that only a linear combination of the axions $s_0$ and $t_0$ is determined \cite{cfi}.
Another instructive example is that of the $\mathfrak{su(2) + u(1)^3}$ subalgebra. With the polynomials
provided in table \ref{tablecps} the condition (\ref{parcond}) implies
\begin{equation}
{\epsilon }_2 - 2 {\epsilon }_1 x_0(x_0^2 + y_0^2) = 0 \ ,
\label{direx}
\end{equation}
where we already used that $y_0\not=0$. Now we see that necessarily ${\epsilon }_1\not=0$ because otherwise
${\epsilon }_2$, and thus ${\mathcal P}_2$ itself, would vanish. However, it could be ${\epsilon }_2=0$ and then $x_0=0$.
If ${\epsilon }_2 \not=0$ we will just have one equation that gives $y_0$ in terms of $x_0$.
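Explicitly, inserting ${\mathcal P}_3 = 3{\mathcal Z}$ and ${\mathcal P}_2 = {\epsilon }_1 {\mathcal Z}^3 + {\epsilon }_2$ in (\ref{parcond})
gives, at ${\mathcal Z}_0 = x_0 + i y_0$,
\[
{\mathcal P}_3\, {\mathcal P}_2^* - {\mathcal P}_3^*\, {\mathcal P}_2 =
3\, {\epsilon }_1 ({\mathcal Z} \bar {\mathcal Z}^3 - \bar {\mathcal Z} {\mathcal Z}^3) + 3\, {\epsilon }_2 ({\mathcal Z} - \bar {\mathcal Z})
= 6 i\, y_0 \left[{\epsilon }_2 - 2\, {\epsilon }_1 x_0 (x_0^2 + y_0^2)\right] \ ,
\]
from which (\ref{direx}) follows.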
In other examples with ${\mathcal P}_2$ and ${\mathcal P}_3$ not parallel there are analogous results. It can happen
that (\ref{parcond}) already fixes $x_0$ or it gives $y_0$ as function of $x_0$.
The remaining five equations can be used to obtain ${\mathcal S}_0$ and ${\mathcal T}_0$ in terms of $y_0$ or $x_0$, and
to find a polynomial equation that determines $y_0$ or $x_0$. This procedure can be efficiently carried out
using the algebraic package {\it Singular}. The results are described below in more detail.
The superpotential for each $Q$-subalgebra is constructed with the flux-induced polynomials listed
in table \ref{tablecps}. The numbers of sources needed to cancel tadpoles are given in table \ref{tabletads}.
Recall that O3-planes (D3-branes) make a positive (negative) contribution to $N_3$, whereas
O7-planes (D7-branes) yield negative (positive) values of $N_7$.
Each supersymmetric vacuum can be distinguished by the modular invariant values of the string coupling constant $g_s$ and
the potential at the minimum $V_0$, which equals the cosmological constant up to normalization.
In the models at hand these quantities are given by
\begin{equation}
V_0 =-\frac{3 |\cw_0|^2}{128\, y_0^3 \, \mu_0^3 \, \sigma_0} \qquad ; \qquad g_s = \frac1{\sigma_0} \ .
\label{vacdata}
\end{equation}
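The numerical prefactor in $V_0$ arises from evaluating the K\"ahler potential (\ref{KModular}) at the minimum,
\[
e^{{\mathcal K}_0} = \frac{1}{(2 y_0)^3\, (2 \sigma_0)\, (2 \mu_0)^3} = \frac{1}{128\, y_0^3\, \mu_0^3\, \sigma_0} \ ,
\]
so that $V_0 = -3\, e^{{\mathcal K}_0} |\cw_0|^2$, in accordance with (\ref{VModular}).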
In all examples the vevs of the moduli $y_0$, $\sigma_0$, $\mu_0$, as well as the value $\cw_0$ of the superpotential at the minimum,
can be completely determined and will be given explicitly. It is then straightforward to evaluate the
characteristic data $(V_0, g_s)$.
\subsubsection{Nilpotent subalgebra}
\label{sss:nilpotentres}
When ${\epsilon }_1=0$, the model based on the non-geometric fluxes of the nilpotent subalgebra is
$U \leftrightarrow T$ dual to a IIA orientifold with only RR and NSNS fluxes already considered in the literature \cite{DeWolfe, cfi}.
Supersymmetry actually requires ${\epsilon }_1=0$.
There are some salient features that are easily reproduced in our setup. For instance, a solution
exists only if $\lambda_3 \not=0$ and $(\lambda_1\lambda_3 - \lambda_2^2) > 0$. The axions $s_0$ and $t_0$
can only be fixed in the linear combination
\begin{equation}
3t_0 + {\epsilon }_2 s_0 = \frac{\lambda_2}{\lambda_3^2}(3\lambda_1\lambda_2 - 2 \lambda_2^2) \ .
\label{nilaxions}
\end{equation}
The rest of the moduli are determined as
\begin{equation}
x_0=-\frac{\lambda_2}{\lambda_3} \quad ; \quad y_0^2 = \frac{5(\lambda_1\lambda_3-\lambda_2^2)}{3\lambda_3^2}
\quad ; \quad \sigma_0 = -\frac{2(\lambda_1\lambda_3-\lambda_2^2) y_0}{3{\epsilon }_2 \lambda_3}
\quad ; \quad \mu_0 = {\epsilon }_2 \sigma_0 \ .
\label{nilrest}
\end{equation}
The cosmological constant can be computed using $\cw_0=2i\mu_0 |\Gamma|^{3/2}$.
{}From the results we deduce that ${\epsilon }_2 > 0$, and $\lambda_3 > 0$ for $y_0 < 0$. Then ${\rm Im \,} U_0 > 0$
requires $|\Gamma| < 0$, as indeed happens for the nilpotent algebra. The tadpole conditions then give
$N_3 = - \lambda_3 {\epsilon }_2 |\Gamma|^3 > 0$ and $N_7 = \lambda_3 |\Gamma|^3 < 0$. The relevant conclusion
is that the model necessarily requires O3-planes and O7-planes.
\subsubsection{Semidirect sum $\mathfrak{su(2) \oplus u(1)^3}$}
\label{sss:semidirectres}
The non-geometric fluxes of this subalgebra are $U \leftrightarrow T$ dual to NSNS plus {\it geometric}
fluxes in a IIA orientifold. Models of this type have been studied previously \cite{Derendinger, vz1, cfi}.
For completeness we will briefly summarize our results, which fully agree with the general solution presented
in \cite{cfi}. Existence of a supersymmetric minimum imposes the constraint ${\epsilon }_1=-{\epsilon }_2$.
In this case it again happens that the axions $s_0$ and $t_0$ can only be determined in the linear
combination given by
\begin{equation}
3t_0 + {\epsilon }_2 s_0 = 3\lambda_1 + 3\lambda_2(9-7x_0) + 3\lambda_3 x_0(9-8x_0) \ .
\label{semiaxions}
\end{equation}
The imaginary parts of the axiodilaton and the K\"ahler field are stabilized at values
\begin{equation}
\mu_0 = {\epsilon }_2 \sigma_0 \quad ; \quad
{\epsilon }_2 \sigma_0 = 6(\lambda_2 + \lambda_3 x_0) y_0 \ .
\label{semimusigma}
\end{equation}
Notice that ${\epsilon }_2$ must be positive. It also follows that $\cw_0=2i\mu_0 (1-x_0 - iy_0)|\Gamma|^{3/2}$.
The vevs of $x_0$ and $y_0$ depend on whether the RR flux parameter $\lambda_3$ is zero or not.
When $\lambda_3=0$ we obtain
\begin{equation}
x_0=1 \quad ; \quad 3\lambda_2 y_0^2 = -(\lambda_1+\lambda_2) \ .
\label{semil3zero}
\end{equation}
Notice that $\lambda_2 \not=0$ to guarantee $\sigma_0 \not=0$. In fact, choosing $y_0 > 0$ requires
$\lambda_2 > 0$. For the number of sources we find
$N_3 = - \lambda_2 {\epsilon }_2 |\Gamma|^3 < 0$ and $N_7 = \lambda_2 |\Gamma|^3 > 0$.
Therefore, D3 and D7-branes must be included.
When $\lambda_3\not=0$ we instead find
\begin{equation}
\lambda_3 y_0^2 = 15(x_0-1)(\lambda_2+\lambda_3 x_0) \ ,
\label{semil3difzero}
\end{equation}
whereas $x_0$ must be a root of the cubic equation
\begin{equation}
160(x_0-1)^3 + 294(1+ \frac{\lambda_2}{\lambda_3})(x_0-1)^2 + 135(1+ \frac{\lambda_2}{\lambda_3})^2(x_0-1)
+\frac{1}{\lambda_3}(\lambda_3 + 3\lambda_2 + 3\lambda_1)=0 \ .
\label{semix0}
\end{equation}
The solution for $x_0$ must be real and such that $y_0 ^2 > 0$.
For the tadpoles we now have $N_7=|\Gamma|^3(\lambda_2+\lambda_3)$ and $N_3=-{\epsilon }_2 N_7$.
Thus, in general $N_3$ and $N_7$ have opposite signs. The remarkable feature is that now
they can be zero simultaneously. This occurs when the RR parameters satisfy $\lambda_2=-\lambda_3$,
in which case the cubic equation for $x_0$ can be solved exactly.
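The exactly solvable case just mentioned is easy to check numerically. The sketch below is our own illustration (the sample values $\lambda_1=2$, $\lambda_3=1$ are assumptions, not taken from the text): it solves the cubic (\ref{semix0}) for $\lambda_2=-\lambda_3$, where the quadratic and linear terms drop out, and confirms that (\ref{semil3difzero}) then gives $y_0^2=15(x_0-1)^2>0$ automatically.

```python
import numpy as np

# Sample flux parameters (assumed, not from the text) with lam2 = -lam3,
# the case in which N_3 = N_7 = 0 and eq. (semix0) becomes exactly solvable.
lam1, lam3 = 2.0, 1.0
lam2 = -lam3
r = lam2 / lam3

# eq. (semix0) written as a cubic in u = x0 - 1
coeffs = [160.0,
          294.0 * (1.0 + r),
          135.0 * (1.0 + r) ** 2,
          (lam3 + 3.0 * lam2 + 3.0 * lam1) / lam3]
u_num = np.roots(coeffs)
u_num = u_num[np.abs(u_num.imag) < 1e-9].real   # keep the single real root

# for lam2 = -lam3 the quadratic and linear terms drop: 160 u^3 = (2 lam3 - 3 lam1)/lam3
u_exact = np.cbrt((2.0 * lam3 - 3.0 * lam1) / (160.0 * lam3))
assert np.allclose(u_num, [u_exact])

# eq. (semil3difzero) then collapses to y0^2 = 15 u^2 > 0, so the vacuum always exists
x0 = 1.0 + u_exact
y0_sq = 15.0 * (x0 - 1.0) * (lam2 + lam3 * x0) / lam3
assert np.isclose(y0_sq, 15.0 * u_exact ** 2) and y0_sq > 0.0
```

Any other choice with $\lambda_2=-\lambda_3$ works in the same way, since only the combination $(2\lambda_3-3\lambda_1)/\lambda_3$ enters the real root.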
\subsubsection{Direct sum $\mathfrak{su(2)+ u(1)^3}$}
\label{sss:directres}
As explained before, necessarily ${\epsilon }_1 \not=0$. Let us consider ${\epsilon }_2=0$, which is the condition
for ${\mathcal P}_2$ to have only real roots. In this case all moduli can be determined. The axions are
fixed at $x_0=0$, $s_0=0$ and $t_0=0$, whereas the imaginary parts have vevs
\begin{equation}
y_0^2 = \frac{{\epsilon }_1\xi_3}{\xi_7} \qquad ; \qquad
\sigma_0 = -\frac{2 \xi_7^2 y_0}{{\epsilon }_1^2 \xi_3}
\qquad ; \qquad \mu_0 = 2 \xi_7 y_0 \ .
\label{alldprod}
\end{equation}
The cosmological constant is easily found substituting $\cw_0=-2\mu_0 y_0 |\Gamma|^{3/2}$.
Clearly, the solution exists only if $\xi_3 \not=0$ and $\xi_7\not=0$. Moreover, ${\epsilon }_1\xi_3\xi_7 > 0$,
and taking $y_0 > 0$ implies $\xi_3 < 0$, $\xi_7 > 0$ and ${\epsilon }_1 < 0$. The numbers of sources then satisfy $N_3 < 0$
and $N_7 > 0$, so that D3 and D7-branes are needed.
Taking ${\epsilon }_2\not=0$ we deduce that there are no solutions at all when $\xi_7=0$ and $\xi_3\not=0$. However, there are
minima that require ${\epsilon }_1 < 0$ and $N_7 > 0$ when $\xi_3=0$.
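The chain of sign deductions for the ${\epsilon }_2=0$ vacuum can be condensed into a few lines of arithmetic. The values below are assumptions chosen only to satisfy the stated inequalities; the point is that (\ref{alldprod}) then automatically yields positive ${\rm Im \,} S_0$ and ${\rm Im \,} T_0$.

```python
# Sign-consistency sketch for eq. (alldprod); the sample values are assumptions:
# y0^2 > 0 forces eps1*xi3*xi7 > 0, sigma0 > 0 forces xi3 < 0, mu0 > 0 forces xi7 > 0,
# and the three together force eps1 < 0.
eps1, xi3, xi7 = -1.0, -2.0, 3.0

y0 = (eps1 * xi3 / xi7) ** 0.5          # positive branch, y0 > 0
sigma0 = -2.0 * xi7**2 * y0 / (eps1**2 * xi3)
mu0 = 2.0 * xi7 * y0

assert y0 > 0 and sigma0 > 0 and mu0 > 0   # Im S_0 and Im T_0 come out positive
```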
\subsubsection{Non-compact $\mathfrak{so(3,1)}$}
\label{sss:noncompactres}
This is the only flux configuration for which ${\mathcal P}_3({\mathcal Z})$ has complex roots. It also happens that ${\mathcal P}_2({\mathcal Z})$
always has three different real roots. We will briefly discuss the vacua according to whether
the NSNS flux parameter ${\epsilon }_2$ vanishes or not.
\begin{trivlist}
\item[$\bullet$] \underline{${\epsilon }_2=0$}
In this setup the axions are determined to be $x_0=0$, $s_0=0$ and $t_0=0$. For the imaginary parts of the
K\"ahler modulus and the axiodilaton we obtain
\begin{equation}
\mu_0 = \frac{{\epsilon }_1 \sigma_0 (3+y_0^2)}{(1-y_0^2)} \quad ; \quad
{\epsilon }_1\sigma_0 = \frac{1}{2y_0(3+y_0^2)}\big[3\xi_7 (y_0^2-1) - {\epsilon }_1\xi_3(3y_0^2+1)\big] \ .
\label{so31ip}
\end{equation}
To evaluate the potential at the minimum we use $\cw_0=2\mu_0 y_0 (1-y_0^2)|\Gamma|^{3/2}$.
Notice that $\xi_3$ and $\xi_7$ cannot be zero simultaneously and that $y_0^2=1$ is not allowed.
Actually, the imaginary part of the transformed complex structure satisfies a third order polynomial
equation in $y_0^2$ given by
\begin{equation}
{\epsilon }_1\xi_3(5 y_0^6 + 13 y_0^4 + 15 y_0^2 -1) - \xi_7(y_0^2-1)(5 y_0^4 + 6y_0^2 -3) = 0 \ .
\label{sol31y0}
\end{equation}
We are interested in real roots $y_0 \not=0$ and $y_0\not=\pm1$.
Although we have not made an exhaustive analysis, it is clear that the solutions of (\ref{sol31y0}) depend
on the range of the ratio $\xi_7/{\epsilon }_1\xi_3$. For instance, there are values for which there is
no real root at all, as it occurs e.g. for $2\xi_7=-{\epsilon }_1\xi_3$.
For other values there might be only one real positive solution for $y_0^2$. A special example occurs
when $\xi_3=0$: the net O3/D3 charge $N_3$ is zero, while the net O7/D7 charge $N_7$ is negative, as implied by the
conditions $\mu_0 > 0$ and $|\Gamma| y_0 > 0$. Similarly, when $\xi_7=0$, there is only one solution, in which
$N_7=0$ while $N_3 < 0$.
The third possibility is to have two allowed solutions. For instance, taking $\xi_7=2{\epsilon }_1 \xi_3$ gives roots
$y_0^2=1/5$ and $y_0^2=1+2\sqrt 2$. However, in principle the corresponding vacua cannot be realized simultaneously
because the net charges would have to jump. In fact, for $y_0^2 < 1$, it happens that $N_3 N_7 > 0$, whereas for
$y_0^2 > 1$, it must be $N_3 N_7 < 0$. It can also arise that both solutions have $y_0^2 < 1$.
For example, when $\xi_7=-30{\epsilon }_1 \xi_3$ each of the two vacua has $N_3 > 0$ and $N_7 < 0$.
We will explore the phenomenon of multiple AdS vacua in more detail for the non-geometric fluxes of the
$\mathfrak{su(2)^2}$ algebra.
\item[$\bullet$] \underline{${\epsilon }_2\not=0$}
We have only studied the special cases in which one of the flux-tadpoles $N_3$ or $N_7$ is zero.
We find that when ${\epsilon }_1=0$ the F-flat conditions cannot be solved, but for ${\epsilon }_1 > 0$ there are
consistent solutions for a particular range of $|{\epsilon }_2/{\epsilon }_1|$. Vacua with $\xi_3 = 0$ exist
provided that $\xi_7 < 0$. Vacua with no O7/D7 flux-tadpoles, i.e. with $\xi_7=0$, require $\xi_3 < 0$.
One important conclusion is that for the fluxes of the non-compact $Q$-subalgebra solutions with $N_7=0$
must have $N_3 < 0$.
\end{trivlist}
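The two-root example quoted in the ${\epsilon }_2=0$ case above can be verified numerically. In our own expansion (an illustration, not part of the text), setting $\xi_7=2{\epsilon }_1\xi_3$ in (\ref{sol31y0}), dividing by ${\epsilon }_1\xi_3$ and writing $z=y_0^2$ gives $5z^3-11z^2-33z+7=0$:

```python
import numpy as np

# With xi7 = 2*eps1*xi3, eq. (sol31y0) divided by eps1*xi3 and written in z = y0^2
# expands to 5 z^3 - 11 z^2 - 33 z + 7 = 0 (our own expansion of the quoted example).
z = np.sort(np.roots([5.0, -11.0, -33.0, 7.0]).real)

# three real roots: 1 - 2*sqrt(2) (rejected, since z = y0^2 > 0), 1/5 and 1 + 2*sqrt(2)
assert np.allclose(z, [1.0 - 2.0 * np.sqrt(2.0), 0.2, 1.0 + 2.0 * np.sqrt(2.0)])
assert (z > 0).sum() == 2       # exactly two admissible vacua, as stated in the text
```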
\subsubsection{Compact $\mathfrak{su(2)^2}$}
\label{sss:compactres}
This is the only situation in which the polynomial $P_3(U)$ induced by the non-geometric fluxes has three
different real roots. The polynomial $P_2(U)$ generated by NSNS fluxes has complex roots
whenever ${\epsilon }_1{\epsilon }_2\not=0$, and one triple real root otherwise. We will study the vacua in both cases in some detail.
The full model based on the non-geometric fluxes of $\mathfrak{su(2)^2}$ has an interesting residual symmetry
that exchanges the NSNS auxiliary parameters. It can be shown that the effective action is invariant under
${\epsilon }_1 \leftrightarrow {\epsilon }_2$, with $\xi_3$ and $\xi_7$ unchanged, together with the field transformations
\begin{equation}
{\mathcal Z} \to 1/{\mathcal Z}^* \qquad ; \qquad {\mathcal S} \to -{\mathcal S}^* \qquad ; \qquad {\mathcal T} \to -{\mathcal T}^* \ .
\label{extrasym}
\end{equation}
This symmetry leaves one of the ${\mathcal P}_3$ roots invariant while exchanging the other two.
\subsubsection*{\thesubsubsection.1 \quad $P_2(U)$ with triple real root}
\label{su2zero}
\addcontentsline{toc}{subsubsection}{\hspace{13pt} \thesubsubsection.1 \quad $P_2(U)$ with triple real root }
Due to the symmetry (\ref{extrasym}) it is enough to consider ${\epsilon }_1=0$ and ${\epsilon }_2 \not=0$.
In this model the axions are stabilized at vevs
\begin{equation}
x_0=-\frac12 \qquad ; \qquad {\epsilon }_2s_0=3\xi_7 -\frac{{\epsilon }_2\xi_3}{2} \qquad ; \qquad t_0=\xi_7 -\frac{{\epsilon }_2\xi_3}{2} \ .
\label{su2rst}
\end{equation}
The imaginary parts of the K\"ahler modulus and the axiodilaton are fixed in terms of $y_0$ according to
\begin{equation}
\mu_0 = -\frac{4{\epsilon }_2 \sigma_0}{(1+4y_0^2)} \quad ; \quad
{\epsilon }_2\sigma_0 = -y_0\big[3\xi_7 + \frac{{\epsilon }_2 \xi_3}{8}(4y_0^2-3)\big] \ .
\label{su2ip}
\end{equation}
At the minimum $\cw_0=2i {\epsilon }_2 \sigma_0 |\Gamma|^{3/2}$.
Clearly $\xi_3$ and $\xi_7$ cannot vanish simultaneously so that the model always requires additional
sources to cancel tadpoles. Observe that necessarily ${\epsilon }_2 < 0$.
The modulus $y_0$ is determined by the fourth order polynomial equation
\begin{equation}
{\epsilon }_2\xi_3(4y_0^2-1)(4y_0^2+5) - 8\xi_7(4y_0^2-5)= 0 \ .
\label{sol2y0}
\end{equation}
In the two special cases $\xi_7=0$ and $\xi_3=0$ an exact solution is easily found. When $\xi_3 \xi_7 \not=0$ there
can be two AdS solutions. The corresponding vacua, which can be characterized by the net tadpoles $N_3$ and $N_7$, are
described more extensively in the following.
\begin{trivlist}
\item[$\bullet$] \underline{$N_7=0$}
When $\xi_7=0$ the vevs have the very simple expressions
\begin{equation}
y_0^2 = \frac14
\qquad ; \qquad \sigma_0 = \frac{\xi_3 y_0}{4}
\qquad ; \qquad \mu_0 = -2{\epsilon }_2 \sigma_0 \qquad ; \qquad
V_0 = \frac{12 |\Gamma|^3 y_0}{{\epsilon }_2 \xi_3^2}
\ .
\label{n7zero}
\end{equation}
Since both $\mu_0$ and $\sigma_0$ are positive, it must be that ${\epsilon }_2 <0$ and, taking $y_0 > 0$, that $\xi_3 > 0$.
Therefore, $N_3 > 0$ and O3-planes must be included.
\item[$\bullet$] \underline{$N_3=0$}
This is the case $\xi_3=0$. The moduli and the cosmological constant are fixed at values
\begin{equation}
y_0^2 = \frac54
\qquad ; \qquad {\epsilon }_2\sigma_0 = -3\xi_7 y_0
\qquad ; \qquad \mu_0 = -\frac23 {\epsilon }_2 \sigma_0 \qquad ; \qquad
V_0 = \frac{9 |\Gamma|^3 {\epsilon }_2 y_0}{500\, \xi_7^2}
\ .
\label{n3zero}
\end{equation}
Necessarily ${\epsilon }_2 < 0$, and choosing $y_0 > 0$ forces $\xi_7 > 0$.
Hence, $N_7 > 0$ and D7-branes are required.
\item[$\bullet$] \underline{$N_3N_7\not=0$}
The solutions for $y_0$ depend on the ratio $\xi_7/{\epsilon }_2\xi_3$. A detailed analysis can be easily performed because
the polynomial equation (\ref{sol2y0}) is quadratic in $y_0^2$. We find that there are no real solutions in the interval
$\msm{1/8 < \xi_7/{\epsilon }_2\xi_3 < (7+2\sqrt{10})/4}$. On the other hand, when $\msm{0 < \xi_7/{\epsilon }_2\xi_3 < 1/8}$, there is only one real positive
solution for $y_0^2$ and it requires $N_3 > 0$ and $N_7 < 0$.
For $\msm{\xi_7/{\epsilon }_2\xi_3 \leq 0}$ there is only one acceptable root for $y_0^2$
and it leads to $N_3 > 0$ and $N_7 \geq 0$. A more interesting range of parameters is $\msm{\xi_7/{\epsilon }_2\xi_3 > (7+2\sqrt{10})/4}$ because
there are two allowed solutions for $y_0^2$ and for both it must be that $N_3 < 0$ and $N_7 > 0$. The upshot is that there can be metastable
AdS vacua in the presence of D3 and D7-branes.
\end{trivlist}
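The root counting above is easy to reproduce. In the sketch below (our own; $\kappa$ is a shorthand for $\xi_7/{\epsilon }_2\xi_3$ introduced here for convenience), eq. (\ref{sol2y0}) is expanded as a quadratic in $z=y_0^2$ and the positive roots are counted in each range:

```python
import numpy as np

def positive_z(kappa):
    """Positive roots z = y0^2 of eq. (sol2y0), with kappa = xi7/(eps2*xi3):
       (4z - 1)(4z + 5) - 8*kappa*(4z - 5) = 16 z^2 + (16 - 32 kappa) z + 40 kappa - 5."""
    r = np.roots([16.0, 16.0 - 32.0 * kappa, 40.0 * kappa - 5.0])
    r = r[np.abs(r.imag) < 1e-12].real   # discard complex-conjugate pairs
    return np.sort(r[r > 0.0])

assert np.allclose(positive_z(0.0), [0.25])   # xi7 = 0: the y0^2 = 1/4 vacuum of (n7zero)
assert len(positive_z(0.15)) == 0             # inside 1/8 < kappa < (7 + 2*sqrt(10))/4
assert len(positive_z(0.10)) == 1             # 0 < kappa < 1/8: a single vacuum
assert len(positive_z(4.0)) == 2              # kappa > (7 + 2*sqrt(10))/4: two AdS vacua
```

The critical value $(7+2\sqrt{10})/4$ is where the discriminant of the quadratic in $z$ vanishes; in part of the forbidden window the roots are real but negative, which is why the window starts already at $\kappa=1/8$.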
\subsubsection*{\thesubsubsection.2 \quad $P_2(U)$ with complex roots}
\label{su2nonzero}
\addcontentsline{toc}{subsubsection}{\hspace{13pt} \thesubsubsection.2 \quad $P_2(U)$ with complex roots}
The F-flat conditions can be unfolded to obtain analytic expressions for the vevs of all moduli.
However, for a generic range of parameters a higher-order polynomial equation has to be solved to determine $y_0$ in the end.
The main interesting feature is the appearance of multiple vacua even when $N_3 N_7 = 0$, i.e. when either no O7/D7 or no O3/D3 net charges
are present. We will first describe the overall picture and then present examples. For definiteness we always choose $y_0 > 0$, so that
$|\Gamma| > 0$ is required to have ${\rm Im \,} U_0 > 0$ for the complex structure.
To obtain and examine the results it is useful to make some redefinitions. The idea is to leave as few free
parameters as possible in the F-flat equations.
Since $\epsilon_1$ is different from zero we can work with the ratio
\begin{equation}
\rho=\frac{\epsilon_2}{\epsilon_1} \ .
\label{newps}
\end{equation}
By virtue of the residual symmetry (\ref{extrasym}) there is an invariance under $\rho \to 1/\rho$. Therefore, we can restrict
to the range $-1 \leq \rho \leq 1$, where the boundary corresponds to the fixed points of the inversion. Furthermore, as discussed at the
end of section \ref{subsubso4}, the parameter $\rho$ is either a rational number or involves at most square roots of rationals.
When $\xi_3 \not=0$ it is also convenient to introduce new variables as
\begin{equation}
{\mathcal T}=\epsilon_1 \xi_3 \,\hat{\mathcal T}
\qquad ; \qquad
{\mathcal S} = \xi_3 \hat {\mathcal S} \qquad ; \qquad \xi_7 = {\epsilon }_1 \xi_3 (\rho^2+1) \eta \ .
\label{newus}
\end{equation}
The definition of the parameter $\eta$ seems awkward but it simplifies the results. Notice that $\eta \to \eta\rho$ under (\ref{extrasym}).
In the new variables the superpotential becomes
\begin{equation}
\cw=|\Gamma|^{3/2} \epsilon_1 \xi_3 [3\, \hat{\mathcal T} {\mathcal Z}({\mathcal Z}+1) + \hat{\mathcal S}({\mathcal Z}^3 + \rho) + (1-\rho {\mathcal Z}^3) + 3\eta (1+\rho^2) {\mathcal Z} (1-{\mathcal Z}) ] \ .
\label{w1}
\end{equation}
Since the F-flat conditions are homogeneous in $\cw$ the resulting equations will only depend on the parameters $\rho$ and $\eta$.
When $\xi_3=0$ we just make different field redefinitions, i.e. ${\mathcal T}=\epsilon_1 \xi_7 \,\hat{\mathcal T}$ and ${\mathcal S}=\xi_7 \,\hat{\mathcal S}$, so that
the free parameters will be $\rho$ and $\xi_7/{\epsilon }_1$.
Manipulating the F-flat conditions enables us to find the vevs ${\mathcal T}_0$ and ${\mathcal S}_0$ as functions of $(x_0, y_0)$.
The expressions are tractable but bulky so that we refrain from presenting them. The exception is the
handy relation between the size and string coupling moduli
\begin{equation}
\mu_0=\frac{{\epsilon }_1 \sigma_0(3x_0^2-y_0^2)}{1+2x_0} \ ,
\label{msvacgen}
\end{equation}
which is valid when $x_0\not=\displaystyle{\msm{-\frac12}}$ and $y_0^2\not=\displaystyle{\msm{\frac34}}$. There is a solution with $x_0 =\displaystyle{\msm{-\frac12}}$ and
$y_0^2 =\displaystyle{\msm{\frac34}}$ but it has $\mu_0=-{\epsilon }_1(1+\rho)\sigma_0$, \, $\mu_0=3\xi_7y_0$, and it requires $\eta=-(1+\rho)/(\rho^2-7\rho+1)$. There is another
vacuum with $x_0=\displaystyle{\msm{-\frac12}}$ that occurs when $\rho \to \infty$ (${\epsilon }_1=0$) and was discussed in section \ref{su2zero}.
The case $x_0^2=y_0^2$, which is better treated separately, requires $\xi_7\not=0$ unless $\rho=0$.
The residual unknowns $(x_0, y_0)$ are determined from the coupled system
\begin{eqnarray}
&{\hspace*{-8mm}} & y_0^4+2x_0(1+x_0)y_0^2-\rho(2x_0+1) + x_0^3(x_0+2) = 0 \label{eqab} \ , \\[4mm]
& {\hspace*{-8mm}} &
y_0^6 \left( 1+2\eta x_0-2 \eta \right) + \left( 1+30 \eta {x_0}^{3}-{x_0}^{2}+18\eta {x_0}^{2}-6\,\rho\eta \right) y_0^4
\nonumber \\
& {\hspace*{-8mm}} &
+ \, x_0 \left( 54\eta {x_0}^{4}+11 {x_0}^{3}+42 \eta {x_0}^{3}+8 {x_0}^{2}+12 \rho \eta x_0-4 x_0-6 \rho\eta \right) y_0^2
\label{eqab2} \\
& {\hspace*{-8mm}} &
\mbox{}+ \left( 2\rho+4\rho x_0+11 {x_0}^{3}+13 {x_0}^{4} \right) \left( 2\,\rho\eta+2\eta{x_0}^{3}+{x_0}^{2}+x_0 \right) = 0
\ . \nonumber
\end{eqnarray}
The corresponding equations when $\xi_3=0$ can be obtained taking the limit $\eta \to \infty$.
Eliminating $y_0$ for generic parameters gives a ninth-order polynomial equation for $x_0$.
For some range of parameters the above equations can admit several solutions for ${\mathcal Z}_0=x_0+iy_0$, which in turn yield consistent
values for the remaining moduli. The existence of multiple vacua is most easily detected in the limiting cases in which one of the net tadpoles $N_7$
or $N_3$ vanishes, equivalently when $\xi_7=0$ ($\eta=0$) or $\xi_3=0$ ($\eta \to \infty$). In either limit the NSNS
parameter $\rho$ can still be adjusted. We expect the results to be invariant under $\rho \to 1/\rho$
and this is indeed what happens.
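As a sanity check of the coupled system, one can verify that for $\rho=\eta=0$ the point $(x_0, y_0)=(-1, 1)$ solves both (\ref{eqab}) and (\ref{eqab2}), in agreement with the exact vacuum ${\mathcal Z}_0=-1+i$ of (\ref{solrzero}). The sketch below is our own transcription of the two equations:

```python
def eqab(x, y, rho):
    # left-hand side of eq. (eqab)
    return y**4 + 2*x*(1 + x)*y**2 - rho*(2*x + 1) + x**3*(x + 2)

def eqab2(x, y, rho, eta):
    # left-hand side of eq. (eqab2)
    return (y**6*(1 + 2*eta*x - 2*eta)
            + (1 + 30*eta*x**3 - x**2 + 18*eta*x**2 - 6*rho*eta)*y**4
            + x*(54*eta*x**4 + 11*x**3 + 42*eta*x**3 + 8*x**2
                 + 12*rho*eta*x - 4*x - 6*rho*eta)*y**2
            + (2*rho + 4*rho*x + 11*x**3 + 13*x**4)
              * (2*rho*eta + 2*eta*x**3 + x**2 + x))

# the exact rho = 0, eta = 0 vacuum Z0 = -1 + i solves both equations
x0, y0 = -1.0, 1.0
assert abs(eqab(x0, y0, 0.0)) < 1e-12
assert abs(eqab2(x0, y0, 0.0, 0.0)) < 1e-12
```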
We have mostly looked at models having no O7/D7 net charge, namely with $\eta=0$.
It turns out that the solutions require $\xi_3 > 0$ so that $N_3 > 0$ and O3-planes must be present.
Below we list the main results.
\noindent
1. For $\rho=1$ there are no minima with moduli stabilized.
\noindent
2. For $\rho=-1$ there is only one distinct vacuum with data
\begin{equation}
\mfn{
{\mathcal Z}_0 = -0.876 + 1.158 \, i \quad ; \quad
{\mathcal S}_0 = \xi_3(-0.381 + 0.238\, i) \quad ; \quad
{\mathcal T}_0 = \epsilon_1 \xi_3 (0.602 - 0.305\, i) \quad ; \quad
V_0= \frac{2.38\, |\Gamma|^3}{\xi_3^2 \epsilon_1}
\ . }
\label{solrmu}
\end{equation}
Notice that necessarily $\xi_3 > 0$ and $\epsilon_1 < 0$. Actually, for $\rho=-1$, there is a second consistent
solution but it is related to the above by the residual symmetry (\ref{extrasym}).
\noindent
3. There can be only one solution when $\rho_c \leq \rho < 1$, where $\rho_c=-0.7267361874$.
The critical value $\rho_c$ is such that the discriminant of the polynomial equation that determines
$x_0$ is zero. Consistency requires ${\epsilon }_1 < 0$ and $\xi_3 > 0$ so that O3-planes are needed.
For instance, when $\rho=0$ the solution is exact and has
\begin{equation}
\mfn{
{\mathcal Z}_0 = -1 + i \quad ; \quad
{\mathcal S}_0 = \displaystyle{\frac{\xi_3}8}(4 + i) \quad ; \quad
{\mathcal T}_0 = \displaystyle{\frac{\epsilon_1 \xi_3}4}(2-i) \quad ; \quad
V_0= \frac{6\, |\Gamma|^3}{\xi_3^2 \epsilon_1}
\ .}
\label{solrzero}
\end{equation}
As expected, upon the transformation (\ref{extrasym}) this vacuum coincides with that having $\xi_7=0$ and ${\epsilon }_1=0$, given in (\ref{n7zero}).
For other values of $\rho$ the solution is numerical. For example, taking $\rho=\msm{\frac12}$ leads to the vevs
\begin{equation}
\mfn{
{\mathcal Z}_0=-1.036 + 0.834\, i \quad ; \quad {\mathcal S}_0=\xi_3(1.561 + 0.192\, i) \quad ; \quad
{\mathcal T}_0=\xi_3 \epsilon_1(1.055 - 0.453\, i) \quad ; \quad V_0=\frac{2.283 |\Gamma|^3}{ \xi_3^2 \, \epsilon_1} \ .}
\label{solrm12}
\end{equation}
\noindent
4. The important upshot is that in the interval $-1 < \rho < \rho_c$ there can be two
distinct solutions for the same set of fluxes. An example with $\rho=\msm{-\frac45}$ is shown in table \ref{solezero}.
Notice that the last two solutions can exist for $\xi_3 > 0$ and
$\epsilon_1 > 0$. The first solution can instead occur for $\xi_3 > 0$ and
$\epsilon_1 < 0$.
\begin{table}[htb]
\renewcommand{\arraystretch}{1.15}
\begin{center}
{\footnotesize
\begin{tabular}{|c|c|c|c|}
\hline
${\mathcal Z}_0$ & ${\mathcal S}_0/\xi_3$ & ${\mathcal T}_0/\xi_3 \epsilon_1$ & $V_0\, \xi_3^2 \, \epsilon_1/|\Gamma|^3$
\\
\hline
$\!\! -0.91105442 + 1.14050441 \, i$ & $ -0.26002362 + 0.19059447 \, i$ &
$0.53128071 - 0.27572497\, i$ & 3.353 \\
\hline
$\!\! -0.43550654 + 0.73478523 \, i$ & $ 0.28605555 + 0.55017649 \, i$ &
$0.60410811 + 0.12407321 \, i$ & \!\!\! -2.168 \\
\hline
$\!\! -0.40368586 + 0.57866160\, i$ & $ 0.49215445 + 0.33255331 \, i$ &
$0.57101568 + 0.26593032\, i$ & \!\!\! -1.880 \\
\hline
\end{tabular}
}
\end{center}
\caption{\small Degenerate vacua for $\xi_7=0$ and $\rho=\msm{-\frac45}$.}
\label{solezero}
\end{table}
\noindent
For models having no O3/D3 net charge a detailed analysis is clearly feasible but we have only sampled narrow ranges of the adjustable parameter $\rho$.
Consistent solutions must have ${\epsilon }_1 < 0$ and $\xi_7 > 0$. Hence, $N_7 > 0$ and D7-branes must be included. There are values of $\rho$, e.g. $\rho=-1$,
for which there are no vacua with stabilized moduli. For $\rho=1$ there is only one minimum which can be computed exactly.
More interestingly, models of this type can also exhibit multiple vacua. In table \ref{soleinfty} we show one example with $\rho=\msm{\frac34}$.
Observe that both solutions exist for $ {\epsilon }_1 < 0$ and $\xi_7 >0$.
\begin{table}[htb]
\renewcommand{\arraystretch}{1.15}
\begin{center}
{\footnotesize
\begin{tabular}{|c|c|c|c|}
\hline
${\mathcal Z}_0$ & ${\epsilon }_1{\mathcal S}_0/\xi_7$ & ${\mathcal T}_0/\xi_7$ & $V_0\, \xi_7^2 /\epsilon_1|\Gamma|^3$
\\
\hline
$-0.88312113 + 0.74580943 \, i$ & $ -6.1818994 - 1.6867660 \, i$ &
$-4.20643209 + 3.92605399 \, i$ & 0.026 \\
\hline
$0.20646056 + 0.89488895 \, i$ & $ 0.03039439 - 2.49813344 \, i$ &
$-0.06455485 + 1.18981502 \, i$ & 0.084 \\
\hline
\end{tabular}
}
\end{center}
\caption{\small Vacua for $\xi_3=0$ and $\rho=\msm{\frac34}$.}
\label{soleinfty}
\end{table}
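The entries of table \ref{soleinfty} can be checked directly. In the $\xi_3=0$ limit, (\ref{eqab}) is unchanged while (\ref{eqab2}), divided by $\eta$, reduces to its $\eta\to\infty$ coefficient. The sketch below (our own transcription) plugs in the tabulated vevs with $\rho=\msm{\frac34}$ and confirms that the residuals vanish to the precision of the quoted digits.

```python
def eqab(x, y, rho):
    # left-hand side of eq. (eqab), which is independent of eta
    return y**4 + 2*x*(1 + x)*y**2 - rho*(2*x + 1) + x**3*(x + 2)

def eqab2_inf(x, y, rho):
    # eta -> infinity limit of eq. (eqab2): the coefficient of eta after dividing by it
    return (2*(x - 1)*y**6
            + (30*x**3 + 18*x**2 - 6*rho)*y**4
            + x*(54*x**4 + 42*x**3 + 12*rho*x - 6*rho)*y**2
            + (2*rho + 4*rho*x + 11*x**3 + 13*x**4)*(2*rho + 2*x**3))

rho = 0.75
vevs = [(-0.88312113, 0.74580943), (0.20646056, 0.89488895)]  # Z0 from table (soleinfty)
for x0, y0 in vevs:
    assert abs(eqab(x0, y0, rho)) < 1e-4        # residual limited by the 8 quoted digits
    assert abs(eqab2_inf(x0, y0, rho)) < 1e-3
```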
\section{Aspects of the non-geometric landscape}
\label{sec:lands}
In this section we discuss the main aspects of the ${\rm AdS}_4$ vacua in our models, which are standard examples of type IIB toroidal
orientifolds with O3/O7-planes. Besides the axiodilaton $S$, after an isotropic Ansatz the massless scalars reduce to the overall complex
structure $U$ and the size modulus $T$. Fluxes of the RR and NSNS 3-forms generate a potential that gives masses only to $S$ and $U$.
The new ingredients here are the non-geometric $Q$-fluxes, which are required to restore T-duality between type IIA and type IIB, and which
induce a superpotential for the K\"ahler field $T$. The various fluxes must satisfy certain constraints arising from Jacobi or Bianchi identities.
The problem is then to minimize the scalar potential while solving the constraints. The question is whether there are solutions
with all moduli stabilized. We have seen that the answer is affirmative and now we intend to analyze it in more detail.
It is instructive to begin by recounting the findings of the previous sections. The initial step is to classify the subalgebras whose
structure constants are the $Q$'s. With the isotropic Ansatz there are only five classes. For each type, the non-geometric fluxes can be written in terms
of four auxiliary parameters $\genfrac{(}{)}{0pt}{}{\alpha \, \beta}{\gamma \, \delta}= \Gamma$, in such a way that the Jacobi identities are automatically satisfied.
Other fluxes can also be parametrized using $\Gamma$ plus additional variables: $({\epsilon }_1, {\epsilon }_2)$ for NSNS, and $(\xi_3, \xi_7, \xi_s, \xi_t)$
or $(\lambda_1, \lambda_2, \lambda_3, \xi_s, \xi_t)$ for RR. The significance of $\Gamma$ is that it defines a transformed complex
structure ${\mathcal Z}=(\alpha U + \beta)/(\gamma U + \delta)$ that is invariant under the modular group $SL(2,{\mathbb Z})_U$. The effective action can be expressed in
terms of ${\mathcal Z}$ according to the $Q$-subalgebra. Once the subalgebra is chosen the vacua will depend only on the variables
$\Gamma$, $({\epsilon }_1, {\epsilon }_2)$, and $(\xi_3, \xi_7)$ or $(\lambda_1, \lambda_2, \lambda_3)$, that in turn determine the values of the
cosmological constant and the string coupling $(V_0, g_s)$, as well as the net tadpoles $(N_3, N_7)$. In many examples, the vevs of the moduli
can be determined in closed form.
Our approach to analyzing the vacua in the presence of non-geometric fluxes has the great advantage that the degeneracy due to modular transformations
of the complex structure is already taken into account. Inequivalent vacua are labelled simply by the vevs $({\mathcal Z}_0, S_0, T_0)$, which are
modular invariant. In practice this means that we can study families of modular invariant vacua by choosing a particular structure for
$\Gamma$. In section \ref{sub:fam} we will give concrete examples.
There is an additional vacuum degeneracy because the characteristic data $(V_0, g_s)$ happen to be independent of the parameters $(\xi_s, \xi_t)$.
The explanation is that they correspond to shifts of the axions ${\rm Re \,} S$ and ${\rm Re \,} T$ which can be reabsorbed in the RR fluxes.
The flux-induced RR tadpoles $(N_3, N_7)$ are blind to $(\xi_s, \xi_t)$ as well.
Apparently, generic shifts in ${\rm Re \,} S$ and ${\rm Re \,} T$ are not symmetries of the compactification, so that
two vacua differing only in the RR flux parameters $(\xi_s, \xi_t)$ would be truly distinct. We argue below that the vacua are
equivalent because the full background is symmetric under $S \to S - \xi_s$, and $T \to T - \xi_t$.
In absence of non-geometric fluxes the 3-form RR field strength that appears in the 10-dimensional action
is given by $F_3=dC_2- H_3 \wedge C_0 + \bar F_3$, where $H_3 = dB_2 + \bar H_3$. The natural generalization to include
non-geometric fluxes is
\begin{equation}
F_3=dC_2- H_3 \wedge C_0 + QC_4 + \bar F_3 \ ,
\label{f3q}
\end{equation}
where $QC_4$ is a 3-form that we can extract from (\ref{QJexpan}) because ${\rm Re \,} {\mathcal J} = C_4$. In fact, $C_4=-{\rm Re \,} T \sum_I \tilde \omega^I$, where
$\tilde \omega ^I$ are the basis 4-forms. Recall also that $C_0 = {\rm Re \,} S$. Notice then that $F_3$ involves the axions in question.
The relevant result is that $F_3$ is invariant\footnote{We thank P. C\'amara for giving us this hint.}
under the shifts $S \to S - \xi_s$, and $T \to T - \xi_t$. To show this
we first compute the variation of $\bar F_3$ using the universal terms (\ref{uniRR}) in the parametrization of the RR fluxes and
then substitute in (\ref{f3q}). In the effective \deq4 action the result is simply that the superpotential is invariant
under these axionic shifts and the corresponding transformation of the RR fluxes. In turn this follows from (\ref{P3Iso}) after
substituting (\ref{uniRR}).
\subsection{Overview}
\label{ss:view}
We now describe in order some prominent features of the ${\rm AdS}_4$ vacua with non-geometric $Q$-fluxes switched on.
\bigskip
\noindent
1. The explicit results of section \ref{sub:ads} indicate that in all models the vevs $\sigma_0 = {\rm Im \,} S_0$ and $\mu_0 = {\rm Im \,} T_0$ are
correlated. This generic property follows from the F-flat conditions simply because the superpotential is linear in the axiodilaton
and the K\"ahler modulus. Recall that the vevs in question determine physically important quantities, namely the string coupling $g_s=1/\sigma_0$,
and the overall internal volume $V_{int}=(\mu_0/\sigma_0)^{3/2}$. To trust the perturbative string approximation $g_s$ must be small and we
will shortly explain, as already shown in \cite{stw2}, that generically there are regions in flux space in which both $g_s$
and the cosmological constant are small, while $V_{int}$ is large. We stress again the caveat that even at large overall volume there could still
exist light winding string states when non-geometric fluxes are in play. These effects are certainly important in trying to lift the solutions
to full string vacua. In this paper we only claim to have found vacua of the effective field theory with a precise set of massless fields
and interactions due to generalized fluxes.
\bigskip
\noindent
2. Another common feature of all models is the relation between moduli vevs and net RR charges. In type IIB toroidal orientifolds
it is known that in Minkowski supersymmetric vacua the contribution of RR and NSNS fluxes to the $C_4$ tadpole is positive ($N_3 >0$)
and this occurs if and only if ${\rm Im \,} S_0 > 0$ \cite{kst}. The interpretation is that to cancel the tadpole due to $\bar F_3$
and $\bar H_3$ it is mandatory to include O3-planes, whereas D3-branes can be added only as long as $N_3$ stays positive.
This is also true for no-scale Minkowski vacua in which supersymmetry is broken by the F-term of the K\"ahler field.
Turning on non-geometric fluxes makes it possible to stabilize all moduli at a supersymmetric ${\rm AdS}_4$ minimum. At the same time,
the $Q$-fluxes induce a $C_8$ tadpole of magnitude $N_7$ that can be cancelled by adding O7-planes and/or D7-branes.
We find in general that the vevs ${\rm Im \,} S_0$ and ${\rm Im \,} T_0$, that must be positive, are correlated to the tadpoles $(N_3, N_7)$.
According to the $Q$-subalgebra there are several possibilities for the type of sources that have to be included.
For example, the models considered in \cite{stw2}, having $N_3 > 0$ and $N_7 =0$, can be realized only with the fluxes of
the compact $\mathfrak{su(2)^2}$.
\medskip
\noindent
For the $Q$-fluxes of the nilalgebra, and the semidirect sum $\mathfrak{su(2) \oplus u(1)^3}$, there is a relation $N_3 = -{\epsilon }_2 N_7$,
with ${\epsilon }_2 > 0$. Only in the latter case is it allowed to have $N_3=N_7=0$, so that the sources can be avoided altogether.
For the fluxes of $\mathfrak{su(2) + u(1)^3}$ it turns out that orientifold planes are unnecessary to cancel tadpoles, but both D3 and D7-branes
must be added ($N_3 < 0$, $N_7 > 0$).
\medskip
\noindent
The fluxes of the semisimple subalgebras are more flexible. In particular, it can happen that
one flux-tadpole vanishes while the other must have a definite sign. Moreover, the sign is opposite for the compact and non-compact
cases. For instance, when $N_7=0$, $N_3 > 0$ and O3-planes are obligatory for the $\mathfrak{su(2)^2}$ fluxes, while for $\mathfrak{so(3,1)}$
$N_3 < 0$ and D3-branes are required.
\medskip
\noindent
The magnitudes of the vevs are also proportional to the net tadpoles. This then implies that the string coupling typically decreases
when $N_3$ and/or $N_7$ increase. However, the number of D-branes cannot be increased arbitrarily without taking into account their
backreaction.
\bigskip
\noindent
3. Consistency of the vacua can in fact be related to the full 12-dimensional algebra in which
the $\bar H$ and $Q$-fluxes are the structure constants. The reason is that the conditions ${\rm Im \,} S_0 > 0$ and ${\rm Im \,} T_0 > 0$ also impose restrictions on the
signs of the NSNS parameters $({\epsilon }_1, {\epsilon }_2)$. For instance, in section \ref{sss:compactres} we have seen that for $Q$-fluxes of the
compact $\mathfrak{so(4) \sim su(2)^2}$, the solutions with ${\epsilon }_1=0$ require ${\epsilon }_2 < 0$. This in turn implies, as explained in section
\ref{subsubso4}, that the full gauge algebra is $\mathfrak{so(4) + iso(3)}$. Another simple example is the model based on the
$\mathfrak{su(2)+ u(1)^3}$ $Q$-subalgebra. The vacua of section \ref{sss:directres} with ${\epsilon }_2=0$ require ${\epsilon }_1 < 0$, and it can then be shown
that the full gauge algebra is $\mathfrak{so(4) + u(1)^6}$.
A more detailed study of the 12-dimensional algebras is left for future work \cite{guarino}.
\bigskip
\noindent
4. We defer to section \ref{sub:fam} a more thorough discussion of the landscape of values attained by the string coupling $g_s$
and the cosmological constant $V_0$, for the fluxes of the compact $\mathfrak{su(2)^2}$ $Q$-subalgebra. The situation for $\mathfrak{so(3,1)}$
is similar and can be analyzed using the results of section \ref{sss:noncompactres}. The model based on the direct product $\mathfrak{su(2)+u(1)^3}$
is different because both $N_3$ and $N_7$ must be non-zero, but it can still be shown that there exist vacua with small $g_s$ and $V_0$.
The models built using the nilpotent and semidirect $Q$-subalgebras have been studied in their T-dual IIA formulation in refs. \cite{DeWolfe, cfi},
where it was found that there are infinite families of vacua within the perturbative region.
\bigskip
\noindent
5. A peculiar result is the appearance of multiple vacua for certain combinations of fluxes. These events occur only in models based
on the semisimple $Q$-subalgebras. They can have $N_3 N_7=0$ or $N_3 N_7 \not=0$, but in the former case both NSNS parameters
$({\epsilon }_1, {\epsilon }_2)$ must be non-zero. Reaching small string coupling and cosmological constant typically requires that $N_3$ and/or $N_7$
be sufficiently large.
\bigskip
\noindent
6. To cancel RR tadpoles it might be necessary to add stacks of D3 and/or D7-branes. These additional D-branes could also
generate a charged chiral spectrum but more generally a different sector of D-branes will serve this purpose. In any case,
the D-branes that can be included are constrained by cancellation of Freed-Witten anomalies \cite{cfi, vz2}.
In absence of non-geometric fluxes the condition amounts to the vanishing of $\bar H_3$ when integrated over any internal 3-cycle wrapped by
the D-branes. For unmagnetized D7-branes in ${\rm T}^6/{\mathbb Z}_2 \times {\mathbb Z}_2$, with $\bar H_3$ given in (\ref{H3expan}), it is easy to see that the condition is met,
whereas for D3-branes it is trivial. When $Q$-fluxes are switched on the modified condition \cite{vz2} is still satisfied basically because the 3-form
$Q{\mathcal J}$, defined in (\ref{QJexpan}), can be expanded in the same basis as $\bar H_3$.
\medskip
\noindent
D3-branes and unmagnetized D7-branes in ${\rm T}^6/{\mathbb Z}_2 \times {\mathbb Z}_2$ do not give rise to charged chiral matter. Therefore the models
will not have $U(1)$ chiral anomalies. This is consistent with the fact that the axions ${\rm Re \,} S$ and ${\rm Re \,} T$ are generically stabilized
by the fluxes; having acquired a mass, they could not participate in the Green-Schwarz mechanism to cancel the chiral
anomalies\footnote{We thank L. Ib\'a\~nez for discussions on this point.}.
\medskip
\noindent
To construct a more phenomenologically viable scenario one could introduce magnetized D9-branes as in the ${\rm T}^6/{\mathbb Z}_2 \times {\mathbb Z}_2$
type IIB orientifolds with NSNS and RR fluxes that were considered some time ago \cite{magnetized}.
Now, care has to be taken because magnetized D9-branes suffer from Freed-Witten anomalies. They are actually forbidden in the absence
of non-geometric fluxes when $\bar H_3 \not=0$.
\medskip
\noindent
The effect of the $Q$-fluxes can be studied as explained in \cite{vz2}. Cancellation of Freed-Witten anomalies
translates into invariance of the superpotential under shifts \mbox{$S \to S + q_s \nu$} and
\mbox{$T \to T + q_t \nu$}, where the real charges $(q_s,q_t)$ depend on the $U(1)$ gauged by the D-brane.
Applying this prescription we conclude that in our setup with isotropic fluxes magnetized D9-branes could be introduced only in
models based on the nilpotent and semidirect sum $\mathfrak{su(2) \oplus u(1)^3}$ \mbox{$Q$-subalgebras}. The reason is that only in these
cases can the flux-induced polynomials $P_2(U)$ and $P_3(U)$ be chosen parallel, so that $W$ can remain invariant under the
axionic shifts. Equivalently, only in these cases the axions ${\rm Re \,} S$ and ${\rm Re \,} T$ are not fully determined and the residual massless
linear combination can give mass to an anomalous $U(1)$. For other \mbox{$Q$-subalgebras} the polynomials
$P_2(U)$ and $P_3(U)$ are linearly independent and both axions are completely stabilized.
\medskip
\noindent
It would be interesting to study the consistency conditions on magnetized D9-branes in models with non-isotropic fluxes. In principle
there could exist configurations of fluxes such that the general superpotential (\ref{fullW}-\ref{p3gen}) is invariant under axionic
shifts of $S$ and the K\"ahler moduli $T_I$.
\subsection{Families of modular invariant vacua}
\label{sub:fam}
To generate specific families of vacua we first choose the $Q$-subalgebra and then select the parameters in $\Gamma$.
In general $\Gamma$ can be chosen so that the non-geometric fluxes are even integers. The NSNS fluxes turn out to be even integers
by picking $({\epsilon }_1, {\epsilon }_2)$ appropriately. One can also start from given non-geometric and NSNS
even integer fluxes and deduce the corresponding $\Gamma$ and $({\epsilon }_1, {\epsilon }_2)$. Similar remarks apply to the RR fluxes.
We will illustrate the procedure for the compact $\mathfrak{su(2)^2}$.
If one of the parameters vanishes, say $\gamma=0$, it can be shown from (\ref{LimC}) that the ratios $\delta/\alpha$ and $\beta/\alpha$ are rational numbers
(recall that $|\Gamma|\not=0$ so that $\alpha, \delta \not=0$). It then follows that by a modular transformation, c.f. (\ref{Gmodt}), we can go to
a canonical gauge in which also $\beta=0$.
The canonical diagonal gauge $\gamma=\beta=0$ is completely generic when ${\epsilon }_2=0$ (${\epsilon }_1 \not=0$). In this case we find that $\beta/\alpha$ and $\gamma/\delta$
are rational because they are given respectively by quotients of NSNS and non-geometric fluxes. Therefore, $\beta$ and $\gamma$ can be gauged away
by modular transformations. If instead ${\epsilon }_1=0$, but ${\epsilon }_2\not=0$, we can take $\alpha=\delta=0$.
When ${\epsilon }_1 {\epsilon }_2 \not=0$ we can still use the canonical gauge but it will not give the most general results that are obtained
simply by considering $\alpha, \beta, \gamma, \delta \not= 0$.
\subsubsection{Canonical families for $\mathfrak{su(2)^2}$ fluxes}
\label{ss:canonical}
For each subalgebra we can obtain families of vacua starting from the canonical gauge defined by $\gamma=\beta=0$.
In the $\mathfrak{su(2)^2}$ case only the non-geometric fluxes $\tilde c_1$ and $\tilde c_2$ are
different from zero and can be written as
\begin{equation}
\tilde c_1 = -2m \quad ; \quad \tilde c_2 = 2n \quad ; \quad m, \, n \, \in {\mathbb Z} \ .
\label{nongeocan}
\end{equation}
{}From (\ref{LimC}) we easily find $\alpha/\delta=n/m$, $\delta^3=2m^2/n$, so that
$|\Gamma|^3=4 nm$. The non-zero NSNS and RR fluxes are easily found to be
\begin{eqnarray}
b_0 & = & - \frac{2m^2}{n}{\epsilon }_2 \quad ; \quad b_3 = \frac{2n^2}{m}{\epsilon }_1 \quad ; \quad
a_0 = \frac{2m^2}{n}({\epsilon }_1 \xi_3 + {\epsilon }_2 \xi_s)
\ , \label{abcan} \\
a_1 & = & -2m(\xi_t + \xi_7) \quad ; \quad
a_2 = 2n(\xi_t - \xi_7) \quad ; \quad a_3 = -\frac{2n^2}{m}({\epsilon }_1\xi_s - {\epsilon }_2 \xi_3) \ .
\nonumber
\end{eqnarray}
Since the $b$'s and $a$'s are (even) integers, it is obvious that $({\epsilon }_1, {\epsilon }_2)$ and $(\xi_3, \xi_7, \xi_s, \xi_t)$ are all
rational numbers.
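As a quick numerical sanity check, the canonical-gauge relations quoted above ($\alpha/\delta=n/m$, $\delta^3=2m^2/n$, and hence $|\Gamma|^3=4nm$, since $|\Gamma|=\alpha\delta$ when $\beta=\gamma=0$) can be verified for sample integers; the values of $m$, $n$ below are illustrative and not taken from any particular model:

```python
import math

# Canonical gauge (beta = gamma = 0): delta^3 = 2 m^2/n and alpha/delta = n/m,
# so |Gamma| = alpha*delta and |Gamma|^3 = 4 n m should follow automatically.
def canonical_gauge(m, n):
    d3 = 2 * m**2 / n                              # delta^3 (may be negative)
    delta = math.copysign(abs(d3) ** (1 / 3), d3)  # real cube root with sign
    alpha = (n / m) * delta                        # alpha/delta = n/m
    return alpha, delta

m, n = -1, -5                                      # sample fluxes (m, n < 0)
alpha, delta = canonical_gauge(m, n)
assert abs((alpha * delta) ** 3 - 4 * n * m) < 1e-9   # |Gamma|^3 = 4 n m
assert abs(alpha ** 3 - 2 * n**2 / m) < 1e-9          # 2 n^2/m prefactor of b_3, a_3
```

The second assertion matches the $2n^2/m$ prefactor appearing in $b_3$ and $a_3$ of (\ref{abcan}).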
The moduli vevs depend on $(\xi_3, \xi_7)$ and $({\epsilon }_1, {\epsilon }_2)$. For concreteness, and to compare with the results of \cite{stw2},
we focus on the case $\xi_7=0$. Other cases can be studied using the results of section \ref{sss:compactres}.
When $\xi_7=0$ the RR fluxes $a_1$ and $a_2$ are spurious; they can be eliminated by setting $\xi_t=0$,
i.e. by a shift in ${\rm Re \,} T$.
To continue we have to distinguish whether one of the NSNS parameters ${\epsilon }_1$ or ${\epsilon }_2$ is zero. Recall that in this case
the flux-induced polynomial $P_2$ does not have complex roots.
\begin{trivlist}
\item[$\bullet$] \underline{${\epsilon }_1 {\epsilon }_2=0$}
Let us consider ${\epsilon }_2=0$. Then, also $a_3$, or $\xi_s$, is irrelevant and can be set to zero by a shift in ${\rm Re \,} S$.
The important physical parameters are ${\epsilon }_1$ and $\xi_3$; they can be deduced from $b_3$ and $a_0$.
Notice also that at this point $N_3=a_0 b_3$.
Using (\ref{solrzero}) we obtain the values of the cosmological constant and the string coupling
\begin{equation}
V_0 = \frac{48 \, m^6 b_3^3}{n^3 N_3^2} \qquad ; \qquad g_s = \frac{8\, m^3 b_3^2}{n^3 N_3} \ .
\label{candata}
\end{equation}
Consistency requires ${\epsilon }_1 < 0$ and $\xi_3 > 0$, or equivalently $V_0 < 0 $ and $g_s > 0$.
For the purpose of counting distinct vacua we can safely assume $b_3 > 0$ and then $m, n < 0$.
As noticed in \cite{stw2}, the important outcome is that $g_s$ and $V_0$ can be made arbitrarily small by keeping
$b_3$ and $m$ fixed while letting $n \to \infty$.
In our approach it is also easy to see that $(V_0, g_s)$ always take values of the form (\ref{candata}) whenever $P_2$ has only real roots.
This follows because all vacua are related by modular transformations plus axionic shifts.
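The scaling claim can be illustrated numerically: in (\ref{candata}) both $V_0$ and $g_s$ fall off as $1/n^3$ when $b_3$, $m$ and $N_3$ are held fixed. The sketch below uses arbitrary sample fluxes, treating the tadpole $N_3$ as a fixed input (an assumption made purely for illustration):

```python
# Scaling of eq. (candata): V_0 = 48 m^6 b_3^3/(n^3 N_3^2), g_s = 8 m^3 b_3^2/(n^3 N_3).
# With b_3, m, N_3 fixed, both shrink like 1/n^3 as |n| grows.
def vacuum_data(m, n, b3, N3):
    V0 = 48 * m**6 * b3**3 / (n**3 * N3**2)
    gs = 8 * m**3 * b3**2 / (n**3 * N3)
    return V0, gs

m, b3, N3 = -1, 2, 8          # sample values; b_3 > 0 and m, n < 0 as in the text
for n in (-2, -20, -200):
    V0, gs = vacuum_data(m, n, b3, N3)
    assert V0 < 0 and gs > 0  # AdS vacuum with positive string coupling

V0_a, gs_a = vacuum_data(m, -2, b3, N3)
V0_b, gs_b = vacuum_data(m, -200, b3, N3)
assert abs(gs_b) < abs(gs_a) and abs(V0_b) < abs(V0_a)  # both become arbitrarily small
```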
However, if as in \cite{stw2} we want to count the vacua with fluxes bounded by an upper limit $L$, it does not suffice to just consider
the canonical gauge. The reason is that by performing modular transformations and axionic shifts we can reach larger effective
values of $b_3$ that seem to violate the tadpole condition. Rather than giving an elaborate argument, we will just provide a simple example.
We can go to a non-canonical gauge with $\gamma=0$ but $\beta\not=0$ and also take $\xi_t=0$ but $\xi_s \not=0$. With these choices it is
straightforward to show that $N_3=a_0 b_3 - a_3 b_0$, which would allow taking e.g. $b_3=N_3$, a value that is forbidden when $b_0=0$ ($\beta=0$)
or $a_3=0$ ($\xi_s=0$), because $a_0$ must be even. To do detailed vacua statistics it is necessary to use a generic gauge and axionic shifts.
\item[$\bullet$] \underline{${\epsilon }_1 {\epsilon }_2\not=0$}
As in section \ref{su2nonzero} we set ${\epsilon }_2=\rho {\epsilon }_1$. In the canonical gauge the parameter $\rho$ is a rational number that we assume to
be given. We choose to vary the NSNS flux $b_3$ that determines
\begin{equation}
{\epsilon }_1= \frac{m b_3}{2n^2} \quad ; \quad b_0 = -\frac{\rho \, m^3 b_3}{n^3} \ ,
\label{bzero}
\end{equation}
where $m, n$ are the integers coming from the non-geometric fluxes. The vacuum data have been found to be
\begin{equation}
V_0 = \frac{4 F_V \, n m }{{\epsilon }_1 \xi_3^2} \qquad ; \qquad g_s = \frac{1}{F_g \xi_3} \ ,
\label{candatagen}
\end{equation}
where we used $|\Gamma|^3=4nm$. The numerical factors $F_V$ and $F_g$ depend on $\rho$. For instance, for $\rho=0$, $F_V=6$ and $F_g=\msm{1/8}$.
Other examples are given in section \ref{su2nonzero}. We remark that for $\rho$ in a particular range there can be multiple vacua, meaning
that for some $\rho$ the above numerical factors might take different values (e.g. table \ref{solezero}).
It is most convenient to extract $\xi_3$ from the tadpole relation $N_3=4mn{\epsilon }_1^2(1+\rho^2)\xi_3$, which in terms of the integer fluxes
reads $N_3=a_0 b_3 - a_3 b_0$. Combining all the information we readily find
\begin{equation}
V_0 = \frac{8 F_V \, m^6 b_3^3 (1+\rho^2)^2}{n^3 N_3^2} \qquad ; \qquad g_s = \frac{m^3 b_3^2 (1+\rho^2)}{F_g \, n^3 N_3} \ .
\label{candata2}
\end{equation}
Unlike the case when $\rho=0$, in general we cannot keep $m$ and $b_3$ fixed while letting $n \to \infty$. The reason is that the NSNS flux
$b_0$ in (\ref{bzero}) must be an integer.
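The algebra leading from (\ref{candatagen}) to (\ref{candata2}), i.e. eliminating ${\epsilon }_1$ and $\xi_3$ via (\ref{bzero}) and the tadpole relation, can be cross-checked numerically. The flux values and the factors $F_V$, $F_g$ below are sample inputs only:

```python
# Cross-check: substituting eps_1 = m b_3/(2 n^2) and
# xi_3 from N_3 = 4 m n eps_1^2 (1 + rho^2) xi_3 into eq. (candatagen)
# must reproduce the closed forms of eq. (candata2).
m, n, b3, rho, N3 = -1, -3, 2, -1.0, 40   # sample fluxes
FV, Fg = 6.0, 0.125                       # sample numerical factors (rho = 0 values)

eps1 = m * b3 / (2 * n**2)
xi3 = N3 / (4 * m * n * eps1**2 * (1 + rho**2))

V0 = 4 * FV * n * m / (eps1 * xi3**2)     # eq. (candatagen)
gs = 1 / (Fg * xi3)

V0_closed = 8 * FV * m**6 * b3**3 * (1 + rho**2)**2 / (n**3 * N3**2)   # eq. (candata2)
gs_closed = m**3 * b3**2 * (1 + rho**2) / (Fg * n**3 * N3)

assert abs(V0 - V0_closed) < 1e-9 * abs(V0_closed)
assert abs(gs - gs_closed) < 1e-9 * abs(gs_closed)
```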
The main conclusion is that it is not always possible to obtain a small string coupling and cosmological constant. In fact, when $\rho\not=0$,
there are no vacua with $g_s < 1$ unless the tadpole $N_3$ is sufficiently large. To prove this, notice first that the string coupling
can be rewritten as $g_s=\msm{- b_3 b_0 (1+\rho^2)/(F_s \rho N_3)}$. The most favorable situation occurs when $\rho=-1$ for which $F_s=0.238$.
The smallest allowed NSNS fluxes are $b_0=b_3=2$ (compatible with $\rho=-1$). Hence, the minimum value of the coupling is
$g_s^{min}=\msm{8/(F_s N_3)}$ and $g_s^{min} < 1$ would require $N_3 > 33$. The situation is worse for values of $\rho$ such that multiple vacua can
appear. The problem is that since such $\rho$'s are rational, $b_3$ must be rather large for $b_0$ to be an integer.
Going to a more general gauge does not change the conclusion.
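The bound just quoted is simple arithmetic and can be spelled out explicitly; only the numbers stated in the text ($\rho=-1$, $F_s=0.238$, $b_0=b_3=2$) enter:

```python
# g_s = -b_3 b_0 (1+rho^2) / (F_s rho N_3); at rho = -1 with b_0 = b_3 = 2
# this is g_s = 8/(F_s N_3), so g_s < 1 first happens for N_3 > 8/0.238 ~ 33.6.
Fs, rho, b0, b3 = 0.238, -1.0, 2, 2

def g_s(N3):
    return -b3 * b0 * (1 + rho**2) / (Fs * rho * N3)

threshold = 8 / Fs                # ~ 33.6, hence the condition N_3 > 33
assert g_s(33) > 1 and g_s(34) < 1
assert 33 < threshold < 34
```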
We have just provided a quantitative, almost analytic, explanation of why there are no perturbative vacua when the flux polynomial
$P_2$ has complex roots and $N_3$ is not large enough. This observation was first made in \cite{stw2} based on a purely numerical analysis.
\end{trivlist}
\section{Final remarks}
\label{sec:end}
In this paper we have investigated supersymmetric flux vacua in a type IIB orientifold with RR, NSNS and non-geometric $Q$-fluxes turned on.
We enlarged the related analysis of \cite{stw2} by considering the most general fluxes solving the Jacobi identities, and by including variable
numbers of O3/D3 and O7/D7 sources to cancel the flux-induced RR tadpoles.
Our approach is based on the classification of the subalgebras satisfied by the non-geometric fluxes. A convenient parametrization
of the $Q$-fluxes leads to an auxiliary complex structure that turns out to be invariant under modular transformations.
Writing the superpotential in terms of this invariant field simplifies solving the F-flat conditions and enables us to obtain
analytic expressions for the moduli vevs. We have found families of supersymmetric ${\rm AdS}_4$ vacua in all models defined by the inequivalent
$Q$-subalgebras. General properties of the solutions were discussed in section \ref{sec:lands}.
The vacua typically exist in all cases, provided that arbitrary values of the flux-induced RR tadpoles are allowed.
In type IIB orientifolds with only RR and NSNS fluxes there is a non-trivial induced tadpole that must be cancelled by
O3-planes or wrapped D7-branes. But including non-geometric fluxes can require other types of sources.
For instance, similar to well-understood ${\rm AdS}_4$ models in type IIA, the induced flux-tadpoles might vanish, implying that sources can be avoided.
There are also examples in which sources of positive RR charge are sufficient to cancel the tadpoles.
As one might expect, these latter exotic vacua occur in models built using $Q$-fluxes satisfying the non-compact $\mathfrak{so(3,1)}$ subalgebra.
Such solutions might be ruled out once a deeper understanding of non-geometric fluxes has been developed.
We discussed a simplified set of fluxes but our methods could be used to study other configurations. The starting point
would be the classification of the $Q$-subalgebras consistent with the underlying symmetries.
Although our main goal was to explore supersymmetric vacua with moduli stabilized, our results could have further applications.
We have succeeded in connecting properties of the vacua to the underlying gauge algebra, and this can help in extending
the description of non-geometric fluxes beyond the effective action limit. At present one of the most challenging problems in need of new
insights is precisely to formulate string theory on general backgrounds at the microscopic level.
\vspace*{1cm}
\noindent
{\bf \large Acknowledgments}
We are grateful to P.~C\'amara, B.~de Carlos, L.~Ib\'a\~nez, R.~Minasian, G.~Tasinato, S.~Theisen and G.~Weatherill for useful comments.
A.F. thanks the Max-Planck-Institut f\"ur Gravitationsphysik, as well as the Instituto de F\'{\i}sica Te\'orica
UAM/CSIC, for hospitality and support at several stages of this paper, and CDCH-UCV for a research grant No. PI-03-007127-2008.
A.G. acknowledges the financial support of a FPI (MEC) grant reference BES-2005-8412.
This work has been partially supported by CICYT, Spain, under contract FPA 2007-60252,
the Comunidad de Madrid through Proyecto HEPHACOS S-0505/ESP-0346, and by the European Union
through the Marie Curie Research and Training Networks {\it Quest for Unification} (MRTN-CT-2004-503369)
and {\it UniverseNet} (MRTN-CT-2006-035863).
\newpage
\section*{Appendix: Parametrized RR fluxes}
\label{appA}
\addcontentsline{toc}{section}{\hspace{13pt} Appendix: Parametrized RR fluxes}
\setcounter{equation}{0}
\renewcommand{\theequation}{A.\arabic{equation}}
In this appendix we give the explicit expressions for the original RR fluxes $a_A$ in terms of the
axionic shifts $(\xi_s, \xi_t)$ and the tadpole parameters $(\xi_3, \xi_7)$ or $(\lambda_2, \lambda_3)$,
depending on the $Q$-subalgebra. For the semidirect sum $\mathfrak{su(2)\oplus u(1)^3}$ and the nilpotent
algebra there is another auxiliary variable $\lambda_1$, as explained in section \ref{ss:rr}.
In all cases there is a non-singular rotation matrix from the $a_A$'s to the new variables.
In principle the $\xi$'s and $\lambda$'s are just real constants but
the resulting $a_A$ fluxes must be integers. The exact nature of these parameters can be elucidated starting with the
non-geometric fluxes of each subalgebra. For example, following the discussion at the end of section \ref{subsubso4}, for $\mathfrak{su(2)^2}$
when ${\epsilon }_1{\epsilon }_2=0$ it transpires that $(\xi_3, \xi_7, \xi_s, \xi_t) \in \mathbb{Q}$.
There is a universal structure in the RR fluxes that is worth noticing. For all $Q$-subalgebras the dependence on the axionic
shift parameters $(\xi_s, \xi_t)$ is of the form
\begin{eqnarray}
a_0 &=& - b_0 \xi_s + 3 c_0 \xi_t \, + \cdots \nonumber \\
a_1 &=& - b_1 \xi_s - (2c_1 - \tilde c_1) \xi_t \, + \cdots \label{uniRR} \\
a_2 &=& - b_2 \xi_s - (2c_2 - \tilde c_2) \xi_t \, + \cdots \nonumber\\
a_3 &=& - b_3 \xi_s + 3 c_3 \xi_t \, + \cdots \nonumber
\end{eqnarray}
where $\cdots$ stands for extra terms depending on the tadpole parameters.
\begin{trivlist}
\item[{\bf A.1}] \underline{Compact $\mathfrak{su(2)^2}$ background}.
\begin{eqnarray}
\! a_{0} \!\!\! &=& \!\!\! \delta^3(\epsilon_{1} \xi_3 + \epsilon_{2}\xi_s) + \beta^3({\epsilon }_1 \xi_s - {\epsilon }_2 \xi_3)
+ 3 \delta\beta^2(\xi_t - \xi_7) + 3 \beta\delta^2(\xi_t+\xi_7) \nonumber \\[2mm]
\! a_{1} \!\!\! &=&\!\!\! -\gamma\delta^2(\epsilon_{1} \xi_3 + \epsilon_{2} \xi_s)
- \alpha \beta^2 ({\epsilon }_1 \xi_s - {\epsilon }_2 \xi_3)
- \beta(\beta\gamma+2\alpha\delta)(\xi_t - \xi_7) - \delta(\alpha\delta+2\beta\gamma)(\xi_t+\xi_7) \nonumber \\[2mm]
\! a_{2}\!\!\! &=&\!\!\! \delta\gamma^2(\epsilon_{1} \xi_3 + \epsilon_{2} \xi_s)
+ \beta \alpha^2 ({\epsilon }_1 \xi_s - {\epsilon }_2 \xi_3)
+ \alpha(\alpha\delta + 2\beta\gamma)(\xi_t - \xi_7) + \gamma(\beta\gamma +2\alpha\delta)(\xi_t+\xi_7) \nonumber \\[2mm]
\! a_{3}\!\!\! &=& \!\!\! -\gamma^3(\epsilon_{1} \xi_3 + \epsilon_{2}\xi_s) - \alpha^3({\epsilon }_1 \xi_s - {\epsilon }_2 \xi_3)
- 3 \gamma\alpha^2(\xi_t - \xi_7) - 3 \alpha\gamma^2(\xi_t+\xi_7) \nonumber
\end{eqnarray}
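Setting $\beta=\gamma=0$ together with $\delta^3=2m^2/n$ and $\alpha=(n/m)\delta$, the general expressions above should collapse to the canonical fluxes (\ref{abcan}) of section \ref{ss:canonical}. This can be checked numerically; all parameter values below are arbitrary samples:

```python
import math

# In the canonical gauge (beta = gamma = 0) the general su(2)^2 RR fluxes of A.1
# must reduce to eq. (abcan): a_0 = (2m^2/n)(e1 x3 + e2 xs), a_1 = -2m(xt + x7),
# a_2 = 2n(xt - x7), a_3 = -(2n^2/m)(e1 xs - e2 x3).
m, n = -1, -5
e1, e2, x3, x7, xs, xt = 0.3, -0.7, 1.1, 0.4, -0.2, 0.9   # sample parameters

d3 = 2 * m**2 / n
delta = math.copysign(abs(d3) ** (1 / 3), d3)
alpha, beta, gamma = (n / m) * delta, 0.0, 0.0

p, q = e1 * x3 + e2 * xs, e1 * xs - e2 * x3
a0 = delta**3 * p + beta**3 * q + 3 * delta * beta**2 * (xt - x7) \
     + 3 * beta * delta**2 * (xt + x7)
a1 = -gamma * delta**2 * p - alpha * beta**2 * q \
     - beta * (beta * gamma + 2 * alpha * delta) * (xt - x7) \
     - delta * (alpha * delta + 2 * beta * gamma) * (xt + x7)
a2 = delta * gamma**2 * p + beta * alpha**2 * q \
     + alpha * (alpha * delta + 2 * beta * gamma) * (xt - x7) \
     + gamma * (beta * gamma + 2 * alpha * delta) * (xt + x7)
a3 = -gamma**3 * p - alpha**3 * q - 3 * gamma * alpha**2 * (xt - x7) \
     - 3 * alpha * gamma**2 * (xt + x7)

assert abs(a0 - (2 * m**2 / n) * p) < 1e-9        # a_0 of (abcan)
assert abs(a1 - (-2 * m) * (xt + x7)) < 1e-9      # a_1 of (abcan)
assert abs(a2 - (2 * n) * (xt - x7)) < 1e-9       # a_2 of (abcan)
assert abs(a3 - (-(2 * n**2 / m)) * q) < 1e-9     # a_3 of (abcan)
```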
\item[{\bf A.2}] \underline{Non-compact $\mathfrak{so(3,1)}$ background}.
\begin{eqnarray}
\! a_{0} \!\! &=& \!\! \delta(\delta^2-3\beta^2)(\epsilon_{1} \xi_3 + \epsilon_{2}\xi_s)
+ \beta(\beta^2-3\delta^2)({\epsilon }_1 \xi_s - {\epsilon }_2 \xi_3)
- 3(\beta^2 + \delta^2)(\beta\xi_t - \delta\xi_7)\nonumber \\[2mm]
\! a_{1} \!\! &=&\!\! (\gamma\beta^2+ 2\alpha\beta\delta- \gamma\delta^2)(\epsilon_{1} \xi_3 + \epsilon_{2}\xi_s)
+ (\alpha\delta^2 + 2\beta\gamma\delta - \alpha\beta^2)(\epsilon_{1} \xi_s - \epsilon_{2}\xi_3)
\nonumber \\
&{}& \ + (\beta^2 + \delta^2)(\alpha\xi_t - \gamma\xi_7) + 2 (\alpha\beta + \gamma\delta)(\beta\xi_t - \delta\xi_7)
\nonumber \\[2mm]
\! a_{2}\!\! &=&\!\! (\delta\gamma^2- 2\alpha\beta\gamma- \delta\alpha^2)(\epsilon_{1} \xi_3 + \epsilon_{2}\xi_s)
+ (\beta\alpha^2 - 2\alpha\gamma\delta - \beta\gamma^2)(\epsilon_{1} \xi_s - \epsilon_{2}\xi_3)
\nonumber \\
&{}& \ - 2 (\alpha\beta + \gamma\delta)(\alpha\xi_t - \gamma\xi_7) - (\alpha^2 + \gamma^2)(\beta\xi_t - \delta\xi_7)
\nonumber \\[2mm]
\! a_{3}\!\! &=& \!\!- \gamma(\gamma^2-3\alpha^2)(\epsilon_{1} \xi_3 + \epsilon_{2}\xi_s)
- \alpha(\alpha^2-3\gamma^2)({\epsilon }_1 \xi_s - {\epsilon }_2 \xi_3)
+ 3(\alpha^2 + \gamma^2)(\alpha\xi_t - \gamma\xi_7)\nonumber
\end{eqnarray}
\item[{\bf A.3}] \underline{Direct sum $\mathfrak{su(2)+ u(1)^3}$ background}.
\begin{eqnarray}
a_{0}\!\! &=&\!\! \delta^3(\epsilon_{1} \xi_3 + \epsilon_{2}\xi_s) + \beta^3({\epsilon }_1 \xi_s - {\epsilon }_2 \xi_3)
+ 3 \beta\delta^2 \xi_t - 3\delta\beta^2 \xi_7 \nonumber \\[2mm]
a_{1}\!\! &=&\!\! -\gamma\delta^2(\epsilon_{1} \xi_3 + \epsilon_{2} \xi_s)
- \alpha \beta^2 ({\epsilon }_1 \xi_s - {\epsilon }_2 \xi_3) - \delta(\alpha\delta+2\beta\gamma)\xi_t
+ \beta(\beta\gamma+2\alpha\delta)\xi_7 \nonumber \\[2mm]
a_{2}\!\! &=& \!\! \delta\gamma^2(\epsilon_{1} \xi_3 + \epsilon_{2} \xi_s)
+ \beta \alpha^2 ({\epsilon }_1 \xi_s - {\epsilon }_2 \xi_3)
+ \gamma(\beta\gamma +2\alpha\delta)\xi_t - \alpha(\alpha\delta + 2\beta\gamma)\xi_7 \nonumber \\[2mm]
a_{3}\!\! &=&\!\! -\gamma^3(\epsilon_{1} \xi_3 + \epsilon_{2}\xi_s) - \alpha^3({\epsilon }_1 \xi_s - {\epsilon }_2 \xi_3)
- 3\alpha\gamma^2\xi_t + 3 \gamma\alpha^2\xi_7 \nonumber
\end{eqnarray}
\item[{\bf A.4}] \underline{Semidirect sum $\mathfrak{su(2) \oplus u(1)^3}$ background}.
\begin{eqnarray}
a_{0}\!\! &=&\!\! \delta^3(\epsilon_2 \xi_s + 3\xi_t) + \beta\delta^2(\epsilon_1 \xi_s - 3\xi_t + 3\lambda_1)
+ 3 \delta\beta^2 \lambda_2 + \beta^3 \lambda_3 \nonumber \\[2mm]
a_{1}\!\! &=&\!\! -\gamma\delta^2(\epsilon_2 \xi_s + 3\xi_t)
- \msm{\frac13} \delta(\alpha\delta + 2\beta\gamma)(\epsilon_1 \xi_s - 3\xi_t + 3\lambda_1)
- \beta(\beta\gamma+2\alpha\delta)\lambda_2 - \alpha \beta^2\lambda_3 \nonumber \\[2mm]
a_{2}\!\! &=& \!\! \delta\gamma^2(\epsilon_2 \xi_s + 3\xi_t)
+ \msm{\frac13} \gamma(\beta\gamma + 2\alpha\delta)(\epsilon_1 \xi_s - 3\xi_t + 3\lambda_1)
+ \alpha(\alpha\delta+2\beta\gamma)\lambda_2 + \beta \alpha^2\lambda_3 \nonumber \\[2mm]
a_{3}\!\! &=&\!\! -\gamma^3(\epsilon_2 \xi_s + 3\xi_t) - \alpha\gamma^2(\epsilon_1 \xi_s - 3\xi_t + 3\lambda_1)
- 3 \gamma\alpha^2 \lambda_2 - \alpha^3 \lambda_3 \nonumber
\end{eqnarray}
\item[{\bf A.5}] \underline{Nilpotent $\mathfrak{nil}$ background}.
\begin{eqnarray}
a_{0}\!\! &=&\!\! \delta^3(\epsilon_2 \xi_s + 3\xi_t) + \gamma\delta^2(\epsilon_1 \xi_s + 3\lambda_1)
+ 3 \delta\gamma^2 \lambda_2 + \gamma^3 \lambda_3 \nonumber \\[2mm]
a_{1}\!\! &=&\!\! -\gamma\delta^2(\epsilon_2 \xi_s + 3\xi_t)
+ \msm{\frac13} \delta(\delta^2-2\gamma^2)(\epsilon_1 \xi_s + 3\lambda_1)
- \gamma(\gamma^2-2\delta^2)\lambda_2 + \delta \gamma^2\lambda_3 \nonumber \\[2mm]
a_{2}\!\! &=& \!\! \delta\gamma^2(\epsilon_2 \xi_s + 3\xi_t)
+ \msm{\frac13} \gamma(\gamma^2 - 2\delta^2)(\epsilon_1 \xi_s + 3\lambda_1)
+ \delta(\delta^2 -2\gamma^2)\lambda_2 + \gamma \delta^2\lambda_3 \nonumber \\[2mm]
a_{3}\!\! &=&\!\! -\gamma^3(\epsilon_2 \xi_s + 3\xi_t) + \delta\gamma^2(\epsilon_1 \xi_s + 3\lambda_1)
- 3 \gamma\delta^2 \lambda_2 + \delta^3 \lambda_3 \nonumber
\end{eqnarray}
\end{trivlist}
\newpage
{\small
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 6,914
|
Q: How do i validate this requirements using RegEx How do i validate this requirements using regular expression in Javascript ?
For Telephone Number the pattern should be:
3 digits followed by a dash (–) which is also followed by 4 digits. As an example 123-1234 is a valid phone number.
For PO Box the pattern should be:
2 characters from alphabet (A to Z) case sensitive followed by 3 digits from (0 to 9)
Or
5 digits (from 0 to 9)
As an example: NY090 or 90392 both are valid.
For Password the pattern should be:
6 to 8 characters as digits (0 to 9) and/or/(or a mixture) alphabet (A to Z) followed by 3 digits from (0 to 9)
As an example: ABCDEF123, ABCDEFG123, A1B1CD123 all are valid.
A: Use the following regular expressions (re):
*
*Telephone: /^\d{3}-\d{4}$/
*PO Box: /^[A-Z0-9]{2}\d{3}$/
*Password: /^[A-Z0-9]{6,8}\d{3}$/
in the format:
re.test(yourstringhere)
A: for the phone number:
[0-9]{3}-[0-9]{4}
for the PO box:
(([A-Z]{2}[0-9]{3})|([0-9]{5}))
for the password:
[A-Z,0-9]{6,8}[0-9]{3}
A: These regular expressions should hold true throughout my testing:
*
*Telephone: ^\d{3}[-]\d{4}$ (exactly 3 digits (\d), a -, and then exactly 4 digits)
*PO Box: ^(\d{2}|[A-Z]{2})\d{3}$ (Two digits OR two letters between a-z or A-Z and then exactly three digits - note that \w does not work for letters as \w also includes underscores and digits)
*Password: ^[A-Z0-9]{6,8}\d{3}$ (6-8 characters between (a-z, A-Z, 0-9), followed by three digits)
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 2,673
|
FBI vs APPLE: The War of Words Escalates
APPLE ... FBI.... who do you side with? That's the talk around the water cooler and it, at times, can become contentious. After all, we're talking about Privacy vs. National Security. Two new polls indicate that those questioned are split almost 50-50 on allowing the FBI to force Apple to open the iPhone belonging to Sayed Faruk. He's one of the two terrorists who killed 14 people and wounded others during a holiday party in San Bernardino in December. The phone is a county issued one.
If you ask some people why they side with the FBI they'll tell you, as one man did, "Forget the privacy. They're killing people!"
On the other side of the growing debate Apple supporters say things like this statement one man told us, "I think the government is going to far. I think Apple should resist."
Stephen Larsen sides with the FBI. a U.S. Attorney in LA for ten years and a Federal Judge for the same amount of time, Larsen says "Law enforcement has an interest. They want to build their case, but these families and these victims... people that were in the room, they have a lot of unanswered questions. Why did this happen? How could this have happened? Who was involved? Are they still at risk? Who were the terrorists communicating with?
Larsen plans to file an amicus brief March 3rd in Riverside Federal Court supporting the FBI side. That, and all of the other points of view, will come up at a hearing slated for March 22nd.
Meanwhile, Apple CEO Tim Cook fired off a public letter saying "the case is about much more than a single phone or a single investigation. At stake is the data security of hundreds of millions of law abiding people and setting a dangerous precedent that threatens everyone's civil liberties.
Larsen describes the remark as being over the top.
Meanwhile, FBI Director James Comey wrote a public letter asking "everyone to take a deep breath and stop saying the world is ending and use that breath to talk to each other.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 7,354
|
{"url":"https:\/\/www.calculatoratoz.com\/en\/steady-state-error-for-type-2-system-calculator\/Calc-37576","text":"Steady State Error for Type 2 System Solution\n\nSTEP 0: Pre-Calculation Summary\nFormula Used\nSteady State Error = Coefficient Value\/Acceleration error constant\ness = A\/Ka\nThis formula uses 3 Variables\nVariables Used\nSteady State Error - Steady State Error means a System whose open loop transfer function has no pole at origin.\nCoefficient Value - Coefficient value will be used to calculate the system errors.\nAcceleration error constant - Acceleration error constant:- A control system has steady state error constants for changes in position, velocity and acceleration, these constants are called as static error constants.\nSTEP 1: Convert Input(s) to Base Unit\nCoefficient Value: 2 --> No Conversion Required\nAcceleration error constant: 20 --> No Conversion Required\nSTEP 2: Evaluate Formula\nSubstituting Input Values in Formula\ness = A\/Ka --> 2\/20\nEvaluating ... 
...\ness = 0.1\nSTEP 3: Convert Result to Output's Unit\n0.1 --> No Conversion Required\n(Calculation completed in 00.000 seconds)\nYou are here -\nHome \u00bb\n\nCredits\n\nCollege Of Engineering, Pune (COEP), Pune\nJaffer Ahmad Khan has created this Calculator and 10+ more calculators!\nVerified by Parminder Singh\nChandigarh University (CU), Punjab\nParminder Singh has verified this Calculator and 100+ more calculators!\n\n< 10+ Fundamental Formulas Calculators\n\nBandwidth Frequency given Damping Ratio\n\nBandwidth Frequency given Damping Ratio\n\nFormula\n\"f\"_{\"b\"} = \"f\"*((sqrt(1-(2*(\"\u03b6\"^2))))+sqrt((\"\u03b6\"^4)-(4*(\"\u03b6\"^2))+2))\n\nExample\n\"54.96966Hz\"=\"23Hz\"*((sqrt(1-(2*(\"0.1\"^2))))+sqrt((\"0.1\"^4)-(4*(\"0.1\"^2))+2))\n\nCalculator\nLaTeX\nBandwidth Frequency = Frequency*((sqrt(1-(2*(Damping Ratio^2))))+sqrt((Damping Ratio^4)-(4*(Damping Ratio^2))+2))\nAngle of Asymptotes\n\nAngle of Asymptotes\n\nFormula\n\"\u03d5\"_{\"k\"} = (((2*\"K\")+1)*pi)\/(\"P\"-\"Z\")\n\nExample\n\"951.4286\u00b0\"=(((2*\"18\")+1)*pi)\/(\"13\"-\"6\")\n\nCalculator\nLaTeX\nAngle of Asymptotes = (((2*Parameter for Root Locus)+1)*pi)\/(Number of Poles-Number of Zeros)\nDamping Ratio or Damping Factor\n\nDamping Ratio or Damping Factor\n\nFormula\n\"\u03b6\" = \"c\"\/(2*sqrt(\"M\"*\"K\"_{\"spring\"}))\n\nExample\n\"0.188147\"=\"16\"\/(2*sqrt(\"35.45kg\"*\"51N\/m\"))\n\nCalculator\nLaTeX\nDamping Ratio = Damping Coefficient\/(2*sqrt(Mass*Spring Constant))\nGain-bandwidth Product\n\nGain-bandwidth Product\n\nFormula\n\"G.B\" = \"modulus\"(\"A\"_{\"M\"})*\"BW\"\n\nExample\n\"56.16Hz\"=\"modulus\"(\"0.78\")*\"72b\/s\"\n\nCalculator\nLaTeX\nGain-Bandwidth Product = modulus(Amplifier Gain in Mid-band)*Amplifier Bandwidth\nDamped Natural Frequency\n\nDamped Natural Frequency\n\nFormula\n\"\u03c9\"_{\"d\"} = \"f\"*(sqrt(1-(\"\u03b6\")^2))\n\nExample\n\"22.88471Hz\"=\"23Hz\"*(sqrt(1-(\"0.1\")^2))\n\nCalculator\nLaTeX\nDamped natural frequency = 
Frequency*(sqrt(1-(Damping Ratio)^2))\nResonant Peak\n\nResonant Peak\n\nFormula\n\"M\"_{\"r\"} = 1\/((2*\"\u03b6\")*sqrt(1-(\"\u03b6\")^2))\n\nExample\n\"5.025189\"=1\/((2*\"0.1\")*sqrt(1-(\"0.1\")^2))\n\nCalculator\nLaTeX\nResonant Peak = 1\/((2*Damping Ratio)*sqrt(1-(Damping Ratio)^2))\nResonant Frequency\n\nResonant Frequency\n\nFormula\n\"\u03c9\"_{\"r\"} = \"f\"*sqrt(1-2*(\"\u03b6\")^2)\n\nExample\n\"22.76884Hz\"=\"23Hz\"*sqrt(1-2*(\"0.1\")^2)\n\nCalculator\nLaTeX\nResonant Frequency = Frequency*sqrt(1-2*(Damping Ratio)^2)\nNumber of Asymptotes\n\nNumber of Asymptotes\n\nFormula\n\"N\"_{\"a\"} = \"P\"-\"Z\"\n\nExample\n\"7\"=\"13\"-\"6\"\n\nCalculator\nLaTeX\nNumber of Asymptotes = Number of Poles-Number of Zeros\nTransfer Function for Closed and Open Loop System\n\nTransfer Function for Closed and Open Loop System\n\nFormula\n\"G\"_{\"s\"} = \"C\"_{\"s\"}\/\"R\"_{\"s\"}\n\nExample\n\"0.458333\"=\"22\"\/\"48\"\n\nCalculator\nLaTeX\nTransfer Function = Output of System\/Input of System\nClosed-loop Gain\n\nClosed-loop Gain\n\nFormula\n\"A\"_{\"f\"} = 1\/\"\u03b2\"\n\nExample\n\"0.25\"=1\/\"4\"\n\nCalculator\nLaTeX\nGain-with-feedback = 1\/Feedback Factor\n\nSteady State Error for Type 2 System Formula\n\nSteady State Error = Coefficient Value\/Acceleration error constant\ness = A\/Ka\n\nWhat is Transient Response?\n\nt is a part of the time response that reaches 0 (zero) when the time becomes very large. In the graph analysis containing poles and zeroes, the poles lying on the left half of the s-plane gives the transient response. We can also say that it is a part of the response where output continuously increases or decreases. 
The transient response is also known as the temporary part of the response.\n\nHow to Calculate Steady State Error for Type 2 System?\n\nSteady State Error for Type 2 System calculator uses Steady State Error = Coefficient Value\/Acceleration error constant to calculate the Steady State Error, Steady State error for Type 2 system means a System whose open loop transfer function has two pole at origin is called as Type 2 system. Steady State Error is denoted by ess symbol.\n\nHow to calculate Steady State Error for Type 2 System using this online calculator? To use this online calculator for Steady State Error for Type 2 System, enter Coefficient Value (A) & Acceleration error constant (Ka) and hit the calculate button. Here is how the Steady State Error for Type 2 System calculation can be explained with given input values -> 0.1 = 2\/20.\n\nFAQ\n\nWhat is Steady State Error for Type 2 System?\nSteady State error for Type 2 system means a System whose open loop transfer function has two pole at origin is called as Type 2 system and is represented as ess = A\/Ka or Steady State Error = Coefficient Value\/Acceleration error constant. Coefficient value will be used to calculate the system errors & Acceleration error constant:- A control system has steady state error constants for changes in position, velocity and acceleration, these constants are called as static error constants.\nHow to calculate Steady State Error for Type 2 System?\nSteady State error for Type 2 system means a System whose open loop transfer function has two pole at origin is called as Type 2 system is calculated using Steady State Error = Coefficient Value\/Acceleration error constant. To calculate Steady State Error for Type 2 System, you need Coefficient Value (A) & Acceleration error constant (Ka). With our tool, you need to enter the respective value for Coefficient Value & Acceleration error constant and hit the calculate button. 
You can also select the units (if any) for Input(s) and the Output as well.\nHow many ways are there to calculate Steady State Error?\nIn this formula, Steady State Error uses Coefficient Value & Acceleration error constant. We can use 2 other way(s) to calculate the same, which is\/are as follows -\n\u2022 Steady State Error = Coefficient Value\/(1+Position error constant)\n\u2022 Steady State Error = Coefficient Value\/Velocity error constant\nLet Others Know","date":"2023-02-01 05:51:44","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7330324649810791, \"perplexity\": 4091.5628939889257}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-06\/segments\/1674764499911.86\/warc\/CC-MAIN-20230201045500-20230201075500-00182.warc.gz\"}"}
| null | null |
{"url":"https:\/\/www.gradesaver.com\/textbooks\/science\/physics\/physics-10th-edition\/chapter-2-kinematics-in-one-dimension-problems-page-49\/38","text":"## Physics (10th Edition)\n\n$\\frac{385}{324}\\approx1.2$\nLet d = distance between 55 and 35 mi\/h signs = distance between 35 and 25 mi\/h signs First option: $time=\\frac{distance}{speed}$ $t_{A}=\\frac{d}{55} + \\frac{d}{35}=\\frac{18d}{385}$ Second option: $x=\\frac{1}{2}(u+v)t$ $t=\\frac{2x}{u+v}$ Between 55 and 35 mi\/h signs: $t=\\frac{2d}{55+35}=\\frac{d}{45}$ Between 35 and 25 mi\/h signs: $t=\\frac{2d}{35+25}=\\frac{d}{30}$ $t_{B}= \\frac{d}{45} + \\frac{d}{30} = \\frac{d}{18}$ $t_{B}\/t_{A}=(\\frac{d}{18})\/(\\frac{18d}{385})=\\frac{385}{324}\\approx1.2$","date":"2018-08-19 04:47:18","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8192457556724548, \"perplexity\": 784.5557685995832}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-34\/segments\/1534221214691.99\/warc\/CC-MAIN-20180819031801-20180819051801-00095.warc.gz\"}"}
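The kinematics solution above can be checked numerically. A sketch (the speeds 55/35/25 mi/h and the sign spacing d come from the problem; d cancels in the ratio):

```javascript
// Option A: drive distance d at a constant 55 mi/h, then d at 35 mi/h.
// Option B: decelerate uniformly between signs, so the average of the
// two speeds applies over each stretch: t = 2x/(u+v).
const d = 1; // any positive distance; it cancels in the ratio

const tA = d / 55 + d / 35;                       // = 18d/385
const tB = 2 * d / (55 + 35) + 2 * d / (35 + 25); // = d/45 + d/30 = d/18

const ratio = tB / tA; // 385/324
console.log(ratio.toFixed(1)); // 1.2
```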
| null | null |
It would come as no surprise to anyone who knew me to discover that I adore cherry blossom trees.
They're pink, dainty, feminine and did I mention pink?!
But their appearance isn't the only thing I'm fond of.
Cherry blossoms are the symbol of Spring and each year when the trees bloom around my house, I know Spring is coming.
The dread of Winter is finally easing; while the temperatures may not be as high as I'd like and rain still streams from the sky every now and again, most days are filled with bright blue skies and sunshine.
But until that warmer weather hits and Winter is over, I'll just enjoy the pretty pink Cherry Blossoms.
Are there any particular flowers that you connect with a particular season/time?
I can feel the same change in the air! Prettiest pic of cherry blossoms. I don't have a flower as such, but I love the feeling of that first bite of sun on your skin when you know summer is on its way.
Ohhh, I have to agree, that is THE BEST feeling! Especially on days where it's still just a little bit chilly, it always puts me in a good mood.
Thanks Lisa! I agree, wattle trees are definitely vibrant!
Oh no! Hopefully winter isn't too dreary for you!
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 6,711
|
var bcrypt = require('bcrypt');
var db = require('../db');

// Look up the user and verify the password hash; resolves to
// {id: ...} on success, or false when the account is missing,
// inactive, or the password does not match.
function checkLogin(username, password) {
  return db.execute({
    type: 'select',
    table: 'Users',
    columns: ['Id', 'Password'],
    where: {
      'UserName': username,
      'isactive': 'TRUE'
    }
  })
  .then(function(result) {
    // no matching active account for this username
    if (!result.rows.length) return false;
    var hash = result.rows[0].password;
    var userid = result.rows[0].id;
    // compareSync re-hashes the supplied password with the stored salt
    var match = bcrypt.compareSync(password, hash.trim());
    return match ? {'id': userid} : false;
  });
}

// Stateless stub: nothing to tear down server-side yet.
function doLogout(username) {
  return true;
}

module.exports = {
  login: checkLogin,
  logout: doLogout
};
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 2,072
|
{"url":"http:\/\/mathematica.stackexchange.com\/questions\/25992\/different-chapters-in-different-files","text":"# Different Chapters in Different Files\n\nI was wondering if there is a command to change the counter to adjust its numbeing to having different chapters saved as separate files. This seems possible with page numbers but not section numbers.\n\n-\nCould you elaborate? This question is not entirely clear to me. \u2013\u00a0Sjoerd C. de Vries May 28 '13 at 13:37\nI am writing a book using Mathematica. If I save in 2 files, the 2nd file will restart numbering at Chapter 1. There is a way to fix page numbers for printing, but I am not aware of one for chapter numbers. Thanks. \u2013\u00a0Nero Jun 3 '13 at 23:11\n\nIn the notebook you want to have the chapter be numbered other than 1, select the title cell (to which you have inserted automatic numbering for \"Title\", assuming your chapter titles are in cells of style \"Title\" -- otherwise change as needed). Go to the Options Inspector and search for CounterAssigments. Add {\"Title\", 1}, to the list to have the chapter numbered 2, or {\"Title\", 2}, to have the chapter numbered 3, etc.\n\nThe whole list for CounterAssignments will (probably) look like this:\n\n{{\"Title\", 1}, {\"Section\", 0}, {\"Equation\", 0}, {\"Figure\", 0},\n{\"Subtitle\", 0}, {\"Subsubtitle\", 0}, {\"Item\", 0}, {\"Subitem\", 0},\n{\"Subsubitem\", 0}, {\"ItemNumbered\", 0}, {\"SubitemNumbered\", 0},\n{\"SubsubitemNumbered\", 0}}\n\n-\nReference (John Fultz) \u2013\u00a0Mr.Wizard Jul 21 '13 at 7:12\n@Mr.Wizard Yes, although I learned it from Stan Wagon some years ago, who probably learned it from someone else, perhaps John! :). I was going to answer, got interrupted, and John beat me - with a better answer most likely. Later, checking for a duplicate, I found this neglected question. \u2013\u00a0Michael E2 Jul 21 '13 at 13:25\nMichael, pardon me, I just realized how that \"reference\" comment might have been taken (poorly). 
I only meant to provide a supporting example from what I consider an authoritative source, and I did not mean to imply you were not yourself a trustworthy source of information. \u2013\u00a0Mr.Wizard Jul 22 '13 at 14:46\n@Mr.Wizard No problem. It occurred to me there were (at least) two ways to take the comment - one was to connect the two questions, valuable in itself. Another was as an acknowledgement of a source, which I extended. Your rationale is a third, and worthwhile, given the somewhat scant documentation on counter options. The fourth (the implication) would not have occurred to me. \u2013\u00a0Michael E2 Jul 22 '13 at 15:43\nThanks Michael E2; it works. \u2013\u00a0Nero Jul 29 '13 at 21:05","date":"2016-02-06 13:49:39","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3553841710090637, \"perplexity\": 2024.6707160226224}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-07\/segments\/1454701146550.16\/warc\/CC-MAIN-20160205193906-00235-ip-10-236-182-209.ec2.internal.warc.gz\"}"}
| null | null |
Su-cung (; 21 February 711 – 16 May 762) was a Chinese emperor who ruled the Tang empire from 756 to 762. He was the son and successor of Emperor Süan-cung; after the Tang capital Čchang-an was captured by the army of the rebel An Lu-šan, his father Süan-cung fled southwest to S'-čchuan, while Su-cung withdrew northwest to Ling-wu, where he proclaimed himself emperor. Süan-cung did not resist his own deposition. Su-cung's reign was spent fighting the rebellion mentioned above; in 757 he succeeded in retaking Čchang-an, but he lacked the strength to put the revolt down, and fighting with the rebels lasted until 763, by which time Su-cung's son and successor Taj-cung was already on the throne.
Name
Su-cung is the emperor's temple name; his personal name was Li S'-šeng (), changed in 725 to Li Ťün (), in 736 to Li Jü (), in 738 to Li Šao () and shortly afterwards to Li Čcheng ().
Early life
Su-cung was born in 711 under the name Li S'-šeng, during the reign of Emperor Žuej-cung (emperor 684–690 and 710–712), as the third son of the heir apparent Li Lung-ťi (later Emperor Süan-cung) and one of his consorts, Jang Kuej-pchin († 728), who came from the imperial family of the Suej dynasty (581–618). In 712 Li Lung-ťi compelled his father to abdicate and took the throne himself. Su-cung received the title Prince of Šan, and his elder brothers Li S'-č' and Li S'-čchien also received princely titles. In 715 Li S'-čchien was elevated to heir apparent.
In 725 the young Su-cung was renamed from Li S'-šeng to Li Ťün, and the following year his title was changed to Prince of Čung. In 729 the Kitans and Tatabs invaded the Tang empire; the emperor formally placed Su-cung at the head of the army sent against them (actual command lay with his cousin Li Chuej (), Prince of Sin-an). Li Chuej defeated the invaders, and as a reward Su-cung received the high honorary title s'-tchu. In 735 his name was changed again, this time to Li Jü.
Süan-cung's favorite concubine, Lady Wu, sought to have her son Li Mao installed as crown prince. In 737, as a result of her intrigues, the heir apparent Li S'-čchien (renamed Li Chung in 725) and several other princes were stripped of their positions and forced to commit suicide. Lady Wu, however, died that same year. In the summer of 738 the emperor named Li Jü the new heir, changing his name to Li Šao. Courtiers then pointed out to the ruler that Li Šao was the name of a crown prince who in 453 had overthrown and killed his father the emperor, so Süan-cung changed the prince's name once more, to Li Čcheng.
Heir apparent
Li Čcheng became heir against the wishes of the then chief minister Li Lin-fu, who had previously backed Lady Wu and her son. Li Lin-fu tried to compromise Li Čcheng: in 746 he accused the brothers of the prince's principal wife and a certain general of conspiring against the emperor, and Li Čcheng, seeking to distance himself from the accused, divorced his wife. That same year he was forced to divorce one of his concubines when her father was implicated in a witchcraft case.
At the end of 755 the commander of the armies of the Tang northeast, General An Lu-šan, rebelled. Early in 756 the rebels captured Luo-jang, where An Lu-šan proclaimed himself emperor of the state of Jen, and in July 756 they took the Tang capital Čchang-an as well. Süan-cung fled southwest to S'-čchuan, but Li Čcheng separated from him and went northwest to Ling-wu, where in mid-August 756, with the backing of the local army, he proclaimed himself emperor. His father and the Tang regional military and civil authorities recognized Su-cung's usurpation.
Emperor
After taking power he made the recapture of Čchang-an his chief goal. The attacks of his troops in the autumn of 756 and spring of 757 were repelled by the Jen army, however, and the Tang forces suffered heavy losses. In July 756, in an effort to strengthen the authority of the Tang dynasty, Süan-cung had entrusted some of his sons (including the crown prince) with the administration of regions. Most never actually exercised the office, but Li Lin, Prince of Jung, took up the administration of the territory on the middle Jang-c' and rebelled at the beginning of 757. He attempted to seize the lower Jang-c' as well, but was caught by Tang troops and killed. Nor was there unity in Ling-wu, where the emperor's third son Li Tchan (who had first proposed not following Süan-cung to S'-čchuan but going to Ling-wu instead, and who had distinguished himself on the journey) clashed with the emperor's favorite consort Lady Čang and the eunuch Li Fu-kuo. Li Fu-kuo and Lady Čang accused Li Tchan of plotting against his elder brother Li Čchu (heir apparent from 758, later Emperor Taj-cung), and the emperor, believing them, ordered Li Tchan to commit suicide.
In the autumn of 757 Pchu-ku Chuaj-en, a Tang general of Tiele origin, negotiated an alliance with the Uyghur Khaganate. Under the agreement the Uyghurs sent 4,000 horsemen to China. The Uyghur-reinforced Tang army under Li Čchu and Kuo C'-i marched on Čchang-an in October 757, defeated the rebels on 13 November in a battle near the Siang-ťi temple (ten miles south of the city), and occupied Čchang-an the following day. The Tang offensive continued eastward with a victory in another battle (between the Tung pass and Šen-čou) on 30 November and the occupation of Luo-jang on 3 December 757. An Čching-sü, An Lu-šan's son and successor, then withdrew to Siang-čou (modern An-jang in northern Che-nan). Lacking the resources for a further offensive, the Tang government halted its armies and concentrated on restoring the capitals. After the liberation of the metropolis, Su-cung's closest minister Li Mi resigned all his offices; the most influential figures at court were thereafter Lady Čang (empress from 758), the eunuch Li Fu-kuo, who commanded the imperial guard, and the crown prince Li Čchu, who held the title of commander-in-chief.
The next Tang offensive began in November 758. The two hundred thousand government soldiers, however, were divided among nine ťie-tu-š' coordinated by the eunuch Jü Čchao-en. This arrangement arose because the two most respected generals, Kuo C'-i and Li Kuang-pi, refused to serve under each other, and because the government distrusted its generals. Even so, the Tang forces defeated An Čching-sü in battle and besieged him in his seat at Siang-čou. The siege lasted through the winter of 758/759, but in the spring the Tang side, apparently through the fault of General Li Kuang-pi, antagonized General Š' S'-ming of Fan-jang, who controlled northern Che-pej and had gone over to the Tang side at the beginning of 758 but now came to the relief of the besieged. Š' S'-ming's army, reportedly more than ten times smaller than the combined Tang force, met the allied Tang army on 7 April; in the confusion caused by a dust storm both armies fell apart, and once order was restored the Tang commanders, mutually distrustful and individually too weak against the rebels, withdrew. Š' S'-ming then murdered An Čching-sü along with his most loyal supporters and proclaimed himself emperor of the Jen state. In the autumn of 759 Š' S'-ming retook Luo-jang, but with a Tang army at Che-jang, 40 km northeast of the city, he did not dare advance further west. The fighting subsided for a time: the rebels lacked the strength to attack, while the Tang government was equally unable to gather the means for an offensive.
Independently of the struggle between Tang and Jen, a series of regional revolts and uprisings broke out across China, draining Tang strength. The largest were a revolt in the valley of the Chan river and on the middle Jang-c'-ťiang in 759 and again at the beginning of 760, uprisings on the lower Jang-c'-ťiang in 760 and 762–763, and uprisings in S'-čchuan in 761 and 762.
In the spring of 761 Š' S'-ming attacked the Tang army near Luo-jang and crushed it in April 761. The rebel offensive collapsed, however, when Š' S'-ming was shortly afterwards murdered by his son Š' Čchao-i, who became the next emperor of the Jen state. The change of ruler weakened the rebels because, as with An Čching-sü, Š' Čchao-i lacked his father's authority.
At the beginning of May 762 the former emperor Süan-cung died, followed a few days later by Su-cung himself. The throne passed to his eldest son, the crown prince Taj-cung.
Links
References
Literature
External links
Tang emperors
Born on 21 February
Born in 711
Died on 16 May
Died in 762
Men
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 1,760
|
var chai = require('chai')
  , expect = chai.expect
  , _ = require('lodash')
  , path = require('path')
  , recursiveReaddirSync = require('../index')
  ;

describe('Functionality testing.', function(){
  // Files we should find in our path
  var expectedFiles = [
    './nested/a/file1.txt',
    './nested/a/b/file1.txt',
    './nested/a/b/file2.js',
    './nested/a/b/.hidden1',
    './nested/x/.hidden2',
    './nested/y/z/file.conf'
  ];
  expectedFiles = _.map(expectedFiles, function(f){
    return path.resolve(__dirname, f);
  });

  // Directories should not be listed in the results, only files.
  var unexpectedFiles = [
    './nested/empty'
  ];
  unexpectedFiles = _.map(unexpectedFiles, function(f){
    return path.resolve(__dirname, f);
  });

  var results = recursiveReaddirSync(__dirname + '/nested');

  it('should return an array with length equal to that of expectedFiles', function(){
    expect(results).to.have.length(expectedFiles.length);
  });

  it('should find all nested files in the folder structure', function(){
    expect(_.xor(results, expectedFiles)).to.have.length(0);
  });

  it('should not contain empty folders', function(){
    expect(results).to.not.include.members(unexpectedFiles);
  });
});
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 1,783
|
Lupiac (French: Lupiac) is a French commune located in the Gers department in the Occitanie region.
Demographics
Administration
Notable people
D'Artagnan, whose real name was Charles de Batz de Castelmore, was born in Lupiac in 1611 and died in 1673 at the siege of Maastricht.
References
Communes of Gers
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 5,837
|
{"url":"https:\/\/codereview.stackexchange.com\/questions\/132137\/check-if-a-number-is-prime","text":"# Check if a number is prime\n\nI'm looking for feedback on the following code which checks if a number is prime using Swift 2.\n\n import UIKit\n\nvar number:Int = 1123\n\nvar isPrime:Bool = true\n\nswitch number {\n\ncase 1:\nisPrime = false\n\ncase 2:\nisPrime = true\n\ncase 3:\nisPrime = true\n\ndefault:\nprimeCheck:for i in 2...Int(sqrt(Double(number))) {\n\nif number % i == 0 {\n\nisPrime = false\nbreak primeCheck\n\n}\n}\n\n}\n\nif isPrime {\n\nprint(\"The number \\(number) is prime!\")\n\n} else {\n\nprint(\"The number \\(number) is composite!\")\n}\n\n\u2022 Should we assume that number is never smaller than 1? Because Int can be. \u2013\u00a0user86624 Jun 16 '16 at 1:49\n\u2022 Ah, good point, no that is a mistake on my part, I should be checking for numbers less than 1. \u2013\u00a0newToProgramming Jun 16 '16 at 10:35\n\u2022 I also wonder if you need a case for 3, and even 2, since you already have initialized isPrime to true. You'll need to change the upper bound for the for loop a bit then: 2...Int(sqrt(Double(number)))+1, but it loses two case labels. Matter of taste perhaps. \u2013\u00a0user86624 Jun 16 '16 at 10:54\n\u2022 This article includes a really good isPrime implementation, as well as a discussion on how to unit test these sorts of functions. \u2013\u00a0nhgrif Jul 24 '16 at 5:12\n\nI won't comment on the algorithm you're using to find out if the number is prime or not. Though the way you're doing it is quite inefficient. Some people in the comments have already pointed out some possible improvements and I'd suggest looking into the sieve of Eratosthenes to get better performance. 
You can find a Swift implementation of it here.\n\nFirst off, there is a lot of unnecessary whitespace, you should get rid of that and only use a blank line too separate logical parts of your code.\n\nSecondly, let's look at the following part:\n\nvar number:Int = 1123\nvar isPrime:Bool = true\n\n\nThere are three issues with it. number is mutable, but you never actually change it, it is customary to have exactly one space after a colon in Swift (instead of none) and while we're at it, in Swift you don't typically explicitly state the the of a variable unless it significantly improves readability (which is almost never) or type inference gets the wrong type. (which also doesn't happen often.) So you can rewrite those two lines as:\n\nlet number = 1123\nvar isPrime = true\n\n\nAs for the meat of this answer, it would be preferable to extract the actual prime checking into a function:\n\nfunc isPrime(number: UInt32) -> Bool {\nswitch number {\ncase 0, 1: \/\/ you can put multiple cases on one line\nreturn false\ncase 2, 3:\nreturn true\ndefault:\nfor i in 2...Int(sqrt(Double(number))) {\nif number % i == 0 {\nreturn false\n}\n}\nreturn true\n}\n}\n}\n\n\nThis allows you to write:\n\nlet number = 1123\nif isPrime(number) {\nprint(\"The number \\(number) is prime!\")\n} else {\nprint(\"The number \\(number) is composite!\")\n}\n\n\nor even:\n\nprint(\"The number \\(number) is \\(isPrime(number) ? \"prime\" : \"composite\")!\")\n\n\nThis allows you to get rid of the mutable isPrime, which is something you should strive for in Swift. 
And allows you to nicely ecapsule the actrual logice and separate it from the rest.","date":"2019-10-15 17:50:31","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4973490238189697, \"perplexity\": 1746.1309877171366}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-43\/segments\/1570986660067.26\/warc\/CC-MAIN-20191015155056-20191015182556-00535.warc.gz\"}"}
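The sieve of Eratosthenes that the review points to is easy to sketch outside Swift; here it is in JavaScript (function names are illustrative), alongside the trial-division check the review discusses:

```javascript
// Trial division up to sqrt(n), the approach used in the reviewed code.
function isPrime(n) {
  if (n < 2) return false;
  for (let i = 2; i * i <= n; i++) {
    if (n % i === 0) return false;
  }
  return true;
}

// Sieve of Eratosthenes: all primes below `limit` in one pass,
// much cheaper than repeated trial division when testing many numbers.
function sieve(limit) {
  const composite = new Uint8Array(limit);
  const primes = [];
  for (let p = 2; p < limit; p++) {
    if (composite[p]) continue;
    primes.push(p);
    for (let m = p * p; m < limit; m += p) composite[m] = 1;
  }
  return primes;
}

console.log(isPrime(1123)); // true — the number used in the question
console.log(sieve(30));     // the ten primes below 30
```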
| null | null |
All products that are available in our shop.
Perfect filigree table cards for your table decorations. Suits for any event, party or shower.
Fun favors for engagement celebrations, showers, or your wedding tables.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 971
|
Ostfriesentee ('East Frisian tea') is a carefully composed tea blend widely drunk in East Frisia, a region of the federal state of Lower Saxony on Germany's northern coast. Tea took hold in this region thanks to its seaports, which may have had trade connections with India. The blend combines up to ten black teas from Assam and Sri Lanka, though teas from Africa, Java and Sumatra, as well as Darjeeling, may also be mixed in. Drinking tea is part of a ceremony in this part of Germany.
Tea culture in East Frisia
East Frisia has a distinctive way of serving tea, with Kluntjes (rock candy) and cream.
Economy
Several companies in East Frisia trade in this tea, among them the Bünting-Gruppe, Thiele & Freese and Onno Behrends. Each offers its own blends, which are popular in the region.
Literature
Ingrid Buck: Volkskunde und Brauchtum in Ostfriesland. Verlag Ostfriesische Landschaft, 1988
Johann Haddinga: Das Buch vom ostfriesischen Tee. 2nd edition. Schuster Verlag, Leer 1986, ISBN 3-7963-0237-8
Dietrich Janssen: Ostfriesischer Tee – Teegeschichte, Geschichte, Geschichten und Anekdoten. Wartberg, 2006, ISBN 3-8313-1600-7
Ernst Müller: De Utrooper's kleines Buch vom ostfriesischer Tee. 1998, ISBN 3-928245-81-3
Cuisine of Lower Saxony
Tea blends
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 7,104
|
package org.projectfloodlight.openflow.protocol.oxs;

import org.projectfloodlight.openflow.protocol.*;
import org.projectfloodlight.openflow.protocol.action.*;
import org.projectfloodlight.openflow.protocol.actionid.*;
import org.projectfloodlight.openflow.protocol.bsntlv.*;
import org.projectfloodlight.openflow.protocol.errormsg.*;
import org.projectfloodlight.openflow.protocol.meterband.*;
import org.projectfloodlight.openflow.protocol.instruction.*;
import org.projectfloodlight.openflow.protocol.instructionid.*;
import org.projectfloodlight.openflow.protocol.match.*;
import org.projectfloodlight.openflow.protocol.stat.*;
import org.projectfloodlight.openflow.protocol.oxm.*;
import org.projectfloodlight.openflow.protocol.queueprop.*;
import org.projectfloodlight.openflow.types.*;
import org.projectfloodlight.openflow.util.*;
import org.projectfloodlight.openflow.exceptions.*;
import io.netty.buffer.ByteBuf;

public interface OFOxsFlowCount extends OFObject, OFOxs<U32> {
    long getTypeLen();
    U32 getValue();
    StatField<U32> getStatField();
    boolean isMasked();
    OFOxs<U32> getCanonical();
    U32 getMask();
    OFVersion getVersion();
    void writeTo(ByteBuf channelBuffer);
    Builder createBuilder();

    public interface Builder extends OFOxs.Builder<U32> {
        OFOxsFlowCount build();
        long getTypeLen();
        U32 getValue();
        Builder setValue(U32 value);
        StatField<U32> getStatField();
        boolean isMasked();
        OFOxs<U32> getCanonical();
        U32 getMask();
        OFVersion getVersion();
    }
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 8,147
|
GW1 Differentials: Che Adams
The Scout 7 Sep 2020
The Scout says Southampton striker has form and fixtures to make a big impact at the start of 2020/21 FPL
The Scout is selecting four overlooked players who could be set for a breakout Gameweek 1 in Fantasy Premier League.
Che Adams (SOU) £6.0m
The forward can be found in only 2.4 per cent of squads for Southampton's visit to Crystal Palace.
Adams impressed in Saints' only friendly of the summer, producing a goal and an assist in a 7-1 win over Swansea City.
Adams scores in pre-season friendly
That mirrors his improving output from towards the end of last season.
Adams scored all four of his goals for 2019/20 in the final six Gameweeks, equalling the total of strike partner Danny Ings (£8.5m) during that time.
The underlying numbers further highlight Adams' goal threat in that period.
His 16 shots and six shots on target were both joint-top with Ings for Southampton.
And he bettered his fellow forward by 15 shots inside the penalty area to 14.
Saints' favourable run of early opponents hands their key attackers the chance to bring in returns in a number of the opening Gameweeks.
According to the Fixture Difficulty Ratings (FDR), five of their first eight matches score only two.
Costing £2.5m less than Ings, Adams can emerge as a major source of value early on this season.
Part 2: Eddie Nketiah
Part 3: Reece James
Part 4: Dele Alli
Don't get caught out. GW1 deadline: 11:00 BST on 12 September. Register your team now at fantasy.premierleague.com.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 3,379
|
{"url":"http:\/\/www.researchgate.net\/publication\/6331601_Field-induced_Fermi_surface_reconstruction_and_adiabatic_continuity_between_antiferromagnetism_and_the_hidden-order_state_in_URu2Si2","text":"Article\n\n# Field-induced Fermi surface reconstruction and adiabatic continuity between antiferromagnetism and the hidden-order state in URu2Si2.\n\n\u2022 ##### J. A. Mydosh\nNational High Magnetic Field Laboratory, Florida State University, Tallahassee, Florida 32306, USA.\nPhysical Review Letters (Impact Factor: 7.73). 05\/2007; 98(16):166404. DOI:\u00a010.1103\/PhysRevLett.98.166404\nSource: PubMed\n\nABSTRACT Shubnikov-de Haas oscillations reveal at high fields an abrupt reconstruction of the Fermi surface within the hidden-order (HO) phase of URu2Si2. Taken together with reported Hall effect results, this implies an increase in the effective carrier density and suggests that the field suppression of the HO state is ultimately related to destabilizing a gap in the spectrum of itinerant quasiparticles. While hydrostatic pressure favors antiferromagnetism in detriment to the HO state, it has a modest effect on the complex H-T phase diagram. Instead of phase separation between HO and antiferromagnetism our observations indicate adiabatic continuity between both orderings with field and pressure changing their relative weight.\n\n0 Bookmarks\n\u00b7\n58 Views\n\u2022 Source\n##### Article: Origin of the Large Anisotropy in the \u03c73 Anomaly in URu2Si2\n[Hide abstract]\nABSTRACT: Motivated by recent quantum oscillations experiments on U Ru2Si2, we discuss the microscopic origin of the large anisotropy observed many years ago in the anomaly of the nonlinear susceptibility in this same material. We show that the magnitude of this anomaly emerges naturally from hastatic order, a proposal for hidden order that is a two-component spinor arising from the hybridization of a non-Kramers \u03935 doublet with Kramers conduction electrons. 
\section*{Introduction}
Quasi-Frobenius rings, i.e., rings which are Artinian and self-injective, belong to those classical Artinian rings which have been a driving force in the development of modern ring and module theory. Introduced by Nakayama~\cite{nakayama}, one of their many equivalent characterizations identifies them as those rings~$R$ for which the $\Hom( -, R )$ functor provides a duality between its finitely generated left modules and its finitely generated right modules. They also appear as the smallest categorical generalization of group rings of finite groups.
Within the class of quasi-Frobenius rings, those rings~$R$ for which the socle $\soc R$ is isomorphic, as one-sided module, to the semisimple quotient ring $R /\! \rad R$, are the Frobenius rings. They emerge naturally as a generalization of Frobenius algebras, i.e., finite-dimensional algebras over a field that admit a non-degenerate balanced bilinear pairing.
In more recent years, with the advent of ring-linear coding theory (see, for example, \cite{ring-codes}), the interest in finite ring theory has increased. One of the striking results in this regard is the characterization, due to Wood~\cite{wood1, wood2}, of finite Frobenius rings as precisely those rings~$R$ which satisfy the following MacWilliams extension property: every Hamming weight preserving isomorphism between left submodules of $R^n$ extends to a monomial transformation, i.e., is of the form $(x_i) \mapsto ( x_{\sigma i} u_i )$ for a permutation $\sigma \in S_n$ and invertible ring elements $u_i \in R$. This property has been established by MacWilliams~\cite{macwilliams} in the case of finite fields and consolidates the notion of code equivalence. A further remarkable result is the observation by Honold~\cite{honold} that in the finite case Frobenius rings can be characterized as those rings~$R$ satisfying a one-sided condition $\soc R \cong R /\! \rad R$, even without assuming the ring to be quasi-Frobenius.
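As a concrete (toy, not-from-the-text) illustration of the extension property's target shape, the following Python sketch applies a monomial transformation over the commutative ring $\mathbb Z / 4$: coordinates are permuted and rescaled by units, which visibly preserves the Hamming weight. The permutation and units chosen here are arbitrary.

```python
def hamming_weight(word):
    """Number of nonzero coordinates (over Z/4)."""
    return sum(1 for x in word if x % 4 != 0)

def monomial_transform(word, sigma, units):
    """(x_i) -> (x_{sigma(i)} * u_i) over Z/4, as in the extension property."""
    return tuple((word[sigma[i]] * units[i]) % 4 for i in range(len(word)))

sigma = [2, 0, 1]   # a permutation of {0, 1, 2}
units = [1, 3, 3]   # the units of Z/4 are {1, 3}
for word in [(0, 1, 2), (3, 0, 0), (1, 1, 1)]:
    image = monomial_transform(word, sigma, units)
    # units map nonzero elements to nonzero elements, so weight is preserved
    assert hamming_weight(image) == hamming_weight(word)
```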
In the present note we offer a generalization of Wood's characterization of finite Frobenius rings as rings satisfying the MacWilliams extension property to the realm of general infinite (Artinian) rings. We remark that the case of infinite Artin algebras has recently been treated by Iovanov~\cite{iovanov}.
While each (two-sided) MacWilliams ring is necessarily quasi-Frobenius, it turns out that the Frobenius property is in general too strong to be deduced from satisfying MacWilliams' extension theorem. We therefore weaken the Frobenius property to a criterion which we call \emph{finitarily Frobenius}, which merely requires that the finitary socle embeds into the semisimple quotient (either as left or right module). Clearly, every Frobenius ring is also finitarily Frobenius. In our main result, Theorem~\ref{theorem:main}, we show that a left Artinian ring satisfies the MacWilliams extension property for left modules if and only if it is left pseudo-injective and the finitary left socle embeds into the semisimple quotient. It follows that an Artinian ring is finitarily Frobenius if and only if it satisfies MacWilliams' extension property.
Our approach is based on the description of Frobenius rings in terms of generating characters, an idea developed by Wood~\cite{wood1} and adapted by Iovanov~\cite{iovanov}. In fact, our proof method relies on the existence of certain torsion-free characters on finitarily Frobenius rings, as well as results on Pontryagin duality of discrete and compact abelian groups. Along the way, we show for a left Artinian ring that the finitary left socle embeds into the semisimple quotient if and only if it admits a finitarily left torsion-free character, if and only if the Pontryagin dual of the regular left module is almost monothetic.
\section{Frobenius rings and generalizations}\label{section:frobenius.rings}
We compile in this section a few notions from ring and module theory as needed in the present context and introduce the notion of a finitarily Frobenius ring. For a comprehensive account on the classical theory we refer to~\cite{lam, anderson-fuller}, see also~\cite{wood1, honold}.
In the following, the term \emph{ring} will always mean unital ring. Recall that a ring is said to be \emph{quasi-Frobenius} if it is left Artinian and left self-injective, i.e., injective as a left module. As it turns out, the properties left Artinian and left self-injective can each be replaced by its right counterparts, and the Artinian property by Noetherian (cf.~\cite[Sec.~15]{lam}).
We shall, for a (left or right) module~$M$, denote by $\soc M$ the sum of all its minimal submodules, by $\rad M$ the intersection of all maximal ones and by $\tp M \defeq M /\! \rad M$ its "top quotient". Accordingly, we denote by $\rad R$ the \emph{Jacobson radical} of a ring~$R$, i.e., the intersection of all maximal left (right) ideals; also, let $\soc({}_R R)$ be its \emph{left socle}, i.e., the sum of all minimal left ideals, and let $\soc(R_R)$ be its analogously defined \emph{right socle}. A crucial notion for the present note is the Frobenius property.
\begin{definition}\label{def:frob} A ring~$R$ is called \emph{Frobenius} if it is quasi-Frobenius and satisfies \[ \text{(i)} \ \soc({}_R R) \cong {}_R (R /\! \rad R) \quad \text{ and/or } \quad \text{(ii)} \ \soc(R_R) \cong (R /\! \rad R)_R \,. \] \end{definition}
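For a small sanity check of Definition~\ref{def:frob} in the commutative toy case, the Python sketch below brute-forces the ideals of $\mathbb Z / n$ and compares $\soc R$ with $R /\! \rad R$ by cardinality (for cyclic rings, equal cardinality gives the isomorphism). This is purely illustrative; the helper names are our own.

```python
from functools import reduce

def ideals(n):
    # every ideal of Z/n is dZ/n for a divisor d of n
    return [frozenset(d * k % n for k in range(n))
            for d in range(1, n + 1) if n % d == 0]

def socle(n):
    nonzero = [I for I in ideals(n) if len(I) > 1]
    minimal = [I for I in nonzero if not any(J < I for J in nonzero)]
    # the socle is the sum of the minimal nonzero ideals
    soc = {0}
    for I in minimal:
        soc = {(a + b) % n for a in soc for b in I}
    return frozenset(soc)

def radical(n):
    proper = [I for I in ideals(n) if len(I) < n]
    maximal = [I for I in proper if not any(I < J for J in proper)]
    return frozenset(reduce(lambda A, B: A & B, maximal))

# Z/4: soc = {0, 2} and |R / rad| = 4 / |{0, 2}| = 2, so soc(R) and
# R/rad(R) are both cyclic of order 2 -- Z/4 is a Frobenius ring.
assert socle(4) == frozenset({0, 2})
assert radical(4) == frozenset({0, 2})
```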
For quasi-Frobenius rings the conditions (i) and (ii) are actually equivalent. Indeed, it is worthwhile to recall how the properties of quasi-Frobenius and Frobenius may be expressed with respect to the principal decomposition, as we outline briefly below (for details, see~\cite[Sec.~16]{lam} or~\cite[Sec.~31]{anderson-fuller}). Let~$R$ be a left or right Artinian ring and let $S \defeq R /\! \rad R$ be its semisimple quotient. Then there is a list of orthogonal primitive idempotents $e_1, \dots, e_n \in R$ such that \[ R = R e_1 \oplus \ldots \oplus R e_n \quad \text{ and } \quad R = e_1 R \oplus \ldots \oplus e_n R \] are direct sums of indecomposable left and right modules, respectively, and, letting $\overline e_i \defeq e_i + \rad R \in S$ for all $i \in \{ 1, \dots, n \}$, one has decompositions \[ S = S \overline e_1 \oplus \ldots \oplus S \overline e_n \quad \text{ and } \quad S = \overline e_1 S \oplus \ldots \oplus \overline e_n S \] into simple left and right modules, respectively. For all $i, j \in \{ 1, \dots, n \}$ there holds \[ S \overline e_i \cong S \overline e_j \ \Longleftrightarrow\ R e_i \cong R e_j \ \Longleftrightarrow\ e_i R \cong e_j R \ \Longleftrightarrow\ \overline e_i S \cong \overline e_j S \,, \] and we may assume that $R e_1, \dots, R e_m$ (for some $m \le n$) form a complete set of non-isomorphic representatives for all $R e_i$. Then $S \overline e_1, \dots, S \overline e_m$ (and $\overline e_1 S, \ldots, \overline e_m S$) form an irredundant set of representatives for all simple left (right) modules. We shall refer to $e_1, \dots, e_m$ as a \emph{basic set} of idempotents for the ring~$R$. It is easy to see that $\tp(R e_i) \cong S \overline e_i$ and $\tp(e_i R) \cong \overline e_i S$ (considered as $R$-modules), in particular, the former are simple.
Now if the ring~$R$ is quasi-Frobenius then each of $\soc (R e_i)$ and $\soc (e_i R)$ is also simple. In fact, the following characterization is valid (see, e.g., \cite[Cor.~31.4]{anderson-fuller}), which actually corresponds to Nakayama's original definition of quasi-Frobenius rings~\cite{nakayama}.
\begin{thm}\label{theorem:nakayama} Let~$R$ be a left or right Artinian ring with a basic set of idempotents $e_1, \dots, e_m$. Then the ring~$R$ is quasi-Frobenius if and only if there is a permutation $\pi \in S_m$ such that \[ \soc(R e_i) \cong \tp(R e_{\pi(i)}) \quad \text{ and } \quad \soc(e_{\pi(i)} R) \cong \tp(e_i R) \,. \] \end{thm}
The permutation $\pi \in S_m$ in Theorem~\ref{theorem:nakayama} is referred to as the \emph{Nakayama permutation}. Notice that for any fixed~$j$ the number~$\mu_j$ of indecomposables $R e_i$ isomorphic to $R e_j$ equals the number of simples $\tp(R e_i)$ isomorphic to $\tp(R e_j)$, and coincides with its right counterpart. Hence, for a quasi-Frobenius ring~$R$, Theorem~\ref{theorem:nakayama} yields that \[ \soc({}_R R) = \textstyle\bigoplus\limits_{i=1}^n \soc(R e_i) \cong \textstyle\bigoplus\limits_{i=1}^n \tp(R e_i) = \tp({}_R R) \] if and only if $\mu_{\pi(i)} = \mu_i$ for all $i \in \{ 1, \dots, n \}$, which in turn is equivalent to $\soc(R_R) \cong \tp(R_R)$. This shows the equivalence of condition (i) and (ii) of Definition~\ref{def:frob} for quasi-Frobenius rings. (On the other hand, any Artinian ring~$R$ satisfying both $\soc({}_R R) \cong \tp({}_R R)$ and $\soc(R_R) \cong \tp(R_R)$ is necessarily quasi-Frobenius.)
We are going to introduce a finitary version of the Frobenius property. Given a ring $R$, we define its \emph{finitary left socle} $\soc^{\ast}({}_R R)$ to be the sum of all finite minimal left ideals of~$R$, and its \emph{finitary right socle} $\soc^{\ast}(R_R)$ as the sum of all finite minimal right ideals of~$R$.
\begin{prop}\label{proposition:finitary.frobenius} Let~$R$ be a quasi-Frobenius ring. Then $\soc^{\ast}({}_R R)$ embeds into ${}_R (R /\! \rad R)$ if and only if $\soc^{\ast}(R_R)$ embeds into $(R /\! \rad R)_R$. \end{prop}
\begin{proof} Let $e_1, \dots, e_m$ be a basic set of idempotents for the ring~$R$. First we observe that $\soc(R e_i)$ is finite if and only if $\tp(e_i R)$ is finite. Indeed, since~$R$ is quasi-Frobenius we have $\Hom(\soc(R e_i), R) \cong \tp(e_i R)$ and $\Hom(\tp(e_i R), R) \cong \soc(R e_i)$ (see~\cite[Cor.~16.6]{lam} or~\cite[Cor.~2.5]{wood1}). Furthermore, if~$T$ is any finite simple module, then $\Hom(T, R)$ is finite, since every homomorphism $T \to R$ maps into the finite set $\soc^{\ast}(R)$.
Next it is easy to see that $\tp(R e_i) \cong S \overline e_i$ is finite if and only if $\overline e_i S \cong \tp(e_i R)$ is finite; in fact they are isomorphic to the standard column and row modules of the same matrix ring in the Artin-Wedderburn decomposition of $S \defeq R /\! \rad R$. This shows that $\soc(R e_i)$, $\tp(R e_i)$, $\soc(e_i R)$, $\tp(e_i R)$ are, for each~$i$, simultaneously either finite or infinite.
Now denoting by~$F$ the set of all~$i$ such that $\soc(R e_i)$ is finite, we see from Theorem~\ref{theorem:nakayama} that the Nakayama permutation~$\pi$ preserves the set~$F$, and from the subsequent discussion that $\soc^{\ast}({}_R R)$ embeds into $\tp({}_R R)$ if and only if $\mu_{\pi(i)} = \mu_i$ for all $i \in F$, which holds if and only if $\soc^{\ast}(R_R)$ embeds into $\tp(R_R)$. \end{proof}
In view of Proposition~\ref{proposition:finitary.frobenius}, we record the following definition.
\begin{definition}\label{def:finitely.frobenius} A ring~$R$ is called \emph{finitarily Frobenius} if it is quasi-Frobenius and there holds one (thus each) of the following equivalent conditions: \begin{enumerate}[label=(\roman*)] \item $\soc^*({}_R R)$ embeds into ${}_R (R /\! \rad R)$. \item $\soc^{\ast}(R_R)$ embeds into $(R /\! \rad R)_R$. \end{enumerate} \end{definition}
The following notion will also be relevant for the MacWilliams extension property. A left module ${}_R M$ is said to be \emph{pseudo-injective} if for every submodule~$N$ of~$M$ and any injective homomorphism $f \colon N \to M$ there is an endomorphism $g \colon M \to M$ with $g|_N = f$. Accordingly, a ring~$R$ is called \emph{left (right) pseudo-injective} if for every left (right) ideal~$I$, each injective homomorphism $I \to R$ is given by a right (left) multiplication by an element of~$R$. Dinh and L\'opez-Permouth have shown~\cite[Prop.~3.2]{DinhLopezA} that a finite ring is left pseudo-injective if and only if it satisfies the MacWilliams property for codes of length one.
Clearly, every injective module is pseudo-injective and every quasi-Frobenius ring is left and right pseudo-injective. There is considerable interest in such weaker forms of self-injectivity, one motivation being to discuss more general assumptions on rings that imply quasi-Frobenius. In particular, a ring~$R$ is termed \emph{left (right) min-injective} if for every minimal left (right) ideal~$I$ each homomorphism $I \to R$ is given by right (left) multiplication; see, e.g., \cite{harada, nicholson-yousif}. Note that left (right) pseudo-injectivity implies left (right) min-injectivity for rings. Pseudo-injective modules gained attention more recently, as they are characterized as modules that are invariant under automorphisms of the injective envelope~\cite{ErSinghSrivastava}.
Let us record the following result.
\begin{prop}\label{prop:quasi-Frobenius.pseudo-injective} An Artinian ring is quasi-Frobenius if and only if it is both left and right pseudo-injective. \end{prop}
\begin{proof} Since pseudo-injectivity implies min-injectivity, the result is a direct consequence of~\cite[Thm.~13]{harada}; see also~\cite[Thm.~3.12]{iovanov}. \end{proof}
The following useful observation is implicit in~\cite[Cor.~3.5]{iovanov}, and for the reader's convenience we include a direct argument based on work of Bass~\cite{bass}, along the lines of \cite[Prop.~5.1]{wood1} and \cite[Lem.~3.3]{iovanov}.
\begin{lem}\label{lemma:bass} Let~$R$ be a left or right Artinian ring which is left pseudo-injective. If~${}_R M$ is a left $R$-module and $g, h \colon M \to R$ are homomorphisms such that $\ker g = \ker h$, then there exists a unit $u \in R$ such that $h(x) = g(x) u$ for all $x \in M$. \end{lem}
\begin{proof} Consider the induced injective maps $\widetilde g, \smash{\widetilde h} \colon M / N \to R$, where $N \defeq \ker g = \ker h$. Letting $I \defeq \mathrm{im}\, \widetilde g$ we have an injective homomorphism $f \defeq \smash{\widetilde h} \circ \widetilde g^{-1} \colon I \to R$, thus by pseudo-injectivity there exists $a \in R$ with $f(z) = z a$ for all $z \in I$, which implies $h(x) = g(x) a$ for all $x \in M$. Similarly, we find $b \in R$ such that $g(x) = h(x) b$ for all $x \in M$. Now since $R = a b R + (1 - a b) R \subseteq a R + (1 - a b) R$ and $R /\! \rad R$ is semisimple, it follows from~\cite[Lem.~6.4]{bass} that there is a unit $u \in R$ such that $u = a + (1 - a b) r$ for some $r \in R$. Thus $g(x) u = g(x) a + g(x) (1 - a b) r = g(x) a = h(x)$ for all $x \in M$, as desired. \end{proof}
\section{Torsion-free characters}\label{section:characters}
In this section we show that any Frobenius ring admits a left (resp., right) torsion-free character and that, similarly, every finitarily Frobenius ring admits a finitarily left (resp., right) torsion-free character, cf.~Definition~\ref{definition:torsion-free}. Let us start with a fairly general setting.
\begin{definition}\label{definition:torsion-free} Let~$R$ be a ring and let~$E$ be an abelian group. A homomorphism $\chi \colon {R \to E}$ is called \emph{left torsion-free} (resp., \emph{right torsion-free}) if the subgroup $\ker \chi$ contains no nonzero left (resp., right) ideals. The homomorphism~$\chi$ is called \emph{torsion-free} if it is both left- and right torsion-free. Furthermore, $\chi \colon {R \to E}$ is said to be \emph{finitarily left torsion-free} (resp., \emph{finitarily right torsion-free}) if the subgroup $\ker \chi$ contains no nonzero finite left (resp., right) ideals; and~$\chi$ is called \emph{finitarily torsion-free} if it is both finitarily left- and right torsion-free. \end{definition}
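To see Definition~\ref{definition:torsion-free} in the smallest interesting case, the following Python sketch (an illustration of ours, not part of the text) enumerates the characters $\chi_k(x) = e^{2\pi i k x / 4}$ of $\mathbb Z / 4$ and tests which are torsion-free. The kernel of $\chi_k$ contains the unique minimal ideal $\{0, 2\}$ exactly when $k$ is even, so only $\chi_1$ and $\chi_3$ qualify.

```python
import cmath

def character(k, n=4):
    """The additive character chi_k of Z/n."""
    return lambda x: cmath.exp(2j * cmath.pi * k * x / n)

def is_torsion_free(k, n=4):
    # commutative case: chi is torsion-free iff ker(chi) contains no
    # nonzero ideal, i.e. every nonzero x has some r with chi(r*x) != 1
    chi = character(k, n)
    for x in range(1, n):
        if all(abs(chi(r * x % n) - 1) < 1e-9 for r in range(n)):
            return False
    return True

assert [k for k in range(4) if is_torsion_free(k)] == [1, 3]
```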
Our construction of torsion-free characters on Frobenius rings depends upon the celebrated Artin-Wedderburn theorem. We start with the case of division rings.
\begin{lem}\label{lemma:torsion-free.division} Every division ring~$D$ admits a torsion-free homomorphism into $\mathbb Q / \mathbb Z$. \end{lem}
\begin{proof} The $\mathbb Z$-module $\mathbb Q / \mathbb Z$ has the cogenerator property, i.e., for any abelian group~$X$ and every nonzero $x \in X$ there is a homomorphism $f \colon X \to \mathbb Q / \mathbb Z$ with $f(x) \ne 0$ (cf.~\cite[Lem.~4.7]{lam}). In particular, there exists a nonzero homomorphism $\chi \colon D \to \mathbb Q / \mathbb Z$, which clearly must be torsion-free as~$D$ does not admit any non-trivial left or right ideals. \end{proof}
Given a ring $R$ and some integer $n \geq 1$, we consider the \emph{matrix ring} $\mathrm{M}_{n}(R) \defeq R^{n\times n}$ and the \emph{trace map} $\mathrm{tr} \colon \mathrm{M}_{n}(R) \to R, \, m \mapsto \sum_{i=1}^{n} m_{ii}$, which is an $R$-bimodule homomorphism.
\begin{lem}\label{lemma:torsion-free.matrix} Let $R$ be a ring and let~$E$ be an abelian group. If a homomorphism $\chi \colon R \to E$ is left torsion-free (resp., right torsion-free), then so is $\chi \circ \mathrm{tr} \colon \mathrm{M}_{n}(R) \to E$ for every $n \geq 1$. \end{lem}
\begin{proof} Suppose that $\chi \colon R \to E$ is left torsion-free and let~$I$ be a left ideal in~$M_n(R)$ contained in $\ker( \chi \circ \mathrm{tr})$. Since the trace map is $R$-linear, it follows that~$\mathrm{tr}(I)$ is a left ideal in~$R$ contained in $\ker \chi$, and therefore $\mathrm{tr}(I) = 0$. We claim that $I = 0$. Let $a = (a_{i j}) \in I$, and denoting by $e^{i j} \in \mathrm{M}_n(R)$ the elementary matrix with $(e^{i j})_{i j} = 1$ and $(e^{i j})_{k \ell} = 0$ if $(k, \ell) \ne (i, j)$, we have $\mathrm{tr} ( e^{i j} a ) = \mathrm{tr} ( e^{i j} \sum_{k, \ell} a_{k \ell} e^{k \ell} ) = \mathrm{tr} ( \sum_{\ell} a_{j \ell} e^{i \ell} ) = a_{j i}$. Since $e^{i j} a \in I$ for all $i, j$ and $\mathrm{tr}(I) = 0$, we conclude $a = 0$ as desired. The proof for the right torsion-free case is analogous. \end{proof}
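The argument of Lemma~\ref{lemma:torsion-free.matrix} can be checked by brute force in the smallest case $\mathrm{M}_2(\mathbb F_2)$. The Python sketch below (illustrative only) verifies that for every nonzero matrix $a$ there is some $r$ with $\mathrm{tr}(r a) \ne 0$, so the kernel of $\chi \circ \mathrm{tr}$, with $\chi(x) = (-1)^x$ the nontrivial character of $\mathbb F_2$, contains no nonzero left ideal.

```python
from itertools import product

def matmul(r, a):
    """Product of 2x2 matrices over F_2."""
    return tuple(tuple(sum(r[i][k] * a[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

def tr(m):
    """Trace of a 2x2 matrix over F_2."""
    return (m[0][0] + m[1][1]) % 2

matrices = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]
zero = ((0, 0), (0, 0))
# for every nonzero a, some left multiple r*a has nonzero trace,
# matching the elementary-matrix computation in the proof above
for a in matrices:
    if a != zero:
        assert any(tr(matmul(r, a)) == 1 for r in matrices)
```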
\begin{lem}\label{lemma:torsion-free.product} Let $R_{1},\ldots,R_{n}$ be rings and let $E$ be an abelian group. For any left torsion-free (resp., right torsion-free) homomorphisms $\chi_{i} \colon R_{i} \to E$ $(i \in \{ 1,\ldots,n \})$, the homomorphism \begin{displaymath}
R_{1} \times \ldots \times R_{n} \to E, \quad (r_{1}, \ldots, r_{n}) \mapsto \textstyle\sum\limits_{i=1}^{n} \chi_{i}(r_{i})
\end{displaymath} is left torsion-free (resp., right torsion-free), too. \end{lem}
\begin{proof} Let $R \defeq R_1 \times \ldots \times R_n$ and let $\chi \colon R \to E$, $(r_1, \dots, r_n) \mapsto \sum_{i=1}^n \chi_i(r_i)$. Suppose that the $\chi_i \colon R_i \to E$ are left torsion-free and let~$I$ be a left ideal in~$R$ contained in $\ker \chi$. For $i \in \{ 1, \ldots, n \}$ denote by $\pi_i \colon R \to R_i$ the projection and let $e^i \in R$ be the central idempotent with $(e^i)_i = 1$ and $(e^i)_j = 0$ for $j \ne i$. Then $\pi_i(e^i I)$ is a left ideal in~$R_i$ contained in $\ker \chi_i$. It follows that $\pi_i(e^i I) = 0$ and thus $e^i I = 0$ for all $i \in \{ 1, \dots, n \}$, which implies $I = 0$ as desired. The proof for the right torsion-free case is analogous. \end{proof}
\begin{cor}\label{corollary:torsion-free.semisimple} Every semisimple ring admits a torsion-free homomorphism into $\mathbb Q / \mathbb Z$. \end{cor}
\begin{proof} Thanks to the famous Artin-Wedderburn theorem, any semisimple ring is isomorphic to $M_{n_1}(D_1) \times \ldots \times M_{n_m}(D_m)$ for suitable positive integers $n_1, \ldots, n_m$ and division rings $D_1, \ldots, D_m$. Thus we may apply Lemma~\ref{lemma:torsion-free.division}, Lemma~\ref{lemma:torsion-free.matrix} and Lemma~\ref{lemma:torsion-free.product}. \end{proof}
We are now ready to establish the first main result of the present note. Our proof utilizes Pontryagin duality for modules, which we recall in the appendix.
\begin{thm}\label{theorem:frobenius.characters} Let $R$ be a left Artinian ring. The following are equivalent: \begin{enumerate}
\item[$(1)$] $\soc^{\ast}({}_R R)$ embeds into ${}_R (R /\! \rad R)$,
\item[$(2)$] $R$ admits a finitarily left torsion-free homomorphism into $\mathbb Q / \mathbb Z$,
\item[$(3)$] $R$ admits a finitarily left torsion-free character.
\end{enumerate} \end{thm}
\begin{proof} $(1) \!\Longrightarrow\! (2)$. Since $S \defeq R /\! \rad R$ is semisimple there is by Corollary~\ref{corollary:torsion-free.semisimple} a torsion-free homomorphism $\chi \colon S \to \mathbb Q / \mathbb Z$. By hypothesis we have an embedding $\phi \colon \soc^{\ast}({}_R R) \to {}_R S$. Now since $\mathbb Q / \mathbb Z$ is divisible, i.e., injective as a $\mathbb Z$-module, there exists a homomorphism $\overline \chi \colon R \to \mathbb Q / \mathbb Z$ such that $\overline \chi \vert_{\soc^{\ast}({}_R R)} = \chi \circ \phi$. We claim that~$\overline \chi$ is finitarily left torsion-free. To see this, let~$I$ be any nonzero finite left ideal of~$R$. Then we find a minimal (nonzero) left ideal~$I_0$ of~$R$ such that $I_0 \subseteq I$. As~$I_0$ is finite, $I_0 \subseteq \soc^{\ast}({}_R R)$. It follows that $\phi(I_0)$ is a nonzero submodule of~${}_R S$, i.e., $\phi(I_0)$ is a nonzero left ideal of the ring~$S$. Since~$\chi$ is left torsion-free, we have that $\phi(I_0) \nsubseteq \ker \chi$ and thus $I_0 \nsubseteq \ker (\chi \circ \phi)$. By choice of~$\overline \chi$ and~$I_0$, this implies that $I \nsubseteq \ker \overline \chi$. This shows that $\ker \overline \chi$ contains no nonzero finite left ideal.
$(2) \!\Longrightarrow\! (3)$. Since $\mathbb T \cong \mathbb R / \mathbb Z$, this is obvious.
$(3) \!\Longrightarrow\! (1)$. Let~$R$ be left Artinian. By applying the Artin-Wedderburn theorem we find a finite semisimple ring~$E$ and a semisimple ring~$U$ without non-trivial finite left modules, together with a surjective homomorphism $h \colon R \to E \times U$ with $\ker h = \rad R$. Consider the projection $p \colon E \times U \to E$ and let $h_E \defeq p \circ h$, so that $K \defeq \ker h_E = h^{-1}(U)$. Since $\ker h = \rad R$ and~$U$ has no non-trivial finite left modules, it is easy to see that $K T = 0$ for every minimal finite left ideal~$T$ of~$R$. We conclude that $K \soc^{\ast}(R) = 0$.
Now suppose that $\chi \colon R \to \mathbb T$ is a finitarily left torsion-free homomorphism. For each $a \in A \defeq \soc^{\ast}(R)$ we have just shown that $K \subseteq \ker(a \chi)$, whence there exists a unique $a . \chi \in \smash{\widehat E}$ such that $a . \chi \circ h_E = a \chi$. Moreover, viewing~$E$ as a right $R$-module, the homomorphism $h_E \colon R_R \to E_R$ induces a homomorphism $\phi \colon {}_R A \to {}_R \smash{\widehat E}$, $a \mapsto a . \chi$. Furthermore, as~$\chi$ is finitarily left torsion-free we deduce that~$\phi$ is injective: if $a \in A \setminus \{ 0 \}$, then $R a \nsubseteq \ker \chi$, i.e., there exists $r \in R$ such that $1 \ne \chi(r a) = (a . \chi) (h_E(r))$, wherefore $a . \chi \ne 1$. This shows that ${}_R A$ embeds into ${}_R \smash{\widehat E}$. Finally, since~$E$ is finite and semisimple, thus Frobenius, we have that ${}_E \smash{\widehat E} \cong {}_E E$ by work of Wood~\cite[Thm.~3.10]{wood1}, and hence ${}_R \smash{\widehat E} \cong {}_R E$. Thus the composition of embeddings ${}_R A \to {}_R \smash{\widehat E} \to {}_E E \to {}_R (R /\! \rad R)$ provides an embedding of ${}_R A = \soc^{\ast}({}_R R)$ into ${}_R (R /\! \rad R)$, as desired. \end{proof}
Let us also add a direct argument for the implication $(3) \!\Longrightarrow\! (2)$ of Theorem~\ref{theorem:frobenius.characters}. Suppose for a left Artinian ring~$R$ a finitarily left torsion-free homomorphism $\chi \colon R \to \mathbb{R}/\mathbb Z$ is given. Since $F \defeq \soc^{\ast}({}_R R)$ is finite, $\chi (F)$ is a finite subgroup of $\mathbb R / \mathbb Z$, thus contained in the torsion subgroup $\mathbb Q / \mathbb Z$. By divisibility of $\mathbb Q/\mathbb Z$, there exists a homomorphism $\chi^{\ast} \colon R \to \mathbb Q / \mathbb Z$ such that $\chi^{\ast}|_F = \chi|_F$. In particular, $F \cap (\ker \chi^{\ast}) = F \cap (\ker \chi)$. In turn, $\chi^{\ast}$ must be finitarily left torsion-free, as every finite left ideal of~$R$ contains a minimal left ideal.
By the method of proof, we have the following.
\begin{cor}\label{corollary:frobenius.characters} Let~$R$ be a left Artinian ring. If $\soc({}_R R) \cong {}_R (R /\! \rad R)$ (in particular, if $R$ is Frobenius), then~$R$ admits a left torsion-free homomorphism into $\mathbb Q / \mathbb Z$. \end{cor}
\section{Dual modules and almost monotheticity}\label{section:almost.monothetic.modules}
This section offers a topological perspective on (finitarily) Frobenius rings, in terms of compact modules arising via Pontryagin duality. Let us start off with a simple characterization of torsion freeness of characters. For notation, see the appendix.
\begin{lem}\label{lemma:torsion-free.density} Let~$R$ be a ring and let $\chi \in \smash{\widehat R}$. The following hold. \begin{enumerate}
\item[$(1)$] The character $\chi$ is left torsion-free (resp., right torsion-free) if and only if $\chi R$ (resp., $R\chi$) is dense in $\smash{\widehat R}$.
\item[$(2)$] The character $\chi$ is finitarily left torsion-free (resp., finitarily right torsion-free) if and only if $\chi$ is not contained in a finite-index closed proper submodule of $\smash{\widehat R_R}$ (resp., $\smash{{}_R \widehat R}$).
\end{enumerate} \end{lem}
\begin{proof} Consider the closed submodule $B \defeq \overline{\chi R} \leq \widehat{R}_R$ and the corresponding left ideal \begin{equation*}\tag{$\ast$}\label{annihilator}
\Delta (B) = \Delta (\chi R) = \{ x \in R \mid Rx \subseteq \ker \chi \} .
\end{equation*} Then $\chi$ is left torsion-free if and only if $\Delta (B) = \{ 0 \}$, which, by Proposition~\ref{proposition:pontryagin}, is the case if and only if $B = \smash{\widehat{R}}$. This proves~(1). In order to show~(2), we infer from~\eqref{annihilator} that $\chi$ is finitarily left torsion-free if and only if $\Delta (B)$ has no nonzero finite left sub-ideals, which, thanks to Proposition~\ref{proposition:pontryagin} and Lemma~\ref{lemma:pontryagin}, just means that $B$ is not contained in any finite-index proper closed submodules of $\smash{\widehat{R}}_R$. Of course, the latter is equivalent to $\chi$ not being contained in any finite-index proper closed submodules of $\smash{\widehat{R}}_R$, which readily completes the argument. The other cases are proven analogously. \end{proof}
We continue with a useful abstract concept for compact modules. Given a ring~$R$, a \emph{compact right $R$-module} is a compact abelian group~$X$ together with a \emph{continuous} right $R$-module structure, i.e., such that $X \to X$, $x \mapsto x r$ is continuous for every $r \in R$.
\begin{definition} Let $R$ be a ring. A compact right $R$-module $X_R$ is said to be \emph{monothetic} if there exists $x \in X$ such that $\overline{xR} = X$. A compact right $R$-module $X_R$ is called \emph{almost monothetic} if every finite cover of $X_R$ by closed submodules contains $X$ itself, i.e., for every finite set $\mathcal{M}$ of closed submodules of $X_R$ we have \begin{displaymath}
X = \bigcup \mathcal{M} \quad \Longrightarrow \quad X \in \mathcal{M} \, .
\end{displaymath} \end{definition}
It is easily seen that both monothetic and almost monothetic compact modules provide a generalization of cyclic finite modules, in the sense that a finite right $R$-module $X_R$ is cyclic if and only if $X_R$ is monothetic, if and only if $X_R$ is almost monothetic. For general compact modules, monotheticity implies almost monotheticity.
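As a concrete finite illustration of these notions (our own example, not from the text): viewing finite abelian groups as $\mathbb{Z}$-modules, $\mathbb{Z}_4$ is monothetic since it is cyclic, whereas the Klein four-group $\mathbb{Z}_2 \times \mathbb{Z}_2$ is not monothetic and, being covered by its three proper subgroups, is not almost monothetic either. A quick check:

```python
from itertools import product

def subgroup_generated(x, add, zero):
    """Closure of {x} under the group operation (finite abelian group)."""
    seen = {zero}
    cur = x
    while cur not in seen:
        seen.add(cur)
        cur = add(cur, x)
    return seen

# Z_4: generated by 1, so monothetic (= cyclic).
Z4 = set(range(4))
add4 = lambda a, b: (a + b) % 4
assert subgroup_generated(1, add4, 0) == Z4

# Z_2 x Z_2: no single element generates it ...
K = set(product(range(2), repeat=2))
addK = lambda a, b: ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)
assert all(subgroup_generated(x, addK, (0, 0)) != K for x in K)

# ... and its three proper subgroups of order 2 cover it,
# so it is not almost monothetic either.
H1 = {(0, 0), (1, 0)}
H2 = {(0, 0), (0, 1)}
H3 = {(0, 0), (1, 1)}
assert H1 | H2 | H3 == K
```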
The term monotheticity was introduced in topological group theory by van~Dantzig~\cite{vanDantzig}: a topological group is said to be \emph{monothetic} if it contains a dense cyclic subgroup (which clearly implies that the group is abelian). For more details on such groups, the reader is referred to~\cite{HalmosSamelson, TopologicalGroups}. Considering compact abelian groups as compact right $\mathbb{Z}$-modules, our definition of monotheticity above naturally extends van Dantzig's concept to the realm of compact modules over arbitrary rings. Almost monotheticity appears to be the right generalization thereof in the context of MacWilliams' extension property, as will be substantiated by Theorem~\ref{theorem:almost.monothetic} and~Corollary~\ref{corollary:summary}.
Utilizing the following combinatorial Lemma~\ref{lemma:passman.gottlieb}, we will provide a simple characterization of almost monothetic compact modules in Proposition~\ref{proposition:almost.monothetic}.
\begin{lem}[{\cite[Lem.~5.2]{passman}; see also~\cite[Thm.~18]{gottlieb}}]\label{lemma:passman.gottlieb} Let $G$ be an abelian group. If $\mathcal{H}$ is a finite cover of $G$ by subgroups such that $G \ne \bigcup \mathcal{H} \setminus \{ H \}$ for every $H \in \mathcal{H}$, then $G / \bigcap \mathcal{H}$ is finite. \end{lem}
\begin{prop}\label{proposition:almost.monothetic} Let $R$ be a ring. A compact right $R$-module $X_R$ is almost monothetic if and only if every finite cover of $X_R$ by finite-index closed submodules contains $X$ itself. \end{prop}
\begin{proof} The implication ($\Longrightarrow$) is obvious. In order to prove ($\Longleftarrow$), let $\mathcal{M}$ be a finite set of closed submodules of $X_R$ with $X = \bigcup \mathcal{M}$. We wish to show that $X \in \mathcal{M}$. Without loss of generality, we may assume that $X \ne \bigcup \mathcal{M}\setminus \{ M \}$ for every $M \in \mathcal{M}$. Thanks to Lemma~\ref{lemma:passman.gottlieb}, each member of $\mathcal{M}$ then has finite index in $X_R$, whence $X \in \mathcal{M}$ by our hypothesis. \end{proof}
\begin{cor}\label{corollary:almost.monothetic} Let $R$ be a ring and let $X_R$ be a compact right $R$-module. If $X_R$ is not covered by its finite-index closed proper submodules, then $X_R$ is almost monothetic. \end{cor}
We now return to Pontryagin duals of Artinian rings. The subsequent result characterizes the finitarily Frobenius rings in topological terms, in turn offering an approach to the proof of the general MacWilliams theorem.
\begin{thm}\label{theorem:almost.monothetic} Let $R$ be a left Artinian ring. Then $\soc^{\ast}({}_R R)$ embeds into ${}_R (R /\! \rad R)$ if and only if $\smash{\widehat R}_R$ is almost monothetic. \end{thm}
\begin{proof} ($\Longrightarrow$) This follows from Theorem~\ref{theorem:frobenius.characters} and Lemma~\ref{lemma:torsion-free.density}(2) along with Corollary~\ref{corollary:almost.monothetic}.
($\Longleftarrow$) Suppose that $\smash{\widehat R}_R$ is almost monothetic. Since $R$ is left Artinian, the set $\mathcal L$ of all finite simple left ideals of $R$ is finite. By Proposition~\ref{proposition:pontryagin} and Lemma~\ref{lemma:pontryagin}, the finite set $\mathcal{M} \defeq \{ \Gamma (I) \mid I \in \mathcal{L} \}$ consists of closed proper submodules of $\smash{\widehat R}_R$. As $\smash{\widehat R}_R$ is almost monothetic, there exists $\chi \in \smash{\widehat R}$ with $\chi \notin \bigcup \mathcal{M}$, i.e., $I \nsubseteq \ker \chi$ for every $I \in \mathcal{L}$. Since every nonzero finite left ideal of $R$ contains a member of $\mathcal{L}$, it follows that $\chi$ is finitarily left torsion-free. Hence, $\soc^{\ast}({}_RR)$ embeds into ${}_R (R /\! \rad R)$ by Theorem~\ref{theorem:frobenius.characters}. \end{proof}
The following lemma is the main reason for our interest in almost monothetic modules.
\begin{lem}\label{lemma:almost.monothetic} Let $R$ be a ring, $X_R$ and~$Y_R$ be compact right $R$-modules, where~$X_{R}$ is almost monothetic. Let $f_1, \dots, f_n, g_1, \dots, g_n \colon X_R \to Y_R$ be continuous homomorphisms such that \begin{displaymath}
\forall \alpha \in \widehat Y \colon \qquad \sum_{i=1}^n \int \alpha (f_{i}(x)) \, d \mu_X(x) = \sum_{i=1}^n \int \alpha (g_{i}(x)) \, d \mu_X(x) \,.
\end{displaymath} Then, for each $j \in \{1, \dots, n \}$, there exists $k \in \{ 1, \dots, n \}$ such that $\ker \widehat {g_k} \subseteq \ker \smash{\widehat {f_j}}$. \end{lem}
\begin{proof} Since by Theorem~\ref{theorem:bohr} the linear span of $\smash{\widehat Y}$ is dense in $C(Y)$, and by continuity of the map $C(Y) \to \mathbb C$, $h \mapsto \sum_{i=1}^n \int h (f_{i}(x) - g_{i}(x)) \, d \mu_X(x)$, our hypothesis implies that \begin{displaymath}
\forall h \in C(Y) \colon \qquad \sum_{i=1}^n \int h (f_{i}(x) - g_{i}(x)) \, d \mu_X(x) = 0 \,.
\end{displaymath} Let $j \in \{ 1, \dots, n \}$. Then $f_{j}(X)$ is contained in $B \defeq \bigcup_{k=1}^{n} g_k(X)$: otherwise, assuming that $f_j(x) \notin B$ for some $x \in X$ and noting that $B$ is closed in $Y$, by Urysohn's lemma we find $h \in C(Y)$ with $h \ge 0$ such that $h|_B \equiv 0$ and $h(f_j(x)) > 0$, which implies that \begin{displaymath}
0 = \sum_{i=1}^n \int h (f_{i}(x) - g_{i}(x)) \, d \mu_X(x) \ge \int h (f_{j}(x)) \, d \mu_X(x) > 0
\end{displaymath} and thus gives a contradiction. Hence, $X = \bigcup_{k=1}^{n} f^{-1}_{j}(g_{k}(X))$. Since $X$ is almost monothetic, there exists $k \in \{ 1,\ldots,n \}$ such that $X = f^{-1}_{j}(g_{k}(X))$, i.e., $f_{j}(X) \subseteq g_{k}(X)$. We show that $\ker \widehat {g_k} \subseteq \ker \smash{\widehat {f_j}}$. To this end, let $\kappa \in \smash{\widehat Y}$ with $\kappa \in \ker \widehat {g_k}$, i.e., $\kappa \circ g_k = 1$. Then \begin{displaymath}
(\kappa \circ f_j)(X) = \kappa(f_j(X)) \subseteq \kappa(g_k(X)) = (\kappa \circ g_k)(X) = 1 \,,
\end{displaymath} so that $\kappa \in \ker \smash{\widehat {f_j}}$ as desired. \end{proof}
We finish this section with the observation that, by the method of proof of Theorem~\ref{theorem:almost.monothetic}, we have the following.
\begin{cor} Let $R$ be a left Artinian ring. If $\soc({}_RR) \cong {}_R (R / \rad R)$ (in particular, if $R$ is Frobenius), then $\smash{\widehat R}_R$ is monothetic. \end{cor}
\begin{proof} This is an immediate consequence of Corollary~\ref{corollary:frobenius.characters} and Lemma~\ref{lemma:torsion-free.density}(1). \end{proof}
\section{MacWilliams' extension theorem for the Hamming weight}\label{section:macwilliams}
In this section we prove MacWilliams' extension theorem for the Hamming weight on general Frobenius rings. Let us start off with some basic terminology. Let~$G$ be an abelian group. By a \emph{weight} on~$G$ we mean any function from~$G$ to~$\mathbb C$. Given a weight $w \colon G \to \mathbb C$ and any positive integer~$n$, we denote $w(x) \defeq \sum_{i=1}^{n} w(x_{i})$ for $x \in G^{n}$. The \emph{Hamming weight} $w_{\mathrm{H}} \colon G \to \mathbb{C}$ is defined by \begin{displaymath}
w_{\mathrm{H}}(x) \defeq \begin{cases}
0 & \text{if } x = 0, \\
1 & \text{otherwise}
\end{cases} \qquad (x \in G) .
\end{displaymath} The following well-known general character-theoretic observation, noted in~\cite[p.~572, Eq.~(1)]{iovanov}, connects the Hamming weight with the Haar integration on Pontryagin duals. For notation, see the appendix.
\begin{lem}\label{lemma:weights} Let $G$ be an abelian group. For every $x \in G$, \begin{displaymath}
w_{\mathrm{H}}(x) = 1 - \int \gamma(x) \, d\mu_{\widehat{G}}(\gamma) .
\end{displaymath} \end{lem}
\begin{proof} Noting that $x \ne 0$ if and only if $\eta_G(x) \ne 1$ by Theorem~\ref{theorem:pontryagin}, the result is immediate from Lemma~\ref{lemma:characters.are.ap} applied to the group $\smash{\widehat G}$. \end{proof}
\begin{cor}\label{corollary:weights} Let $G$ be an abelian group and $n \geq 1$. For every $x \in G^{n}$, \begin{displaymath}
w_{\mathrm{H}}(x) = n - \sum_{i=1}^{n} \int \gamma(x_{i}) \, d\mu_{\widehat{G}}(\gamma) .
\end{displaymath} \end{cor}
We proceed to rings. Our main focus will be on the MacWilliams property. Let $R$ be a ring and consider its \emph{group of units} \begin{displaymath}
U(R) \defeq \{ u \in R \mid \exists v \in R \colon \, uv = vu = 1 \} .
\end{displaymath} Given $n \geq 1$, $\sigma \in S_{n}$ and $u \in U(R)^{n}$, we consider the module automorphisms \begin{gather*}
\Phi_{\sigma,u} \colon {}_R R^{n} \to {}_R R^{n}, \quad x \mapsto (x_{\sigma 1}u_{1},\ldots,x_{\sigma n}u_{n}) , \\
\Psi_{\sigma,u} \colon R^{n}_R \to R^{n}_R, \quad x \mapsto (u_{1}x_{\sigma 1},\ldots,u_{n}x_{\sigma n}) ,
\end{gather*} and note that $w_{\mathrm{H}}(\Phi_{\sigma,u}(x)) = w_{\mathrm{H}}(\Psi_{\sigma,u}(x)) = w_{\mathrm{H}}(x)$ for all $x \in R^{n}$.
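Over $R = \mathbb{Z}_6$, for instance, this invariance of the Hamming weight under monomial maps can be verified exhaustively (a sketch; the particular permutation and units below are arbitrary choices):

```python
from itertools import product

m, n = 6, 3
units = [u for u in range(m) if any((u * v) % m == 1 for v in range(m))]  # units of Z_6: 1, 5

def w_H(x):
    """Hamming weight of a tuple over Z_m."""
    return sum(1 for xi in x if xi % m != 0)

sigma = [2, 0, 1]                    # a permutation of the coordinates
u = (units[0], units[1], units[0])   # a tuple of units of Z_6

def Phi(x):
    # x -> (x_{sigma(1)} u_1, ..., x_{sigma(n)} u_n), cf. the definition above
    return tuple((x[sigma[i]] * u[i]) % m for i in range(n))

# Permuting coordinates and scaling by units never changes which entries are zero.
for x in product(range(m), repeat=n):
    assert w_H(Phi(x)) == w_H(x)
```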
\begin{definition} A ring~$R$ is called \emph{left MacWilliams} if, for every integer $n \ge 1$ and any homomorphism $\phi \colon {}_R M \to {}_R N$ between submodules $M, N$ of ${}_R R^{n}$ with $w_{\mathrm{H}} (\phi(x)) = w_{\mathrm{H}}(x)$ for all $x \in M$, there exist $\sigma \in S_{n}$ and $u \in U(R)^{n}$ with $\phi = \Phi_{\sigma,u}\vert_{M}^{N}$. Analogously, a ring~$R$ will be called \emph{right MacWilliams} if, for every integer $n \ge 1$ and any homomorphism $\phi \colon M_R \to N_R$ between submodules $M, N$ of $R^{n}_R$ with $w_{\mathrm{H}} (\phi(x)) = w_{\mathrm{H}}(x)$ for all $x \in M$, there exist $\sigma \in S_{n}$ and $u \in U(R)^{n}$ with $\phi = \Psi_{\sigma,u}\vert_{M}^{N}$. \end{definition}
Our goal is to establish a link between the MacWilliams property and the finitary Frobenius property. The next two lemmata together constitute a key observation. The arguments are reminiscent of Iovanov's work~\cite[Sec.~4.1]{iovanov}.
\begin{lem}\label{lemma:main} Let $R$ be a ring such that $\smash{\widehat R}_R$ is almost monothetic. Let $n\geq 1$, let $M$ be a left $R$-module and let $\phi, \psi \colon {}_RM \to {}_RR^{n}$ be homomorphisms with $w_{\mathrm{H}}(\phi(x)) = w_{\mathrm{H}}(\psi(x))$ for all $x \in M$. Then, \begin{displaymath}
\forall j \in \{ 1, \dots, n \} \ \exists k \in \{ 1,\ldots,n \} \colon \qquad \ker \psi_k \subseteq \ker \phi_j .
\end{displaymath} \end{lem}
\begin{proof} In light of Corollary~\ref{corollary:weights}, our assumption means that \begin{displaymath}
\forall x \in M \colon \qquad \sum_{i=1}^n \int \gamma(\phi_i(x)) \, d \mu_{\widehat R}(\gamma) =\sum_{i=1}^n \int \gamma(\psi_i(x)) \, d \mu_{\widehat R}(\gamma) \,,
\end{displaymath} or equivalently, \begin{displaymath}
\forall x \in M \colon \qquad \sum_{i=1}^n \int \eta_M(x)(\widehat{\phi_i}(\gamma)) \, d \mu_{\widehat R}(\gamma) =\sum_{i=1}^n \int \eta_M(x)(\widehat{\psi_i}(\gamma)) \, d \mu_{\widehat R}(\gamma) \,.
\end{displaymath} The result then follows by applying Lemma~\ref{lemma:almost.monothetic} to $X_R = \smash{\widehat R}_R$ and $Y_R = \smash{\widehat M}_R$, together with the fact that $\eta_{M} \colon M \to \smash{\widehat{Y}}$ is an isomorphism by Pontryagin duality (Theorem~\ref{theorem:pontryagin}). \end{proof}
The conclusion of Lemma~\ref{lemma:main} will now be adapted as follows.
\begin{lem}\label{lemma:main.too} Let~$R$ be a ring such that $\smash{\widehat R}_R$ is almost monothetic. Let $n \ge 1$, let~$M$ be a left $R$-module and let $\phi, \psi \colon {}_R M \to {}_R R^{n}$ be homomorphisms such that $w_{\mathrm{H}}(\phi(x)) = w_{\mathrm{H}}(\psi(x))$ for all $x \in M$. Then there exist $j, k \in \{ 1, \dots, n \}$ such that $\ker \phi_j = \ker \psi_k$. \end{lem}
\begin{proof} Let $j_0 \defeq 1$. By Lemma~\ref{lemma:main}, there are $k_0, \ldots, k_{n-1}, j_1, \ldots, j_n \in \{ 1, \dots, n \}$ with \begin{displaymath}
\ker \phi_{j_0} \supseteq \ker \psi_{k_0} \supseteq \ker \phi_{j_1} \supseteq \ker \psi_{k_1} \supseteq \ldots \supseteq \ker \psi_{k_{n-1}} \supseteq \ker \phi_{j_{n}} .
\end{displaymath} Clearly, $j_{k} = j_{\ell}$ for some $k,\ell \in \{ 0,\ldots,n \}$ with $k < \ell$, and thus $\ker \phi_{j_{k}} = \ker \psi_{k_{k}}$. \end{proof}
We are ready to prove the generalized MacWilliams extension theorem.
\begin{prop}\label{proposition:macwilliams} Every left Artinian, left pseudo-injective ring~$R$ such that $\soc^*({}_R R)$ embeds into ${}_R (R /\! \rad R)$ is left MacWilliams. \end{prop}
\begin{proof} By virtue of Theorem~\ref{theorem:almost.monothetic} we have that $\smash{\widehat R}_{R}$ is almost monothetic. Given $n\geq 1$, any left $R$-module $M$ and homomorphisms $\phi,\psi \colon {}_RM \to {}_RR^{n}$ with $w_{\mathrm{H}}(\phi(x)) = w_{\mathrm{H}}(\psi(x))$ for all $x \in M$, we need to show that there exist $\sigma \in S_{n}$ and $u \in U(R)^{n}$ such that $\psi = \Phi_{\sigma,u} \circ \phi$. Our proof proceeds by induction on $n \geq 1$. For the induction base, let $n = 1$. Since $\smash{\widehat{R}_{R}}$ is almost monothetic, Lemma~\ref{lemma:main.too} implies that $\ker \phi = \ker \psi$. Thanks to Lemma~\ref{lemma:bass}, there exists $u \in U(R)$ such that $\psi(x) = \phi(x)u$ for all $x \in M$, i.e., $\psi = \Phi_{\mathrm{id},u} \circ \phi$. For the inductive step, suppose that the statement is true for some $n \geq 1$. As $\smash{\widehat{R}_{R}}$ is almost monothetic, Lemma~\ref{lemma:main.too} implies that there exist $j, k \in \{ 1, \dots, n+1 \}$ such that $\ker \phi_j = \ker \psi_k$. Using Lemma~\ref{lemma:bass} again, we find $v \in U(R)$ with $\psi_k(x) = \phi_j(x) v$ for all $x \in M$. For every $x \in M$, \begin{displaymath}
\sum_{i=1}^{n+1} w_{\mathrm{H}}(\phi_i(x)) = \sum_{i=1}^{n+1} w_{\mathrm{H}}(\psi_i(x))
\end{displaymath} by assumption and $w_{\mathrm{H}}(\phi_j(x)) = w_{\mathrm{H}}(\phi_j(x) v) = w_{\mathrm{H}}(\psi_k(x))$, which implies that \begin{displaymath}
\sum_{i=1, \, i \ne j}^{n+1} w_{\mathrm{H}}(\phi_i(x)) = \sum_{i=1, \, i \ne k}^{n+1} w_{\mathrm{H}}(\psi_i(x)) .
\end{displaymath} Appealing to the induction hypothesis and the fact that $\psi_k(x) = \phi_j(x) v$ for all $x \in M$, we find $\sigma \in S_{n+1}$ and $u \in U(R)^{n+1}$ so that $\psi = \Phi_{\sigma,u} \circ \phi$. \end{proof}
Now we present the main result, which characterizes the Artinian rings satisfying the MacWilliams extension property. In addition to the results of the previous sections, we use a strategy developed by Dinh, L\'opez-Permouth~\cite{DinhLopezB} and Wood~\cite{wood2}.
\begin{thm}\label{theorem:main} A left Artinian ring~$R$ is left MacWilliams if and only if it is left pseudo-injective and $\soc^{\ast}({}_R R)$ embeds into ${}_R (R /\! \rad R)$. \end{thm}
\begin{proof} Every left Artinian, left pseudo-injective ring~$R$ with an embedding of $\soc^*(R)$ into ${}_R (R /\! \rad R)$ is left MacWilliams by Proposition~\ref{proposition:macwilliams}. For the converse, suppose the ring~$R$ to be left Artinian and left MacWilliams. From the extension property for codes of length~$1$ we readily infer that~$R$ is left pseudo-injective. It remains to prove that $\mathrm{soc}^{\ast}({}_{R}R)$ embeds into ${}_{R}(R /\! \rad R)$. To this end, let the Artin-Wedderburn decomposition of $S \defeq R /\! \rad R$ be \[ S \cong \textstyle\bigoplus\limits_{i=1}^m {\mathrm M}_{\mu_i}(D_i) \] for some positive integers $\mu_1, \ldots, \mu_m$ and division rings~$D_i$, so there is a basic set $e_1, \dots, e_m$ of idempotents in~$R$ such that ${}_R S \cong \bigoplus_{i=1}^m (S \overline e_{i})^{\mu_i}$ and each simple left $R$-module is isomorphic to some $S \overline e_i$. Also, there are natural numbers $\nu_1, \ldots, \nu_m$ with $\soc({}_R R) \cong \bigoplus_{i=1}^{m} (S \overline e_{i})^{\nu_i}$. Without loss of generality, we may assume that the finite modules $S \overline e_i$ are precisely $S \overline e_1, \dots, S \overline e_{\ell}$ for some $\ell \le m$. We conclude that $\soc^{\ast}({}_R R)$ embeds into ${}_R S$ if and only if $\nu_i \le \mu_i$ for every $i \in \{ 1, \ldots, \ell \}$. Now assuming for contradiction that $\soc^{\ast}({}_R R)$ does not embed into ${}_R S$, there exists some~$j \in \{ 1, \ldots, \ell \}$ such that $\nu_j > \mu_j$.
As ${}_R S \overline e_j$ is isomorphic to the pull-back to~$R$ of the standard column module over ${\mathrm M}_{\mu_j}(D_j)$, we may assume that $\soc^{\ast}({}_R R)$ contains a matrix module $\smash{A \defeq D_j^{\mu_j \times \nu_j}}$ over ${\mathrm M}_{\mu_j}(D_j)$. Wood's result~\cite[Thm.~4.1]{wood2} states the existence of left submodules $C_+, C_-$ of~$A^n$ for some positive integer~$n$ and a Hamming weight preserving isomorphism $f \colon C_+ \to C_-$ such that $C_+$ has an identically zero component while $C_-$ does not. Of course, $C_+, C_-$ are submodules of~${}_R R^n$ and the isomorphism $f \colon C_+ \to C_-$ cannot be extended to a monomial transformation, contradicting the left MacWilliams property. \end{proof}
We conclude this note with multiple characterizations of the MacWilliams extension property for quasi-Frobenius rings. \pagebreak
\begin{cor}\label{corollary:summary} Let~$R$ be a quasi-Frobenius ring. The following are equivalent: \begin{enumerate}
\item[$(1)$] $R$ is finitarily Frobenius,
\item[$(2)$] $R$ admits a finitarily left torsion-free character,
\item[$(3)$] $\smash{\widehat R}_R$ is almost monothetic,
\item[$(4)$] $R$ is left MacWilliams.
\end{enumerate} Also, each of (2) and (4) may be exchanged by its right version, and (3) by its left version. \end{cor}
\begin{proof} We have $(1) \!\Longleftrightarrow\! (2)$ by Theorem~\ref{theorem:frobenius.characters}, $(1) \!\Longleftrightarrow\! (3)$ is due to Theorem~\ref{theorem:almost.monothetic}, and, noting that~$R$ is pseudo-injective, $(1) \!\Longleftrightarrow\! (4)$ according to Theorem~\ref{theorem:main}. The last statement follows from the symmetry of (1), see Proposition~\ref{proposition:finitary.frobenius}. \end{proof}
Finally, let us state the following characterization of Artinian rings satisfying the MacWilliams property on both sides.
\begin{cor} An Artinian ring is finitarily Frobenius if and only if it is left and right MacWilliams. \end{cor}
\begin{proof} This follows at once from Proposition~\ref{prop:quasi-Frobenius.pseudo-injective} and Theorem~\ref{theorem:main}. \end{proof}
A natural open question now arises in light of the results presented in this paper.
\begin{quest} Suppose that~$R$ is an Artinian ring which is left MacWilliams. Does it follow that~$R$ is quasi-Frobenius, i.e., is the ring~$R$ also right MacWilliams? \end{quest}
<?php
/**
 * Wrapper for the World of Warcraft "/wow/auction/data/{realm}" API endpoint.
 */
class BattleNetAPI_Wow_Auction_Data_Endpoint extends BattleNetAPI_Endpoint {
function __construct( $params ) {
if ( ! isset( $params['realm'] ) ) {
throw new BattleNetAPI_Exception( 'The "/wow/auction/data/" endpoint requires the parameter "realm".' );
}
$this->url = BattleNetAPI::getHost() . '/wow/auction/data/' . rawurlencode( $params['realm'] );
}
}
# Build and install multiple PHP versions via php-build/phpenv for the Travis build environment.
include_recipe "bison"
include_recipe "libreadline"
include_recipe "phpenv"
include_recipe "phpbuild"
phpbuild_path = "#{node.travis_build_environment.home}/.php-build"
phpenv_path = "#{node.travis_build_environment.home}/.phpenv"
node.php.multi.versions.each do |php_version|
phpbuild_build "#{phpenv_path}/versions" do
version php_version
owner node.travis_build_environment.user
group node.travis_build_environment.group
action :create
end
link "#{phpenv_path}/versions/#{php_version}/bin/php-fpm" do
to "#{phpenv_path}/versions/#{php_version}/sbin/php-fpm"
end
end
node.php.multi.aliases.each do |short_version, target_version|
link "#{phpenv_path}/versions/#{short_version}" do
to "#{phpenv_path}/versions/#{target_version}"
end
end
include_recipe "php::extensions"
include_recipe "php::hhvm"
include_recipe "composer"
\section{Introduction}
Over the past decade, there has been an explosion of interest in
complex networks for describing structures and dynamics of complex
systems. Despite differences in their nature, many networks may be
characterized by similar topological properties. For instance, real
networks display highly clustering than expected from classic random
graphs \cite{DJW99}. Also, it has been widely observed that
node-degree distributions of many large networks are heavy tailed
\cite{ASBS00}, e.g., exponential and power-law. To understand how
these phenomena arise, research devoted to evolving networks has
rapidly flourished \cite{BA99,KRL00,DMS00,AB00,FM01,KR01}. The basic
premise is that the network will continue to grow at a constant rate
and new nodes attach to old ones with some possibility. When the
newly added nodes connect with equal probability to nodes already
present in the network, the degree distribution of the nodes of the
resulting network is exponential. Whereas for newcomers connecting
to old ones with linear preference of the node degree, Barab\'{a}si
and Albert (BA) observed a power-law distribution of connectivity
\cite{BA99}.
In the real world, agents, represented by nodes, always age after
growth. For instance, in scientific citation networks there is a
half-life effect: old papers are rarely cited since they are no
longer sufficiently topical (or they are more often referenced
through secondary literature). On the World Wide Web popular web
sites (for example, search engines) will often lose favor to newer
alternatives. To study the effect of this phenomenon on network
evolution, the BA model has been modified by incorporating time
dependence in the network
\cite{DM00,ZWZ03,HS04,HAB07,LR07,CM09,KE02a,KE02b,VA03,WXW05,TL06}.
Dorogovtsev and Mendes studied the case when the connection
probability of the new node with an old one is not only proportional
to the degree $k$ but also to a power of its present age
$\tau^{-\alpha}$ (where $\tau$ is the age of a node) \cite{DM00}.
They found that the network shows scale-free (SF) behavior only in
the region $\alpha < 1$. For $\alpha > 1$, the distribution $P(k)$
is exponential. Yet, the gradual aging model show lower clustering
than realistic networks. On the contrary, Klemm and Egu\'{i}luz
considered evolving networks based on the memory of nodes and
proposed a degree-dependent deactivation network model \cite{KE02a}
which is highly clustered and retains the power-law distribution of
the degree (though it does not consider exponential degree
distributions).
The aim of this paper is to propose a simpler and more fundamental
mechanism to build networks, including SF networks, while retaining
the positive features of the aforesaid models. The mechanism we propose
simulates realistic networks, and can be understood analytically. It
has been shown that in some networks, the popularity (or activity)
of a node is essentially determined by the so-called \lq\lq
fitness\rq\rq \cite{BB01,CCRM02,GL04,SC04} which is intrinsically
related to the role played by each node, such as the innovation of a
scientific paper or the content of a webpage. This allows us to
represent the activities of individuals by intrinsic fitnesses and
suggest a fitness-driven deactivation approach to build structured
networks. Compared with topological information, the intrinsic
fitness provides a more natural and appropriate deactivation
criterion for aging of nodes. We show that depending on the
node-fitness distributions two topologically different networks can
emerge, the connectivity distribution following either an
exponential or a generalized power law. In both cases, the networks
are highly clustered. Irrespective of the fitness distribution, we
observe two scaling laws of the clustering coefficient, $C(k) \sim
k^{-1}$ and $C \sim n^{-1}$, where $k$ is the node degree and $n$
corresponds to the number of active individuals in the network.
Hence, this mechanism offers an explanation for the origin and
ubiquity of such clustering in real networks.
\section{Model}
Rather than connectivity-dependent deactivation dynamics of the
nodes developed in \cite{KE02a,KE02b,VA03,WXW05,TL06}, the present
deactivation model is based on the individual fitnesses. For each
node $i$ a fitness $x_i>0$, the random number drawn from a given
probability distribution function $\rho(x)$, is assigned to measure
its popularity or activity. The deactivation mechanism is
characterized by the transition of a node from the active to the
inactive state interpreted as a collective forgetting of it.
The network starts from an initial seed of $n$ nodes, all active and
fully connected by undirected edges. Then at each time
step the dynamics runs as follows.
(i) Add a new node $i$, which connects to $m$ ($m \le n$) nodes
randomly chosen from the $n$ active ones. By $k^{{\rm in}}$ we
denote the in-degree of a node, i.e., the number of edges pointing
to it. The in-degree of the newcomer is $k_{i}^{\rm{in}}=0$ at
first. Each selected active node $j$ receives exactly one incoming
edge, thereby $k_{j}^{\rm{in}} \rightarrow k_{j}^{\rm{in}} +1$.
Since the out-degree of each node is $m$ always, the total degree of
a node is $k=k^{\rm{in}}+m$.
(ii) Activate the new node and deactivate one (denoted by $j$) of
the $n$ old active nodes with probability
\begin{equation}
\pi(x_j)=\sigma x_{j}^{-1}, \label{deactprob}
\end{equation}
where $\sigma=(\sum_{j\in{\rm \Lambda}}x_{j}^{-1})^{-1}$ is the
normalization factor. The summation runs over the set $\Lambda$ of
the $n$ old active nodes.
During evolution, a node might receive edges while it is active, and
once inactive it will not receive edges any more. Note that the
fitter the individual is, the harder it is for it to be
deactivated. In the case of a citation network,
Eq.~(\ref{deactprob}) means that a famous, highly innovative paper
is less likely to be forgotten.
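Steps (i) and (ii) above can be sketched as a short simulation (our illustration; the parameter values are arbitrary):

```python
import random

def grow_network(N, n, m, fitness_sampler):
    """Fitness-driven deactivation model; returns the total degree of every node."""
    # Initial seed: n active nodes, fully connected (each has degree n - 1).
    fitness = [fitness_sampler() for _ in range(n)]
    degree = [n - 1] * n
    active = list(range(n))
    while len(fitness) < N:
        # (i) a newcomer attaches to m distinct, randomly chosen active nodes
        new = len(fitness)
        fitness.append(fitness_sampler())
        degree.append(m)                    # out-degree m for every newcomer
        for j in random.sample(active, m):
            degree[j] += 1                  # each selected node gains one incoming edge
        # (ii) activate the newcomer; deactivate active node j with probability ~ 1/x_j
        out = random.choices(active, weights=[1.0 / fitness[j] for j in active])[0]
        active.remove(out)
        active.append(new)
    return degree

degrees = grow_network(N=10_000, n=10, m=3,
                       fitness_sampler=lambda: random.uniform(0.1, 1.0))
```

Plotting a histogram of `degrees` for homogeneous versus heavy-tailed samplers reproduces the two regimes discussed in the next section.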
\section{Degree distribution}
Denoting by $a(k,x,t)$ the probability of active nodes with degree
$k$ and fitness $x$ at time $t$, we can write out the master
equation for network evolution
\begin{equation}
a(k+1,x,t+1) = a(k+1,x,t)[1-\pi(x)](1-\frac{m}{n}) +
a(k,x,t)[1-\pi(x)]\frac{m}{n}. \label{mastereq}
\end{equation}
At each time step, an active node is deactivated and a newcomer
joins the set $\Lambda$ to keep the number $n$ unchanged. According
to Eq.~(\ref{deactprob}), the normalization factor $\sigma$ varies
with time because of the change of the nodes in the active set.
However, the fitness of each node is a random number taken from a
given probability distribution $\rho(x)$, therefore the
normalization $\sigma$ fluctuates very slightly and can be treated
as a constant. Substituting Eq.~(\ref{deactprob}) into
Eq.~(\ref{mastereq}) yields
\begin{equation}
a(k+1,x,t+1) \approx a(k+1,x,t)(1-\frac{\sigma}{x})(1-\frac{m}{n}) +
a(k,x,t)(1-\frac{\sigma}{x})\frac{m}{n}. \label{mastereqr}
\end{equation}
Imposing the stationarity condition $a(k,x,t)=a(k,x)$, we obtain the
equation
\begin{equation}
a(k+1,x) = \frac{m(x-\sigma)}{m(x-\sigma)+n\sigma}a(k,x) = \left[
\frac{m(x-\sigma)}{m(x-\sigma)+n\sigma} \right]^{k-m}a(m,x)
\end{equation}
for the probability of active nodes with degree $k+1$ and fitness
$x$ in the stationary state, where
\begin{equation}
a(m,x)=\frac{n(x-\sigma)}{m(x-\sigma)+n\sigma}\rho(x)
\end{equation}
is the stationary probability of active nodes with degree $m$ and
fitness $x$. Then we obtain
\begin{equation}
a(k,x) = \frac{n}{m} \left[ \frac{m(x-\sigma)}{m(x-\sigma)+n\sigma}
\right]^{k-m} \rho(x).
\end{equation}
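As a sanity check (our own, not in the text), the stationary recursion obtained from Eq.~(\ref{mastereqr}) reproduces the closed-form ratio $a(k+1,x)/a(k,x) = m(x-\sigma)/(m(x-\sigma)+n\sigma)$ used above:

```python
def ratio_from_recursion(x, sigma, n, m):
    """Solve a(k+1) = a(k+1)(1 - sigma/x)(1 - m/n) + a(k)(1 - sigma/x)(m/n)
    for the ratio a(k+1)/a(k)."""
    p = 1 - sigma / x
    return p * (m / n) / (1 - p * (1 - m / n))

def ratio_closed_form(x, sigma, n, m):
    return m * (x - sigma) / (m * (x - sigma) + n * sigma)

for x, sigma, n, m in [(0.7, 0.05, 10, 3), (1.0, 0.1, 50, 5)]:
    assert abs(ratio_from_recursion(x, sigma, n, m)
               - ratio_closed_form(x, sigma, n, m)) < 1e-12
```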
Denoting by $a(k)$ the probability of active nodes with degree $k$
in the steady state, we have
\begin{equation}
a(k) = \int_0^{x_{{\rm max}}} a(k,x){\rm d}x = \int_0^{x_{{\rm
max}}} \frac{n}{m} \left[ \frac{m(x-\sigma)}{m(x-\sigma)+n\sigma}
\right]^{k-m} \rho(x)\rm{d}x.
\end{equation}
When the total number $N$ of nodes in the network is much larger than
the number $n$ of active nodes, the degree distribution $P(k)$ can
be approximated by considering inactive nodes only. Thus, $P(k)$ can
be calculated as the rate of the change of $a(k)$,
\begin{eqnarray}
P(k) &=& -\frac{{\rm d}a(k)}{{\rm d}k} \nonumber\\
&=& \int_0^{x_{{\rm max}}} \frac{n}{m} \left[
\frac{m(x-\sigma)}{m(x-\sigma)+n\sigma} \right]^{k-m} \ln \left[
\frac{m(x-\sigma)}{m(x-\sigma)+n\sigma} \right]^{-1} \rho(x) {\rm
d}x. \label{integralpk}
\end{eqnarray}
Even when the form of $\rho(x)$ is given, it is still difficult to
solve the integral on the right-hand side of the equation. Instead,
we need a more subtle technique. We assume that
\begin{eqnarray}
F(x) &=& \frac{m(x-\sigma)}{m(x-\sigma)+n\sigma}, \label{deff}\\
G(x) &=& \frac{n}{m} [F(x)]^{-m} \ln[F(x)]^{-1}\rho(x). \label{defg}
\end{eqnarray}
Without loss of generality, we also normalize the fitnesses. Now Eq.
(\ref{integralpk}) can be rewritten as
\begin{equation}
P(k) = \int_0^{1} [F(x)]^{k} G(x) {\rm d}x. \label{integralpks}
\end{equation}
As will be seen below, for the proper choice of $F$ and $G$, one can
construct networks with exponential or power-law degree
distributions, and then determine the forms of the corresponding
fitness distributions.
(i) {\em Exponential degree distribution.} We set $F(x)=1/\mu$ and
$G(x)=\nu$, where $\mu (>1)$ and $\nu$ are positive constants.
Consequently, the integral of Eq. (\ref{integralpks}) is
\begin{equation}
P(k)=\nu\mu^{-k}, \label{pkexp}
\end{equation}
following an exponential. According to the definition of $F(x)$, we
have
\begin{equation}
\frac{m(x-\sigma)}{m(x-\sigma)+n\sigma} = \frac{1}{\mu},
\end{equation}
the solution of which reads
\begin{equation}
x = \sigma + \frac{n\sigma}{m(\mu-1)} = {\rm constant}.
\label{expressx}
\end{equation}
Then the normalization factor becomes
\begin{equation}
\sigma = \left\{ n \left[\sigma +
\frac{n\sigma}{m(\mu-1)}\right]^{-1} \right\}^{-1},
\end{equation}
yielding
\begin{equation}
\mu = 1 + \frac{n}{m(n-1)}.
\end{equation}
According to the definition of $G(x)$, we have
\begin{equation}
\nu = \frac{n}{m} \mu^{m} \ln\mu \rho(x),
\end{equation}
which suggests
\begin{equation}
\rho(x)= \frac{m\nu}{n\mu^{m}\ln\mu} = {\rm constant}.
\label{expressrho}
\end{equation}
According to the normalization, $\rho(x)$ should satisfy
$\int_{0}^{1} \rho(x){\rm d}x =1$. Combining Eqs.~(\ref{expressx})
and (\ref{expressrho}), we finally obtain the distribution of
$\rho(x)$,
\begin{equation}
\rho(x)=\delta\left(x-\frac{\sigma(m\mu-m+n)}{m(\mu-1)}\right).
\end{equation}
That is to say, the fitness of each node is identical. Considering
the constant approximation of $\sigma$, the above result can be
generalized to homogeneous distributions of fitnesses. Here,
homogeneity implies that the node fitnesses have small fluctuations
around the mean $\langle x \rangle$, hence the small variance. We
conclude that for any given $\rho(x)$ distributing homogeneously,
the fitness-driven deactivation generates structured exponential
networks.
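The value of $\mu$ can be double-checked numerically: with a common fitness $x_j \equiv x$, the normalization gives $\sigma = x/n$, and $F(x)$ collapses to the constant $1/\mu$ with $\mu = 1 + n/(m(n-1))$ (a sketch with arbitrary parameter choices):

```python
def mu_predicted(n, m):
    """mu = 1 + n / (m (n - 1)), as derived above."""
    return 1 + n / (m * (n - 1))

def F(x, sigma, n, m):
    """F(x) = m(x - sigma) / (m(x - sigma) + n sigma)."""
    return m * (x - sigma) / (m * (x - sigma) + n * sigma)

for n, m in [(10, 3), (50, 5), (100, 10)]:
    x = 1.0          # common fitness; any positive value works
    sigma = x / n    # sigma = (sum_j x_j^{-1})^{-1} = x / n for identical fitnesses
    assert abs(F(x, sigma, n, m) - 1 / mu_predicted(n, m)) < 1e-12
```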
(ii) {\em Power-law degree distribution.} We set $F(x)=e^{-\phi x}$
and $G(x)=\varphi x^{\gamma}$, where $\phi$, $\varphi$ and $\gamma$
are positive constants. Accordingly, Eq. (\ref{integralpks}) can be
rewritten as
\begin{equation}
P(k)=\frac{\varphi}{(\phi k)^{\gamma+1}} \int_{0}^{\phi k}
e^{-z}z^{\gamma}{\rm d}z. \label{integralpkpow}
\end{equation}
In the limit $k \rightarrow \infty$, one has $\int_{0}^{\phi k}
e^{-z}z^{\gamma}{\rm d}z=\Gamma(\gamma+1)$. For any finite $k$, the
integral $\int_{0}^{\phi k} e^{-z}z^{\gamma}{\rm d}z$ is convergent
and smaller than $\Gamma(\gamma+1)$. Thus the connectivity
distribution has a power-law form
\begin{equation}
P(k) \sim k^{-\gamma-1}. \label{pkpow}
\end{equation}
Combining Eqs. (\ref{deff}) and (\ref{defg}), we obtain
\begin{equation}
\rho(x) = \frac{\varphi m}{\phi n}x^{\gamma-1}e^{-\phi mx}.
\end{equation}
This heavy-tailed distribution implies large fluctuations of the
node fitnesses, hence heterogeneity. We conclude that the
fitness-driven deactivation can build structured SF networks if the
node fitnesses are distributed heterogeneously.
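For concreteness, the growth rule can be sketched in a short simulation. This is a minimal reading of the model: the initial active clique, the uniform choice of $m$ targets among the $n$ active nodes, and the default uniform fitness distribution are our assumptions where the text leaves details open.

```python
# Minimal sketch of the fitness-driven deactivation model as we read it:
# the network keeps n active nodes; each new node links to m of them,
# becomes active, and then one active node is deactivated with
# probability proportional to 1/x (inverse fitness).
import random

def deactivation_network(N=2000, n=10, m=5, rho=None, seed=1):
    """Grow a network of N nodes; rho() draws a fitness in (0, 1]."""
    rng = random.Random(seed)
    draw = rho if rho is not None else rng.random
    fitness = {i: max(draw(), 1e-6) for i in range(n)}
    degree = {i: n - 1 for i in range(n)}        # initial clique of n active nodes
    active = list(range(n))
    for new in range(n, N):
        for target in rng.sample(active, m):     # m links to currently active nodes
            degree[target] += 1
        degree[new] = m
        fitness[new] = max(draw(), 1e-6)
        active.append(new)
        weights = [1.0 / fitness[i] for i in active]   # deactivation prob ~ 1/x
        active.pop(rng.choices(range(len(active)), weights=weights)[0])
    return degree

deg = deactivation_network()
print(len(deg), min(deg.values()))  # -> 2000 5
```

Passing a heavy-tailed `rho` (e.g. one returning $x^{-1}$-distributed samples) versus a narrow one reproduces the two regimes discussed above.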
\begin{figure}
\includegraphics[width=\columnwidth]{distribk.eps}
\caption{(Color online) Degree distributions of nodes of generated
networks for different fitness distribution functions: uniform (a),
Gaussian (b), exponential $e^{-x}$ (c), and power-law $x^{-1}$ (d).
All the fitnesses have been normalized. For uniform and Gaussian
distributions with the same mean ($0.5$) and small variances
($0.08(4)$ and $0.01(6)$), the node fitnesses fluctuate slightly
around the mean and can be regarded as the homogeneity; whereas for
exponential and power-law distributions, there are large
fluctuations of the node fitnesses due to the right-skewed feature,
resulting in the heterogeneity. The experiment networks start from
the initial $n=10$ nodes and end with the total population
$N=10^5$.} \label{fig1}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{svsm.eps}
\caption{(Color online) The dependence of the slopes of the lines in
Fig.~\ref{fig1} on $m$. In the inset, the data reported on the
log-log representation show a power law $m^{-\xi}$, and the best
linear fit gives $\xi=0.98(1)$.} \label{fig2}
\end{figure}
In Fig.~\ref{fig1} we give the simulation results of degree
distributions $P(k)$ for four kinds of distribution functions
$\rho(x)$ of node fitnesses. When the fitness distributions are
homogeneous, e.g., uniform (Fig. \ref{fig1}(a)) and
Gaussian (Fig. \ref{fig1}(b)), the linear-log plots imply an
exponential degree distribution. Conversely, when the fitnesses are
distributed heterogeneously, the log-log plots of Figs. \ref{fig1}(c)
and \ref{fig1}(d), corresponding to the exponential and power-law cases
respectively, indicate a generalized power law. We obtain the slopes
of the lines in Fig.~\ref{fig1} by least-squares fitting and plot
them as a function of $m$ in Fig.~\ref{fig2}. For homogeneous
fitnesses, we observe a scaling relation between the slope and $m$,
whereas for heterogeneous fitnesses, the slope is independent of
$m$.
\section{Clustering coefficient}
In a network, if a node $i$ has $k_i$ edges, and among its $k_i$
nearest neighbors there are $e_i$ edges, then the clustering
coefficient of $i$ is defined by
\begin{equation}
c_i=\frac{2e_i}{k_i(k_i-1)}. \label{defclusteri}
\end{equation}
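Eq.~(\ref{defclusteri}) translates directly into code. As a sanity check we evaluate it on a small hypothetical toy graph (a 4-clique with one pendant node), not a network generated by the model:

```python
# Local clustering of Eq. (defclusteri): count edges among the k_i
# neighbours of node i. adj maps each node to its neighbour set.
from itertools import combinations

def clustering(adj, i):
    nbrs = list(adj[i])
    k = len(nbrs)
    if k < 2:
        return 0.0
    e = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
    return 2.0 * e / (k * (k - 1))

# 4-clique {0,1,2,3} plus a pendant node 4 attached to node 3
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
print(clustering(adj, 0), clustering(adj, 3))  # -> 1.0 0.5
```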
In the deactivation model, new edges are created between the
selected active nodes and the added one. Let us first consider the
case of $n=m$. At each time step, the degree $k_i$ of the active
node $i$ increases by $1$ and $e_i$ increases by $m-1$ until it is
deactivated. Therefore, the evolutionary dynamics of $k_i$ and $e_i$
are given by
\begin{eqnarray}
k_i &=& (m+t), \label{evolutionk}\\
\frac{de_i}{dt} &=& (m-1). \label{evolutione}
\end{eqnarray}
Integrating Eq. (\ref{evolutione}) with the boundary condition
$e_i(0)=m(m-1)/2$ and substituting the solution into Eq.
(\ref{defclusteri}), we recover the clustering coefficient $c(k)$
restricted to the nodes of degree $k$ \cite{KE02b,VA03}
\begin{equation}
c(k)=\frac{2(m-1)}{k-1}-\frac{m(m-1)}{k(k-1)}. \label{expressck1}
\end{equation}
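Two quick checks on Eq.~(\ref{expressck1}): at $k=m$ the boundary condition $e_i(0)=m(m-1)/2$ gives $c(m)=1$, and for large $k$ the expression decays as $2(m-1)/k$. A short sketch (the value of $m$ is illustrative):

```python
# Sanity checks on Eq. (expressck1):
#   c(k) = 2(m-1)/(k-1) - m(m-1)/(k(k-1))
def c_of_k(k, m):
    return 2.0 * (m - 1) / (k - 1) - m * (m - 1) / (k * (k - 1))

m = 5
print(c_of_k(m, m))     # -> 1.0 : the first m neighbours form a clique
print(c_of_k(1000, m))  # ~ 2(m-1)/k = 0.008 for large k
```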
For $n>m$, the clustering coefficient $C(k)$ is just the
generalization of Eq. (\ref{expressck1}),
\begin{equation}
C(k)=\frac{m}{n} \left[ \frac{2(m-1)}{k-1}-\frac{m(m-1)}{k(k-1)}
\right], \label{expressck2}
\end{equation}
which indicates that the local clustering scales as $C(k) \sim
k^{-1}$ for large $k$. The clustering coefficient $C$ of the whole
network is the mean of $C(k)$ with respect to the degree
distribution $P(k)$,
\begin{equation}
C = \int_m^{\infty} C(k)P(k) {\rm d}k. \label{defcluster}
\end{equation}
Substituting Eqs. (\ref{pkexp}) and (\ref{pkpow}) into the integral
respectively, we obtain
\begin{equation}
C \sim \left\{
\begin{array}{lll}
& \frac{2\nu(m^2-m)}{n}\int_m^{\infty}\frac{\mu^{-k}}{k}{\rm d}k+\frac{\nu(m^2-m^3)}{n}\int_m^{\infty}\frac{\mu^{-k}}{k(k-1)}{\rm d}k,\\
& \frac{2(m^2-m)}{n}\int_m^{\infty}\frac{{\rm d}k}{k^{\gamma+2}}+\frac{(m^2-m^3)}{n}\int_m^{\infty}\frac{{\rm d}k}{k^{\gamma+2}(k-1)}, \\
\end{array}
\right. \label{cluster}
\end{equation}
with both expressions proportional to $n^{-1}$.
\begin{figure}
\includegraphics[width=\columnwidth]{cvsk.eps}
\caption{(Color online) Log-log plots of $C(k)$ as a function of $k$
for different fitness distribution functions: uniform (a), Gaussian
(b), exponential $e^{-x}$ (c), and power-law $x^{-1}$ (d). The solid
lines correspond to a power law $2k^{-1}$.} \label{fig3}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{cvsn.eps}
\caption{(Color online) Log-log plots of $C$ as a function of $n$
for different fitness distribution functions: uniform (a), Gaussian
(b), exponential $e^{-x}$ (c), and power-law $x^{-1}$ (d). The solid
lines correspond to a power law $2n^{-1}$.} \label{fig4}
\end{figure}
In Fig. \ref{fig3} we show the simulation results of $C(k)$ as a
function of $k$. All the plots follow a power law $C(k) \sim
k^{-1}$, which coincides with the expression in Eq.
(\ref{expressck2}). For the clustering coefficient of the whole
network, as shown in Fig. \ref{fig4}, the linearity of all the plots
also implies a power-law relation $C \sim n^{-1}$. It is worth
noting that both the scaling laws are independent of fitness
distribution functions. One obtains the same result for uniform,
Gaussian, exponential and power-law distributions of fitnesses.
\section{Conclusion}
In this paper, we have presented an alternative, simple and
intuitive model for a large and important class of networks widely
observed in the real world. We defined the intrinsic fitness as a
way of quantifying the popularity of individuals, i.e., the fitter a
node is, the more likely it is to remain active. The growth dynamics of
the network is governed by the naive fitness-driven deactivation
mechanism: the deactivation probability of a node is proportional to
the inverse of its fitness, which characterizes the individual
capability of obtaining further links. We studied the connectivity
distribution and the clustering coefficient that can fundamentally
shape a network. On the one hand, we found a strong influence of the
node fitnesses on the connectivity distribution. The homogeneous
fitnesses generate exponential networks, while the heterogeneous
fitnesses result in SF ones. On the other hand, we recovered two
universal scaling laws of the clustering coefficient regardless of
the fitness distributions. These results are consistent with what
has been empirically observed in many real-world networks (see, for
example, \cite{WM00,MN01,RB03,BMG04}), and so the present model
provides a new way to understand complex networks with aging.
\section*{Acknowledgments}
This work was jointly supported by NSFC (Nos. 10805033 and
11072136), HK UGC GRF (No. PolyU5300/09E), and by Shanghai Leading
Academic Discipline Project S30104.
\section*{References}
\textit{Introduction.}---Mixtures of two atomic Bose-Einstein condensates are systems with a diverse spectrum of physical properties. The inter- and intra-species
interaction strengths, $g_{11}, g_{22}$, and $g_{12}$, respectively, are the key parameters defining their behavior. The energy density functional of a uniform mixture in
the mean field approximation is a quadratic form~\cite{stringari}:
\begin{equation}\label{MF}
\epsilon_\mathrm{MF} =\frac12 g_{11} n_1^2 + \frac12 g_{22} n_2^2 + g_{12} n_1 n_2,
\end{equation}
where $n_1$ and $n_2$ are densities of the species. The mixtures can be miscible if $|g_{12}| < \sqrt{g_{11}g_{22}}$, or immiscible if interspecies repulsion dominates,
$g_{12}> \sqrt{g_{11}g_{22}}$. On the contrary, if inter-species attraction is strongly attractive, $g_{12} < - \sqrt{g_{11}g_{22}}$, a mixture collapses. Typically,
miscible mixtures have to be kept in external traps since, if left alone, they expand to minimize their energy.
The mean-field description overlooks the existence of ultra-dilute quantum droplets --- exotic self-bound, incompressible phases of two-component Bose-Einstein
condensates (BECs), stabilized by quantum fluctuations~\cite{Petrov15}, with densities orders of magnitude smaller than those of ordinary liquids.
In a weakly interacting regime, the energy related to the quantum fluctuations is small, and, for a single-component BEC, is known as the Lee-Huang-Yang (LHY)
correction to the ground state energy of the system~\cite{Lee57}:
\begin{eqnarray}
\epsilon_\mathrm{LHY} = \frac{128}{30\sqrt{\pi}}\, gn^2 \sqrt{na^3},
\end{eqnarray}
where $n$ is the density, the coupling strength $g=4 \pi \hbar^2 a/m$, $a$ is the positive $s$-wave scattering length, and $m$ is the atomic mass. The correction
$\epsilon_\mathrm{LHY}$ originates from a zero-point energy of the vacuum of Bogoliubov's quasiparticles. Since it depends on a higher power of the density, as compared
to the leading mean-field terms, its contribution to the energy is negligible in most circumstances. However, for the Bose-Bose mixture, at the edge of the stability,
close to the collapse threshold, the mean-field energy vanishes, and the quantum fluctuations start to dominate. As predicted in~\cite{Petrov15}, these fluctuations
contribute additional energy, called the LHY correction~\cite{Larsen63,Sacha08,Petrov15}, which stabilizes the system and leads to the formation of quantum droplets.
Quantum droplets were first observed in Dysprosium and Erbium BECs~\cite{Kadau16, Ferrier16a,Ferrier16b,Schmitt16, Chomaz16}, in which the dipole-dipole interactions
between atoms are significant. This anisotropic interaction, depending on the relative position of atoms and the orientation of their magnetic dipole moments, can be
attractive or repulsive. The competition of attraction and repulsion, similarly to the two-component mixtures, might bring the system to the stability edge, making it
vulnerable to quantum fluctuations. The original scenario from Ref.~\cite{Petrov15} was realized in the recent experiments with two-component Potassium
BECs~\cite{Cabrera17, Fattori17, Cheiney18}.
Quantum droplets can also exist in low-dimensional systems~\cite{Petrov16}. Due to the expected reduction of three-body losses, these droplets are of great
experimental interest. Such low-dimensional systems can be created by employing tight confinements in one or two spatial directions. However, tight external potentials
significantly modify the excitation spectrum, and, in particular, the zero-point energy of the quasi-particles. Therefore, quantum droplets in reduced dimensions possess
different properties than those in three-dimensional (3D) space, both for BEC mixtures~\cite{Petrov16} and for dipolar BECs~\cite{Mishra16}.
In experiments, the quasi-two-dimensional (quasi-2D) or quasi-one-dimensional (quasi-1D) regimes are obtained by a tight confinement introduced by an external potential in
one or two directions. The potential introduces an additional linear length scale $L$ of the tight confinement. This scale sets a lower limit $\sim \hbar/L$ on the
excitation momentum and a minimal excitation energy $\varepsilon_0 = (\hbar^2/2m)(2\pi/L)^2$ in the confined direction(s). If both the thermal energy $k_BT$ and the
characteristic interaction energy $\sim g_{11}n_1+g_{22}n_2$ are too small to allow for excitations in the tight direction(s), i.e., $\varepsilon_0 \gg k_BT$ and
$\varepsilon_0 \gg g_{11}n_1,g_{22}n_2$, so that $\varepsilon_0$ is the largest energy scale of the problem, then from the point of view of kinematics the system is
low-dimensional.
As shown in \cite{Petrov16}, the low-dimensional liquids are even more exotic than their 3D analogues. Three-dimensional droplets are formed when the mean-field approach
predicts a collapse of the system, i.e., when the interspecies attraction is sufficiently strong. However, in lower dimensions, two- and one-dimensional droplets can be
formed in an overall repulsive system, which liquefies when squeezed and no longer needs any trapping potential in the unconfined direction(s).
In our paper, we study the formation of droplets at the dimensional crossover from 3D to quasi-2D, and from 3D to quasi-1D. In this regime, which was not previously explored
in the literature, we find a new kind of stable droplets, formed only due to quantum fluctuations when the mean-field interaction vanishes. Our results are also
important for experiments in which the access to quasi-1D or quasi-2D regimes is demanding. Since such experiments are always performed in 3D under conditions of tight
confinement, the kinematics may be low-dimensional to a large extent. The elimination of excitations in the confined direction(s), however, is not complete, and the proper
description requires the inclusion of corrections. In particular, we show that the access to quasi-1D is significantly more demanding than to quasi-2D; our results
are especially important for such experiments since in most circumstances only the crossover is accessible.
The conditions of the dimensional crossover may be reached by varying the trap geometries of the Bose-Bose mixture. Both strongly prolate and oblate shapes of BECs can
be formed, and excitation energies in confined and extended direction(s) can be separated energetically, with a limited number of modes in confined direction(s) occupied
at low temperatures in the weakly interacting limit. Such systems, with significantly varying spatial extensions in different directions, are in the region of
dimensional crossover.
\textit{Lee-Huang-Yang energy of a mixture in a box.}---The system we study is a two-component mixture of interacting ultracold Bose gases in the ground state. The
mean-field energy density is given by Eq.~(\ref{MF}). Following the analysis presented in~\cite{Petrov15}, we consider the case when both intraspecies interactions are
repulsive, $g_{11}>0$, $g_{22}>0$, ($g_{11} \approx g_{22}$), while interspecies interaction is attractive, $g_{12}<0$. We also assume that the system is close to the
region of collapse, and, thus, the parameter $\delta g = g_{12} + \sqrt{g_{11}g_{22}}$ is small, i.e., $|\delta g| \ll g_{11},g_{22}$. The diagonal form of the mean-field
energy density reads:
\begin{eqnarray}
\epsilon_\mathrm{MF} = \lambda_-n_-^2 + \lambda_+n_+^2.\,
\end{eqnarray}
where the coefficients $\lambda_+ \simeq ({g_{11}+g_{22}})/2$, and $\lambda_- \simeq \delta g \sqrt{g_{11}g_{22}}/(g_{11}+g_{22})$. In our regime $|\lambda_-| \ll
\lambda_+$, and thus the density $n_- = (n_1 \sqrt{g_{22}} + n_2 \sqrt{g_{11}})/(\sqrt{g_{11} + g_{22}})$ corresponds to a soft mode, while $n_+ = (n_1 \sqrt{g_{11}} -
n_2 \sqrt{g_{22}})/(\sqrt{g_{11} + g_{22}})$, is the density of a hard mode. Deviation of the latter from zero is energetically very costly, so we assume that in the
ground state the hard-mode density effectively vanishes, $n_+ = 0$. Consequently, the densities of both species are proportional to the density of the soft mode:
\begin{eqnarray}
n_- = n_1 \sqrt{(g_{11}+g_{22})/g_{22}} = n_2 \sqrt{(g_{11}+g_{22})/g_{11}}.
\end{eqnarray}
To further specify our system we assume that it is confined in a box, and periodic boundary conditions are imposed. The standard LHY correction~\cite{Petrov15} in 3D is
evaluated under the assumption that all sides of the box have similar length.
To find the LHY correction for the tightly confined system, we have to consider the case when one side of the box is much smaller than the others, $L_z \ll L_x \simeq
L_y$ (3D-2D crossover), or much larger, $L_x \simeq L_y \ll L_z$ (3D-1D crossover), than the remaining sides. These two configurations are considered separately below. We
denote the tight confinement extension by $L$ and the linear size of the box in the perpendicular direction(s) by $L_{\perp}$.
At this stage, we do not assume any particular geometry yet. The LHY energy density reads:
\begin{equation} \label{LHY_mix}
\frac{\varepsilon_0}{L^3} e_\mathrm{LHY} = \lim_{r \rightarrow 0} \frac{\partial }{\partial r} \left( r \frac{1}{2V}\sum_{\bf k} e^{i{\bf k}{\bf r}}
( \varepsilon_{\bf k} - A_{\bf k}) \right),
\end{equation}
where $\varepsilon_{\bf k} = \sqrt{E_{k}^2 + 2 E_k(g_{11}n_1+g_{22}n_2) }$ and $ A_{\bf k} = E_k + g_{11}n_1+g_{22}n_2$, and $E_k= (\hbar^2k^2)/2m$. We extracted the prefactor
$\varepsilon_0/L^3$ to make $e_\mathrm{LHY}$ dimensionless. This form of the LHY energy results from a regularized pseudopotential~\cite{Lee57}, and it is equivalent to
the formula used in Ref.~\cite{Petrov15}, where the origin of the LHY term is attributed to the zero-point energy of the Bogoliubov vacuum.
In writing Eq.~(\ref{LHY_mix}), we made two approximations. First, we set $g_{12}^2 = g_{11}g_{22}$, which is consistent with the previous assumption that the system is
about to collapse. This approximation is not very restrictive. Second, we limit the analysis to mixtures of two species with equal masses only. The
system we consider is therefore, for instance, a mixture of atoms in two different internal spin states~\cite{Cabrera17,Cheiney18,Fattori17}. The second approximation is, however, quite
restrictive. We note that the LHY term is equal to that of a single-component Bose gas with effective $(gn)_\mathrm{eff} = g_{11}n_1+g_{22}n_2$.
The summation over discrete momentum states is essential to account for the tight confinement. If we replaced the summation over momenta by the integral, i.e., $1/V
\sum_{\bf k} \rightarrow \int {\rm d} {\bf k}/(2 \pi)^{3}$, we would recover the limit of an infinite box and the LHY energy of a Bose-Bose mixture in 3D
space~\cite{Petrov15}.
\textit{LHY energy at the 3D-2D crossover.}---Our main goal here is to find the LHY energy for a system confined in one spatial direction. In such a situation the
$z$-axis is a tight direction, i.e., $L=L_z$. Assuming that $L_x=L_y \rightarrow \infty$, in Eq.~(\ref{LHY_mix}) one has to substitute $\frac{1}{V}\sum_{\bf k} \rightarrow
\frac{1}{(2\pi)^2} \int {\rm d}^2 k_{\perp} \frac{1}{L} \sum_{k_z} $, and the LHY energy in quasi-2D takes the form:
\begin{equation} \label{LHY_2d}
e_\mathrm{LHY}^\mathrm{2d}(\xi) = \lim_{r \rightarrow 0} \frac{\partial }{\partial r} \left(\! r \frac{1}{2}\sum_{q_z} \int {\rm d}^2 q_{\perp}
\, e^{i{\bf q} {\bf r}} \left( \varepsilon_{\bf q} - A_{\bf q} \right) \!\right),
\end{equation}
where $\xi=(g_{11} n_1 + g_{22} n_2)/\varepsilon_0$, ${\bf q} = ({\bf q}_\perp,q_z)$ and $q_z$, ${\bf q}_\perp$ are the integer dimensionless momenta: $q_z=(L/2\pi)k_z$,
and ${\bf q}_\perp=(L/2\pi) {\bf k}_{\perp}$. Bogoliubov's energies expressed in the units of $\varepsilon_0$ are: $\varepsilon_{\bf q}= \sqrt{q^4 + 2 \xi q^2}$ and $
A_{\bf q}=q^2 + \xi$. The ratio $\xi$ of the sum of the mean-field energies of both components to the excitation energy in the tight direction is the crucial parameter
characterizing the system. We note that Eq.~(\ref{LHY_2d}) applies not only to a system at the 3D-2D crossover, but also in the case of a strongly oblate geometry, where the
characteristic spacing of kinetic momenta in the tighter direction is much larger than the spacing in the perpendicular directions. Then, the densely spaced momenta in the
perpendicular directions can be considered as continuous.
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\textwidth]{fig1.pdf}\\
\caption{ The ratio $ s^\mathrm{2d}(\xi)= e_\mathrm{LHY}^\mathrm{2d}(\xi)/e_\mathrm{LHY}^\mathrm{3d}(\xi)$ as the function of $\xi$ given by the thick black
line. The additional thin meshed curve is the same ratio but using the approximate formula for $e_\mathrm{LHY}^\mathrm{2d}(\xi)$ given by
Eq.~(\ref{e2d}). The red dashed horizontal line is the asymptotic 3D result.}
\label{fig1}
\end{figure}
For small $\xi$, the result can be obtained analytically (see the Supplementary Material). The formula, derived for $\xi \ll 1$, is the following:
\begin{equation}\label{e2d}
e_\mathrm{LHY}^\mathrm{2d}(\xi) = \frac{\pi}{4} \xi^2 \left( \log(\xi) + \log(2\pi^2) + \frac{1}{2} + \frac{\pi^2 \xi}{3} \right).
\end{equation}
We compare this expansion to the direct numerical evaluation of Eq.~(\ref{LHY_mix}). In Fig.~\ref{fig1}, we plot the ratio $ s^\mathrm{2d}(\xi)=
e_\mathrm{LHY}^\mathrm{2d}(\xi)/e_\mathrm{LHY}^\mathrm{3d}(\xi)$, where the 3D LHY energy is $e_\mathrm{LHY}^\mathrm{3d}(\xi) = 16 \sqrt{2} \pi \xi^{5/2}/15$. We also
plot there $s^\mathrm{2d}(\xi)$ but with $e_\mathrm{LHY}^\mathrm{2d}(\xi)$ taken from Eq.~(\ref{e2d}) (thin meshed curve). The approximate expression for the LHY term at the
3D-2D crossover almost perfectly reproduces the numerical result for $\xi < 0.3$. For larger values of $\xi$, the exact formula is in perfect agreement with the 3D
expression. The agreement between the quasi-2D and 3D results at values of $\xi$ as small as $\xi=0.3$ is quite surprising, because the 3D formula formally applies in the
limit $\xi \gg 1$.
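As an independent consistency check, the prefactor $16\sqrt{2}\pi/15$ in $e_\mathrm{LHY}^\mathrm{3d}(\xi)$ can be recovered from the standard single-component LHY energy density $\varepsilon_\mathrm{LHY} = \frac{8}{15\pi^2}\, m^{3/2} (gn)_\mathrm{eff}^{5/2}/\hbar^3$ with $(gn)_\mathrm{eff} = \xi \varepsilon_0$. A sketch with arbitrary, purely illustrative units:

```python
# Check that e_LHY^3d(xi) = 16*sqrt(2)*pi/15 * xi^(5/2) equals the
# single-component LHY density stripped of the eps0/L^3 prefactor.
import math

hbar, m_atom, L = 1.3, 0.7, 2.1   # arbitrary values; only consistency matters
eps0 = (hbar**2 / (2 * m_atom)) * (2 * math.pi / L) ** 2

def e3d_closed(xi):
    return 16 * math.sqrt(2) * math.pi * xi ** 2.5 / 15

def e3d_from_lhy(xi):
    gn_eff = xi * eps0
    eps_lhy = (8 / (15 * math.pi ** 2)) * m_atom ** 1.5 * gn_eff ** 2.5 / hbar ** 3
    return eps_lhy * L ** 3 / eps0   # strip the eps0/L^3 prefactor

xi = 0.37
print(abs(e3d_closed(xi) / e3d_from_lhy(xi) - 1) < 1e-9)  # True
```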
\textit{Droplets at 3D-2D crossover.}---Neglecting the surface energy, which is well justified for large droplets where the bulk contribution dominates, the energy of a
homogeneous droplet of volume $V$ is equal to the sum of the mean-field term, $e_\mathrm{MF}$, and the LHY correction, $e_\mathrm{LHY}^\mathrm{2d}$:
\begin{equation}
\label{Ehom}
E_\mathrm{hom} = \frac{\varepsilon_0}{L^3} \left( e_\mathrm{MF}(\xi) + e_\mathrm{LHY}^\mathrm{2d}(\xi) \right) V,
\end{equation}
where $e_\mathrm{MF}(\xi) = \beta \xi^2$, and $\beta = \varepsilon_0 L^3 \delta g/[\sqrt{g_{11}g_{22}}\,(\sqrt{g_{11}} +\sqrt{g_{22}})^2]$.
The droplet is stable in empty space if its pressure vanishes, $ p= -({\rm d}/{\rm d}V) E_\mathrm{hom} =0 $. Note that $\xi$ is proportional to the density, i.e.,
${\rm d}\xi/{\rm d}V = - \xi/V$. The condition for the equilibrium density of droplets takes the form:
\begin{equation}
\label{pzero}
\left(\xi \frac{ \partial}{\partial \xi} -1 \right) \left( e_\mathrm{MF}(\xi) + e_\mathrm{LHY}^\mathrm{2d}(\xi)\right ) = 0.
\end{equation}
We now focus on the quasi-2D regime in which $e_\mathrm{LHY}^\mathrm{2d}(\xi)$ is given by Eq.~(\ref{e2d}) with the last term neglected. Assuming for simplicity
$g_{11}=g_{22} = 4\pi \hbar^2 a/m$, which implies $n_1=n_2 =n$, the solution of Eq.~(\ref{pzero}) yields:
\begin{equation}
\label{x0}
\xi_0 = \frac{1}{2\pi^2} e^{ -\frac{3}{2} - \frac{ L \delta a}{2 a^2}},
\end{equation}
where we used $\delta g = 4 \pi \hbar^2 \delta a/m$. The above result leads to the following droplet density:
\begin{equation}
\label{n2d}
n=\frac{e^{-3/2}}{8 \pi} \frac{1}{a L^2} e^{-\frac{L\delta a}{2a^2}}.
\end{equation}
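The chain from Eq.~(\ref{x0}) to Eq.~(\ref{n2d}) can be verified numerically: with $g_{11}=g_{22}=4\pi\hbar^2 a/m$ one has $n = \xi_0\,\varepsilon_0/(2g)$. A sketch with arbitrary illustrative parameters:

```python
# Consistency of Eqs. (x0) and (n2d): the density n = xi0*eps0/(2g)
# should reproduce the closed form of Eq. (n2d). Parameter values are
# arbitrary illustrations, not experimental numbers.
import math

hbar = m_atom = 1.0
a, L, da = 0.01, 1.0, -0.002          # 3D scattering lengths and box size
g = 4 * math.pi * hbar**2 * a / m_atom
eps0 = (hbar**2 / (2 * m_atom)) * (2 * math.pi / L) ** 2

xi0 = math.exp(-1.5 - L * da / (2 * a**2)) / (2 * math.pi**2)   # Eq. (x0)
n_from_xi0 = xi0 * eps0 / (2 * g)
n_closed = math.exp(-1.5) / (8 * math.pi * a * L**2) * math.exp(-L * da / (2 * a**2))

print(abs(n_from_xi0 / n_closed - 1) < 1e-9)  # True
```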
To find the conditions for a quasi-2D system, we compare the droplet density obtained with Eq.~(\ref{e2d}) to the one given by Eq.~(\ref{n2d}). We find that for $\xi
\lesssim 0.03$, the relative difference between the two results is smaller than $20\%$. We assume that this condition defines the quasi-2D regime. Therefore, to have a
quasi-2D system we need $\xi_0 \lesssim 0.03$, and, from Eq.~(\ref{x0}), we find that
\begin{equation}
\label{criterion}
\frac{\delta a}{a} > - 4 \frac{a}{L} .
\end{equation}
According to Eq.~(\ref{criterion}), $\delta a$ can have an arbitrary sign. Therefore, we arrive at the conclusion that droplets can be formed in a system with mean-field
energy corresponding to repulsive, weakly attractive, or even effectively vanishing interactions. The last possibility has not been discussed in the literature so far: it is
a droplet formed only due to quantum fluctuations.
Finally, let us compare our results for the quasi-2D regime with the results of Ref.~\cite{Petrov16} for strictly 2D systems. In the latter case, the LHY energy and
droplet densities are expressed in terms of the 2D scattering length. Here, we consider the case $a \ll L$, i.e., the scattering has a 3D character. To compare the results,
we have to express the 2D scattering length, $a_\mathrm{2d}$, in terms of the 3D one for the box geometry analyzed in our paper. The scattering process in quasi-2D,
when the confinement in a tight direction is provided by a box of length $L$, is expressed by~\cite{Zin18}:
\begin{equation}
\label{a2d3d}
a_\mathrm{2d} = 2L e^{ - \gamma - \frac{L}{2a} }.
\end{equation}
An analogous formula, for the situation when the tight confinement is provided by a harmonic potential, is given in \cite{shlyapnikov}. Inserting the above relation into
the equations for the 2D LHY energy density and droplet density of~\cite{Petrov16}, we recover our results, given by Eqs.~(\ref{Ehom}) and~(\ref{x0}), together with
Eq.~(\ref{e2d}) with the last term neglected. This agreement provides an important and independent test of our approach.
\textit{LHY energy at the 3D-1D crossover.}---We now focus on the 3D-1D crossover regime where the LHY energy is:
\begin{equation}
\label{LHY_1d}
e_\mathrm{LHY}^\mathrm{1d}(\xi) \!=\! \lim_{r \rightarrow 0} \frac{\partial }{\partial r} \left(\! r \frac{1}{2}\sum_{q_x,q_y} \int {\rm d} q_z
\, e^{i{\bf q} {\bf r}} \left( \varepsilon_{\bf q} \!-\! A_{\bf q} \right) \!\right),
\end{equation}
where $q_{x,y}$ are the integer dimensionless momenta, $q_{x,y}=(L/2\pi)k_{x,y}$, and $q_z$ is a real-valued dimensionless momentum, $q_z=(L/2\pi) k_z$. The Bogoliubov energies
expressed in the units of $\varepsilon_0$ have the same form as in the 3D-2D case. As before, we took $L_{\perp} \rightarrow \infty$, and thus
substituted $\frac{1}{V}\sum_{\bf k} \rightarrow \frac{1}{2\pi} \int {\rm d} k_z \frac{1}{L^2} \sum_{k_x,k_y} $. For small $\xi$, we obtain (see the Supplementary Material)
\begin{equation}
\label{e1d}
e_\mathrm{LHY}^\mathrm{1d} = - \frac{2\sqrt{2}}{3} \xi^{3/2} + c_2 \xi^2 + c_3 \xi^3,
\end{equation}
where $c_2 = \frac{1}{4} \left( \int {\rm d} {\bf n} \, 1/n^2 - \sum_{n_x,n_y \neq 0} \int {\rm d} n_z\, 1/n^2 \right)
\simeq 3.06$ and $c_3 = \frac{\pi}{8} \sum_{n_x,n_y \neq 0} (n_x^2+n_y^2)^{-3/2} \simeq 3.55 $.
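The quoted value of $c_3$ is easy to verify by brute force; the sum runs over all integer pairs $(n_x,n_y)\neq(0,0)$, and truncating at $|n_{x,y}|\le N$ leaves a tail of order $\pi^2/(4N)$:

```python
# Numerical check of c3 = (pi/8) * sum over (n_x, n_y) != (0, 0) of
# (n_x^2 + n_y^2)^(-3/2). By symmetry: 4 axis arms plus 4 quadrants.
# N = 1500 is more than enough for two decimal places.
import math

N = 1500
axes = 4.0 * sum(nx ** -3.0 for nx in range(1, N + 1))
quads = 4.0 * sum((nx * nx + ny * ny) ** -1.5
                  for nx in range(1, N + 1) for ny in range(1, N + 1))
c3 = math.pi / 8 * (axes + quads)
print(c3)  # approximately 3.55
```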
In Fig.~\ref{fig2}, we plot the ratio $ s^\mathrm{1d}(\xi)= e_\mathrm{LHY}^\mathrm{1d}(\xi)/e_\mathrm{LHY}^\mathrm{3d}(\xi)$. As before, the analytic approximate
expression for the 3D-1D LHY term almost perfectly matches the full numerical result for $\xi < 0.3$. For larger values of $\xi$, the exact formula is close to the 3D
expression.
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\textwidth]{fig2.pdf}
\caption{The ratio $ s^\mathrm{1d}(\xi)= e_\mathrm{LHY}^\mathrm{1d}(\xi)/e_\mathrm{LHY}^\mathrm{3d}(\xi)$ as the function of $\xi$ given by the thick black line.
The additional thin meshed curve is the same ratio but using the approximate formula for $e_\mathrm{LHY}^\mathrm{1d}(\xi)$ given by Eq.~(\ref{e1d}).}
\label{fig2}
\end{figure}
\textit{Droplets at 3D-1D crossover.}---We now analyze the quasi-1D regime defined in the limit $\xi \ll 1$. Including only the first term of Eq.~(\ref{e1d}), we find from
Eq.~(\ref{pzero}) that at equilibrium $\xi_0 = 2/(9\beta^2) = (128/9\pi^2)\,a^4/(\delta a^2 L^2)$. The corresponding droplet density is
\begin{equation}\label{xi01}
n = \frac{32 }{9\pi } \frac{a^3}{\delta a^2 L^4},
\end{equation}
where we assumed for simplicity $g_{11}=g_{22}= 4\pi \hbar^2 a/m$. To find the condition for the validity of this formula, we compare it with the density of the droplet
obtained using the full $e_\mathrm{LHY}^\mathrm{1d}(\xi)$ from Eq.~(\ref{e1d}). The two densities differ by $20\%$ for $\xi$
approximately equal to $0.0004$. Thus, for $\xi \lesssim 0.0004$, the formula for the density of the quasi-1D droplet is valid. However, such a small value of $\xi$ is
probably out of reach for current experiments. As at the 3D-2D crossover, we also find here droplets which exist for $\beta =0$. Using Eqs.~(\ref{pzero}) and (\ref{e1d}), we
find that their density corresponds to $\xi_0 \simeq 0.15$, which places such a droplet far away from the quasi-1D regime and, of course, far away from the 3D limit, where such a
droplet cannot exist.
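The equilibrium values quoted above can be cross-checked numerically. With $g_{11}=g_{22}=4\pi\hbar^2 a/m$, the definition of $\beta$ below Eq.~(\ref{Ehom}) reduces to $\beta = \pi L\,\delta a/(8a^2)$, and the pressure-balance root then reproduces both $(128/9\pi^2)\,a^4/(\delta a^2 L^2)$ and the density of Eq.~(\ref{xi01}). Parameters below are arbitrary illustrations:

```python
# Consistency sketch for the quasi-1D droplet: beta = pi*L*da/(8*a^2),
# xi0 = 2/(9*beta^2), and n = xi0*eps0/(2g) should match Eq. (xi01).
import math

hbar = m_atom = 1.0
a, L, da = 0.01, 1.0, 0.004           # arbitrary illustrative values
g = 4 * math.pi * hbar**2 * a / m_atom
eps0 = (hbar**2 / (2 * m_atom)) * (2 * math.pi / L) ** 2

beta = math.pi * L * da / (8 * a**2)
xi0 = 2 / (9 * beta**2)
# same root in the explicit form quoted in the text
assert abs(xi0 - 128 * a**4 / (9 * math.pi**2 * da**2 * L**2)) < 1e-9 * xi0

n = xi0 * eps0 / (2 * g)
n_closed = 32 * a**3 / (9 * math.pi * da**2 * L**4)   # Eq. (xi01)
print(abs(n / n_closed - 1) < 1e-9)  # True
```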
We now compare our predictions to the 1D results obtained in~\cite{Petrov16}. To this end, we have to express the 3D interaction parameter in terms of the 1D coupling,
$g_\mathrm{1d}$. From Ref.~\cite{olshanii}, we infer that, for $a/L \ll 1$, $g_\mathrm{1d}$ can be obtained by averaging the 3D interaction over the density profile
in the tight directions, yielding $g_\mathrm{1d} = g/L^2 $. Using this relation, we find that in the quasi-1D regime the energy and equilibrium density of the droplet
have the same form as given in~\cite{Petrov16,dopiska1}.
\textit{Validity of the approach.}---We now briefly discuss the validity of our results. The Bogoliubov approach is valid as long as the LHY energy correction
$\frac{\varepsilon_0}{L^3} e_\mathrm{LHY} $ is much smaller than the characteristic mean-field energy density $gn^2$ (for simplicity we take $g_{11}=g_{22}=g$). This
condition reads $\frac{\pi L}{2 a} \xi^2 \gg |e_\mathrm{LHY}|$. For both situations analyzed in our paper, $e_\mathrm{LHY}^\mathrm{1d,2d}$ is practically equal to
$e_\mathrm{LHY}^\mathrm{3d}$ for $\xi > 0.3$. Then, the condition is equivalent to the 3D condition, namely, $na^3 = \xi \frac{a^2}{L^2} \ll 1$, which we assume. For
smaller values of $\xi$, we can use the analytical formulas given in Eqs.~(\ref{e2d}) and~(\ref{e1d}), which lead to the condition $|\log(\xi)| \ll \frac{2L}{a}$ in the
3D-2D case and $\sqrt{\xi} \gg \frac{a}{L}$ in the 3D-1D case.
\textit{Conclusions.}---We analyzed the so far unexplored formation of quantum droplets in Bose-Bose mixtures at the dimensional crossover from 3D to 2D or 1D. Under the
assumption that the scattering processes are 3D, which happens when the spatial extent of the tight confinement $L$ is much larger than the 3D scattering length $a$, we have
found expressions for the beyond-mean-field correction to the system energy. These corrections generalize the Lee-Huang-Yang term as obtained for the 3D BEC. We show
how this energy smoothly changes as a function of the parameter $\xi=(g_{11} n_1 + g_{22} n_2)/\varepsilon_0$.
The analysis of the 3D-2D and 3D-1D crossovers revealed that the quasi-2D and quasi-1D regimes are accessed for values of $\xi\lesssim0.03$ and $\xi \lesssim 0.0004$,
respectively, which are much smaller than expected. The naive prediction, suggesting that for $\xi < 1$ the excitations in the confined directions are practically
frozen and the system should be quasi-low-dimensional, does not hold. Counter-intuitively, we find that for $\xi > 0.3$ the LHY correction is practically equal to the
one obtained in the 3D case.
Our results provide the working parameters for the planned experiments which aim at exploring the low-dimensional formation of droplets in Bose-Bose mixtures. The quasi-2D
regime, as compared to quasi-1D, is accessible for a broader range of~$\xi$, i.e., for $\xi \lesssim 0.03$, which is, however, still experimentally demanding. The
quasi-1D regime is attained for a much smaller range of~$\xi$, i.e., $\xi\lesssim0.0004$, which poses a severe experimental constraint. However, our work reveals that
the as yet unexplored 3D-1D crossover supports exotic droplets, different from both the 3D and quasi-1D cases, and formed only due to quantum fluctuations. Such droplets also exist
at the border of the quasi-2D regime. The results we present pave the way for exploring new states of matter in low-dimensional systems, in which quantum fluctuations
play a prominent role.
\begin{acknowledgments}
P.Z. and Z.I. acknowledge the support from the Polish National Science Center Grant No. 2015/17/B/ST2/00592.
M.P. and T.W. were supported by the Polish National Science Center Grant No. 2014/14/M/ST2/00015.
M.G. acknowledges support from the (Polish) National Science Center Grant UMO-2017/25/B/ST2/01943 and from the EU Horizon 2020-FET QUIC 641122.
\end{acknowledgments}
Alfrederick Smith Hatch (July 24, 1829 – May 13, 1904) was an American investment banker who founded Fisk & Hatch along with Harvey Fisk. Hatch was the President of the New York Stock Exchange from 1883 to 1884.
Life
Hatch was born in Vermont to Horace Hatch (1788–1873) and Mary Yates Smith (1798–1859).
In March 1862, Hatch and Harvey Fisk founded a finance and insurance company called Fisk & Hatch. The firm initially focused almost exclusively on government bonds. Both men were short on capital at the time and relied on $15,000 worth of loans from family and friends. Hatch and Fisk found success as sub-agents for Jay Cooke & Company, popularizing and selling millions of dollars in government war securities in New York and New England. The two quickly rose to the front rank of bond dealers.
In 1871, Hatch commissioned a portrait of his family at his house on Park Avenue and 37th Street. In 1872, he donated a building he owned at 316 Water Street to Jerry McAuley and his wife, Maria. McAuley used the building to establish a rescue mission for homeless men called the "Helping Hand for Men". This establishment would later become the New York City Rescue Mission.
Hatch was the President of the New York Stock Exchange from 1883 to 1884.
Death
Hatch died on May 13, 1904, at the age of 74.
References
External links
1829 births
1904 deaths
Businesspeople from Vermont
Presidents of the New York Stock Exchange
American investment bankers
\section{Introduction}
The role of environment on galaxy formation and evolution is one of
the most complex issues in cosmology. The present day Universe is
filled with billions of galaxies which are distributed across a vast
network namely the `cosmic web' \citep{bond96} that stretches through
the Universe. This spectacular network of galaxies is made up of
interconnected filaments, walls and nodes which are encompassed by
vast empty regions. The galaxies broadly form and evolve in these four
types of environments inside the cosmic web. One can characterize the
environment of a galaxy with the local density at its location. The
role of local density on galaxy properties is well studied in the
literature \citep{Oemler, dress, gotto, davis2, guzo, zevi, hog1, blan1,
park1, einas2, kauffmann, mocine, koyama, bamford}. It is now well
known that galaxy properties exhibit a strong dependence on the
local density of their environment. However, the role of the large-scale
environment in the formation and evolution of galaxies still remains a
debated issue.
The growth of primordial density perturbations leads to the collapse of
dark matter halos in a hierarchical fashion. It is now widely accepted
following the seminal work by \citet{white78} that galaxies form at
the centre of the dark matter halos by radiative cooling and
condensation. One of the central postulates of the halo model
\citep{neyman, mo, ma, seljak, scocci2, cooray, berlind, yang1} is
that the halo mass determines all the properties of a
galaxy. But this need not be strictly true. The halos
are assembled through accretion and merger in different parts of the
cosmic web. Different accretion and merger histories of the halos
across different environments lead to assembly bias \citep{croton,
gao07, musso, vakili}, which manifests in the clustering of these
halos. The early-forming low mass halos in simulations
are found to be more strongly clustered than the late-forming halos
of similar mass.
The presence of beyond halo mass effect in
observations, is a matter of considerable debate due to conflicting
results obtained by various studies on galactic conformity and
assembly bias. A study \citep{zehavi11} of the colour and luminosity
dependence of galaxy clustering in SDSS find that most observed
trends can be explained by halo occupation distribution (HOD)
modelling within a $\Lambda$CDM cosmology. \citet{alam19} study the
dependence of clustering and quenching on the cosmic web using SDSS
and show that the observed cosmic web dependence in the SDSS can be
largely explained by HOD modelling without introducing any galaxy
assembly bias. \citet{yan13} show that the galaxy properties do not
depend on the tidal environment of the cosmic
web. \citet{paranjape18} show that any observed dependence of galaxy
properties on the tidal environment can be traced to those inherited
from the assembly bias of their parent halos and additional effects
of large-scale environment must be weak. \citet{lin16} analyze the
clustering of early and late forming halo samples using SDSS and
find no significant evidence for assembly bias. \citet{abbas07}
show that environmental effects are also present in Poisson cluster
models and the halo bias in these models are surprisingly similar to
the standard models of halo bias \citep{mo}. A number of
observations suggest that the properties of satellite galaxies are
strongly correlated with the central galaxy \citep{weinmann,
kauffmann10, wang10, wangwhite}. \citet{tinker17} study the effect
of halo formation history on quenching process in central galaxies
and find a statistically significant impact at high masses and no
impact at low masses. \citet{kauffmann13} find that the star
formation rates in galaxies can be correlated up to 4
Mpc. \citet{sin17} re-examine the nature of galactic conformity
presented in \citet{kauffmann13} and find that such effects can
arise due to selection biases. \citet{paranjape15} prescribed a
tunable model within the HOD framework to introduce varying levels of
conformity in mock galaxy catalogues and find no conclusive
evidence of galaxy assembly bias on 4 Mpc scales. \citet{miyatake} study
the halo bias of SDSS galaxy clusters using projected
auto-correlation function and weak lensing and find that they differ
by a factor of $1.5$, which could be a significant evidence of
assembly bias. \citet{zu17} study the possible origin of the
discrepancy between the large scale halo bias of galaxy clusters
\citep{miyatake} and find that these differences mostly arise due to
projection effects. A recent work by \citet{kerscher} reported the
existence of galactic conformity out to 40 Mpc. \citet{montedorta}
analyze LRGs from SDSS-III BOSS survey and find a strong
observational evidence of assembly bias.
Some other works \citep{lupa, scudder, pandey2, pandey3, darvish,
filho} report significant dependence of the luminosity, star
formation rate and metallicity of galaxies on the large-scale
environment. A recent study by \citet{Lee} shows that both the least
and most luminous elliptical galaxies in sheetlike structures inhabit
the regions with highest tidal coherence. It has been shown that the
large-scale environments in the cosmic web influence the mass, shape
and spin of dark matter halos \citep{hahn1, hahn2}. A number of
studies \citep{trujillo, erdogdu, paz, jones, tempel1,tempel2} suggest
alignment of halo shapes and spins with filaments, which can extend
up to $40$ Mpc \citep{chen}. In a recent study, \citet{pandey17} use
information theoretic measures to show that the galaxy morphology and
environment in the SDSS exhibit a synergic interaction at least upto a
length scale of $\sim 30 {\, h^{-1}\, {\rm Mpc}}$. A more recent study \citep{pandey20}
finds that the fraction of red galaxies in sheets and filaments
increases with the size of these large-scale structures. Any such
large-scale correlations beyond the extent of the dark matter halo are
unlikely to be explained by direct interactions between them. All
these observations suggest that the role of environment on galaxy
formation and evolution may not be limited to local density alone. The
morphology and coherence of large-scale patterns in the cosmic web may
play a significant role in determining the galaxy properties and their
evolution.
\citet{pandey17} use mutual information to quantify the large-scale
environmental dependence of galaxy morphology. They find a non-zero
mutual information between morphology of galaxies and their
environment, which decreases with increasing length scale but remains
non-zero throughout the entire range of length scales probed. In the present
work, we would like to test the statistical significance of mutual
information between morphology and environment and study its validity
and effectiveness as a measure of large-scale environmental dependence
of galaxy properties for future studies.
We propose a method where we destroy the correlation between
morphology and environment by randomizing the morphological
classification and measure the mutual information to test its
statistical significance. We also divide the data into cubes and
shuffle them around many times to test how the mutual information
between morphology and environment are affected by the shuffling
procedure. We carry out these tests using data from the Galaxy Zoo
database \citep{lintott08}. Further, we carry out a controlled test
using a semi-analytic galaxy catalogue \citep{henriques15} based on
the Millennium simulation \citep{springel05}. The galaxies in these
mock datasets are selectively assigned morphology based on their local
density. We measure the mutual information between morphology and
environment in each case and try to understand the statistical
significance of mutual information in the present context. The goal of
the present analysis is to explore the potential of mutual information
as a statistical measure to reveal the large-scale correlations
between environment and morphology if any.
A $\Lambda$CDM cosmological model with $\Omega_{m0}=0.315$,
$\Omega_{\Lambda0}=0.685$ and $h=0.674$ \citep{planck18} is used to
convert redshifts to distances throughout the analysis.
\section{DATA}
\subsection{SDSS DR16}
We use data from the $16^{th}$ data release \cite{ahumada19} of the Sloan
Digital Sky Survey (SDSS) \cite{york00}. DR16 is the final data
release of the fourth phase of SDSS which covers more than nine
thousand square degrees of the sky and provides spectral information
for more than two million galaxies. This includes an accumulation of
data collected for new targets as well as targets from all prior data
releases of SDSS. The data is downloaded through {\it SciServer:
CASjobs}\footnote{https://skyserver.sdss.org/casjobs/} which is a
SQL based interface for public access. We identify a contiguous region
within $0^{\circ} \leq \delta \leq 60^{\circ}$ \& $ 135^{\circ} \leq
\alpha \leq 225^{\circ}$ and select all galaxies with the apparent
r-band Petrosian magnitude limit $m_r<17.77$ within that region. Here
$\alpha$ and $\delta$ are the right ascension and declination
respectively. We combine the three tables {\it SpecObjAll}, {\it
Photoz} and {\it ZooSpec} of SDSS database to get the required
information about each of these selected galaxies. We retrieve the
spectroscopic and photometric information of galaxies from the {\it
SpecObjAll} and {\it Photoz} tables, respectively. The {\it ZooSpec}
table provides the morphological classifications for the SDSS galaxies
from the Galaxy Zoo project\footnote{http://zoo1.galaxyzoo.org}.
Galaxy zoo \citep{lintott08,lintott11} is a platform where millions of
registered volunteers vote for visual morphological classification of
galaxies. These votes contribute in identification of galaxy
morphologies through a structured algorithm. The galaxies in galaxy
zoo are flagged as {\it spiral}, {\it elliptical} or {\it uncertain}
depending on the vote fractions. We only consider the galaxies which
are flagged as {\it spiral} or {\it elliptical} with debiased vote
fraction $>0.8$ \citep{bamford}. These cuts yield a total $136155$
galaxies within redshift $z<0.3$. We then construct a volume limited
sample using a r-band absolute magnitude cut $M_r \leq -20.5$. This
provides us $44049$ galaxies within $z<0.096$. The present analysis
requires a cubic region. We extract a cubic region of side $145 {\, h^{-1}\, {\rm Mpc}}$
from the volume limited sample which contains $14558$ galaxies. The
resulting datacube consists of $11171$ spiral galaxies and $3387$
elliptical galaxies. The mean intergalactic separation of the
galaxies in this sample is $\sim 6 {\, h^{-1}\, {\rm Mpc}}$.
\begin{figure*}
\resizebox{7.6 cm}{!}{\rotatebox{0}{\includegraphics{zM}}} \hspace{0.5cm}
\resizebox{7.6 cm}{!}{\rotatebox{0}{\includegraphics{dist}}} \\
\resizebox{7.4 cm}{!}{\rotatebox{0}{\includegraphics{galcube_3d}}} \hspace{0.5cm}
\resizebox{7.0 cm}{!}{\rotatebox{0}{\includegraphics{cdent}}} \\
\caption{The top left panel shows the definition of the volume limited
sample in the redshift-absolute magnitude plane. The top right panel
shows the projected view of the galaxies in the entire volume
limited sample (green dots) and those inside the cubic region (blue
dots). The bottom left panel shows the distributions of spirals
(blue dots) and ellipticals (brown dots) in the extracted datacube
from the volume limited sample. The bottom right panel shows the
variation in number density inside the datacube along each of the
3-axes. The number densities are computed in slices of thickness
$10.36 {\, h^{-1}\, {\rm Mpc}}$.}
\label{fig:sample}
\end{figure*}
\begin{figure*}
\resizebox{7.6 cm}{!}{\rotatebox{0}{\includegraphics{Imxy_ran}}}
\resizebox{7.6 cm}{!}{\rotatebox{0}{\includegraphics{t_ran}}}
\caption{The left panel of this figure shows the mutual information
$I(X;Y)$ as a function of length scales for the original SDSS
datacube and the SDSS datacube where the morphological information
of galaxies are randomized. The results for mock Poisson
distribution with randomly assigned morphology are also shown
together for a comparison. The $1-\sigma$ errorbars for the original
SDSS data are estimated using $10$ jack-knife samples drawn from the
same dataset. For the SDSS random and Poisson random datasets each,
we estimate the $1-\sigma$ errobars using 10 different
realizations. The right panel of this figure shows the t score as a
function of length scales, obtained from a $t$-test which compares
the SDSS galaxy distribution with randomized morphological
classification to the SDSS galaxy distribution with actual
morphological classification.}
\label{fig:Imxy_ran}
\end{figure*}
\begin{figure*}
\resizebox{8 cm}{!}{\rotatebox{0}{\includegraphics{galdist_3d}}}
\resizebox{8
cm}{!}{\rotatebox{0}{\includegraphics{shufdist_3d_ns3}}}
\\ \resizebox{8
cm}{!}{\rotatebox{0}{\includegraphics{shufdist_3d_ns7}}}
\resizebox{8
cm}{!}{\rotatebox{0}{\includegraphics{shufdist_3d_ns15}}}
\caption{This figure shows the distributions of spirals (blue dots)
and ellipticals (brown dots) in the original unshuffled SDSS
datacube along with one realization of shuffled datacube for three
different values of shuffling lengths ($l_s$). The value of $l_s$ is
decided by $n_s$ which is the number of subcubes that would fit
along each dimension. The size of shuffling units in each case is
shown with a subcube (in red) at a corner of the respective shuffled
realization.}
\label{fig:3Dv_shuf}
\end{figure*}
\begin{figure*}
\resizebox{7.6 cm}{!}{\rotatebox{0}{\includegraphics{Imxy_shuf}}}
\resizebox{7.6 cm}{!}{\rotatebox{0}{\includegraphics{t_shuf}}}
\caption{The left panel of this figure shows the mutual information
$I(X;Y)$ as a function of length scales in the unshuffled SDSS
datacube along with that from the shuffled realizations with three
different shuffling length. The $1-\sigma$ errorbars shown for the
unshuffled SDSS data are obtained from $10$ jack-knife samples drawn
from the same dataset. For the SDSS shuffled datasets and Poisson
random datasets, the $1-\sigma$ errorbars are estimated using 10
different realizations. For each shuffling length, the grid sizes
are chosen so that they are not equal to or integral multiples of the
shuffling length, and vice versa. The right panel of this figure
shows the t score at different length scales, obtained from a $t$
test comparing the shuffled distributions with the original
unshuffled galaxy distribution from SDSS.}
\label{fig:Imxy_shuf}
\end{figure*}
\begin{figure*}
\resizebox{8 cm}{!}{\rotatebox{0}{\includegraphics{rsdmill_3d_f0p3}}}
\resizebox{8 cm}{!}{\rotatebox{0}{\includegraphics{rsdmill_3d_f0p5}}}
\resizebox{8 cm}{!}{\rotatebox{0}{\includegraphics{rsdmill_3d_f1p0}}}
\caption{This figure shows the distributions of spirals (greenish dots)
and ellipticals (brown dots) in a realization of the mock SDSS
datacube from the SAM, where the galaxies are assigned morphology based
on the density at their locations. The three datacubes correspond
to three different schemes for density-dependent morphology
assignment.}
\label{fig:3Dv_mill}
\end{figure*}
\begin{figure*}
\resizebox{7.6 cm}{!}{\rotatebox{0}{\includegraphics{Imxy_mill_rsd}}}
\resizebox{7.6 cm}{!}{\rotatebox{0}{\includegraphics{t_mill_rsd}}}
\caption{The left panel of this figure shows mutual information
$I(X;Y)$ as a function of length scales for different
morphology-density relations. $1-\sigma$ errorbars for the
Millennium galaxies are estimated using data from 8 non-overlapping
mock datacubes from the SAM catalogue. The $1-\sigma$ errorbars
corresponding to the Poisson dataset are estimated using 8 mock
datacubes containing random distributions. The right panel of this
figure shows the $t$ score at different length scales, obtained from
a $t$-test comparing the distributions with density-dependent
morphological tagging to the one without any density dependence of
morphology.}
\label{fig:Imxy_mill}
\end{figure*}
\subsection{Millennium Run Simulation}
Galaxy formation and evolution involve many complex physical
processes such as gas cooling, star formation, supernovae feedback,
metal enrichment, merging and morphological evolution. Semi-analytic
models (SAMs) of galaxy formation
\citep{white91,kauff1,cole1,bagh,somervil,benson} are a powerful tool
that parametrizes these complex physical processes in terms of simple
prescriptions, follows the dark matter merger trees over time and finally
provides statistical predictions of galaxy properties at any given
epoch. In the present work, we use data from a semi-analytic
galaxy catalogue \citep{henriques15} derived from the Millennium run
simulation (MRS) \citep{springel05}. \citet{henriques15} updated the
Munich model of galaxy formation using the values of cosmological
parameters from the Planck first-year data. This model provides a
better fit to the observed stellar mass functions and reproduces the
recent data on the abundance and passive fractions of galaxies over
the redshift range $0 \le z \le 3$ better than earlier models. We
use SQL to extract the required data from the Millennium
database \footnote{https://www.mpa.mpa-garching.mpg.de/millennium/}.
We use the peculiar velocities of the Millennium galaxies to map them
in redshift space and extract all the galaxies with $M_r \leq -20.5$.
Finally, we construct 8 mock SDSS datacubes of side $145 {\, h^{-1}\, {\rm Mpc}}$,
each containing a total of $14558$ galaxies.
\subsection{Random distributions}
We simulate 10 Poisson distributions each within a cube of side $145
{\, h^{-1}\, {\rm Mpc}}$. $14558$ random data points are generated within each of the 10
datacubes. For each cube, we randomly label $3387$ points as
ellipticals and the rest as spirals. The number
of galaxies and the ratio of spirals to ellipticals in these random
data sets are identical to that observed in the original SDSS
datacube.
\section{Method of analysis}
\subsection{Mutual information between environment and morphology}
We consider a cubic region of side $L {\, h^{-1}\, {\rm Mpc}}$ extracted from the volume
limited sample prepared from SDSS DR16. We subdivide the entire cube
into $N_{d}$ voxels of size $d {\, h^{-1}\, {\rm Mpc}} \times d {\, h^{-1}\, {\rm Mpc}} \times d {\, h^{-1}\, {\rm Mpc}}$
each. We define a discrete random variable $X$ with $N_d$ outcomes
$\{ X_i:i=1,...N_d \}$. The probability of finding a randomly selected
galaxy in the $i^{th}$ voxel is $p(X_i)=\frac{N_i}{N}$, where $N_i$ is
the number of galaxies in the $i^{th}$ voxel and $N$ is the total
number of galaxies in the cube. The random variable $X$ thus defines
the environment of a galaxy at a specific length scale $d {\, h^{-1}\, {\rm Mpc}}$.
The information entropy \citep{shannon48} associated with the random
variable $X$ at scale $d$ is given by
\begin{eqnarray}
H(X)& = &-\sum_{i=1}^{N_d} p(X_i) \log p(X_i) \nonumber \\ &=&\log N -
\frac{\sum_{i=1}^{N_d} N_i \log N_i}{N}
\label{eqn:Hx}
\end{eqnarray}
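To make the estimator concrete, the computation of $H(X)$ in Eq.~(\ref{eqn:Hx}) can be sketched as follows. This is a minimal illustration, and the function and variable names are ours rather than from any released code.

```python
import numpy as np

def voxel_entropy(positions, box_size, grid_size):
    """Shannon entropy H(X) of galaxy counts on a cubic voxel grid.

    positions : (N, 3) array of galaxy coordinates in [0, box_size).
    grid_size : voxel side d, so the cube holds roughly (box_size/d)**3 voxels.
    """
    n_bins = int(round(box_size / grid_size))
    edges = np.linspace(0.0, box_size, n_bins + 1)
    counts, _ = np.histogramdd(positions, bins=(edges, edges, edges))
    n_total = positions.shape[0]
    n_i = counts[counts > 0].ravel()
    # H(X) = log N - (sum_i N_i log N_i) / N, the form used in the text
    return np.log(n_total) - np.sum(n_i * np.log(n_i)) / n_total
```

If all galaxies fall in a single voxel the entropy vanishes, while a perfectly uniform occupation of all voxels maximizes it, as expected from the definition.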
We use another variable $Y$ to describe the morphology of the
galaxies. We have only considered the galaxies with a classified
morphology and hence there are only two possible outcomes:
{\it{spiral}} or {\it{elliptical}}. If the cube consists of $N_{sp}$
spiral galaxies and $N_{el}$ elliptical galaxies then the information
entropy associated with $Y$ will be
\begin{eqnarray}
H(Y)& = &- \left( \frac{N_{sp}}{N} \log \frac{N_{sp}}{N} + \frac{N_{el}}{N} \log \frac{N_{el}}{N} \right) \nonumber \\
&=& \log N- \frac{ N_{sp} \log N_{sp} + N_{el} \log N_{el}}{N}
\label{eqn:Hy}
\end{eqnarray}
Given the morphology of each galaxy, one can determine the
mutual information between the morphology of the galaxies and their
environment.
The mutual information $I(X;Y)$ between environment and morphology is,
\begin{eqnarray}
I(X;Y) & = & \sum^{N_{d}}_{i=1} \sum^{2}_{j=1} \, p(X_i,Y_j) \, \log\, \frac{p(X_i,Y_j)}{p(X_i)p(Y_j)} \\
\nonumber & = & H(X)+H(Y)-H(X,Y)
\label{eq:Ixy}
\end{eqnarray}
$H(X)$ and $H(Y)$ are the individual entropies associated with the
random variables $X$ and $Y$, respectively. The joint entropy
$H(X,Y)\leq H(X)+H(Y)$ where the equality holds only when $X$ and $Y$
are independent. The joint entropy is symmetric i.e. $H(X,Y)=H(Y,X)$.
If $N_{ij}$ is the number of galaxies in the $i^{th}$ voxel that
belongs to the $j^{th}$ morphological class ($j=1$ for spiral and
$j=2$ for elliptical), then the joint entropy $H(X,Y)$ is given by,
\begin{eqnarray}
H(X,Y) &=& -\sum_{i=1}^{N_d} \sum_{j=1}^{2} p(X_i,Y_j) \log p(X_i,Y_j) \nonumber \\
&=&\log N - \frac{1}{N}\sum_{i=1}^{N_d} \sum_{j=1}^{2} N_{ij} \log N_{ij}
\label{eqn:Hxy}
\end{eqnarray}
where
\begin{eqnarray}
\sum_{i=1}^{N_d} \sum_{j=1}^{2} N_{ij}=N
\label{eqn:Nij}
\end{eqnarray}
Here $p(X_i,Y_j)=p(X_i|Y_j)p(Y_j)=\frac{N_{ij}}{N}$ is the joint
probability derived from the conditional probability using Bayes'
theorem.
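The plug-in estimate of $I(X;Y)$ from the joint count table $N_{ij}$ can be sketched as below; the names are ours, and the helper mirrors Eqs.~(\ref{eqn:Hx})-(\ref{eqn:Hxy}).

```python
import numpy as np

def mutual_information(counts_ij):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) from a joint count table N_ij
    (rows: voxels, columns: the two morphology classes), using the
    plug-in probabilities p(X_i, Y_j) = N_ij / N."""
    counts_ij = np.asarray(counts_ij, dtype=float)
    p_xy = counts_ij / counts_ij.sum()
    p_x = p_xy.sum(axis=1)  # marginal over morphology -> environment
    p_y = p_xy.sum(axis=0)  # marginal over voxels -> morphology

    def entropy(p):
        p = p[p > 0]  # 0 log 0 is taken as 0
        return -np.sum(p * np.log(p))

    return entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())
```

When the joint table factorizes into its marginals the estimate vanishes, and it reaches $\log 2$ when each voxel hosts a single morphology class with equal overall class fractions.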
The mutual information between two random variables measures the
reduction in uncertainty in the knowledge of one random variable given
the knowledge of the other. A higher value of the mutual information
conveys a greater degree of association between
the two random variables. One specific advantage of mutual information
over the traditional tools like covariance analysis is that it does
not require any assumptions regarding the nature of the random
variables and their relationship.
\subsection{Randomizing the morphological classification of galaxies}
We consider the SDSS galaxies in the datacube and randomly relabel
them as spirals or ellipticals, disregarding their actual
morphology. We randomly pick $3387$ SDSS galaxies and tag them as
ellipticals. The rest of the galaxies in the SDSS datacube are labelled as
spirals. The number of spirals and ellipticals in the resulting
distribution thus remains the same as in the original distribution.
We generate 10 such datacubes with randomly assigned galaxy morphology
from the original SDSS datacube and measure the mutual information
between environment and morphology in each of them. We would like to
compare the mutual information $I(X;Y)$ measured in the original SDSS
data with that from the SDSS dataset with randomly assigned morphology
to study the statistical significance of $I(X;Y)$ and its scale
dependence.
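The count-preserving random tagging can be sketched as below; the function name, label convention (1 for elliptical, 0 for spiral) and seed handling are our choices.

```python
import numpy as np

def random_labels(n_total=14558, n_elliptical=3387, seed=0):
    """Randomly tag n_elliptical of the n_total galaxies as ellipticals
    (label 1) and the rest as spirals (label 0), preserving the observed
    class counts of the SDSS datacube."""
    rng = np.random.default_rng(seed)
    labels = np.zeros(n_total, dtype=int)
    idx = rng.choice(n_total, size=n_elliptical, replace=False)
    labels[idx] = 1
    return labels
```

Varying the seed yields the independent randomized realizations over which the mean and scatter of $I(X;Y)$ are measured.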
\subsection{Shuffling the spatial distribution of galaxies}
We divide the SDSS datacube of side $L {\, h^{-1}\, {\rm Mpc}}$ into $N_c=n_s^3$ smaller
subcubes of size $l_s=\frac{L}{n_s} {\, h^{-1}\, {\rm Mpc}}$. Each of these smaller
subcubes along with all the galaxies within them are rotated around
three different axes by different angles which are random multiples of
$90^{\circ}$. The rotated subcubes are then randomly interchanged with
any other subcubes inside the datacube. This process of arbitrary
rotation followed by random swapping is repeated for $100 \times N_c$
times to generate a {\it{Shuffled}} realization \citep{bhav} from the
original SDSS datacube. We carry out the shuffling procedure for three
different choices $n_s=3$, $n_s=7$ and $n_s=15$ which corresponds to
shuffling length $l_s=48.33 {\, h^{-1}\, {\rm Mpc}}$, $l_s= 20.71{\, h^{-1}\, {\rm Mpc}}$ and $l_s=
9.67{\, h^{-1}\, {\rm Mpc}}$, respectively. We generate 10 shuffled realizations for each
value of the shuffling length ($l_s$). Our goal is to compare the
mutual information $I(X;Y)$ measured in the original SDSS data with
that from the shuffled datasets to test the statistical significance
of $I(X;Y)$ on different length scales.
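The rotate-and-swap procedure can be sketched as follows. This is a simplified illustration of the steps described above; the function and argument names are ours, and the subcube membership is tracked explicitly rather than recomputed after each move.

```python
import numpy as np

def shuffle_datacube(positions, box_size, n_s, n_iter_factor=100, seed=0):
    """Shuffled realization: subcubes of side l_s = L/n_s are rotated by
    random multiples of 90 degrees and randomly swapped, repeated
    n_iter_factor * n_s**3 times."""
    rng = np.random.default_rng(seed)
    l_s = box_size / n_s
    pos = positions.copy()
    cell = np.clip(np.floor(pos / l_s).astype(int), 0, n_s - 1)
    cell_id = cell[:, 0] * n_s**2 + cell[:, 1] * n_s + cell[:, 2]
    n_c = n_s**3

    def origin_of(cid):
        return np.array(np.unravel_index(cid, (n_s, n_s, n_s))) * l_s

    for _ in range(n_iter_factor * n_c):
        a, b = rng.choice(n_c, size=2, replace=False)
        # rotate the galaxies of each chosen subcube about a random axis
        for cid in (a, b):
            mask = cell_id == cid
            local = pos[mask] - origin_of(cid)
            axis = rng.integers(3)
            u, v = [i for i in range(3) if i != axis]
            for _ in range(rng.integers(4)):
                # 90-degree rotation in the (u, v) plane of the subcube
                local[:, u], local[:, v] = l_s - local[:, v], local[:, u].copy()
            pos[mask] = local + origin_of(cid)
        # swap the two subcubes by translating their galaxies
        mask_a, mask_b = cell_id == a, cell_id == b
        shift = origin_of(b) - origin_of(a)
        pos[mask_a] += shift
        pos[mask_b] -= shift
        cell_id[mask_a], cell_id[mask_b] = b, a
    return pos
```

The shuffling preserves the galaxy count and keeps every galaxy inside the box, while erasing correlations on scales larger than $l_s$.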
\subsection{Simulating different morphology-density correlations}
The morphology-density relation is a well-known phenomenon which
indicates that environment plays a crucial role in deciding galaxy
morphology. We would like to test whether the mutual information $I(X;Y)$
can capture the strength of the morphology-density relation in the galaxy
distribution. We construct a set of SDSS mock datacubes from the
semi-analytic galaxy catalogue as discussed in Section 2.2.
We compute the local number density at the location of each galaxy
using the $k^{th}$ nearest neighbour method \citep{casertano85}. We find
the distance to the $k^{th}$ nearest neighbour of each galaxy. The
local number density around a galaxy is estimated as,
\begin{eqnarray}
n_k = \frac{k-1}{V(r_k)}
\label{eqn:knn}
\end{eqnarray}
Here $r_k$ is the distance to the $k^{th}$ nearest neighbour and
$V(r_k)=\frac{4}{3}\pi r_k^3$. We have used $k=10$ in this analysis.
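The estimator of Eq.~(\ref{eqn:knn}) can be sketched as below. This is a brute-force $O(N^2)$ illustration with names of our choosing; for large samples a tree-based neighbour search would be used instead.

```python
import numpy as np

def knn_density(positions, k=10):
    """Local number density via the k-th nearest neighbour:
    n_k = (k - 1) / V(r_k), with V(r_k) = (4/3) * pi * r_k**3."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.sqrt((diff**2).sum(axis=-1))
    # after sorting, column 0 is the galaxy itself (distance 0),
    # so column k is the distance to the k-th nearest neighbour
    r_k = np.sort(dist, axis=1)[:, k]
    return (k - 1) / ((4.0 / 3.0) * np.pi * r_k**3)
```

Galaxies in a tightly packed region have a small $r_k$ and hence a high estimated density, which is what the density-dependent morphology assignment below relies on.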
Our goal is to test if $I(X;Y)$ can capture the degree and nature of
correlation between environment (X) and morphology (Y). The elliptical
galaxies are known to reside preferentially in denser
environments. Each mock SDSS datacube from the SAM contains a total of
$14558$ galaxies. We would like to assign a morphology to each of
these galaxies. To do so, we first sort the number densities at the
locations of the galaxies in descending order. We consider three
different schemes, which are as follows:
(i) We randomly label $3387$ galaxies as ellipticals from the top 30\%
high density locations and consider the rest of the $11171$ galaxies
as spirals.
(ii) We randomly label $3387$ galaxies as ellipticals from the top 50\%
high density locations and consider the rest of the $11171$ galaxies
as spirals.
(iii) We randomly label $3387$ galaxies as ellipticals irrespective of
their local density and consider the rest of the $11171$ galaxies as
spirals.
The morphology-density relation in case (i) is stronger than case (ii)
and there is no morphology-density relation in case (iii). We would
like to test if mutual information $I(X;Y)$ can correctly capture the
degree of association between environment and morphology in these
distributions.
\subsection{Testing statistical significance of the difference in mutual information with $t$ test}
We use an equal-variance $t$-test, which is applicable when the two
datasets consist of the same number of samples or have similar
variances. We calculate the $t$ score at each length scale using the
following formula,
\begin{eqnarray}
t= \frac{|\bar{X_1}-\bar{X_2}|}{\sigma_s \sqrt{\frac{1}{n_1}+\frac{1}{n_2}}}
\label{eqn:ttest}
\end{eqnarray}
where $\sigma_s =
\sqrt{\frac{(n_1-1)\sigma_1^2+(n_2-1)\sigma_2^2}{n_1+n_2-2}}$,
$\bar{X_1}$ and $\bar{X_2}$ are the average values, $\sigma_1$ and
$\sigma_2$ are the standard deviations, and $n_1$ and $n_2$ are the numbers
of data points associated with the two datasets at any given
length scale.
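The pooled $t$ score of Eq.~(\ref{eqn:ttest}) can be computed as sketched below; the function name is ours.

```python
import numpy as np

def pooled_t_score(x1, x2):
    """Equal-variance two-sample t score for the mutual information
    measured in two sets of realizations at one length scale."""
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    n1, n2 = len(x1), len(x2)
    s1, s2 = x1.std(ddof=1), x2.std(ddof=1)
    # pooled standard deviation sigma_s
    sigma_s = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return abs(x1.mean() - x2.mean()) / (sigma_s * np.sqrt(1.0/n1 + 1.0/n2))
```

Identical samples give a $t$ score of zero; the score grows with the separation of the two means in units of the pooled scatter.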
We would like to test the null hypothesis that the average values of
the mutual information in the original and the randomized or shuffled
distributions at a given length scale are not significantly
different. We find that randomizing or shuffling the data always leads
to a reduction in the mutual information between morphology and
environment. We use a one-tailed test with significance level
$\alpha=0.0005$, which corresponds to a confidence level of
$99.95\%$. The number of degrees of freedom in this test is $(n_1+n_2-2)$. The
same test is also applied to assess the statistical significance of
$I(X;Y)$ in mock datasets where a morphology-density relation is
introduced in a controlled manner. We compute the $t$ score at each
length scale using \autoref{eqn:ttest} and determine the associated $p$
value to test the statistical significance.
\begin{table*}{}
\caption{This table shows the $t$ score and the associated $p$ value
at each length scale when we compare the mutual information between
actual SDSS data and SDSS data with randomized morphological
information.}
\label{tab:ttest1}
\begin{tabular}{ccc}
\hline
Grid size ( ${\, h^{-1}\, {\rm Mpc}}$ ) & $t$ score & $p$ value \\
\hline
$12.08 $ & $13.911$ & $2.26 \times 10^{-11}$\\
$13.18 $ & $13.417$ & $4.10 \times 10^{-11}$\\
$14.50 $ & $16.125$ & $1.90 \times 10^{-12}$\\
$16.11 $ & $15.692$ & $3.02 \times 10^{-12}$\\
$18.12 $ & $20.853$ & $2.34 \times 10^{-14}$\\
$20.71 $ & $18.698$ & $1.53 \times 10^{-13}$\\
$24.17 $ & $20.088$ & $4.46 \times 10^{-14}$\\
$29.00 $ & $28.934$ & $7.59 \times 10^{-17}$\\
$36.25 $ & $33.613$ & $5.36 \times 10^{-18}$\\
$48.33 $ & $30.151$ & $3.67 \times 10^{-17}$\\
$72.50 $ & $30.736$ & $2.61 \times 10^{-17}$\\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\caption{This table shows the $t$ score and the associated $p$ value
  at each length scale when we compare the mutual information between
  the actual SDSS data and its shuffled realizations for different
  shuffling lengths. The grid size for each $n_s$ is chosen in such a
  way that the shuffling length is not equal to, or an integral
  multiple of, the grid size.}
\label{tab:ttest2}
\begin{tabular}{ccccccc}
\hline
Grid size & \multicolumn{2}{c}{$n_s = 3$} & \multicolumn{2}{c}{$n_s = 7$} & \multicolumn{2}{c}{$n_s = 15$}\\
( ${\, h^{-1}\, {\rm Mpc}}$ ) & $t$ score & $p$ value & $t$ score & $p$ value & $t$ score & $p$ value \\
\hline
$12.08$ & - & - &$ 2.196$ &$ 2.07\times 10^{- 2}$ &$ 2.029$ &$ 2.88\times 10^{- 2}$\\
$13.18$ & $1.559$ &$ 6.82\times 10^{- 2}$ &$ 3.148$ &$ 2.78\times 10^{- 3}$ &$ 4.097$ &$ 3.38\times 10^{- 4}$\\
$14.50$ & $1.967$ &$ 3.24\times 10^{- 2}$ &$ 2.037$ &$ 2.83\times 10^{- 2}$ &$ 4.324$ &$ 2.04\times 10^{- 4}$\\
$16.11$ & - & - &$ 5.064$ &$ 4.04\times 10^{- 5}$ &$ 9.656$ &$ 7.63\times 10^{- 9}$\\
$18.12$ & $3.794$ &$ 6.64\times 10^{- 4}$ &$ 9.806$ &$ 6.03\times 10^{- 9}$ &$13.765$ &$ 2.69\times 10^{-11}$\\
$20.71$ & $4.928$ &$ 5.43\times 10^{- 5}$ & - & - &$13.762$ &$ 2.70\times 10^{-11}$\\
$24.17$ & - & - &$12.536$ &$ 1.24\times 10^{-10}$ &$16.367$ &$ 1.48\times 10^{-12}$\\
$29.00$ &$10.667$ &$ 1.64\times 10^{- 9}$ &$20.061$ &$ 4.57\times 10^{-14}$ &$24.534$ &$ 1.38\times 10^{-15}$\\
$36.25$ &$15.510$ &$ 3.68\times 10^{-12}$ &$23.429$ &$ 3.09\times 10^{-15}$ &$29.184$ &$ 6.52\times 10^{-17}$\\
$48.33$ & - & - &$26.795$ &$ 2.94\times 10^{-16}$ &$27.235$ &$ 2.20\times 10^{-16}$\\
$72.50$ &$21.376$ &$ 1.52\times 10^{-14}$ &$27.325$ &$ 2.08\times 10^{-16}$ &$29.234$ &$ 6.33\times 10^{-17}$\\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\caption{This table shows the $t$ score and the associated $p$ value at
each length scale when we compare the mutual information between
mock datasets with and without a morphology-density relation.}
\label{tab:ttest3}
\begin{tabular}{ccccc}
\hline
Grid size & \multicolumn{2}{c}{ Random selection from top 30\%} & \multicolumn{2}{c} { Random selection from top 50\%} \\
( ${\, h^{-1}\, {\rm Mpc}}$ ) & $t$ score & $p$ value & $t$ score & $p$ value \\
\hline
$12.08$ &$75.590$ &$ 5.46 \times 10^{-20}$ &$41.511$ &$ 2.32 \times 10^{-16}$\\
$13.18$ &$80.264$ &$ 2.36 \times 10^{-20}$ &$56.285$ &$ 3.35 \times 10^{-18}$\\
$14.50$ &$84.539$ &$ 1.14 \times 10^{-20}$ &$50.987$ &$ 1.33 \times 10^{-17}$\\
$16.11$ &$75.073$ &$ 6.01 \times 10^{-20}$ &$53.361$ &$ 7.04 \times 10^{-18}$\\
$18.12$ &$61.435$ &$ 9.88 \times 10^{-19}$ &$41.186$ &$ 2.59 \times 10^{-16}$\\
$20.71$ &$52.083$ &$ 9.87 \times 10^{-18}$ &$36.618$ &$ 1.32 \times 10^{-15}$\\
$24.17$ &$36.197$ &$ 1.55 \times 10^{-15}$ &$27.185$ &$ 8.10 \times 10^{-14}$\\
$29.00$ &$31.549$ &$ 1.04 \times 10^{-14}$ &$17.937$ &$ 2.34 \times 10^{-11}$\\
$36.25$ &$27.540$ &$ 6.77 \times 10^{-14}$ &$20.213$ &$ 4.65 \times 10^{-12}$\\
$48.33$ &$26.512$ &$ 1.14 \times 10^{-13}$ &$12.707$ &$ 2.23 \times 10^{- 9}$\\
$72.50$ &$ 8.541$ &$ 3.17 \times 10^{ -7}$ &$ 8.379$ &$ 3.98 \times 10^{- 7}$\\
\hline
\end{tabular}
\end{table*}
\section{Results}
\subsection{Effects of randomizing the morphological classification}
We show the mutual information $I(X;Y)$ between environment and
morphology as a function of length scale in the SDSS datacube in the
left panel of \autoref{fig:Imxy_ran}, which shows that the morphology
of the SDSS galaxies and their large-scale environment share a small
non-zero mutual information across the entire range of length scales
probed. The result for the SDSS datasets with randomly assigned
morphology is also shown in the same panel for comparison. There is a
significant reduction in $I(X;Y)$ at each length scale due to the
randomization of the morphological information of the SDSS
galaxies. However, we find that a finite non-zero mutual information
still persists at each length scale even after the randomization of
morphology. To understand its origin, we also measure the mutual
information in the Poisson datacubes with randomly assigned morphology
and show it in the same panel. Interestingly, we find that the
non-zero mutual information between $X$ and $Y$ in the Poisson
distributions is nearly the same as in the SDSS datacube with randomly
assigned morphology.
The information entropy $H(X)$ associated with the environment at each
length scale $d$ remains unchanged, as the position of each galaxy in
the resulting distribution remains the same as in the original SDSS
distribution. There would also be no change in the information entropy
$H(Y)$ associated with the morphology of the galaxies, as the numbers
of spirals and ellipticals remain the same after the
randomization. However, this procedure changes the joint entropy
$H(X,Y)$. The randomization of the morphological classification turns
the joint probability distribution into a product of the two
individual probability distributions, i.e. $p(X_i,Y_j)=p(X_i)p(Y_j)$.
The adopted procedure is thus expected to destroy any existing
correlations between environment and morphology, and consequently any
non-zero mutual information between environment and morphology should
ideally disappear after the randomization.
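For concreteness, the mutual information can be computed with a plug-in estimator from a joint contingency table of environment bins against morphology classes. The sketch below is illustrative (the binning convention is an assumption, not the authors' code); for a finite sample this estimator is positively biased, which is precisely the kind of non-physical residual at issue here.

```python
# Plug-in estimate of I(X;Y) in bits from a joint contingency table.
# counts[i][j]: number of galaxies in environment bin i with morphology j.
# Illustrative sketch only, not the authors' code.
import math

def mutual_information(counts):
    total = sum(sum(row) for row in counts)
    # marginal distributions p(X) and p(Y)
    px = [sum(row) / total for row in counts]
    py = [sum(counts[i][j] for i in range(len(counts))) / total
          for j in range(len(counts[0]))]
    mi = 0.0
    for i, row in enumerate(counts):
        for j, c in enumerate(row):
            if c > 0:
                pxy = c / total
                mi += pxy * math.log2(pxy / (px[i] * py[j]))
    return mi
```

For an exactly independent table the estimate vanishes, while randomly assigned morphologies in a finite sample generically yield a small positive value.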
However, in the left panel of \autoref{fig:Imxy_ran}, we find that
$I(X;Y)$ does not reduce to zero after the randomization of the
morphology of the SDSS galaxies. This residual non-zero mutual
information can be explained by the results obtained from the Poisson
datacubes with randomly assigned morphology. The results show that
$I(X;Y)$ in the Poisson datacubes with randomly assigned morphology
and in the SDSS datacube with randomly assigned morphology are nearly
the same. This suggests that a part of the measured mutual information
arises from the finite and discrete nature of the galaxy sample. The
origin of this residual information is thus non-physical in nature and
should be properly taken into account in such an analysis.
The reduction in $I(X;Y)$ due to the randomization of morphology
suggests that a part of the measured mutual information $I(X;Y)$ must
have a physical origin. Interestingly, the left panel of
\autoref{fig:Imxy_ran} shows that randomization leads to a reduction
in the mutual information at each length scale. We test the
statistical significance of these differences at each length scale
using a $t$ test. We show the $t$ score at each length scale in the
right panel of \autoref{fig:Imxy_ran}. The critical $t$ score at the
$99.9\%$ confidence level for $18$ degrees of freedom is also shown in
the same panel. The $t$ score and the associated $p$ value at each
length scale are tabulated in \autoref{tab:ttest1}. We find strong
evidence against the null hypothesis, which suggests that the
differences in the mutual information $I(X;Y)$ between the two
distributions are statistically significant at the $99.9\%$ confidence
level over the entire range of length scales probed. This clearly
indicates that the association between environment and morphology is
not limited to the local environment but extends to environments on
larger length scales.
\subsection{Effects of shuffling the spatial distribution of galaxies}
We divide the SDSS datacube into a number of regular subcubes using
different values of $l_s$ as discussed in Section 3.3 and shuffle them
many times to generate a set of shuffled realizations for each
shuffling length. \autoref{fig:3Dv_shuf} shows the distributions of
ellipticals (brown dots) and spirals (blue dots) in the original
unshuffled SDSS datacube along with one realization of the shuffled
datacubes for each shuffling length. The size of the shuffling units
used to shuffle the data in each case is shown with a red subcube at
the corner of the respective shuffled datacube. A comparison of the
shuffled datacubes with the original SDSS datacube clearly shows that
the coherent features visible in the actual data on larger length
scales progressively disappear with increasing shuffling length. It
may be noted that both the measurement of $I(X;Y)$ and the shuffling
require us to divide the datacube into a number of subcubes. In each
case, we choose the shuffling lengths and the grid sizes so that the
shuffling length is not equal to, or an integral multiple of, the grid
size, and vice versa. This must be ensured to avoid any spurious
correlations in $I(X;Y)$.
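The shuffling procedure described above can be sketched as follows, assuming a cubic box with positions given as coordinate triples (an illustrative reimplementation under those assumptions, not the authors' code):

```python
# Shuffle a datacube by randomly permuting subcubes of side l_s,
# preserving the galaxy arrangement inside each subcube.
# Illustrative sketch only, not the authors' code.
import random

def shuffle_datacube(galaxies, box_size, l_s, seed=0):
    """galaxies: list of (x, y, z); box_size should be an integral multiple of l_s."""
    n_s = int(round(box_size / l_s))
    cells = [(i, j, k) for i in range(n_s) for j in range(n_s) for k in range(n_s)]
    targets = cells[:]
    random.Random(seed).shuffle(targets)
    mapping = dict(zip(cells, targets))
    shuffled = []
    for x, y, z in galaxies:
        # index of the host subcube (clamped for points on the far boundary)
        src = tuple(min(int(c // l_s), n_s - 1) for c in (x, y, z))
        dst = mapping[src]
        # translate the galaxy rigidly with its host subcube
        shuffled.append((x - src[0] * l_s + dst[0] * l_s,
                         y - src[1] * l_s + dst[1] * l_s,
                         z - src[2] * l_s + dst[2] * l_s))
    return shuffled
```

Because galaxies move rigidly with their host subcube, clustering below $l_s$ is preserved while coherent features larger than $l_s$ are erased.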
We compare the mutual information $I(X;Y)$ in the original and
shuffled datasets in the left panel of \autoref{fig:Imxy_shuf}. For
each shuffled dataset we observe a reduction in $I(X;Y)$ at different
length scales. A smaller reduction in $I(X;Y)$ is observed at smaller
length scales, whereas a relatively larger reduction in $I(X;Y)$ is
seen on larger length scales.
It may be noted that the morphological information of the galaxies
remains intact after shuffling the data. The shuffling procedure keeps the
clustering at scales below $l_s$ nearly identical to the original data
but eliminates all the coherent spatial features in the galaxy
distribution on scales larger than $l_s$. Shuffling is thus expected
to diminish any existing correlations between environment and
morphology. Measuring the mutual information between environment and
morphology in the original SDSS data and its shuffled versions allows
us to address the statistical significance of $I(X;Y)$. The mutual
information is expected to reduce by a greater amount on scales above
the shuffling length $l_s$ because shuffling destroys nearly all the
coherent patterns beyond this length scale. On the other hand, we
expect a relatively smaller reduction in $I(X;Y)$ below the shuffling
length $l_s$. This can be explained by the fact that most of the
coherent features in the galaxy distribution below length scale $l_s$
survive the shuffling procedure. However, some of the coherent
features which extend up to $l_s$ but lie across the subcubes would be
destroyed by shuffling. Shuffling may also produce a small number of
spatial features which are the product of pure chance
alignments. These random features are unlikely to introduce any
physical correlations between environment and morphology. A comparison
of $I(X;Y)$ between the original and shuffled data at different length
scales, for different shuffling lengths, thus reveals the statistical
significance of the degree of association between environment and
morphology on different length scales.
We find that $I(X;Y)$ decreases monotonically at all length scales
with decreasing shuffling length. \autoref{fig:Imxy_shuf} shows that
$I(X;Y)$ for $n_s=15$, or $l_s\sim 10 {\, h^{-1}\, {\rm Mpc}}$, still
lies above the values expected for an identical Poisson random
distribution. A greater reduction in $I(X;Y)$ on larger length scales
for each shuffling length considered suggests that the mutual
information between environment and morphology is statistically
significant on these length scales. $I(X;Y)$ in the actual data and in
the shuffled data for different shuffling lengths do not differ much
on the smallest length scales, as the coherent structures on these
length scales are nearly intact in all the shuffled datasets. However,
when shuffled with smaller values of $l_s$, a greater number of
coherent structures on larger length scales is lost. This explains why
the reduction in $I(X;Y)$ increases with decreasing shuffling length.
We employ a $t$ test to examine the statistical significance of the
observed differences in $I(X;Y)$ between the original and all shuffled
datasets at different length scales. The $t$ score and the
corresponding $p$ value at each length scale are tabulated in
\autoref{tab:ttest2}. The $t$ scores for the shuffled datasets for
three different shuffling lengths are shown as a function of length
scale in the right panel of \autoref{fig:Imxy_shuf}. We find that the
differences in $I(X;Y)$ between the shuffled and unshuffled SDSS data
are statistically significant at the $99.9\%$ confidence level over
nearly the entire range of length scales probed. We find only weak
evidence against the null hypothesis for all the shuffling lengths at
smaller length scales. This arises from the fact that the coherence
between environment and morphology is retained on smaller scales when
the data is shuffled with a comparable or larger shuffling
length. However, we note that a considerable reduction in $I(X;Y)$ can
occur even below the shuffling length for $n_s=7$ and $n_s=3$. A
subset of the coherent features extending below the shuffling length
may lie across the subcubes used to shuffle the data. These coherent
structures will be destroyed by the shuffling procedure even when they
are smaller than the shuffling length. The number of coherent
structures belonging to this particular group is expected to increase
with the size of the subcubes due to their larger boundary.
The results shown in \autoref{fig:Imxy_shuf} thus indicate that the
association between environment and morphology is certainly not
limited to the local environment but extends throughout the range of
length scales probed in this analysis.
\subsection{Effects of different morphology-density correlations}
In \autoref{fig:3Dv_mill}, we show the distributions of spirals and
ellipticals in mock SDSS datacubes from SAM. We show one distribution
for each of the simulated morphology-density relations.
We show the mutual information $I(X;Y)$ as a function of length scale
for the three different morphology-density relations in the left panel
of \autoref{fig:Imxy_mill}. When the ellipticals are randomly selected
from the entire distribution, irrespective of the local density, we do
not expect any mutual information between morphology and environment.
The non-zero mutual information in this case is just an outcome of the
finite and discrete nature of the distributions. We find that the
result for this case is identical to that expected for a Poisson
distribution with the same ratio of spirals to ellipticals.
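The controlled morphology-density relations in the mock datasets can be sketched as below. The function and its inputs (a per-galaxy local density estimate) are assumptions for illustration, not the authors' code: ellipticals are drawn at random from the top fraction of the highest-density positions, and drawing from the full distribution removes the relation entirely.

```python
# Impose a controlled morphology-density relation by tagging ellipticals
# at random from the top fraction of local densities.
# Illustrative sketch only, not the authors' code.
import random

def assign_morphology(densities, n_ellipticals, top_fraction, seed=0):
    """densities: local density at each galaxy; returns a morphology label per galaxy."""
    rng = random.Random(seed)
    # galaxy indices ranked from highest to lowest local density
    order = sorted(range(len(densities)), key=lambda i: densities[i], reverse=True)
    pool = order[: max(n_ellipticals, int(top_fraction * len(densities)))]
    chosen = set(rng.sample(pool, n_ellipticals))
    return ["elliptical" if i in chosen else "spiral"
            for i in range(len(densities))]
```

Smaller values of `top_fraction` produce a stronger morphology-density relation, mirroring the $30\%$ and $50\%$ selections used above.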
However, when the ellipticals are preferentially selected from denser
regions, the mutual information between morphology and environment
rises above the values expected for a Poisson random
distribution. \autoref{fig:Imxy_mill} shows that the mutual
information $I(X;Y)$ is significantly higher than for the Poisson
distribution when galaxies are randomly tagged as ellipticals from the
top $50\%$ highest-density positions. We find that the mutual
information between morphology and environment increases further to
much higher values when galaxies are randomly identified as
ellipticals from the top $30\%$ highest-density regions. We note a
change in $I(X;Y)$ at all length scales up to $50 {\, h^{-1}\, {\rm Mpc}}$.
A larger change in $I(X;Y)$ is observed on smaller length scales,
whereas the change in $I(X;Y)$ becomes gradually smaller on larger
length scales. This indicates that the morphology-density relations
simulated here become weaker on larger length scales.
We use a $t$ test to assess the statistical significance of the
differences in $I(X;Y)$ between the mock datasets with and without a
morphology-density relation. We tabulate the $t$ score and the
corresponding $p$ value at each length scale for the two mock datasets
in \autoref{tab:ttest3}. In the right panel of
\autoref{fig:Imxy_mill}, we show the $t$ score as a function of length
scale for the two mock datasets with different morphology-density
relations. The results suggest that a statistically significant
difference (at the $99.9\%$ confidence level) exists between the
datasets with and without a morphology-density relation.
Interestingly, these differences persist throughout the entire range
of length scales probed in the analysis. This indicates that the
correlation between environment and morphology is not limited to the
local environment but extends to larger length scales.
The morphology-density relations considered here are admittedly
simplistic. In this experiment, we find that the mutual information
between morphology and environment decreases monotonically with
increasing length scale. Contrary to this, the SDSS observations show
that the mutual information initially decreases with increasing length
scale and nearly plateaus out at larger length scales. The schemes
used for the morphology-density relation in this experiment are not
realistic in nature, but they clearly show that mutual information can
effectively capture the degree of association between morphology and
environment, and that such a relation may extend up to larger length
scales.
\section{Conclusions}
In the present work, we aim to test the statistical significance of
the mutual information between the morphology of a galaxy and its
environment. The morphology-density relation is a well-known
phenomenon observed in the galaxy distribution. The relation suggests
that ellipticals are preferentially found in the denser regions of the
galaxy distribution, whereas spirals are sporadically distributed
across the field. It is important to understand the role of environment in
galaxy formation and evolution. The local density at the location of a
galaxy is very often used to characterize its environment. It is
believed that the environmental dependence of galaxy properties can be
mostly explained by the local density alone. The mutual information
between environment and morphology for SDSS galaxies has been studied
by \citet{pandey17} where they find that a non-zero mutual information
between morphology and environment persists throughout the entire
range of length scales probed. They show that the mutual information between
environments on different length scales may introduce such
correlations between environment and morphology observed on larger
length scales. We critically examine the statistical significance of
the observed non-zero mutual information between morphology and
environment on different length scales. We propose three different
methods to assess the statistical significance of the mutual
information. These methods also help us to understand the relative
importance of the environment on different length scales in deciding
the morphology of galaxies.
Three different tests are carried out in the present analysis. In the
first case, we randomize the morphological information about the SDSS
galaxies without affecting their spatial distribution. In the second
case, we shuffle the spatial distribution of the SDSS galaxies without
affecting their morphological classification. Both these tests show
that the mutual information between morphology and environment is
statistically significant at the $99.9\%$ confidence level throughout
the entire range of length scales probed in this analysis. We find
that a small non-zero mutual information can be observed even in a
random distribution without any existing physical correlations between
environment and morphology. This non-zero value originates from the
finite and discrete nature of the distribution. Interestingly, the
mutual information between environment and morphology in the SDSS
datacube is significantly larger than in the randomized datasets
throughout the entire range of length scales probed. Shuffling the
SDSS datacube also affects the mutual information between environment
and morphology in a statistically significant way over nearly the
entire range of length scales considered. This suggests that the
association between morphology and environment extends up to larger
length scales and that these correlations must have a physical
origin. In a third test, we construct a set of mock SDSS datacubes
from the semi-analytic galaxy catalogue, where we assign morphology to
the simulated galaxies based on the density at their locations. We
vary the strength of the simulated morphology-density relation and
measure the mutual information between environment and morphology in
each case. Our results suggest that mutual information effectively
captures the degree of association between environment and morphology
in these mock datasets.
We extend our analysis to dark matter halo sample from
Millennium simulation (see Appendix A) where we investigate if the
angular momentum of dark matter halos display any large-scale
correlations at fixed halo mass. The analysis shows that
statistically significant correlations are observed only for the
halos in the mass range $\sim 10^{11}-10^{12} M_{\odot}$. The
assembly bias is known to be more pronounced at low masses ($\sim
10^{12} M_{\odot}$), and the observed correlations could be a
signature of assembly bias. However, we could not confirm this due to
the wide variation of halo mass in this range. Choosing a narrower
range around this halo mass does not provide us with a sufficient
number of dark matter halos within the specific volume required for
the present analysis. The present analysis also suggests that the
observed large-scale correlations between morphology and environment
are small but statistically more significant than those observed
between the angular momentum of dark matter halos and their
environment.
Besides the halo assembly bias, rich baryonic physics may also play an
important role, allowing much more complicated interactions between
galaxies and their environment. The effects of the local density on
the morphology of galaxies are understood in terms of various types of
galaxy interactions, ram pressure stripping and the quenching of star
formation. These processes may play a dominant role in shaping the
morphology of a galaxy. However, they may not be the only factors
which decide the morphology of a galaxy. The presence of large-scale
coherent features like filaments, sheets and voids may induce
large-scale correlations between the observed galaxy properties and
their environment. Further studies may reveal whether any new physical
processes are required to explain such large-scale correlations. In
any case, we need to understand the physical origin of such
correlations and, if required, incorporate them into models of galaxy
formation. Most studies employ correlation functions to study the
assembly bias. Here, we speculate that the information theoretic
framework presented in this paper might serve as a more sensitive
probe of galaxy assembly bias than traditional correlation functions.
Every statistical measure has its pros and cons. One particular
drawback of mutual information is that it does not tell us the
direction of the relation between two random variables, i.e. the
measured mutual information does not by itself convey that ellipticals
and spirals are preferentially distributed in high-density and
low-density regions respectively. However, mutual information reliably
captures the degree of association between any two random variables
irrespective of the nature of their relationship. So, in the present
context, mutual information can be an effective and powerful tool to
quantify the degree of influence that environment imparts on
morphology across different length scales. The amplitude of the mutual
information quantifies the strength of the correlation between
morphology and environment on different length scales. It also helps
us to probe the length scales up to which the morphology of a galaxy
is sensitive to its environment.
One can also study the mutual information between environment and any
other galaxy property to understand the influence of environment on
that property at various length scales. The relative influence of
environment on different galaxy properties on any given length scale
may provide useful inputs for galaxy formation models. Finally, we
note that the mutual information between environment and a galaxy
property is a powerful and effective tool which can be used
successfully in future studies of the large-scale environmental
dependence of galaxy properties.
\section{Data availability}
The data underlying this article are available at
\url{https://skyserver.sdss.org/casjobs/} and
\url{https://www.mpa.mpa-garching.mpg.de/millennium/}. The datasets were
derived from sources in the public domain: \url{https://www.sdss.org/},
\url{http://zoo1.galaxyzoo.org} and
\url{https://www.mpa.mpa-garching.mpg.de/millennium/}.
\section{ACKNOWLEDGEMENT}
The authors thank an anonymous reviewer for useful comments and
suggestions which helped us to improve the draft. SS would like to
thank UGC, Government of India for providing financial support through
a Rajiv Gandhi National Fellowship. BP would like to acknowledge
financial support from the SERB, DST, Government of India through the
project CRG/2019/001110. BP would also like to acknowledge IUCAA, Pune
for providing support through associateship programme.
The authors would like to thank the SDSS team and Galaxy Zoo team for
making the data public. Funding for the Sloan Digital Sky Survey IV
has been provided by the Alfred P. Sloan Foundation, the
U.S. Department of Energy Office of Science, and the Participating
Institutions. SDSS-IV acknowledges support and resources from the
Center for High-Performance Computing at the University of Utah. The
SDSS web site is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the
Participating Institutions of the SDSS Collaboration including the
Brazilian Participation Group, the Carnegie Institution for Science,
Carnegie Mellon University, the Chilean Participation Group, the
French Participation Group, Harvard-Smithsonian Center for
Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns
Hopkins University, Kavli Institute for the Physics and Mathematics of
the Universe (IPMU) / University of Tokyo, the Korean Participation
Group, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur
Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA
Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching),
Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National
Astronomical Observatories of China, New Mexico State University, New
York University, University of Notre Dame, Observat\'ario Nacional /
MCTI, The Ohio State University, Pennsylvania State University,
Shanghai Astronomical Observatory, United Kingdom Participation Group,
Universidad Nacional Aut\'onoma de M\'exico, University of Arizona,
University of Colorado Boulder, University of Oxford, University of
Portsmouth, University of Utah, University of Virginia, University of
Washington, University of Wisconsin, Vanderbilt University, and Yale
University.
The Millennium Simulation data bases \citep{lemson} used in this paper
and the web application providing online access to them were
constructed as part of the activities of the German Astrophysical
Virtual Observatory.
<?xml version="1.0" encoding="utf-8" ?>
<object name="Person"
connection="EidssConnectionString"
generator="ObjectGenerator.xslt"
xmlns="urn:schemas-bv:objectmodel">
<storage>
<get name="spPerson_SelectDetail" />
</storage>
<tables>
<table name="Person">
<properties auditObject="daoEmployee" auditTable="tlbPerson" permissionObject="Person">
<lookupcache>
<item name="Person"/>
</lookupcache>
</properties>
<fields>
<!--storage name="NewObject" type="bool"/-->
<storage name="idfsOASite" type="long?"/>
<calculated name="strInstitutionName" type="string" depends="Institution"
lambda='c => c.idfInstitution == null ? (string)null : c.Institution.name' />
<calculated name="strPositionName" type="string" depends="StaffPosition"
lambda='c => c.idfsStaffPosition == null ? (string)null : c.StaffPosition.name' />
<storage name="ObjectAccessListFiltered" type="EditableList<ObjectAccess>" />
<storage name="HideDeleteAction" type="bool"/>
</fields>
<readonly>
<fields name="strInstitutionName,idfsSite" predicate="c => true" />
<fields name="Site" predicate="c => !EidssUserContext.User.HasPermission(PermissionHelper.ExecutePermission(EIDSSPermissionObject.ManageRightsRemotely))" />
</readonly>
<relations>
<relation name="LoginInfoList" table="LoginInfo" internal="false" type="child" source="idfPerson" target="idfPerson" lazy="false" />
<relation name="GroupInfoList" table="PersonGroupInfo" internal="true" type="child" source="idfPerson" target="idfEmployee" lazy="false" />
<!--<relation name="UserGroupMemberList" table="UserGroupMember" internal="false" type="child" source="idfPerson" target="idfEmployee" lazy="false" />-->
<relation name="ObjectAccessList" table="ObjectAccess" internal="false" type="child" source="idfPerson" target="idfEmployee" lazy="false" />
</relations>
<lookups>
<lookup name="Institution" table="OrganizationLookup" source="idfInstitution" target="idfInstitution">
<params>
<param name="ID" const="null" />
<param name="intHACode" const="null" />
</params>
</lookup>
<lookup name="Department" table="DepartmentLookup" source="idfDepartment" target="idfInstitution">
<params>
<param name="Institution" lambda="c => c.idfInstitution ?? 0" type="long" />
<param name="ID" const="null" />
</params>
</lookup>
<lookup name="StaffPosition" table="BaseReference" section="rftPosition" source="idfsStaffPosition" target="idfsBaseReference" />
<lookup name="Site" table="SiteLookup" source="idfsOASite" target="idfsSite"/>
</lookups>
<storage>
<!--<post name="spPerson_Post" />-->
<post />
<delete name="spPerson_Delete" />
</storage>
<postorder>
<item name="this"/>
<item name="LoginInfoList"/>
<item name="GroupInfoList"/>
<!--<item name="UserGroupMemberList"/>-->
<item name="ObjectAccessList"/>
</postorder>
<deleteorder>
<!--<item name="LoginInfoList"/>
<item name="ObjectAccessList"/>
--><!--<item name="UserGroupMemberList"/>--><!--
<item name="GroupInfoList"/>
<item name="this"/>-->
</deleteorder>
<extenders>
<creating>
<scalar_extender target="idfPerson" class="GetNewIDExtender" />
<custom_extender>
<text>
_LoadObjectAccessList(obj);
</text>
</custom_extender>
</creating>
<created>
<value_extender target="Site" value="obj.SiteLookup.Where(s => s.idfsSite == EidssSiteContext.Instance.SiteID).FirstOrDefault();"/>
<value_extender target="idfsSite" value="EidssSiteContext.Instance.SiteID"/>
<custom_extender>
<text>
obj.RefreshObjectAccessListFiltered();
</text>
</custom_extender>
</created>
<loaded>
<custom_extender>
<text>
obj.Site = obj.SiteLookup.Where(s => s.idfsSite == EidssSiteContext.Instance.SiteID).FirstOrDefault();
obj.RefreshObjectAccessListFiltered();
</text>
</custom_extender>
</loaded>
</extenders>
<handlers>
<fieldhandler>
<custom_handler field="Site">
<text>
obj.RefreshObjectAccessListFiltered();
</text>
</custom_handler>
</fieldhandler>
</handlers>
<validators>
<post>
<required_validator target="strFirstName" />
<required_validator target="strFamilyName" />
<required_validator target="Institution" label="Organization" />
</post>
</validators>
<actions>
<!--<action name="Create" type="Create" />-->
<standard>
<remove type="Create"/>
<remove type="Edit"/>
<remove type="Delete"/>
</standard>
<action name="Delete" type="Action" forceClose="true">
<visual alignment="Right" panel="Main" visiblePredicate="(c,a,p,r) => !((c as Person).IsNew || (c as Person).HideDeleteAction) && EidssUserContext.User.HasPermission(PermissionHelper.DeletePermission(EIDSSPermissionObject.Person))">
<regular caption="strDelete_Id" tooltip="tooltipDelete_Id" icon="Delete_Remove"/>
<readOnly caption="" tooltip="tooltipDelete_Id" icon=""/>
</visual>
<run>
<preText>
return new ActResult(obj.MarkToDelete() && ObjectAccessor.PostInterface<Person>().Post(manager, obj), obj);
</preText>
</run>
</action>
</actions>
</table>
<table name="PersonGroupInfo">
<grid>
<item name="idfEmployeeGroup" key="true" visible="false" />
<item name="strName"/>
<item name="strDescription"/>
</grid>
<storage>
<post name="dbo.spPersonGroupInfo_Post"/>
</storage>
<actions>
<standard>
<remove type="Create"/>
<remove type="Edit"/>
<remove type="Delete"/>
</standard>
<action name="AddGroupInfo" type="Action">
<visual panel="Group" alignment="Right" visiblePredicate="(c, a, b, p) => EidssUserContext.User.HasPermission(PermissionHelper.UpdatePermission(EIDSSPermissionObject.UserGroup))">
<regular caption="strAdd_Id" icon="add" tooltip="tooltipAdd_Id" />
</visual>
</action>
<action name="DeleteGroupInfo" type="Action">
<visual panel="Group" alignment="Right" visiblePredicate="(c, a, b, p) => EidssUserContext.User.HasPermission(PermissionHelper.UpdatePermission(EIDSSPermissionObject.UserGroup))"
enablePredicate="(c, p, b) => c != null && !c.Key.Equals(PredefinedObjectId.FakeListObject)">
<regular caption="strDelete_Id" icon="delete" tooltip="tooltipDelete_Id" />
</visual>
</action>
</actions>
</table>
</tables>
</object>
2015 Illapel earthquake – an earthquake measuring 8.4 on the Richter scale that struck on 16 September 2015 near the city of Illapel. 16 people were killed and 34 were injured. The epicenter was located at Illapel in the Coquimbo region.
Damage
According to the Chilean navy, waves 4.5 meters high flooded several towns. Massive damage was recorded in the coastal city of Coquimbo. Because of the tsunami threat, one million people were evacuated along the Chilean coast.
References
Earthquakes in Chile
Earthquakes in 2015
2015 in Chile
Illapel
\section{Introduction}
The sophisticated behavior of cells emerges from the computations that are being performed by the underlying biochemical reaction networks. These biochemical pathways have been studied in a ``top-down'' manner, by looking for recurring motifs, and signs of modularity \cite{milo2002network}. There is also an opportunity to study these pathways in a ``bottom-up'' manner by proposing primitive building blocks which can be composed to create interesting and technologically valuable behavior. This ``bottom-up'' approach connects with work in the Molecular Computation community whose goal is to generate sophisticated behavior using DNA hybridization reactions~\cite{seesawgates,qian2011scaling,napp2013message,yordanov2014computational,soloveichik2010dna,Cardelli_2011StrandAlgebra,cardelli2013two,qian2011efficient,benenson2004autonomous,shapiro2006bringing} and other Artificial Chemistry approaches~\cite{buisman2009computing,daniel2013synthetic}.
We propose a new building block for molecular computation. We show that the mathematical structure of reaction networks is particularly well adapted to compute Maximum Likelihood Estimators for log-linear models, allowing a pithy encoding of such computations by reactions. According to \cite{fienberg2012maximum}: \begin{quotation}Log-linear models are arguably the most popular and important statistical models for the analysis of categorical data; see, for example, Bishop, Fienberg and Holland (1975)~\cite{bishop1975discrete}, Christensen (1997)~\cite{christensen1997log}. These powerful models, which include as special cases graphical models [see, e.g., Lauritzen (1996)~\cite{lauritzen1996graphical}] as well as many logit models [see, e.g., Agresti (2002)~\cite{agresti2013categorical}, Bishop, Fienberg and Holland (1975)~\cite{bishop1975discrete}], have applications in many scientific areas, ranging from social and biological sciences, to privacy and disclosure limitation problems, medicine, data mining, language processing and genetics. Their popularity has greatly increased in the last decades...\end{quotation}
In order to respond in a manner that maximizes fitness, a cell has to correctly estimate the overall state of its environment. Receptors that sit on cell walls collect a large amount of information about the cellular environment. Processing and integration of this spatially and temporally extensive and diverse information is carried out in the biochemical reaction pathways. We propose that this processing and integration may be advantageously viewed from the lens of machine learning.
Our proposal entails that {\em schemes for statistical inference by reaction networks are of biological significance, and are deserving of as thorough and extensive a study as schemes for statistical inference by neural networks.} In particular, machine learning is not just a tool for the analysis of biochemical data, but theoretical and technological insights from machine learning could provide a deep and fundamental way, and perhaps ``the'' correct way, to think about biochemical networks. We view the scheme we present here as a promising first step in this program of applying machine learning insights to biochemical networks.\
\\\\
\textbf{The problem:} We illustrate the main ideas of our scheme with an example. Following \cite{pachter2005algebraic}, consider the \textbf{log-linear model} (also known as toric model) described by the \textbf{design matrix} $A = \tiny\left(\begin{array}{ccc}
2&1&0\\
0&1&2
\end{array}\right)$. This means that we are observing an event with three possible mutually exclusive outcomes, call them $X_1, X_2$, and $X_3$, which represent respectively the columns of $A$. The rows of $A$ represent ``hidden variables'' $\theta_1$ and $\theta_2$ respectively which parametrize the statistics of the outcomes in the following way specified by the columns of $A$:
\begin{align*}
P[X_1\mid \theta_1, \theta_2] &\propto \theta_1^2\
\\P[X_2\mid\theta_1, \theta_2] &\propto \theta_1\theta_2\
\\P[X_3\mid\theta_1, \theta_2] &\propto \theta_2^2
\end{align*}
where the constant of proportionality normalizes the probabilities so they sum to $1$. \footnote{It is more common in statistics and statistical mechanics literature to write $\theta_1 = \e^{-E_1}$ and $\theta_2=\e^{-E_2}$ in terms of ``energies'' $E_1, E_2$ so that $P[X_2\mid E_1, E_2] \propto \e^{-E_1-E_2}$ for example.}
Suppose several independent trials are carried out, and the outcome $X_1$ is observed $x_1\in (0,1)$ fraction of the time, the outcome $X_2$ is observed $x_2\in(0,1-x_1)$ fraction of the time, and the outcome $X_3$ is observed $x_3 = 1 - x_1 - x_2$ fraction of the time. We wish to find the maximum likelihood estimator $(\hat{\theta}_1,\hat{\theta}_2)\in\mathbb{R}^2_{> 0}$ of the parameter $(\theta_1,\theta_2)$, i.e., that value of $\theta$ which maximizes the likelihood of the observed data.
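For this $2\times 3$ example the maximum likelihood distribution can be worked out in closed form: the moment constraints $2\hat{p}_1+\hat{p}_2 = 2x_1+x_2$ and $\hat{p}_2+2\hat{p}_3 = x_2+2x_3$, together with the toric relation $\hat{p}_2^2=\hat{p}_1\hat{p}_3$ (which holds because $p=\theta^A$), reduce to a quadratic in $\hat{p}_2$. A minimal numerical sketch (the function name and this reduction are ours, not part of the paper's scheme):

```python
import math

def mle_2x3(x1, x2, x3):
    """Closed-form MLE for the toric model with design matrix
    A = [[2, 1, 0], [0, 1, 2]], i.e. p = (t1^2, t1*t2, t2^2).

    The MLE distribution satisfies the moment constraints A p = A x
    and the toric relation p2^2 = p1*p3, which reduce to the
    quadratic 3*p2^2 + 2*p2 - s*(2 - s) = 0 with s = 2*x1 + x2."""
    s = 2.0 * x1 + x2
    p2 = (-1.0 + math.sqrt(1.0 + 3.0 * s * (2.0 - s))) / 3.0
    p1 = (s - p2) / 2.0
    p3 = (2.0 - s - p2) / 2.0
    t1, t2 = math.sqrt(p1), math.sqrt(p3)  # theta-hat
    return (p1, p2, p3), (t1, t2)

p_hat, theta_hat = mle_2x3(0.5, 0.3, 0.2)
```

Note that $\hat{\theta}_1\hat{\theta}_2 = \sqrt{\hat{p}_1\hat{p}_3} = \hat{p}_2$, so $(\hat{\theta}_1,\hat{\theta}_2)$ indeed lies in the normalized parameter space.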
\textbf{Our contribution:} We describe a scheme that takes the design matrix $A$ to a reaction network that solves the maximum likelihood estimation problem. In Definition~\ref{def:mlenetwork}, we describe our scheme for every matrix $A$ over the integers with all column sums equal. All our results hold in this generality.
\begin{itemize}
\item In Definition~\ref{def:mlenetwork}.\ref{def:R_MLD}, we show how to obtain from the matrix $A$ a reaction network that computes the maximum likelihood distribution. Specialized to our example, note that the kernel of the matrix $A$ is spanned by the vector $(1, -2, 1)^T$. We encode this by the reversible reaction
\[
X_1 + X_3 \xrightleftharpoons[1]{1} 2X_2
\]
\item In Theorem~\ref{thm:MLD}, we show that if this reversible reaction is started at initial concentrations $X_1(0) = x_1, X_2(0) = x_2, X_3(0) = x_3$, and the dynamics proceeds according to the law of mass action with all specific rates set to $1$:
\begin{align*}
\dot{X}_1(t) = \dot{X}_3(t) = - X_1(t)X_3(t) + X_2^2(t), &&\dot{X}_2(t) = -2X_2^2(t) + 2X_1(t)X_3(t)
\end{align*}
then the reaction reaches equilibrium $(\hat{x}_1,\hat{x}_2,\hat{x}_3)$ where $\hat{x}_1 + \hat{x}_2 + \hat{x}_3 = 1$ and $\hat{x}_1 \propto \hat{\theta}_1^2$, $\hat{x}_2\propto\hat{\theta}_1\hat{\theta}_2$, and $\hat{x}_3\propto\hat{\theta}_2^2$, so that $(\hat{x}_1, \hat{x}_2, \hat{x}_3)$ represents the probability distribution over the outcomes $X_1, X_2, X_3$ at the maximum likelihood $\hat{\theta}_1,\hat{\theta}_2$.
\item This part of our scheme involves only reversible reactions, and requires no catalysis (see \cite[Theorem~5.2]{ManojCatalysis} and Lemma~\ref{lem:issaturated}). One difficulty with implementing such schemes has been that empirical control over kinetics is rather poor. Exquisitely setting the specific rates of individual reactions to desired values is very tricky, and requires a detailed understanding of molecular dynamics. Our scheme avoids this problem since any choice of specific rates that leads to the same equilibrium will do. Hence we can freely set the specific rates so long as the equilibrium constants (ratio of forward and backward specific rates) have value $1$. This is an equilibrium thermodynamic condition that is much easier to ensure in vitro. This combination of reversible reactions, no catalysis, and robustness to the values of the specific rates may make this scheme particularly easy and efficient to implement.
\item In Definition~\ref{def:mlenetwork}.\ref{def:R_MLE}, we show how to obtain from the matrix $A$ a reaction network that computes the maximum likelihood estimator. Specialized to our example, we obtain the reaction network with $5$ species $X_1, X_2, X_3, \theta_1, \theta_2$ and the $5$ reactions:
\begin{align*}
X_1 + X_3 \rightleftharpoons 2X_2, &&2\theta_1\to 0, &&X_1\to X_1 + 2\theta_1,\
\\&&\theta_1 + \theta_2\to 0, &&X_2\to X_2 + \theta_1 + \theta_2
\end{align*}
The number of species equals the number of rows plus the number of columns of $A$. The reactions are not uniquely determined by the problem, but become so once we choose a basis for the kernel of $A$ and a maximal linearly independent set of columns. Here we have chosen columns $1$ and $2$. Each column of $A$ determines a pair of irreversible reactions.
\item Theorem~\ref{thm:MLE} implies that if this reaction system is launched at initial concentrations $X_1(0) = x_1, X_2(0) = x_2, X_3(0) = x_3$ and arbitrary concentrations of $\theta_1(0)$ and $\theta_2(0)$, and the dynamics proceeds according to the law of mass action with all specific rates set to $1$:
\begin{align*}
\dot{X}_1(t) = \dot{X}_3(t) = - X_1(t)X_3(t) + X_2^2(t), && \dot{\theta}_1(t) =-2\theta_1^2(t) + 2X_1(t) -\theta_1(t)\theta_2(t) + X_2(t),\
\\\dot{X}_2(t) = -2X_2^2(t) + 2X_1(t)X_3(t), &&\dot{\theta}_2(t) = -\theta_1(t)\theta_2(t) + X_2(t)
\end{align*}
then the reaction reaches equilibrium $(\hat{x}_1,\hat{x}_2,\hat{x}_3,\hat{\theta}_1,\hat{\theta}_2)$ where $(\hat{\theta}_1, \hat{\theta}_2)$ is the maximum likelihood estimator for the data frequency vector $(x_1, x_2, x_3)$ and $(\hat{x}_1, \hat{x}_2, \hat{x}_3)$ represents the probability distribution over the outcomes $X_1, X_2, X_3$ at the maximum likelihood. We prove global convergence: our dynamical system provably converges to the desired equilibrium. Global convergence results are known to be notoriously hard to prove in reaction network theory~\cite{GeoGac}.
\item A number of schemes have been proposed for translating reaction networks into DNA strand displacement reactions \cite{soloveichik2010dna,qian2011efficient,Cardelli_2011StrandAlgebra,cardelli2013two}. Adapting these schemes to our setting should allow molecular implementation of our MLE-solving reaction networks with DNA molecules.
\end{itemize}
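As a sanity check, the five-species system above can be integrated directly. The sketch below uses a simple forward-Euler scheme (the step size, horizon, and initial $\theta$ values are our choices, not prescribed by the paper) and verifies the claimed equilibrium relations $\hat{x}_2^2=\hat{x}_1\hat{x}_3$, $\hat{\theta}_1^2=\hat{x}_1$, $\hat{\theta}_1\hat{\theta}_2=\hat{x}_2$, $\hat{\theta}_2^2=\hat{x}_3$, along with conservation of the sufficient statistics $A x$:

```python
def simulate_mle(x1, x2, x3, t1=1.0, t2=1.0, dt=0.01, steps=10_000):
    """Forward-Euler integration of the 5-species MLE network for
    A = [[2, 1, 0], [0, 1, 2]], all specific rates set to 1."""
    X1, X2, X3, th1, th2 = x1, x2, x3, t1, t2
    for _ in range(steps):
        # net flux of the reversible reaction X1 + X3 <-> 2 X2
        r = X1 * X3 - X2 ** 2
        dth1 = -2.0 * th1 ** 2 + 2.0 * X1 - th1 * th2 + X2
        dth2 = -th1 * th2 + X2
        X1 -= dt * r
        X3 -= dt * r
        X2 += 2.0 * dt * r
        th1 += dt * dth1
        th2 += dt * dth2
    return X1, X2, X3, th1, th2

X1, X2, X3, th1, th2 = simulate_mle(0.5, 0.3, 0.2)
```

The $\theta$ equilibrium does not depend on the initial concentrations $\theta_1(0),\theta_2(0)$, in line with the theorem.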
\section{Maximum Likelihood Estimation in toric models}
The definitions and results in this section mostly follow \cite{pachter2005algebraic}. Because we require a slightly stronger statement, and Theorem~\ref{thm:birch} allows a short, easy, and insightful proof, we give the proof here for completeness.
In statistics, a \textbf{parametric model} consists of a family of probability distributions, one for each value of the parameters. This can be described as a map from a manifold of parameters into a manifold of probability distributions. If this map can be described by monomials as below, then the parametric statistical model is called a \textbf{toric} or \textbf{log-linear} model, as we now describe.
\begin{definition}[Toric Model]
Let $m,n$ be positive integers. The probability simplex and its relative interior are:
\[
\Delta^n:= \{(x_1,x_2,\dots,x_n)\in\mathbb{R}^n_{\geq 0} \mid x_1 + x_2 + \dots + x_n = 1\}
\]
\[
\operatorname{ri}(\Delta^n):= \{(x_1,x_2,\dots,x_n)\in\mathbb{R}^n_{> 0} \mid x_1 + x_2 + \dots + x_n = 1\}.
\]
An $m\times n$ matrix $A = (a_{ij})_{m\times n}$ of integer entries is a \textbf{design matrix} iff all its column sums $\sum_i a_{ij}$ are equal. Let $a_j := (a_{1j}, a_{2j}, \dots, a_{mj})^T$ be the $j$'th column of $A$. Define $\theta^{a_j}:= \theta_1^{a_{1j}}\theta_2^{a_{2j}}\dots\theta_m^{a_{mj}}$. Define the \textbf{parameter space}
$
\Theta:=\{\theta\in\mathbb{R}^m_{>0}\mid \theta^{a_1}+\theta^{a_2}+\dots+\theta^{a_n}=1\}.
$
The \textbf{toric model} of $A$ is the map
\[
p_A=(p_1,p_2,\dots,p_n):\Theta \to \Delta^n\text{ given by }p_j(\theta) = \theta^{a_j}\text{ for }j=1\text{ to }n.
\]
\end{definition}
We could also have defined the parameter space $\Theta$ to be all of $\mathbb{R}^m_{>0}$, in which case we would need to normalize the probabilities by the {\em partition function} $\theta^{a_1}+\theta^{a_2}+\dots+\theta^{a_n}$ to make sure they add up to $1$. For our present purposes, the current approach will prove technically more direct.
Note that here $p_j(\theta)$ specifies $\operatorname{Pr}[j\mid \theta]$, the conditional probability of obtaining outcome $j$ given that the true state of the world is described by $\theta$.
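The monomial parametrization, in the unnormalized convention with the partition function, is mechanical to evaluate. A small sketch (the function name is ours):

```python
def toric_probs(A, theta):
    """Toric model p_j proportional to prod_i theta_i^{A[i][j]},
    normalized by the partition function, for an m x n design matrix A."""
    m, n = len(A), len(A[0])
    mono = [1.0] * n
    for j in range(n):
        for i in range(m):
            mono[j] *= theta[i] ** A[i][j]
    Z = sum(mono)  # partition function
    return [v / Z for v in mono]

A = [[2, 1, 0], [0, 1, 2]]
p = toric_probs(A, [1.0, 1.0])
```

For $\theta=(1,1)$ all three monomials are equal, so the model returns the uniform distribution.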
A central problem of statistical inference is the problem of \textbf{parameter estimation}. After performing several independent identical trials, suppose the \textbf{data vector} $u\in\mathbb{Z}_{\geq 0}^n$ is obtained as a record of how many times each outcome occurred. Let the norm $|u|_1:= u_1+u_2+\dots+u_n$ denote the total number of trials performed. The \textbf{Maximum Likelihood} solution to the problem of parameter estimation finds that value of the parameter $\theta$ which maximizes the \textbf{likelihood function} $f_u(\theta):=\operatorname{Pr}[u\mid\theta]$, i.e.:
\begin{align}\label{def:mle}
\hat{\theta}(u) := \arg\sup_{\theta\in \Theta} f_u(\theta)
\end{align}
is a \textbf{maximum likelihood estimator} or MLE for the data vector $u$. We will call the point $\hat{p}(u):=p_A(\hat{\theta}(u))$ a \textbf{maximum likelihood distribution}.
\begin{definition}
Let $A$ be an $m\times n$ design matrix, and $u$ a data vector. Then the \textbf{sufficient polytope} is $P_A(u) := \{ p\in\operatorname{ri}(\Delta^n)\mid A p = A \frac{u}{|u|_1}\}$.
\end{definition}
The following theorem is a version of Birch's theorem from Algebraic Statistics. It provides a variational characterization of the maximum likelihood distribution as the unique maximum entropy distribution in the sufficient polytope. In particular the maximum likelihood distribution always belongs to the sufficient polytope, which justifies the name.
\begin{theorem}\label{thm:birch}
Fix a design matrix $A$ of size $m\times n$.
\begin{enumerate}
\item If $u,v\in\mathbb{Z}^n_{\geq 0}$ are nonzero data vectors such that $A u/|u|_1 = A v/|v|_1$ then they have the same maximum likelihood estimator: $\hat{\theta}(u) = \hat{\theta}(v)$.
\item Further if $P_A(u)$ is nonempty then
\begin{enumerate}
\item There is a unique distribution $\tilde{p}\in P_A(u)$ which maximizes Shannon entropy $H(p)=-\sum_{i=1}^n p_i\log p_i$ viewed as a real-valued function from the closure $\overline{P_A(u)}$ of $P_A(u)$ with $0\log 0$ defined as $0$.
\item $\{\tilde{p}\} = P_A(u)\cap p_A(\Theta)$.
\item $\tilde{p}=\hat{p}(u)$, the Maximum Likelihood Distribution for the data vector $u$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{proof}
1. Fix a data vector $u$. Note that $f_u(\theta) = \frac{|u|_1!}{u_1!u_2!\dots u_n!}p_1(\theta)^{u_1}p_2(\theta)^{u_2}\dots p_n(\theta)^{u_n} = \frac{|u|_1!}{u_1!u_2!\dots u_n!}\theta^{Au}$. Therefore the maximum likelihood estimator
\[
\hat{\theta}(u) = \arg\sup_{\theta\in\Theta} \theta^{Au} = \arg\sup_{\theta\in\Theta} (\theta^{Au})^{1/|u|_1} = \arg\sup_{\theta\in\Theta} \theta^{Au/|u|_1}
\]
where the second equality is true because the function $x\mapsto x^c$ is monotonically increasing whenever $c>0$. It follows that if $v\in\mathbb{Z}^n_{\geq 0}$ is a data vector such that $A u/|u|_1 = A v/|v|_1$ then $\hat{\theta }(u) = \hat{\theta}(v)$.\
\\
\\2.(a) Suppose $P_A(u)$ is nonempty. A local maximum of the restriction $H|_{\overline{P_A(u)}}$ of $H$ to the polytope $\overline{P_A(u)}$ can not be on the boundary $\partial \overline{P_A(u)}$ because for $p\in \partial \overline{P_A(u)}$, moving in the direction of arbitrary $q\in P_A(u)$ increases $H$, as can be shown by a simple calculation:
\[
\lim_{\lambda\to 0^+}\frac{d}{d\lambda}H((1-\lambda)p + \lambda q) = +\infty.
\]
Since $H$ is a continuous function and the closure $\overline{P_A(u)}$ is a compact set, $H$ must attain its maximum value in $P_A(u)$. Further $H$ is a strictly concave function since its Hessian is diagonal with entries $-1/p_i$ and hence negative definite. It follows that $H|_{\overline{P_A(u)}}$ is also strictly concave, and has a unique local maximum at $\tilde{p}\in P_A(u)$, which is also the global maximum.\
\\
\\(b) By concavity of $H$, the maximum $\tilde{p}$ is the unique point in $P_A(u)$ such that $\nabla H(\tilde{p})$ is perpendicular to $P_A(u)$. We claim that $q\in P_A(u)\cap p_A(\Theta)$ iff $\nabla H(q) = (-1 - \log q_1, -1-\log q_2, \dots, -1-\log q_n)$ is perpendicular to $P_A(u)$. Since all column sums are equal, this is equivalent to requiring that $\log q$ be in the span of the rows of $A$, which is true iff $q\in p_A(\Theta)$. Hence $P_A(u)\cap p_A(\Theta)=\{\tilde{p}\}$.\
\\
\\(c) To compute the Maximum Likelihood Distribution $\hat{p}(u)$, we proceed as follows:
\begin{align*}
\hat{p}(u) &= p_A(\hat{\theta}(u)) = p_A(\arg\sup_{\theta\in\Theta} \theta^{Au}) = p_A(\arg\sup_{\theta\in\Theta} \theta^{Au/|u|_1})\
\\&= p_A(\arg\sup_{\theta\in\Theta} \theta^{A\tilde{p}}) =\arg\sup_{p\in p_A(\Theta)} p^{\tilde{p}} = \arg\sup_{p\in p_A(\Theta)} \sum_{i=1}^n \tilde{p}_i\log p_i = \tilde{p}
\end{align*}
where the fourth equality uses $A\tilde{p} = Au/|u|_1$ and the last equality follows because $\sum_{i=1}^n \tilde{p}_i\log p_i$ viewed as a function of $p$ attains its maximum in all of $\Delta^n$, and hence in $p_A(\Theta)$, at $p = \tilde{p}$.
\end{proof}
This theorem already exposes the core of our idea. We will design reaction systems that maximize entropy subject to the ``correct'' constraints capturing the polytope $P_A(u)$. Then because the reactions also proceed to maximize entropy, the equilibrium point of our dynamics will correspond to the maximum likelihood distribution. Most of the technical work will go in proving convergence of trajectories to these equilibrium points.
\section{Reaction Networks}
According to \cite{Klavins_2011Biomolecular}, ``In building a design theory for chemistry, chemical reaction networks are usually the most natural intermediate representation - the middle of the
hourglass \cite{doyle2007rules}. Many different high level languages and
formalisms have been and can likely be compiled to
chemical reactions, and chemical reactions themselves (as
an abstract specification) can be implemented with a variety
of low level molecular mechanisms.''
In Subsection~\ref{subsec:crntreview}, we recall the definitions and results for reaction networks which we will need for our main results. For a comprehensive presentation of these ideas, see \cite{ManojCatalysis}. In Subsection~\ref{subsec:pert}, we prove a new result in reaction network theory. We extend a previously known global convergence result to the case of perturbations.
\subsection{Brief review of Reaction Network Theory}\label{subsec:crntreview}
For vectors $a=(a_i)_{i\in S}$ and $b=(b_i)_{i\in S}$, the notation $a^b$ will be shorthand for the formal monomial $\prod_{i\in S} a_i^{b_i}$. We introduce some standard definitions.
\begin{definition}[Reaction Network]
Fix a finite set $S$ of \textbf{species}.
\begin{enumerate}
\item A \textbf{reaction} over $S$ is a pair $(y,y')$ such that $y,y'\in \NN^S$. It is usually written $y\ra y'$, with \textbf{reactant} $y$ and \textbf{product} $y'$.
\item A \textbf{reaction network} consists of a finite set $S$ of species, and a finite set $\mathcal{R}$ of reactions.
\item A reaction network is \textbf{reversible} iff for every reaction $y\to y'\in\mathcal{R}$, the reaction $y'\to y\in\mathcal{R}$.
\item A reaction network is \textbf{weakly reversible} iff for every reaction $y\to y'\in\mathcal{R}$ there exists a positive integer $n\in\mathbb{Z}_{>0}$ and $n$ reactions $y_1\to y_2,y_2\to y_3,\dots,y_{n-1}\to y_n\in\mathcal{R}$ with $y_1=y'$ and $y_n=y$.
\item The \textbf{stoichiometric subspace} $H\subseteq\mathbb{R}^S$ is the subspace spanned by $\{y'-y\mid y\to y'\in\mathcal{R}\}$, and $H^\perp$ is the orthogonal complement of $H$.
\item A \textbf{siphon} is a set $T\subseteq S$ of species such that for all $y\to y'\in\mathcal{R}$, if there exists $i\in T$ such that $y'_i>0$ then there exists $j\in T$ such that $y_j>0$.
\item A siphon $T\subseteq S$ is \textbf{critical} iff $v\in H^\perp\cap\mathbb{R}^S_{\geq 0}$ with $v_i=0$ for all $i\notin T$ implies $v=0$.
\end{enumerate}
\end{definition}
\begin{definition}
Fix a weakly reversible reaction network $(S,\mathcal{R})$. The \textbf{associated ideal} $I_{(S,\mathcal{R})}\subseteq \mathbb{C}[x]$ where $x=(x_i)_{i\in S}$ is the ideal generated by the binomials $\{ x^y - x^{y'}\mid y\to y'\in\mathcal{R}\}$. A reaction network is \textbf{prime} iff its associated ideal is a prime ideal.
\end{definition}
The following theorem follows from \cite[Theorem~4.1, Theorem~5.2]{ManojCatalysis}.
\begin{theorem}\label{thm:prime}
A weakly reversible prime reaction network $(S,\mathcal{R})$ has no critical siphons.
\end{theorem}
We now recall the mass-action equations which are widely employed for modeling cellular processes~\cite{thomson2009unlimited,shinar2010structural,sontag01structure,tyson2003sniffers} in Biology.
\begin{definition}[Mass Action System]
A \textbf{reaction system} consists of a reaction network $(S,\mathcal{R})$ and a \textbf{rate function} $k:\mathcal{R}\to\R_{>0}$. The \textbf{mass-action equations} for a reaction system are the system of ordinary differential equations in {\em concentration} variables $\{x_i(t) \mid i\in S\}$:
\begin{equation}\label{eqn:ma}
\dot{x}(t) = \sum_{y\to y' \in \mathcal{R}} k_{y\to y'}\, x(t)^y \,(y' - y)
\end{equation}
where $x(t)$ represents the vector $(x_i(t))_{i\in S}$ of concentrations at time $t$.
\end{definition}
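The vector field of the mass-action equations can be assembled directly from the reaction list. A minimal sketch (the representation of reactions as $(y, y', k)$ triples with sparse dictionaries is our choice):

```python
def mass_action_rhs(species, reactions, x):
    """Mass-action vector field: sum over reactions (y, y', k) of
    k * x^y * (y' - y), where y and y' are dicts mapping species
    names to stoichiometric counts."""
    dx = {s: 0.0 for s in species}
    for y, yp, k in reactions:
        flux = k
        for s, c in y.items():
            flux *= x[s] ** c
        for s in species:
            dx[s] += flux * (yp.get(s, 0) - y.get(s, 0))
    return dx

# The reversible reaction X1 + X3 <-> 2 X2 with both specific rates 1:
rxns = [({"X1": 1, "X3": 1}, {"X2": 2}, 1.0),
        ({"X2": 2}, {"X1": 1, "X3": 1}, 1.0)]
dx = mass_action_rhs(["X1", "X2", "X3"], rxns,
                     {"X1": 0.5, "X2": 0.3, "X3": 0.2})
```

With all rates set to $1$, the field vanishes at $(1,1,\dots,1)$, consistent with that point being a point of detailed balance.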
Note that $\dot{x}(t)\in H$, so affine translations of $H$ are invariant under the dynamics of Equation~\ref{eqn:ma}.
We recall the well known notions of detailed balanced and complex balanced reaction system.
\begin{definition}
A reaction system $(S,\mathcal{R},k)$ is
\begin{enumerate}
\item \textbf{Detailed balanced} iff it is reversible and there exists a point $\alpha\in\mathbb{R}^S_{>0}$ such that for every $y\to y'\in\mathcal{R}$:
\[
k_{y\to y'}\, \alpha^y \,(y' - y) = k_{y'\to y}\, \alpha^{y'}\,(y - y')
\]
A point $\alpha\in\mathbb{R}^S_{>0}$ that satisfies the above condition is called a \textbf{point of detailed balance}.
\item \textbf{Complex balanced} iff there exists a point $\alpha\in\mathbb{R}^S_{>0}$ such that for every $y\in\mathbb{Z}^S_{\geq 0}$:
\[
\sum_{y\to y'\in \mathcal{R}} k_{y\to y'}\, \alpha^y \,(y' - y) = \sum_{y''\to y\in\mathcal{R}} k_{y''\to y}\, \alpha^{y''}\,(y - y'')
\]
A point $\alpha\in\mathbb{R}^S_{>0}$ that satisfies the above condition is called a \textbf{point of complex balance}.
\end{enumerate}
\end{definition}
The following observations are well known and easy to verify.
\begin{itemize}
\item A complex balanced reaction system is always weakly reversible.
\item If all rates $k_{y\to y'}=1$ and the network is weakly reversible then the reaction system is complex balanced with point of complex balance $(1,1,\dots,1)\in\mathbb{R}^S$; if the network is reversible then the reaction system is also detailed balanced with point of detailed balance $(1,1,\dots,1)\in\mathbb{R}^S$.
\item Every detailed balance point is also a complex balance point, but there are complex balanced reversible networks that are not detailed balanced.
\end{itemize}
It is straightforward to check that every point of complex balance (respectively, detailed balance) is a fixed point for Equation~\ref{eqn:ma}. The next theorem, which follows from \cite[Theorem~2]{Angeli2007598} and \cite{horn74dynamics}, states that a converse also exists: if a reaction system is complex balanced (respectively, detailed balanced) then every fixed point is a point of complex balance (detailed balance). Further there is a unique fixed point in each affine translation of $H$, and if there are no critical siphons then the basin of attraction for this fixed point is as large as possible, namely the intersection of the affine translation of $H$ with the nonnegative orthant.
\begin{theorem}[Global Attractor Theorem for Complex Balanced Reaction Systems with no critical siphons]\label{thm:gac}
Let $(S,\mathcal{R},k)$ be a weakly reversible complex balanced reaction system with no critical siphons and point of complex balance $\alpha$. Fix a point $u\in\mathbb{R}^S_{>0}$. Then there exists a point of complex balance $\beta$ in $(u+H)\cap\mathbb{R}^S_{>0}$ such that for every trajectory $x(t)$ with initial conditions $x(0)\in (u+H)\cap\mathbb{R}^S_{\geq 0}$, the limit $\lim_{t\to\infty} x(t)$ exists and equals $\beta$. Further the function $g(x) := \sum_{i=1}^n \left(x_i\log x_i - x_i - x_i\log\alpha_i\right)$ is strictly decreasing along non-stationary trajectories and attains its unique minimum value in $(u+H)\cap\mathbb{R}^S_{\geq 0}$ at $\beta$.
\end{theorem}
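For the running example $X_1 + X_3 \rightleftharpoons 2X_2$ with both rates $1$ and $\alpha=(1,1,\dots,1)$, the Lyapunov function reduces to $g(x)=\sum_i (x_i\log x_i - x_i)$, and its monotone decrease can be observed numerically. A sketch (the Euler step size is our choice):

```python
import math

def g(x):
    """Lyapunov function of the theorem, specialized to alpha = (1, 1, 1)."""
    return sum(v * math.log(v) - v for v in x)

def g_along_trajectory(x, dt=0.01, steps=2_000):
    """Euler trajectory of X1 + X3 <-> 2 X2 (both rates 1);
    returns the values of g along the trajectory."""
    x1, x2, x3 = x
    gs = [g((x1, x2, x3))]
    for _ in range(steps):
        r = x1 * x3 - x2 ** 2
        x1 -= dt * r
        x3 -= dt * r
        x2 += 2.0 * dt * r
        gs.append(g((x1, x2, x3)))
    return gs

gs = g_along_trajectory((0.5, 0.3, 0.2))
```

Up to floating-point noise, $g$ never increases along the discretized trajectory and strictly decreases away from equilibrium.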
It is not completely trivial to show, but nevertheless true, that this theorem holds with weakly reversible replaced by ``reversible'' and ``complex balance'' replaced by ``detailed balance.'' What is to be shown is that the point of complex balance obtained in $(u+H)\cap\mathbb{R}^S_{\geq 0}$ by minimizing $g(x)$ is actually a point of detailed balance, and this follows from an examination of the form of the derivative $\frac{d}{dt}g(x(t))$ along trajectories $x(t)$ to Equation~\ref{eqn:ma}.
\subsection{A Perturbatively-Stable Global Attractor Theorem}\label{subsec:pert}
Global attractor results usually assume that the reaction network is weakly reversible. The scheme we describe in the next section, however, employs reaction networks that are not weakly reversible, yet we will prove global attractor results for them. The key idea is that our reaction network can be broken into a reversible part and an irreversible part. The reversible part acts on, but evolves independently of, the irreversible part, so we get to use the global attractor results ``as is'' on the reversible part. Further, as the reversible part approaches equilibrium, the irreversible part behaves as a perturbation of a reversible detailed-balanced network: the closer the reversible part gets to equilibrium, the smaller the perturbation.
To make this proof idea work out, we will need a perturbative version of Theorem~\ref{thm:gac}. The next lemma shows that if the rates are perturbed slightly then, outside a small neighborhood of the detailed balance point, the strict Lyapunov function $g(x)$ from Theorem~\ref{thm:gac} continues to decrease along non-stationary trajectories.
\begin{lemma}\label{lem:striclya}
Let $(S,\mathcal{R},k)$ be a weakly reversible complex balanced reaction system with no critical siphons and point of complex balance $\alpha$. For every sufficiently small $\epsilon>0$ there exists $\delta>0$ such that for all $x'$ outside the $\epsilon$-neighborhood of $\alpha$ in $(\alpha+H)\cap\mathbb{R}^S_{\geq 0}$, the derivative $\frac{d}{dt}g(x(t))|_{t=0}<-\delta$, where $x(t)$ is a solution to the Mass-Action Equations~\ref{eqn:ma} with $x(0)=x'$.
\end{lemma}
\begin{proof}
Let $B_\epsilon$ be the open $\epsilon$-ball around $\alpha$ in $(\alpha+H)\cap\mathbb{R}^S_{\geq 0}$, with $\epsilon$ small enough that $B_\epsilon$ does not meet the boundary $\partial\mathbb{R}^S_{\geq 0}$. Consider the closed set $K:= \left((\alpha+H)\cap\mathbb{R}^S_{\geq 0}\right)\setminus B_\epsilon$. Define the orbital derivative of $g$ at $x'$ as $\mathcal{O}_k g(x'):=\frac{d}{dt}g(x(t))|_{t=0}$, where $x(t)$ is a solution to the mass-action equations~\ref{eqn:ma} with $x(0)=x'$, and define $\delta:= \inf_{x'\in K} (- \mathcal{O}_k g(x'))$. If $\delta\leq 0$ then, since $K$ is a closed set and $\mathcal{O}_k g$ is a continuous function, there would exist a point $x'\in K$ such that $\mathcal{O}_k g(x')\geq 0$, contradicting Theorem~\ref{thm:gac}.
\end{proof}
We formalize the notion of perturbation using \textbf{differential inclusions}. Recall that differential inclusions model uncertainty in dynamics in a nondeterministic way by generalizing the notion of vector field. A differential inclusion maps every point to a subset of the tangent space at that point.
\begin{definition}\label{def:pert}
Let $(S,\mathcal{R},k)$ be a reaction system and let $\delta>0$. The $\delta$-\textbf{perturbation} of $(S,\mathcal{R},k)$ is the differential inclusion $V:\mathbb{R}^S_{\geq 0}\to 2^{\mathbb{R}^S}$ that at point $x\in\mathbb{R}^S_{\geq 0}$ takes the value
\[
V(x):=\left\{ \sum_{y\to y'\in \mathcal{R}} k'_{y\to y'} x^y (y'-y) \,\,\,\middle|\,\,\, k'_{y\to y'} \in (k_{y\to y'} - \delta, k_{y\to y'}+\delta)\text{ for all }y\to y'\in\mathcal{R}\right\}.
\]
A \textbf{trajectory} of $V$ is a tuple $(I,x)$ where $I\subseteq\mathbb{R}$ is an interval and $x:I\to \mathbb{R}^S_{\geq 0}$ is a differentiable function with $\dot{x}(t)\in V(x(t))$.
\end{definition}
\begin{theorem}[Perturbatively-Stable Global Attractor Theorem for Complex Balanced Reaction Systems with no critical siphons]\label{thm:pgac}
Let $(S,\mathcal{R},k)$ be a weakly reversible complex balanced reaction system with no critical siphons. Fix a point $u\in\mathbb{R}^S_{>0}$. Then there exists a point of complex balance $\beta$ in $(u+H)\cap\mathbb{R}^S_{>0}$ such that:
\begin{enumerate}
\item For every sufficiently small $\varepsilon>0$, there exists $\delta>0$ such that every trajectory of the form $(\mathbb{R}_{\geq 0},x)$ to the $\delta$-perturbation of $(S,\mathcal{R},k)$ with initial conditions $x(0)\in (u+H)\cap\mathbb{R}^S_{\geq 0}$ eventually enters an $\varepsilon$-neighborhood of $\beta$ and never leaves.
\item Consider a sequence $\delta_1> \delta_2> \dots >0$ and a sequence $0<t_1< t_2< \dots$ such that $\lim_{i\to\infty}\delta_i=0$ and $\lim_{i\to\infty} t_i = +\infty$, and a trajectory $(\mathbb{R}_{\geq 0}, x)$ with $x(0)\in (u+H)\cap\mathbb{R}^S_{\geq 0}$ such that $((t_i,\infty),x)$ is a trajectory of the $\delta_i$-perturbation of $(S,\mathcal{R},k)$. Then the limit $\displaystyle\lim_{t\to\infty} x(t)=\beta$.
\end{enumerate}
\end{theorem}
\begin{proof}[Proof sketch]
1. Fix $\varepsilon>0$ such that the $\varepsilon$-ball $B_\varepsilon$ around $\beta$ does not meet the boundary $\partial\mathbb{R}^S_{\geq 0}$. By Lemma~\ref{lem:striclya}, outside $B_\varepsilon$ there exists $\delta_\varepsilon>0$ such that $\mathcal{O}_kg<-\delta_\varepsilon$. Since $\mathcal{O}_kg$ is a continuous function of the specific rates $k$, a sufficiently small perturbation $\delta>0$ of the rates will not change the sign of $\mathcal{O}_kg$. Hence, outside $B_\varepsilon$, the function $g$ is strictly decreasing along trajectories of the $\delta$-perturbation. It follows that eventually every trajectory must enter $B_\varepsilon$.
2. Fix a sequence $\varepsilon_1>\varepsilon_2>\dots>0$ with $\varepsilon_1$ small enough that the $\varepsilon_1$-ball around $\beta$ does not meet the boundary $\partial\mathbb{R}^S_{\geq 0}$, and with $\lim_{i\to\infty}\varepsilon_i = 0$. For each $\varepsilon_i$, there exists $j$ such that $\delta_j$ is small enough as per part (1) of the theorem, so every trajectory will eventually enter the $\varepsilon_i$-neighborhood of $\beta$ and never leave. Since this is true for every $i$ and $\lim_{i\to \infty}\varepsilon_i = 0$, the result follows.
\end{proof}
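The second statement can be illustrated on the running example by letting the forward rate drift back to $1$ along the trajectory, e.g. $k'(t)=1+e^{-t}$, and checking that the limit is the same detailed-balance point as in the unperturbed system. A sketch (the perturbation schedule and step size are our choices):

```python
import math

def simulate_perturbed(x, dt=0.01, T=60.0):
    """X1 + X3 <-> 2 X2 with forward rate 1 + exp(-t) and backward
    rate 1; the perturbation decays to 0 along the trajectory."""
    x1, x2, x3 = x
    t = 0.0
    while t < T:
        kf = 1.0 + math.exp(-t)  # perturbed forward rate
        r = kf * x1 * x3 - x2 ** 2
        x1 -= dt * r
        x3 -= dt * r
        x2 += 2.0 * dt * r
        t += dt
    return x1, x2, x3

x1, x2, x3 = simulate_perturbed((0.5, 0.3, 0.2))
```

The perturbation changes the transient but not the conserved quantities, so the trajectory settles at the detailed-balance point of the unperturbed system on the same affine set.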
\section{Main Result}
The next definition makes precise our scheme, which takes a design matrix $A$ to a reaction system $\mathcal{S}_{MLE}$ depending on $A$. The choice of this reaction system is not unique, but depends on two choices of basis. We proceed in two stages. In the first stage, we construct the reaction system $\mathcal{S}_{MLD}$ which solves the problem of finding the maximum likelihood distribution. In the second stage, we add reactions to solve for $\theta$ from the algebraic relations between the $\theta$ and $X$ variables, obtaining $\mathcal{S}_{MLE}$.
\begin{definition}\label{def:mlenetwork}
Fix a design matrix $A= (a_{ij})_{m\times n}$, a basis $B$ for the free abelian group $\mathbb{Z}^n\cap\ker A$, and a maximal linearly-independent subset $B'$ of the columns of $A$.
\begin{enumerate}
\item\label{def:R_MLD} The reaction network $\mathcal{R}_{MLD}(A,B)$ consists of $n$ species $X_1, X_2,\dots, X_n$ and for each $b\in B$, the reversible reaction:
\[
\sum_{j:b_j>0} b_j X_j \rightleftharpoons \sum_{j:b_j<0} -b_j X_j
\]
\item\label{def:R_MLE} The reaction system $\mathcal{S}_{MLD}(A,B)$ consists of the reaction network $\mathcal{R}_{MLD}(A,B)$ with an assignment of rate $1$ to each reaction.
\item The reaction network $\mathcal{R}_{MLE}(A,B,B')$ consists of $m+n$ species $\theta_1, \theta_2,\dots, \theta_m, X_1, X_2,\dots, X_n$, and in addition to the reactions in $\mathcal{R}_{MLD}$, the following reactions:
\begin{itemize}
\item For each column $j\in B'$ of $A$, a reaction $\sum_{i=1}^m a_{ij} \theta_i \to 0$.
\item For each column $j\in B'$ of $A$, a reaction $X_j \to X_j + \sum_{i=1}^m a_{ij} \theta_i$.
\end{itemize}
\item The reaction system $\mathcal{S}_{MLE}(A,B,B')$ consists of the reaction network $\mathcal{R}_{MLE}(A,B,B')$ with an assignment of rate $1$ to each reaction.
\end{enumerate}
\end{definition}
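As a concrete sketch of part (1), the following snippet (our own illustration, not part of the paper) reads off the reversible reaction determined by a kernel basis vector $b$. For the $2\times 2$ independence model, the lattice $\mathbb{Z}^4\cap\ker A$ is spanned by $b=(1,-1,-1,1)$, giving the single reaction $X_1+X_4\rightleftharpoons X_2+X_3$:

```python
# Sketch: the reversible reaction of Definition (1) from a kernel basis
# vector b. Positive entries of b go on the left-hand side, negated
# negative entries on the right. Species are indexed 0..n-1 (X_1 is 0).
def reaction_from_kernel_vector(b):
    lhs = {j: c for j, c in enumerate(b) if c > 0}   # reactant multiset
    rhs = {j: -c for j, c in enumerate(b) if c < 0}  # product multiset
    return lhs, rhs

# 2x2 independence model: ker A is spanned by b = (1, -1, -1, 1),
# i.e. the reaction X_1 + X_4 <=> X_2 + X_3.
lhs, rhs = reaction_from_kernel_vector([1, -1, -1, 1])
print(lhs, rhs)  # {0: 1, 3: 1} {1: 1, 2: 1}
```

Each basis vector of $B$ contributes one such reversible reaction, so the size of $\mathcal{R}_{MLD}(A,B)$ is governed by the dimension of $\ker A$.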
Note that by the rank-nullity theorem of linear algebra, the dimension of the kernel plus the rank of the matrix equals the number of columns of the matrix. Hence, counting each reversible reaction as two irreversible reactions, our scheme yields a reaction system whose number of reactions is twice the number of columns of $A$.
It is clear from the definition of $\mathcal{S}_{MLE}$ that the reactions that come from $\mathcal{R}_{MLD}$ are reversible and evolve without being affected by the other reactions. Hence we first prove global convergence of the reaction system $\mathcal{S}_{MLD}$ to the maximum likelihood distribution. This part is fairly straightforward. The key point is to verify that the reaction network $\mathcal{R}_{MLD}$ has no critical siphons. In fact, we show in the next lemma that $\mathcal{R}_{MLD}$ is prime, which will imply ``no critical siphons'' by Theorem~\ref{thm:prime}.
\begin{lemma}\label{lem:issaturated}
Fix a design matrix $A= (a_{ij})_{m\times n}$ and a basis $B$ for the free abelian group $\mathbb{Z}^n\cap\ker A$. Then the reaction network $\mathcal{R}_{MLD}(A,B)$ is prime and $\mathcal{S}_{MLD}(A,B)$ is detailed balanced. Consequently, the reaction system $\mathcal{S}_{MLD}(A,B)$ is globally asymptotically stable.
\end{lemma}
\begin{proof}
$\mathcal{R}_{MLD}(A,B)$ is prime by \cite[Corollary~2.15]{miller2011theory}. The idea is to look at the toric model $p_A$ as a ring homomorphism $\mathbb{C}[x_1,x_2,\dots, x_n]\to \mathbb{C}[\mathbb{N} A]$ with $x_j\mapsto \theta^{a_j}$. (Here $\mathbb{N}A$ is the affine semigroup generated by the columns of $A$.) The kernel of this ring homomorphism is the associated ideal of $\mathcal{R}_{MLD}(A,B)$ by \cite[Proposition~2.14]{miller2011theory}, and the codomain is an integral domain, so the kernel must be prime.
To verify that $\mathcal{S}_{MLD}(A,B)$ is detailed balanced, note that the point $(1,1,\dots,1)\in\mathbb{R}^n$ is a point of detailed balance since all rates are $1$. Global asymptotic stability now follows from Theorem~\ref{thm:prime} and Theorem~\ref{thm:gac}.
\end{proof}
We can now obtain global convergence for $\mathcal{S}_{MLD}$.
\begin{theorem}[The reaction system $\mathcal{S}_{MLD}(A,B)$ computes the Maximum Likelihood Distribution]\label{thm:MLD}
Fix a design matrix $A= (a_{ij})_{m\times n}$, a basis $B$ for the free abelian group $\mathbb{Z}^n\cap\ker A$, and a nonzero data vector $u\in\mathbb{Z}^n_{\geq 0}$. Let $x(t) = (x_1(t), x_2(t),\dots, x_n(t))$ be a solution to the mass-action differential equations for the reaction system $\mathcal{S}_{MLD}(A,B)$ with initial conditions $x(0) = u/|u|_1$. Then $x(\infty):=\displaystyle\lim_{t\to\infty} x(t)$ exists and equals the maximum likelihood distribution $\hat{p}(u)$.
\end{theorem}
\begin{proof}
For the system $\mathcal{S}_{MLD}(A,B)$, note that $(x(0)+H)\cap\mathbb{R}^n_{>0}= P_A(u/|u|_1)$. By Theorem~\ref{thm:gac}, $x(\infty)$ exists, and the function $\sum_{i=1}^n x_i\log x_i - x_i - x_i\log 1$ attains its unique minimum in $P_A(u/|u|_1)$ at $x(\infty)$. Since the system is mass-conserving, $\sum_{i=1}^n x_i$ is constant on $P_A(u/|u|_1)$, so minimizing this function is equivalent to the fact that the Shannon entropy $H(x)= -\sum_{i=1}^n x_i\log x_i$ is increasing along trajectories and attains its unique maximum in $P_A(u/|u|_1)$ at $x(\infty)$. By Theorem~\ref{thm:birch}, the point $x(\infty)$ must be the maximum likelihood distribution $\hat{p}(u)$.
\end{proof}
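To make Theorem~\ref{thm:MLD} concrete, here is a minimal numerical sketch (our own; the $2\times 2$ independence model, the data vector, and the forward-Euler integrator are illustrative choices, not part of the paper's scheme). The single reversible reaction $X_1+X_4\rightleftharpoons X_2+X_3$ is run from $u/|u|_1$, and the limit matches Birch's theorem: the product of the observed marginals.

```python
# Forward-Euler integration of the mass-action ODE for S_MLD on the
# 2x2 independence model. Single reaction X1 + X4 <=> X2 + X3, both rates 1.
u = [5.0, 3.0, 2.0, 2.0]        # observed counts for cells (11, 12, 21, 22)
n = sum(u)
x = [ui / n for ui in u]        # initial condition u / |u|_1

dt = 0.01
for _ in range(200_000):        # total time 2000
    v = x[0] * x[3] - x[1] * x[2]   # net forward flux of X1 + X4 -> X2 + X3
    x[0] -= v * dt
    x[1] += v * dt
    x[2] += v * dt
    x[3] -= v * dt

# Birch's theorem: the limit is the MLE, i.e. the product of the marginals.
row = [u[0] + u[1], u[2] + u[3]]
col = [u[0] + u[2], u[1] + u[3]]
mle = [row[i] * col[j] / n ** 2 for i in (0, 1) for j in (0, 1)]
print([round(xi, 6) for xi in x])   # matches mle = [56, 40, 28, 20] / 144
```

Note that the Euler updates conserve all four marginals exactly, so the trajectory stays in the invariant polytope $P_A(u/|u|_1)$ throughout.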
As the reversible reactions in $\mathcal{S}_{MLE}$ approach equilibrium, we wish to absorb the values of the $X$ variables into the reaction rates and pretend that the irreversible reactions are reactions in the $\theta$ variables alone. This has the advantage that we can treat this pretend reaction system in the $\theta$ variables as a perturbation of a reversible, detailed balanced system. We can then hope to employ Theorem~\ref{thm:pgac} and conclude global convergence for these irreversible reactions, and hence for $\mathcal{S}_{MLE}$.
One small technical point deserves mention. The pretend reaction system in the $\theta$ variables is not a reaction system since the rates are not real numbers but functions of time. This will not trouble us. We have already provisioned for this in Definition~\ref{def:pert} by allowing perturbations of reaction systems to be differential inclusions.
\begin{theorem}[The reaction system $\mathcal{S}_{MLE}(A,B,B')$ computes the Maximum Likelihood Estimator]\label{thm:MLE}
Fix a design matrix $A= (a_{ij})_{m\times n}$, a basis $B$ for the free abelian group $\mathbb{Z}^n\cap\ker A$, and a nonzero data vector $u\in\mathbb{Z}^n_{\geq 0}$. Let $x(t) = (x_1(t), x_2(t),\dots, x_n(t),\theta_1(t),\theta_2(t),\dots, \theta_m(t))$ be a solution to the mass-action differential equations for the reaction system $\mathcal{S}_{MLE}(A,B,B')$ with initial conditions $x(0) = u/|u|_1$ and $\theta(0)=0$. Then $x(\infty):=\lim_{t\to\infty} x(t)$ exists and equals the maximum likelihood distribution $\hat{p}(u)$, and $\theta(\infty):=\lim_{t\to\infty} \theta(t)$ exists and equals the maximum likelihood estimator $\hat{\theta}(u)$.
\end{theorem}
\begin{proof}[Proof sketch]
Fix $u$ and let $\hat{p}=\hat{p}(u)$ and $\hat{\theta}=\hat{\theta}(u)$. Note that for the species $X_1, X_2,\dots, X_n$, the differential equations for $\mathcal{S}_{MLE}(A,B,B')$ and $\mathcal{S}_{MLD}(A,B)$ are identical, since these species appear purely catalytically in the reactions that belong to $\mathcal{R}_{MLE}(A,B,B')\setminus \mathcal{R}_{MLD}(A,B)$. Hence $x(\infty)=\hat{p}(u)$ follows from Theorem~\ref{thm:MLD}.
To see that $\theta(\infty)=\hat{\theta}$, let us first allow the $X$ species to reach equilibrium, and then treat the $\theta$ system obtained by replacing the $X$ species with rate constants equal to their equilibrium values. The system $\Theta_{MLE}(A,B,B',x(\infty))$ obtained in this way in the $\theta$ species alone is a reaction system with the reactions
\begin{itemize}\label{sys:theta}
\item For each column $j\in B'$ of $A$, a reaction $\sum_{i=1}^m a_{ij} \theta_i \to 0$ of rate $1$
\item For each column $j\in B'$ of $A$, a reaction $0 \to \sum_{i=1}^m a_{ij} \theta_i$ of rate $x_j(\infty)$.
\end{itemize}
This is a reversible reaction system, and the maximum likelihood estimators $\hat{\theta}$ are precisely its points of detailed balance; here we use the fact that $B'$ is a maximal linearly-independent set of columns of $A$. In addition, this system has no siphons: if species $\theta_i$ is absent and $a_{ij}>0$ for some $j\in B'$, then $\theta_i$ will immediately be produced by the reaction $0 \to \sum_{i'=1}^m a_{i'j} \theta_{i'}$. (We are assuming $A$ has no zero row; if it does, that row can be ignored.) It follows from Theorem~\ref{thm:gac} that this system is globally asymptotically stable, and every trajectory approaches a maximum likelihood estimator $\hat{\theta}$.
Our actual system may be viewed as a perturbation of the system $\Theta_{MLE}(A,B,B',x(\infty))$. Consider any trajectory $(x(t), \theta(t))$ of $\mathcal{S}_{MLE}(A,B,B')$ starting at $(u/|u|_1,0)$. We are going to consider the projected trajectory $(\mathbb{R}_{\geq 0},\theta)$. We now show that it is possible to choose appropriate $t_i$ and $\delta_i$ so that $((t_i,\infty),\theta(t))$ is a trajectory of a $\delta_i$-perturbation of $\Theta_{MLE}(A,B,B', x(\infty))$, for $i=1,2,\dots$.
Wait for a sufficiently large time $t_1$ until $x(t)$ is in a sufficiently small $\delta_1$-neighborhood of $x(\infty)$, which it will never leave. After this time, we obtain a differential inclusion in the $\theta$ species with the mass-action equations~\ref{eqn:ma} for the reactions
\begin{itemize}\label{sys:xtheta}
\item For each column $j\in B'$ of $A$, a reaction $\sum_{i=1}^m a_{ij} \theta_i \to 0$ of rate $1$
\item For each column $j\in B'$ of $A$, a reaction $0 \to \sum_{i=1}^m a_{ij} \theta_i$ with time-varying rate lying in the interval ${(x_j(\infty)-\delta_1, x_j(\infty)+\delta_1)}$.
\end{itemize}
Continuing in this way, we choose a decreasing sequence $\delta_1>\delta_2>\dots>0$ with $\lim_{i\to\infty}\delta_i = 0$, and corresponding times $t_1<t_2<t_3<\dots$ with $\lim_{i\to\infty} t_i = \infty$, such that after time $t_i$, $x(t)$ is in a $\delta_i$-neighborhood of $x(\infty)$ which it will never leave. Then $((t_i,\infty),\theta(t))$ is a trajectory of the $\delta_i$-perturbation of $\Theta_{MLE}(A,B,B',x(\infty))$. Hence $\theta(t)$ satisfies the conditions of Theorem~\ref{thm:pgac}, and so $\lim_{t\to\infty}\theta(t)=\hat{\theta}$.
\end{proof}
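Continuing the $2\times 2$ independence example (again our own numerical sketch, with forward Euler and an illustrative data vector), one can integrate the full system $\mathcal{S}_{MLE}$, taking $B'$ to be the columns for cells $(1,1)$, $(1,2)$, $(2,1)$, and check that the $\theta$ variables reach detailed balance, i.e. $\theta^{a_j} = x_j(\infty)$ for $j\in B'$:

```python
# Forward-Euler integration of S_MLE for the 2x2 independence model.
# Species: x = (x1..x4) for cells (11,12,21,22); theta = (r1, r2, c1, c2).
# B' = columns for cells (11), (12), (21): a_11=(1,0,1,0), a_12=(1,0,0,1),
# a_21=(0,1,1,0). Each j in B' contributes sum_i a_ij theta_i -> 0 (rate 1)
# and X_j -> X_j + sum_i a_ij theta_i (rate 1).
u = [5.0, 3.0, 2.0, 2.0]
n = sum(u)
x = [ui / n for ui in u]
th = [0.0, 0.0, 0.0, 0.0]                  # theta(0) = 0

cols = {0: (0, 2), 1: (0, 3), 2: (1, 2)}   # j in B' -> indices i with a_ij = 1
dt = 0.01
for _ in range(400_000):                   # total time 4000
    # X dynamics: the single reversible reaction X1 + X4 <=> X2 + X3.
    v = x[0] * x[3] - x[1] * x[2]
    x[0] -= v * dt; x[1] += v * dt; x[2] += v * dt; x[3] -= v * dt
    # theta dynamics: for each j in B', net production is x_j - prod_i theta_i.
    d = [0.0] * 4
    for j, (i1, i2) in cols.items():
        f = x[j] - th[i1] * th[i2]
        d[i1] += f
        d[i2] += f
    for i in range(4):
        th[i] += d[i] * dt

# At detailed balance, theta_{r_i} * theta_{c_j} equals the MLE cell
# probability p_ij for the columns in B'.
print(th[0] * th[2], th[0] * th[3], th[1] * th[2])  # ≈ 56/144, 40/144, 28/144
```

Since the toric parametrization has a scaling degeneracy, individual $\theta_i$ values depend on the trajectory, but the monomials $\theta^{a_j}$ are pinned to the maximum likelihood distribution, which is the content of the theorem.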
\section{Related Work and Conclusions}
The mathematical similarities of both log-linear statistics and reaction networks to toric geometry have been pointed out before~\cite{TDS,miller2011theory}. Craciun et al.~\cite{TDS} refer to the steady states of complex-balanced reaction networks as {\it Birch points} ``to highlight the parallels'' with algebraic statistics. This paper builds on these observations, and serves to flesh out this mathematical parallel into a scheme for molecular computation.
Various building blocks for molecular computation that assume mass-action kinetics have been proposed before. We briefly review some of these proposals.
In \cite{napp2013message}, Napp and Adams model molecular computation with mass-action kinetics, as we do here. They propose a molecular scheme to implement message passing schemes in probabilistic graphical models. The goal of their scheme is to convert a factor graph into a reaction network that encodes the single-variable marginals of the joint distribution as steady state concentrations. In comparison, the goal of our scheme is to do statistical inference and compute maximum likelihood estimators for log-linear models. Napp and Adams focus on the ``forward model'' task of how a given data-generating process (a factor graph) can lead to observed data, whereas our focus is on the ``backward model'' task of inference, going from the observed data to the data-generating process. Further, our scheme couples the deep role that MaxEnt algorithms play in machine learning with MaxEnt's roots in the second law of thermodynamics, whereas Napp and Adams draw their inspiration from variable elimination implemented via message passing, which has its roots in Boolean constraint satisfaction problems.
Qian and Winfree~\cite{seesawgates,qian2011scaling} have proposed a DNA gate motif that can be composed to build large circuits, and have experimentally demonstrated molecular computation of a Boolean circuit with around 30 gates. In comparison, our scheme natively employs a continuous-time dynamical system to do the computation, without a Boolean abstraction.
Taking a control theory point of view, Oishi and Klavins~\cite{Klavins_2011Biomolecular} have proposed a scheme for implementing linear input/output systems with reaction networks. Note that for a given matrix $A$, the set of maximum likelihood distributions is usually not linear, but log-linear.
Daniel et al.~\cite{daniel2013synthetic} have demonstrated an in vivo implementation of feedback loops, exploiting analogies with electronic circuits. It is possible that the success of their schemes is also related to the toric nature of mass-action kinetics.
Buisman et al.~\cite{buisman2009computing} have proposed a reaction network scheme for computation of algebraic functions. The part of our scheme which reads out the maximum likelihood estimator from the maximum likelihood distribution bears some similarity to their work.
One limitation of our present work is that the number of columns of the matrix $A$ can become very large, for example $2^{|V|}$ for a graphical model on a node set $V$. Since the number of species and the number of reactions both depend on the number of columns of $A$, this can require an exponentially large reaction network, which may become impractical. One direction for future work is to extend our scheme by specifying a reaction network that computes maximum likelihood for graphical models.
We have some freedom in our scheme in the choice of basis sets $B$ and $B'$. In any chemical implementation of this work, there might be opportunity for optimization in choice of basis.
\paragraph{Acknowledgements:} I thank Nick S. Jones, Anne Shiu, Abhishek Behera, Ezra Miller, Thomas Ouldridge, Gheorghe Craciun, and Bence Melykuti for useful discussions.
\bibliographystyle{plain}
La Monona is a type of folk song originating in the town of Villanueva de la Reina, in the province of Jaén (Spain), in the countryside by the Guadalquivir valley.
It is a copla with lyrics of a risqué or biting tone, typical of the farmhouses (cortijos) during the olive harvest. The lyrics range from relations between the sexes to political and social matters. Similar songs exist in neighbouring areas, such as Cazalilla.
Its origin dates back to the 18th century.
Satchelliella sziladyi is a species of dipteran insect belonging to the family Psychodidae, found in Europe: Hungary.
# Time Series Modeling - Part I (Theoretical Background)

Apart from classification and regression problems, time series models are a separate entity in themselves, not easily tackled by standard methods and algorithms (well, they can be after some smart tweaks). The main aim of a time series analysis is to forecast future values of a variable using its past values. Time series models are also very business friendly, and directly solve business problems like "What will my store's sales be in the next two months?" or "How many customers are going to come to my pizza store tomorrow, so that I can optimize my ingredients?"

Trends and seasonality are two important components of a time series.

### Stationary Time Series

All the modelling techniques discussed here are based on the assumption that our time series is weakly stationary. If a non-stationary series is encountered, it is first converted to a weakly stationary series, and modeling is then done on it.

A series $x_t$ is said to be weakly stationary if it satisfies the following properties –

1. The mean, $E(x_t)$, is the same for all t.
2. The variance of $x_t$ is the same for all t.
3. The covariance and correlation between $x_t$ and $x_{t-h}$ depend only on the lag, h. Hence, the covariance and correlation between $x_t$ and $x_{t-h}$ are the same for all t.

To put it simply, practitioners say that a stationary time series is one with no trend – it fluctuates around a constant mean and has constant variance. In stationary processes, shocks are temporary and dissipate (lose energy) over time; after a while, they do not contribute to the new time series values. For example, something which happened long enough ago, such as World War II, had an impact, but if the time series today is the same as if World War II had never happened, we would say that the shock lost its energy or dissipated. Stationarity is especially important as many classical econometric theories are derived under the assumption of stationarity.

Some examples of non-stationary series –

1. a series with a continual upward trend.
2. a series with a distinct seasonal pattern.

### AutoRegressive (AR) Models

An autoregressive (AR) model assumes that the present value of a time series variable can be explained as a function of its past values.

$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \dots + \phi_p x_{t-p} + w_t$

where $w_t$ is the error term and $x_t$ is stationary.

An AR model is said to be of order p if it depends on p past values, and is denoted AR(p).

An important thing to note about AR models is that they are not the same as standard linear regression models, because the data in this case is not necessarily independent and not necessarily identically distributed.

INTUITION

Let's discuss a practical example to get a feel for the kind of information an AR model incorporates.

Consider the number of blankets sold in a city. On a particular day, the temperature dropped below normal and there was an increase in the sale of blankets ($x_{t-1}$). The next day, the temperature went back to normal but there was still significant demand for blankets ($x_t$). This could be because the number of blankets sold depends on the current temperature but is also affected by the past sale of blankets. This situation can be expressed as –

$x_t = \phi_1 x_{t-1} + w_t$

### Moving Average (MA) Models

In moving average (MA) models, the present value of a time series is explained as a linear representation of the past error terms.

$x_t = \theta_1 w_{t-1} + \theta_2 w_{t-2} + \dots + \theta_q w_{t-q} + w_t$

where $w_k$ is the error at time k, $x_t$ is stationary, and the mean of the series is 0.

An MA model is said to be of order q if it depends on q past error terms, and is denoted MA(q).

INTUITION

Consider a car manufacturer who manufactured 10000 special edition cars. This edition was a success and he managed to sell all of them (let's call this $x_{t-1}$). But there were some 1500 customers who could not purchase the car as it went out of stock (let's call this $w_{t-1}$). Some of these 1500 customers settled for some other car, but some returned the next month when the special edition car was back in stock. Mathematically, the above scenario can be depicted as

$x_t = \theta_1 w_{t-1} + w_t$

It is usually difficult to guess a suitable model by just looking at the data. We need certain techniques to come up with a suitable forecasting model. Understanding the autocorrelation function and the partial autocorrelation function is an important step in time series modelling.

### Autocorrelation Function (ACF)

As discussed earlier, for a stationary series the autocorrelation between $x_t$ and $x_{t-h}$ depends only on the difference (lag) between the two measurements. Therefore, for a time series the autocorrelation function is a function of the lag, h. It gives the correlation between two time-dependent random variables separated by h time frames. ACF plots are used to infer the type and the order of the models that may be suitable for a particular forecasting problem.

Autocorrelation function of an AR(p) model –

The autocorrelation function of an AR model dampens exponentially as h increases. At times, the exponential decay can also be sinusoidal in nature. In practice, a sample won't usually provide such a clear pattern, but such an exponential decay is usually indicative of an AR model. An ACF plot cannot tell you the order of the AR model; to determine the order, we use the partial autocorrelation function plot, but more on that later.

Autocorrelation function of an MA(q) model –

The autocorrelation function of an MA(q) model cuts off at lag q. It means that it will have a finite value for lags h ≤ q. If the ACF plot has such a characteristic, we can decipher the order of the MA model as well. Again, in practice a sample won't usually provide such a clear pattern, but a resemblance to such a plot would suggest an MA model.

### Partial Autocorrelation Function (PACF)

We can understand the order of an MA(q) model by looking at its ACF plot, but this is not feasible for an AR(p) model. Hence, we use the partial autocorrelation function. It is the correlation between $x_t$ and $x_s$ with the linear effect of everything in the middle removed.

Consider an AR(1) model, $x_t = \phi_1 x_{t-1} + w_t$. We know that the correlation between $x_t$ and $x_{t-2}$ is not zero, because $x_t$ depends on $x_{t-2}$ through $x_{t-1}$. But what if we break this chain of dependence by removing the effect of $x_{t-1}$? That is, we consider the correlation between $x_t − \phi x_{t-1}$ and $x_{t-2} − \phi x_{t-1}$, because it is the correlation between $x_t$ and $x_{t-2}$ with the linear dependence of each on $x_{t-1}$ removed. In this way, we have broken the dependence chain between $x_t$ and $x_{t-2}$. Hence,

$cov(x_t − \phi x_{t-1}, x_{t-2} − \phi x_{t-1}) = cov(w_t, x_{t-2} − \phi x_{t-1}) = 0$

PACF of an AR(p) model – the PACF of an AR(p) model cuts off after p lags.

PACF of an MA(q) model – similar to the ACF of an AR(p) model, the PACF of an MA(q) model tails off as the lag increases.

What are these horizontal blue lines in the ACF and PACF plots?

For those who are not comfortable with inferential statistics – we always work with sample data, so we cannot find the true population parameters. Instead, we compute sample statistics (in this case the sample autocorrelation) to estimate the true population parameter (the true autocorrelation). The blue lines denote the confidence interval of our estimate. If the autocorrelation value in the plot is inside the blue lines, we assume it to be zero (statistically insignificant). You do not have to worry about calculating the confidence intervals, as most software packages do it for you.

For those who are comfortable with inferential statistics – we take

null hypothesis: the autocorrelation for a particular lag, ρ(h) = 0
alternate hypothesis: ρ(h) ≠ 0

The blue lines represent the ±2 standard errors region. We reject the null hypothesis if our sample estimate is outside this boundary.

### AutoRegressive Moving Average (ARMA) Models

Most often, using only AR or MA models does not give the best results. Hence, we use ARMA models, which incorporate autoregressive as well as moving average terms. An ARMA model can be represented as

$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \dots + \phi_p x_{t-p} + \theta_1 w_{t-1} + \theta_2 w_{t-2} + \dots + \theta_q w_{t-q} + w_t$

An ARMA model dependent on p past values and q past error terms is denoted ARMA(p,q). As for the behavior of the ACF and PACF for ARMA models: for an ARMA(p,q) model, both the ACF and the PACF tail off as the lag increases.

### Differencing

Until now we have only talked about stationary series. But what if we encounter a non-stationary series? As mentioned earlier, we have to come up with strategies to stationarize the series. Differencing is one such strategy, and perhaps the most common. Consider

$x_t = \mu + \phi x_{t-1} + w_t$ …… (1)

$x_{t-1} = \mu + \phi x_{t-2} + w_{t-1}$ …… (2)

Subtracting (2) from (1) we get

$x_t − x_{t-1} = \phi(x_{t-1} − x_{t-2}) + w_t − w_{t-1}$

Here, we removed a linear trend in the data by doing a first-order differencing. After fitting a model on the differenced terms, we can always retrieve the actual terms to get their forecasted values. An example of a second-order differencing:

$(x_t − x_{t-1}) − (x_{t-1} − x_{t-2})$

### AutoRegressive Integrated Moving Average (ARIMA) Models

These are nothing but ARMA models applied after differencing a time series. In most software packages, the elements of the model are specified in the order

(AR order, differencing order, MA order)

For example,

MA(2) => ARIMA(0, 0, 2)
ARMA(1,3) => ARIMA(1, 0, 3)
AR(1), differencing(1), MA(2) => ARIMA(1, 1, 2)

Disclaimer – a lot of the material in this post has been shamelessly copied from the awesome post on Yashuseth's Blog. Since it was already well written, I did not want to write it again.

Other sources:
https://towardsdatascience.com/time-series-analysis-in-python-an-introduction-70d5a5b1d52a
The awesome guys at Analytics Vidhya
Jason Brownlee - 1
Jason Brownlee - 2

Written on February 23, 2018
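The exponential ACF decay of an AR model described above can be checked numerically. Here is a small sketch (our own; phi = 0.7 and the series length are illustrative choices) that simulates an AR(1) process and computes its sample autocorrelations:

```python
import random

# Simulate an AR(1) process x_t = phi * x_{t-1} + w_t with Gaussian noise.
random.seed(0)
phi = 0.7
x = [0.0]
for _ in range(20_000):
    x.append(phi * x[-1] + random.gauss(0.0, 1.0))

def acf(series, h):
    """Sample autocorrelation at lag h."""
    n = len(series)
    m = sum(series) / n
    num = sum((series[t] - m) * (series[t + h] - m) for t in range(n - h))
    den = sum((s - m) ** 2 for s in series)
    return num / den

# Theory: ACF(h) = phi**h for AR(1), so the values decay geometrically.
for h in (1, 2, 3):
    print(h, round(acf(x, h), 3))   # ≈ 0.7, 0.49, 0.343
```

In practice one would use a library routine (e.g. an ACF function from a statistics package) rather than this hand-rolled estimator, but the geometric decay pattern is the same.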
Pirates Winter Leagues: Oneil Cruz, Rodolfo Castro, and Miguel Andujar
Monday was a light night of action around winter ball. The Dominican Republic was the only country playing games. Three Pittsburgh Pirates saw action.
Oneil Cruz went 0-for-4 with three strikeouts, in a night that he faced three current big league pitchers. He's now hitting .235 with a .798 OPS in ten games.
Rodolfo Castro went 2-for-5 with two singles and an RBI. He started at shortstop in this game, giving him starts at second base, third base and shortstop in the last few days. He's now hitting .256 with a .701 OPS in 26 games.
Miguel Andujar went 1-for-4 with a single, walk and an RBI. He had back-to-back RBI singles with Castro in the seventh inning of their team's 5-3 loss. Andujar is now hitting .267 with a .700 OPS in 18 games.
Castro's numbers are very respectable, especially given that he's one of the youngest players in the league. But what might be more impressive is that his team kept playing him when his numbers weren't as good, i.e., that he seems to have the trust of his coaches.
joebaseball
Time to give Cruz the Jacob DeGrom treatment. SS turned Pitcher.
package org.railsschool.tiramisu.views.patterns;
import android.content.Context;
import android.view.View;
import android.widget.TextView;
import org.railsschool.tiramisu.R;
import org.railsschool.tiramisu.models.dao.DayNotificationPreference;
import org.railsschool.tiramisu.models.dao.TwoHourNotificationPreference;
import butterknife.ButterKnife;
import butterknife.InjectView;
/**
 * @class ReminderSeekbarLabelPattern
 * @brief Highlights the reminder seekbar label (always / only if attending /
 *        never) that corresponds to the current notification preference.
 */
public class ReminderSeekbarLabelPattern {
private Context _context;
@InjectView(R.id.pattern_reminder_seekbar_always)
TextView _always;
@InjectView(R.id.pattern_reminder_seekbar_attending)
TextView _onlyIfAttending;
@InjectView(R.id.pattern_reminder_seekbar_never)
TextView _never;
public ReminderSeekbarLabelPattern(Context context, View pattern) {
this._context = context;
ButterKnife.inject(this, pattern);
}
private void _setCurrent(int value) {
int softGray = _context.getResources().getColor(R.color.soft_gray),
blue = _context.getResources().getColor(R.color.blue);
_always.setTextColor(softGray);
_onlyIfAttending.setTextColor(softGray);
_never.setTextColor(softGray);
if (value == 0) {
_always.setTextColor(blue);
} else if (value == 2) {
_never.setTextColor(blue);
} else {
_onlyIfAttending.setTextColor(blue);
}
}
public void update(TwoHourNotificationPreference preference) {
_setCurrent(preference.toInt());
}
public void update(DayNotificationPreference preference) {
_setCurrent(preference.toInt());
}
}
Vinton is a village in the United States, in the state of Ohio, in Gallia County.
# Clifford algebra is graded separable

(MathOverflow question by Sasha Pavlov, 2012-12-17)

Let $D$ be an algebra of odd differential operators on a free module $V$; this algebra is isomorphic to the Clifford algebra $Cl(V^* \oplus V)$. Let $m$ denote the multiplication map
$$m : D\otimes D \to D.$$
I need an explicit formula for a bimodule splitting of this map, or equivalently an element $z \in D\otimes D$ such that $az=za$ for every $a \in D$ and $m(z)=1$.

It is possible to use the isomorphism of algebras $Cl(V^* \oplus V) \cong End(\wedge V)$, and for $End(\wedge V)$ such a splitting is given (up to sign) by the same formula as for a matrix algebra. So I know that such a splitting exists, and I want a nice formula in terms of differential operators (or standard generators of the Clifford algebra).
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical" android:layout_width="match_parent"
android:layout_height="match_parent">
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="horizontal"
android:paddingTop="32dp"
android:layout_centerHorizontal="true">
<Button
android:text="load"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/btnShowMRect" />
<Button
android:text="stop"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/btnStopMRect" />
</LinearLayout>
<FrameLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_centerHorizontal="true"
android:id="@+id/frMRectBanner" />
</RelativeLayout>
We assist new arrivals with documentation paperwork and in navigating the American system.
ESL classes are taught in our Faribault location through our partnership with Faribault Adult Basic Education.
We work with Rice County Social Services to connect people with transportation and childcare resources, and to provide job search assistance.
We offer emergency support funding through Islamic Relief USA's zakat program.
We are grant-funded through UCare to provide education regarding diabetes, child and teen check-ups, and mental health.
We partner with Minnesota CareCounseling to provide linguistically-appropriate trauma counseling.
We partner with the MN Department of Human Services - Refugee Assistance Program (through IMAA) to provide assistance developing job skills and finding employment.
Through partnerships with the MN Department of Human Services and the MN Dept of Employment and Economic Development we provide leadership, soft skills training and assistance with job search to youth.
Stamped: Racism, Antiracism, and You
Jason Reynolds, Ibram Kendi
Publisher: Little, Brown & Company
In this important and compelling young readers adaptation of his National Book Award-winning title, Dr. Ibram X. Kendi, writing with award-winning author Jason Reynolds, chronicles the story of anti-black, racist ideas over the course of American history.
Racist ideas in our country did not arise from ignorance or hatred. Instead, they were developed by some of the most brilliant minds in history to justify and rationalise the nation's deeply entrenched discriminatory policies. But while racist ideas have always been easy to fabricate and distribute, they can also be discredited. In shedding light on the history of racist ideas in America, this adaptation offers young readers the tools they need to combat these ideas – and, in the process, gives society a reason to hope.
Through a gripping and fast-paced narrative that speaks to young readers on their level, this book shines a light on the many insidious forms of racist ideas – and on ways anti-racists can be empowered to combat racism in their daily lives.
Publisher Review
"The R-word: Racism. Some tuck tail and run from it. Others say it's no longer a thing. But Dr. Kendi breaks it down, and Jason Reynolds makes it easy to understand. Mark my words: This book will change everything." --Nic Stone, bestselling author of Dear Martin

"Teens are often searching for their place in the world. In Stamped, Reynolds gives context to where we are, how we got here, and reminds young people-and all of us-that we have a choice to make about who we want to be. This unapologetic telling of the history of racism in our nation is refreshingly simple and deeply profound. This is the history book I needed as a teen." --Renee Watson, New York Times bestselling and Newbery Honor-winning author of Piecing Me Together

"Sheer brilliance....An empowering, transformative read. Bravo." --Jewell Parker Rhodes, New York Times bestselling author of Ghost Boys

"Reading this compelling not-a-history book is like finding a field guide to American racism, allowing you to quickly identify racist ideas when you encounter them in the wild." --Dashka Slater, author of The 57 Bus

"Jason Reynolds has the amazing ability to make words jump off the page. Told with passion, precision, and even humor, Stamped is a true story-a living story-that everyone needs to know." --Steve Sheinkin, New York Times bestselling and award-winning author of Bomb and Born to Fly

"If knowledge is power, this book will make you more powerful than you've ever been before." --Ibi Zoboi, author of the National Book Award finalist American Street

* "Reynolds and Kendi eloquently challenge the common narrative attached to U.S. history. This adaptation, like the 2016 adult title, will undoubtedly leave a lasting impact. Highly recommended for libraries serving middle and high school students." --School Library Journal, starred review

* "Required reading for everyone, especially those invested in the future of young people in America." --Booklist, starred review

* "Readers who want to truly understand how deeply embedded racism is in the very fabric of the U.S., its history, and its systems will come away educated and enlightened. Worthy of inclusion in every home and in curricula and libraries everywhere. Impressive and much needed." --Kirkus Reviews, starred review

* "Eye-opening...this engaging overview offers readers lots to think about and should spark important conversations about this timely topic." --School Library Connection, starred review

Praise for Stamped: Racism, Antiracism, and You:

"An amazingly timely and stunningly accessible manifesto for young people....At times funny, at times somber but always packed with relevant information that is at once thoughtful and spot-on, Stamped is the book I wish I had as a young person and am so grateful my own children have now." --Jacqueline Woodson, bestselling and National Book Award-winning author of Brown Girl Dreaming

* "Reynolds (Look Both Ways) lends his signature flair to remixing Kendi's award-winning Stamped from the Beginning...Told impressively economically, loaded with historical details that connect clearly to current experiences, and bolstered with suggested reading and listening selected specifically for young readers, Kendi and Reynolds's volume is essential, meaningfully accessible reading." --Publishers Weekly, starred review

* "An epic feat... More than merely a young reader's adaptation of Kendi's landmark work, Stamped does a remarkable job of tying together disparate threads while briskly moving through its historical narrative." --Bookpage, starred review

"Reynolds's engaging, clear prose shines a light on difficult and confusing subjects....This is no easy feat." --The New York Times Book Review
Home > visual novel > Utawarerumono: A beautiful cultural artefact
Utawarerumono: A beautiful cultural artefact
Utawarerumono is truly beautiful stuff. It's a gorgeous visual novel, with some quality turn-based tactics action backing it up. But while that is good stuff, it is in what inspired Utawarerumono that the game finds its greatest magic - it draws on the culture of Japan's indigenous Ainu people, and that gives it a tone all of its own.
In this video I talk about those influences a bit, and discuss why they matter: how they give the game its unique aesthetics, its tonal differences from most JRPGs and fantasy VNs, and how they inspired me to learn more about a native culture we just don't get to see much.
For more information on the Ainu influences in Utawarerumono, check out the interview I ran with the producer of the series from last year's TGS.
The Intersektsionen Byuro ('Inter-Sectional Bureau') was a committee of Yiddish-speaking sections of the French trade union confederation Confédération générale du travail (CGT).
Intersektsionen Byuro was founded at the first conference of the Jewish labour movement in France, held in December 1910.
Intersektsionen Byuro emerged from the efforts of Solomon Lozovsky, an exiled Russian-Jewish social democrat, and other Jewish radicals to unite different Yiddish-speaking union sections. It functioned as a link between the Jewish and French labour movements. Intersektsionen Byuro was modelled after the Deutsche Arbeiter Kartell ('German Workers Cartel'), the organization of German workers in France. Intersektsionen Byuro united Yiddish-speaking sections of unions among cobblers, locksmiths, woodworkers, bakers, leatherworkers, tailors and barbers as well as the Syndicat des casquettiers (Capmakers Union, the first Jewish trade union in Paris, founded in 1896). All in all, Intersektsionen Byuro gathered over a dozen trade union sections in Paris. The organization published the journal Der yidisher arbeyter ('The Jewish Worker') as its organ.
The organization found itself in the midst of ongoing debates of the role of Jewish separatism in the labour movement; the Bundists wished to have a more distinct 'Jewish' movement whilst the anarchists claimed the Jewish separatism of the Intersektsionen Byuro was going too far. At the second conference of the Jewish labour movement, held in December 1912, the Bundists (refugees after the defeat of the Russian Revolution of 1905) proposed creating a completely independent Jewish labour centre (i.e. breaking the bonds with the French CGT). This proposal was rejected by the majority at the conference.
The organization ceased to exist in 1914. In 1923 the Confédération générale du travail unitaire (CGTU) organized the Intersindikale kommisie as a continuation of the Intersektsionen Byuro, notably a majority of Jewish workers had joined CGTU in the CGT-CGTU split.
References
1910 establishments in France
1914 disestablishments in France
Ashkenazi Jewish culture in Paris
General Confederation of Labour (France)
Jewish anarchism
Jewish socialism
Secular Jewish culture in France
Yiddish culture in France
Elaine is an experienced Solicitor with a background in litigation and dispute resolution and has recently taken up a Consultancy role with this firm specialising in Employment Law.
Elaine has a Law degree from Sussex University. She completed the Legal Practice Course at the College of Law in Guildford and was admitted as a solicitor in 2001. She went on to gain a Master's Degree specialising in Employment Law from Leicester University.
Elaine specialises in all employment matters, including but not limited to unfair/constructive dismissal, discrimination claims, settlement agreements, drafting and reviewing contracts and policies and advising on workplace issues.
Elaine's hobbies include travelling, theatre, cinema and fine dining.
/**
* Container and controllers for the highscore screen.
*/
game.HighScoreCtl = game.HighScoreCtl || {};
game.HighScoreCtl.Container = me.ObjectContainer.extend({
init: function() {
// call the constructor
this.parent();
// non collidable
this.collidable = false;
this.autoSort = false;
this.addChild(new game.HighScoreCtl.Title(130, 8));
this.addChild(new game.HighScoreCtl.ScoresList(88, 32));
this.addChild(new game.HighScoreCtl.Bg());
}
});
/** The background item. */
game.HighScoreCtl.Bg = me.Renderable.extend({
init: function() {
// call the parent constructor
// (size does not matter here)
this.parent(new me.Vector2d(10, 10), 10, 10);
// make sure we use screen coordinates
this.floating = true;
this.alwaysUpdate = true;
this.backImage = me.loader.getImage("highscorebg");
this.x = 0;
this.y = 0;
},
/**
* draw the overlay
*/
draw : function (context) {
this.parent(context);
context.drawImage(this.backImage, this.x - 16, this.y - 16);
},
update: function() {
this.parent();
this.y += 1;
while (this.x >= 16) { this.x -= 16; }
while (this.y >= 16) { this.y -= 16; }
if (me.input.isKeyPressed("exit") || me.input.isKeyPressed("action")) {
me.state.change(me.state.MENU);
}
return true;
},
});
/** Title graphic for the highscore page. */
game.HighScoreCtl.Title = me.Renderable.extend({
init: function(x, y) {
// call the parent constructor
// (size does not matter here)
this.parent(new me.Vector2d(x, y), 10, 10);
// make sure we use screen coordinates
this.floating = true;
this.img = me.loader.getImage("highscoretitle");
},
/**
* draw the overlay
*/
draw : function (context) {
context.drawImage(this.img, this.pos.x, this.pos.y);
},
update: function() {
this.parent();
},
});
/**
* A scores list, either for best scores or best times.
*/
game.HighScoreCtl.ScoresList = me.Renderable.extend({
    /** Renders both the best-scores and best-times lists, anchored at (X, Y).
     * Data is sourced from game.settings.bestScores and bestTimes. */
init: function(x, y) {
// call the parent constructor
// (size does not matter here)
this.parent(new me.Vector2d(x, y), 10, 10);
// make sure we use screen coordinates
this.floating = true;
this.alwaysUpdate = true;
this.items = [];
this.addText("Best Scores:", "yellow");
for (var i = 0; i < 6; i++) {
var override = null;
if (i == game.data.highlightScoreIndex) {
override = "green";
}
this.addScoreItem(i + 1, game.settings.bestScores[i], override);
}
this.addText("", "blue");
this.addText("Best Times:", "green");
for (var i = 0; i < 6; i++) {
            var override = null;
if (i == game.data.highlightTimeIndex) {
override = "green";
}
this.addScoreItem(i + 1, game.settings.bestTimes[i], override);
}
},
draw: function(context) {},
/** Adds a menu item with text STR and color COLOR. */
addText: function(str, color) {
var newIndex = this.items.length;
var textObj = new game.FancyText.String(this.pos.x,
this.pos.y + 12*newIndex, str.length, color);
textObj.setString(str);
this.items[newIndex] = [textObj];
me.game.add(textObj, 100);
},
/** Adds an item to the list corresponding to a highscore entry. INDEX is
* the number of the score to add. SCOREOBJ is the obj with properties
* "score", "time", and "initials". OVERRIDECOLOR will color the whole
* row one color, if it is set. */
addScoreItem: function(index, scoreObj, overrideColor) {
var newIndex = this.items.length;
var indText = new game.FancyText.String(this.pos.x,
this.pos.y + 12*newIndex, 3, overrideColor || "blue");
var str = index.toString() + ".";
if (index < 10) { str = " " + str; }
indText.setString(str);
var initialsText = new game.FancyText.String(this.pos.x + 28,
this.pos.y + 12*newIndex, 3, overrideColor || "red");
str = scoreObj.initials;
initialsText.setString(str);
var scoreText = new game.FancyText.String(this.pos.x + 56,
this.pos.y + 12*newIndex, 8, "yellow");
str = this.scoreToStr(scoreObj.score);
scoreText.setString(str);
var timeText = new game.FancyText.String(this.pos.x + 124,
this.pos.y + 12*newIndex, 9, "green");
str = this.stepsToTimeStr(scoreObj.time);
timeText.setString(str);
this.items[newIndex] = [indText, scoreText, timeText, initialsText];
me.game.add(indText, 100);
me.game.add(scoreText, 100);
me.game.add(timeText, 100);
me.game.add(initialsText, 100);
},
/** Return a stringified score. */
scoreToStr: function(score) {
var str = "$" + score.toString();
while (str.length < 8) str = " " + str;
return str;
},
/** Turns STEPS in 60ths of a second into a proper --'--.--" readout. */
stepsToTimeStr: function(steps) {
var centiseconds = Math.floor(steps * 100.0 / 60.0);
var str = "";
// 10 minutes at a time
var digit = 0;
while (centiseconds >= 60000) {
centiseconds -= 60000;
digit += 1;
}
str += digit.toString();
// 1 minute at a time
digit = 0;
while (centiseconds >= 6000) {
centiseconds -= 6000;
digit += 1;
}
str += digit.toString() + "'";
// 10 seconds at a time
digit = 0;
while (centiseconds >= 1000) {
centiseconds -= 1000;
digit += 1;
}
str += digit.toString();
// 1 second at a time
digit = 0;
while (centiseconds >= 100) {
centiseconds -= 100;
digit += 1;
}
str += digit.toString() + ".";
// 10 centis at a time
digit = 0;
while (centiseconds >= 10) {
centiseconds -= 10;
digit += 1;
}
str += digit.toString();
// 1 centi at a time
digit = 0;
while (centiseconds >= 1) {
centiseconds -= 1;
digit += 1;
}
str += digit.toString() + "\"";
return str;
},
});
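The digit-by-digit loops in `stepsToTimeStr` can be condensed. Below is a standalone sketch of the same conversion (an illustrative rewrite, not part of the game code), where 60 steps equal one second and the output is formatted `MM'SS.CC"`:

```javascript
// Convert a step count (60ths of a second) into a MM'SS.CC" readout,
// mirroring the arithmetic of stepsToTimeStr above.
function formatSteps(steps) {
    var centiseconds = Math.floor(steps * 100.0 / 60.0);
    var minutes = Math.floor(centiseconds / 6000);   // 6000 centis per minute
    var seconds = Math.floor((centiseconds % 6000) / 100);
    var centis = centiseconds % 100;
    function pad(n) { return (n < 10 ? "0" : "") + n.toString(); }
    return pad(minutes) + "'" + pad(seconds) + "." + pad(centis) + "\"";
}
```

For example, 3661 steps (about 61 seconds) formats as `01'01.01"`.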
Thread: Finding the value of the angle theta in radians, knowing sec theta and cot theta

1. Original post:

Hi,

The question I have is:

Given that $\sec\theta = -2$, $\cot\theta = -\frac{1}{3}\sqrt{3}$ and $-\pi < \theta < \pi$, find the exact value of the angle $\theta$ in radians. Justify your answer.

Is the following along the right lines?

$\sec\theta = 1/\cos\theta$, therefore $\cos\theta = 1/\sec\theta = -1/2$.
$\cot\theta = 1/\tan\theta$, therefore $\tan\theta = 1/\cot\theta = -3\sqrt{3}/3$.
$\sin\theta = \tan\theta\cos\theta = (-3\sqrt{3}/3) \times (-1/2) = \frac{1}{2}\sqrt{3}$.

Since both $\tan\theta$ and $\cos\theta$ are negative and $\theta$ lies in the range $-\pi < \theta < \pi$, $\theta$ must lie in the 2nd quadrant, so is positive.

2. Reply:

Good. Now you also need to know some basic angles. All three angles in an equilateral triangle have measure $\pi/3$ radians. If you drop a perpendicular from one vertex to the opposite side, you divide the triangle into two congruent right triangles with angles of $\pi/3$ and $\pi/6$. If you take the equilateral triangle to have sides of length 1, the hypotenuse of the right triangles is 1 and the leg opposite the $\pi/6$ angle is 1/2.

3. Follow-up from the original poster:

Thank you for such a quick reply! So does the angle $\theta = \frac{1}{2}\sqrt{3}$? Can I then justify this answer by explaining what you have written? Why do I need to know the lengths of the sides in the right-angled triangle to find the exact value of the angle $\theta$ in radians? Sorry if this seems like a really stupid question.
Chi Mak
Searching a spy's home without leaving a trace
Chi Mak (traditional Chinese: 麥大志; simplified Chinese: 麦大志; pinyin: Mài Dàzhì; Jyutping: mak6 daai6 zi3) is a Chinese-born naturalized American citizen who worked as an engineer for California-based defense contractor Power Paragon, a part of L-3 Communications. In 2007, Mak was found guilty of conspiring to export sensitive defense technology to China.
Mak's defense was that he thought there was nothing improper about allowing the papers on U.S. defense technology to leave the U.S., despite his training from his employer indicating quite the opposite. He had intentionally released it without his employer's permission at a 2004 international engineering conference. He had been briefed every year on regulations regarding documents designated "For Official Use Only" (FOUO) and items restricted by export controls. His defense argued that making the data accessible to scrutiny by the general public negated its military value and made it acceptable to transport outside the United States, despite the fact that Chi Mak was the one who released the information without authorization. The defense also argued that the data was in the public domain. However, once again this was due to Chi Mak's unauthorized release of it.
The prosecution indicated that the data was nevertheless export-controlled and that it should not have been shared with foreign nationals without authorization. The IEEE presentations cited by prosecution in the trial are currently available on a worldwide basis, due to Chi Mak's unauthorized releases.
Mak's brother and sister-in-law were apprehended by the FBI after boarding a flight to Hong Kong carrying one encrypted CD which contained defense-related documents. They, along with their son as well as Mak's wife, all pleaded guilty to related charges.
On March 24, 2008, he was sentenced to 24 years and 4 months in federal prison.
Mak lived in Hong Kong before moving to the U.S. as an immigrant in the late 1970s.
import aiohttp
import discord
import logging
import random
from collections import defaultdict
from xml.etree import cElementTree as ET
from discord.ext import commands
import code.Perms as Perms
Perms = Perms.Perms
log = logging.getLogger(__name__)
class Porn:
def __init__(self, bot):
log.debug("Porn Loading...")
self.bot = bot
@commands.command(pass_context=True, no_pm=False)
async def rule34(self, ctx):
        """Search rule 34
        Separate multiple tags with ','"""
await self.bot.send_typing(ctx.message.channel)
tags = ctx.message.content.lower().replace("rule34", "")
tags = tags.replace(".", "")
tags = tags.replace("<@{}>".format(self.bot.user.id), "") # mention
tags = tags.replace("<@!{}>".format(self.bot.user.id), "") # mention w/ nickname
tags = tags.strip()
if tags == "":
            await self.bot.send_message(ctx.message.channel, "Please search for something\nUse ``,`` to separate search terms")
return
else:
            tags = tags.replace(", ", ",").replace(" ", "_")
            tags = tags.replace(",", " ")  # the API expects space-separated tags
imgList = await Rule34(self.bot)._getImageURLS(tags=tags)
        try:
            # Check for results before choosing; random.choice fails on None/empty.
            if imgList:
                url = random.choice(imgList)
                em = discord.Embed(color=16738740)
                em.set_image(url=url)
                if ".gif" in url:
                    await self.bot.send_message(ctx.message.channel, url)
                else:
                    await self.bot.send_message(ctx.message.channel, embed=em)
            else:
                em = discord.Embed(description="No results", color=16711680)
                await self.bot.send_message(ctx.message.channel, embed=em)
        except:
            em = discord.Embed(description="No results", color=16711680)
            await self.bot.send_message(ctx.message.channel, embed=em)
class Rule34:
"""Based on my rule34 api, modified for async"""
def __init__(self, bot):
self.session = aiohttp.ClientSession(loop=bot.loop)
@staticmethod
def _urlGen(tags=None, limit=None, id=None, PID=None, deleted=None, **kwargs):
"""Generates a URL to access the api using your input:
Arguments:
"limit" ||str ||How many posts you want to retrieve
"pid" ||int ||The page number.
"tags" ||str ||The tags to search for. Any tag combination that works on the web site will work here. This includes all the meta-tags. See cheatsheet for more information.
"cid" ||str ||Change ID of the post. This is in Unix time so there are likely others with the same value if updated at the same time.
"id" ||int ||The post id.
"deleted"||bool||If True, deleted posts will be included in the data
All arguments that accept strings *can* accept int, but strings are recommended
If none of these arguments are passed, None will be returned
"""
# I have no intentions of adding "&last_id=" simply because its response can easily be massive, and all it returns is ``<post deleted="[ID]" md5="[String]"/>`` which has no use as far as im aware
URL = "https://rule34.xxx/index.php?page=dapi&s=post&q=index"
if PID != None:
URL += "&pid={}".format(PID)
if limit != None:
URL += "&limit={}".format(limit)
if id != None:
URL += "&id={}".format(id)
if tags != None:
tags = str(tags).replace(" ", "+")
URL += "&tags={}".format(tags)
if deleted == True:
URL += "&deleted=show"
if PID != None or limit != None or id != None or tags != None:
return URL + "&rating:explicit"
else:
return None
async def _totalImages(self, tags):
"""Get an int of how many images are on rule34.xxx
Argument: tags (string)"""
XML = None
with aiohttp.Timeout(10):
async with self.session.get(self._urlGen(tags=tags, PID=0)) as XMLData:
XMLData = await XMLData.read()
XMLData = ET.XML(XMLData)
XML = self.ParseXML(XMLData)
return int(XML['posts']['@count'])
async def _getImageURLS(self, tags):
"""Returns a list of all images/webms/gifs it can find
This function can take a LONG time to finish with huge tags. E.G. in my testing "gay" took 200seconds to finish (740 pages)
Argument: tags (string)"""
        num = await self._totalImages(tags)
if num != 0:
imgList = []
if tags == "random":
tempURL = self._urlGen(PID=random.randint(0, 1000))
else:
if num <= 300:
pid = 0
else:
pid = int(num/100)-3
if pid > 1000:
# rule34 wont return results for a pid over 1000
pid = 1000
pid = random.randint(1, pid)
tempURL = self._urlGen(tags=tags, PID=pid)
try:
with aiohttp.Timeout(10):
async with self.session.get(tempURL) as XML:
XML = await XML.read()
XML = self.ParseXML(ET.XML(XML))
for data in XML['posts']['post']:
try:
if not ".webm" in data['@file_url']:
imgList.append(str(data['@file_url']))
except Exception as e:
log.error(e)
await self.session.close()
if len(imgList) >= 1:
return imgList
else:
return None
except Exception as e:
log.error(e)
await self.session.close()
return None
await self.session.close()
if len(imgList) == 0:
return None
return imgList
else:
await self.session.close()
return None
def ParseXML(self, rawXML):
"""Parses entities as well as attributes following this XML-to-JSON "specification"
Using https://stackoverflow.com/a/10077069"""
d = {rawXML.tag: {} if rawXML.attrib else None}
children = list(rawXML)
if children:
dd = defaultdict(list)
for dc in map(self.ParseXML, children):
for k, v in dc.items():
dd[k].append(v)
d = {rawXML.tag: {k: v[0] if len(v) == 1 else v for k, v in dd.items()}}
if rawXML.attrib:
d[rawXML.tag].update(('@' + k, v) for k, v in rawXML.attrib.items())
if rawXML.text:
text = rawXML.text.strip()
if children or rawXML.attrib:
if text:
d[rawXML.tag]['#text'] = text
else:
d[rawXML.tag] = text
return d
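The `ParseXML` helper above implements a common XML-to-dict convention: attributes become `'@'`-prefixed keys and repeated child tags are collected into lists. A self-contained sketch of the same algorithm (names here are illustrative, not from the bot):

```python
from collections import defaultdict
from xml.etree import ElementTree as ET

def parse_xml(elem):
    """XML to dict: '@'-prefixed attributes; same-tag siblings become lists."""
    d = {elem.tag: {} if elem.attrib else None}
    children = list(elem)
    if children:
        dd = defaultdict(list)
        for child in map(parse_xml, children):
            for key, value in child.items():
                dd[key].append(value)
        d = {elem.tag: {k: v[0] if len(v) == 1 else v for k, v in dd.items()}}
    if elem.attrib:
        d[elem.tag].update(('@' + k, v) for k, v in elem.attrib.items())
    if elem.text and elem.text.strip():
        text = elem.text.strip()
        if children or elem.attrib:
            d[elem.tag]['#text'] = text
        else:
            d[elem.tag] = text
    return d

# A post index like rule34's maps onto a dict mirroring the XML structure:
doc = ET.XML('<posts count="2"><post file_url="a.png"/><post file_url="b.png"/></posts>')
parsed = parse_xml(doc)
# parsed['posts']['@count'] is '2'; parsed['posts']['post'] is a list of two dicts
```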
export default function reducer(state={
userStatus: {
activatedUser: false,
loggedInUser: false
},
}, action) {
switch (action.type) {
case 'ACTIVATE_NEW_USER_FULFILLED': {
return {
...state,
userStatus: action.payload,
};
}
}
return state;
}
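A quick sketch of how this reducer behaves (the payload shape here is an assumption based on the initial `userStatus`):

```javascript
// Self-contained copy of the reducer above, for illustration only.
function reducer(state = { userStatus: { activatedUser: false, loggedInUser: false } }, action) {
  switch (action.type) {
    case 'ACTIVATE_NEW_USER_FULFILLED':
      return { ...state, userStatus: action.payload };
  }
  return state;
}

// Redux initializes state with an unknown action, then the fulfilled
// activation action swaps in the new userStatus wholesale.
const initial = reducer(undefined, { type: '@@INIT' });
const next = reducer(initial, {
  type: 'ACTIVATE_NEW_USER_FULFILLED',
  payload: { activatedUser: true, loggedInUser: true },
});
```

Unknown action types fall through the switch and return the existing state object unchanged.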
\section{Introduction}
In this paper we study the real roots of the Yablonskii--Vorob'ev polynomials $Q_n$ ($n\in\mathbb{N}$).
Yablonskii and~Vorob'ev found these polynomials while studying the hierarchy of rational solutions of the second Painlev\'e equation.
The Yablonskii--Vorob'ev polynomials satisfy the def\/ining dif\/ferential-dif\/ference equation
\begin{gather*}
Q_{n+1}Q_{n-1}=zQ_n^2-4\big(Q_n Q_n''-(Q_n')^2\big),
\end{gather*}
with $Q_0=1$ and~$Q_1=z$.
The Yablonskii--Vorob'ev polynomials $Q_n$ are monic polynomials of degree $\frac{1}{2}n(n+1)$, with integer coef\/f\/icients.
The f\/irst few are given in Table~\ref{tableQn}.
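For example, taking $n=1$ in this recurrence, with $Q_0=1$ and~$Q_1=z$, gives
\begin{gather*}
Q_{2}Q_{0}=zQ_1^2-4\big(Q_1 Q_1''-(Q_1')^2\big)=z\cdot z^{2}-4(z\cdot 0-1)=4+z^{3},
\end{gather*}
so that $Q_2=4+z^3$, in agreement with Table~\ref{tableQn}.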
Yablonskii~\cite{yablonskii} and~Vorob'ev~\cite{vorob} expressed the rational solutions of the second Painlev\'e equation,
\begin{gather*}P_{\rm II}(\alpha): \ w''(z)=2w(z)^3+zw(z)+\alpha,
\end{gather*}
with complex parameter $\alpha$, in terms of logarithmic derivatives of the Yablonskii--Vorob'ev polynomials, as summarized in the following theorem:
\begin{table}[t]
\centering
\caption{}\label{tableQn}
\vspace{1mm}
\begin{tabular}{r@{\,}c@{\,}l}
\hline
\multicolumn{3}{c}{Yablonskii--Vorob'ev polynomials\tsep{1pt}\bsep{1pt}}\\
\hline
$Q_2$ & $=$ & $4+z^3$\tsep{1pt}\\
$Q_3$ & $=$ & $-80+20z^3+z^6$\\
$Q_4$ & $=$ & $z\big(11200+60z^6+z^9\big)$\\
$Q_5$ & $=$ & $-6272000-3136000z^3+78400z^6+2800z^9+140z^{12}+z^{15}$\\
$Q_6$ & $=$ & $-38635520000+19317760000z^3+1448832000z^6-17248000z^9+
627200z^{12}$\\
& & ${}+18480z^{15}+280z^{18}+z^{21}$\\
$Q_7$ & $=$ & $z\big({-}3093932441600000-49723914240000z^6-828731904000z^9+13039488000
z^{12}$\\
& & ${} +62092800z^{15}+5174400z^{18}+75600z^{21}+504z^{24}+z^{27}\big)$\\
$Q_8$ & $=$ & $-991048439693312000000-743286329769984000000z^3$\\
& & ${} +37164316488499200000
z^6+1769729356595200000z^9+126696533483520000z^{12}$\\
& & ${} +407736096768000
z^{15}-6629855232000z^{18}+124309785600z^{21}+2018016000z^{24}$\\
& & ${} +32771200z^{27}+240240z^{30}+840z^{33}+z^{36}$\bsep{1pt}\\ \hline
\end{tabular}
\end{table}
\begin{theorem}
\label{thmYV}
$P_{\rm II}(\alpha)$ has a~rational solution iff $\alpha=n\in\mathbb{Z}$.
For $n\in\mathbb{Z}$ the rational solution is unique and~if $n\geq1$, then it is equal to
\begin{gather*}
w_n=\frac{Q_{n-1}'}{Q_{n-1}}-\frac{Q_n'}{Q_n}.
\end{gather*}
The other rational solutions are given by $w_0=0$ and~for $n\geq1$, $w_{-n}=-w_n$.
\end{theorem}
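For small $n$ the theorem can be checked with exact rational arithmetic: writing $u=Q'/Q$, one has $u''=Q'''/Q-3Q''Q'/Q^2+2(Q'/Q)^3$, so $w_n''$ is computable from the tabulated polynomials. A sketch (helper names ours, evaluated at an arbitrary rational point where no $Q_n$ vanishes):

```python
from fractions import Fraction

def ev(c, z):                      # evaluate a coefficient list at z (Horner)
    v = Fraction(0)
    for a in reversed(c):
        v = v * z + a
    return v

def d(c):                          # formal derivative
    return [i * a for i, a in enumerate(c)][1:] or [0]

def log_deriv(c, z):
    """u = Q'/Q and u'' at z, via u'' = Q'''/Q - 3Q''Q'/Q^2 + 2(Q'/Q)^3."""
    q, q1, q2, q3 = (ev(p, z) for p in (c, d(c), d(d(c)), d(d(d(c)))))
    u = q1 / q
    return u, q3 / q - 3 * q2 * q1 / q**2 + 2 * u**3

# Q_0..Q_3 from the text and Table 1
Q = [[1], [0, 1], [4, 0, 0, 1], [-80, 0, 0, 20, 0, 0, 1]]
z = Fraction(3, 7)                 # arbitrary rational test point
for n in (1, 2, 3):
    ua, ua2 = log_deriv(Q[n - 1], z)
    ub, ub2 = log_deriv(Q[n], z)
    w, w2 = ua - ub, ua2 - ub2     # w_n and w_n''
    assert w2 == 2 * w**3 + z * w + n   # w_n solves P_II(n)
```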
In~\cite{roffelsen} we proved the irrationality of the nonzero real roots of the Yablonskii--Vorob'ev polynomials; in this article we determine precisely the number of real roots of these polynomials.
Clarkson~\cite{clarksonoverview} conjectured that the number of real roots of $Q_n$ equals $\left[\frac{n+1}{2}\right]$, where $[x]$ denotes the integer part of $x$ for real numbers~$x$.
In Section~\ref{sectionreal} we prove this conjecture and~obtain the following theorem, where $Z_n$ is def\/ined as the set of real roots of~$Q_n$ for $n\in\mathbb{N}$.
\begin{theorem}
\label{thmnumberrealroots}
For every $n\in\mathbb{N}$, the number of real roots of $Q_n$ equals
\begin{gather}
\label{numberrealroots}
\left|Z_n\right|=\left[\frac{n+1}{2}\right].
\end{gather}
Furthermore for $n\geq2$,
\begin{gather}\label{minmaxroots}
\min(Z_{n-1})> \min(Z_{n+1}),\qquad
\max(Z_{n-1})< \max(Z_{n+1}).
\end{gather}
\end{theorem}
The argument is inductive and~an important ingredient is the fact that the real roots of $Q_{n-1}$ and~$Q_{n+1}$ interlace, which is proven by Clarkson~\cite{clarksonoverview}.
Kaneko and~Ochiai~\cite{kaneko} found a~direct formula for the lowest degree coef\/f\/icients of the Yablonskii--Vorob'ev polynomials $Q_n$ depending on~$n$.
In particular the sign of $Q_n(0)$ can be determined for $n\in\mathbb{N}$.
In Section~\ref{sectionpositivenegative} we use this to determine precisely the number of positive and~the number of negative real roots of $Q_n$, which yields the following theorem.
\begin{theorem}
\label{thmpositivenegative}
Let $n\in\mathbb{N}$, then the number of negative real roots of $Q_n$ is equal to
\begin{gather*}
\left|Z_n\cap(-\infty,0)\right|=\left[\frac{n+1}{3}\right].
\end{gather*}
The number of positive real roots of $Q_n$ is equal to
\begin{gather*}
\left|Z_n\cap(0,\infty)\right|=
\begin{cases}\left[\dfrac{n}{6}\right]&\text{if $n$ is even,}\\[1ex]
\left[\dfrac{n+3}{6}\right] & \text{if $n$ is odd.}
\end{cases}
\end{gather*}
\end{theorem}
As a~consequence, for every $n\in\mathbb{N}$ we can calculate, for the rational solution $w_n$, the number of positive real poles with residue $1$ and~with residue $-1$, and~likewise the number of negative real poles with residue $1$ and~with residue $-1$.
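Theorems~\ref{thmnumberrealroots} and~\ref{thmpositivenegative} can be spot-checked numerically from Table~\ref{tableQn} by counting sign changes on a midpoint grid; the window $[-15,15]$ and step below are ad hoc choices of ours, comfortably wider than the root region and finer than the root spacing at these degrees:

```python
# Coefficient lists for Q_2..Q_7 from Table 1 (index i = coefficient of z^i).
QS = {
    2: [4, 0, 0, 1],
    3: [-80, 0, 0, 20, 0, 0, 1],
    4: [0, 11200, 0, 0, 0, 0, 0, 60, 0, 0, 1],
    5: [-6272000, 0, 0, -3136000, 0, 0, 78400, 0, 0, 2800, 0, 0, 140, 0, 0, 1],
    6: [-38635520000, 0, 0, 19317760000, 0, 0, 1448832000, 0, 0, -17248000,
        0, 0, 627200, 0, 0, 18480, 0, 0, 280, 0, 0, 1],
    7: [0, -3093932441600000, 0, 0, 0, 0, 0, -49723914240000, 0, 0,
        -828731904000, 0, 0, 13039488000, 0, 0, 62092800, 0, 0, 5174400,
        0, 0, 75600, 0, 0, 504, 0, 0, 1],
}

def horner(c, x):
    v = 0.0
    for a in reversed(c):
        v = v * x + a
    return v

def root_counts(c, lo=-15.0, hi=15.0, m=20000):
    """(negative, zero, positive) real roots, counted as sign changes of the
    polynomial on a midpoint grid; the pair bracketing 0 is the root z = 0."""
    dx = (hi - lo) / m
    xs = [lo + (i + 0.5) * dx for i in range(m)]   # never hits 0 exactly
    vals = [horner(c, x) for x in xs]
    neg = zero = pos = 0
    for i in range(m - 1):
        if vals[i] * vals[i + 1] < 0:
            if xs[i + 1] < 0:
                neg += 1
            elif xs[i] > 0:
                pos += 1
            else:
                zero += 1
    return neg, zero, pos

for n, c in QS.items():
    neg, zero, pos = root_counts(c)
    assert neg + zero + pos == (n + 1) // 2                  # |Z_n| = [(n+1)/2]
    assert neg == (n + 1) // 3                               # negative roots
    assert pos == (n // 6 if n % 2 == 0 else (n + 3) // 6)   # positive roots
```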
\section{Number of real roots}
\label{sectionreal}
Let $P$ and~$Q$ be polynomials with no common real roots.
We say that the real roots of $P$ and~$Q$ interlace if and~only if in between any two real roots of $P$, $Q$ has a~real root and~in between any two real roots of~$Q$,~$P$ has a~real root.
Throughout this paper we use the convention $\mathbb{N}=\{0,1,2,\ldots\}$ and~def\/ine $\mathbb{N}^*:=\mathbb{N}\setminus\{0\}$.
\begin{theorem}
\label{simpleroots}
For every $n\in\mathbb{N}$, $Q_n$ has only simple roots.
Furthermore for $n\geq1$, $Q_{n-1}$ and~$Q_{n+1}$ have no common roots and~$Q_{n-1}$ and~$Q_n$ have no common roots.
\end{theorem}
\begin{proof}
See Fukutani, Okamoto and~Umemura~\cite{fukutani}.
\end{proof}
\begin{theorem}
\label{thminterlace}
For every $n\geq1$, the real roots of $Q_{n-1}$ and~$Q_{n+1}$ interlace.
\end{theorem}
\begin{proof}
See Clarkson~\cite{clarksonoverview}.
\end{proof}
Let $f,g:\mathbb{R}\rightarrow\mathbb{R}$ be continuous functions and~$x\in\mathbb{R}$.
We say that $f$ crosses $g$ positively at $x$ if and~only if $f(x)=g(x)$ and~there is a~$\delta>0$ such that $f(y)<g(y)$ for $x-\delta<y<x$ and~$f(y)>g(y)$ for $x<y<x+\delta$.
We say that $f$ crosses $g$ negatively at $x$ if and~only if $f(x)=g(x)$ and~there is a~$\delta>0$ such that $f(y)>g(y)$ for $x-\delta<y<x$ and~$f(y)<g(y)$ for $x<y<x+\delta$.
So $f$ crosses $g$ negatively at $x$ if and~only if $g$ crosses $f$ positively at $x$.
Let $m\in\mathbb{N}$ and~suppose that $f$ is $m$ times dif\/ferentiable, then we denote the $m$th derivative of $f$ by $f^{(m)}$ with convention $f^{(0)}=f$.
\begin{proposition}
\label{propanalytic}
Let $f,g:\mathbb{R}\rightarrow\mathbb{R}$ be analytic functions and~$x\in\mathbb{R}$.
Then $f$ crosses~$g$ positively at~$x$ if and~only if there is an $m\geq1$ such that $f^{(i)}(x)=g^{(i)}(x)$ for $0\leq i<m$ and~$f^{(m)}(x)>g^{(m)}(x)$.
Similarly $f$ crosses $g$ negatively at $x$ if and~only if there is an $m\geq1$ such that $f^{(i)}(x)=g^{(i)}(x)$ for $0\leq i<m$ and~$f^{(m)}(x)<g^{(m)}(x)$.
\end{proposition}
\begin{proof}
This is proven easily using Taylor's theorem.
\end{proof}
\begin{lemma}
For every $n\in\mathbb{N}^*$ we have
\begin{subequations}
\begin{gather}
Q_{n+1}'Q_{n-1}-Q_{n+1}Q_{n-1}' =(2n+1)Q_n^2,\label{eqfukutani1}\\
Q_{n+1}''Q_{n-1}-Q_{n+1}Q_{n-1}'' =2(2n+1)Q_nQ_n',\label{eqfukutani2}\\
Q_{n+1}'''Q_{n-1}-Q_{n+1}Q_{n-1}''' =2(2n+1)\left(Q_n'\right)^2+(2n+1)Q_nQ_n''.\label{eqfukutani3}
\end{gather}
\end{subequations}
\end{lemma}
\begin{proof}
See Fukutani, Okamoto and~Umemura~\cite{fukutani}.
\end{proof}
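The three identities of this lemma are easy to verify symbolically for small $n$ from the polynomials of Table~\ref{tableQn}; a sketch with integer coefficient lists (helper names ours):

```python
def pmul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return r

def padd(a, b, s=1):
    """a + s*b, trailing zeros stripped so equal polynomials compare equal."""
    n = max(len(a), len(b))
    r = [(a[i] if i < len(a) else 0) + s * (b[i] if i < len(b) else 0)
         for i in range(n)]
    while len(r) > 1 and r[-1] == 0:
        r.pop()
    return r

def pd(a):
    return [i * c for i, c in enumerate(a)][1:] or [0]

Q = [[1], [0, 1], [4, 0, 0, 1], [-80, 0, 0, 20, 0, 0, 1]]   # Q_0..Q_3
for n in (1, 2):
    p, q, r = Q[n + 1], Q[n], Q[n - 1]
    k = [2 * n + 1]
    p1, p2, p3 = pd(p), pd(pd(p)), pd(pd(pd(p)))
    r1, r2, r3 = pd(r), pd(pd(r)), pd(pd(pd(r)))
    # Q'_{n+1}Q_{n-1} - Q_{n+1}Q'_{n-1} = (2n+1) Q_n^2
    assert padd(pmul(p1, r), pmul(p, r1), -1) == pmul(k, pmul(q, q))
    # Q''_{n+1}Q_{n-1} - Q_{n+1}Q''_{n-1} = 2(2n+1) Q_n Q'_n
    assert padd(pmul(p2, r), pmul(p, r2), -1) == \
        pmul([2], pmul(k, pmul(q, pd(q))))
    # Q'''_{n+1}Q_{n-1} - Q_{n+1}Q'''_{n-1} = 2(2n+1)(Q'_n)^2 + (2n+1)Q_nQ''_n
    assert padd(pmul(p3, r), pmul(p, r3), -1) == padd(
        pmul([2], pmul(k, pmul(pd(q), pd(q)))), pmul(k, pmul(q, pd(pd(q)))))
```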
The following proposition contains some well-known properties of the Yablonskii--Vorob'ev polynomials, see for instance Clarkson and~Mansf\/ield~\cite{clarksonmansfield}.
\begin{proposition}
\label{proplimitbehaviour}
For every $n\in\mathbb{N}$, $Q_n$ is a~monic polynomial of degree $\tfrac{1}{2}n(n+1)$ with integer coefficients.
As a consequence, for $n\geq1$,
\begin{gather*}
\lim_{x\rightarrow \infty} Q_n(x)=\infty,\qquad
\lim_{x\rightarrow -\infty} Q_n(x)=
\begin{cases} -\infty & \text{if $n\equiv1,2\pmod{4}$,}\\
\infty & \text{if $n\equiv0,3\pmod{4}$.}
\end{cases}
\end{gather*}
\end{proposition}
By Proposition~\ref{proplimitbehaviour}, $Q_n$ has real coef\/f\/icients and~hence we can consider $Q_n$ as a~real-valued function def\/ined on the real line, that is, we consider
\begin{gather*}
Q_n: \ \mathbb{R}\rightarrow\mathbb{R}.
\end{gather*}
\begin{proposition}
\label{crossing}
Let $n\in\mathbb{N}^*$, if $x\in\mathbb{R}$ is such that $Q_{n+1}$ crosses $Q_{n-1}$ positively at $x$, then
\begin{gather*}
Q_{n+1}(x)=Q_{n-1}(x)>0.
\end{gather*}
Similarly if $x\in\mathbb{R}$ is such that $Q_{n+1}$ crosses $Q_{n-1}$ negatively at $x$, then
\begin{gather*}
Q_{n+1}(x)=Q_{n-1}(x)<0.
\end{gather*}
\end{proposition}
\begin{proof}
Let $n\in\mathbb{N}^*$.
Suppose $x\in\mathbb{R}$ is such that $Q_{n+1}$ crosses $Q_{n-1}$ positively at $x$.
If
\begin{gather*}
Q_{n+1}(x)=Q_{n-1}(x)=0,
\end{gather*}
then $Q_{n+1}$ and~$Q_{n-1}$ have a~common root, which contradicts Theorem~\ref{simpleroots}.
Let us assume
\begin{gather}
\label{negativecrossing}
Q_{n+1}(x)=Q_{n-1}(x)<0.
\end{gather}
Then by Proposition~\ref{propanalytic},
\begin{gather}
\label{derivativenegative}
Q_{n+1}'(x)-Q_{n-1}'(x)\geq0.
\end{gather}
Therefore, by equation~\eqref{eqfukutani1},
\begin{gather*}
0\leq (2n+1)Q_n(x)^2=Q_{n+1}'(x)Q_{n-1}(x)-Q_{n+1}(x)Q_{n-1}'(x)\\
\phantom{0\leq (2n+1)Q_n(x)^2}
=Q_{n+1}(x)\left(Q_{n+1}'(x)-Q_{n-1}'(x)\right)\leq 0,
\end{gather*}
where in the last inequality we used equation~\eqref{negativecrossing} and~equation~\eqref{derivativenegative}.
We conclude
\begin{gather*}
(2n+1)Q_n(x)^2=Q_{n+1}(x)\left(Q_{n+1}'(x)-Q_{n-1}'(x)\right)=0,
\end{gather*}
so $Q_n(x)=0$ and~$Q_{n+1}'(x)=Q_{n-1}'(x)$.
Therefore by equation~\eqref{eqfukutani2},
\begin{gather*}
Q_{n+1}(x)\left(Q_{n+1}''(x)-Q_{n-1}''(x)\right)=Q_{n+1}''(x)Q_{n-1}(x)-Q_{n+1}(x)Q_{n-1}''(x)\\
\phantom{Q_{n+1}(x)\left(Q_{n+1}''(x)-Q_{n-1}''(x)\right)}
=2(2n+1)Q_n(x)Q_n'(x)=0.
\end{gather*}
We conclude $Q_{n+1}''(x)=Q_{n-1}''(x)$.
Since $Q_n(x)=0$ and, by Theorem~\ref{simpleroots}, $Q_n$ has only simple roots, we have $Q_n'(x)\neq0$.
Therefore by~\eqref{eqfukutani3},
\begin{gather*}
Q_{n+1}(x)\left(Q_{n+1}'''(x)-Q_{n-1}'''(x)\right)=Q_{n+1}'''(x)Q_{n-1}(x)-Q_{n+1}(x)Q_{n-1}'''(x)\\
\phantom{Q_{n+1}(x)\left(Q_{n+1}'''(x)-Q_{n-1}'''(x)\right)}=2(2n+1)\left(Q_n'(x)\right)^2+(2n+1)Q_n(x)Q_n''(x)\\
\phantom{Q_{n+1}(x)\left(Q_{n+1}'''(x)-Q_{n-1}'''(x)\right)}=2(2n+1)\left(Q_n'(x)\right)^2>0.
\end{gather*}
Since $Q_{n+1}(x)<0$ we conclude $Q_{n+1}'''(x)<Q_{n-1}'''(x)$.
So $Q_{n+1}'(x)=Q_{n-1}'(x)$, $Q_{n+1}''(x)=Q_{n-1}''(x)$ but $Q_{n+1}'''(x)<Q_{n-1}'''(x)$.
Therefore by Proposition~\ref{propanalytic}, $Q_{n+1}$ does not cross $Q_{n-1}$ positively at $x$ and~we have obtained a~contradiction.
We conclude that
\begin{gather*}
Q_{n+1}(x)=Q_{n-1}(x)>0.
\end{gather*}
The second part of the proposition is proven similarly.
\end{proof}
We now prove Theorem~\ref{thmnumberrealroots}, using Theorem~\ref{thminterlace} and~Proposition~\ref{crossing}.
\begin{proof}[Proof of Theorem~\ref{thmnumberrealroots}.]
Observe that~\eqref{numberrealroots} is correct for $n=0,1,2,3,4$.
Furthermore it is easy to see that \eqref{minmaxroots} is true for $n=1,2,3$.
We proceed by induction, suppose $n\geq4$ and
\begin{gather*}
\left|Z_{n-1}\right|=\left[\frac{n}{2}\right].
\end{gather*}
Then $Q_{n-1}$ has at least $2$ real roots.
By Theorem~\ref{thminterlace} the real roots of $Q_{n-1}$ and~$Q_{n+1}$ interlace, hence $Q_{n+1}$ has a~real root.
Let us def\/ine
\begin{gather*}
z:=\min(Z_{n+1}), \qquad z_1:=\min(Z_{n-1}), \qquad z_2:=\min(Z_{n-1}\setminus\left\{z_1\right\}),
\end{gather*}
so $z$ is the smallest real root of $Q_{n+1}$ and~$z_1$ and~$z_2$ are the smallest and~second smallest real root of $Q_{n-1}$ respectively.
By Theorem~\ref{thminterlace} the real roots of $Q_{n-1}$ and~$Q_{n+1}$ interlace, hence either $z<z_1$ or $z_1<z<z_2$.
We prove that $z_1<z<z_2$ cannot be the case.
Suppose $z_1<z<z_2$ and~suppose $n\equiv0,1\pmod{4}$, then by Proposition~\ref{proplimitbehaviour},
\begin{gather*}
\lim_{x\rightarrow-\infty}Q_{n-1}(x)=\infty.
\end{gather*}
Hence $Q_{n-1}(x)>0$ for $x<z_1$.
Since $Q_{n-1}(z_1)=0$, this implies $Q_{n-1}'(z_1)\leq0$.
By Theorem~\ref{simpleroots}, $Q_{n-1}$ has only simple roots, hence $Q_{n-1}'(z_1)\neq0$, so $Q_{n-1}'(z_1)<0$.
Therefore by Proposition \ref{propanalytic}, $Q_{n-1}$ crosses $0$ negatively at $z_1$.
Hence $Q_{n-1}(x)<0$ for $z_1<x<z_2$, in particular
\begin{gather}
\label{negativez}
Q_{n-1}(z)<0.
\end{gather}
Since $n\equiv0,1\pmod{4}$, we have by Proposition~\ref{proplimitbehaviour},
\begin{gather*}
\lim_{x\rightarrow-\infty}Q_{n+1}(x)=-\infty.
\end{gather*}
Therefore $Q_{n+1}(x)<0$ for $x<z$, in particular
\begin{gather*}
Q_{n+1}(z_1)<0.
\end{gather*}
Def\/ine the polynomial $P:=Q_{n+1}-Q_{n-1}$, then
\begin{gather*}
P(z_1)=Q_{n+1}(z_1)-Q_{n-1}(z_1)=Q_{n+1}(z_1)-0<0,
\end{gather*}
and by equation~\eqref{negativez},
\begin{gather*}
P(z)=Q_{n+1}(z)-Q_{n-1}(z)=0-Q_{n-1}(z)>0.
\end{gather*}
So $P$ is a~polynomial with $P(z_1)<0$, $P(z)>0$ and~$z_1<z$.
Hence there is a~$z_1<x<z$ such that $P$ crosses $0$ positively at $x$, for instance
\begin{gather*}
x:=\inf\left\{t\in(z_1,z)\mid P(t)>0\right\},
\end{gather*}
has the desired properties.
Since $P=Q_{n+1}-Q_{n-1}$ crosses $0$ positively at $x$, $Q_{n+1}$ crosses $Q_{n-1}$ positively at $x$.
But $z_1<x<z_2$, hence
\begin{gather*}
Q_{n+1}(x)=Q_{n-1}(x)<0.
\end{gather*}
This contradicts Proposition~\ref{crossing}.
If $n\equiv2,3\pmod{4}$, then by a~similar argument, there is a $z_1<x<z$ such that $Q_{n+1}$ cros\-ses~$Q_{n-1}$ negatively at $x$ with
\begin{gather*}
Q_{n+1}(x)=Q_{n-1}(x)>0,
\end{gather*}
which again contradicts Proposition~\ref{crossing}.
We conclude that $z_1<z<z_2$ cannot be the case and~hence $z<z_1$, that is,
\begin{gather}
\label{minimum}
\min(Z_{n-1})>\min(Z_{n+1}).
\end{gather}
Let us def\/ine
\begin{gather*}
w:=\max(Z_{n+1}), \qquad w_1:=\max(Z_{n-1}), \qquad w_2:=\max(Z_{n-1}\setminus\left\{w_1\right\}),
\end{gather*}
so $w$ is the largest real root of $Q_{n+1}$ and~$w_1$ and~$w_2$ are the largest and~second largest real root of~$Q_{n-1}$ respectively.
Suppose $w_1>w$, then by a~similar argument as the above, there is a $w_2<x<w$ such that~$Q_{n+1}$ crosses $Q_{n-1}$ positively at $x$ with
\begin{gather*}
Q_{n+1}(x)=Q_{n-1}(x)<0.
\end{gather*}
This is in contradiction with Proposition~\ref{crossing}, so $w_1<w$, that is
\begin{gather}
\label{maximum}
\max(Z_{n-1})<\max(Z_{n+1}).
\end{gather}
Let $z_1<z_2<\cdots<z_k$ be the real roots of $Q_{n-1}$ with $k=\left[\frac{n}{2}\right]$ and~$z_1'<z_2'<\cdots <z_m'$ be the real roots of $Q_{n+1}$.
Then by equations~\eqref{minimum} and~\eqref{maximum}, $z_1'<z_1$, $z_k<z_m'$ and~since by Theorem~\ref{thminterlace} the real roots of $Q_{n-1}$ and~$Q_{n+1}$ interlace, we have
\begin{gather*}
z_1'<z_1<z_2'<z_2<z_3'<z_3<\cdots<z_{k-1}'<z_{k-1}<z_{k}'<z_k<z_{k+1}'=z_m'.
\end{gather*}
Hence $m=k+1$, that is,
\begin{gather*}
\left|Z_{n+1}\right|=m=k+1=\left[\frac{n}{2}\right]+1=\left[\frac{n+2}{2}\right].
\end{gather*}
The theorem follows by induction.
\end{proof}
\section{Number of positive and~negative real roots}
\label{sectionpositivenegative}
For a~polynomial $P$ we denote the set of real roots of $P$ by $Z_P$.
\begin{lemma}
\label{lemnumbersroots}
Let $P$ and~$Q$ be polynomials with real coefficients, each with a~positive leading coefficient and~only simple roots.
Assume that the real roots of $P$ and~$Q$ interlace.
Furthermore suppose both $P$ and~$Q$ have a~real root and
\begin{gather*}
\min(Z_P)>\min(Z_Q),\qquad
\max(Z_P)<\max(Z_Q).
\end{gather*}
Then we have the following relations between the number of negative and~positive real roots of~$P$ and~$Q$,
\begin{gather*}
\left|Z_Q\cap (-\infty,0)\right|=\left|Z_P\cap (-\infty,0)\right|+
\begin{cases} 1 & \text{if $P(0)=0$,}\\
0 & \text{if $Q(0)=0$,}\\
1 & \text{if $P(0)>0$ and~$Q(0)>0$,}\\
0 & \text{if $P(0)>0$ and~$Q(0)<0$,}\\
0 & \text{if $P(0)<0$ and~$Q(0)>0$,}\\
1 & \text{if $P(0)<0$ and~$Q(0)<0$,}
\end{cases}\\
\left|Z_Q\cap (0,\infty)\right|=\left|Z_P\cap (0,\infty)\right|+
\begin{cases} 1 & \text{if $P(0)=0$,}\\
0 & \text{if $Q(0)=0$,}\\
0 & \text{if $P(0)>0$ and~$Q(0)>0$,}\\
1 & \text{if $P(0)>0$ and~$Q(0)<0$,}\\
1 & \text{if $P(0)<0$ and~$Q(0)>0$,}\\
0 & \text{if $P(0)<0$ and~$Q(0)<0$.}
\end{cases}
\end{gather*}
\end{lemma}
\begin{proof}
Let $z_1>z_2>\cdots>z_n$ be the real roots of $P$ and~$z_1'>z_2'>\cdots>z_m'$ be the real roots of $Q$.
Observe that
\begin{gather*}
z_n=\min(Z_P)>\min(Z_Q)=z_m',\qquad
z_1=\max(Z_P)<\max(Z_Q)=z_1'.
\end{gather*}
Therefore, since the real roots of $P$ and~$Q$ interlace, we have
\begin{gather}
\label{rootsinterlace}
z_1'>z_1>z_2'>z_2>\cdots>z_n'>z_n>z_{n+1}'=z_m'.
\end{gather}
In particular $m=n+1$.
Suppose $P(0)=0$.
Then there is a unique $1\leq k\leq n$ such that $z_k=0$.
So equation~\eqref{rootsinterlace} implies
\begin{gather*}
z_1'>z_1>z_2'>z_2>\!\cdots\!>z_{k-1}'\!>z_{k-1}\!>z_k'>z_k=0>z_{k+1}'\!>z_{k+1}\!>\!\cdots\!>z_n'>z_n>z_{n+1}'.
\end{gather*}
Therefore
\begin{gather*}
\left|Z_Q\cap (-\infty,0)\right|=n+1-(k+1)+1=n-k+1=\left|Z_P\cap (-\infty,0)\right|+1,\\
\left|Z_Q\cap (0,\infty)\right|=k=\left|Z_P\cap (0,\infty)\right|+1.
\end{gather*}
The case $Q(0)=0$ is proven similarly.
Suppose $P(0)>0$ and~$Q(0)>0$.
Since $P$ has a~positive leading coef\/f\/icient and~is not constant, we have
\begin{gather*}
\lim_{x\rightarrow\infty}P(x)=\infty.
\end{gather*}
Therefore, since $z_1$ is the largest real root of $P$, $P(x)>0$ for $x>z_1$.
Since $P$ has only simple roots, $P$ crosses $0$ positively at $z_1$, so $P(x)<0$ for $z_2<x<z_1$.
Again since $P$ has only simple roots, $P$ crosses $0$ negatively at $z_2$, so $P(x)>0$ for $z_3<x<z_2$.
Inductively we see that when $1\leq i<n$ is even, $P(x)>0$ for $z_{i+1}<x<z_i$, and~when $1\leq i<n$ is odd, $P(x)<0$ for $z_{i+1}<x<z_i$.
Furthermore $P(x)>0$ for $x<z_n$ if $n$ is even and~$P(x)<0$ for $x<z_n$ if $n$ is odd.
Similarly we have, for $1\leq i<n+1$ even, $Q(x)>0$ for $z_{i+1}'<x<z_i'$, and~for $1\leq i<n+1$ odd, $Q(x)<0$ for $z_{i+1}'<x<z_i'$.
Furthermore $Q(x)<0$ for $x<z_{n+1}'$ if $n$ is even, and~$Q(x)>0$ for $x<z_{n+1}'$ if $n$ is odd.
There are three cases to consider: $z_1>0>z_n$, $z_1<0$ and~$z_n>0$.
We f\/irst assume $z_1>0>z_n$.
Then there is a unique $1\leq k\leq n$ such that $z_k>0>z_{k+1}$.
Since $z_k>0>z_{k+1}$ and~$P(0)>0$, we conclude that $k$ is even.
By equation~\eqref{rootsinterlace},
\begin{gather*}
z_k'>z_k>0>z_{k+1}>z_{k+2}'.
\end{gather*}
Since $k$ is even, $Q(x)>0$ for $z_{k+1}'<x<z_k'$ and~$Q(x)<0$ for $z_{k+2}'<x<z_{k+1}'$.
But $z_{k+2}'<0<z_k'$ and~$Q(0)>0$, hence $z_{k+1}'<0<z_k'$.
Therefore
\begin{gather*}
\left|Z_Q\cap (-\infty,0)\right|=n+1-(k+1)+1=\left|Z_P\cap (-\infty,0)\right|+1,\\
\left|Z_Q\cap (0,\infty)\right|=k=\left|Z_P\cap (0,\infty)\right|.
\end{gather*}
Let us assume $z_1<0$, then $P$ has no positive real roots.
Observe $Q(x)<0$ for $z_2'<x<z_1'$.
Suppose $z_1'>0$, then $z_2'>0$ since $Q(0)>0$.
Hence by equation~\eqref{rootsinterlace}, $z_1'>z_1>z_2'>0$, so $z_1>0$ and~we have a~contradiction.
So $z_1'<0$, hence all the real roots of $Q$ are negative and~we have
\begin{gather*}
\left|Z_Q\cap (-\infty,0)\right|=m=n+1=\left|Z_P\cap (-\infty,0)\right|+1,\\
\left|Z_Q\cap (0,\infty)\right|=0=\left|Z_P\cap (0,\infty)\right|.
\end{gather*}
Finally let us assume $z_n>0$, then $P$ has no negative real roots.
By equation~\eqref{rootsinterlace}, $z_n'>0$.
Since $P(0)>0$, $P(x)>0$ for $x<z_n$, therefore $n$ must be even.
Hence $Q(x)>0$ for $z_{n+1}'<x<z_n'$ and~$Q(x)<0$ for $x<z_{n+1}'$.
Since $z_n'>0$ and~$Q(0)>0$, this implies $z_{n+1}'<0<z_n'$.
Therefore
\begin{gather*}
\left|Z_Q\cap (-\infty,0)\right|=1=\left|Z_P\cap (-\infty,0)\right|+1,\qquad
\left|Z_Q\cap (0,\infty)\right|=n=\left|Z_P\cap (0,\infty)\right|.
\end{gather*}
This ends our discussion of the case $P(0)>0$ and~$Q(0)>0$.
The remaining cases are proven similarly.
\end{proof}
Taneda~\cite{taneda} proved that for $n\in\mathbb{N}$:
\begin{itemize}\itemsep=0pt
\item if $n\equiv1\pmod{3}$, then $\frac{Q_n}{z}\in\mathbb{Z}[z^3]$;
\item if $n\not\equiv1\pmod{3}$, then $Q_n\in\mathbb{Z}[z^3]$.
\end{itemize}
Hence $Q_n(0)=0$ if $n\equiv1\pmod{3}$.
By Theorem~\ref{simpleroots}, for every $n\geq1$, $Q_{n-1}$ and~$Q_n$ do not have a~common root.
Therefore $Q_n(0)=0$ if and~only if $n\equiv1\pmod{3}$.
Let us denote the coef\/f\/icient of the lowest degree term in $Q_n$ by $x_n$.
That is, we def\/ine $x_n:=Q_n(0)$ if $n\not\equiv1\pmod{3}$, and~$x_n:=Q_n'(0)$ if $n\equiv1\pmod{3}$.
In~\cite{roffelsen} we derived the following recursion for the $x_n$:
\begin{gather*}
x_0=1,\qquad x_1=1
\end{gather*}
and
\begin{gather}\label{EQ}
x_{n+1}x_{n-1}=
\begin{cases}(2n+1)x_n^2&\text{if $n\equiv0\pmod{3}$,}\\
4x_n^2 &\text{if $n\equiv1\pmod{3}$,}\\
-(2n+1)x_n^2 &\text{if $n\equiv2\pmod{3}$.}
\end{cases}
\end{gather}
We remark that the above recursion can be used to determine the $x_n$ explicitly; a~direct formula for~$x_n$ is given by Kaneko and~Ochiai~\cite{kaneko}.
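Iterating recursion~\eqref{EQ} with exact integers makes the resulting sign pattern easy to inspect; a quick sketch:

```python
def sign_sequence(N):
    """Signs of x_0..x_N from x_{n+1} x_{n-1} = c_n x_n^2 (exact integers)."""
    x = [1, 1]
    for n in range(1, N):
        c = (2 * n + 1, 4, -(2 * n + 1))[n % 3]
        x.append(c * x[n] ** 2 // x[n - 1])    # division is exact
    return [(v > 0) - (v < 0) for v in x]

# The pattern is periodic with period 12:
for n, s in enumerate(sign_sequence(48)):
    assert s == (-1 if n % 12 in (3, 5, 6, 7, 8, 10) else 1)
```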
\begin{lemma}
\label{lemsign}
For every $n\in\mathbb{N}$,
\begin{gather*}
\operatorname{sgn}(Q_n(0))=
\begin{cases}-1&\text{if $n\equiv3,5,6,8\pmod{12}$},\\
\hphantom{-}0 & \text{if $n\equiv1,4,7,10\pmod{12}$},\\
\hphantom{-}1 & \text{if $n\equiv0,2,9,11\pmod{12}$},
\end{cases}
\end{gather*}
where $\operatorname{sgn}$ denotes the sign function on $\mathbb{R}$.
\end{lemma}
\begin{proof}
By induction using recursion~\eqref{EQ}, we have
\begin{gather*}
\operatorname{sgn}(x_n)=
\begin{cases}-1&\text{if $n\equiv3,5,6,7,8,10\pmod{12}$},\\
\hphantom{-}1 & \text{if $n\equiv0,1,2,4,9,11\pmod{12}$}.
\end{cases}
\end{gather*}
The lemma follows from this and~the fact that $Q_n(0)=0$ if and~only if $n\equiv1\pmod{3}$.
\end{proof}
We apply Lemma~\ref{lemnumbersroots} to the Yablonskii--Vorob'ev polynomials to prove Theorem~\ref{thmpositivenegative}.
\begin{proof}[Proof of Theorem~\ref{thmpositivenegative}.]
Let $n\geq2$, then by Proposition~\ref{proplimitbehaviour}, Theorem~\ref{simpleroots} and~Theorem~\ref{thminterlace}, $P:=Q_{n-1}$ and~$Q:=Q_{n+1}$ are monic polynomials with only simple roots such that the real roots interlace.
Furthermore by Theorem \ref{thmnumberrealroots},
both $P$ and~$Q$ have a~real root and
\begin{gather*}
\min(Z_P)>\min(Z_Q),\qquad
\max(Z_P)<\max(Z_Q).
\end{gather*}
So we can apply Lemma~\ref{lemnumbersroots} together with Lemma~\ref{lemsign} and~obtain:
\begin{gather*}
\left|Z_{n+1}\cap (-\infty,0)\right|=\left|Z_{n-1}\cap (-\infty,0)\right|+
\begin{cases} 0 & \text{if $n\equiv0,3\pmod{6}$,}\\
1 & \text{if $n\equiv1,2,4,5\pmod{6}$,}
\end{cases}\\
\left|Z_{n+1}\cap (0,\infty)\right|=\left|Z_{n-1}\cap (0,\infty)\right|+
\begin{cases} 0 & \text{if $n\equiv0,1\pmod{3}$,}\\
1 & \text{if $n\equiv2\pmod{3}$.}
\end{cases}
\end{gather*}
Observe that $Z_0=\varnothing$, $Z_1=\left\{0\right\}$ and~$Z_2=\left\{-\sqrt[3]{4}\right\}$.
The theorem is obtained by applying the above recursive formulas inductively.
\end{proof}
Let us discuss an example.
By Theorem~\ref{thmYV}, the unique rational solution of $P_{\rm II}(\alpha)$ for the parameter value $\alpha:=21$ is given by
\begin{gather*}
w_{21}=\frac{Q_{20}'}{Q_{20}}-\frac{Q_{21}'}{Q_{21}}.
\end{gather*}
By Theorem~\ref{simpleroots}, $Q_{20}$ and~$Q_{21}$ do not have common roots and~the roots of $Q_{20}$ and~$Q_{21}$ are simple.
Hence the poles of $w_{21}$ are precisely the roots of $Q_{20}$ and~$Q_{21}$, the roots of $Q_{20}$ are poles of $w_{21}$ with residue $1$ and~the roots of $Q_{21}$ are poles of $w_{21}$ with residue $-1$.
By Theorem~\ref{thmnumberrealroots}, $Q_{20}$ has $10$ real roots and~by Theorem~\ref{thmpositivenegative}, $7$ of them are negative and~$3$ of them are positive.
Similarly $Q_{21}$ has $11$ real roots, $7$ of them are negative and~$4$ of them are positive.
Therefore $w_{21}$ has $21$ real poles, $10$ with residue~$1$ and~$11$ with residue~$-1$.
More precisely~$w_{21}$ has $7$ positive real poles, $3$ with residue~$1$ and~$4$ with residue~$-1$ and~$w_{21}$ has~$14$ negative real poles, $7$ with residue~$1$ and~$7$ with residue~$-1$.
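The same bookkeeping works for any $n\geq1$: as in the example above, the poles of $w_n$ with residue $1$ are the roots of $Q_{n-1}$ and~those with residue $-1$ are the roots of $Q_n$, so the two theorems give all real-pole counts. A sketch (helper names ours) reproducing the numbers above:

```python
def real_root_counts(n):
    """(negative, zero, positive) real-root counts of Q_n from the two
    counting theorems; the middle entry is 1 exactly when z = 0 is a root."""
    total = (n + 1) // 2
    neg = (n + 1) // 3
    pos = n // 6 if n % 2 == 0 else (n + 3) // 6
    return neg, total - neg - pos, pos

def real_pole_counts(n):
    """Real poles of w_n (n >= 1): residue +1 at roots of Q_{n-1},
    residue -1 at roots of Q_n."""
    return {+1: sum(real_root_counts(n - 1)), -1: sum(real_root_counts(n))}

assert real_root_counts(20) == (7, 0, 3)
assert real_root_counts(21) == (7, 0, 4)
assert real_pole_counts(21) == {+1: 10, -1: 11}
```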
\pdfbookmark[1]{References}{ref}
Q: How to prove that the difference between two consecutive squares is odd? Prove that the difference between the squares of two consecutive numbers is always an odd number
For two consecutive numbers $p, q \in R$ where $q=p+1$, it follows that
$p^2 - q^2 = (p-q)(p+q)$. Moreover, $$p^2 - q^2 = (p-(p+1))(p+(p+1))$$ which gives $$p^2 - q^2 = -1(2p+1)$$
Hence the number is in fact odd.
My question is whether I should consider the case where $q=p-1$ too and is there a way to generalize it.
A: First of all, you probably meant $p,q \in \textbf{Z}$. Now, if the smallest of two numbers is $n$, then you are interested in $(n+1)^2-n^2=n^2+2n+1-n^2=2n+1$ - which is odd.
Note: to avoid considering all cases, as you asked, you can say (without loss of generality) that $n$ is the smallest of the two numbers, which gives you that $n+1$ is the second number.
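Both orientations ($q=p+1$ and $q=p-1$) can also be checked mechanically; a throwaway sketch:

```python
# Check that consecutive squares differ by an odd number, in both orders.
for p in range(-1000, 1000):
    assert (p + 1) ** 2 - p ** 2 == 2 * p + 1     # odd by inspection
    assert ((p + 1) ** 2 - p ** 2) % 2 == 1
    assert ((p - 1) ** 2 - p ** 2) % 2 == 1       # = -(2p - 1), still odd
```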
Amalia of Neuenahr (6 April 1539 – 10 April 1602) was the daughter of Gumprecht of Neuenahr and Cordula of Holstein Schauenburg.
Her first husband was Hendrik van Brederode, who played an important part in the events leading up to the Eighty Years' War. After he became one of the leaders in the resistance against the Spanish Inquisition and Spanish rule in the Netherlands, she helped him collect funds. After his death in 1568, she married Frederick III, Elector Palatine of the Rhine in 1569. It was in the same year that Emilia, the second daughter of William the Silent and his second wife Anna of Saxony, was named after her. This is because she was in charge of Anna's household at the time. Frederick died in 1576.
From 1579 until 1587 she was in charge of Vianen, which she inherited from her first husband. In 1589 she inherited Limburg from Adolf, her half-brother. In 1590 she was given the rights of use of Alpen, Helpenstein, Lennep and Erbvogtei of Köln by her half-sister, Magdalena. Alpen was occupied by the Dutch Republic in 1597 and the following year by Spanish forces.
Dutch people of the Eighty Years' War (United Provinces)
Electresses of the Palatinate
1539 births
1602 deaths
Burials at the Church of the Holy Spirit, Heidelberg
\section{Introduction} \label{secintroduction}
Hadron electromagnetic polarizabilities encode important information about the distribution of charge and current densities
inside the hadrons. Experimentally these parameters are extracted using cross-sections measured in Compton scattering reactions with theoretical input from effective models and dispersion relations. Lattice QCD can provide
first-principles-based results for static polarizabilities directly as predicted by quark-gluon dynamics.
This input is particularly important for unstable
hadrons, where experimental and theoretical uncertainties in the effective models are large.
At the lowest order the effects of an electromagnetic field on hadrons can be parameterized by the effective Hamiltonian:
\begin{equation}
\mathcal{H}_{em} = -\vec{p}\cdot\vec{\mathcal{E}} -\vec{\mu}\cdot\vec{B} -\frac{1}{2}\left(\alpha \mathcal{E}^2 + \beta B^2\right)+\cdots,
\label{eq:1}
\end{equation}
where $\vec{p}$ and $\vec{\mu}$ are the static electric and magnetic dipole moments, respectively,
and $\alpha$ and $\beta$ are the static electric and magnetic polarizabilities.
Due to time reversal symmetry of the strong interaction, the static dipole moment,
$\vec{p}$, vanishes. In the presence of a constant electric field only,
the leading contribution to the electromagnetic interaction comes from the electric
polarizability term at $\mathcal{O}(\mathcal{E}^2)$.
Lattice QCD calculations of electromagnetic polarizabilities are challenging since the electromagnetic effects are small
compared to the natural hadronic scale. A good understanding of all systematic effects is required to ensure that the
parameters extracted from these calculations are reliable. To that end, our first goal was to validate our method by
focusing on the neutron electric polarizability. For neutral hadrons lattice QCD calculations are more reliable than for charged hadrons since
neutral particles are not accelerated by the external field. On the experimental side results for the neutron are reasonably
precise and effective model predictions are in good agreement with the experimental data. This makes
the lattice QCD extraction of the electric polarizability of the neutron a good benchmark study.
In a previous study we computed the electric polarizability of the neutron, neutral pion, and neutral kaon for two
different pion masses (306 and 227~MeV) with a fixed box size of $L \simeq 3 \,{\rm fm}$~\cite{Lujan:2014kia}. The results
we found were a bit puzzling: the pion polarizability exhibited the same negative trend observed in other studies
both with dynamical~\cite{Detmold:2009dx} and quenched ensembles~\cite{Alexandru:2010dx} and the neutron polarizability
was in disagreement with predictions from chiral perturbation
theory~\cite{Lensky:2009uv,McGovern:2012ew,Griesshammer:2012we,Griesshammer:2015ahu}.
We speculated that corrections due to electrically neutral sea quarks or finite-volume effects
could explain these discrepancies. A calculation of the polarizability, with the inclusion of the charged sea quarks,
was done on the 306~MeV ensemble~\cite{Freeman:2014kka,Freeman:2014gsa}. It was found that charging the sea
quarks does not change the polarizability significantly, which is aligned with expectations from chiral perturbation theory.
Thus, the discrepancy between our lattice calculation of the neutron polarizability and the calculation from
$\chi$PT remained. In this paper we study the finite-volume corrections for this quantity.
The paper is organized as follows: In Section~\ref{secmethodology} we present the method used to extract
the polarizability from the lattice for mesons and baryons. This includes a discussion of our fitting procedure.
In Section~\ref{sec:voldependence} we present our results for the polarizability of the neutron, pion, and kaon
and discuss the finite-volume corrections. In Section~\ref{sec:discussion} we discuss the quark mass dependence for
the infinite volume extrapolated polarizability and compare our results with predictions from $\chi$PT.
Lastly, in Section~\ref{sec:conclusion} we summarize our results and outline
our plans for future investigations.
\bigskip
\section{ Methodology} \label{secmethodology}
\subsection{Background field method} \label{seceshift}
In lattice QCD polarizabilities can be computed using the background field method~\cite{Martinelli:1982cb}:
the energy shift induced by a constant electric field is directly related to the static electric polarizability.
A static electromagnetic field can be introduced by coupling the vector potential ($A_{\mu}$) to
the covariant derivative of the Euclidean QCD Lagrangian,
\begin{equation}
D_{\mu} = \partial_{\mu} -igG_{\mu} -iqA_{\mu},
\end{equation}
where $G_{\mu}$ is the gluon field. On the lattice this is implemented by multiplying the gauge links
by a U(1) phase factor, {\it{i.e.}},
\begin{equation}
U_{\mu} \rightarrow e^{-iqaA_{\mu}} U_{\mu}.
\end{equation}
For a constant electric field, one choice for the vector potential is $A_x = \mathcal{E} t$, where we have used an
imaginary value for the electric field leading to a U(1) multiplicative factor that keeps the links unitary.
When using an imaginary value of the field, the energy shift due to the polarizability acquires an additional
negative sign so that a positive energy shift corresponds to a positive value of the polarizability~\cite{Alexandru:2008sj}.
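Schematically, for a quark of charge $q$ the link modification amounts to multiplying every $x$-link on time slice $t$ by the phase $e^{-i\eta t}$ with $\eta=a^2q\mathcal{E}$; a toy sketch (links reduced to plain complex numbers rather than actual SU(3) matrices):

```python
import cmath

def charged_links(links_x, eta):
    """Multiply the x-direction links on time slice t by the U(1) phase
    exp(-i*eta*t), eta = a^2 q E, implementing A_x = E t.  For SU(3) links
    the same scalar phase multiplies the whole matrix, so unitarity is kept."""
    return [[cmath.exp(-1j * eta * t) * u for u in row]
            for t, row in enumerate(links_x)]

# Two time slices, one x-link each; eta = 1e-4 as in the text.
out = charged_links([[1 + 0j], [1 + 0j]], 1e-4)
assert out[0][0] == 1 + 0j              # t = 0: no phase
assert abs(abs(out[1][0]) - 1) < 1e-12  # the phase leaves links unitary
```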
In this study we use very weak electric fields to extract the polarizability,
so that the energy shift is due to polarizabilities rather than higher order terms in the effective
Hamiltonian in Eq.~\ref{eq:1}.
It is possible of course to extract the polarizability using stronger fields, but this
would require the evaluation of the energy shifts for different electric field strengths to reliably separate
the higher order corrections.
We use Dirichlet boundary conditions (DBC) for the valence quarks in both the time direction and the direction of
the electric field. This choice of boundary conditions allows us to choose an arbitrarily small value of the electric field.
In our analysis we use a value of
\begin{equation}
\eta \equiv a^2 q_d\mathcal{E} = 10^{-4},
\end{equation}
where $a$ is the lattice spacing and $q_d$ is the magnitude of the electric charge for the down quark.
One bound on $\eta$ is determined by looking at a $\pm{\cal E}$-symmetrized hadron correlator (see below) at various time slices
and determining the range of $\eta$ values which exhibit quadratic scaling~\cite{Lujan:2014kia}.
A more stringent constraint on $\eta$ appears when we take into account the
effect of the sea-quark charge via perturbative reweighting~\cite{Freeman:2014kka}. The latter constraint forces us
to use this low $\eta$ value.
In physical terms, this value of $\eta$ corresponds to an electric field that an electron would generate at a
distance of $0.5\,{\rm fm}$. The value is well within the quadratic scaling region. Note that the value is about $50$
times lower than the lowest quantized value $2\pi/(N_x \times N_t)$ corresponding to one unit of electric flux;
thus the induced energy shift is thousands of times smaller.
In our study, the energy shift for the neutron is on the order of keV out of 938 MeV.
Due to the boundary conditions, the quark and hadron correlators
close to the boundaries behave differently than in the bulk.
These effects are enhanced when the source is placed close to the
walls (see for example the discussion of correlators with
sources close to discontinuities in non-quantized
background fields~\cite{Detmold:2009dx,Davoudi:2015cba}). To minimize these effects, we placed
the source for our quark correlators at maximal distance from the
spatial walls and six lattice units from the temporal wall. In any case, the hadron propagator
is still affected by the walls, since a particle in the lowest
momentum state has a non-zero probability to be within
the distortion region created by the hard walls. Since this
region is expected to have finite range, the corrections will
be proportional to the probability of being in this region, which
vanishes as $1/L$ as we increase the distance $L$ between the walls
(recall that we only have hard walls in one spatial direction).
These corrections appear as a finite-volume correction.
To determine the energy shift $\delta E$ on the lattice we calculate the zero-field ($G_0$), plus-field ($G_{+\mathcal{E}}$), and minus-field ($G_{-\mathcal{E}}$) two-point correlation functions for the interpolating operators of interest. The combination of the plus and minus field correlators allows us to remove any $\mathcal{O}(\mathcal{E})$ effects, which are statistical artifacts, when the sea quarks are not charged. For neutral particles in a constant electric field the correlation functions still retain their single exponential decay in the limit $t \rightarrow \infty$,
\begin{equation}
\langle G_{\mathcal{E}}(t) \rangle \underset{t\to\infty}{\approx} A(\mathcal{E})e^{-E(\mathcal{E})t},
\label{eqn::corr}
\end{equation}
where $E(\mathcal{E})$ has the perturbative expansion in the electric field given by
\begin{equation}\label{eqn.energy}
E(\mathcal{E}) = m + \frac{1}{2}\alpha \mathcal{E}^2 + ... ~.
\end{equation}
By studying the variations of the correlation functions with and without an electric field one can isolate the energy shift to obtain $\alpha$.
For spin-1/2 hadrons, the energy shift in a constant electric field receives a contribution
from the magnetic moment of the hadron at order ${\cal O}({\cal E}^2)$. Thus the static polarizability $\alpha$ defined by Eq.~\ref{eqn.energy}
is not identical to the Compton polarizability $\bar{\alpha}$ that enters the effective Lagrangian for spin-1/2
systems~\cite{Lvov:1993fp}. The relation between these polarizabilities can be computed~\cite{Detmold:2009dx,Lujan:2014kia}.
For these systems the energy expansion reads,
\begin{equation}
E(\mathcal{E}) = m + \frac{1}{2}\mathcal{E}^2\left(\bar{\alpha}- \frac{\mu^2}{m}\right) + ...~,
\label{eqn::Eneutron}
\end{equation}
where $\bar{\alpha}$ is the Compton polarizability that we wish to compute. To account for the magnetic moment we use the same procedure as we did in a previous study~\cite{Lujan:2014kia}.
Since we use Dirichlet boundary conditions, the lowest energy state corresponds to a hadron moving with a momentum roughly
equal to $\pi/L$, which vanishes in the limit $L\rightarrow \infty$. When we extract the energy shift from the
hadron we need to account for the induced momentum because the energy shift ($\delta E$) is not equal to the mass
shift ($\delta m$). The two are related via the dispersion relation $E = \sqrt{m^2 + p^2}$ by
\begin{equation}\label{eqn:dEtodm}
\delta m = \delta E \frac{E}{m},
\end{equation}
where $m$ is the zero-momentum mass of the particle which we calculate using periodic boundary conditions (PBC). The mass shift
$\delta m$ is then used in
Eq.~\ref{eqn.energy} or Eq.~\ref{eqn::Eneutron}, to extract the polarizability.
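As a concrete illustration of this chain (energy shift, to mass shift via the dispersion relation, to polarizability), the sketch below uses invented lattice-unit numbers, not values from our tables; for the neutron one would additionally account for the magnetic-moment term of Eq.~\ref{eqn::Eneutron}, and the conversion to physical units of $10^{-4}\,{\rm fm}^3$ is omitted.

```python
def mass_shift(delta_E, E, m):
    """Dispersion relation E = sqrt(m^2 + p^2) gives dm = dE * E / m."""
    return delta_E * E / m

def static_polarizability(delta_m, field):
    """Invert dm = (1/2) * alpha * field^2 (neutral spin-0 hadron)."""
    return 2.0 * delta_m / field**2

# Invented lattice-unit numbers for illustration only
a_dE = 4.3e-7          # measured energy shift a*dE in the Dirichlet box
a_E  = 0.25            # hadron energy in the Dirichlet box
a_m  = 0.19            # mass from periodic boundary conditions
eta  = 1.0e-4          # a^2 * q_d * field, as used in this work
q_d  = 1.0 / 3.0       # down-quark charge magnitude in units of e
field = eta / q_d      # a^2 * field, in units of e

a_dm = mass_shift(a_dE, a_E, a_m)
alpha_lat = static_polarizability(a_dm, field)   # polarizability in lattice units
```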
\subsection{Fitting Method}
\label{fitmethod}
Since the correlation functions $G_0, G_{+{\cal E}}$, and $G_{-{\cal E}}$ are dominated by a single exponential at large times,
we can use standard spectroscopy techniques to measure the shift in a hadron's energy.
The only caveat is that the shift is very small at the field strength used in this study: smaller than the statistical
errors if the correlators were fitted separately.
To overcome the difficulty, we take advantage of the fact that the three correlators are highly correlated since they are computed on the same set of gauge configurations.
To do this we construct the combined residue vector from the individual residue vectors in each sector,
\begin{eqnarray}
\mathbf{v}_{i} &\equiv& f(t_i) - \langle G_{0}(t_i)\rangle, \nonumber\\
\mathbf{v}_{N+i} &\equiv& \bar{f}(t_{i}) - \langle G_{+\mathcal{E}}(t_{i})\rangle, \\
\mathbf{v}_{2N+i} &\equiv& \bar{f}(t_{i}) - \langle G_{-\mathcal{E}}(t_{i})\rangle, \nonumber
\end{eqnarray}
where $i=1,\cdots,N$ labels the time slices in the fit window, $f(t) = A~e^{-E t}$ the fitting function in the
absence of the field,
and $\bar{f}(t) =(A+\delta A)~e^{-(E+\delta E)t}$ the fitting function in the
presence of the field.
We minimize the $\chi^2$ function,
\begin{equation}
\chi^2= \mathbf{v}^T \mathbf{C}^{-1} \mathbf{v},
\end{equation}
for four parameters ($A$, $E$, $\delta A$, $\delta E$) in the usual fashion, where $\mathbf{C}$ is the $3N\times 3N$ jackknifed covariance matrix which takes into account the correlations both in time and in the electric field.
Specifically, the matrix has a $3\times 3$ block structure
\[ \mathbf{C} = \left( \begin{array}{ccc}
C_{0 0} ~& C_{0 +}~ & C_{0 -} \\
C_{+ 0} ~& C_{+ +}~ & C_{+ -} \\
C_{- 0} ~& C_{- +} ~& C_{- -} \end{array} \right),\]
where $0,+,-$ represent $G_0, G_{+\mathcal{E}}$, and $G_{-\mathcal{E}}$ respectively. Each block is a $N\times N$ matrix.
The correlations are encoded in the off-diagonal blocks.
Note that the symmetrization in the electric field is done implicitly in this procedure,
since $\bar{f}$ is the same for $G_{+\mathcal{E}}$, and $G_{-\mathcal{E}}$.
The statistical errors on the parameters are derived from the Hessian of the $\chi^2$.
This method is used to extract all parameters presented in this work.
To illustrate the importance of accounting for these correlations, we consider the energy shift $\delta E$ for the neutron
for one of the ensembles used in this work. Using the full covariance matrix we find
$a\,\delta E = (4.3\pm1.2) \times 10^{-7}$. If we neglect the correlations, which is equivalent to using only
the diagonal blocks of the covariance matrix, we find $a\, \delta E = (8.15 \pm 150000) \times 10^{-7}$,
which has huge errors.
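A minimal numerical sketch of this correlated analysis, on synthetic data with invented parameters: we generate three highly correlated correlators, estimate the $3N\times 3N$ covariance of their means with a jackknife, and evaluate the combined $\chi^2$. A real analysis would minimize this $\chi^2$ over the four parameters ($A$, $E$, $\delta A$, $\delta E$).

```python
import numpy as np

rng = np.random.default_rng(1)
N, ncfg = 10, 200
t = np.arange(8, 8 + N)
A_true, E_true, dE_true = 1.0, 0.30, 5.0e-4   # invented "true" parameters

# Synthetic correlators: shared fluctuations make the three sectors highly correlated
common = rng.normal(0.0, 0.02, (ncfg, N))
def make_corr(shift):
    signal = A_true * np.exp(-(E_true + shift) * t)
    return signal * (1.0 + common + rng.normal(0.0, 0.002, (ncfg, N)))

G0, Gp, Gm = make_corr(0.0), make_corr(dE_true), make_corr(dE_true)
data = np.concatenate([G0, Gp, Gm], axis=1)   # shape (ncfg, 3N)

def jackknife_cov(d):
    """Jackknife covariance of the mean from leave-one-out averages."""
    n = d.shape[0]
    jk = (d.sum(axis=0) - d) / (n - 1)        # leave-one-out means, shape (n, 3N)
    c = jk - jk.mean(axis=0)
    return (n - 1) / n * (c.T @ c)

mean = data.mean(axis=0)
Cinv = np.linalg.inv(jackknife_cov(data))

def chi2(params):
    A, E, dA, dE = params
    f = A * np.exp(-E * t)                    # zero-field model
    fbar = (A + dA) * np.exp(-(E + dE) * t)   # common model for the +/- field sectors
    v = np.concatenate([f, fbar, fbar]) - mean
    return v @ Cinv @ v
```

Note that, as in the text, the symmetrization in the field is implicit: the same model function serves both field sectors, and the off-diagonal covariance blocks carry the correlations.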
\subsection{Calculation details}
\label{sec:ensemble}
We calculate the electric polarizability for the neutron, neutral pion, and neutral kaon on eight dynamically
generated ensembles using 2-flavor nHYP-clover fermions~\cite{Hasenfratz:2007rf}. For the neutral pion polarizability we compute
only the connected contribution to the pion correlation function, as was also
done in~\cite{Lujan:2014kia}. We used two quark masses,
corresponding to pion masses of $227(2)\,{\rm MeV}$ and $306(1)\,{\rm MeV}$.
For each mass we performed simulations on four different volumes, to study finite volume effects.
To save time, we varied the dimension of the lattice only along the direction of the electric field (the $x$-direction).
We expect that the finite-volume corrections vanish exponentially in the transverse directions, and that our
lattice is large enough for these corrections to be negligible at the current precision level. On the other
hand, the corrections associated with the direction parallel with the electric field are expected to vanish
only as a power law in $1/L$. We will show that our results agree with these expectations.
Details of the ensembles are given in Table~\ref{tab:ensembles}. The determination of both the lattice spacing
and $\kappa_s$, the hopping parameter for the strange quark that is required to compute the kaon polarizability,
is discussed in detail in our previous study~\cite{Lujan:2014kia}. We use the same values here:
$\kappa_s=0.1266$ for ensembles EN1 to EN4; $\kappa_s=0.1255$ for ensembles EN5 to EN8.
\begin{table}[t]
\begin{tabular*}{0.9\columnwidth}{@{\extracolsep{\stretch{1}}}*{6}c@{}}
\toprule
Label &Lattice& $a$ (fm) & $\kappa$& $N_{\text{c}}$& $N_{s}$\\
\midrule
$\text{EN1}$&$16\times16^2 \times 32$ & 0.1245& 0.12820 & 230&11\\
$\text{EN2}$&$24\times 24^2 \times 48$ & 0.1245& 0.12820 & 300& 25\\
$\text{EN3}$&$30 \times 24^2 \times 48$& 0.1245& 0.12820 & 300& 29\\
$\text{EN4}$&$48 \times 24^2 \times 48$& 0.1245& 0.12820 & 270& 37\\
\cmidrule[0pt]{1-6}
$\text{EN5}$&$16\times16^2 \times 32$& 0.1215& 0.12838 & 230 & 16\\
$\text{EN6}$&$24\times 24^2 \times 64$& 0.1215& 0.12838 & 450& 23 \\
$\text{EN7}$&$28\times 24^2 \times 64$& 0.1215& 0.12838 & 670& 33\\
$\text{EN8}$&$32\times 24^2 \times 64$& 0.1215& 0.12838 & 500& 37\\
\bottomrule
\end{tabular*}
\caption{Details of the lattice ensembles used in this work. $N_c$ and $N_s$ label the number of configurations and number of sources on each configuration, respectively. The top four ensembles correspond to $m_\pi=306(1)\,{\rm MeV}$ and the bottom four $m_\pi=227(2)\,{\rm MeV}$.}
\label{tab:ensembles}
\end{table}
To reduce the statistical uncertainties we computed quark propagators at multiple point sources for
each configuration. Since the presence of the Dirichlet walls breaks translational symmetry in the
$x$ and $t$ directions, the point sources have to be picked carefully; they were displaced with
respect to each other using translations in the $y$ and $z$ directions, which have periodic boundary conditions.
\begin{table}[b]
\begin{tabular*}{0.8\columnwidth}{@{\extracolsep{\stretch{1}}}cccc@{}}
\toprule
Ensemble & Pion&Kaon&Neutron\\
\midrule
$\text{EN1}$ & [10, 19]& [10, 19] & [8, 21]\\
$\text{EN2}$ & [14, 30]& [14, 30] & [8, 21]\\
$\text{EN3}$ & [13, 30]& [13, 30] & [9, 21]\\
$\text{EN4}$ & [14, 30]& [14, 30] & [8, 21]\\
\cmidrule[0pt]{2-4}
$\text{EN5}$ & [10, 19]& [10, 19] & [9, 21]\\
$\text{EN6}$ & [15, 36]& [15, 37] & [9, 21]\\
$\text{EN7}$ & [15, 37]& [15, 37] & [10, 21]\\
$\text{EN8}$ & [15, 37]& [15, 36] & [9, 21]\\
\bottomrule
\end{tabular*}
\caption{Fit ranges used in extracting the energy shifts for the pion, kaon, and neutron.}
\label{tab:fitwindow}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\textwidth]{neutron_vol_300.pdf}
\includegraphics[width=0.48\textwidth]{neutron_vol_200.pdf}
\caption{Infinite volume extrapolation for neutron polarizability. The left panel shows our results for the
$m_{\pi}=306$ MeV ensembles and the right panel for the $m_{\pi}=227$ MeV ensembles. On each plot we overlay
the infinite volume extrapolations using a linear (solid line) or quadratic (dashed line) fit.}
\label{plot:nvolplot1}
\end{figure*}
To determine the appropriate time window to fit the correlation functions, we varied the start
time $t_\text{min}$, keeping the maximum fit time fixed. For each case we performed a fit and
extracted the hadron's energy shift, $\delta E$, and the associated $\chi^2$/dof.
Following the procedure discussed in~\cite{Lujan:2014kia}, we choose the largest
fit window that produces a good quality fit. The fit windows used for each of the hadrons
studied in this paper are listed in Table~\ref{tab:fitwindow}.
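The window-selection logic can be expressed as a small helper: scan the candidate start times at fixed $t_{\rm max}$ and keep the largest window whose combined fit is still acceptable. The numeric threshold below is an illustrative stand-in for the "good quality fit" criterion, not a value prescribed in the text.

```python
def pick_window(chi2_dof_by_tmin, t_max, threshold=1.5):
    """Largest fit window [t_min, t_max] whose combined fit is acceptable.
    chi2_dof_by_tmin maps candidate t_min -> chi^2/dof of the combined fit;
    the threshold is an illustrative stand-in for the goodness-of-fit criterion."""
    good = [t for t, c in chi2_dof_by_tmin.items() if c < threshold]
    return (min(good), t_max) if good else None

# Hypothetical scan: fits starting too early are contaminated by excited states
scan = {6: 3.2, 7: 2.1, 8: 1.1, 9: 1.0, 10: 1.05}
window = pick_window(scan, 21)   # -> (8, 21)
```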
The computed values for the polarizability of the three hadrons are presented in Table~\ref{tab:fitdata}.
In the same table we include the energy shifts due to the field, the energies measured in the absence of the
field with Dirichlet boundary conditions, and the masses as extracted using periodic boundary conditions.
Since we use dozens of point sources for each ensemble, and for each point source we
need to compute the quark propagator for five different couplings to the background electric
field, we have to compute hundreds of inversions for each configuration. To compute these
efficiently, we use our implementation of a multi-GPU Dslash operator~\cite{Alexandru:2011sc}
and an efficient multi-mass inverter~\cite{Alexandru:2011ee}.
\section{Volume Dependence Analysis} \label{sec:voldependence}
Finite volume corrections have been estimated using $\chi$PT. For periodic boundary conditions these effects were calculated for electric polarizabilities~\cite{Detmold:2006vu} and magnetic polarizabilities~\cite{Hall:2013dva}. At $m_{\pi}$ around 250 MeV and $L=3\,{\rm fm}$ the correction to the neutron polarizability was estimated to be about 7\%~\cite{Detmold:2006vu}. For the Dirichlet boundary conditions used in this work, no direct $\chi$PT predictions are available; the only estimate comes from sigma model studies of the chiral condensate in the presence of hard walls~\cite{Tiburzi:2013vza}. This choice of boundary conditions is expected to introduce larger finite volume effects, vanishing algebraically with $1/L_x$ in the infinite volume limit, since the corrections are mainly driven by the hadron momentum $\pi/L_x$. To thoroughly analyze the volume dependence we performed our calculations on four different lattice sizes for both pion masses.
\begin{table}[b]
\begin{tabular*}{\columnwidth}{@{\extracolsep{\stretch{1}}}l*{7}c@{}}
\toprule
\multicolumn{1}{c}{}& \multicolumn{3}{c}{ $306\,{\rm MeV}$ }&\phantom{aa} &\multicolumn{3}{c}{ $227\,{\rm MeV}$ }\\
&$\bar\alpha_n$ & $\chi^2$& AIC & &$\bar\alpha_n$&$\chi^2$ & AIC\\
\cmidrule{2-4}\cmidrule{6-8}
Constant & 2.18(11)&17.4& 19.40&& 2.77(22)&11.76 &13.76 \\
Linear &3.67(38) &0.298 & 4.30 && 5.62(91) &1.28 &5.28 \\
Quadratic &4.1(1.1) &0.141&6.14&& 8.9(6.1) &0.99 &6.99\\
\bottomrule
\end{tabular*}
\caption{Infinite volume extrapolation results for the neutron with three different fit models.
The polarizabilities $\bar\alpha_n$ are reported in units of $10^{-4}\,{\rm fm}^3$.}
\label{tab:ninfres}
\end{table}
Since we do not know the analytical form for the finite volume effects, we fit the polarizability as a function of $1/L$ to three different models: constant, linear, and quadratic. We cannot go beyond the quadratic since we only have four different lattice sizes. To determine which model fits the data best we compute the $\chi^2$ to gauge the overall goodness of the fit. In conjunction with this goodness-of-fit criterion we use the Akaike Information Criterion (AIC)~\cite{Akaike:1974}, which measures the relative quality of different statistical models and helps determine whether a model is overfitting the data. The AIC value is given by
\begin{equation}
\mbox{AIC} = 2k + \chi^2,
\end{equation}
where $k$ is the number of parameters in the model.
For a given fit model we sum the AIC values for both pion masses. The model with the smallest combined AIC value is the one we use subsequently.
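As a concrete check, this selection can be reproduced from the neutron fits in Table~\ref{tab:ninfres}; the snippet below hard-codes the $\chi^2$ values from that table.

```python
def aic(chi2, k):
    """Akaike Information Criterion for a chi^2 fit: AIC = 2k + chi^2."""
    return 2 * k + chi2

# (number of parameters k, chi^2 at 306 MeV, chi^2 at 227 MeV) for the neutron fits
models = {
    "constant":  (1, 17.4,  11.76),
    "linear":    (2, 0.298, 1.28),
    "quadratic": (3, 0.141, 0.99),
}
# Combined AIC: sum of the individual-fit values over the two pion masses
combined = {name: aic(c1, k) + aic(c2, k) for name, (k, c1, c2) in models.items()}
best = min(combined, key=combined.get)   # -> "linear", the model adopted for the neutron
```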
\subsection{The Neutron}\label{sec:neutrondisc}
The extrapolation results for the neutron polarizability are tabulated in Table~\ref{tab:ninfres}. Figure~\ref{plot:nvolplot1} shows our polarizability results along with the linear and quadratic fits, which had the smallest AIC values and good $\chi^2$ values. Both models produce consistent results. However, the linear model produces a smaller AIC value, which indicates that the quadratic model may be overfitting the data. We will use the linear infinite volume results when discussing the chiral behavior of the neutron.
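The linear extrapolation can be reproduced from the per-ensemble values in Table~\ref{tab:fitdata}. The sketch below performs a weighted least-squares fit of the EN1--EN4 neutron values against $1/N_x$, treating the errors as uncorrelated (an approximation); the intercept recovers the linear-model value quoted in Table~\ref{tab:ninfres}.

```python
import numpy as np

# EN1-EN4 neutron polarizabilities (units of 10^-4 fm^3) and box sizes along the field
nx    = np.array([16.0, 24.0, 30.0, 48.0])
alpha = np.array([1.66, 2.23, 2.69, 3.05])
err   = np.array([0.19, 0.18, 0.37, 0.31])

x = 1.0 / nx                       # the intercept is invariant under rescaling of x
X = np.vstack([np.ones_like(x), x]).T
W = np.diag(1.0 / err**2)

# Weighted least squares: solve the normal equations (X^T W X) p = X^T W alpha
p = np.linalg.solve(X.T @ W @ X, X.T @ W @ alpha)
alpha_inf = p[0]                   # infinite-volume intercept, approximately 3.67
```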
The volume dependence analysis assumes that the finite size effects due to the electric field
are determined by the size of the lattice parallel to the applied field~(the $x$-direction in this work).
To verify this, we take our $\text{EN4}$ lattice, which has the spatial dimension $48\times24^2$, and place the
electric field along the $y$-direction, which has only 24 lattice units. We choose this ensemble because the
difference between the $x$ and $y$ directions is the largest, which gives us the best comparison. We expect our
results to be comparable to the results of the $\text{EN2}$ ensemble, which has the spatial dimension $24\times 24^2$.
We find $\bar{\alpha}_n = 2.25(25) \times 10^{-4}\,{\rm fm}^3$, which is statistically equivalent to the polarizability
for the $\text{EN2}$ ensemble and significantly different from the case where we place the field along the $N_x=48$ direction.
Fig.~\ref{plot:nvolplot2} displays the comparison. We conclude that the finite volume effects associated with
the directions perpendicular to the field are negligible.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{neutron_compare_xy.pdf}
\caption{Comparison of the $\text{EN2}$ spatial lattice ($24\times 24^2$) with the electric field in the
$x$-direction to the results of the $\text{EN4}$ spatial lattice ($48\times24^2$) with the electric field
in both $x$- and $y$-directions. The results confirm that the finite size effects associated with the
directions transverse to the electric field are negligible.}
\label{plot:nvolplot2}
\end{figure}
\subsection{Pion and Kaon}
\begin{figure*}
\centering
\includegraphics[width=0.48\textwidth]{pion_vol_300.pdf}
\includegraphics[width=0.48\textwidth]{pion_vol_200.pdf}
\includegraphics[width=0.48\textwidth]{kaon_vol_300.pdf}
\includegraphics[width=0.48\textwidth]{kaon_vol_200.pdf}
\caption{Infinite volume extrapolation for pion~(top) and kaon~(bottom) polarizability
for $m_\pi=306\,{\rm MeV}$~(left) and $m_\pi=227\,{\rm MeV}$~(right).
The two lines are infinite volume extrapolations using a constant (solid line) or a linear (dashed line) fit.}
\label{plot:pvolplot1}
\end{figure*}
The volume dependence analysis for the pion and kaon proceeds in the same way as the neutron.
Fig.~\ref{plot:pvolplot1} shows our extracted polarizabilities as a function of $1/L$ for the pion~(top plots)
and kaon~(bottom plots). We also plot the results of the constant and linear extrapolations which were the two
models with the smallest values for the AIC.
The results of the extrapolation are tabulated in Tables~\ref{tab:pinfres}~and~\ref{tab:kinfres}. For the pion
we find that the constant fit model gives the smallest combined AIC values. For the kaon at $m_\pi=306 \,{\rm MeV}$ the
constant model gives a smaller AIC value than the linear model. However, the combined result for both pion
masses---the AIC coefficient for the combined fit is the sum of the coefficients for the individual fits---is smaller
for the linear model. We therefore use the linear model for the kaon.
\begin{table}[b]
\begin{tabular*}{\columnwidth}{@{\extracolsep{\stretch{1}}}l*{7}c@{}}
\toprule
\multicolumn{1}{c}{}& \multicolumn{3}{c}{ $306\,{\rm MeV}$ }&\phantom{aa} &\multicolumn{3}{c}{ $227\,{\rm MeV}$ }\\
&$\alpha_\pi$ & $\chi^2$& AIC & &$\alpha_\pi$&$\chi^2$ & AIC\\
\cmidrule{2-4}\cmidrule{6-8}
Constant & -0.16(6)&0.27& 2.27 && -0.486(94)& 1.67& 3.67 \\
Linear &-0.20(20)&0.23 & 4.23 && -0.08(36)& 0.29& 4.29 \\
Quadratic &-0.44(61)&0.06 &6.06 && -1.1(2.5)&0.12 &6.12\\
\bottomrule
\end{tabular*}
\caption{Infinite volume extrapolation results for the pion with three different fit models.
The polarizabilities $\alpha_\pi$ are reported in units of $10^{-4}\,{\rm fm}^3$.}
\label{tab:pinfres}
\end{table}
\begin{table}[b]
\begin{tabular*}{\columnwidth}{@{\extracolsep{\stretch{1}}}l*{7}c@{}}
\toprule
\multicolumn{1}{c}{}& \multicolumn{3}{c}{ $306\,{\rm MeV}$ }&\phantom{aa} &\multicolumn{3}{c}{ $227\,{\rm MeV}$ }\\
&$\alpha_{K^0}$ & $\chi^2$& AIC & &$\alpha_{K^0}$&$\chi^2$ & AIC\\
\cmidrule{2-4}\cmidrule{6-8}
Constant & 0.132(15)& 3.45& 5.45&& 0.197(14) & 4.65& 6.65 \\
Linear &0.186(47) &1.98 &5.98 && 0.289(55) & 1.71& 5.71 \\
Quadratic &0.12(15)&1.80 & 7.8&& 0.29(42) &1.71 & 7.71\\
\bottomrule
\end{tabular*}
\caption{Infinite volume extrapolation results for the kaon with three different fit models.
The polarizabilities $\alpha_{K^0}$ are reported in units of $10^{-4}\,{\rm fm}^3$.}
\label{tab:kinfres}
\end{table}
\section{Discussion} \label{sec:discussion}
In this section we discuss our infinite volume results for the polarizability of the neutral pion and kaon and neutron
in the context of other calculations on the lattice, chiral perturbation theory ($\chi$PT), and experiment.
\begin{figure}
\includegraphics[width=0.97\columnwidth]{inf_pion_polar.pdf}\\
\hskip3mm\includegraphics[width=0.945\columnwidth]{inf_kaon_polar.pdf}
\caption{Top: Neutral pion polarizability as a function of the quark mass. The circles are
quenched results found in \cite{Alexandru:2010dx} and the triangle is the value determined
in~\cite{Detmold:2009dx}. Bottom: Neutral kaon polarizability
along with a chiral extrapolation which includes the value determined in~\cite{Detmold:2009dx}. }
\label{plot:chiralpionkaon}
\end{figure}
For the neutral pion, the results are summarized in the top panel of Fig.~\ref{plot:chiralpionkaon}.
In addition to our dynamical results,
we also show the infinite volume results from our quenched study~\cite{Alexandru:2010dx}. Since the
finite volume corrections are insignificant, the conclusions from our recent study~\cite{Lujan:2014kia} are
unchanged: the polarizability depends very little on the mass of the sea quarks, but it changes as we vary
the mass of the valence quarks. The puzzling feature persists: the neutral pion polarizability becomes
negative for $m_\pi \approx 350\,{\rm MeV}$, and its magnitude increases as we approach the physical point.
The negative trend was also observed by Detmold {\it et al.}~\cite{Detmold:2009dx}
as indicated by their result (the blue triangle) at $m_{\pi} = 400\,{\rm MeV}$ on the same plot.
It was pointed out in Ref.~\cite{Detmold:2009dx} that the negative value is inconsistent with expectations from
$\chi$PT when only the connected part of the correlator is included, as is the case in both lattice calculations.
It was speculated in Ref.~\cite{Detmold:2009dx} that the negative value could arise due to finite volume effects.
Our infinite volume results indicate that this is not the case; other effects might be at play.
One possible contribution to this discrepancy is the fact that the sea quarks are electrically neutral in these studies.
We have investigated the effects of charging the sea quarks~\cite{Freeman:2014kka}, and our initial results hint
at this scenario: we found that the neutral pion polarizability changes sign as we charge the sea quarks,
albeit still with large statistical errors. Efforts are under way to reduce the errors.
Note that there remains some disagreement between the trend suggested by our results
and the data from Ref.~\cite{Detmold:2009dx}.
It is not clear whether this disagreement is due to finite volume effects,
discretization errors (we use different actions), or statistical fluctuations.
Recently it was suggested that discretization errors, present for the Wilson-type fermions used both in this study and in the
one mentioned above, might be responsible for these puzzles~\cite{Bali:2015vua}. The background field changes the value of the
additive mass renormalization, and this might lead to shifts in the hadron mass unrelated to polarizabilities.
A continuum limit study is required to determine whether this effect is large enough to explain these puzzles.
Ultimately, the disconnected contribution must also be included to complete the picture for the neutral pion polarizability.
For the neutral kaon our results are presented in the bottom panel of Fig.~\ref{plot:chiralpionkaon}.
In contrast to the pion case, the neutral kaon has a stronger dependence on the sea quarks.
In our previous study we performed a chiral extrapolation and we found $\alpha_{K^0} = 0.269(43)$ in units
of $10^{-4} \,{\rm fm}^3$~\cite{Lujan:2014kia}. We perform the same chiral extrapolation using a linear ansatz in
$m_{\pi}$, but now using our infinite volume values. We include the value determined by
Detmold et al.~\cite{Detmold:2009dx}, since the finite volume corrections decrease with increasing
$m_\pi$ and we expect them to be negligible at $400\,{\rm MeV}$.
We find $\alpha_{K^0}= 0.356(74)\times10^{-4} \,{\rm fm}^3$, only slightly higher than
the finite volume value, suggesting that the finite volume corrections are small for the kaon.
The relative smallness of the neutral kaon polarizability is consistent with $\chi$PT
which predicts a vanishing value at the one-loop level, even with electrically neutral sea quarks~\cite{Guerrero:1997rd}.
\defHB$\chi$PT-NNLO{HB$\chi$PT-NNLO}
\defB$\chi$PT-NLO{B$\chi$PT-NLO}
\begin{figure}
\centering
\includegraphics[width=1.03\columnwidth]{inf_vol_neutron_v4c.pdf}
\includegraphics[width=1.03\columnwidth]{inf_vol_neutron_v3c.pdf}
\caption{Top: Neutron polarizability as a function of quark mass.
The black empty circles are our finite volume results presented in~\cite{Lujan:2014kia}
and the full circles are our infinite volume results. The dashed lines are two different
$\chi$PT calculations: HB$\chi$PT-NNLO~\cite{Griesshammer:2015ahu} and B$\chi$PT-NLO~\cite{Lensky:2009uv}.
The uncertainties in the curves are indicated by the shaded regions.
Bottom: Comparison with the experimental value and two other lattice calculations~\cite{Engelhardt:2007ub}
and~\cite{Detmold:2010ts}. }
\label{plot:nvolplotchiral1}
\end{figure}
We turn the discussion now to the neutron. In the top panel of Fig.~\ref{plot:nvolplotchiral1} we display
the neutron electric polarizability as a function of $m_{\pi}$.
We compare our results to two different $\chi$PT curves: an N${}^2$LO calculation using a nonrelativistic
form for some of the propagators (HB$\chi$PT-NNLO)~\cite{Griesshammer:2015ahu},
and an NLO result that uses relativistic propagators (B$\chi$PT-NLO)~\cite{Lensky:2009uv}.
We see that the value for $m_\pi=227\,{\rm MeV}$ computed on a box with $L\approx 3\,{\rm fm}$ disagrees with both curves.
After correcting for the finite volume effects, our results agree very well with the HB$\chi$PT-NNLO\ curve.
In the bottom panel of Fig.~\ref{plot:nvolplotchiral1} we show our results together with the experimental
value and compare them with two other lattice results~\cite{Detmold:2010ts,Engelhardt:2007ub} obtained on finite lattices.
We see that our results have significantly smaller statistical errors
even though they are computed using smaller pion masses and they are extrapolated to infinite volume.
This analysis demonstrates that finite volume effects are very important for neutron polarizability.
We expect that any other systematic effects are small and that the calculation,
for the pion masses used in this study, is nearly complete.
The discretization effects are expected to be of the order of one percent as experience with similar
actions indicates~\cite{Durr:2010aw}. The only remaining significant systematic error comes
from neglecting the charge of the sea quarks. For the $\text{EN2}$ ensemble, the correction was already
computed~\cite{Freeman:2012cy}. The effect was found to be small, similar to the size of statistical errors.
This is also supported by a partially quenched $\chi$PT calculation~\cite{Detmold:2006vu}: using the
formulas derived in that paper, we find that for $140\,{\rm MeV} \leq m_\pi \leq 300\,{\rm MeV}$ the neutron electric
polarizability increases by 1.5 to 2 in units of $10^{-4}\,{\rm fm}^3$ when the sea quark charges are turned on.
This prediction is shown in Fig.~\ref{plot:nvplotcharge2}. To produce these curves we used the parameters
suggested in the paper, but we had to set $|g_{N\Delta}| = 0.25$ (a value outside the expected range) to
make the ``charged'' curve go through the experimental point. Our results, which were derived using neutral
sea quarks, agree very well with the ``neutral'' curve.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{detmold-chipt-charged-vs-neutral-sea.pdf}
\caption{Expected sea quark charging effects in the neutron polarizability.
Our infinite volume results are plotted along with the $\chi$PT predictions from~\cite{Detmold:2006vu}
with neutral and charged sea quarks.}
\label{plot:nvplotcharge2}
\end{figure}
Before we conclude, we would like to discuss the systematic error
associated with the choice of fitting window. To gauge this
error, we extract the energy shift using two other fit windows---one shifted by one unit
in the positive time direction and one shifted in the negative
direction---and repeat the analysis. For the infinite volume
extrapolations we use a linear fit for the neutron and kaon, and a constant fit for
the pion. The systematic error quoted here is the standard deviation of the final
results extracted using our three fit windows. For the neutron we have
$\bar\alpha_n=3.67(38)(27)$ and $\bar\alpha_n=5.62(91)(89)$ for $m_\pi=306\,{\rm MeV}$
and $227\,{\rm MeV}$ respectively. Similarly, for neutral pion we have
$\alpha_{\pi}=-0.16(6)(6)$ and $\alpha_{\pi}=-0.486(94)(46)$ and
for neutral kaon $\alpha_{K^0}=0.186(47)(29)$ and $\alpha_{K^0}=0.289(55)(52)$.
The polarizability for neutral kaon at the physical point is $\alpha_{K^0}=0.356(74)(46)$.
All the results here are presented in natural units for hadron polarizabilities of
$10^{-4}\,{\rm fm}^3$, with the first error being stochastic and the second the systematic
due to fit window. Note that this systematic is smaller or comparable with the
stochastic error.
\section{Conclusion}
\label{sec:conclusion}
We have analyzed the volume dependence of the electric polarizability $\alpha$
for the neutral pion, neutral kaon, and neutron on four different lattice volumes
at two light quark masses corresponding to pion masses of 306 and 227~MeV, in the
mass region where chiral perturbation theory predictions are most likely reliable.
The novel aspect of this calculation is that it is the first systematic study
of finite volume effects on polarizability in the presence of
Dirichlet boundaries. These boundary conditions allow for very weak electric fields
in order to avoid a possible vacuum instability.
We also estimate the effects of charging the sea quarks.
For the neutral pion, our results confirm that the negative trend in the polarizability is not due to finite volume effects.
Rather, preliminary results indicate that the behavior is most likely due to neglecting the charge of the sea quarks.
To compare with experiment, the disconnected contribution to the neutral pion polarizability will have to be included.
For the neutral kaon, we performed a similar chiral extrapolation to the physical point as was done
in~\cite{Lujan:2014kia} but now using the infinite volume extrapolations for $\alpha_{K^0}$. We find
$\alpha_{K^0}(m_\pi^\text{phys})= 0.356(74)\times10^{-4} \,{\rm fm}^3$ which is only slightly higher than the value determined on box
sizes $L\simeq 3 \,{\rm fm}$. This indicates that the volume effects for the kaon polarizability are relatively mild.
For the neutron we find that the finite volume corrections are important. After removing them, our results are now in
excellent agreement with predictions from chiral perturbation theory.
We have not yet performed a chiral extrapolation for the neutron
since we still need to include the corrections due to the interactions
between the sea-quarks and the background field.
We are currently investigating the best method to do the extrapolation using input from $\chi$PT.
We are in the process of including the effect of charged sea quarks in the analysis for all our ensembles. Along with the infinite-volume extrapolation done here, this is part of our program geared toward determining the polarizabilities at the physical point.
\begin{acknowledgements}
We thank Andr\'{e} Walker-Loud, Vladimir Pascalutsa, and Harald Grie$\ss$hammer for discussions and
correspondence related to this project.
The computations were carried out on a variety of GPU-based supercomputers, including the GWU IMPACT collaboration machines
and Colonial One cluster, and USQCD resources at Jefferson Lab and Fermilab. This work is supported in part by the
NSF CAREER grant PHY-1151648, the U.S. Department of Energy grant DE-FG02-95ER40907, and the ARCS foundation.
\end{acknowledgements}
\begin{table*}
\begin{tabular*}{0.95\textwidth}{@{\extracolsep{\stretch{1}}}*{12}c@{}}
\toprule
&Hadron& \phantom{a}& $\text{EN1}$& $\text{EN2}$& $\text{EN3}$& $\text{EN4}$&\phantom{a}& $\text{EN5}$&$\text{EN6}$ &$\text{EN7}$&$\text{EN8}$\\
\cmidrule{4-7}\cmidrule{9-12}
\multirow{3}*{\parbox[t]{1.25cm}{$\alpha$\\$[10^{-4}\,{\rm fm}^3]$}}
&\multirow{1}*{$\pi$}&& -0.160(10) & -0.15(11)& -0.13(15)&-0.24(17) && -0.66(17) &-0.43(18)& -0.35(19)& -0.46(23)\\
&\multirow{1}*{$K$} &&0.110(22) & 0.176(34)& 0.120(33) & 0.164(40) && 0.164(24) &0.222(29) &0.191(28) &0.256(42) \\
&\multirow{1}*{$n$} && 1.66(19)&2.23(18) & 2.69(37) &3.05(31) && 1.86(38) &3.06(37) & 3.00(59)& 4.26(65)\\
\midrule[0pt]
\multirow{3}*{\parbox[t]{1.25cm}{$a\delta E$\\$[\times10^{-8}]$}}
& \multirow{1}*{$\pi$}&& -3.15(2.00) & -3.69(2.77)& -3.40(4.13)& -7.13(5.09) && -11.92(3.07) & -10.01(4.24) & -9.0(4.8) &-12.50(6.27)\\
& \multirow{1}*{$K$} &&2.83(57)& 5.14(99) & 3.47(1.00)&5.22(1.28) && 4.84(72) & 7.14(94) & 6.22(91)&8.82(1.47)\\
& \multirow{1}*{$n$} &&33.1(5.0) &53.4(5.4)& 72.6(12) &78.9(9.1) && 41.65(11.2) & 86.2(12.5) & 87.8(20.0)&125.3(20.7)\\
\midrule[0pt]
\multirow{3}*{\parbox[t]{1.25cm}{$a E$\\}}
& \multirow{1}*{$\pi$}&&0.322(35) & 0.251(9)& 0.2362(9)& 0.2084(9) && 0.276(6)& 0.207(1)& 0.184(1)&0.176(1) \\
& \multirow{1}*{$K$}&&0.401(2)& 0.3515(8) & 0.3566(8) & 0.3241(7) && 0.433(1) & 0.3952(6) & 0.392(2) & 0.3711(10) \\
& \multirow{1}*{$n$} && 0.768(16) & 0.696(9) & 0.658(10) & 0.689(2) && 0.710(6) & 0.634(4) &0.610(4) &0.619(7) \\
\midrule[0pt]
\multirow{3}*{\parbox[t]{1.25cm}{$a m$\\}}
& \multirow{1}*{$\pi$}&&0.1986(22) &0.1932(7)&0.1934(8) & 0.1938(8) && 0.145(3) & 0.140(1)& 0.138(1)&0.1391(8)\\
& \multirow{1}*{$K$} &&0.3235(15) & 0.3220(7)& 0.3228(8)& 0.3229(7) && 0.372(1) & 0.3698(6) &0.371(2) &0.372(1)\\
& \multirow{1}*{$n$} &&0.642(11) &0.644(6)& 0.657(8) & 0.647(4) && 0.622(20)& 0.618(13)& 0.620(23)&0.60(3)\\
\bottomrule
\end{tabular*}
\caption{Electric polarizabilities, energy shifts due to the field, energies computed with no external field,
and masses extracted from boxes with periodic boundary conditions for the pion, kaon, and neutron for
the 8 ensembles used in this study.}
\label{tab:fitdata}
\end{table*}
\bibliographystyle{jhep}
\section{Introduction}
Given a closed and connected subanalytic subset $X\subset{\mathbb R}^m$ the
\emph{inner metric} $d_X(x_1,x_2)$ on $X$ is defined as the infimum of
the lengths of rectifiable paths on $X$ connecting $x_1$ to $x_2$.
Clearly this metric defines the same topology on $X$ as the Euclidean
metric on ${\mathbb R}^m$ restricted to $X$ (also called \emph{``outer
metric''}). But the inner metric is not necessarily bi-Lipschitz
equivalent to the Euclidean metric on $X$. To see this it is enough to
consider a simple real cusp $x^2=y^3$. A subanalytic set is called
\emph{normally embedded} if these two metrics (inner and Euclidean)
are bi-Lipschitz equivalent.
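To see this failure of bi-Lipschitz equivalence quantitatively, parametrize the cusp as $\gamma(t)=(t^3,t^2)$; the inner distance between $\gamma(-s)$ and $\gamma(s)$ behaves like $2s^2$, while the outer (Euclidean) distance is $2s^3$, so their ratio grows like $1/s$. A numerical sketch (the parametrization and the sample values of $s$ are our choice, for illustration only):

```python
import math

def arc_length(s, n=100_000):
    """Inner distance on the cusp x^2 = y^3 between (s^3, s^2) and (-s^3, s^2):
    the only path on the curve runs through the origin, so integrate
    |gamma'(t)| over [-s, s] with gamma(t) = (t^3, t^2)."""
    h = 2 * s / n
    total = 0.0
    for i in range(n):
        t = -s + (i + 0.5) * h          # midpoint rule
        total += math.hypot(3 * t * t, 2 * t) * h
    return total

def outer_distance(s):
    # Euclidean distance between the two endpoints (s^3, s^2), (-s^3, s^2)
    return 2 * s**3

ratios = {s: arc_length(s) / outer_distance(s) for s in (0.1, 0.01)}
```

Shrinking $s$ by a factor of 10 multiplies the inner/outer ratio by roughly 10, so no single Lipschitz constant can compare the two metrics near the cusp point.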
\begin{theorem}[\cite{BM}] Let $X\subset{\mathbb R}^m$ be a connected and
globally subanalytic set. Then there exist a normally embedded
globally subanalytic set $\tilde{X}\subset{\mathbb R}^q$ and a global
subanalytic homeomorphism $p\colon\tilde{X}\rightarrow X$
bi-Lipschitz with respect to the inner metric. The pair
$(\tilde{X},p)$ is called a normal embedding of $X$.
\end{theorem}
The original version of this theorem (see \cite{BM}) was formulated
in semialgebraic language, but it is easy to see that the result
remains true for the global subanalytic structure or, moreover, for
any o-minimal structure; the proof is the same as in \cite{BM}.
Complex algebraic sets and real algebraic sets are globally
subanalytic sets. By the above theorem these sets admit globally
subanalytic normal embeddings. Tadeusz Mostowski asked if there exists
a complex algebraic normal embedding when $X$ is complex algebraic
set, i.e., a normal embedding for which the image set
$\tilde{X}\subset{\mathbb C}^n$ is a complex algebraic set. In this note we
give a negative answer to Mostowski's question. Namely, we prove
that a Brieskorn surface $x^b+y^b+z^a=0$ does not admit a complex
algebraic normal embedding if $b>a$ and $a$ is not a divisor of $b$. For
the proof of this theorem we use the ideas of the remarkable paper of
A. Bernig and A. Lytchak \cite{BL} on metric tangent cones and the
paper of the authors on the $(b,b,a)$ Brieskorn surfaces \cite{BFN2}.
We also briefly describe other examples based on taut singularities.
\subsection*{Acknowledgements}
The authors acknowledge research support under the grants: CNPq
grant no 301025/2007-0 (Lev Birbrair), FUNCAP/PPP 9492/06, CNPq
grant no 300393/2005-9 (Alexandre Fernandes) and NSF grant no.\
DMS-0456227 (Walter Neumann).
\section{Proof}
Recall that a subanalytic set $X\subset{\mathbb R}^n$ is called
\emph{metrically conical} at a point $x_0$ if there exists an
Euclidean ball $B\subset{\mathbb R}^n$ centered at $x_0$ such that $X\cap B$
is bi-Lipschitz homeomorphic, with respect to the inner metric, to
the metric cone over its link at $x_0$. When such a bi-Lipschitz
homeomorphism is subanalytic we say that $X$ is
\emph{subanalytically metrically conical} at $x_0$.
\begin{example}{\rm The Brieskorn surfaces in ${\mathbb C}^3$}
$$\{(x,y,z) ~|~ x^b+y^b+z^a=0\}$$ {\rm ($b>a$) are subanalytically
metrically conical at $0\in{\mathbb C}^3$ (see \cite{BFN2}).}
\end{example}
We say that a complex algebraic set admits a \emph{complex algebraic
normal embedding} if the image of a subanalytic normal embedding of
this set can be chosen complex algebraic.
\begin{example} Any complex algebraic curve
admits a complex algebraic normal embedding. This follows from the
fact that the germ of an irreducible complex algebraic curve is
bi-Lipschitz homeomorphic with respect to the inner metric to the
germ of ${\mathbb C}$ at the origin.
\end{example}
\begin{theorem}\label{main_theorem} If $1<a<b$ and $a$ is not a
divisor of $b$ then no neighborhood of $0$ in the
Brieskorn surface in ${\mathbb C}^3$
$$\{(x,y,z)\in{\mathbb C}^3 ~|~ x^b+y^b+z^a=0\}$$
admits a complex algebraic normal embedding.
\end{theorem}
We will need the following result on
tangent cones.
\begin{theorem}\label{bernig_lytchak_2}
If $(X_1,x_1)$ and $(X_2,x_2)$ are germs of subanalytic sets which
are subanalytically bi-Lipschitz homeomorphic with respect to the
induced Euclidean metric, then their tangent cones $T_{x_1}X_1$ and
$T_{x_2}X_2$ are subanalytically bi-Lipschitz homeomorphic.
\end{theorem}
This result is a weaker version of the results of
Bernig--Lytchak (\cite{BL}, Remark 2.2 and Theorem 1.2). We present
here an independent proof.
\begin{proof}[Proof of Theorem \ref{bernig_lytchak_2}] Let us denote
$$S_xX=\{ v\in T_xX ~|~ |v|=1\}.$$ Since $T_xX$ is a cone over
$S_xX$, in order to prove that $T_{x_1}X_1$ and $T_{x_2}X_2$ are
subanalytically bi-Lipschitz homeomorphic, it is enough to prove
that $S_{x_1}X_1$ and $S_{x_2}X_2$ are subanalytically bi-Lipschitz
homeomorphic.
By Corollary 0.2 in \cite{V}, there exists a subanalytic
bi-Lipschitz homeomorphism with respect to the induced Euclidean
metric: $$h\colon (X_1,x_1)\rightarrow (X_2,x_2)\,,$$ such that
$|h(x)-x_2|=|x-x_1|$ for all $x$. Let us define $$dh\colon
S_{x_1}X_1\rightarrow S_{x_2}X_2$$ as follows: given $v\in
S_{x_1}X_1$, let $\gamma\colon[0,\epsilon)\rightarrow X_1$ be a
subanalytic arc such that
$$ |\gamma(t)-x_1|=t ~ \forall ~t\in [0,\epsilon)\quad
\mbox{and}\quad
\lim_{t\to 0^+}\frac{\gamma(t)-x_1}{t}=v\,;$$ we define
$$dh(v)=\lim_{t\to 0^+}\frac{h\circ\gamma(t)-x_2}{t}.$$ Clearly,
$dh$ is a subanalytic map. Define $d(h^{-1})\colon
S_{x_2}X_2\rightarrow S_{x_1}X_1$ the same way. Let $k>0$ be a
Lipschitz constant of $h$. Let us prove that $k$ is a Lipschitz
constant of $dh$. In fact, given $v_1,v_2\in S_{x_1}X_1$, let
$\gamma_1,\gamma_2\colon[0,\epsilon)\rightarrow X_1$ be subanalytic
arcs such that
$$ |\gamma_i(t)-x_1|=t ~ \forall ~t\in [0,\epsilon) \quad
\mbox{and} \quad \lim_{t\to 0^+}\frac{\gamma_i(t)-x_1}{t}=v_i, \quad i=1,2.$$
Then
\begin{eqnarray*}
|dh(v_1)-dh(v_2)| &=& \Bigl|\lim_{t\to 0^+}\frac{h\circ\gamma_1(t)-x_2}{t}-\lim_{t\to 0^+}\frac{h\circ\gamma_2(t)-x_2}{t}\Bigr| \\
&=& \lim_{t\to 0^+}\frac{1}{t}|h\circ\gamma_1(t)-h\circ\gamma_2(t)| \\
&\leq& k \lim_{t\to 0^+}\frac{1}{t}|\gamma_1(t)-\gamma_2(t)| \\
&=& k |v_1-v_2|.
\end{eqnarray*}
Since $d(h^{-1})$ is $k$--Lipschitz by the same argument and $dh$ and
$d(h^{-1})$ are mutual inverses, we have proved the theorem.
\end{proof}
\begin{corollary}\label{proposition6.2}
Let $X\subset{\mathbb R}^n$ be a normally embedded subanalytic set. If $X$ is
subanalytically metrically conical at a point $x\in X$, then the
germ $(X,x)$ is subanalytically bi-Lipschitz homeomorphic to the
germ $(T_xX,0)$.
\end{corollary}
\begin{proof} The tangent cone of the straight cone at the
vertex is the cone itself. So the corollary is a
direct application of Theorem \ref{bernig_lytchak_2}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{main_theorem}]
Let $X\subset{\mathbb C}^3$ be the complex algebraic surface defined by
$$X=\{(x,y,z) ~|~ x^b+y^b+z^a=0\}.$$ We are going to prove that the
germ $(X,0)$ does not have a normal embedding in ${\mathbb C}^N$ which is a
complex algebraic surface. In fact, if
$(\tilde{X},0)\subset({\mathbb C}^N,0)$ is a complex algebraic normal
embedding of $(X,0)$ and $p\colon(\tilde{X},0)\rightarrow (X,0)$ is
a subanalytic bi-Lipschitz homeomorphism, since $(X,0)$ is
subanalytically metrically conical \cite{BFN2}, then
$(\tilde{X},{0})$ is subanalytically metrically conical and by
Corollary \ref{proposition6.2} $(\tilde{X},{0})$ is subanalytically
bi-Lipschitz homeomorphic to $(T_{0}\tilde{X},{0})$. Now, the
tangent cone $T_{0}\tilde{X}$ is a complex algebraic cone, thus its
link is a $S^1$-bundle. On the other hand, the link of $X$ at $0$ is
a Seifert fibered manifold with $b$ singular fibers of degree $\frac
a{\gcd(a,b)}$. This is a contradiction because the Seifert fibration
of a Seifert fibered manifold (other than a lens space) is unique up
to diffeomorphism.
\end{proof}
The following result relates the metric tangent cone of $X$ at $x$
and the usual tangent cone of the normally embedded sets. See
\cite{BL} for a definition of a metric tangent cone.
\begin{theorem}
[\cite{BL}, Section 5]\label{bernig_lytchak} Let $X\subset{\mathbb R}^m$ be a
closed and connected subanalytic set and $x\in X$. If
$(\tilde{X},p)$ is a normal embedding of $X$, then
$T_{p^{-1}(x)}\tilde{X}$ is bi-Lipschitz homeomorphic to the metric
tangent cone $\mathcal{T}_xX$.
\end{theorem}
\begin{remark}{\rm We
showed that the metric tangent cones of the above Brieskorn surface
singularities are not homeomorphic to any complex cone.}
\end{remark}
\subsection{Other examples} We sketch how taut surface singularities
give other examples of complex surface germs without any complex
algebraic normal embeddings. We first outline the argument and then
give some clarification.
Both the inner metric and the outer (Euclidean) metric on a complex
analytic germ $(V,p)$ are determined up to bi-Lipschitz equivalence by
the complex analytic structure (independent of a complex
embedding). This is because $(f_1,\dots,f_N)\colon
(V,p)\hookrightarrow ({\mathbb C}^N,0)$ is a complex analytic embedding if and
only if the $f_i$ generate the maximal ideal of $\mathcal O_{(V,p)}$,
and adding to the set of generators gives an embedding that induces
the same metrics up to bi-Lipschitz equivalence. A taut complex
surface germ is an algebraicly normal germ (to avoid confusion we say
``algebraicly normal'' for algebro-geometric concept of normality)
whose complex analytic structure is determined by its topology. Thus
a taut singularity whose inner and outer metrics do not agree can have
no complex analytic normal embedding. Taut complex surface
singularities were classified by Laufer \cite{taut} and include, for
example, the simple singularities. The simple singularities of type
$B_n$, $D_n$, and $E_n$ have non-reduced tangent cones, from which
follows easily that they have non-equivalent inner and outer
metrics. Thus, they admit no complex algebraic normal embeddings.
There is an issue with this argument, in that we have restricted to
complex analytic embeddings of $(V,p)$, that is, embeddings that
induce an isomorphism on the local ring. But one can find holomorphic
maps that are topological embeddings but which only induce an
injection on the local ring (the image will no longer be an
algebraically normal germ). Such a map is not a complex analytic
embedding, but it will still be holomorphic and real
semialgebraic. It is not hard to see that the non-reducedness of
the tangent cone persists when one restricts the local ring, so the
argument of the previous paragraph still applies.
\section{Introduction}
The $B_c$ system is the only known heavy meson family with unequal quark masses, which provides an important testing ground for understanding strong interaction physics.
While the spectra and properties of the $c\bar c$ and $b\bar b$ mesons are extensively studied in experiments, data on $b\bar c$ or $c \bar b$ are relatively scarce.
Until now, only two states, the ground state and its first radial excitation, are confirmed in experiments \cite{PhysRevD.58.112004,PhysRevLett.113.212004}.
Meanwhile, ongoing and forthcoming high-energy experiments, e.g.\ at the LHC and RHIC, are expected to produce a large ensemble of these particles. For these reasons, there is renewed interest in theoretical investigations \cite{PhysRevD.96.054501,PhysRevD.91.114509,PhysRevD.94.034036,BHATTACHARYA2017430}.
Light-front quantization is a natural relativistic framework to describe the intrinsic partonic structure of hadrons \cite{BAKKER2014165}. Among various light-front approaches, light-front holographic (LFH) models stand out as semi-classical approximations to QCD (see Ref. \cite{BRODSKY20151} and the references therein). Meanwhile, a computational framework known as the basis light-front quantization (BLFQ),
has been established to tackle the many-body dynamics and has been applied to QED \cite{maris2013bound,PhysRevD.91.105009} and QCD \cite{LI2016118,PhysRevD.96.016022} bound states. In the latter case, LFH is embedded in the BLFQ formulation to model the heavy quarkonium (charmonium and bottomonium) system. The results have shown good agreement with experiments and other theoretical models.
In this work, we adapt the successful Hamiltonian of Refs. \cite{LI2016118,PhysRevD.96.016022} to the $B_c$ system in the BLFQ approach. In essence, this model implements the AdS/QCD soft-wall Hamiltonian \cite{PhysRevLett.102.081601} plus a longitudinal confinement \cite{LI2016118}, both of which are of long range. In addition, we adopt the one-gluon exchange with a running coupling \cite{PhysRevD.96.016022}. This term controls the short-distance physics and embeds the spin structure information. We solve the $B_c$ system without introducing any additional free parameter (other than the ones employed in charmonium and bottomonium). Therefore, it is also a test of the predictive power of the model proposed in Ref. \cite{PhysRevD.96.016022}. This work is a straightforward yet necessary step for developing a relativistic model for hadrons based on light-front holography and light-front dynamics.
We begin by introducing the effective light-front Hamiltonian and the basis function approach in Sec. \ref{sec2}, following Ref. \cite{PhysRevD.96.016022}. Presented in Sec. \ref{sec3} are results including the mass spectrum, wave functions, decay constants, transverse charge and momentum density, and distribution amplitudes. They are compared with experiments and other theories whenever available. We also discuss the differences between $B_c$ and heavy quarkonium. Sec. \ref{sec4} summarizes our current work and provides a brief discussion of possible improvements.
\section{\label{sec2}Hamiltonian Formalism and the Basis Function Representation}
The light-front Hamiltonian formalism leads to an eigenvalue equation $H\ket{\psi_h} = M^2_h\ket{\psi_h} $. Here we adapt the effective Hamiltonian of Refs. \cite{LI2016118,PhysRevD.96.016022,PhysRevLett.102.081601} for unequal quark masses:
\begin{equation}
\begin{split}
H_\text{eff} = \frac{\vec{k}^2_\bot+m^2_q}{x} + \frac{\vec{k}^2_\bot+m^2_{\bar{q}}}{1-x} +\kappa^4 \vec{\zeta}^2_\bot - \frac{\kappa^4}{(m_q+m_{\bar{q}})^2}&\partial_x\big(x(1-x)\partial_x\big) \\
-&\frac{C_F 4\pi \alpha_s(Q^2)}{Q^2}\bar{u}_{s'}(k')\gamma_\mu u_s(k) \bar{v}_{\bar{s}}(\bar{k})\gamma^\mu v_{\bar{s}'}
(\bar{k}'),
\end{split}
\end{equation}
where $\vec{\zeta}_\bot \equiv \sqrt{x(1-x)} \vec{r}_\bot$ is the holographic variable \cite{BRODSKY20151} and $C_F = (N_c^2-1)/(2N_c)=4/3$ is the color factor of the $q\bar{q}$ color-singlet state. In this paper, we investigate the $B_c$ system as $b\bar{c}$, i.e.\ $B_c^-$. Therefore $m_q$ is the mass of the bottom quark and $m_{\bar{q}}$ that of the anti-charm quark. $x$ and $(1-x)$ are the longitudinal momentum fractions of $b$ and $\bar{c}$, respectively.
%
We incorporate a running coupling for the one-gluon exchange potential, which is modeled as \cite{PhysRevD.96.016022},
\begin{equation}
\alpha_s(Q^2)=1/[ \beta_0 \ln (Q^2/\Lambda^2+\tau) ],
\end{equation}
where $\beta_0=(33-2N_f)/(12\pi)$, with the quark flavor number taken to be $N_f=4$. We use $\Lambda = 0.13$ GeV and, in order to avoid the pQCD IR divergence, $\tau = 12.3$, such that $\alpha_s(0) = 0.6$. See Ref. \cite{PhysRevD.96.016022} for more details.
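As a quick cross-check of these numbers (a sketch using only the values quoted above), $\beta_0 = 25/(12\pi)$ for $N_f = 4$, and $\tau = 12.3$ indeed yields $\alpha_s(0) = 1/(\beta_0\ln\tau) \approx 0.6$:

```python
import math

N_f = 4
beta0 = (33 - 2 * N_f) / (12 * math.pi)
Lambda = 0.13   # GeV
tau = 12.3      # IR shift, fixed so that alpha_s(0) = 0.6

def alpha_s(Q2):
    """Model running coupling: saturates at alpha_s(0) = 1/(beta0 * ln(tau))."""
    return 1.0 / (beta0 * math.log(Q2 / Lambda**2 + tau))

a0 = alpha_s(0.0)          # ~0.60 by construction
a_mb = alpha_s(4.902**2)   # weaker coupling at the bottom-quark mass scale
```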
We adopt a Fock space limited to the $\ket{q\bar{q}}$ sector where the state vector reads,
\begin{equation}
\begin{split}
\ket{\psi_h(P,j,m_j)}=\sum_{s,\bar{s}}\int_0^1&\frac{{\mathrm{d}} x}{2x(1-x)}\int\frac{{\mathrm{d}} ^2 k_\bot}{(2\pi)^3}\ \psi_{s\bar{s}/h}^{(m_j)}(\vec{k}_\bot,x)\\
&\times\frac{1}{\sqrt{N_c}}\sum_{i=1}^{N_c} b^\dagger_{si/b} (xP^+,\vec{k}_\bot+x\vec{P}_\bot) d^\dagger_{\bar{s}i/\bar{c}} ((1-x)P^+,-\vec{k}_\bot+(1-x)\vec{P}_\bot)\ket{0}.
\end{split}
\end{equation}
In the expression above, $\psi_{s\bar{s}/h}^{(m_j)}(\vec{k}_\bot,x)$ represents the light-front wave functions (LFWFs), $s$ and $\bar{s}$ are the spins of the quark and anti-quark, respectively.
The anti-commutation relations of the creation operators and the orthonormal relation of the state vectors in this work are similar to those for heavy quarkonium \cite{PhysRevD.96.016022}.
We use a basis function approach, BLFQ \cite{PhysRevC.81.035205}, and following Ref. \cite{LI2016118}, we represent the LFWFs in terms of transverse and longitudinal basis functions $\phi_{nm}$ and $\chi_l$, with basis coefficients $\psi_h(n,m,l,s,\bar{s})$,
\begin{equation}
\psi_{s\bar{s}/h}^{(m_j)}(\vec{k}_\bot,x) =\sum_{n,m,l} \psi_h(n,m,l,s,\bar{s})\phi_{nm}(\vec{k}_\bot/\sqrt{x(1-x)})\chi_l(x).
\end{equation}
%
For the basis functions, we employ
\begin{equation}
\begin{gathered}
\phi_{nm}(\vec{q}_\bot;b)=\frac{1}{b}\sqrt{\frac{4\pi n!}{(n+|m|)!}} \bigg(\frac{q_\bot}{b}\bigg)^{|m|} e^{-\frac{1}{2}q^2_\bot/b^2} L^{|m|}_n (q^2_\bot/b^2) e^{{\mathrm{i}} m\theta_q} ,\\
\chi_l(x;\alpha,\beta)=\sqrt{4\pi(2l+\alpha+\beta+1)}\sqrt{\frac{\Gamma(l+1)\Gamma(l+\alpha+\beta+1)}{\Gamma(l+\alpha+1)\Gamma(l+\beta+1)}} x^{\frac{\beta}{2}}(1-x)^{\frac{\alpha}{2}} P^{(\alpha,\beta)}_l(2x-1),
\end{gathered}
\end{equation}
which are the analytical solutions of the effective Hamiltonian without the one-gluon exchange.
Here, $\phi_{nm}$ is the 2D harmonic oscillator function with $n$ and $m$ the principal and orbital quantum numbers, respectively, with $\vec{q}_\bot = \vec{k}_\bot/\sqrt{x(1-x)}, \ q_\bot= \abs{\vec{q}_\bot}, \ \theta_q=\arg \vec{q}_\bot$, and $L^{\abs{m}}_n(z)$ is the associated Laguerre polynomial. Note that the conserved total magnetic projection $m_j$ is the sum of the orbital projection $m$ and the spin projections: $m_j = m + s + \bar{s}$. We adopt $b=\kappa$ for the scale parameter in the HO basis. For the longitudinal basis function $\chi_l$, $l$ is the longitudinal quantum number and $P^{(\alpha,\beta)}_l(z)$ is the Jacobi polynomial. The dimensionless parameters $\alpha$ and $\beta$ are associated with the quark masses: $\alpha = 2m_{\bar{c}}(m_b+m_{\bar{c}})/\kappa^2$ and $\beta = 2m_b(m_b+m_{\bar{c}})/\kappa^2$.
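With the prefactors as written, the longitudinal basis functions satisfy $\int_0^1 \chi_l(x)\chi_{l'}(x)\,{\mathrm{d}} x = 4\pi\,\delta_{ll'}$, by the weighted orthogonality of the Jacobi polynomials. A numerical sketch of this check ($\alpha$ and $\beta$ are computed from the quark masses and $\kappa$ quoted later in the text; log-Gamma functions are used because $\Gamma(l+\alpha+\beta+1)$ is very large for these arguments):

```python
import math
from scipy.special import eval_jacobi, gammaln
from scipy.integrate import quad

kappa, m_c, m_b = 1.196, 1.603, 4.902        # GeV, b-cbar parameters
alpha = 2 * m_c * (m_b + m_c) / kappa**2      # ~14.6
beta = 2 * m_b * (m_b + m_c) / kappa**2       # ~44.6

def chi(l, x):
    """Longitudinal basis function chi_l(x; alpha, beta) as defined in the text."""
    lognorm = 0.5 * (gammaln(l + 1) + gammaln(l + alpha + beta + 1)
                     - gammaln(l + alpha + 1) - gammaln(l + beta + 1))
    pref = math.sqrt(4 * math.pi * (2 * l + alpha + beta + 1)) * math.exp(lognorm)
    return pref * x**(beta / 2) * (1 - x)**(alpha / 2) * eval_jacobi(l, alpha, beta, 2 * x - 1)

norm00, _ = quad(lambda x: chi(0, x) ** 2, 0, 1)   # expect 4*pi
cross01, _ = quad(lambda x: chi(0, x) * chi(1, x), 0, 1)   # expect 0
```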
%
With the one-gluon exchange included, the eigenvalue equation is solved by diagonalizing the Hamiltonian matrix. The resulting eigenvalues give the spectrum as squared masses, and the eigenvectors give the coefficients $\psi_h(n,m,l,s,\bar{s})$.
\section{\label{sec3}Numerical results}
\begin{table} \footnotesize
\centering
\begin{tabular}{ccccccccccc}
\hline\hline
& \hspace{0.1cm} $N_f$ \hspace{0.1cm} & \hspace{0.1cm}$\kappa$ (GeV) \hspace{0.1cm}& \hspace{0.1cm} $m_c$ (GeV) \hspace{0.1cm} & \hspace{0.1cm} $m_b$ (GeV) \hspace{0.1cm} & \hspace{0.1cm} r.m.s (MeV) \hspace{0.1cm}& \hspace{0.1cm} $\overline{\delta_j M}$ (MeV) \hspace{0.1cm}& $N_\text{max} = L_\text{max}$ & Ref. \\
\hline
%
$c\bar{c}$ &$4$& $0.966$ & $1.603$&$-$&$31$&$17$&$32$&\cite{PhysRevD.96.016022} \\
$b\bar{b}$ &$5$& $1.389$ &$-$&$4.902$&$38$&$8$&$32$&\cite{PhysRevD.96.016022} \\
$b\bar{c}$ &$4$& $1.196$ &$1.603$&$4.902$&$37$&$6$&$32$&this work \\
\hline\hline
\end{tabular}
\caption{Summary of the model parameters}
\label{tb1}
\end{table}
In this work, we adopt model parameters from the charmonium and bottomonium calculations without further fitting. In particular, we take the quark masses from the charmonium and bottomonium applications \cite{PhysRevD.96.016022} (see TABLE \ref{tb1}). The confining strength is taken as $\kappa_{b\bar{c}} = \sqrt{(\kappa^2_{c\bar{c}}+\kappa^2_{b\bar{b}})/2}$, where $\kappa_{c\bar{c}}$ and $\kappa_{b\bar{b}}$ are the confining strengths of the charmonium and bottomonium systems, respectively. This is in accordance with heavy quark effective theory (HQET) \cite{PhysRevD.95.034016}. All the calculations in this paper, unless otherwise stated, are based on $N_\text{max} = L_\text{max}=32$, which is associated with the UV and IR regulators $\Lambda_\text{UV} = b\sqrt{N_\text{max}} \simeq 6.77$ GeV and $\lambda_\text{IR} = b / \sqrt{N_\text{max}}\simeq 0.21$ GeV.
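The quoted scales are mutually consistent and can be reproduced directly from the entries of TABLE \ref{tb1} (a sketch; the longitudinal peak position $x \approx 0.75$ discussed for the wave functions below is included as well):

```python
import math

kappa_cc, kappa_bb = 0.966, 1.389   # GeV, charmonium / bottomonium
m_c, m_b = 1.603, 4.902             # GeV
N_max = 32

# HQET-motivated confining strength for the unequal-mass system
kappa_bc = math.sqrt((kappa_cc**2 + kappa_bb**2) / 2)   # ~1.196 GeV

# basis-truncation UV and IR scales
Lambda_UV = kappa_bc * math.sqrt(N_max)   # ~6.77 GeV, close to m_b + m_c
lambda_IR = kappa_bc / math.sqrt(N_max)   # ~0.21 GeV

# longitudinal peak of the ground-state LFWF (kinetic-energy argument)
x_peak = m_b / (m_b + m_c)                # ~0.75
```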
\subsection{Mass Spectroscopy}
In order to identify the multiplet of magnetic substates belonging to a single angular momentum $j$, the effective Hamiltonian is diagonalized for various $m_j$'s. One needs to perform the state identification to deduce the full set of quantum numbers $\mathsf{n}^{2s+1}\ell_j$ or $j^{\mathsf{P}}$, where $\ell$ is the orbital angular momentum, $\mathsf{n}$ is the radial quantum number (not to be confused with the basis quantum numbers $n$ and $l$).
The reconstructed mass spectrum up to the $B\overline{D}$ open flavor threshold is presented in Fig. \ref{fig1}, where we use the dashed lines for the mean values of invariant masses:
\begin{equation}
\overline{M}\equiv \sqrt{\frac{M^2_{-j}+M^2_{1-j}+...+M^2_j}{2j+1}}.
\end{equation}
The boxes indicate the spread from different $m_j$, i.e. $\delta_jM\equiv \max (M_{m_j} )- \min( M_{m_j}) $, which is nonzero due to the violation of rotational symmetry arising from the Fock space and basis space truncations.
We also employ the mean spread to quantify the rotational symmetry violation over all high-spin states below their respective dissociation thresholds,
\begin{equation}
\label{eq2}
\overline{\delta_j M} \equiv \sqrt{\frac{1}{N_h} \sum_{h}^{j\neq0}(\delta_jM_h)^2} \quad \Big(N_h\equiv \sum_{h}^{j\neq0} 1\Big).
\end{equation}
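The quantities $\overline{M}$, $\delta_j M$, and $\overline{\delta_j M}$ are simple to reconstruct from the substate masses; a minimal sketch (the triplet masses below are hypothetical placeholders, not our results):

```python
import math

def mean_mass(multiplet):
    """RMS-average invariant mass of a 2j+1 multiplet {M_{m_j}}."""
    return math.sqrt(sum(M * M for M in multiplet) / len(multiplet))

def spread(multiplet):
    """delta_j M: spread among magnetic substates (0 for exact rotational symmetry)."""
    return max(multiplet) - min(multiplet)

def mean_spread(multiplets):
    """Quadratic mean of delta_j M over all j != 0 states below threshold."""
    return math.sqrt(sum(spread(m) ** 2 for m in multiplets) / len(multiplets))

# hypothetical j = 1 triplet masses in GeV (placeholders, not our results)
triplet = [6.340, 6.342, 6.339]
Mbar = mean_mass(triplet)   # close to 6.340 GeV
dM = spread(triplet)        # 3 MeV spread
```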
We provide the experimental values \cite{1674-1137-40-10-100001} and results from Lattice \cite{PhysRevLett.94.172001, PhysRevLett.104.022001, DAVIES1996131} for comparison.
Note that the r.m.s.\ deviations in TABLE \ref{tb1} are evaluated with respect to 8 and 14 states for charmonium and bottomonium \cite{PhysRevD.96.016022}, respectively, but with respect to only two experimental states for $B_c$.
\begin{figure}
\centering
\includegraphics[scale=0.45]{Bc_spectrum_Runalf0p6_kap1p20_mc1p60_mb4p96_NL32_style8}
\caption{\label{fig1}The reconstructed $B_c\ (b\bar{c})$ spectrum at $N_\text{max}=L_\text{max}=32$. The horizontal axis is $J^{\mathsf{P}}$ and vertical axis is invariant mass in GeV. We compare with data from PDG \cite{1674-1137-40-10-100001} and Lattice \cite{PhysRevLett.94.172001, PhysRevLett.104.022001, DAVIES1996131}, with central values shown as solid lines and uncertainties as shades.}
\end{figure}
We note that the mean spread of $b\bar{c}$, evaluated over a total of 12 states below the threshold of this system, is the smallest in TABLE \ref{tb1}. We compare the mass spectra of $B_c$ and the heavy quarkonia in Fig. \ref{fig7} for selected states. It is difficult to ascertain visually which system best preserves rotational symmetry. However, as evident from TABLE \ref{tb1}, on the basis of the mean spread relative to the total mass, the violation of rotational symmetry is larger for charmonium than for $B_c$ and bottomonium. This suggests that heavier systems preserve rotational symmetry better than lighter ones in our approach.
\begin{figure}
\centering
\includegraphics[scale=0.45]{spectrum_all}
\caption{\label{fig7}The mass spectrum for charmonium, $B_c$, and bottomonium below their respective dissociation thresholds. Note that, aside from an overall shift, the mass scales are similar. We only select the states with $J=0 \text{ and } 1$ for comparison.
Experimental results from PDG \cite{1674-1137-40-10-100001} are in red while our results are in black. The spread in $m_j$ values, defined in Eq. \ref{eq2}, is indicated by a rectangular black box around the $J = 1 $ theory results. This is scarcely visible in many cases.
The mean spread of charmonium is larger than the other two. All the three systems have similar patterns in the spectrum while the heavier system has more states below the threshold.}
\end{figure}
\subsection{Light-Front Wave Functions}
Obtaining the light-front wave functions is a major motivation for this formalism, as they provide direct access to hadron observables. We present some of the valence LFWFs with different polarizations and spin alignments for $B_c$ states. Specifically, we have the relation $m_j = s+\bar{s}+m$, where $m$ is the orbital angular momentum projection.
Since the phase $\exp({\mathrm{i}} m \theta)$ factorizes in the wave function at the two-body level, we drop it while retaining the sign for negative $k_x$, i.e.\ we visualize the LFWFs at $k_y = 0$ ($\theta = 0$ and $\theta = \pi$).
In Fig. \ref{fig3}, we show the ground state pseudoscalar LFWFs. There are two independent components with different spin alignments for $0^-$ state: $\psi_{\uparrow\downarrow - \downarrow\uparrow}(\vec{k}_\bot,x) \equiv \frac{1}{\sqrt{2}}[\psi_{\uparrow\downarrow}(\vec{k}_\bot,x) - \psi_{\downarrow\uparrow}(\vec{k}_\bot,x)] $ and $\psi_{\downarrow\downarrow}(\vec{k}_\bot,x)=\psi^*_{\uparrow\uparrow}(\vec{k}_\bot,x)$.
The former is dominant and reduces to the non-relativistic wave function in the heavy quark limit, while the latter is of pure relativistic origin.
Furthermore, $B_c$ has another significant feature that distinguishes it from quarkonium (equal-mass mesons).
For the heavy quarkonia, charge conjugation is a good symmetry, reflected by states having components either even or odd in $(m+l)$ \cite{LI2016118}. $B_c$ has no charge conjugation symmetry, but we do observe that our solutions are dominated by either even or odd $(m+l)$. TABLE \ref{tb4} exhibits this dominance for the ground states, along with the comparison with the heavy quarkonia. In a separate test calculation, we verified that, as the mass difference between quark and anti-quark decreases, the contribution from even $(m+l)$ becomes smaller, approaching the equal-mass limit smoothly.
\begin{table}
\centering
\begin{tabular}{c|c c|c c}
\hline\hline
\\[-0.3cm]
\multirow{2}{*}{ \diagbox[width=3.5cm, height= 1.1cm]{system}{\raisebox{2cm}{\rotatebox{0}{ \footnotesize{even/odd $(m+l)$}}}} } & \multicolumn{2}{c}{ \hspace{1cm} $\abs{\psi_{\uparrow\downarrow - \downarrow\uparrow}(\vec{k}_\bot,x)}^2$} \hspace{1cm}&
\multicolumn{2}{|c}{ $\quad \abs{\psi_{\downarrow\downarrow}(\vec{k}_\bot,x)}^2 +\abs{\psi_{\uparrow\uparrow}(\vec{k}_\bot,x)}^2$} \\[0.2cm]
& \hspace{0.6cm} Odd & Even & \hspace{0.5cm} Odd & Even \\[0.05cm]
\hline
$c\bar{c}$ & \hspace{0.6cm} $88.01\%$ & $0$ & \hspace{0.5cm} $11.99\%$ &$0$ \\
$b\bar{c}$ & \hspace{0.6cm} $91.62\%$ &$0.35\%$ & \hspace{0.5cm} $7.98\%$ &$0.05\%$ \\
$b\bar{b}$ & \hspace{0.6cm} $96.61\%$ &$0$ & \hspace{0.5cm} $3.39\%$ &$0$ \\
\hline\hline
\end{tabular}
\caption{ \label{tb4} The probabilities of finding the specified even or odd $(m+l)$ components in the ground state of the heavy mesons. The dominant spin alignments listed here are the components that persist in the non-relativistic limit. Note the systematic increase of these dominant components with increasing meson mass.}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.42]{bc1S0_mJ0_s+1sbar-1_mom_style1} \hspace{1cm}
\includegraphics[scale=0.42]{bc1S0_mJ0_s+1sbar-1_mom_style1_1}
$$\text{a)}\ \psi_{\uparrow\downarrow - \downarrow\uparrow}(k_x, k_y = 0,x). \text{\bf{ Left}}: (m+l)=\text{Even}; \ \text{\bf{Right}}: (m+l)=\text{Odd}. $$
\includegraphics[scale=0.42]{bc1S0_mJ0_s+1sbar+1_mom_style1} \hspace{1cm}
\includegraphics[scale=0.42]{bc1S0_mJ0_s+1sbar+1_mom_style1_1}
$$\text{b)}\ \psi_{\downarrow\downarrow}(k_x, k_y = 0,x)=\psi^*_{\uparrow\uparrow}(k_x, k_y = 0,x). \text{\bf{ Left}}: (m+l)=\text{Even}; \ \text{\bf{Right}}: (m+l)=\text{Odd}. $$
\caption{\label{fig3}LFWFs of the ground state $B_c$ shown as plots of their magnitudes versus $x$ and $k_x$ at $k_y = 0$. The spin alignment in a) is dominant and reminiscent of non-relativistic behavior, while b) is a purely relativistic component.
}
\end{figure}
The LFWFs of $B_c$ do not have symmetry with respect to $x= \frac{1}{2}$ in the longitudinal direction, another feature distinguishing $B_c$ from heavy quarkonium. The wave function peaks in $x$ near the bottom quark mass fraction, i.e. $x = m_b/(m_b+m_{\bar{c}}) \approx 0.75$, as expected from the major role of the kinetic energy terms in the Hamiltonian. This asymmetry will have interesting consequences for other observables as discussed
in the following subsections.
\subsection{Decay Constants}
Meson decay constants, $f_h$, are hadronic properties defined from the matrix elements of the local currents that annihilate the meson:
\begin{equation}
\begin{gathered}
\matrixel{0}{\bar{c}\gamma^\mu\gamma_5 b }{P(p)}= {\mathrm{i}} p^\mu f_P,\\
\matrixel{0}{\bar{c}\gamma^\mu b}{V(p,\lambda)}= e^\mu_\lambda M_Vf_V,
\end{gathered}
\end{equation}
for pseudoscalar ($P$) and vector ($V$) states, respectively. Here $p$ is the momentum of the meson, and $e^\mu_\lambda$ is the polarization vector:
\begin{equation}
e^\mu_\lambda(k) = \big(e_\lambda^-(k), e_\lambda^+(k), \vec{e}_{\bot\lambda}(k)\big) \triangleq
\left \{
\begin{aligned}
&\Big(\frac{\vec{k}^2_\bot - M_V^2}{M_V k^+} , \frac{k^+}{M_V} , \frac{\vec{k }_\bot}{M_V}\Big),\ \lambda=0\\
&\Big(\frac{2\vec{\epsilon} _{\bot \lambda} \cdot \vec{k }_\bot}{k^+} ,0 , \vec{\epsilon}_{\bot\lambda}\Big), \quad \lambda=\pm 1\\
\end{aligned}
\right. ,
\end{equation}
where $ \vec{\epsilon}_{\bot \pm} = (1,\pm {\mathrm{i}})/\sqrt{2}$, and we adopt $\lambda \equiv m_j$ as the angular momentum projection.
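These polarization vectors satisfy the usual light-front identities $e_\lambda(k)\cdot e^*_\lambda(k) = -1$ and $e_\lambda(k)\cdot k = 0$ for an on-shell meson, with the light-front product $a\cdot b = \tfrac{1}{2}(a^+b^- + a^-b^+) - \vec{a}_\bot\cdot\vec{b}_\bot$ and $k^- = (\vec{k}^2_\bot + M_V^2)/k^+$. A numerical sketch (the sample kinematics are arbitrary):

```python
import math

M, kplus, kx, ky = 6.33, 2.7, 0.4, -0.9    # arbitrary sample kinematics (GeV)
kperp2 = kx * kx + ky * ky
kminus = (kperp2 + M * M) / kplus          # on-shell light-front energy

def dot(a, b):
    """Light-front product for 4-vectors stored as (a^-, a^+, ax, ay)."""
    return 0.5 * (a[1] * b[0] + a[0] * b[1]) - (a[2] * b[2] + a[3] * b[3])

k = (kminus, kplus, kx, ky)

# longitudinal polarization, lambda = 0
e0 = ((kperp2 - M * M) / (M * kplus), kplus / M, kx / M, ky / M)

# transverse polarization, lambda = +1, with eps_perp = (1, i)/sqrt(2)
epsx, epsy = 1 / math.sqrt(2), 1j / math.sqrt(2)
ep = (2 * (epsx * kx + epsy * ky) / kplus, 0.0, epsx, epsy)
ep_conj = tuple(c.conjugate() if isinstance(c, complex) else c for c in ep)
```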
The decay constant can be computed in the light-front representation in terms of LFWFs with different polarizations and corresponding current components.
In this work, we choose the ``good current'' ($\mu = +$) and the longitudinal polarization $(\lambda = 0)$ for the calculations. For $J=0$ states, the $+$ and $\bot$ currents lead to identical results; for $J=1$ states, it has been shown that $\lambda=0$ and $\lambda=1$ provide comparable results for S-waves \cite{PhysRevD.98.034024}.
%
This choice leads to the decay constant as:
\begin{equation}
\frac{f_{P,V}}{2\sqrt{2 N_c}}= \int_0^1 \frac{{\mathrm{d}} x}{2\sqrt{x(1-x)}} \int \frac{{\mathrm{d}} ^2 k_\bot}{(2\pi)^3} \psi^{(\lambda=0)}_{\uparrow\downarrow \mp \downarrow\uparrow} (\vec{k}_\bot,x),
\end{equation}
where the ``minus'' and ``plus'' signs correspond to pseudoscalar and vector states, respectively.
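To make the structure of this integral concrete, consider a toy model in which the wave function is a single basis state, $\psi = \phi_{00}\chi_0$; the transverse integral is then analytic, $\int {\mathrm{d}}^2 k_\bot\, \phi_{00}\big(\vec{k}_\bot/\sqrt{x(1-x)}\big) = x(1-x)\,4\pi^{3/2} b$, and only the longitudinal integral is done numerically. This illustrates the bookkeeping only, not our actual eigenvector, so the resulting number is not the prediction quoted in TABLE \ref{tb2}:

```python
import math
from scipy.special import gammaln
from scipy.integrate import quad

N_c = 3
kappa, m_c, m_b = 1.196, 1.603, 4.902   # GeV
b = kappa
alpha = 2 * m_c * (m_b + m_c) / kappa**2
beta = 2 * m_b * (m_b + m_c) / kappa**2

def chi0(x):
    """Lowest longitudinal basis function (l = 0, so the Jacobi polynomial is 1)."""
    lognorm = 0.5 * (gammaln(alpha + beta + 1) - gammaln(alpha + 1) - gammaln(beta + 1))
    return math.sqrt(4 * math.pi * (alpha + beta + 1)) * math.exp(lognorm) \
        * x**(beta / 2) * (1 - x)**(alpha / 2)

def integrand(x):
    # dx / (2 sqrt(x(1-x))) times the analytic d^2k integral over (2 pi)^3
    return (1 / (2 * math.sqrt(x * (1 - x)))) \
        * x * (1 - x) * 4 * math.pi**1.5 * b / (2 * math.pi)**3 * chi0(x)

f_toy, _ = quad(integrand, 0, 1)
f_toy *= 2 * math.sqrt(2 * N_c)   # GeV; of order 10^2 MeV for this toy input
```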
Here, the calculations have been done with $N_\text{max} = 32$, corresponding to $\Lambda_\text{UV}\triangleq \kappa \sqrt{N_\text{max}} \approx m_b+m_{\bar{c}}$, where $\Lambda_\text{UV}$ is the ultraviolet regulator. This choice balances the need for better basis resolution against the need for a lower UV scale, owing to the omitted radiative corrections.
An early effort using QCD sum rules provided $300$ MeV as an estimate for $f_{B_c}$ and $500$ MeV\footnote{T. Aliev, private communication.} for $f_{B_c^*}$ \cite{Aliev1992}. We present a survey of recent work in TABLE \ref{tb2} along with results from Lattice \cite{PhysRevD.96.054501,PhysRevD.91.114509} and other approaches (see Refs. \cite{Baker2014,Wang2013,PhysRevD.80.054016,PhysRevD.97.054014,PhysRevD.81.034010,PhysRevD.59.094001}) for comparison. Lattice results are systematically smaller than the other methods by about $20\%$, as also discussed in other references (e.g. \cite{BENBRIK2009172,Baker2014}).
In Fig. \ref{fig2}, we compare the vector decay constants of $B_c$ and heavy quarkonia \cite{PhysRevD.96.016022}. A clear trend is that the decay constants within each meson system decrease with increasing radial quantum number.
This trend seems reasonable, since increasing radial quantum numbers correspond to weaker binding and a larger spread in the radial probability distributions.
In addition, we note that the vector decay constants increase with the mass of the system for corresponding states, e.g. $J/\Psi < B_c (1^3S_1 )< \Upsilon$. This trend correlates with the decreasing size of the system as the mass increases, which is discussed further in the next section.
\begin{table}
\centering
\begin{tabular}{ccccccc}
\hline\hline
Constant (MeV) & \hspace{0.1cm }this work \hspace{0.1cm }& \hspace{0.1cm } Lattice \cite{PhysRevD.96.054501,PhysRevD.91.114509} \hspace{0.1cm } & \hspace{0.1cm } QCD sum rules \cite{Baker2014,Wang2013} \hspace{0.1cm } & \hspace{0.1cm } LFQM \cite{PhysRevD.80.054016} \hspace{0.1cm }& CCQM \cite{PhysRevD.97.054014} & \hspace{0.1cm } BSE \cite{PhysRevD.59.094001} \\
\hline
$f_{B_c}$ &$523(62)$ & $427(6)$ &$528(19)$ & $551$ &$489.3$ &$578$ \\
$f_{B^*_c}$ &$474(42)$ &$422(13)$ & $384(32)$ & $508$ & & \\
\hline\hline
\end{tabular}
\caption{Pseudoscalar and vector decay constants of the ground state $B_c$ and its vector partner $B^*_c$. The uncertainties of this work indicate the sensitivity to basis truncation, which is taken to be $\Delta f_{b\bar{c}} =2 \abs{f_{b\bar{c}}(N_\text{max}=32) - f_{b\bar{c}}(N_\text{max}=24)}$.}
\label{tb2}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.5]{decayconst_compare}
\caption{\label{fig2}The decay constants for vector states of charmonium, $B_c$ and bottomonium. Results of charmonium and bottomonium are from previous work \cite{PhysRevD.96.016022} and PDG \cite{1674-1137-40-10-100001}.}
\end{figure}
\subsection{Charge and Longitudinal Momentum Densities}
The transverse density offers insight into the hadron structure. In this work, we study the charge density in the transverse impact parameter space of $B_c$ mesons. By definition, it is the two-dimensional Fourier transform of the Dirac form factor \cite{burkardt2003impact,PhysRevLett.99.112001},
\begin{equation}
\rho_c(\vec{b}_\bot)= \int \frac{{\mathrm{d}} ^2 \Delta_\bot}{(2\pi)^2} e^{{\mathrm{i}} \vec{\Delta}_\bot \cdot \vec{b}_\bot} F_1 (q^2 = -\vec{\Delta}^2_\bot),
\end{equation}
where $\vec{\Delta}_\bot$ is the transverse momentum transfer, and $\vec{b}_\bot$ can be interpreted as the conjugated position of $\vec{\Delta}_\bot$ at which the current probes the charge density. Analogous to the charge distribution, we can perform the two-dimensional Fourier transform of the gravitational form factor, which can be interpreted as the longitudinal momentum density in the transverse plane \cite{PhysRevD.78.071502}.
In the LFWF representation of the two-body ($b\bar{c}$) approximation, they can be expressed as,
\begin{equation}
\rho_c(\vec{b}_\bot) = \frac{1}{3} \sum_{s,\bar{s}} \int_0^1 \frac{{\mathrm{d}} x}{4\pi (1-x)^2} \abs{\widetilde{\psi}_{s\bar{s}} \big( \frac{-\vec{b}_\bot}{1-x},x \big)}^2 +
\frac{2}{3} \sum_{s,\bar{s}}\int_0^1 \frac{{\mathrm{d}} x}{4\pi x^2} \abs{\widetilde{\psi}_{s\bar{s}} \big( \frac{\vec{b}_\bot}{x},x \big)}^2 ,
\end{equation}
\begin{equation}
\rho_g(\vec{b}_\bot) = \sum_{s,\bar{s}} \int_0^1 \frac{{\mathrm{d}} x}{4\pi}\frac{x}{(1-x)^2} \abs{\widetilde{\psi}_{s\bar{s}} \big( \frac{-\vec{b}_\bot}{1-x},x \big)}^2 +
\sum_{s,\bar{s}}\int_0^1 \frac{{\mathrm{d}} x}{4\pi} \frac{1-x}{x^2} \abs{\widetilde{\psi}_{s\bar{s}} \big( \frac{\vec{b}_\bot}{x},x \big)}^2 .
\end{equation}
Each density is normalized to unity, corresponding to the unit charge and the mass of the meson, respectively. The momentum density is more concentrated in the center than the charge density; the difference is a relativistic effect \cite{PhysRevD.96.016022}. This pattern can be observed in Fig. \ref{fig6}, where we present the results for the pseudoscalar and scalar states. We compare the r.m.s.\ radii of $\rho_g(\vec{b}_\bot)$ among heavy meson systems, which are $0.84$ GeV$^{-1}$, $0.58$ GeV$^{-1}$, and $0.57$ GeV$^{-1}$ for $J/\Psi$, $B_c (1^3S_1 )$, and $\Upsilon$, respectively. This result is consistent with the trend of the decay constants: the heavier system has a smaller radius and therefore decays more readily.
\begin{figure}
\centering
\hspace{-2cm}
\includegraphics[scale=0.5]{comb1} \includegraphics[scale=0.5]{comb2} \includegraphics[scale=0.5]{comb3}
\hspace{-2cm}
\caption{\label{fig6}The charge density and longitudinal momentum density on the transverse plane of the pseudoscalar and scalar states.}
\end{figure}
\subsection{Distribution Amplitude}
Distribution amplitudes (DAs) are defined from light-like vacuum-to-meson matrix elements and can be written in terms of LFWFs as
\begin{equation}
\frac{f_{P,V}}{2\sqrt{2N_c}}\phi_{P,V}(x) = \frac{1}{\sqrt{x(1-x)}} \int\frac{{\mathrm{d}} ^2 \vec{k}_\bot}{2(2\pi)^3} \psi^{\lambda=0}_{\uparrow\downarrow\mp \downarrow\uparrow} (\vec{k}_\bot,x)
\end{equation}
with $f_{P,V}$ the decay constants for pseudoscalars and vectors, respectively. Note that the DAs defined here are normalized to unity. We compare the ground-state pseudoscalar DAs of charmonium, the $B_c$ meson, and bottomonium in Fig. \ref{fig8}. The width of the DA decreases, while the peak height increases, with the mass of the system, approaching a $\delta$-function in the non-relativistic limit. For charmonium and bottomonium, the peaks are at $x=1/2$ due to the equal masses of the constituent quark and anti-quark. For $B_c$, by contrast, the peak is close to the constituent quark mass fraction, i.e. $x= m_b/(m_b+m_{\bar{c}}) \approx 0.75$, consistent with the distribution of the LFWFs in the previous section. We also present the ground-state pseudoscalar and vector DAs together with their radial excitations. Note that the pseudoscalar and vector DAs have similar patterns but are not identical; this is due to the different configuration mixings controlled by the one-gluon exchange interaction in this model. The radially excited states show an important distinction: dips appear with the radial excitations.
This pattern also appears in charmonium and bottomonium in BLFQ and in other methods \cite{PhysRevD.77.034026,Hwang2009}. Wiggles near both extremes of $x$ arise from the limited range of basis spaces employed.
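The quoted peak positions are easy to check numerically. The sketch below uses the asymptotic DA $\phi(x)=6x(1-x)$ only as a stand-in shape for the symmetric (equal-mass) case, together with illustrative constituent masses $m_b \approx 4.9$ GeV and $m_{\bar c} \approx 1.6$ GeV (assumed values for illustration; they are not the fitted parameters of this work):

```python
# Sketch: normalization and peak position of a model DA.
# phi_asym(x) = 6x(1-x) is the asymptotic DA, used here only as a
# stand-in for the symmetric (equal-mass) quarkonium case.
def phi_asym(x):
    return 6.0 * x * (1.0 - x)

n = 100000
h = 1.0 / n
norm = sum(phi_asym((i + 0.5) * h) for i in range(n)) * h    # midpoint rule, ~1
peak = max(range(1, n), key=lambda i: phi_asym(i * h)) * h   # ~0.5

# For B_c, the peak sits near the heavy-quark momentum fraction:
m_b, m_c = 4.9, 1.6            # illustrative constituent masses (GeV)
x_peak_Bc = m_b / (m_b + m_c)  # ~0.75
print(norm, peak, x_peak_Bc)
```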
\begin{figure}
\centering
\includegraphics[scale=0.5]{DA_grd_all}
\hspace{1cm}
\includegraphics[scale=0.5]{DA_Bc}
\caption{\label{fig8}The distribution amplitudes (DAs) of the ground states of charmonium, $B_c$, and bottomonium (left panel). DAs of the pseudoscalar and vector $B_c$'s and their radial excitations (right panel).
Vertical dashed lines indicate the constituent quark mass fraction of the corresponding system. Specifically, the peaks of the quarkonium DAs occur at $x=m_c/(m_c+m_{\bar{c}})=m_b/(m_b+m_{\bar{b}})=1/2$, and the peak of $B_c$ is close to $x = m_b/(m_b+m_{\bar{c}})\approx0.75$.}
\end{figure}
\section{\label{sec4}Summary}
In this work, we investigated the unequal-mass meson system $B_c \ (b\bar{c})$ with the BLFQ approach. All model parameters are fixed by reference to charmonium and bottomonium systems. We found reasonable agreement with existing experiments and with other theoretical calculations.
We carried out the calculations with the basis limit $N_\text{max} =L_\text{max} = 32$, which corresponds to the specific UV (IR) regulator $b\sqrt{N_\text{max}} \approx 6.77 \text{ GeV}\ (b/\sqrt{N_\text{max}} \simeq 0.21 \text{ GeV})$. We first predicted the $B_c$ mass spectrum
and presented the LFWFs of some selected states. These results are obtained from diagonalizing a light-front Hamiltonian based on light-front holography. We discussed a significant difference between the LFWFs of $B_c$ and heavy quarkonium: only the unequal-mass system $B_c$ allows both positive and negative charge parities in the wave functions, due to the absence of charge conjugation symmetry.
We calculated other observables with LFWFs such as the decay constants of pseudoscalar and vector states.
As additional applications and tests of our model, we calculated transverse charge and momentum densities.
Our success in applying the heavy-quarkonium model to the unequal-mass heavy meson system supports further extensions to lighter systems. One anticipates greater challenges for the model, which are likely to require the inclusion of a dynamical gluon in a higher Fock sector. This naturally incorporates self-energy processes and raises challenging issues of renormalization \cite{PhysRevD.77.085028,PhysRevD.92.065005}. Additional significant physics, such as chiral symmetry breaking, should also be included.
\section{Acknowledgments}
We wish to thank Shaoyang Jia, Meijian Li, Wenyang Qian and Anji Yu for valuable discussions. We also thank Dr. Zhigang Wang for clarifying the result from the QCD sum rule \cite{Wang2013} that we quote in TABLE \ref{tb2}.
P. Maris thanks the Fundação de Amparo \`{a} Pesquisa do Estado de São Paulo (FAPESP) for support under grant No. 2017/19371-0.
This work was supported in part by the Department of Energy under Grant No. DE-FG02- 87ER40371, DE-SC0018223 (SciDAC-4/NUCLEI), DE-SC0015376 (DOE Topical Collaboration in Nuclear Theory for Double-Beta Decay and Fundamental Symmetries) and DE-FG02-04ER41302. Computational resources were provided by the National Energy Research Supercomputer Center (NERSC), which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
All Wye Valley Garage Mechanics are fully qualified to work on Land Rover and other 4x4 vehicles with many years of experience.
The price you see is the price you pay, Book with Wye Valley and you can save up to 60% off Main Dealer Prices.
We guarantee our work and strive to give honest and accurate information to all our customers.
We keep your vehicle working for you, whether it be a family vehicle or a workhorse.
"Used Wye Valley Garage For Over 5 Years."
"The knowledge is second to none in. Saved me thousands of pounds over the years."
"Best Land Rover Specialist for miles."
Coming to IMAX & ABC!
The first teaser for "Marvel's Inhumans" has arrived with rumblings of treason in the city of Attilan!
Debuting in IMAX theaters September 1 before the entire series comes to ABC, "Marvel's Inhumans" tells the story of Black Bolt and the Inhuman royal family like you've never seen before.
Check out the first teaser below, and follow @TheInhumans on Twitter and Inhumans on Facebook for the latest!
Wastepaper serves as feedstock for paper and cardboard producing factories. Research in wastepaper processing is currently focused on designing specialized processing equipment and developing chemicals and auxiliaries securing the required level of wastepaper preparation.
When assessing the total paper and cardboard processing market volume, it is necessary to take into consideration the fact that the share of paper and cardboard constitutes 21.6% of its overall volume, and 23,408,400 tons of solid waste were exported from Russian cities in 2009. According to Abercade estimates, in 2009 the volume of paper and cardboard waste constituted 2,785,600 tons.
According to experts, the share of processed paper wastes is estimated at 20%, therefore the volume of paper and cardboard processed in 2009 constituted 557,100 tons.
As compared to 2008, the volume of export increased by 19.4% in physical terms and decreased by 28.1% in value terms.
About 28.7% of wastepaper export came from Rostovpererabotka Closed Joint-Stock Company. The Company exports 5B wastepaper to the Rubezhansky Cardboard and Box Factory in Ukraine, and the volume of exported wastepaper increased as compared to 2008.
Vtorexim-Sever Ltd., with its 20.7%, rates second in the volume of export. The enterprise exports 5B wastepaper to the Rubezhansky Cardboard and Box Factory and the Kiev Cardboard and Paper Factory in Ukraine. Voronezhvtorma Ltd., with its 11.2%, rates third in the total volume of wastepaper export. It exports 5B and 7B wastepaper to the Kiev Cardboard and Paper Factory. In 2009, the aggregate share of the three leading enterprises constituted 60.6% of the total volume of export.
In 2009, the most popular brand of the exported wastepaper was 5B. It constituted 76.8% of exports.
7B wastepaper, with its 7.4%, rated second in the 2009 volume of export.
In 2009, the wastepaper import into Russia was insignificant, amounting to only 1,081,900 tons. It decreased by 42.5% as compared to 2008.
In terms of money, in 2009 the volume of import dropped by $184,600, or 46.9%.
In 2009, the Russian market shrank by 8.2% in physical terms as compared to the previous year. Such a reduction was primarily connected with an increase in export deliveries of Russian wastepaper while its domestic collection decreased.
The structure of wastepaper consumption by Russian manufacturers of cardboard, roofing materials and other products is dominated by 5B wastepaper, which constitutes 58% of the total consumption volume. A substantial share in the wastepaper consumption structure belongs to 7B (18%) and 8B (7%) wastepaper brands. The volume of 5B, 6B, 7B and 8B wastepaper consumption amounts to approximately 92% of the total wastepaper consumption.
The largest wastepaper processing enterprises in Russia are the Stora Enso Packaging Ltd, Naberezhno-Chelninsky Cardboard-and-Paper Mill Closed Joint-Stock Company, Aleksinskaya Cardboard Factory, Stupinsky Cardboard and Printing Mill Ltd., and Kartontara Open Joint-Stock Company, each processing over 100,000 tons of wastepaper annually. The Balakhninsky Pulp-and-Cardboard Mill, Permsky Pulp-and-Paper Mill, Svetogorsky Pulp-and-Paper Mill, Ryazansky Cardboard-and-Ruberoid Plant, Karavayevo Open Joint-Stock Company, Altaikrovlya Open Joint-Stock Company, and Bryanskaya Paper Factory Production Amalgamation can process from 20,000 to 50,000 tons of wastepaper annually. The other manufacturers have the annual capacity of 20,000 tons and less.
For many of us, our daily fight with negativity comes from a history of disappointment(s) and false promises. We have been let down, scammed, swindled, hurt, misinformed, or misled in our lifetime – or someone close to us has been.
Negative experiences tend to stick with us. And if we go through a number of them, especially in a condensed amount of time, it can affect our moods and our ability to think positively. We become more hyperaware of the bad things happening around us.
When this happens, our mind becomes more closed, we become more defensive and walls go up, even if that is not our intention. Many times we don't even realize it. All of this leads to a negative mindset.
Have you ever bought a new car and then start seeing it all over the road? It's not necessarily that "everyone else" bought the same car; you are just more aware of it.
This happens with negative experiences too. Sometimes it happens when we are just plain feeling down for one reason or another. Other times, it happens with major life changes and experiences. Example: the loss of a job. You don't know what it is like until you have been through it and you think you are alone. But when you do lose your job, you find there is this "club" of others who have been through the same sort of loss. Some of those people have been able to make it out of the rut and others are still on the search for a job. In the latter, it can sometimes appear that the world keeps throwing us signs that it's an uphill battle. We look around and see positive for everyone else making us feel alone and defeated. It can be a quick downward spiral – if we let it.
You might be thinking, who welcomes negativity? It's oftentimes unintentional. Good news travels fast, but somehow, bad news travels faster. When we are more aware of the negative, it is what our minds gravitate toward, whether it is because we are seeking someone who has it worse than us, or, we are seeking someone to relate to. Do you ever watch the news and think there is more crime, abuse, and scandal than there was 10 years ago? It might be true. It might also be that we hear about it more often (or a combination of the two). And it might also be because we have recently been through a tough situation and we are allowing ourselves to gravitate to other tough situations.
Technology is a part of our culture – television, news feeds, internet – we are constantly being fed information. Think about the Internet for a moment; it has given us the ability to look things up as soon as we wonder them. However, there is a huge chance what we find is less than accurate. A large portion of what we see, read, and hear is laden with opinion and not fact.
Technology is a wonderful thing, but it can also empower us in the wrong kinds of ways, and our voices can become gasoline for the wrong information. It truly is hard to believe what is "real" anymore.
So, how do you look at life and opportunity with possibility instead of an armor of negativity and doubt – even with a history of discouragement? Even when the world and your conscience both seem to be shouting at you to disbelieve?
My advice is to open your mind and feed your brain. Educate yourself. It takes a little more effort these days because we are so connected that research doesn't seem necessary. But it is necessary. Good does exist, and your belief in it will multiply over time. Your search for the good can become viral. Be a part of the voice that spreads good news and inspiration and you will create a positive butterfly effect.
There is a big difference between knowledge and wisdom.
What was good yesterday might not be good today; and on the contrary, what was bad yesterday is not necessarily bad today. And beyond the laws, who determines what is right or wrong, good or bad anyway?
Instead of believing you already know everything, be willing to do some research. Instead of thinking everything is a scam, or fearing disappointment, learn to ask questions. Consider the worst case scenario while keeping an open mind (it's not that bad, is it?). Also, consider the source – where or who is the information coming from? Are there others involved? If so, what is their credibility and reputation? Consider finding multiple sources and continue the search for fact (and not feeling) in all scenarios.
Be the person who truly KNOWS what you are saying NO to (and knows what you are saying YES to). Doing so will help to protect you from the false promises and the future disappointments and will keep you open to opportunities.
Just as not everyone has good intentions, not everyone wants to take you for a ride or recruit you into a scam. Open-minded does not mean naive. This is why educating yourself to become more wise is so important. You will find in doing this, you will not be defensive, but instead, smart and open to possibility. Fueling your brain with facts and being open-minded will provide space for the proper positive things to come your way.
History might repeat itself – but it doesn't mean it is going to repeat on you. Remember, what happened once won't necessarily happen again – and it likely won't happen to you. That's not to say you will never face disappointment in the future, but you can make a deliberate effort to fuel your brain and then find and attract the good.
\section{Introduction}
Low-density parity-check (LDPC) codes are linear codes that were introduced
by Gallager in 1962 \cite{my_ref:r1} and re-discovered by MacKay
in 1995 \cite{mackay1995good}. The ensemble of LDPC codes that we consider
(e.g. see \cite{my_ref:lubyca1} and \cite{my_ref:urbanke}) is defined
by the edge degree distribution (d.d.) functions $\lambda(x)=\sum_{k\geq2}\lambda_{k}x^{k-1}$
and $\rho(x)=\sum_{k\geq2}\rho_{k}x^{k-1}$. The standard encoding
and decoding algorithms are based on the bit-level operations. However,
when applied to the transmission of data packets, it is natural to
perform the encoding and decoding algorithm at the packet level rather
than the bit level. For example, if we are going to transmit 32 bits
as a packet, then we can use error-correcting codes over the, rather
large, alphabet with $2^{32}$ elements.
Let $Y$ be the r.v. which is the output of the $q$-SC given the transmitted r.v. $X$. Then, the channel transition probability can be described as \[
p(Y=y|X=x)=\left\{ \begin{array}{ll}
1-p & \mbox{if $x=y$}\\
p/(q-1) & \mbox{if $x\neq y$}\end{array}\right.\]
where $x$ (resp. $y$) is the transmitted (resp. received) symbol
and $x,y\in GF(q)$. The capacity of the $q$-SC is $1+(1-p)\log_{q}(1-p)+p\log_{q}p-p\log_{q}(q-1)$
which is approximately equal to $1-p$ symbols per channel use for large $q$. This implies that
the number of symbols which can be reliably transmitted per channel use of the $q$-SC with large $q$ is approximately equal
to that of the BEC with erasure probability $p$. Moreover,
the behavior of the $q$-SC with large $q$ is similar to the BEC
in the sense that: i) incorrectly received symbols from the $q$-SC
provide almost no information about the transmitted symbol and ii)
error detection (e.g., a CRC) can be added to each symbol with negligible
overhead \cite{my_ref:ShokITW04}.
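A minimal sketch of this channel and its capacity (modeling GF($q$) symbols simply as integers mod $q$, which suffices for sampling the transition probability) is:

```python
import math
import random

def qsc_capacity(p, q):
    """Capacity of the q-ary symmetric channel in symbols per channel use:
    1 + (1-p) log_q(1-p) + p log_q(p) - p log_q(q-1)."""
    log_q = lambda t: math.log(t) / math.log(q)
    return 1 + (1 - p) * log_q(1 - p) + p * log_q(p) - p * log_q(q - 1)

def qsc(x, p, q, rng):
    """Pass symbol x in {0,...,q-1} through the q-SC: with probability
    1-p return x unchanged, otherwise a uniformly random *other* symbol."""
    if rng.random() < p:
        return (x + rng.randrange(1, q)) % q  # one of the q-1 wrong values
    return x

q, p = 2**32, 0.1
cap = qsc_capacity(p, q)
print(cap)          # close to (and slightly below) the BEC value 1 - p = 0.9

rng = random.Random(0)
n = 100000
err_rate = sum(qsc(7, p, q, rng) != 7 for _ in range(n)) / n
print(err_rate)     # empirically ~0.1
```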
Binary LDPC codes for the $q$-SC with moderate $q$ are proposed and optimized
based on EXIT charts in \cite{my_ref:isit2008weidmann} and \cite{my_ref:LW_turbo08}.
It is known that the complexity of the FFT-based belief-propagation
algorithm for $q$-ary LDPC codes scales like $O(q\log q)$. Even
for moderate sizes of $q$, such as $q=256$, this renders such algorithms
ineffective in practice. However, when $q$ is large, an interesting
effect can be used to facilitate decoding: if a symbol is received
in error, then it is essentially a randomly chosen element of the
alphabet, and the parity-check equations involving this symbol are
very unlikely to be satisfied.
Based on this idea, Luby and Mitzenmacher develop an
elegant algorithm for decoding LDPC codes on the $q$-SC for large
$q$ \cite{my_ref:luby_and_mitz}. However, their paper did not present simulation results and
left capacity-achieving ensembles as an interesting open problem.
Metzner presented similar ideas earlier in \cite{my_ref:Metz} and
\cite{my_ref:Metz2}, but the focus and analysis are quite different.
Davey and MacKay also develop and analyze a symbol-level message-passing
decoder over small finite fields in \cite{my_ref:Davey}. A number of
approaches to the $q$-SC (for large $q$) based on interleaved Reed-Solomon
codes are also possible \cite{my_ref:ShokITW04} \cite{my_ref:BKY03}.
In \cite{my_ref:isit2004wangshok}, Shokrollahi and Wang discuss two
ways of approaching capacity. The first uses a two-stage approach
where the first stage uses a Tornado code and verification decoding.
The second is, in fact, equivalent to one of the decoders we discuss
in this paper.%
\footnote{The description of the second method in \cite{my_ref:isit2004wangshok}
is very brief and we believe its capacity-achieving nature deserves
further attention.%
} When we discovered this, the authors were kind enough to send us
an extended abstract \cite{my_ref:Shokwangpersonal} which contains
more details. Still, the authors did not consider the theoretical
performance with a maximum list size constraint, the actual performance
of the decoder via simulation, or false verification (FV) due to cycles
in the decoding graph. In this paper, we describe the algorithm in
detail and consider those details.
Inspired by \cite{my_ref:luby_and_mitz}, we introduce
list-message-passing (LMP) decoding with verification for LDPC codes
on the $q$-SC. Instead of passing a single value between symbol and
check nodes, we pass a list of candidates to improve the decoding
threshold. This modification also increases the probability of FV.
Therefore, we analyze the causes of FV and discuss techniques to mitigate
it. It is worth noting that the LMP decoder we consider is somewhat
different than the list extension suggested in \cite{my_ref:luby_and_mitz}.
Their approach uses a peeling-style decoder based on verification
rather than erasures. Also, the algorithms in \cite{my_ref:luby_and_mitz}
are proposed in a node-based (NB) style but analyzed using message-based
(MB) decoders. It is implicitly assumed that the two approaches are
equivalent. In fact, this is not always true. In this paper, we consider
the differences between NB and MB decoders and derive an asymptotic
analysis for NB decoders.
The paper is organized as follows. In Section \ref{sec:descp_ana}, we describe the
LMP algorithm for bounded and unbounded list size and use density
evolution (DE) \cite{my_ref:DE} to analyze its performance. The difference
between NB and MB decoders for the first (LM1) and second (LM2) algorithms in \cite{my_ref:luby_and_mitz} is discussed, and the NB decoder analysis is derived, in Section \ref{sec:nb_ana}. The error floor of the LMP algorithms is considered in Section \ref{sec:err_flr}.
In Section \ref{sec:cmp_opt}, we use differential
evolution to optimize code ensembles. We describe the simulation of
these codes and compare the results with the theoretical thresholds.
We also compare our results with previously published results in this
area \cite{my_ref:luby_and_mitz} and \cite{my_ref:isit2004wangshok}.
In Section \ref{sec:sim}, simulation results are shown. Applications of the
LMP algorithm are discussed and conclusions are given in Section \ref{sec:con}.
\section{Description and Analysis}
\label{sec:descp_ana}
\subsection{Description of the Decoding Algorithm}
\label{sec:decoder} The LMP decoder we discuss is designed mainly
for the $q$-SC and is based on local decoding operations applied
to lists of messages containing probable codeword symbols. The list messages passed in the
graph have three types: \emph{verified} (V), \emph{unverified} (U)
and \emph{erasure} (E). Every V-message has a symbol value associated
with it. Every U-message has a list of symbols associated with
it. Following \cite{my_ref:luby_and_mitz}, we mark messages
as verified when they are very likely to be correct. In particular,
we will find that the probability of FV approaches zero as $q$ goes
to infinity.
The LMP decoder works by passing list-messages around the decoding
graph. Instead of passing a single code symbol (e.g., Gallager A/B
algorithm \cite{my_ref:r1}) or a probability distribution over all
possible code symbols (e.g., \cite{my_ref:Davey}), we pass a list
of values that are more likely to be correct than the other messages.
At a check node, the output list contains all symbols which could
satisfy the check constraint for the given input lists. At the check
node, the output message will be verified if and only if all the incoming
messages are verified. At a node of degree $d$, the associativity
and commutativity of the node-processing operation allow it to be
decomposed into $(d-1)$ basic%
\footnote{Here we use {}``basic'' to emphasize that it maps two list-messages to a single list message.%
} operations (e.g., $a\!+\! b\!+\! c\!+\! d\!=\!(a\!+\! b)\!+\!(c\!+\! d)$).
In such a scheme, the computational complexity of each basic operation
is proportional to $s^{2}$ at the check node and $s\ln s$ at the
variable node%
\footnote{The basic operation at the variable node can be done by $s$ binary
searches of length $s$ and the complexity of a binary search of length
$s$ is $O(\ln s)$%
}, where $s$ is the list size of the input list. The list size grows
rapidly as the number of iterations increases. In order to make the
algorithm practical, we have to truncate the list to keep the list
size within some maximum value, denoted $S_{max}$. In our
analysis, we also find that, after the number of iterations exceeds
half the girth of the decoding graph, the probability of FV increases
very rapidly. We analyze the causes of FV and classify the FVs into
two types. We find that the codes described in \cite{my_ref:luby_and_mitz}
and \cite{my_ref:isit2004wangshok} both suffer from type-II FV. In
Section \ref{sec:err_flr}, we analyze these FVs and propose a scheme that reduces the
probability of FV.
The message-passing decoding algorithm using list messages (or LMP)
applies the following simple rules to calculate the output messages
for a check node:
\begin{itemize}
\item If all the input messages are verified, then the output becomes verified
with the value which makes all the incoming messages sum to zero.
\item If any input message is an erasure, then the output message becomes
an erasure.
\item If there is no erasure on the input lists, then the output list contains
all symbols which could satisfy the check constraint for the given
input lists.
\item If the output list size is larger than $S_{max}$, then the output
message is an erasure.
\end{itemize}
It applies the following rules to calculate the output messages of
a variable node:
\begin{itemize}
\item If all the input messages are erasures or there are multiple verified
messages which disagree, then the output message is the channel received
value.
\item If any of the input messages are verified (and there is no disagreement)
or a symbol appears more than once, then the output message becomes
verified with the same value as the verified input message or the
symbol which appears more than once.
\item If there is no verified message on the input lists and no symbol appears
more than once, then the output list is the union of all input lists.
\item If the output message has list size larger than $S_{max}$, then the
output message is the received value from the channel.
\end{itemize}
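As an illustration of the basic check-node operation (the $s^{2}$ step referred to above), the sketch below combines two unverified input lists into the list of symbols that could satisfy a zero-sum check. GF($q$) arithmetic is modeled as integers mod $q$, and the verified/erasure handling is reduced to its simplest form; both are assumptions made only to keep the example self-contained:

```python
# Sketch of one "basic" check-node operation in LMP decoding.
ERASURE = None  # erasure marker

def check_basic_op(list_a, list_b, q, s_max):
    """Combine two unverified input lists at a zero-sum check node:
    the output list holds every value -(a+b) mod q consistent with the
    check. Erasures propagate, and an over-long output becomes an
    erasure. Cost is O(|list_a| * |list_b|), i.e. s^2 per basic step."""
    if list_a is ERASURE or list_b is ERASURE:
        return ERASURE
    out = {(-(a + b)) % q for a in list_a for b in list_b}
    return ERASURE if len(out) > s_max else sorted(out)

q = 2**16
# two candidate lists arriving on the other edges of a degree-3 check
out = check_basic_op([5, 100], [7, 200], q, s_max=8)
print(out)                                   # four candidate symbols
print(check_basic_op(ERASURE, [1], q, 8))    # erasure in -> erasure out
```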
\vspace{-1mm}
\subsection{DE for Unbounded List Size Decoding Algorithm}
To apply DE to the LMP decoder with unbounded list sizes, denoted
LMP-$\infty$ (i.e., $S_{max}=\infty$), we consider three quantities
which evolve with the iteration number $i$. Let $x_{i}$ be the probability
that the correct symbol is not on the list passed from a variable
node to a check node. Let $y_{i}$ be the probability that the message
passed from a variable node to a check node is not verified. Let $z_{i}$
be the average list size passed from a variable node to a check node.
The same variables are {}``marked'' $(\tilde{x}_{i},\tilde{y}_{i},\tilde{z}_{i})$
to represent the same values for messages passed from the check nodes
to the variable nodes (i.e., the half-iteration value). We also assume
all the messages are independent, that is, we assume that the bipartite graph
has girth greater than twice the number of decoding iterations.
First, we consider the probability, $x_{i}$, that the correct
symbol is not on the list. For any degree-$d$ check node, the correct
message symbol will only be on the edge output list if all of the
other $d-1$ input lists contain their corresponding correct symbols.
This implies that $\tilde{x}_{i}=1-\rho(1-x_{i})$. For any degree-$d$
variable node, the correct message symbol is not on the edge
output list only if it is not on any of the other $d-1$ edge input lists.
This implies that $x_{i+1}=p\lambda(\tilde{x}_{i})$. This behavior
is very similar to erasure decoding of LDPC codes on the BEC and gives
the identical update equation \vspace{-1mm}
\begin{equation}
\label{eq1}
x_{i+1}=p\lambda\left(1-\rho(1-x_{i})\right)
\end{equation}
where $p$ is the $q$-SC error probability. Note that throughout the DE analysis, we assume that $q$ is sufficiently large. Next, we consider the
probability, $y_{i}$, that the message is not verified. For any degree-$d$
check node, an edge output message is verified only if all of the
other $d-1$ edge input messages are verified, which implies
$\tilde{y}_{i}=1-\rho(1-y_{i})$. For any degree-$d$ variable node, an
edge output message is verified if any symbol on the other $d-1$ edge
input lists is verified or occurs twice. The event that the output
message is not verified can be broken into the union of two disjoint
events: (i) the correct symbol is not on any of the input lists, and
(ii) the symbol from the channel is incorrect and the correct symbol
is on exactly one of the input lists and not verified. For a degree-$d$
variable node, this implies that \vspace{-1mm}
\begin{equation}
\Pr(\textrm{not verified})=\left(\tilde{x}_{i}\right)^{d-1}+p(d-1)\left(\tilde{y}_{i}-\tilde{x}_{i}\right)\left(\tilde{x}_{i}\right)^{d-2}.\label{eq2}\end{equation}
Summing over the d.d. gives the update equation
\begin{align}\label{eq3}
y_{i+1}= & \lambda\left(1-\rho(1-x_{i})\right)+p\left(\rho(1-x_{i})-\rho(1-y_{i})\right)\lambda'\left(1-\rho(1-x_{i})\right).\end{align}
It is important to note that (\ref{eq1}) and (\ref{eq3}) were published
first in \cite[Thm. 2]{my_ref:isit2004wangshok} (by mapping $x_{i}=p_{i}$
and $y_{i}=p_{i}+q_{i}$), but were derived independently by us.
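As an illustrative sanity check (ours, not part of the original derivation), the recursions (\ref{eq1}) and (\ref{eq3}) can be iterated directly. The sketch below assumes the (3,6)-regular ensemble, i.e., $\lambda(x)=x^{2}$ and $\rho(x)=x^{5}$, and an error probability $p=0.3$ below the threshold; both $x_{i}$ and $y_{i}$ then decay to zero:

```python
# Iterate the DE recursions for x_i (correct symbol not on list) and
# y_i (message not verified) for the (3,6)-regular ensemble:
# lambda(x) = x^2, rho(x) = x^5, so lambda'(x) = 2x.
lam = lambda x: x ** 2
lam_p = lambda x: 2 * x
rho = lambda x: x ** 5

p = 0.3      # q-SC error probability, below the threshold of this ensemble
x = p        # initial list is just the (possibly wrong) channel value
y = 1.0      # initially no message is verified
for _ in range(200):
    x_tilde = 1 - rho(1 - x)
    x_next = p * lam(x_tilde)
    y_next = lam(x_tilde) + p * (rho(1 - x) - rho(1 - y)) * lam_p(x_tilde)
    x, y = x_next, y_next

print(x, y)  # both tend to 0 when p is below threshold
```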
Finally, we consider the average list size $z_{i}$. For any degree-$d$
check node, the output list size is equal%
\footnote{It is actually upper bounded because we ignore the possibility of
collisions between incorrect entries, but the probability of this
occurring is negligible as $q$ goes to infinity.%
}~to the product of the sizes of the other $d-1$ input lists. Since
the mean of the product of i.i.d. random variables is equal to the
product of the means, this implies that $\tilde{z}_{i}=\rho(z_{i})$.
For any degree-$d$ variable node, the output list size is equal to
one%
\footnote{A single symbol is always received from the channel.%
}~plus the sum of the sizes of the other $d-1$ input lists if the
output is not verified and one otherwise. Again, the mean of the sum
of $d-1$ i.i.d. random variables is simply $d-1$ times the mean
of the distribution, so the average output list size is given by
\[
1+\left(\left(\tilde{x}_{i}\right)^{d-1}+p(d-1)\left(\tilde{y}_{i}-\tilde{x}_{i}\right)\left(\tilde{x}_{i}\right)^{d-2}\right)(d-1)\tilde{z}_{i}.\]
This gives the update equation \begin{align*}
z_{i+1}= & 1\!+\!\left[\tilde{x}_{i}\lambda'\left(\tilde{x}_{i}\right)\!+\! p\left(\tilde{y}_{i}\!-\!\tilde{x}_{i}\right)\left(\lambda'\left(\tilde{x}_{i}\right)\!+\!\tilde{x}_{i}\lambda''\left(\tilde{x}_{i}\right)\right)\right]\rho(z_{i}).\end{align*}
For the LMP decoding algorithm, the threshold of an ensemble $(\lambda(x),\rho(x))$
is defined to be \[
p^{*}\triangleq\sup\left\{ p\in(0,1]\bigg{|}p\lambda(1-\rho(1-x))<x\;\forall\; x\in(0,1]\right\} .\]
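Numerically, this threshold is easy to estimate, since $p^{*}=\inf_{x\in(0,1]}x/\lambda\left(1-\rho(1-x)\right)$ whenever the denominator is positive. The sketch below (our illustration, with an arbitrarily chosen grid resolution) evaluates this infimum on a grid for the (3,6)-regular ensemble:

```python
import numpy as np

# Estimate the LMP-infinity threshold p* of a d.d. pair (lambda, rho)
# by minimizing x / lambda(1 - rho(1 - x)) over a fine grid on (0, 1].
def threshold(lam, rho, npts=20000):
    xs = np.linspace(1e-6, 1.0, npts)
    return float(np.min(xs / lam(1 - rho(1 - xs))))

# (3,6)-regular ensemble: lambda(x) = x^2, rho(x) = x^5
p_star = threshold(lambda x: x ** 2, lambda x: x ** 5)
print(p_star)   # ~0.4294, the familiar BEC threshold of the (3,6) ensemble
```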
Next, we show that some codes can achieve channel capacity using
this decoding algorithm. \begin{theorem} \label{thm1} Let $p^{*}$
be the threshold of the d.d. pair $(\lambda(x),\rho(x))$ and assume
that the channel error rate $p$ is less than $p^{*}$. In this case,
the probability $y_{i}$ that a message is not verified in the $i$-th
decoding iteration satisfies $\lim_{i\rightarrow\infty}y_{i}\rightarrow0$.
Moreover, for any $\epsilon>0$, there exists a $q<\infty$ such that
LMP decoding of a long random $(\lambda,\rho)$ LDPC code, on a $q$-SC
with error probability $p$, results in a symbol error rate of less
than $\epsilon$. \end{theorem} \begin{IEEEproof} See Appendix A.
\end{IEEEproof} \begin{remark} Note that the convergence condition,
$p^{*}\lambda(1-\rho(1-x))<x$ for $x\in(0,1]$, is identical to the
BEC case but that $x$ has a different meaning. In the DE equation
for the $q$-SC, $x$ is the probability that the correct value is not
on the list. In the DE equation for the BEC, $x$ is the probability
that the message is an erasure. This tells us that any capacity-achieving
ensemble for the BEC is also capacity-achieving for the $q$-SC under the
LMP-$\infty$ algorithm when $q$ is large. This also gives some intuition about the
behavior of the $q$-SC for large $q$. For example, when $q$ is
large, an incorrectly received value behaves like an erasure \cite{my_ref:ShokITW04}.
\end{remark}
\begin{corollary} \label{thm:lowrate} The code with d.d. pair $\lambda(x)=x$
and $\rho(x)=(1-\epsilon)x+\epsilon x^{2}$ has a threshold of $1-\frac{\epsilon}{1+\epsilon}$
and a rate of $r>\frac{\epsilon}{3(1+\epsilon)}$. Therefore, it achieves
a rate of $\Theta(\delta)$ for a channel error rate of $p=1-\delta$.
\end{corollary} \begin{IEEEproof} Follows from $\left(1-\frac{\epsilon}{1+\epsilon}\right)\lambda\left(1-\rho(1-x)\right)<x$
for $x\in(0,1]$ and Theorem \ref{thm1}. \end{IEEEproof}
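The threshold claimed in the corollary can also be checked numerically. The following sketch (ours, with $\epsilon=0.5$ chosen arbitrarily) compares a grid-based threshold estimate against $1-\frac{\epsilon}{1+\epsilon}$:

```python
import numpy as np

eps = 0.5
lam = lambda x: x                             # all variable nodes have degree 2
rho = lambda x: (1 - eps) * x + eps * x ** 2  # check degrees 2 and 3

# grid estimate of p* = inf_x x / lam(1 - rho(1 - x))
xs = np.linspace(1e-6, 1.0, 20000)
p_star = float(np.min(xs / lam(1 - rho(1 - xs))))

predicted = 1 - eps / (1 + eps)   # value claimed in the corollary
print(p_star, predicted)
```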
\begin{remark} We believe that Corollary \ref{thm:lowrate} provides
the first linear-time decodable construction of rate $\Theta(\delta)$
for a random-error model with error probability $1-\delta$. A discussion
of linear-time encodable/decodable codes, for both random and adversarial
errors, can be found in \cite{my_ref:Guruswami03}. The complexity
also depends on the required list size which may be extremely large (though
independent of the block length). Unfortunately, we do not have explicit
bounds on the required alphabet size or list size for this construction.
\end{remark}
In practice, we cannot implement a list decoder with unbounded list
size. Therefore, we also evaluate the LMP decoder under a bounded
list size assumption.
\subsection{DE for the Decoding Algorithm with Bounded List Size}
First, we introduce some definitions and notation for the DE analysis
of the bounded list-size decoding algorithm. Note that, in the bounded list-size LMP algorithm, each list may contain at most $S_{max}$
symbols. For convenience, we classify the messages into four types:
\begin{description}
\item [{(V)}] \emph{Verified}: message is verified and has list size 1.
\item [{(E)}] \emph{Erasure}: message is an erasure and has list size 0.
\item [{(L)}] \emph{Correct on list}: message is not verified or erased
and the correct symbol is on the list.
\item [{(N)}] \emph{Correct not on list}: message is not verified or erased,
and the correct symbol is not on the list.
\end{description}
For the first two message types, we only need to track the fractions,
$V_{i}$ and $E_{i}$, of such messages in the $i$-th iteration.
For the third and the fourth types of messages, we also need to track
the list sizes. Therefore, we track the characteristic function of
the list size for these messages, given by $L_{i}(x)$ and $N_{i}(x)$.
The coefficient of $x^{j}$ represents the probability that the message
has list size $j$. Specifically, $L_{i}(x)$ is defined by \[
L_{i}(x)=\sum_{j=1}^{S_{max}}l_{i,j}x^{j},\]
where $l_{i,j}$ is the probability that, in the $i$-th decoding
iteration, the correct symbol is on the list and the message list
has size $j$. The function $N_{i}(x)$ is defined similarly. This
implies that $L_{i}(1)$ is the probability that the list contains
the correct symbol and that it is not verified. For the same reason,
$N_{i}(1)$ gives the probability that the list does not contain the
correct symbol and that it is not verified. For brevity,
we denote the overall density by $P_{i}=[V_{i},E_{i},L_{i}(x),N_{i}(x)]$. The same variables are
{}``marked'' $(\tilde{V},\tilde{E},\tilde{L},\tilde{N}$ and $\tilde{P})$
to represent the same values for messages passed from the check nodes
to the variable nodes (i.e., the half-iteration value).
Using these definitions, we find that DE can be computed efficiently
using polynomial arithmetic. For the convenience of analysis
and implementation, we use a sequence of basic operations plus a
separate truncation operator to represent a multiple-input multiple-output
operation. We use $\boxplus$ to denote the check-node operator and
$\otimes$ to denote the variable-node operator. Using this, the DE
for the variable-node basic operation $P^{(3)}=\tilde{P}^{(1)}\otimes\tilde{P}^{(2)}$
is given by \vspace{-2mm}
\begin{align}
V^{(3)}= & \tilde{V}^{(1)}\!+\!\tilde{V}^{(2)}\!-\!\tilde{V}^{(1)}\tilde{V}^{(2)}\!+\!\tilde{L}^{(1)}(1)\tilde{L}^{(2)}(1)\label{eq8}\\
E^{(3)}= & \tilde{E}^{(1)}\tilde{E}^{(2)}\label{eq9}\\
L^{(3)}(x)= & \tilde{L}^{(1)}(x)\left(\tilde{E}^{(2)}\!+\!\tilde{N}^{(2)}(x)\right)\!+\!\tilde{L}^{(2)}(x)\left(\tilde{E}^{(1)}\!+\!\tilde{N}^{(1)}(x)\right)\label{eq10}\\
N^{(3)}(x)= & \tilde{N}^{(1)}(x)\tilde{E}^{(2)}\!+\!\tilde{N}^{(2)}(x)\tilde{E}^{(1)}\!+\!\tilde{N}^{(1)}(x)\tilde{N}^{(2)}(x).\label{eq11}\end{align}
Note that (\ref{eq8}) to (\ref{eq11}) do not yet consider
the list size truncation and the channel value.
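In code, the variable-node basic operation reduces to a few polynomial products. The sketch below (our illustration; the coefficient-array representation and the example numbers are ours) implements (\ref{eq8})--(\ref{eq11}) with arrays indexed by list size and verifies that total probability mass is preserved:

```python
import numpy as np

def padd(a, b):
    # add two coefficient arrays of (possibly) different lengths
    out = np.zeros(max(len(a), len(b)))
    out[:len(a)] += a
    out[:len(b)] += b
    return out

def vn_op(P1, P2):
    # variable-node basic operation P3 = P1 (x) P2 of eqs. (8)-(11);
    # L[j] / N[j] = Pr(list size j, correct symbol on / off the list),
    # E enters the polynomials as a constant term
    V1, E1, L1, N1 = P1
    V2, E2, L2, N2 = P2
    e1, e2 = np.array([E1]), np.array([E2])
    V3 = V1 + V2 - V1 * V2 + L1.sum() * L2.sum()
    E3 = E1 * E2
    L3 = padd(np.convolve(L1, padd(e2, N2)), np.convolve(L2, padd(e1, N1)))
    N3 = padd(np.convolve(N1, padd(e2, N2)), E1 * N2)
    return V3, E3, L3, N3

p = 0.3
# initial density: size-1 list holding the channel value
P0 = (0.0, 0.0, np.array([0.0, 1 - p]), np.array([0.0, p]))
P = vn_op(P0, P0)
total = P[0] + P[1] + P[2].sum() + P[3].sum()
print(total)   # the operation conserves probability mass: 1.0
```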
For the basic check-node operation $\tilde{P}^{(3)}=P^{(1)}\boxplus P^{(2)}$,
the DE is given by \begin{align}
\tilde{V}^{(3)}= & V^{(1)}V^{(2)}\\
\tilde{E}^{(3)}= & E^{(1)}\!+\! E^{(2)}-E^{(1)}E^{(2)}\\
\tilde{L}^{(3)}(z)= & \left[V^{(1)}L^{(2)}(z)\!+\! V^{(2)}L^{(1)}(z)\!+\! L^{(1)}(x)L^{(2)}(y)\right]_{x^{j}y^{k}\rightarrow z^{jk}}\\
\tilde{N}^{(3)}(z)= & \left[N^{(1)}(x)N^{(2)}(y)\!+\! N^{(1)}(x)\left(V^{(2)}y\!+\! L^{(2)}(y)\right)\!+\right.\nonumber \\
& \left.N^{(2)}(x)\left(V^{(1)}y\!+\! L^{(1)}(y)\right)\right]_{x^{j}y^{k}\rightarrow z^{jk}}\end{align}
where the subscript $x^{j}y^{k}\rightarrow z^{jk}$ means the replacement
of variables. Finally, the truncation of lists to size $S_{max}$
is handled by truncation operators which map densities to densities.
We use $\mathcal{T}$ and $\mathcal{T'}$ to denote the truncation
operation at the check and variable nodes. Specifically, we truncate
terms with degree higher than $S_{max}$ in the polynomials $L(x)$
and $N(x)$. At check nodes, the truncated probability mass is moved
to $E$.
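The check-node basic operation can be sketched similarly. Since the substitution $x^{j}y^{k}\rightarrow z^{jk}$ multiplies list sizes, the sketch below (ours, with the truncation step omitted) stores $L$ and $N$ as maps from list size to probability:

```python
# Check-node basic operation P3 = P1 [+] P2 with the x^j y^k -> z^{jk}
# substitution: list sizes multiply, so L and N are {size: prob} dicts.
def cn_op(P1, P2):
    V1, E1, L1, N1 = P1
    V2, E2, L2, N2 = P2
    V3 = V1 * V2
    E3 = E1 + E2 - E1 * E2
    L3, N3 = {}, {}
    def acc(d, size, pr):
        d[size] = d.get(size, 0.0) + pr
    for j, a in L1.items():
        acc(L3, j, V2 * a)                 # V^(2) L^(1): verified acts as size 1
        for k, b in L2.items():
            acc(L3, j * k, a * b)          # L^(1) L^(2), sizes multiply
    for k, b in L2.items():
        acc(L3, k, V1 * b)                 # V^(1) L^(2)
    for j, a in N1.items():
        acc(N3, j, a * V2)                 # N^(1) (V^(2) y)
        for k, b in N2.items():
            acc(N3, j * k, a * b)          # N^(1) N^(2)
        for k, b in L2.items():
            acc(N3, j * k, a * b)          # N^(1) L^(2)
    for k, b in N2.items():
        acc(N3, k, b * V1)                 # N^(2) (V^(1) y)
        for j, a in L1.items():
            acc(N3, j * k, a * b)          # N^(2) L^(1)
    return V3, E3, L3, N3

p = 0.3
P0 = (0.0, 0.0, {1: 1 - p}, {1: p})        # size-1 list from the channel
P = cn_op(P0, P0)
mass = P[0] + P[1] + sum(P[2].values()) + sum(P[3].values())
print(mass)   # the operation conserves probability mass: 1.0
```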
\noindent At variable nodes, lists longer than $S_{max}$ are replaced
by the channel value. Let $P'_i=\tilde{P}_{i}^{\otimes k-1}$ be an intermediate density, i.e., the
result of applying the basic operation $k-1$ times to $\tilde{P}_{i}$. The variable-node output density, after the channel value is included and the list is truncated, is then $\mathcal{T'}(P'_i)$. To analyze this, we separate $L'_i(x)$ into two
terms: $ {A'_i}(x)$ with degree less than $S_{max}$ and $x^{S_{max}}{B'_i}(x)$
with degree at least $S_{max}$. Likewise, we separate ${N'_i}(x)$
into $ {C'_i}(x)$ and $x^{S_{max}} {D'_i}(x)$. The inclusion
of the channel symbol and the truncation are combined into a single
operation \[
{\textstyle P_i\!=\!\mathcal{T'}\left(\left[ {V'_i}, {E'_i}, {A'_i}(x)+x^{S_{max}}{B'_i}(x), {C'_i}(x)+x^{S_{max}} {D'_i}(x)\right]\right)}\]
defined by \begin{align}
V_i= & \, {V'_i}\!+\!(1-p)\left( {A'_i}(1)+ {B'_i}(1)\right)\label{eq17}\\
E_i= & \,0\\
L_i(x)= & \,(1-p)x\left({E'_i}\!+\! {C'_i}(x)\!+\! {D'_i}(1)\right)\!+\! px {A'_i}(x)\\
N_i(x)= & \, px\left( {E'_i}\!+\! {B'_i}(1)\!+\! {C'_i}(x)\!+\! {D'_i}(1)\right).\end{align}
Note that in (\ref{eq17}), the term $(1-p)\left( {A'_i}(1)+ {B'_i}(1)\right)$
is due to the fact that messages are compared for possible verification
before truncation.
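The combined channel-inclusion and truncation operator can likewise be sketched directly from the four update equations. In the code below (our illustration; the input density is a hypothetical example), mass is preserved because every component of $P'_i$ is routed to exactly one of $V$, $L$, or $N$:

```python
import numpy as np

def trunc_vn(Pp, p, s_max):
    # Combined channel-inclusion and truncation operator T' at a variable
    # node: split L' = A' + x^smax B' and N' = C' + x^smax D' and apply
    # eq. (17) and the three equations that follow it.  Densities are
    # coefficient arrays indexed by list size.
    Vp, Ep, Lp, Np = Pp
    A, Bsum = Lp[:s_max], Lp[s_max:].sum()   # sizes < s_max / >= s_max
    C, Dsum = Np[:s_max], Np[s_max:].sum()
    V = Vp + (1 - p) * (A.sum() + Bsum)      # correct channel symbol verifies
    E = 0.0
    L = np.zeros(s_max + 1)
    N = np.zeros(s_max + 1)
    L[1] += (1 - p) * (Ep + Dsum)            # replaced by correct channel value
    L[1:1 + len(C)] += (1 - p) * C           # (1-p) x C'(x)
    L[1:1 + len(A)] += p * A                 # p x A'(x)
    N[1] += p * (Ep + Bsum + Dsum)           # wrong channel value, size-1 list
    N[1:1 + len(C)] += p * C                 # p x C'(x)
    return V, E, L, N

# hypothetical intermediate density P'_i with total mass 1
Pp = (0.1, 0.05, np.array([0.0, 0.2, 0.15, 0.1]),
      np.array([0.0, 0.2, 0.1, 0.1]))
P = trunc_vn(Pp, p=0.3, s_max=2)
mass = P[0] + P[1] + P[2].sum() + P[3].sum()
print(mass)   # mass is preserved: 1.0
```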
The overall DE recursion is easily written in terms of the forward
(symbol to check) density $P_{i}$ and the backward (check to symbol) density
$\tilde{P}_{i}$ by taking the irregularity into account. The initial density is $P_{0}=[0,0,(1-p)x,px]$,
where $p$ is the error probability of the $q$-SC channel, and the
recursion is given by
\begin{align}
\tilde{P}_{i}= & \sum_{k=2}^{d_{c}}\rho_{k}\,\mathcal{T}\left(P_{i}^{\boxplus k-1}\right)\\
P_{i+1}= & \sum_{k=2}^{d_{v}}\lambda_{k}\,\mathcal{T'}\left(\tilde{P}_{i}^{\otimes k-1}\right).\end{align}
Note that the DE recursion is not one-dimensional. This makes it
difficult to optimize the ensemble analytically. It remains an open problem
to find the closed-form expression of the threshold in terms of the
maximum list size, d.d. pairs, and the alphabet size $q$. In Section \ref{sec:cmp_opt}, we will fix
the maximum variable and check degrees, code rate, $q$ and maximum
list size and optimize the threshold over the d.d. pairs by using
a numerical approach.
\begin{section}{Analysis of Node-based Algorithms}
\label{sec:nb_ana}
\begin{subsection}{Differential Equation Analysis of LM1-NB}
We refer to the first and second algorithms in \cite{my_ref:luby_and_mitz}
as LM1 and LM2, respectively. Each algorithm can be viewed either
as message-based (MB) or node-based (NB). The first and second algorithms
in \cite{my_ref:isit2004wangshok} and \cite{my_ref:Shokwangpersonal}
are referred to as SW1 and SW2. These algorithms are summarized in
Table~\ref{tab:table3}. Note that, if no verification occurs,
the variable node (VN) sends ({}``channel value'', $U$) and the check node (CN) sends ({}``expected
correct value'', $U$) in all of these algorithms. The algorithms SW1, SW2, and LMP are all MB algorithms,
but can be modified to be NB algorithms.
\subsubsection{Motivation}
\begin{small} %
\begin{table}[t]
\caption{Brief Description of Message-Passing Algorithms for $q$-SC}
\label{tab:table3} \centering \begin{tabular}{|c|l|}
\hline
\textbf{Alg.} & \hspace{1in} \textbf{Description} \tabularnewline
\hline
\multirow{1}{*}{LMP-$S_{max}$} & LMP as described in Section~\ref{sec:decoder} with maximum list size $S_{max}$ \tabularnewline
\hline
\multirow{1}{*}{LM1-MB} & MP decoder that passes (value, $U$/$V$). \cite[III.B]{my_ref:luby_and_mitz} \tabularnewline
& At VN's, output is $V$ if any input is $V$ or message matches \tabularnewline
& \hspace{3mm} channel value, otherwise pass channel value. \tabularnewline
& At CN's, output is $V$ if all inputs are $V$. \tabularnewline
\hline
\multirow{1}{*}{LM1-NB} & Peeling decoder with VN state (value, $U$/$V$). \cite[III.B]{my_ref:luby_and_mitz} \tabularnewline
& At CN's, if all neighbors sum to 0, then all neighbors get $V$. \tabularnewline
& At CN's, if all neighbors but one are $V$, then last gets $V$. \tabularnewline
\hline
\multirow{1}{*}{LM2-MB} & The same as LM1-MB with one additional rule. \cite[IV.A]{my_ref:luby_and_mitz}. \tabularnewline
& At VN's, if two input messages match, then output $V$. \tabularnewline
\hline
\multirow{1}{*}{LM2-NB} & The same as LM1-NB with one additional rule. \cite[IV.A]{my_ref:luby_and_mitz}. \tabularnewline
& At VN's, if two neighbor values same, then VN gets $V$. \tabularnewline
\hline
\multirow{1}{*}{SW1} & Identical to LM2-MB\tabularnewline
\hline
\multirow{1}{*}{SW2} & Identical to LMP-$\infty$. \cite[Thm. 2]{my_ref:isit2004wangshok} \tabularnewline
\hline
\end{tabular}
\end{table}
\end{small}
In \cite{my_ref:luby_and_mitz}, the algorithms are proposed in the
node-based (NB) style \cite[Section III-A and IV]{my_ref:luby_and_mitz},
but analyzed in the message-based (MB) style \cite[Section III-B and IV]{my_ref:luby_and_mitz}.
It is easy to verify that LM1-NB and LM1-MB have identical performance, but this is not true for the
NB and MB LM2 algorithms. In this section, we will
show the differences between the NB and MB decoders and derive
a precise analysis for LM1-NB.
First, we show the equivalence between LM1-MB and LM1-NB. \begin{theorem}\label{thm2}
Any verification that occurs in LM1-NB also occurs in LM1-MB and vice
versa. Therefore, LM1-NB and LM1-MB are equivalent. \end{theorem}
\begin{IEEEproof} See Appendix B. \end{IEEEproof} \begin{remark}
The theorem shows the equivalence between LM1-NB and LM1-MB. This
also implies that the stable error patterns (i.e., stopping sets) of LM1-NB and LM1-MB
are the same. \end{remark}
In the NB decoder, the verification status is associated with the
node. Once a node is verified, all the outgoing messages are verified.
In the MB decoder, the status is associated with the edge/message
and the outgoing messages may have different verification status.
NB algorithms cannot, in general, be analyzed using DE because the
independence assumption between messages does not hold. Therefore,
we develop peeling-style decoders which are equivalent to LM1-NB and
LM2-NB and use differential equations to analyze them.
Following \cite{my_ref:lubyca1}, we analyze the peeling-style decoder
using differential equations to track the average number of edges
(grouped into types) in the graph as decoding progresses. From the
results of \cite{my_ref:diffeqn} and \cite{my_ref:lubyca1}, we
know that the actual number of edges (of any type), in any particular
decoding realization is tightly concentrated around the average over
the lifetime of the random process. In a peeling-style decoder for
$GF(q)$, a variable node and its edges are removed after verification.
The check node keeps track of the new parity constraint (i.e., the
value to which the attached variables must sum) by subtracting values
associated with the removed edges.
\subsubsection{Analysis of Peeling-Style Decoding}
First, we introduce some notation and definitions for the analysis.
A variable node (VN) whose channel value is correctly received is
called a correct variable node (CVN), otherwise it is called an incorrect
variable node (IVN). A check node (CN) with $i$ edges connected to
the CVN's and $j$ edges connected to the IVN's will be said to have
C-degree $i$ and I-degree $j$, or type $n_{i,j}$.
We also define the following quantities:
\begin{itemize}
\item $t$: decoding time or the fraction of VNs removed from graph
\item $L_{i}(t)$: the number of edges connected to CVN's with degree $i$
at time $t$.
\item $R_{j}(t)$: the number of edges connected to IVN's with degree $j$
at time $t$.
\item $N_{i,j}(t)$: the number of edges connected to CN's with C-degree
$i$ and I-degree $j$.
\item $E_{l}(t)$: the remaining number of edges connected to CVN's at time
$t$.
\item $E_{r}(t)$: the remaining number of edges connected to IVN's at time
$t$.
\item $a(t)$: the average degree of CVN's which have at least 1 edge coming from CN's of type $n_{i,0},i\ge 1$, \[
a(t)=\frac{\sum_{k=1}^{d_v}{kL_k(t)}}{E_l(t)}\]
\item $b(t)$: the average degree of IVN's which have at least 1 edge coming from CN's of type $n_{0,1}$, \[
b(t)=\frac{\sum_{k=1}^{d_v}{kR_k(t)}}{E_r(t)}\]
\item $E$: number of edges in the original graph, \[
E=E_{l}(0)+E_{r}(0)\]
\end{itemize}
Counting edges in three ways gives the following equations: \[
\sum_{i\ge1}L_{i}(t)+\sum_{i\ge1}R_{i}(t)=E_{l}(t)+E_{r}(t)=\sum_{i\ge0}\sum_{j\ge0,(i,j)\neq(0,0)}N_{i,j}(t).\]
These r.v.'s represent a particular realization of the decoder. The
differential equations are defined for the normalized (i.e., divided
by $E$) expected values of these variables. We use lower-case notation
(e.g., $l_{i}(t)$, $r_{i}(t)$, $n_{i,j}(t)$, etc.) for these deterministic
trajectories. For a finite system, the decoder removes exactly one
variable node in each time step of length $\Delta t$.
The peeling-style decoder removes one CVN or IVN in each time step
according to the following rules:
\begin{description}
\setlength{\labelwidth}{11mm}
\setlength{\itemindent}{2mm}
\item [ \bf{CER}:]If any CN has its edges all connected to CVN's, pick
one of the CVN's and remove it and all its edges.
\item [ \bf{IER1}:]If any IVN has at least one edge connected to a
CN of type $(0,1)$, then the value of the IVN is given by the attached
CN and we remove the IVN and all its outgoing edges.
\end{description}
If both CER and IER1 can be applied, then one is chosen randomly as
described below.
\begin{figure}[t]
\centering
\includegraphics[width=0.25\columnwidth,angle=270,viewport=150 180 450 600]{fig4.eps}
\caption{Tanner graph for differential equation analysis.}\label{fig:fig3}
\end{figure}
Since both rules remove exactly one VN, the decoding process either
finishes in exactly $N$ steps or stops early and cannot continue.
The first case occurs only when either the IER1 or CER condition is
satisfied in every time step. When the decoder stops early, the pattern
of CVNs and IVNs remaining is known as a stopping set. We also note
that the rules above, though described differently, are equivalent
to the first node-based algorithm (LM1-NB) introduced in \cite{my_ref:luby_and_mitz}.
\subsubsection{Analysis}
Recall that in the node-based algorithm for LM1 we have two verification
rules. The first rule is that if all messages but one are verified
at a CN, then all messages are verified. We call this type-I incorrect-edge-removal
(IER1) and this is only possible when $n_{0,1}(t)>0$. The second
rule is: if all messages sum to zero at a CN, then all messages are verified.
We call this correct-edge-removal (CER) in the peeling-style decoder
and it requires $n_{i,0}>0$ for some $i\ge1$. The peeling-style
decoder performs one operation per time step. The operation is random
and can be either CER or IER1. When both operations are possible,
we choose randomly between these two rules by picking CER with probability
$c_{1}(t)$ and IER1 with probability $c_{2}(t)$, where \begin{align*}
c_{1}(t) & =\frac{\sum_{i\geq1}n_{i,0}(t)}{\sum_{i\geq1}n_{i,0}(t)+n_{0,1}(t)}\\
c_{2}(t) & =\frac{n_{0,1}(t)}{\sum_{i\geq1}n_{i,0}(t)+n_{0,1}(t)}.\end{align*}
This weighted sum ensures that the expected change in the decoder state is Lipschitz continuous if either $c_{1}(t)$ or $c_{2}(t)$ is strictly positive.
Therefore, the differential equations can be written as \begin{align*}
\frac{\mbox{d}l_{i}(t)}{\mbox{d}t} & =c_{1}(t)\frac{\mbox{d}l_{i}^{(1)}(t)}{\mbox{d}t}+c_{2}(t)\frac{\mbox{d}l_{i}^{(2)}(t)}{\mbox{d}t}\\
\frac{\mbox{d}r_{i}(t)}{\mbox{d}t} & =c_{1}(t)\frac{\mbox{d}r_{i}^{(1)}(t)}{\mbox{d}t}+c_{2}(t)\frac{\mbox{d}r_{i}^{(2)}(t)}{\mbox{d}t}\\
\frac{\mbox{d}n_{i,j}(t)}{\mbox{d}t} & =c_{1}(t)\frac{\mbox{d}n_{i,j}^{(1)}(t)}{\mbox{d}t}+c_{2}(t)\frac{\mbox{d}n_{i,j}^{(2)}(t)}{\mbox{d}t},\end{align*}
where $^{(1)}$ and $^{(2)}$ denote, respectively, the effects of
CER and IER1.
\subsubsection{CER Analysis}
If the CER operation is picked, then we choose randomly an edge attached
to a CN of type $(i,0)$ with $i\geq1$. The VN endpoint of this
edge is distributed uniformly across the CVN edge sockets. Therefore,
it will be attached to a CVN of degree $k$ with probability $\frac{l_{k}(t)}{e_{l}(t)}$.
Thus, one has the following differential equations for $l_{k}$
and $r_{k}$ \[
\frac{\mbox{d}l_{k}^{(1)}(t)}{\mbox{d}t}=\frac{l_{k}(t)}{e_{l}(t)}(-k),\textrm{ for }k\ge1\]
and \[
\frac{\mbox{d}r_{k}^{(1)}(t)}{\mbox{d}t}=0.\]
For the effect on check edges, we can think of removing a CVN with
degree $k$ as first randomly picking an edge of type $(k,0)$ connected
to that CVN and then removing all the other $k-1$ edges (called reflected
edges) attached to the same CVN. The $k-1$ reflected edges are uniformly
distributed over the $E_{l}(t)$ correct sockets of the CN's. Averaging
over all graphs, the $k-1$ reflected edges hit $\frac{n_{i,j}(t)i(k-1)}{(i+j)e_{l}(t)}$
CN's of type $(i,j)$. Averaging over the degree $k$ shows that the
reflected edges hit $\frac{n_{i,j}(t)i(a(t)-1)}{(i+j)e_{l}(t)}$ CN's
of type $(i,j)$.
If a CN of type $(i,j)$ is hit by a reflected edge, then we lose $i+j$
edges of type $(i,j)$ and gain $i-1+j$ edges of type $(i-1,j)$.
Hence, one has the following differential equation for $j>0$ and
$i+j\le d_{c}$ \[
\frac{\mbox{d}n_{i,j}^{(1)}(t)}{\mbox{d}t}=\left(p_{i+1,j}^{(1)}(t)-p_{i,j}^{(1)}(t)\right)(i+j)\]
where \[
p_{i,j}^{(1)}(t)=\frac{n_{i,j}(t)i(a(t)-1)}{(i+j)e_{l}(t)}.\]
One should keep in mind that $n_{i,j}(t)=0$ for $i+j>d_{c}$.
For $n_{i,j}^{(1)}(t)$ with $j=0$, the effect from above must be
combined with the effect of the type-$(i,0)$ initial edge that was chosen.
So the differential equation becomes \[
\frac{\mbox{d}n_{i,0}^{(1)}(t)}{\mbox{d}t}=\left(p_{i+1,0}^{(1)}(t)-p_{i,0}^{(1)}(t)\right)i+\left(q_{i+1}^{(1)}(t)-q_{i}^{(1)}(t)\right)i\]
where \[
q_{i}^{(1)}(t)=\frac{n_{i,0}(t)}{\sum_{m\geq1}n_{m,0}(t)}.\] Note that $p^{(1)}_{d_c+1,0}(t)\triangleq0$ and $q^{(1)}_{d_c+1}(t)\triangleq0$.
\subsubsection{IER1 Analysis}
If the IER1 operation is picked, then we choose a random CN of type
$(0,1)$ and follow its only edge to the set of IVNs. The edge is attached
uniformly to this set, so the differential equations for IER1 can
be written as \[
\frac{\mbox{d}l_{k}^{(2)}(t)}{\mbox{d}t}=0,\]
\[
\frac{\mbox{d}r_{k}^{(2)}(t)}{\mbox{d}t}=\frac{r_{k}(t)}{e_{r}(t)}(-k),\textrm{ for }k\ge1\]
and \[
\frac{\mbox{d}n_{i,j}^{(2)}(t)}{\mbox{d}t}=\left(p_{i,j+1}^{(2)}(t)-p_{i,j}^{(2)}(t)\right)(i+j),\textrm{ for }(i,j)\neq(0,1)\]
where \[
p_{i,j}^{(2)}(t)=\frac{n_{i,j}(t)j(b(t)-1)}{(i+j)e_{r}(t)}.\]
For $n_{i,j}(t)$ with $(i,j)=(0,1)$, the differential equation
must also account for the initial edge and becomes \[
\frac{\mbox{d}n_{0,1}^{(2)}(t)}{\mbox{d}t}=\left(p_{0,2}^{(2)}(t)-p_{0,1}^{(2)}(t)\right)-1.\]
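These differential equations are straightforward to integrate numerically. The sketch below (our illustration, using a simple Euler step and ad-hoc tolerances of our choosing) integrates the CER/IER1 system for the (3,6)-regular ensemble and reports whether the incorrect-edge fraction $e_{r}(t)$ reaches zero, i.e., whether decoding succeeds:

```python
import numpy as np
from math import comb

def lm1_de(p, dc=6, dv=3, dt=1e-4, max_steps=20000):
    # n[i, j]: fraction of edges at CNs with i correct and j incorrect
    # edges; padded so reads of n[i+1, j] and n[i, j+1] are always safe.
    n = np.zeros((dc + 2, dc + 2))
    for j in range(dc + 1):
        n[dc - j, j] = comb(dc, j) * (1 - p) ** (dc - j) * p ** j
    el, er = 1.0 - p, p          # correct / incorrect edge fractions
    for _ in range(max_steps):
        if er < 1e-3:
            return True          # essentially all IVNs removed: success
        cer, ier = n[1:, 0].sum(), n[0, 1]
        tot = cer + ier
        if tot < 1e-7:
            return False         # neither rule applies: stopping set
        c1, c2 = cer / tot, ier / tot
        dn = np.zeros_like(n)
        for i in range(dc + 1):
            for j in range(dc + 1 - i):
                if i + j == 0:
                    continue
                deg = i + j
                # CER: dv-1 reflected edges hit correct sockets uniformly
                pij = min(n[i, j] * i * (dv - 1) / (deg * max(el, 1e-12)), dv - 1.0)
                pi1j = min(n[i + 1, j] * (i + 1) * (dv - 1) / ((deg + 1) * max(el, 1e-12)), dv - 1.0)
                dn[i, j] += c1 * (pi1j - pij) * deg
                # IER1: dv-1 reflected edges hit incorrect sockets uniformly
                qij = min(n[i, j] * j * (dv - 1) / (deg * max(er, 1e-12)), dv - 1.0)
                qij1 = min(n[i, j + 1] * (j + 1) * (dv - 1) / ((deg + 1) * max(er, 1e-12)), dv - 1.0)
                dn[i, j] += c2 * (qij1 - qij) * deg
        if cer > 0:              # initially chosen CER edge: (i,0) -> (i-1,0)
            for i in range(1, dc + 1):
                dn[i, 0] += c1 * (n[i + 1, 0] - n[i, 0]) / cer * i
        dn[0, 1] -= c2           # initially chosen IER1 edge disappears
        n = np.maximum(n + dt * dn, 0.0)
        el = max(el - dt * dv * c1, 0.0)
        er = max(er - dt * dv * c2, 0.0)
    return False

print(lm1_de(0.05), lm1_de(0.40))
```

Running this for $p$ well below and well above $p^{*}=0.169$ reproduces success and failure, respectively.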
Notice that even for (3,6) codes, there are 30 differential equations%
\footnote{There are 28 for $n_{i,j}$ ($i,j\in\{0,\dots,6\}$ such that $i+j\le6$),
1 for $r_{k}(t)$, and 1 for $l_{k}(t)$.%
} to solve. So we solve the differential equations numerically and
the threshold for the (3,6) code with LM1 is $p^{*}=0.169$. This coincides
with the result from density evolution analysis for LM1-MB in \cite{my_ref:luby_and_mitz}
and hints at the equivalence between LM1-NB and LM1-MB. In the proof
of Theorem \ref{thm2} we make this equivalence precise by showing
that the stopping sets of LM1-NB and LM1-MB are the same. \end{subsection}
\begin{subsection}{Differential Equation Analysis of LM2-NB}
As with LM1-NB, we analyze the LM2-NB algorithm via an equivalent
peeling-style decoder. This peeling-style decoder removes one CVN or IVN
during each time step according to the following rules:
\begin{description}
\item [{CER:}] If any CN has all its edges connected to CVN's, pick one
of the CVN's and remove it.
\item [{IER1:}] If any IVN has any messages from CN's with type $n_{0,1}$,
then the IVN and all its outgoing edges can be removed and we track
the correct value by subtracting the value from the check node.
\item [{IER2:}] If any IVN is attached to more than one CN with I-degree
1, then it will be verified and all its outgoing edges can be removed.
\end{description}
For simplicity, we first introduce some definitions and shorthand
notation.
\begin{itemize}
\item Correct edges: edges which are connected to CVN's
\item Incorrect edges: edges which are connected to IVN's
\item CER edges: edges which are connected to check nodes with type
$n_{i,0}$ for $i\ge1$
\item IER1 edges: edges which are connected to check nodes with type
$n_{0,1}$
\item IER2 edges: edges which connect IVN's and the check nodes with
type $n_{i,1}$ for $i\ge1$
\item NIE edges: normal incorrect edges, which are incorrect edges but neither
IER1 edges nor IER2 edges
\item CER nodes: CVN's which have at least one CER edge
\item IER1 nodes: IVN's which have at least one IER1 edge
\item IER2 nodes: IVN's which have at least two IER2 edges
\item NIE nodes: IVN's which contain at most 1 IER2 edge and no IER1 edges.
\end{itemize}
Note that an IVN can be both an IER1 node and an IER2 node at the
same time.
The analysis of LM2-NB is much more complicated than LM1-NB because
the IER2 operation makes the distribution of IER2 edges dependent
on each other. In each IER2 operation, one IVN with at least two
IER2 edges is removed; therefore, the remaining IER2 edges are more
likely to land on distinct IVN's.
The basic idea used to analyze the LM2-NB decoder is to separate the incorrect
edges into types and assume that the mapping between sockets is given by
a uniform random permutation. Strictly speaking, this is not true and another
approach, which leads to the same differential equations, is used when
considering a formal proof of correctness.
In detail, we model the
structure of an LDPC code during LM2-NB decoding as shown in Fig. \ref{fig:v(d)}
with one type for correct edges and three types for incorrect edges.
The following calculations assume the four permutations, labeled CER, NIE, IER2, and
IER1, are all uniform random permutations.
\begin{figure}
\centering{}\includegraphics[width=0.5\columnwidth]{LM2NB}\caption{\label{fig:v(d)}Graph structure of an LDPC code with LM2-NB decoding
algorithm}
\end{figure}
The peeling-style decoder randomly chooses one VN from the set of
CER, IER1, and IER2 nodes and removes this node and all its edges at
each step. The idea of the analysis is to first calculate the probability
of choosing a VN with a certain type, i.e., CER, IER1 or IER2, and
the node degree. We then analyze how removing this VN affects the
system parameters.
In the analysis, we will track the evolution of the following system
parameters.
\begin{itemize}
\item $l_{k}(t)$: the fraction of edges connected to CVN's with degree
$k$, $0<k\le d_{v}$ at time $t$. \footnote{We don't track $l_0(t)$ and simply set $l_0(t)=0.$}
\item $r_{i,j,k}(t):$ the fraction of edges connected to IVN's with $i$
NIE edges, $j$ IER2 edges and $k$ IER1 edges at time $t$, $i,j,k\in\{0,1,\dots,d_{v}\}$
and $0<i+j+k\le d_{v}$. \footnote{We don't track $r_{0,0,0}(t)$ and simply set $r_{0,0,0}(t)=0.$}
\item $n_{i,j}(t):$ the fraction of edges connected to check nodes with
$i$ correct edges and $j$ incorrect edges at time $t$, $i,j\in\{0,1,\dots,d_{c}\}$
and $0<i+j\le d_{c}$. \footnote{We don't track $n_{0,0}(t)$ and simply set $n_{0,0}(t)=0.$}
\end{itemize}
We note that, when we say {}``fraction'', we mean the number of
a certain type of edges/nodes normalized by the number of edges/nodes
in the \emph{original graph}.
The following quantities can be calculated from $l_{k}(t)$, $r_{i,j,k}(t)$
and $n_{i,j}(t)$.
\begin{itemize}
\item $e_{l}(t)\triangleq\sum_{k=1}^{d_{v}}l_{k}(t)$: the fraction of correct
edges
\item $e_{r}(t)\triangleq\sum_{i=0}^{d_{v}}\sum_{j=0}^{d_{v}-i}\sum_{k=0}^{d_{v}-i-j}r_{i,j,k}(t)$:
the fraction of incorrect edges
\item $\eta_{0}(t)\triangleq\sum_{j=2}^{d_{c}}\sum_{i=0}^{d_{c}-j}\frac{jn_{i,j}(t)}{i+j}=\sum_{i=1}^{d_{v}}\sum_{j=0}^{d_{v}-i}\sum_{k=0}^{d_{v}-i-j}\frac{ir_{i,j,k}(t)}{i+j+k}$:
the fraction of NIE edges
\item $\eta_{1}(t)\triangleq n_{0,1}(t)=\sum_{k=1}^{d_{v}}\sum_{j=0}^{d_{v}-k}\sum_{i=0}^{d_{v}-j-k}\frac{kr_{i,j,k}(t)}{(i+j+k)}$:
the fraction of IER1 edges
\item $\eta_{2}(t)\triangleq\sum_{i=1}^{d_{c}}\frac{n_{i,1}(t)}{(i+1)}=\sum_{j=1}^{d_{v}}\sum_{i=0}^{d_{v}-j}\sum_{k=0}^{d_{v}-i-j}\frac{jr_{i,j,k}(t)}{(i+j+k)}$:
the fraction of IER2 edges
\item $s_{0}(t)\triangleq\sum_{j=0}^{1}\sum_{i=1}^{d_{v}-j}\frac{r_{i,j,0}}{i+j}$:
the fraction of NIE nodes
\item $s_{1}(t)\triangleq\sum_{k=1}^{d_{v}}\frac{n_{k,0}(t)}{k}$: the fraction
of CER nodes
\item $s_{2}(t)\triangleq\sum_{k=1}^{d_{v}}\sum_{i=0}^{d_{v}-k}\sum_{j=0}^{d_{v}-i-k}\frac{r_{i,j,k}}{i+j+k}$:
the fraction of IER1 nodes
\item $s_{3}(t)\triangleq\sum_{j=2}^{d_{v}}\sum_{i=0}^{d_{v}-j}\sum_{k=0}^{d_{v}-i-j}\frac{r_{i,j,k}}{i+j+k}$:
the fraction of IER2 nodes
\end{itemize}
As in the LM1-NB analysis, we use superscript $^{(1)}$ to denote
the contribution of the CER operations. We use $^{(2)}$ to denote
the contribution of the IER1 operations and $^{(3)}$ to denote the
contribution of the IER2 operations. Since we assume that the decoder
randomly chooses a VN from the set of CER, IER1 and IER2 nodes and
removes all its edges during each time step, the differential equations
of the system parameters can be written as the weighted sum of the
contributions of the CER, IER1 and IER2 operations. The weights are chosen
as \begin{align*}
c_{1}(t) & =\frac{s_{1}(t)}{(s_{1}(t)+s_{2}(t)+s_{3}(t))}\\
c_{2}(t) & =\frac{s_{2}(t)}{(s_{1}(t)+s_{2}(t)+s_{3}(t))}\\
c_{3}(t) & =\frac{s_{3}(t)}{(s_{1}(t)+s_{2}(t)+s_{3}(t))}.\end{align*}
This weighted sum ensures that the expected change in the decoder state is Lipschitz continuous if any one of $c_{1}(t)$, $c_{2}(t)$, or $c_{3}(t)$ is strictly positive.
Next, we will show how CER, IER1 and IER2 operations affect the system
parameters.
Given the d.d. pair $(\lambda,\rho)$ and the channel error probability
$p$, we initialize the state as follows. Since a fraction $(1-p)\lambda_{k}$
of the edges are connected to CVN's of degree $k$, we initialize
$l_{k}(t)$ with \[
l_{k}(0)=(1-p)\lambda_{k},\]
for $k=1,2,\dots d_{v}$. Noticing that each CN socket is connected
to a correct edge with probability $(1-p)$ and incorrect edge with
probability $p$, we initialize $n_{i,j}(t)$ with\[
n_{i,j}(0)=\rho_{i+j}\binom{i+j}{i}(1-p)^{i}p^{j},\]
for $i+j\in\{1,2,\dots,d_{c}\}$. The probability that an IVN socket
is connected to an NIE, IER2 edge, or IER1 edge is denoted respectively
by $g_0$, $g_1$ or $g_2$ with
\begin{align*}
g_0 &= \frac{1}{p}\sum_{j'=2}^{d_{c}}\sum_{i'=0}^{d_{c}-j'}\frac{j'n_{i',j'}(0)}{i'+j'} \\
g_1 &= \frac{1}{p} \sum_{i'=1}^{d_{c}}\frac{n_{i',1}(0)}{(i'+1)} \\
g_2 &= \frac{1}{p} n_{0,1}(0).
\end{align*}
Therefore, we initialize $r_{i,j,k}(t)$ with\[
r_{i,j,k}(0)=p\lambda_{i+j+k}\binom{i+j+k}{i,j,k} g_{0}^i g_{1}^j g_{2}^k ,\]
for $i+j+k\in\{1,2,\dots,d_{v}\}$.
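As a sanity check on this initialization, the closed-form fractions above can be evaluated numerically. The sketch below (Python; the $(3,6)$-regular degree distribution and the value of $p$ are assumed purely for illustration) verifies that $\sum_{k}l_{k}(0)=1-p$, $\sum_{i,j}n_{i,j}(0)=1$, $\sum_{i,j,k}r_{i,j,k}(0)=p$ and $g_{0}+g_{1}+g_{2}=1$.

```python
from math import comb

# Assumed example: a (3,6)-regular ensemble, channel error probability p
p = 0.2
lam = {3: 1.0}   # lambda_k: fraction of edges attached to degree-k VNs
rho = {6: 1.0}   # rho_d: fraction of edges attached to degree-d CNs
d_v = max(lam)
d_c = max(rho)

# l_k(0) = (1-p) * lambda_k  (edges attached to correct VNs)
l0 = {k: (1 - p) * lam.get(k, 0.0) for k in range(1, d_v + 1)}

# n_{i,j}(0) = rho_{i+j} * C(i+j, i) * (1-p)^i * p^j
n0 = {(i, j): rho.get(i + j, 0.0) * comb(i + j, i) * (1 - p) ** i * p ** j
      for i in range(d_c + 1) for j in range(d_c + 1 - i) if i + j >= 1}

# Socket probabilities for an edge on an incorrect VN
g0 = sum(j * n0[(i, j)] / (i + j) for (i, j) in n0 if j >= 2) / p
g1 = sum(n0[(i, 1)] / (i + 1) for i in range(1, d_c) if (i, 1) in n0) / p
g2 = n0.get((0, 1), 0.0) / p

def multinom(i, j, k):
    # Multinomial coefficient (i+j+k)! / (i! j! k!)
    return comb(i + j + k, i) * comb(j + k, j)

# r_{i,j,k}(0) = p * lambda_{i+j+k} * multinomial * g0^i g1^j g2^k
r0 = {(i, j, k): p * lam.get(i + j + k, 0.0) * multinom(i, j, k)
                 * g0 ** i * g1 ** j * g2 ** k
      for i in range(d_v + 1) for j in range(d_v + 1 - i)
      for k in range(d_v + 1 - i - j) if i + j + k >= 1}

print(round(sum(l0.values()), 10))   # fraction of edges on correct VNs
print(round(sum(n0.values()), 10))   # all CN edge fractions
print(round(sum(r0.values()), 10))   # fraction of edges on incorrect VNs
print(round(g0 + g1 + g2, 10))       # socket types partition
```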
\subsubsection{CER analysis}
The analysis for $\frac{\textrm{d}l_{k}^{(1)}(t)}{\textrm{d}t}$ is
the same as LM1-NB analysis. In the CER operation, the decoder randomly
selects a CER edge. With probability $\frac{l_{k}(t)}{e_{l}(t)}$,
a CVN with degree $k$ is chosen, and this decreases the number of edges
of type $l_{k}$ by $k$. This gives \[
\frac{\mbox{d}l_{k}^{(1)}(t)}{\mbox{d}t}=\frac{-kl_{k}(t)}{e_{l}(t)},\mbox{ }k\ge1.\]
For $j\ge1$ and $i+j\le d_{c}$ \[
\frac{\mbox{d}n_{i,j}^{(1)}(t)}{\mbox{d}t}=\left(p_{i+1,j}^{(1)}(t)-p_{i,j}^{(1)}(t)\right)(i+j)\]
where $a(t)=\frac{\sum_{k=1}^{d_{v}}kl_{k}(t)}{e_{l}(t)}$ is the
average degree of the CVN's which are hit by the initially chosen
CER edge and $p_{i,j}^{(1)}=\frac{n_{i,j}(t)i(a(t)-1)}{(i+j)e_{l}(t)}$
is the average number of CN's with type $n_{i,j}$ hit by the $a(t)-1$
reflecting edges.
For $j=0$ and $i\ge1$, we also have to consider the initially chosen
CER edge. This gives \[
\frac{\mbox{d}n_{i,0}^{(1)}(t)}{\mbox{d}t}=\left(p_{i+1,0}^{(1)}(t)-p_{i,0}^{(1)}(t)\right)i+\left(q_{i+1}^{(1)}(t)-q_{i}^{(1)}(t)\right)i\]
where $q_{i}^{(1)}(t)=\frac{n_{i,0}(t)}{\sum_{m\ge1}n_{m,0}(t)}$
is the probability that the initially chosen CER edge is of type $n_{i,0}$.
When one of the reflecting edges of the removed CER node hits a CN
of type $n_{1,1}$, an IER2 edge becomes an IER1 edge. This is the
only way the CER operation can affect $r_{i,j,k}$. On average, each
CER operation generates $a(t)-1$ reflecting edges. For each reflecting
edge, the probability that it hits a CN of type $n_{1,1}$ is $\frac{n_{1,1}(t)}{2e_{l}(t)}$.
Once a reflecting edge hits a CN of type $n_{1,1}$, one IER2 edge
is changed to an IER1 edge, but not removed. By considering this, when
$j\neq d_{v}$ and $k\neq0$, we have \begin{align*}
\frac{\textrm{d}r_{i,j,k}^{(1)}(t)}{\mbox{d}t} & =(a(t)-1)\frac{n_{1,1}(t)}{2e_{l}(t)}\left(\frac{jr_{i,j,k}(t)}{(i+j+k)\eta_{2}(t)}(-(i+j+k))-\frac{(j+1)r_{i,j+1,k-1}(t)}{(i+j+k)\eta_{2}(t)}(-(i+j+k))\right)\\
& =(a(t)-1)\frac{n_{1,1}(t)}{2e_{l}(t)}\left(\frac{-jr_{i,j,k}(t)}{\eta_{2}(t)}+\frac{(j+1)r_{i,j+1,k-1}(t)}{\eta_{2}(t)}\right).\end{align*}
If $k=0$ or $j=d_{v}$, then the IVN's with type $r_{i,j,k}$
can only lose edges and \[
\frac{\textrm{d}r_{i,j,k}^{(1)}(t)}{\textrm{d}t}=(a(t)-1)\frac{n_{1,1}(t)}{2e_{l}(t)}\left(\frac{-jr_{i,j,k}(t)}{\eta_{2}(t)}\right).\]
\subsubsection{IER2 analysis}
Since the IER2 operation does not affect $l_{k}(t)$, we have \[
\frac{\mbox{d}l_{k}^{(3)}(t)}{\mbox{d}t}=0.\]
To analyze how IER2 operation changes $n_{i,j}(t)$ and $r_{i,j,k}(t)$,
we first calculate the probability that a randomly chosen IER2 node
is of type $r_{i,j,k}$ as follows \[
\Pr\left(\mbox{type }r_{i,j,k}|\mbox{IER2 node}\right)=\frac{\frac{r_{i,j,k}(t)}{i+j+k}}{s_{3}(t)}\]
if $j\ge2$. Otherwise, $\Pr\left(\mbox{type }r_{i,j,k}|\mbox{IER2 node}\right)=0$.
Let $u_{i',j'}(t)$ denote the contribution to $\frac{\textrm{d}n_{i',j'}^{(3)}(t)}{\textrm{d}t}$
caused by removing one NIE edge, $v_{i',j'}(t)$ the contribution
caused by removing one IER2 edge, and $w_{i',j'}(t)$ the contribution
caused by removing one IER1 edge. Then, we can
write $\frac{\textrm{d}n_{i',j'}^{(3)}(t)}{\textrm{d}t}$ as \[
\frac{\textrm{d}n_{i',j'}^{(3)}(t)}{\textrm{d}t}=\sum_{i=0}^{d_{v}}\sum_{j=0}^{d_{v}}\sum_{k=0}^{d_{v}}\Pr\left(\mbox{type }r_{i,j,k}|\mbox{IER2 node}\right)\left(iu_{i',j'}(t)+jv_{i',j'}(t)+kw_{i',j'}(t)\right).\]
First, we consider $u_{i',j'}(t)$. If an NIE edge is chosen from
the IVN side, it hits a CN of type $n_{i,j}$ with probability $\frac{jn_{i,j}(t)}{\eta_{0}(t)(i+j)}$
if $j\ge2$ and with probability 0 otherwise. When $2\le j'$ and $i'+j'\le d_{c}-1$,
we have \begin{align*}
u_{i',j'}(t) & =\frac{j'n_{i',j'}(t)}{(i'+j')\eta_{0}(t)}(-(i'+j'))+\frac{(j'+1)n_{i',j'+1}(t)(i'+j')}{(i'+j'+1)\eta_{0}(t)}\\
 & =\frac{-j'n_{i',j'}(t)}{\eta_{0}(t)}+\frac{(j'+1)n_{i',j'+1}(t)(i'+j')}{(i'+j'+1)\eta_{0}(t)}\end{align*}
and, when $i'+j'=d_{c}$, we have \[
u_{i',j'}(t)=\frac{-j'n_{i',j'}(t)}{\eta_{0}(t)}.\]
Since an NIE edge cannot be connected to a CN with type $n_{i',1}$,
we must treat $j'=1$ separately. Noticing that $n_{i',1}$ can still
gain edges from $n_{i',2}$, we have\[
u_{i',1}(t)=\frac{2n_{i',2}(t)(i'+1)}{(i'+2)\eta_{0}(t)}.\]
When $j'=0$, CN's with type $n_{i',0}$ do not have any NIE edges, so we have $u_{i',0}(t)=0$.
Now we consider $v_{i',j'}(t)$. Since edges of type $n_{i',j'}$
with $j'\ge2$ cannot be IER2 edges, $n_{i',j'}$ with $j'\ge2$ is not
affected by removing IER2 edges. The IER2 edge removal reduces the
number of edges of type $n_{i',1}$, $i'\ge1$, so we have $v_{i',1}(t)=-\frac{n_{i',1}(t)}{\eta_{2}(t)}$.
When $j'=0$ and $i'\ge1$, we have $v_{i',0}(t)=\frac{i'n_{i',1}(t)}{(i'+1)\eta_{2}(t)}$.
Only CN's with type $n_{0,1}$ are affected when we remove an
IER1 edge on the IVN side. So we have $w_{0,1}(t)=-1$
and $w_{i',j'}(t)=0$ when $(i',j')\neq(0,1).$
Next, we derive the differential equation for $r_{i,j,k}$ caused
by removing an IER2 node. If the decoder removes an IER2 node with
type $r_{i',j',k'},$ we need to study how this affects $r_{i,j,k}(t).$
There are two effects caused by removing an IER2 node of type $r_{i',j',k'}$.
When we remove an IER2 node of type $r_{i',j',k'},$ we remove $i'$
NIE edges, $j'$ IER2 edges and $k'$ IER1 edges. For each removed
edge, if we look at the CN side, it may cause the types of some other
edges on the same CN to change and therefore affect $r_{i,j,k}(t)$.
We call the edges on the same CN, other than the one coming from the removed
IER2 node, {}``CN reflecting edges''. Let $u'_{i,j,k}(t)$ be the contribution to
$\frac{\textrm{d}r_{i,j,k}(t)}{\textrm{d}t}$ caused by the CN reflecting
edges of an NIE edge on the CN. Let $v'_{i,j,k}(t)$ be the contribution to $\frac{\textrm{d}r_{i,j,k}(t)}{\textrm{d}t}$
caused by the CN reflecting edges of an IER2 edge on the CN. Let $w'_{i,j,k}(t)$
be the contribution to $\frac{\textrm{d}r_{i,j,k}(t)}{\textrm{d}t}$ caused by the CN
reflecting edges of an IER1 edge on the CN. Then we can write $\frac{\textrm{d}r_{i,j,k}^{(3)}(t)}{\textrm{d}t}$
as \begin{align*}
\frac{\textrm{d}r_{i,j,k}^{(3)}(t)}{\textrm{d}t} & =-\Pr\left(\mbox{type }r_{i,j,k}|\mbox{IER2 node}\right)(i+j+k)+\\
& \sum_{i'=0}^{d_{v}}\sum_{j'=0}^{d_{v}}\sum_{k'=0}^{d_{v}}\Pr\left(\mbox{type }r_{i',j',k'}|\mbox{IER2 node}\right)\left(i'u'_{i,j,k}(t)+j'v'_{i,j,k}(t)+k'w'_{i,j,k}(t)\right).\end{align*}
There are two ways that the CN reflecting edges of an NIE edge can
affect $r_{i,j,k}(t)$. The first one is when the CN is of type $n_{i,2},$
$1\le i\le d_{c}-2$. Removing an NIE can change the type of the other
incorrect edge from NIE to IER2. The second way is when the CN is
of type $n_{0,2}$. Removing an NIE can change the type of the other
incorrect edge from NIE to IER1. Notice that the probability that
an NIE hits a CN of type $n_{i,2},$ $1\le i\le d_{c}-2$ is $\frac{\sum_{i=1}^{d_{c}-2}\frac{2n_{i,2}(t)}{i+2}}{\eta_{0}(t)}$,
the probability that an NIE hits a CN of type $n_{0,2}$ is $\frac{n_{0,2}(t)}{\eta_{0}(t)}$
and the probability that an NIE edge is connected to an IVN of type
$r_{i,j,k}$ is $\frac{ir_{i,j,k}(t)}{(i+j+k)\eta_{0}(t)}$. Therefore, we can write \begin{align*}
u'_{i,j,k}(t) & =\frac{\sum_{i=1}^{d_{c}-2}\frac{2n_{i,2}(t)}{i+2}}{\eta_{0}(t)}\left(-\frac{ir_{i,j,k}(t)}{\eta_{0}(t)}+\frac{(i+1)r_{i+1,j-1,k}(t)}{\eta_{0}(t)}\right)\\
& +\frac{n_{0,2}(t)}{\eta_{0}(t)}\left(-\frac{ir_{i,j,k}(t)}{\eta_{0}(t)}+\frac{(i+1)r_{i+1,j,k-1}(t)}{\eta_{0}(t)}\right)\end{align*}
if $i\ne d_{v}$, $j\neq0$ and $k\neq0$.
When $i\neq d_{v}$, $j\neq0$ and $k=0$, we have \begin{align*}
u'_{i,j,k}(t) & =\frac{\sum_{i=1}^{d_{c}-2}\frac{2n_{i,2}(t)}{i+2}}{\eta_{0}(t)}\left(-\frac{ir_{i,j,k}(t)}{\eta_{0}(t)}+\frac{(i+1)r_{i+1,j-1,k}(t)}{\eta_{0}(t)}\right)\\
& +\frac{n_{0,2}(t)}{\eta_{0}(t)}\left(-\frac{ir_{i,j,k}(t)}{\eta_{0}(t)}\right).\end{align*}
When $i\neq d_{v}$, $j=0$ and $k\neq0$, we have \begin{align*}
u'_{i,j,k}(t) & =\frac{\sum_{i=1}^{d_{c}-2}\frac{2n_{i,2}(t)}{i+2}}{\eta_{0}(t)}\left(-\frac{ir_{i,j,k}(t)}{\eta_{0}(t)}\right)\\
& +\frac{n_{0,2}(t)}{\eta_{0}(t)}\left(-\frac{ir_{i,j,k}(t)}{\eta_{0}(t)}+\frac{(i+1)r_{i+1,j,k-1}(t)}{\eta_{0}(t)}\right).\end{align*}
When $i\neq d_{v}$, $j=0$ and $k=0$, we have \begin{align*}
u'_{i,j,k}(t) & =\frac{\sum_{i=1}^{d_{c}-2}\frac{2n_{i,2}(t)}{i+2}}{\eta_{0}(t)}\left(-\frac{ir_{i,j,k}(t)}{\eta_{0}(t)}\right)\\
& +\frac{n_{0,2}(t)}{\eta_{0}(t)}\left(-\frac{ir_{i,j,k}(t)}{\eta_{0}(t)}\right).\end{align*}
Since the CN reflecting edges of IER2 and IER1 edges are all correct
edges, they do not affect $r_{i,j,k}(t)$, so $v'_{i,j,k}(t)=0$
and $w'_{i,j,k}(t)=0$.
\subsubsection{IER1 analysis}
Like the IER2 operation, the IER1 operation does not affect $l_{k}(t)$.
So, we have \[
\frac{\textrm{d}l_{k}^{(2)}(t)}{\textrm{d}t}=0.\]
To analyze how IER1 changes $n_{i,j}(t)$ and $r_{i,j,k}(t)$, we
first calculate the probability that a randomly chosen IER1 node is
of type $r_{i,j,k}$ as follows \[
\Pr\left(\mbox{type }r_{i,j,k}|\mbox{IER1 node}\right)=\frac{\frac{r_{i,j,k}(t)}{i+j+k}}{s_{2}(t)}\]
when $k\ge1$ and $\Pr\left(\mbox{type }r_{i,j,k}|\mbox{IER1 node}\right)=0$
when $k=0$.
For the same reason, \[
\frac{\textrm{d}n_{i',j'}^{(2)}(t)}{\textrm{d}t}=\sum_{i=0}^{d_{v}}\sum_{j=0}^{d_{v}}\sum_{k=0}^{d_{v}}\Pr\left(\mbox{type }r_{i,j,k}|\mbox{IER1 node}\right)\left(iu_{i',j'}(t)+jv_{i',j'}(t)+kw_{i',j'}(t)\right)\]
and \begin{align*}
\frac{\textrm{d}r_{i,j,k}^{(2)}(t)}{\textrm{d}t} & =-\Pr\left(\mbox{type }r_{i,j,k}|\mbox{IER1 node}\right)(i+j+k)+\\
& \sum_{i'=0}^{d_{v}}\sum_{j'=0}^{d_{v}}\sum_{k'=0}^{d_{v}}\Pr\left(\mbox{type }r_{i',j',k'}|\mbox{IER1 node}\right)\left(i'u'_{i,j,k}(t)+j'v'_{i,j,k}(t)+k'w'_{i,j,k}(t)\right).\end{align*}
The program we use to perform these computations and find the LM2-NB
threshold is available online at http://ece.tamu.edu/\textasciitilde{}hpfister/software/lm2nb\_threshold.m~.
\subsection{Accuracy of the Analysis}
Consider the peeling decoder for the BEC introduced in \cite{my_ref:luby_and_mitz}.
Throughout the decoding process, one reveals and then removes edges
one at a time from a hidden random graph. The analysis of this decoder
is simplified by the fact that, given the current residual degree
distribution, the unrevealed portion of the graph remains uniform
for every decoding trajectory. In fact, one can build a finite-length
decoding simulation that never constructs the actual decoding graph. Instead,
it tracks only the residual degree distribution of the graph and implicitly
chooses a random decoding graph one edge at a time.
For asymptotically long codes, \cite{my_ref:luby_and_mitz} used this approach
to derive an analysis based on differential equations. This analysis
is actually quite general and can also be applied to other peeling-style
decoders in which the unrevealed graph is not uniform. One may observe
this from its proof of correctness, which depends only on two important
observations. First, the distribution of all decoding paths is concentrated
very tightly around its average as the system size increases. Second,
the expected change in the decoder state can be written as a Lipschitz
function of the current decoder state. If one augments the decoding
state to include enough information so that the expected change can
be computed from the augmented state (even for non-uniform residual
graphs), then the theorem still applies.
The differential equation computes the average evolution, over all
random bipartite graphs, of the system parameters as the block length
$n$ goes to infinity. In contrast, the numerical simulation of long
codes gives the evolution of the system parameters of a particular
code (a particular bipartite graph) as $n$ goes to infinity. To prove
that the differential equation analysis precisely predicts the evolution
of the system parameters of a particular code, one must show that the
evolution of the system parameters of a particular code concentrates
around the ensemble average as $n$ goes to infinity.
In the LM2-NB algorithm, one node is removed at a time, but this can also
be viewed as removing its edges sequentially. The main difference
for the LM2-NB algorithm is that we have more edge types and we track
some details of the edge types on both the check nodes and the variable
nodes. This causes a significant problem in the analysis because updating
the exact effect of edge removal requires revealing some edges before
they will be removed. For example, the CER operation can cause an
IER2 edge to become an IER1 edge, but revealing the adjacent symbol
node (or type) renders the analysis intractable.
Unfortunately, our proof of correctness still relies on two unproven
assumptions, which we state as conjectures. This section leverages the
framework of \cite{my_ref:luby_and_mitz,my_ref:diffeqn,wormald_de_1997}
by describing only the new discrete-time random process $H_t$ associated
with our analysis.
We first introduce the definitions of the random process. In this subsection,
we use $t$ to represent the discrete time. We
follow the same notation used in \cite{my_ref:luby_and_mitz}. Let the life span of
the random process be $\alpha_0 n$. Let $\Omega$ denote a probability space
and $S$ be a measurable space of observations. A discrete-time random
process over $\Omega$ with observations $S$ is a sequence $Q\triangleq(Q_{0},Q_{1},\dots)$
of random variables where $Q_{t}$ contains the information revealed
at $t$-th step. We denote the history of the process up to time $t$
as $H_{t}\triangleq(Q_{0},Q_{1},\dots,Q_{t}).$ Let $S^{+}:=\cup_{i\ge1}S^{i}$
denote the set of all histories and $\mathcal{Y}$ be the set of all
decoder states. One typically uses a state space that tracks the number
of edges of a certain type (e.g., the degree of the attached nodes).
We define the random process as follows. The total number of edges connected to
IVN's with type $r_{i,j,k}$ at time $t$ is denoted $R_{i,j,k}(t)$ and the total number of edges connected to
check nodes with type $n_{i,j}$ is $N_{i,j}(t)$.
The main difference from the BEC analysis is that we track the conditional average $\bar{R}_{i,j,k}(t)\triangleq E[R_{i,j,k}(t)|H_t]$ rather than the exact value of the node degree distribution.
Let $R(t)$, $\bar{R}(t)$, and $N(t)$ be vectors of random variables formed by including all valid $i,j,k$ tuples for each variable.
Using this, the decoder state at time $t$ is given by $Y_{t}\triangleq\{N(t),\bar{R}(t)\}$.
To connect this with \cite[Theorem~5.1]{wormald_de_1997}, we define the
history of our random process as follows. In the beginning of the
decoding, we label the variable/check nodes by their degrees. When the
decoder removes an edge, the revealed information $Q_{t}$ contains
the degree of the variable node and the type of the check node to which the removed
edge is connected. We note that sometimes the edge-removal operation changes the
type of an unremoved edge on that check node. In this case, $Q_{t}$
also contains the type of the check node to which this CN-reflecting edge is
connected. However, $Q_{t}$ does not contain any information about the
IVN to which this CN-reflecting edge is connected. By defining the
history in this manner, $Y_t$ is a deterministic function of $H_{t}$
and can be made to satisfy the conditions of \cite[Theorem~5.1]{wormald_de_1997}.
The following conjecture, which basically says that $R_{i,j,k}(t)$ concentrates, encapsulates
one of the unproven assumptions needed to establish the correctness of this analysis.
\begin{conject}
$\lim_{n\rightarrow \infty}\Pr\left(\sup_{0\le t\le \alpha_0 n}\left|\bar{R}_{i,j,k}(t)-R_{i,j,k}(t)\right|\ge n^{5/6}\right)=0$ holds for all $\{i,j,k:0\le i\le d_v,0\le j\le d_v,0\le k\le d_v,0\le i+j+k\le d_v\}.$
\end{conject}
The next observation is that the expected drift $E[Y_{t+1}-Y_t|H_t]$ can be
computed exactly in terms of $R(t)$ if the four edge-type permutations
are uniform. However, only $\bar{R}(t)$ can be computed exactly from $H_t$.
Let $f(Y_t)$ denote the expected drift under the uniform assumption using
$\bar{R}(t)$ instead of $R(t)$. Since $R(t)$ is concentrated around
$\bar{R}(t)$, by assumption, and $f$ is Lipschitz, this is not the main
difficulty. Instead, the uniform assumption is problematic, and
the following conjecture sidesteps the problem by assuming that
the true expected drift $E[Y_{t+1}-Y_t|H_t]$ is asymptotically equal
to $f(Y_t)$.
\begin{conject}
$\lim_{n\rightarrow \infty}\Pr\left( \sup_{0\le t\le \alpha_0 n}\lVert E[Y_{t+1}-Y_t|H_t] - f(Y_t) \rVert_{\infty} \ge n^{5/6}\right)=0$.
\end{conject}
If these conjectures hold true, then \cite[Theorem~5.1]{wormald_de_1997}
can be used to show that the differential equation correctly models the
expected decoding trajectory and actual realizations concentrate tightly
around this expectation. In particular, we find that $N_{i,j}(t)$ concentrates
around $n_{i,j}(t/n)$ and both $\bar R_{i,j,k}(t)$ and $R_{i,j,k}(t)$ concentrate
around $r_{i,j,k}(t/n)$. Empirically, these conclusions are supported by a
large number of simulations.
\end{subsection}
\end{section}
\section{Error Floor Analysis of LMP Algorithms}
\label{sec:err_flr}
During the simulation of the optimized ensembles of Table \ref{tab:opt_our},
we observed an error floor that warranted more attention. While one might expect a floor due to finite $q$ effects, the simulation uses $q$ large enough so that no FV's were observed in the error floor
regime. Instead, the error floor is due to the event that some symbols remain unverified when the decoding
terminates. This motivates us to analyze the error floor of LMP algorithms. We point
out that, when $q$ is relatively small, error floors are caused by a mixture of
several effects, such as type-I FV, type-II FV (which
we will discuss later), and
the event that some symbols remain unverified when the decoding terminates.
These effects are coupled together and affect each other.
However, this regime is not our main interest, for three reasons.
The first is that it does not match our simulation setting (e.g., the error floors
observed in the simulation are not caused by FV). The second is that, in practice, one wants the
assumption ``the verified symbols are correct with high probability'' to hold so that verification-based algorithms
work well and the analysis remains valid. This can be done by picking large
enough $q$, as we did in the simulation. Note that, if FV has a significant
impact on the algorithms, then both the density evolution analysis and the differential equation
analysis break down and the computed thresholds are no longer correct.
The last reason is simplicity of analysis: when $q$ is large, one can
analyze the error floor caused by each effect separately since they are not coupled.
We note that, even though the error floor is not
caused by FV, we still provide an analysis of FV for the sake of completeness. This analysis
helps us understand why the dominant error events caused by FV can be avoided by increasing $q$. It
is derived by considering each effect separately.
\subsection{The Union Bound for ML Decoding}
First, we derive the union bound on the probability of error with ML decoding
for the $q$-SC. To match our simulations with the union bounds, we
expurgate (i.e., ignore) all codeword weights that have an expected
multiplicity less than 1.
First, we summarize a few results from \cite[p.~497]{RU-2008}
that characterize the low-weight codewords of LDPC codes with degree-2
variable nodes. When the block length $n$ is large, all of these low-weight
codewords are caused, with high probability, by short cycles of degree-2
nodes. For binary codes, the number of codewords with weight $k$
is a random variable which converges to a Poisson distribution with
mean $\frac{\left(\lambda_{2}\rho^{'}(1)\right)^{k}}{2k}$. When the
channel quality is high (i.e., high SNR, low error/erasure rate),
the probability of ML decoding error is mainly caused by low-weight
codewords.
For non-binary $GF(q)$ codes, a codeword is supported on a cycle
of degree-2 nodes only if the product of the edge weights is 1. This
occurs with probability $1/(q-1)$ if we choose i.i.d. uniform
random edge weights for the code. Hence, the number of $GF(q)$ codewords
of weight $k$ is a random variable, denoted $B_{k}$, which converges
to a Poisson distribution with mean $b_{k}=\frac{\left(\lambda_{2}\rho^{'}(1)\right)^{k}}{2k(q-1)}$.
After expurgating weights that have an expected multiplicity less
than 1, $k_{1}=\min\{k\geq1:b_{k}\ge1\}$ becomes the minimum
codeword weight. An upper bound on the pairwise error probability (PEP) of the $q$-SC with error probability
$p$ is given by the following lemma.
\begin{lemma}
\label{lemma3} Let $y$ be the received symbol sequence assuming the
all-zero codeword was transmitted. Let $u$ be any codeword with exactly
$k$ non-zero symbols. Then, the probability that the ML decoder chooses $u$ over
the all-zero codeword is upper bounded by
\[p_{2,k} \leq \left(p\frac{q-2}{q-1} +\sqrt{\frac{4p(1-p)}{q-1}} \right)^k.\]
\end{lemma}
\begin{IEEEproof} See Appendix C. \end{IEEEproof}
\begin{remark} Notice that $b_{k}$ is exponential in $k$ and the
PEP is also exponential in $k$. The union bound for the frame error
rate, due to low-weight codewords, can be written as \[
P_{B}\le\sum_{k=k_{1}}^{\infty}b_{k}p_{2,k}.\]
It is easy to see $k_{1}=\Omega(\log q)$ and the sum is dominated by
the first term $b_{k_{1}}p_{2,k_{1}}$ which has the smallest exponent.
When $q$ is large, the PEP upper bound is on the order of $O\left(p^{k}\right)$.
Therefore, the order of the union bound on the frame error rate with ML
decoding is \[
P_{B}=O\left(\frac{\left(\lambda_{2}\rho^{'}(1)p\right)^{\log q}}{q\log q}\right)\]
and the expected number of symbols in error is \[
O\left(\frac{\left(\lambda_{2}\rho^{'}(1)p\right)^{\log q}}{q}\right),\] if $p\lambda_2\rho'(1)<1.$
\end{remark}
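To illustrate, the expurgated union bound of the remark can be evaluated numerically. In the sketch below (Python), the values of $\lambda_{2}$, $\rho'(1)$, $q$ and $p$ are assumed purely for illustration; \texttt{b(k)} is the Poisson mean $b_{k}$ and \texttt{pep} is the base of the PEP bound from Lemma~\ref{lemma3}.

```python
from math import sqrt

lam2 = 0.3      # lambda_2 (assumed for illustration)
rho_p1 = 5.0    # rho'(1) (assumed)
q = 2 ** 16     # field size (assumed)
p = 0.05        # q-SC error probability (assumed)

def b(k):
    # Poisson mean of the number of weight-k codewords on degree-2 cycles
    return (lam2 * rho_p1) ** k / (2 * k * (q - 1))

# Base of the PEP upper bound from the lemma
pep = p * (q - 2) / (q - 1) + sqrt(4 * p * (1 - p) / (q - 1))

# k_1: smallest weight with expected multiplicity at least 1
k1 = next(k for k in range(1, 10_000) if b(k) >= 1)

# Truncated union bound on the frame error rate; the terms decay
# geometrically here because lam2 * rho_p1 * pep < 1
P_B = sum(b(k) * pep ** k for k in range(k1, k1 + 200))
print(k1, P_B)
```

As the remark indicates, the sum is dominated by its first term, so the truncation error is negligible for these parameter values.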
\subsection{Error Analysis for LMP Algorithms}
\label{false_v}
The errors of LMP algorithms come from two types of decoding failure.
The first type of decoding failure is due to unverified symbols. The
second one is caused by FV. To understand
the performance of LMP algorithms, we analyze these types of failure
separately. Note that, when we analyze each error type,
we neglect their interaction for simplicity.
The FV's can be classified into two types. The first type is, as \cite{my_ref:luby_and_mitz}
mentions, when the error magnitudes in a single check sum to zero;
we call this type-I FV. For single-element lists, it occurs with probability
roughly $1/q$ (i.e., the chance that two uniform random symbols are
equal). For multiple lists with multiple entries, we analyze the FV
probability under the assumption that no list contains the correct
symbol. In this case, each list is uniform on the $q-1$ incorrect
symbols. For $m$ lists of size $s_{1},\ldots,s_{m}$, the type-I FV
probability is given by $1-\binom{q-1}{s_{1},s_{2},\cdots,s_{m}}\big/\prod_{i=1}^{m}\binom{q-1}{s_{i}}$.
In general, the birthday paradox applies and the FV probability is
roughly $s^{2}\binom{m}{2}/q$ for large $q$ and equal-size lists.
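The exact expression and its birthday-paradox approximation are easy to compare numerically. In the sketch below (Python; the values of $q$, $m$ and $s$ are assumed for illustration), the number of ways to draw pairwise-disjoint lists from the $q-1$ incorrect symbols is accumulated sequentially, which equals the multinomial coefficient in the exact formula.

```python
from math import comb, prod

q, m, s = 2 ** 10, 3, 4   # field size, number of lists, list size (assumed)
sizes = [s] * m

# Ways to draw pairwise-disjoint lists from the q-1 incorrect symbols
disjoint = 1
remaining = q - 1
for si in sizes:
    disjoint *= comb(remaining, si)
    remaining -= si

# Total ways to draw the m lists independently
total = prod(comb(q - 1, si) for si in sizes)
p_fv_exact = 1 - disjoint / total

# Birthday-paradox approximation for large q and equal-size lists
p_fv_approx = s ** 2 * comb(m, 2) / q
print(p_fv_exact, p_fv_approx)
```

For these values the two quantities agree to within about $10^{-3}$, consistent with the large-$q$ approximation.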
The second type of FV is that messages become more and more correlated
as the number of iterations grows, so that an incorrect message may
go through different paths and return to the same node. We denote
this kind of FV as a type-II FV.
Note that these are two different types of FV and one does not affect
the other. We cannot avoid type-II FV by increasing $q$ without randomizing the edge weights, and we cannot
avoid type-I FV by constraining the number of decoding iterations
to be within half of the girth (or by increasing the girth). Fig.~\ref{fig2}
shows an example of type-II FV. In Fig.~\ref{fig2}, there is an
8-cycle in the graph and we assume the variable node on the right
has an incorrect incoming message {}``$a$''. Assume that the all-zero
codeword is transmitted, all the incoming messages at each variable
node are not verified, the list size is less than $S_{max}$, and
each incoming message at each check node contains the correct symbol.
In this case, the incorrect symbol will travel along the cycle and
cause FV's at all variable nodes along the cycle. If the characteristic
of the field is 2, there are a total of $c/2$ FV's occurring along
the cycle, where $c$ is the length of the cycle. This type of FV
can be reduced significantly by choosing each non-zero entry in the
parity-check matrix randomly from the non-zero elements of Galois
field. In this case, a cycle causes a type-II FV only if the product
of the edge-weights along that cycle is 1. Therefore, we suggest choosing
the non-zero entries of the parity-check matrix randomly to mitigate
type-II FV. Recall that the idea to use non-binary elements in the
parity-check matrix appears in the early works on the LDPC codes over
$GF(q)$ \cite{my_ref:Davey}.
\subsection{An Upper Bound on the Probability of Type-II FV on Cycles}
In this subsection, we analyze the probability of error caused by
type-II FV. Note that type-II FV occurs only when the depth-$2k$
directed neighborhood of an edge (or a node) has cycles. But type-I
FV occurs at every edge (or node). The order of the probability that
type-I FV occurs is approximately $O(1/q)$ \cite{my_ref:luby_and_mitz}.
The probability of type-II FV is hard to analyze because it depends
on $q$, $S_{max}$ and $k$ in a complicated way. But an upper bound
on the probability of type-II FV is derived in this section.
Since the probability of type-II FV is dominated by short cycles of
degree-2 nodes, we only analyze type-II FV along cycles of degree-2
nodes. As we will soon see, the probability of type-II FV is exponential
in the length of the cycle. So, the error caused by type-II FV on
cycles is dominated by short cycles. We also assume $S_{max}$ to
be large enough such that an incorrectly received value can pass around
a cycle without being truncated. This assumption makes our analysis
an upper bound. Another condition required for an incorrectly received
value to participate in a type-II FV is that the product of the edge
weights along the cycle is 1. If we assume that almost all edges not
on the cycle are verified, then once any edge on the cycle is verified,
all edges will be verified in the next $k$ iterations. So we also
assume that nodes along a cycle are either all verified or all unverified.
We note that there are three possible patterns of verification on
a cycle, depending on the received values. The first case is that
all the nodes are received incorrectly. As mentioned above, the incorrect
value passes around the cycle without being truncated, comes back to
the node again and falsely verifies the outgoing messages of the node.
So all messages will be falsely verified (if they are all received
incorrectly) after $k$ iterations. Note that this happens with probability
$\frac{1}{q-1}p^{k}.$ The second case is that all messages are verified
correctly, i.e., no FV occurs. Note that this does not require
all the nodes to have correctly received values. For example, if any
pair of adjacent nodes are received correctly, it is easy to see all
messages will be correctly verified. The last case is that every
pair of adjacent nodes contains at least one incorrectly received node
and there is at least one node with a correctly received value on the cycle.
In this case, all messages will be verified after $k$ iterations,
i.e., messages from correct nodes are verified correctly and those
from incorrect nodes are falsely verified. Then the verified messages
will propagate and half of the messages will be verified correctly
and the other half will be falsely verified. Note that this happens
with probability $\frac{1}{q-1}2\left(p^{k/2}-p^{k}\right)\approx\frac{2p^{k/2}}{q-1}$
and this approximation gives an upper bound even if we combine the
previous $\frac{1}{q-1}p^{k}$ term.
Recall that the number of cycles with length $k$ converges to a Poisson
with mean $\frac{\left(\lambda_{2}\rho'(1)\right)^{k}}{2k}.$ Using
the union bound, we can upper bound the ensemble average probability
of any type-II FV event as \[
\Pr(\mbox{any type-II FV})\le\sum_{k=k_{1}}^{\infty}\frac{\left(\lambda_{2}\rho^{'}(1)\right)^{k}}{2k(q-1)}2p^{\frac{k}{2}}=\sum_{k=k_{1}}^{\infty}\frac{\left(\lambda_{2}\rho^{'}(1)\sqrt{p}\right)^{k}}{k(q-1)}.\]
The ensemble average number of nodes involved in type-II FV events
is given by \[
\mbox{E}\left[\mbox{symbols in type-II FV}\right]\le\sum_{k=k_{1}}^{\infty}\frac{\left(\lambda_{2}\rho^{'}(1)\right)^{k}}{2k(q-1)}2kp^{\frac{k}{2}}=\sum_{k=k_{1}}^{\infty}\frac{\left(\lambda_{2}\rho^{'}(1)\sqrt{p}\right)^{k}}{(q-1)}.\]
The upper bound on the frame error rate of type-II FV is on the order
of $O\left(\frac{\left(\lambda_{2}\rho^{'}(1)\sqrt{p}\right)^{\log q}}{(q-1)\log q}\right)$
and the upper bound on the ensemble average number of nodes in type-II
FV symbol is on the order of $O\left(\frac{\left(\lambda_{2}\rho^{'}(1)\sqrt{p}\right)}{(q-1)}\right)$.
Notice that both bounds are decreasing functions of $q$.
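For concreteness, the two bounds just derived can be evaluated numerically. The sketch below (Python; $\lambda_{2}$, $\rho'(1)$, $q$, $p$ and the minimum cycle length are all assumed for illustration) computes truncated versions of both sums, which converge geometrically whenever $\lambda_{2}\rho'(1)\sqrt{p}<1$.

```python
from math import sqrt

lam2, rho_p1, q, p = 0.3, 5.0, 2 ** 10, 0.05   # assumed for illustration
x = lam2 * rho_p1 * sqrt(p)   # geometric ratio of the summands; x < 1 here
k_min = 4                     # shortest degree-2 cycle length (assumed)

# Truncated versions of the two type-II FV union bounds
pr_any_fv = sum(x ** k / (k * (q - 1)) for k in range(k_min, k_min + 500))
e_symbols = sum(x ** k / (q - 1) for k in range(k_min, k_min + 500))
print(pr_any_fv, e_symbols)
```

Doubling $q$ roughly halves both quantities, matching the observation that the bounds are decreasing in $q$.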
\subsection{An Upper Bound on the Probability of Unverification on Cycles}
In the simulation of the optimized ensembles from Table \ref{tab:opt_our},
we observe significant error floors and all the error events are caused by some unverified symbols when the decoding terminates.
In this subsection, we derive the union bound on the probability
of decoder failure caused by symbols on short cycles that never
become verified. We call this event \emph{unverification} and denote it by UV. As described
above, to match the settings of the simulation and simplify the analysis, we assume $q$ is large enough to have arbitrarily small
probability of both type-I and type-II FV. In this case, the error
is dominated by the unverified messages because the following analysis
shows that the union bound on the probability of unverification is
independent of $q$.
In contrast to type-II FV, the unverification event does not require cycles;
i.e., unverification can occur even on subgraphs without cycles. But
in the low error-rate regime, the dominant unverification events occur
on short cycles of degree-2 nodes. Therefore, we only analyze the
probability of unverification caused by short cycles of degree-2 nodes.
Consider a degree-2 cycle of length $k$ and assume that no FV occurs
in the neighborhood of this cycle. Assuming the maximum list size
is $S_{max}$, the condition for UV
is that there is at most one correctly received value among any $S_{max}+1$
adjacent variable nodes. Note that we don't consider type-II FV since
type-II FV occurs with probability $\frac{1}{q-1}$ and we can choose
$q$ to be arbitrarily large. On the other hand, unverification does
not require the product of the edge weights on a cycle to be 1, so
we cannot mitigate it by increasing $q$. So the union bound on the
probability of unverification on a cycle with length $k$ is \[
P_{U}\le\sum_{k\ge k_{2}}^{\infty}\frac{\left(\lambda_{2}\rho^{'}(1)\right)^{k}}{2k}\phi(S_{max},p,k)\]
where $k_{2}=\arg\min_{k\geq1}\frac{\left(\lambda_{2}\rho^{'}(1)\right)^{k}}{2k}\ge1$
and $\phi(S_{max},p,k)$ is the UV probability which is given by the
following lemma.
\begin{lemma}
\label{lemma53} Let the cycle have length $k$, the maximum
list size be $s$, and the channel error probability be $p$. Then, the probability
of an unverification event on a degree-2 cycle of length-$k$
is $\phi(s,p,k)=\textrm{Tr}\left(B^{k}(p)\right)$ where
$B(p)$ is the $(s+1)$ by $(s+1)$ matrix
\begin{equation}
B(p)=\left[\begin{array}{cccccc}
p & 1-p & 0 & 0 & \cdots & 0\\
0 & 0 & p & 0 & \cdots & 0\\
0 & 0 & 0 & p & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & 0 & \cdots & p\\
p & 0 & 0 & 0 & \cdots & 0\end{array}\right].\label{statematrix}\end{equation}
\end{lemma}
\begin{IEEEproof} See Appendix D. \end{IEEEproof}
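The trace formula in Lemma \ref{lemma53} is easy to check numerically. The sketch below (with illustrative parameters; 0 denotes a correct symbol and 1 an error) builds the transfer matrix $B(p)$ of (\ref{statematrix}) and compares $\textrm{Tr}\left(B^{k}(p)\right)$ with a brute-force enumeration of the cyclic error patterns satisfying the UV condition.

```python
# Numerical check of the lemma (a sketch with illustrative parameters):
# phi(s, p, k) = Tr(B^k(p)) versus brute-force enumeration of the cyclic
# error patterns (0 = correct symbol, 1 = error) satisfying the UV condition.
from itertools import product

def transfer_matrix(s, p):
    """(s+1) x (s+1) matrix B(p): state 0 is the 'free' state, state i > 0
    means the last i symbols were one correct symbol then i-1 errors."""
    B = [[0.0] * (s + 1) for _ in range(s + 1)]
    B[0][0] = p          # stay free by emitting another error
    B[0][1] = 1 - p      # emit the single allowed correct symbol
    for i in range(1, s):
        B[i][i + 1] = p  # pad the correct symbol with errors
    B[s][0] = p          # window cleared; return to the free state
    return B

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def phi_trace(s, p, k):
    """phi(s, p, k) = Tr(B^k)."""
    M = transfer_matrix(s, p)
    R = M
    for _ in range(k - 1):
        R = mat_mul(R, M)
    return sum(R[i][i] for i in range(s + 1))

def phi_brute(s, p, k):
    """Sum the probabilities of all cyclic length-k patterns having at most
    one correct symbol in every window of s+1 adjacent nodes."""
    total = 0.0
    for z in product([0, 1], repeat=k):
        if all(sum(1 - z[(i + j) % k] for j in range(s + 1)) <= 1
               for i in range(k)):
            w = sum(z)  # number of errors in the pattern
            total += p ** w * (1 - p) ** (k - w)
    return total
```

For $s=1$ and $k=2$ the valid patterns are exactly those with at most one correct symbol, so $\phi=p^{2}+2p(1-p)$, which the trace reproduces.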
Finally, the union bound on the average number of symbols involved
in unverification events is \begin{equation}
\mbox{E}\left[\mbox{unverified symbols}\right]\le\sum_{k\ge k_{2}}^{\infty}\frac{\left(\lambda_{2}\rho^{'}(1)\right)^{k}}{2}\phi(S_{max},p,k).\label{ub_unver}\end{equation}
Note that if we have to choose some small $q$ and we need to consider type-II FV, then the union bound $P_U$ can be easily rewritten as
\[
P_{U}\le\sum_{k\ge k_{2}}^{\infty}\frac{\left(\lambda_{2}\rho^{'}(1)\right)^{k}(q-2)}{2k(q-1)}\phi(S_{max},p,k)\]
since, if $s$ is larger than half of the length of the cycle, all symbols
will always be verified whenever the product of the weights on the edges equals 1.\footnote{When $s$ is not large enough, this analysis
provides an upper bound.}
Hence the necessary conditions for unverification are the UV condition mentioned above and that the product of the weights
on the edges does not equal 1. The union bound on the average number of symbols involved
in unverification events is \begin{equation}
\mbox{E}\left[\mbox{unverified symbols}\right]\le\sum_{k\ge k_{2}}^{\infty}\frac{\left(\lambda_{2}\rho^{'}(1)\right)^{k}(q-2)}{2(q-1)}\phi(S_{max},p,k).\label{ub_unver2}\end{equation}
Looking at (\ref{ub_unver}) and (\ref{ub_unver2}), we can see that the average number of unverified symbols scales exponentially with $k$. An ensemble with larger $\lambda_{2}\rho^{'}(1)$ has
more short degree-2 cycles and more unverified symbols on average. The average number of unverified symbols depends on the maximum list size $S_{max}$ in a complicated way. Intuitively,
if $S_{max}$ is larger, then the constraint that ``there is at most one correct symbol along $S_{max}+1$ adjacent variable nodes'' becomes stronger, since we assume the probability of
seeing a correct symbol is higher than that of seeing an incorrect symbol. Therefore, unverification is less likely to happen, and the average number of unverified symbols decreases as $S_{max}$ increases.
Note that (\ref{ub_unver}) does not depend on $q$ and (\ref{ub_unver2}) depends on $q$ weakly.
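As a sanity check, the truncated union bound (\ref{ub_unver}) can be evaluated directly. In the sketch below, the values of $\lambda_{2}\rho^{'}(1)$, $S_{max}$, $p$, $k_{2}$, and the truncation depth are illustrative assumptions, not the optimized ensembles of Table \ref{tab:opt_our}.

```python
# Truncated evaluation of the union bound (ub_unver) on the expected number
# of unverified symbols (a sketch; lam2_rho1, s_max, p, k2 and the truncation
# depth kmax are illustrative assumptions, not values from the paper).
import numpy as np

def phi(s, p, k):
    # phi(s, p, k) = Tr(B^k) with B the (s+1) x (s+1) matrix of the lemma
    B = np.zeros((s + 1, s + 1))
    B[0, 0], B[0, 1] = p, 1 - p
    for i in range(1, s):
        B[i, i + 1] = p
    B[s, 0] = p
    return np.trace(np.linalg.matrix_power(B, k))

def unverified_symbol_bound(lam2_rho1, s_max, p, k2, kmax=100):
    # sum_{k = k2}^{kmax} (lambda2 * rho'(1))^k / 2 * phi(s_max, p, k)
    return sum(lam2_rho1 ** k / 2 * phi(s_max, p, k)
               for k in range(k2, kmax + 1))

floor_est = unverified_symbol_bound(lam2_rho1=2.0, s_max=8, p=0.05, k2=2)
```

Even with $\lambda_{2}\rho'(1)>1$, the sum converges because $\phi$ decays geometrically in $k$ for small $p$.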
\begin{figure}[t]
\centering
\includegraphics[width=0.35\columnwidth]{fig2.eps}
\caption{An example of type-II FV's.}\label{fig2}
\end{figure}
\begin{small} %
\begin{table}[b]
\caption{Optimization Results for LMP Algorithms (rate 1/2)}
\label{tab:opt_our} \centering \begin{tabular}{|c|c|c|c|}
\hline
\textbf{Alg.} & $\boldsymbol{\lambda}\mathbf{(x)}$ & $\boldsymbol{\rho}\mathbf{(x)}$ & $\mathbf{p^{*}}$ \tabularnewline
\hline LMP-1 & $.1200x\!+\!.3500x^{2}\!+\!.0400x^{4}\!+\!.4900x^{14}$ & $x^{8}$ & .2591 \tabularnewline
\hline LMP-1 & $.1650x\!+\!.3145x^{2}\!+\!.0085x^{4}\!+\!.2111x^{14}\!+\!.0265x^{24}\!+\!.0070x^{34}\!+\!.2674x^{49}$ & $.0030x^{2}\!+\!.9970x^{10}$ & .2593 \tabularnewline
\hline
LMP-8 & $.32x\!+\!.24x^{2}\!+\!.26x^{8}\!+\!.19x^{14}$ & $.02x^{4}\!+\!.82x^{6}\!+\!.16x^{8}$ & .288
\tabularnewline
\hline
LMP-32 & $.40x\!+\!.20x^{3}\!+\!.13x^{5}\!+\!.04x^{8}\!+\!.23x^{14}$ & $.04x^{4}\!+\!.96x^{6}$ & .303 \tabularnewline
\hline
LMP-$\infty$ & $.34x\!+\!.16x^{2}\!+\!.21x^{4}\!+\!.29x^{14}$ & $x^{7}$ & .480 \tabularnewline
\hline
LM2-MB & $.2x\!+\!.3x^{3}\!+\!.05x^{5}\!+\!.45x^{11}$ & $x^{8}$ & .289 \tabularnewline
\hline
\end{tabular}
\end{table}
\end{small}
One might expect that the stability condition of the LMP-$S_{max}$ decoding algorithms
can be used to analyze the error floor. Actually, one can show that the stability condition for LMP-$S_{max}$ decoding
of irregular LDPC codes is identical to that of the BEC, which is
$p \lambda_2 \rho'(1) < 1$.
This is of little help for predicting the error floor, though, because
for codes with degree-2 nodes, the error floor is determined
mainly by short cycles of degree-2 nodes.
A finite number of degree-2 cycles is instead predicted by the condition
$\lambda_2 \rho'(1) < 1$.
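The stability condition is straightforward to check for a given degree distribution pair; the following is a minimal helper of our own, not code from the paper.

```python
# Checking the stability condition p * lambda2 * rho'(1) < 1 for a degree
# distribution pair (our own helper sketch, not code from the paper).

def stable(p, lam2, rho_coeffs):
    """rho_coeffs maps check-node degree d to the coefficient of x^(d-1),
    so rho'(1) = sum_d rho_d * (d - 1)."""
    rho_prime_1 = sum(c * (d - 1) for d, c in rho_coeffs.items())
    return p * lam2 * rho_prime_1 < 1

# (3,6) regular ensemble: lambda2 = 0 (no degree-2 variable nodes),
# so the condition holds for any p.
ok = stable(0.21, 0.0, {6: 1.0})
```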
\section{Comparison and Optimization}
\label{sec:cmp_opt}
In this section, we compare the proposed algorithm with maximum list
size $S_{max}$ (LMP-$S_{max}$) with other message-passing decoding algorithms
for the $q$-SC. We note that the LM2-MB algorithm is identical to
SW1 for any code ensemble because the decoding rules are the same.
LM2-MB, SW1 and LMP-1 are identical for (3,6) regular LDPC codes because
the list size is always 1 and erasures never happen in LMP-1 for (3,6)
regular LDPC codes. The LMP-$\infty$ algorithm is identical to
SW2.
There are two important differences between the LMP algorithm and
previous algorithms: (i) erasures and (ii) FV recovery. The LMP algorithm
passes erasures because, with a limited list size, it is better to
pass an erasure than to keep unlikely symbols on the list. The LMP
algorithm also detects FV events and passes an erasure if they cause
disagreement between verified symbols later in decoding, and can sometimes
recover from a FV event. LM1-NB and LM2-NB fix the status of a variable
node once it is verified and pass the verified value in all following
iterations.
The results in \cite{my_ref:luby_and_mitz} and \cite{my_ref:Shokwangpersonal}
also do not consider the effects of type-II FV. These FV events degrade
the performance in practical systems with moderate block lengths, and
therefore we use random entries in the parity-check matrix to mitigate
these effects.
Using the DE analysis of the LMP-$S_{max}$ algorithm, we can improve the
threshold by optimizing the degree distribution pair $(\lambda,\rho)$.
Since the DE recursion is not one-dimensional, we use differential
evolution to optimize the code ensembles \cite{Storn-jgo97}. In Table~\ref{tab:opt_our},
we show the results of optimizing rate-$\frac{1}{2}$ ensembles for
LMP with a maximum list size of 1, 8, 32, and $\infty$. Thresholds
for LM1 and LM2-NB/MB with rate 1/2 are also shown. In all but one
case, the maximum variable-node degree is 15 and the maximum check-node
degree is 9. The second table
entry allowed for larger degrees (in order to improve performance)
but very little gain was observed. We can also see that there is a
gain of between 0.05 and 0.07 over the thresholds of (3,6) regular
ensemble with the same decoder.
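A quick consistency check on Table \ref{tab:opt_our} is the design rate $1-\int_{0}^{1}\rho(x)\,dx/\int_{0}^{1}\lambda(x)\,dx$. The sketch below evaluates it for the LM2-MB entry; the coefficients are transcribed from the table, and rounding explains the small deviation from $1/2$.

```python
# Design-rate check for a table entry: rate = 1 - int(rho) / int(lambda),
# where int(lambda) = sum_d lambda_d / d over node degrees d. Coefficients
# below are the LM2-MB row; table rounding explains the deviation from 1/2.

def design_rate(lam, rho):
    """lam and rho map edge degree d to the coefficient of x^(d-1)."""
    int_lam = sum(c / d for d, c in lam.items())
    int_rho = sum(c / d for d, c in rho.items())
    return 1 - int_rho / int_lam

lm2_mb_lam = {2: 0.20, 4: 0.30, 6: 0.05, 12: 0.45}
lm2_mb_rho = {9: 1.0}
rate = design_rate(lm2_mb_lam, lm2_mb_rho)  # close to, but not exactly, 0.5
```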
\section{Simulation Results}
\label{sec:sim}
In this section, we show the simulation results for (3,6) regular LDPC
codes using various decoding algorithms as well as the simulation
results for the optimized ensembles shown in Table \ref{tab:opt_our} with LMP algorithms
in Fig.~\ref{fig:sim36}. In the simulation of optimized ensembles,
we try different maximum list sizes and different finite fields. We
use notation {}``LMP$S_{max}$,$q$,ensemble'' to denote the simulation
result of LMP algorithm with maximum list size $S_{max}$, finite field
$GF(q)$ and the simulated ensemble. We choose the block length to be
100000. The parity-check matrices are chosen randomly without 4-cycles.
Each non-zero entry in the parity-check matrix is chosen uniformly
from $\textrm{GF}(q)\setminus\{0\}$. This allows us to keep the FV probability
low. The maximum number of decoding iterations is fixed to be 200
and more than 1000 blocks are run for each point. These results are
compared with the theoretical thresholds. Table \ref{tab:thresh36}
shows the theoretical thresholds of $(3,6)$ regular codes on the
$q$-SC for different algorithms and Table \ref{tab:opt_our} shows the thresholds
for the optimized ensembles. The numerical results match the theoretical
thresholds very well.
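The random-entry construction described above amounts to drawing each nonzero parity-check entry uniformly from $\textrm{GF}(q)\setminus\{0\}$. A minimal sketch, where plain integers stand in for field elements and the entry positions are illustrative:

```python
# Drawing each nonzero parity-check entry uniformly from GF(q) \ {0}
# (a sketch; plain integers stand in for field elements, and the
# positions are illustrative (row, column) pairs).
import random

def random_nonzero_entries(positions, q, seed=0):
    rng = random.Random(seed)
    return {pos: rng.randint(1, q - 1) for pos in positions}

entries = random_nonzero_entries([(0, 3), (1, 7), (2, 5)], q=256)
```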
\begin{small} %
\begin{table}[t]
\caption{Threshold vs. algorithm for the (3,6) regular LDPC ensemble}
\label{tab:thresh36} \centering \begin{tabular}{|c|c|c|c|c|c|c|}
\hline
LMP-1 & LMP-8 & LMP-32 & LMP-$\infty$ & LM1 & LM2-MB & LM2-NB \tabularnewline
\hline
.210 & .217 & .232 & .429 & .169 & .210 & .259 \tabularnewline
\hline
\end{tabular}
\end{table}
\end{small}
In the simulation results for (3,6) regular codes, we cannot see any
error floor because there is almost no FV in the simulation. LM2-NB
performs much better than the other list-size-1 algorithms for the (3,6)
regular ensemble. In the optimized ensembles, there are a large number of degree-2 variable nodes, which cause significant error floors. By evaluating (\ref{ub_unver}), the predicted
error floor caused by unverification is $1.6\times10^{-5}$ for the
optimized $S_{max}=1$ ensemble, $8.3\times10^{-7}$ for the optimized
$S_{max}=8$ ensemble, and $1.5\times10^{-6}$ for the optimized $S_{max}=32$
ensemble. From the results, we see the analysis of unverification
events matches the numerical results very well.
\section{Conclusions}
\label{sec:con}
In this paper, we discuss list-message-passing (LMP) decoding algorithms
for the $q$-ary symmetric channel ($q$-SC). It is shown that capacity-achieving
ensembles for the BEC achieve capacity on the $q$-SC when the list
size is unbounded and $q$ goes to infinity. Decoding thresholds are also calculated by density
evolution (DE). We also derive a new analysis for the node-based algorithms
described in \cite{my_ref:luby_and_mitz}. The causes of false verification
(FV) are analyzed and random entries in the parity-check matrix are
used to avoid type-II FV. Degree profiles are optimized for
the LMP decoder and reasonable gains are obtained. Finally, simulations
show that, with list sizes larger than 8, the proposed LMP algorithm
outperforms previously proposed algorithms. In the simulations, we observe a significant error floor
for the optimized code ensembles, caused by symbols that remain unverified when decoding terminates.
We also derive an analysis of this error floor that matches the simulation results very well.
While we focus on the $q$-SC in this work, there are a number of
other applications of LMP decoding that are also quite interesting.
For example, the iterative decoding algorithm described in \cite{my_ref:isit2006sarvotham}
for compressed sensing is actually the natural extension of LM1 to
continuous alphabets. For this reason, the LMP decoder may also be
used to improve the threshold of compressed sensing. This is, in some
sense, more valuable because there are a number of good coding schemes
for the $q$-SC, but few low-complexity near-optimal decoders for
compressed sensing. This extension is explored more thoroughly in \cite{zp-it-cs}.
\begin{figure}[t]
\centering
\includegraphics[width=0.3\columnwidth,viewport=150 10 450 600]{sim_results.eps}
\caption{Simulation results for (3,6) regular codes with block
length 100000.}\label{fig:sim36}
\end{figure}
\appendices
\begin{section}{Proof of Theorem \ref{thm1}}
\begin{IEEEproof} Given $p\lambda(1-\rho(1-x))<x$ for $x\in(0,1]$,
we start by showing that both $x_{i}$ and $y_{i}$ go to zero as
$i$ goes to infinity. To do this, we let $\alpha=\sup_{x\in(0,1)}\frac{1}{x}p\lambda(1-\rho(1-x))$
and note that $\alpha<1$ because $p<p^{*}$. It is also easy to see
that, starting from $x_{0}=1$, we have $x_{i}\leq\alpha^{i}$ and
$x_{i}\rightarrow0$. Next, we rewrite (\ref{eq3}) as \begin{align*}
y_{i+1} & =\frac{1}{p}x_{i+1}+p\left(\rho(1-x_{i})-\rho(1-y_{i})\right)\lambda'(1-\rho(1-x_{i}))\\
& \stackrel{(a)}{\leq}\frac{1}{p}\alpha^{i+1}+p\left(1-\rho'(1)\alpha^{i}-\rho(1-y_{i})\right)\left(\lambda_{2}+O(\alpha^{i})\right)\\
& \stackrel{(b)}{\leq}\frac{1}{p}\alpha^{i+1}+p\lambda(1-\rho(1-y_{i}))\left(1+O(\alpha^{i})\right)\\
& \stackrel{(c)}{\leq}\frac{1}{p}\alpha^{i+1}+\alpha y_{i}\left(1+O(\alpha^{i})\right),\end{align*}
where $(a)$ follows from $\rho(1-x)\leq1-\rho'(1)x$, $(b)$ follows
from $\lambda_{2}(1-\rho(1-y))\leq\lambda(1-\rho(1-y))$, and $(c)$
follows from $p\lambda(1-\rho(1-y))\leq\alpha y$. It is easy to verify
that $y_{i+1}<y_{i}$ as long as $y_{i}>\frac{\alpha^{i+1}}{p(1-\alpha(1+O(\alpha^{i})))}$.
Therefore, we find that $y_{i}\rightarrow0$ because the recursion
does not have any positive fixed points as $i\rightarrow\infty$.
Moreover, one can show that $y_{i}$ eventually decreases exponentially
at a rate arbitrarily close to $\alpha$.
Note that decoding errors arise for two reasons: a message may never
be verified, or a message may be falsely verified. Next, we show that the actual
performance of a code converges to the ensemble average exponentially
which means almost every code in a capacity-achieving ensemble has
capacity-achieving performance. Note that the concentration effect
and the decay of FV probability hold regardless of whether the error probability
of the decoder converges to zero or not.
We can prove that the performance of a particular code converges to
the threshold which is the average performance of a tree-like ensemble
in a way similar to \cite{my_ref:DE}, where the average is over the
graph ensemble $\left(\lambda(x),\rho(x)\right)$ and all the channel
inputs. There are two differences between our scenario and \cite{my_ref:DE}:
first, our algorithm passes a list of values with unbounded list size;
second, the graph may be irregular in our case. Here
we only outline the proof. We let $Z^{(l)}/E$
denote the fraction of \emph{unverified} messages at the $l$-th iteration,
where $E$ is the number of edges in the graph. Note that $Z^{(l)}$
denotes the number of \emph{incorrect} and \emph{erasure} messages
in \cite{my_ref:DE}. Following \cite{my_ref:DE}, we can break the failure
probability into a tree-like neighborhood term and a martingale
concentration term to get
\begin{align*}
\Pr\left(\left|\frac{Z^{(l)}(\mathbf{s})}{E}-y_{l}\right|\ge\epsilon\right)\le & \Pr\left(\left|\frac{Z^{(l)}(\mathbf{s})}{E}-\frac{\textrm{E}\left[Z^{(l)}(\mathbf{s})\right]}{E}\right|\ge\epsilon/2\right)\\
 & +\Pr\left(\left|\frac{\textrm{E}\left[Z^{(l)}(\mathbf{s})\right]}{E}-y_{l}\right|\ge\epsilon/2\right)
\end{align*}
where $\mathbf{s}$ is an arbitrary codeword chosen from ensemble
$\left(\lambda,\rho\right)$, $Z^{(l)}(\mathbf{s})$ is the random
variable that denotes the number of unverified variable-to-check messages
after $l$ decoding iterations. $E$ is the number of edges in the
graph. This means that the concentration bound consists of two parts: concentration from a particular
code to the ensemble with cycles, and concentration from an ensemble
with cycles to the tree-like ensemble. Since the proof of the
latter concentration and the proof of the probability of a tree-like
neighborhood are not limited to the specific decoding algorithm or
the definition of $Z^{(l)}$, the proof is omitted here. By forming
a Doob's martingale on the edge-exposure and applying Azuma's inequality,
we can prove the concentration from a particular code to the ensemble
in the same manner as \cite{my_ref:DE}. In our scenario, in the proof
of the bounded difference of the martingale, the right-hand side of
\cite[Eq. (16)]{my_ref:DE} should be half the cardinality of the depth-$2l$
directed neighborhood of $e$, i.e., $\frac{\left|\mathcal{\vec{N}}_{e}^{(2l)}\right|}{2}$.
The right-hand side of \cite[Eq. (17)]{my_ref:DE} should be $4\left|\mathcal{\vec{N}}_{e}^{(2l)}\right|$.
The $\beta$ in applying Azuma's inequality is $\sum_{k=1}^{E}\left(4\left|\mathcal{\vec{N}}_{e}^{(2l)}\right|\right)^{2}+\sum_{k=1}^{n}\left(4\left|\mathcal{\vec{N}}_{e}^{(2l)}\right|\right)^{2}$.
So far, we have proved that, for an arbitrarily small constant $\epsilon/2$,
there exist positive numbers $\beta$ and $\gamma$, such that if
$n>\frac{2\gamma}{\epsilon}$, then \[
\Pr\left(\left|\frac{Z^{(l)}(\mathbf{s})}{E}-y_{l}\right|\ge\epsilon\right)\le e^{-\beta\epsilon^{2}n}.\]
A similar proof can be found in \cite{my_ref:DE} (the
proof of Theorem 2) and \cite{my_ref:kavcic-capacity} (the proof
of Theorem 1); \cite{my_ref:DE} gives the proof for regular
code ensembles and \cite{my_ref:kavcic-capacity} extends it
to irregular code ensembles. So, for an arbitrary code $\mathbf{s}$
and an arbitrarily small quantity $\epsilon$, the fraction of unverified
messages is less than $\epsilon/2$ as $n$ goes to infinity.
In \cite{my_ref:DE} and \cite{my_ref:kavcic-capacity}, it is proved
that, when a code graph is chosen uniformly at random from all possible
graphs with degree distribution pair $(\lambda(x),\rho(x))$, \[
\Pr\left(\mbox{neighborhood of depth 2$l$ is not tree-like}\right)\le\frac{\gamma}{n}\]
where $\gamma$ is a constant independent of $n$. So, given $\epsilon$,
we can choose $n$ large enough such that the number of variable nodes
which are involved in cycles of length less than $2l$ is less than
$n\epsilon/2$ with probability arbitrarily close to one as $n$ goes
to infinity. So the probability of error caused by type-II FV's is
upper bounded by $\epsilon/2$ (for the notation of type-I and type-II
FV, please refer to Section \ref{false_v}). Here, we don't consider
the type-I FV's because the probability of type-I FV's can be forced
arbitrarily close to zero by choosing a large enough $q$. \end{IEEEproof}
\end{section}
\begin{section}{Proof of Theorem \ref{thm2}}
All verifications that occur in LM1-NB also occur in LM1-MB and vice versa.
So LM1-NB and LM1-MB are equivalent.
\begin{IEEEproof} The operations of LM1-MB and LM1-NB are different
because they have different verification rules (see Table I). We
can prove they are equivalent by showing that any verification that occurs in
LM1-MB also occurs in LM1-NB and vice versa, possibly in different decoding
steps. First, consider a check node where the sum of all
messages equals zero but more than one message is unverified.
In this case, LM1-NB will verify all the messages. In LM1-MB, none
of them will be verified, but all the values will be correct. In the
following iteration, all these messages will be verified at their
variable nodes. Notice that this is the only case in which verification occurs
in LM1-NB but not in LM1-MB, so any verification in LM1-NB also occurs
in LM1-MB. Next, consider a variable node where some incoming
message is correct and the channel value is correct. In LM1-MB, the
outgoing message will be verified. In LM1-NB, the message will be correct
but not verified. Notice that a correct incoming message means all the
other messages at the corresponding check node are correct, so the unverified correct
message will be verified at that check node in the next step. Notice that
this is the only case in which verification occurs in LM1-MB but not in LM1-NB,
so any verification in LM1-MB also occurs in LM1-NB. \end{IEEEproof}
\end{section}
\begin{section}{Proof of Lemma \ref{lemma3}}
Let $y$ be the received symbol sequence assuming the
all-zero codeword was transmitted. Let $u$ be any codeword with exactly
$k$ non-zero symbols.
It is easy to verify that the probability that the ML decoder chooses $u$ over
the all-zero codeword is given by
\[ p_{2,k}=\sum_{j=0}^{k}\sum_{i=0}^{j}\binom{k}{i,j,k-i-j}(1-p)^{i}\left(\frac{p}{q-1}\right)^{j}\left(\frac{p(q-2)}{q-1}\right)^{k-i-j}.\]
Using the multinomial theorem, it is also easy to verify that
\begin{align*}
A(x) &= \left((1-p)+\frac{p}{q-1}x^{2}+\frac{p(q-2)}{q-1}x\right)^{k} \\
&= \sum_{j=0}^{k}\sum_{i=0}^{k-j}\binom{k}{i,j,k-i-j}(1-p)^{i}\left(\frac{p}{q-1}\right)^{j}\left(\frac{p(q-2)}{q-1}\right)^{k-i-j}x^{k-i+j} \\
&\triangleq \sum_{l=0}^{2k} A_{l} x^{l},
\end{align*}
where $A_{l}$ is the coefficient of $x^l$ in $A(x)$.
Finally, we observe that $p_{2,k} = \sum_{l=k}^{2k} A_{l}$ is simply an
unweighted sum of a subset of terms in $A(x)$ (namely, those where $k-i+j \geq k$).
This implies that
\[ x^{k}p_{2,k}=\sum_{l=k}^{2k}A_{l}x^{k}\le\sum_{l=k}^{2k}A_{l}x^{l}\le A(x)\]
for any $x\ge1$, since $x^{k}\le x^{l}$ for $l\ge k$. Therefore, we can compute the Chernoff-type bound
\[ p_{2,k}\le\inf_{x\ge1}x^{-k}A(x).\]
By taking the derivative of $x^{-k}A(x)$ with respect to $x$ and setting it to
zero, we arrive at the bound
\[p_{2,k} \leq \left(p\frac{q-2}{q-1} +\sqrt{\frac{4p(1-p)}{q-1}} \right)^k.\]
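The bound can be checked numerically against a direct evaluation of the double sum defining $p_{2,k}$; the parameters $p$, $q$, and $k$ below are illustrative.

```python
# Numerical check of the Chernoff-type bound on p_{2,k} against a direct
# evaluation of the double sum (p, q and k below are illustrative).
from math import comb, sqrt

def p2k_exact(p, q, k):
    """p_{2,k} = sum_{j=0}^{k} sum_{i=0}^{j} trinomial(k; i, j) * probs."""
    total = 0.0
    for j in range(k + 1):
        for i in range(j + 1):
            if i + j > k:
                continue
            m = comb(k, i) * comb(k - i, j)  # k! / (i! j! (k-i-j)!)
            total += (m * (1 - p) ** i * (p / (q - 1)) ** j
                      * (p * (q - 2) / (q - 1)) ** (k - i - j))
    return total

def p2k_bound(p, q, k):
    return (p * (q - 2) / (q - 1) + sqrt(4 * p * (1 - p) / (q - 1))) ** k

exact, bound = p2k_exact(0.1, 16, 5), p2k_bound(0.1, 16, 5)
```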
\end{section}
\begin{section}{Proof of Lemma \ref{lemma53}}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.35\columnwidth]{fsm.eps}
\caption{Finite-state machine for Lemma \ref{lemma53}.}
\end{center}
\label{fig:fsm}
\end{figure}
\begin{IEEEproof} An unverification event occurs on a degree-2 cycle
of length-$k$ when there is at most one correct variable node in any
adjacent set of $s+1$ nodes.
Let the set of all error patterns (i.e., 0 means correct and 1 means error)
of length-$k$ which satisfy the UV condition be $\Phi(s,p,k)\subseteq \{0,1\}^k$.
Using the Hamming weight $w(z)$ of an error pattern $z$ to count the number of
errors, we can write the probability of UV as
\[ \phi(s,p,k)=\sum_{z\in\Phi(s,p,k)}p^{w(z)}(1-p)^{k-w(z)}.\]
This expression can be evaluated using the transfer matrix method to
enumerate all weighted walks through a particular digraph. If we walk through the nodes along
the cycle by picking an arbitrary node as the starting node, the UV
constraint can be seen as $k$ steps of a particular finite-state machine.
Since we are walking on a cycle, the initial state must equal the final
state.
The finite-state machine, which is shown in Fig. \ref{fig:fsm}, has $s+1$ states $\{0,1,\ldots,s\}$.
Let state 0 be the state where we are free to choose either a correct or incorrect symbol
(i.e., the previous $s$ symbols are all incorrect).
This state has a self-loop associated with the next symbol also being incorrect.
Let state $i>0$ be the state where the past $i$ values consist of one correct symbol followed by
$i-1$ incorrect symbols.
Notice that only state 0 may generate correct symbols.
By defining the transfer matrix with (\ref{statematrix}), the
probability that the UV condition holds is therefore $\phi(s,p,k)=\textrm{Tr}\left(B^{k}(p)\right)$.
\end{IEEEproof}
\end{section}
\bibliographystyle{ieeetr}
\section{Introduction}
\label{intro}
\cite{dyson60_science} introduced the idea, now referred to as a Dyson sphere or swarm, of large structures surrounding a star and collecting its energy for some intelligent use. A monolithic solid spherical shell could not be built stably \citep{dyson60_letters}, so a collection of many satellites, which \cite{dyson66_essay} demonstrated to be physically feasible, is the preferred model. However, for simplicity of language, we refer to any configuration of a starlight-manipulating megastructure as a Dyson sphere in this work. At the dawn of SETI, when radio searches \citep{cocconi_morrison, ozma} were the primary focus, \citeauthor{dyson60_science} proposed searching for infrared waste heat output from the technological use of starlight. He suggested that these structures may be best suited to orbits similar to that of Earth, where they would be in a temperature range of roughly 200--300 K, making $\sim$10 microns the ideal wavelength range to search.
Past works \citep{badescu95, dsreview} have presented the thermodynamics of Dyson spheres, including radiative feedback from the sphere onto the surface of a star. However, these assume that this irradiation has no significant effect on the interior of stars and thus their structure and evolution. This may not be adequate for accurately predicting the observable features of Dyson spheres that return significant feedback to their stars.
\subsection{Expectations for Irradiated Stars}
We begin with a look at the general effects of irradiation on stellar structure and evolution. Stars have negative gravitothermal heat capacity: the addition of energy causes them to expand and cool. In a simple theoretical demonstration, the total energy of a star is the sum of its thermal and gravitational energies:
\begin{equation}
E_{*} = E_{\rm therm} + E_{\rm grav} .
\end{equation}
We apply the virial theorem,
\begin{equation}
E_{\rm therm} = -\frac{1}{2} E_{\rm grav},
\end{equation}
to relate each energy type directly to the total energy:
\begin{equation}
E_{*} = \frac{1}{2} E_{\rm grav} = - E_{\rm therm}.
\end{equation}
When energy is added to a star ($E_*$ increases), gravitational energy increases and thermal energy decreases, so we see the star expand and cool both overall (because $E_{\rm therm}$ is lower) and on its surface (because, being larger at the same or a lower luminosity, its effective temperature must drop). A larger star should also result in less pressure on a cooler core, so we also expect its luminosity to decrease.
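As a toy illustration of the surface cooling (our own sketch, not a calculation from this paper), the Stefan-Boltzmann law $L=4\pi R^{2}\sigma T_{\rm eff}^{4}$ implies that a star expanding at fixed luminosity must cool at the surface:

```python
# Toy illustration (not from the paper): by the Stefan-Boltzmann law
# L = 4*pi*R^2*sigma*T_eff^4, expansion at fixed luminosity lowers T_eff.
from math import pi

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def t_eff(L, R):
    """Effective temperature for luminosity L (in W) and radius R (in m)."""
    return (L / (4 * pi * R ** 2 * SIGMA)) ** 0.25

L_SUN, R_SUN = 3.828e26, 6.957e8     # nominal solar values
t0 = t_eff(L_SUN, R_SUN)             # roughly 5772 K
t1 = t_eff(L_SUN, 1.4 * R_SUN)       # cooler after a factor-1.4 expansion
```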
To examine this effect quantitatively, we calculate the differences in radius and temperature between normal and irradiated stellar models. Foundational work on irradiated stars was performed by \cite{tout89}, who modelled the evolution up to the helium flash of 0.5, 0.8, 1, and 2 M$_\odot$ stars in constant-temperature radiation baths from 0 to 10$^4$ K. \citeauthor{tout89} explore their 1 M$_\odot$ model in great detail, finding that in the early main sequence in the 10$^4$ K bath, the star's radius increases significantly (by a factor of 1.4) and that its central temperature and nuclear luminosity decrease very slightly ($<$1\% and $\sim$1\% respectively). They suggest that this large radius increase appears to be caused by the outer convective region becoming radiative. Since a radiative temperature gradient is smaller than a convective gradient, the envelope must be larger to encapsulate a similar temperature drop. The lower luminosity results in a slightly extended main sequence lifetime ($\sim$2\% for the 10$^4$ K bath).
Since normal 2 M$_\odot$ models have radiative exteriors that are hotter to begin with, \citeauthor{tout89} find that they are not significantly affected by irradiation while on the main sequence. The irradiated 0.8 M$_\odot$ models, normally with an outer convective zone and radiative core, behave similarly to the 1 M$_\odot$ models, expanding and cooling in the outer regions, but not strongly affecting their main sequence lifetimes. Interestingly, \citeauthor{tout89} find that the 10$^4$ K irradiation decreased the main sequence lifetime of their 0.5 M$_\odot$ star by roughly half. They found the star to expand and cool overall, but a radiative region formed, and the thermal energy was distributed in such a way that the central temperature actually increased.
\subsection{Radiative Feedback from Dyson Spheres}
There are two primary ways that a star may undergo feedback from its surrounding sphere: materials may directly reflect some amount of starlight back onto the star, and/or the sphere may become warm and emit thermally. In the classic idea of a Dyson sphere, the sphere's goal is to collect energy, not to reflect it back. As the sphere collects energy, it will heat up and inevitably emit thermally. Some of this emission may end up going back toward the star, so we can model this process as the diffuse reflection of some fraction of starlight.
Although not spherically symmetrical like the typical modeling of a Dyson sphere, the related concept of ``stellar engines" proposes a partial sphere which uses a significant portion of a star's luminosity to do mechanical work on the star. First proposed by \cite{shkadov}, this could be in the form of a large mirror placed at some distance from the star, disturbing the radiative symmetry of the radiation field and altering its space velocity. \cite{engines00} noted that this reflection will cause the photospheric temperature to increase and gradually change the star's steady state, but they assumed that the nuclear reaction rate is unchanged. In this scenario, a smooth, mirrored surface reflects back as much light as possible, so we can model it as the specular reflection of some fraction of starlight.
\subsection{Dyson Sphere Searches}
As originally suggested by \citeauthor{dyson60_science}, and expanded upon by \cite{sagan_walker}, the search for Dyson spheres began as a search for infrared sources in the Galaxy, assuming that they would appear as blackbodies with a temperature of a few hundred Kelvins. The {\it IRAS} survey revealed the presence of many infrared sources in the galaxy, but the issue remains of how to distinguish between artificial structures and natural objects of similar mid-infrared fluxes, such as young stars with circumstellar disks, or giant stars with dusty atmospheres. Several searches for Dyson spheres with {\it IRAS} were performed. \cite{iras1985} and \cite{iras2000} each found several objects with SEDs resembling that of a blackbody with effective temperature between 300 and 400 K. Each noted that further observations would be needed to rule out natural sources. A series of studies \citep{jugaku95, jugaku97, jugaku00, jugaku2004} combined {\it IRAS} and {\it 2MASS} data to search for partial Dyson spheres around solar-type (FGK) stars within 25 pc of the Sun; they found no candidates among the 384 stars studied.
\cite{iras2009} explored {\it IRAS} Low Resolution Spectrometer (LRS) sources with temperatures below 600 K for Dyson spheres, finding 16 candidates in a study sensitive to solar luminosity objects out to 300 pc. \citeauthor{iras2009} noted that most of these candidates have possible non-technological explanations and that more information is needed to rule out or strengthen the case for any of them.
\cite{teodorani} proposed a more modern approach to the infrared search for Dyson spheres than could be done with {\it IRAS}, using the higher-resolution {\it Spitzer} space telescope to search for sun-like stars with infrared excess. This proposed method, however, assumes that only 1\% of total starlight is obscured, as it relies on optical detectability. \cite{ghat2} laid out the AGENT formalism for describing Dyson spheres and mentioned backwarming as a complication in the prediction of star-Dyson sphere system SEDs. Their {\it WISE} search focused on Kardashev Type III civilizations, or galaxies full of Dyson spheres, and concludes that they are rare or non-existent. \citeauthor{ghat2} also note that infrared waste heat is much easier to detect than blocked optical starlight, except in cases where a very large fraction of the starlight is blocked.
Observing a lack of or change in optical light has also been the focus of many Dyson sphere searches. One such method is looking for strange light curves that could be indicative of transiting megastructures \citep{arnold, teodorani, ghat3}. \cite{zackrisson18} proposed and demonstrated the use of {\it Gaia} to search for optically underluminous stars, by identifying those with spectrophotometric distance estimates far larger than parallax distances.
When {\it JWST} is available for use, \cite{whitepaper} propose that it, in combination with {\it WISE} and {\it Gaia}, can be used to put robust upper limits on stellar energy collection throughout our Galaxy and across others.
\subsection{This Work}
In this work, we use {\tt MESA} \citep{mesa1, mesa2, mesa3, mesa4, mesa5} to model the effects of Dyson sphere feedback on the structure and evolution of stars. We use the AGENT formalism of \cite{ghat2} and radiative feedback formulation of \cite{dsreview} to compute absolute magnitudes of many configurations of combined star-Dyson sphere systems. We produce color-magnitude diagrams for selected {\it Gaia} and {\it WISE} bands and specify when these internal changes in the stars are significant in observations.
Section \ref{sec:mesa} describes the {\tt MESA} modelling methods used in this work. Section \ref{sec:irrad} describes our recreation of the irradiated stellar models of \cite{tout89} in {\tt MESA}. Sections \ref{sec:struc} and \ref{sec:evol} discuss the impacts of the feedback model on stellar structure and evolution. In section \ref{sec:obs}, we demonstrate the observational consequences of these effects. Section \ref{sec:concl} concludes the work and motivates future Dyson sphere searches.
\section{{\tt MESA} Techniques} \label{sec:mesa}
{\tt MESA} \citep[Modules for Experiments in Stellar Astrophysics;][]{mesa1, mesa2, mesa3, mesa4, mesa5} is an open source tool for stellar physics calculations. We use the {\tt MESAstar} module to simulate stellar evolution under the presence of irradiation. The program's adaptive timesteps allow for the thermal processes of external irradiation to occur in appropriate intervals. {\tt MESA} provides three methods for irradiation \citep{mesa2}. Method 1 applies irradiation in the form of energy injection at some specified depth. Method 2 alters the model's boundary conditions in a manner appropriate for irradiated giant planets. Method 3 is similar to 1 but allows for more flexibility with the use of custom energy injection routines, adding any amount of energy at any cell in the 1-dimensional model. The \citetalias{mesa2} paper demonstrates that these three methods produce equivalent results in the case of irradiated planets. Because it is the most flexible of the three methods (and because Method 2 is only appropriate for planets) we adopt the third method, using the {\tt other\_energy} module to deposit irradiation into the outermost cell of the model star.
\subsection{Stars in Constant Temperature Baths}
We first attempt to recreate the results of \cite{tout89} in {\tt MESA}. We run simulations of 0.5, 0.8, 1, and 2 M$_\odot$ stars in temperature baths from 0 to 10$^4$ K. This temperature bath is implemented using the {\tt run\_star\_extras} function {\tt other\_energy}, which continually pours energy onto a star. For a constant temperature bath, this is implemented as,
\begin{equation}
L_{\rm extra} = 4 \pi \sigma R_*^2 T_{\rm b}^4 ,
\end{equation}
where $L_{\rm extra}/dm$ is the input to {\tt MESA}'s {\tt other\_energy} function (in units of luminosity/mass), $\sigma$ is the Stefan-Boltzmann constant, $R_*$ is the star's radius, $T_{\rm b}$ is the bath temperature in Kelvin, and $dm$ is the mass contained in the outermost cell in our model.
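For illustration, the deposition rate can be sketched in Python (the actual implementation is a Fortran {\tt other\_energy} routine in {\tt run\_star\_extras}; function and variable names here are hypothetical):

```python
import math

SIGMA_SB = 5.670374419e-5  # Stefan-Boltzmann constant in cgs (erg cm^-2 s^-1 K^-4)

def bath_heating_rate(r_star, t_bath, dm_outer):
    """Specific heating rate (erg s^-1 g^-1) deposited into the outermost cell
    to mimic a constant-temperature radiation bath:
    L_extra = 4 pi sigma R_*^2 T_b^4, divided by the cell mass dm."""
    l_extra = 4.0 * math.pi * SIGMA_SB * r_star**2 * t_bath**4
    return l_extra / dm_outer
```

Note that for a star of fixed radius, the deposited power scales simply as $(T_{\rm b}/T_{\rm eff})^4$ relative to the star's own luminosity.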
\subsection{Pre-Dyson Sphere Models}
Our {\tt MESA} models begin with the creation of zero age main sequence (ZAMS) models for a range of stellar masses. For stars of mass $\leq 1M_\odot$, we allow evolution along the main sequence for 4.6 Gyr before applying the feedback of a Dyson sphere. For stars with mass $> 1M_\odot$, we evolve to half of the star's main sequence lifetime (in the absence of added feedback).
\subsection{Calculating Dyson Sphere Feedback}
To replicate the radiation of a Dyson Sphere, we pour some fraction of the star's luminosity back onto the surface of the star.
Physically, the extra luminosity is a simple function of the star's luminosity,
\begin{equation}
L_{\rm extra} = f L_*,
\end{equation}
where $L_{\rm extra}/dm$ is the input to {\tt MESA}'s {\tt other\_energy} function (in units of luminosity/mass), $L_*$ is the star's luminosity, $f$ is the fraction of the star's luminosity reflected back on itself, and $dm$ is the mass contained in the outermost cell in our model.
However, {\tt MESA}'s finite and relatively long timesteps add some complications, so we must build corrections into our function. Without any feedback, the luminosity of the outer layer ($L[1]$) will be equivalent to that just below it ($L[2]$), so we start with,
\begin{equation}
L_{\rm extra} = f L[1] = f L[2] .
\end{equation}
When Dyson sphere feedback hits the star, its surface heats up, and in steady state the surface luminosity increases to:
\begin{equation}
L[1] = (1+f) L[2],
\end{equation}
which then increases feedback:
\begin{equation}
L_{\rm extra} = f (1+f) L[2] .
\end{equation}
This again increases surface luminosity,
\begin{equation}
L[1] = (1+f(1+f)) L[2],
\end{equation}
which increases feedback:
\begin{equation}
L_{\rm extra} = f(1+f(1+f)) L[2].
\end{equation}
This feedback cycle happens much quicker than the {\tt MESA} timescale, so we implement a correction factor which extends this cycle to infinity, following the limits taken in Equations 26 and 27 of \cite{dsreview}. This results in a feedback luminosity of
\begin{equation}
L_{\rm extra} = f \left( \sum_{i=0}^{\infty} f^i \right) L[2] = \frac{f}{1-f} L[2] .
\end{equation}
In essence, because the ``reflection'' of light between the star and the sphere happens much faster than an evolutionary timestep in {\tt MESA}, we have just treated the sphere and star as partial mirrors within a timestep. In the case of $f=1$, the energy is completely trapped between the sphere and star and so $L_{\rm extra}$ diverges (because the number of reflections does).
We also need to correct for stellar luminosity evolution within a timestep; we found that without such a correction our simulations under-irradiated the star by potentially significant amounts. We forecast our next step luminosity as the current step's plus the difference between the current step and the one before it. So, we replace $L[2]$ with $2L[2] - L_{\rm prev}[2]$. Thus, our final input for {\tt other\_energy} is:
\begin{equation}
L_{\rm extra} = \frac{f}{1-f}(2L[2] - L_{\rm prev}[2]).
\end{equation}
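This final expression can be sketched as a short Python function (a hypothetical stand-in for our Fortran {\tt other\_energy} routine), together with a check that the truncated reflection series converges to the $f/(1-f)$ factor:

```python
def feedback_luminosity(f, l_interior, l_interior_prev):
    """Dyson-sphere feedback power deposited in the outermost cell.

    f/(1-f) sums the infinite star-sphere reflection series within one
    timestep; (2*L - L_prev) linearly forecasts the interior luminosity
    L[2] over the coming step.
    """
    if not 0.0 <= f < 1.0:
        raise ValueError("feedback fraction must satisfy 0 <= f < 1")
    forecast = 2.0 * l_interior - l_interior_prev
    return f / (1.0 - f) * forecast

def reflection_series(f, n_terms):
    """Partial sum f*(1 + f + f^2 + ...) of the feedback cycle."""
    return sum(f * f**i for i in range(n_terms))
```

With no luminosity evolution between steps, the forecast term reduces to $L[2]$ and the function returns exactly $fL[2]/(1-f)$.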
We found that for large values of $f$, this feedback could become unstable and generate large oscillations in the radiative feedback. To correct this, we also implement a gradual onset of the feedback over multiple steps for $f>0.25$ cases to prevent instability. We checked the fidelity of our implementation of this model in the {\tt MESA} simulations (all with $f \leq 0.50$) by confirming that $|L_{\rm extra}-fL[1]|<10^{-5}$.
\subsection{Final Model Parameters}
We evolve stars with masses 0.2, 0.4, 1, and 2 $M_\odot$ under the effects of Dyson sphere feedback, with a set of different feedback levels from 1-50\%. We pick up where the mid-main sequence models described above left off and evolve each star to the end of its main sequence phase. All of these models include the repeated feedback correction. Due to a numerical stability issue, all models except for the 50\% feedback case for 0.2 and 0.4 M$_\odot$ stars include the luminosity evolution correction.
\section{Consistency with Prior Work} \label{sec:irrad}
In Figure~\ref{fig:tout}, we compare our constant-temperature bath irradiated stars with those of \cite{tout89} to check for consistency. Our 1M$_\odot$ \citetalias{mesa1} models show a similar shrinking of the main sequence convective zones, which almost entirely vanish for the 10$^{3.94}$ and 10$^4$ K bath models. This results in a slightly extended main sequence lifetime ($\sim$2\% for the 10$^4$ K bath). We find similar agreement for the 0.8 and 2 M$_\odot$ stars.
Our results, however, disagree significantly for the 0.5 M$_\odot$ star. The {\tt MESA} modelling found the irradiated 0.5 $M_\odot$ star to cool, even in the core, and to increase in lifetime by roughly 60\%. Without full details of the models used by \citeauthor{tout89}, it is hard to say for sure why our results varied. A comparison between our stars' radius evolution and that of \cite{tout89} is shown in Figure \ref{fig:tout}. We suggest that the difference could be due to {\tt MESA}'s having more modern and accurate opacity tables for cool stars than those used in older stellar models.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{tout_fig5.png}
\includegraphics[width=0.51\textwidth]{lifetimes.pdf}
\caption{The evolution of stellar radius over time for 0.5, 0.8, and 1 M$_\odot$ stars, for a normal star and one in a 10,000 K bath. Left: Figure 5 from \cite{tout89} (solid lines: 10,000 K bath; dotted lines: normal star). Right: Our recreation with {\tt MESA}. The 0.8 and 1 M$_\odot$ stars evolve very similarly between the two figures. The irradiated 0.5M$_\odot$ model, however, survives much longer in our version than in that of \citeauthor{tout89}, nearly doubling its normal main sequence lifetime instead of halving it.}
\label{fig:tout}
\end{figure}
\section{A Dyson Sphere's Feedback's Impact on Stellar Structure} \label{sec:struc}
We evolve each star from the onset of its Dyson sphere feedback to the end of the main sequence. Along the way, we save a model of each star after it has had 100 Myr to settle from the onset of the sphere. In this section, we compare the structures of each model at this stage, focusing primarily on temperature. In order to view both the core and envelope in detail, we use the ${\rm logit_{10}}(m/M_*)$ function to plot mass, where
\begin{equation}
{\rm logit_{10}}(x) = \log_{10}\left(\frac{x}{1-x}\right).
\end{equation}
\noindent This scaling extends both the center and edge of the star logarithmically, so -10 refers to the innermost $10^{-10}$ of the star by mass, +10 refers to the outermost $10^{-10}$ by mass, and 0 refers to the midway point where $m/M_* = 0.5$.
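A minimal Python implementation of this mass coordinate (illustrative only; our plots are produced from {\tt MESA} output):

```python
import math

def logit10(x):
    """logit_10(x) = log10(x / (1 - x)) for a mass fraction x = m/M_* in (0, 1).

    Stretches both the core (x -> 0) and the surface (x -> 1) logarithmically,
    while mapping the mass midpoint x = 0.5 to zero.
    """
    if not 0.0 < x < 1.0:
        raise ValueError("mass fraction must lie strictly between 0 and 1")
    return math.log10(x / (1.0 - x))
```

For example, the innermost $10^{-10}$ of the star by mass maps to $-10$, the midpoint to $0$, and the outermost $10^{-10}$ to $+10$.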
\subsection{Predominantly Convective Stars}
The temperature structures of the fully convective 0.2M$_\odot$ star models, given 100 Myr to settle after the onset of Dyson sphere feedback, are shown in Figure \ref{fig:struc02}. We find significant decreases in central temperature in the feedback cases, on the order of 10$^5$ K. This internal temperature change dramatically decreases nuclear fusion, with luminosity decreasing by nearly 50\% in the 50\% feedback case. We also see an increase in radius throughout, particularly prominently in the outer half of the star (not shown).
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{M02_100Myrdifs_T_conv.pdf}
\caption{Temperature structure of a 0.2M$_\odot$ star, given 100 Myr to settle after the onset of Dyson sphere feedback. See Section~\ref{sec:struc} for an explanation of the x-axis. Left: Temperature throughout each stellar model. Right: Temperature deviation from the zero feedback model throughout each of the three feedback models. The stars undergoing feedback have lower central temperatures than the ordinary star and nearly identical outer temperatures.}
\label{fig:struc02}
\end{figure}
Figure \ref{fig:sctruc04} shows the temperature structure of the 0.4M$_\odot$ models 100 Myr after the Dyson sphere onset. This star has a radiative interior containing 35\% of its mass, covered by a convective exterior. The border is shown with a dotted line and is where temperature structure varies the most. We see a spike in temperature for the feedback models just above the bottom of the convective region, where the incoming energy piles up due to inefficient transport into the radiative region below. This is followed by a dramatic dip at the top of the radiative zone, where the irradiation that has made it through has caused expansion and cooling. The spike then settles out toward a central temperature which is reduced by up to hundreds of thousands of kelvins for the stars undergoing feedback. Luminosity decreases for the models with feedback. This effect is less dramatic here than in the 0.2 M$_\odot$ stars, but it is still significant, decreasing nuclear luminosity by nearly 40\% for the 50\% feedback case.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{M04_100Myrdifs_T_conv.pdf}
\caption{Temperature structure of a 0.4M$_\odot$ star, given 100 Myr to settle after the onset of Dyson sphere feedback. Left: Temperature throughout each stellar model. Right: Temperature deviation from the zero feedback model throughout each of the three feedback models. The dotted line marks the bottom of the convective zone, 0.14M$_\odot$ from the star's center. The stars with feedback again are cooler in their centers and show nearly identical temperatures at their outer edges. Just above the star's convective boundary, though, we see a spike of higher temperature for these stars.}
\label{fig:sctruc04}
\end{figure}
\subsection{Predominantly Radiative Stars}
The 1 M$_\odot$ star, shown in Figure \ref{fig:struc1}, is primarily radiative, with a light convective exterior containing 2\% of its mass. As the left side of the figure shows, the changes seen are quite small on the scale of the star's actual temperature, but the right panel shows that the small effects are still quite distinctive. Overall we see a similar shape in the temperature differential as for the 0.4M$_\odot$ star, which also has a convective exterior and radiative interior, though with quite different proportions. We see a large spike in increased temperature as one moves toward the convection-radiation border for the models with feedback. Near the bottom of this convective zone, the temperature difference rapidly declines and becomes negative, settling down to a slightly lower central temperature than the normal star. The central temperatures of the 1 M$_\odot$ models are much less strongly affected than those of the 0.4 M$_\odot$ set.
This slight decrease in central temperature produces a very slight decrease in luminosity. We see little expanding and cooling in the radiative interior, but the outer convective zone grows significantly in radius (not shown).
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{M1_100Myrdifs_T_conv.pdf}
\caption{Temperature structure of a 1M$_\odot$ star, given 100 Myr to settle after the onset of Dyson sphere feedback. Left: Temperature throughout each stellar model. Right: Temperature deviation from the zero feedback model throughout each of the three feedback models. The dotted line marks the bottom of the convective zone, 0.98M$_\odot$ from the star's center. The models undergoing feedback are still slightly cooler in the center here but remain the same temperature at their surfaces. Above the convective boundary, we again see a spike in high temperature.}
\label{fig:struc1}
\end{figure}
The 2 M$_\odot$ star, shown in Figure \ref{fig:struc2}, has a convective 0.21 M$_\odot$ core, surrounded by a large radiative region. While overall, the left side of the figure shows that the star is not very strongly affected, the right side shows that some points throughout the star have patterns of temperature deviation. At the very surface of the star, we can see that irradiation has increased the surface temperature, though this quickly decreases and goes away as one moves further into the star. This effect is also reflected in the radius (not shown) which is inflated in the very outermost regions of the irradiated stars, then returns to roughly the level of the model with no feedback. Very little of the feedback reaches further into the star, but that which does produces interesting effects surrounding the radiation-convection border.
As one moves deeper into the star approaching this border, temperature begins to decrease, as the nearby convective zone is able to effectively transport heat away. Near the top of this convective zone, the temperature spikes back up, evening out toward a central temperature that is slightly higher for the models with feedback. Also in this region, we see a nearly vertical dip and spike in the model with 50\% feedback, which was found to be a numerical artifact. At this turnover point just below the radiative zone, the stars undergoing feedback also see a disturbance in radius (not shown), with a slight decrease before settling out to match the feedback-free model. The slight change in central temperature is not strong enough to significantly impact nuclear burning. The star's luminosity is essentially unaffected in the core, though the outer region shows a very slight decrease, as fusion is slightly perturbed at the turnover point below the radiation-convection border.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{M2_100Myrdifs_T_conv.pdf}
\caption{Temperature structure of a 2M$_\odot$ star, given 100 Myr to settle after the onset of Dyson sphere feedback. Left: Temperature throughout each stellar model. Right: Temperature deviation from the zero feedback model throughout each of the three feedback models. The dashed line marks the top of the star's convective interior at 0.21M$_\odot$ from its center. The 50\% feedback model has a numerical artifact just below the convective boundary. We see a very slightly higher internal temperature for the stars with feedback, then a dip to lower temperature at the convective boundary. Moving outward they return to the normal star's temperature, then just at the very outermost edge, they fall below then spike back above the normal temperature.}
\label{fig:struc2}
\end{figure}
\section{Feedback's Impact Through Main Sequence Evolution} \label{sec:evol}
\subsection{Predominantly Convective Stars}
Figure \ref{fig:evol02} shows the nuclear luminosity and radius evolution through the main sequence of the 0.2M$_\odot$ models for a few selected feedback levels. The star with 15\% luminosity feedback significantly decreases in nuclear burning and has its main sequence lifetime extended from 970 Gyr to 1070 Gyr. The 50\% feedback case decreases nuclear burning very dramatically and extends the main sequence lifetime to 1500 Gyr, an increase of over 50\%. We see a significant increase in radius for 15\% feedback and even more so for 50\%.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{02M_TAMSevol_L.pdf}
\includegraphics[width=0.49\textwidth]{02M_TAMSevol_R.pdf}
\caption{Luminosity (left) and radius (right) evolution of 0.2M$_\odot$ stars for different feedback levels. For stars with significant feedback, nuclear burning dramatically drops, and radius expands. These stars survive longer on the main sequence.}
\label{fig:evol02}
\end{figure}
We see a similar but slightly less dramatic effect on the 0.4M$_\odot$ models, shown in Figure \ref{fig:evol04}. For higher feedback, nuclear burning decreases and radius increases as the star expands and cools. The 15\% case extends the main sequence lifetime from 199 Gyr to 215 Gyr. The 50\% case has a main sequence lifetime of 267 Gyr.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{04M_TAMSevol_L.pdf}
\includegraphics[width=0.49\textwidth]{04M_TAMSevol_R.pdf}
\caption{Luminosity (left) and radius (right) evolution of 0.4M$_\odot$ stars for different feedback levels. Nuclear luminosity decreases and radius increases for stars with feedback again, though slightly less dramatically than for the 0.2M$_\odot$ star. Their lifetimes are also extended.}
\label{fig:evol04}
\end{figure}
At the onset of the Dyson sphere feedback, the 0.2M$_\odot$ star is completely convective, while the 0.4M$_\odot$ star is radiative out to 35\% of its mass and convective in the exterior. As predicted, we see a very strong cooling and expanding effect on these highly convective stars, significantly extending their lifetimes. The effect is stronger on a fully convective star than one with a radiative core.
\subsection{Predominantly Radiative Stars}
Figure \ref{fig:evol1} shows the nuclear luminosity and radius evolution through the main sequence for selected feedback levels on a 1M$_\odot$ star. We see very slight dips in luminosity at the onset of the Dyson sphere. The nuclear burning decreases very slightly, causing lifetimes to be slightly extended, from 8.88 Gyr to 8.90 Gyr for 15\% feedback and 8.94 Gyr for 50\% feedback. Interestingly, in contrast with the minor effects on fusion and lifetime, we see dramatic increases in stellar radius, by a factor greater than 3 for the 50\% case.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{1M_TAMSevol_L.pdf}
\includegraphics[width=0.49\textwidth]{1M_TAMSevol_R.pdf}
\caption{Luminosity (left) and radius (right) evolution of 1M$_\odot$ stars for different feedback levels. Radiative feedback has a large effect on the convective envelope, causing these stars' radii to grow significantly. But, because this envelope contains so little of the star's mass, nuclear burning only very slightly decreases, and lifetimes are not significantly affected.}
\label{fig:evol1}
\end{figure}
The evolution of our 2M$_\odot$ models is shown in Figure \ref{fig:evol2}. Feedback has no significant effect on the bulk evolution of these stars.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{2M_TAMSevol_L.pdf}
\includegraphics[width=0.49\textwidth]{2M_TAMSevol_R.pdf}
\caption{Luminosity (left) and radius (right) evolution of 2M$_\odot$ stars for different feedback fractions. Because this type of star has a radiative envelope, feedback has virtually no effect on the star's bulk properties.}
\label{fig:evol2}
\end{figure}
At the onset of the Dyson sphere feedback, a 1M$_\odot$ star is primarily radiative (out to 98\% of its mass) with a convective envelope. The 2M$_\odot$ star has an inner convective core, while the exterior is fully radiative. In the 1M$_\odot$ star, we see that the convective envelope carries some of the reflected energy into the stellar interior. The radiative core gets a slight cooling effect, slightly slowing fusion, while the envelope dramatically expands, radically changing the star's radius. With the 2M$_\odot$ star's lack of an outer convective region, very little of the reflected energy is able to reach the interior of the star, and the evolution of its bulk properties is essentially unaffected.
\section{Observational Characterization} \label{sec:obs}
To characterize the combined star-Dyson sphere system, we combine the spectra of the star and Dyson sphere into a system spectrum, from which we can calculate absolute magnitudes. We assume that the Dyson spheres emit waste heat as blackbodies at their effective temperatures. We compute this system as the sum of three components: the star and the interior of the Dyson sphere, both of which are obscured by the sphere, and the exterior of the sphere, which is unobscured. We need to know the fraction of light that immediately escapes the system from these three sources, and the balance of radiation between the interior and exterior surfaces of the sphere. We begin with the full system flux equation 44 of \cite{dsreview}:
\begin{equation} \label{eq:review44}
\Phi_\gamma = \frac{\pi}{d^2}(k_1 \Phi_{*,\gamma} + k_2 B_\gamma(T_\mathrm{int,eff}) + k_3 B_\gamma(T_\mathrm{ext,eff})),
\end{equation}
where $d$ is the distance to the system, $\Phi_{*,\gamma}$ is the star's specific intensity, $B_\gamma(T)$ is the Planck function for the Dyson sphere surface temperatures, and $k_i$ values are scaling factors we must calculate for each configuration. We calculate the luminosity of each component, scale them by the appropriate factors, and apply bolometric corrections to calculate absolute magnitudes. The bolometric corrections for the stars come from {\tt MESA} model output, and those for the Dyson spheres come from a modified version of {\tt blackbody.py} from {\tt mesa-r12778}'s {\tt colors} module \citep{mesa4}.
For each of our stellar masses we apply our set of {\tt MESA} magnitudes for stars with no feedback and those with feedback levels between 1 and 50\%. In addition to these, we calculate the observable properties of Dyson spheres with lower $f$ values, between $10^{-6}$ and $0.003$. These low $f$ values have no significant effect on the star itself, so we use our {\tt MESA} model with no feedback as the stellar components of these systems.
\subsection{The Stellar Component}
\citetalias{mesa1} implements bolometric corrections (BCs) to estimate absolute magnitudes in user-specified filter systems \citep{mesa4}. The BCs are calculated by linear interpolation over tables of $\log(T_{\rm eff})$, $\log(g)$, and [M/H]. We input BC tables from {\tt MESA} Isochrones and Stellar Tracks \citep[MIST,][]{mist1, mist2} for the {\it Gaia} and {\it WISE} filter systems.
We select one point in the evolution of each star as our characteristic luminosity and bolometric correction values. For the 1 and 2M$_\odot$ stars, we select the point in time at which the star is halfway between the onset of its Dyson sphere and the end of its main sequence. For the 0.2 and 0.4M$_\odot$ stars, whose lifetimes are significantly longer than the age of the universe, we select the age of 10 Gyr, as their properties do not evolve significantly between the Dyson sphere's onset and the age of the universe.
\subsection{The Dyson Sphere Component}
So far, the only aspect of the Dyson spheres that we have defined is the fraction of luminosity reflected back onto the star ($f$). To characterize the structure in an observable way, we apply the AGENT formalism of \cite{ghat2} and \cite{dsreview}. The AGENT formalism is named for the five defining parameters of the Dyson sphere system (all powers normalized by the star's power output): the power of starlight intercepted $\alpha$, the power produced from other sources (e.g. fossil fuels) $\epsilon$, the power of the Dyson sphere's thermal waste heat $\gamma$, the power of other waste energy disposal $\nu$, and the characteristic temperature of the Dyson sphere's waste heat $T_{\rm waste}$. In this work, we set $\epsilon=\nu=0$, assuming that the structure does not generate its own energy or emit significant low-entropy emission (e.g. radio transmissions).
We adopt a spherical shell model, centered over a star of radius $R_*$, with radius $R$ and Bond albedo $a$. Photons leaving the interior of the Dyson sphere have 3 possible fates: reflection, transmission, and absorption. We parametrize the fractional portions of radiation for each of these fates with:
\begin{equation}
a+t+e=1,
\end{equation}
where $a$, $t$, and $e$ are the fractions of photons that get reflected, transmitted, and absorbed, respectively. For our feedback effect, we also need the parameter $s$, representing the probability that a photon emitted from or reflected by the interior of the sphere in a random direction will not immediately strike the star:
\begin{equation}
s = \sqrt{1-(R_*/R)^2}.
\end{equation}
We also assume that the structures radiate heat equally on the interior and exterior (i.e.\ there is no strong thermal management), indicated as $\zeta = 1/2$ in the AGENT formalism.
We examine two limiting cases for the $a$ and $e$ parameters. The first case, which we refer to as hot Dyson spheres, assumes black material, adopting a Bond albedo $a=0$. The second case, which we call cold Dyson spheres, corresponds to a spherical, specularly reflecting mirror that returns all the light it collects back to the star. Such material absorbs no heat, so $e=0$. Intermediate cases will have properties that are a compromise between these extremes. In both cases, we vary the transmittance $t$ of the sphere, which can be interpreted as the fraction of the star's solid angle the sphere does not cover. Since our models are 1D, this is not completely appropriate for stellar engines with large deviations from spherical symmetry.
\subsubsection{Case 1: Hot Dyson Spheres}
For the hot Dyson sphere case, we assume that the sphere's material is black, so its Bond albedo is $a=0$. We assume that the sphere's transmissivity is equivalent at optical and infrared wavelengths, corresponding to opaque absorbers covering a fraction $e=1-t$ of the star.
Applying these assumptions to the AGENT formalism, we calculate the parameters that describe a system's observability, $\alpha$ and $\gamma$. We start by applying our assumptions to the equations for the fractions of photons from the star and sphere that are absorbed by the star, absorbed by the sphere, or escape, in the limit of purely diffuse reflection from the shell, based on Table 2 from \cite{dsreview}. Our version is shown in Table \ref{tab:fs}.
\begin{table}[]
\centering
\caption{The fractions of stellar and Dyson sphere photons that end up being absorbed by the star, absorbed by the sphere, or escaping. This recreates Table 2 of \citeauthor{dsreview} for the two configurations we explore: hot black spheres and cold mirrored spheres.}
\begin{tabular}{c|c c|c c}
& \multicolumn{2}{c|}{Starlight (*)} & \multicolumn{2}{c}{Thermal Emission from Sphere (s)} \\
& hot & cold & hot & cold \\
\hline
Absorbed by Star (*) & 0 & 1-t & $\frac{1}{2} (1-s)$ & - \\
Absorbed by Sphere (s) & 1-t & 0 & $\frac{1}{2} s (1-t)$ & - \\
Escape (e) & t & t & $\frac{1}{2} (1 + s t)$ & -
\end{tabular}
\label{tab:fs}
\end{table}
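As a quick sanity check on the hot-sphere column of Table \ref{tab:fs} (a minimal sketch; the function and dictionary names are our own), every photon must meet exactly one of the three fates, so each column should sum to unity for any valid $t$ and $s$:

```python
def hot_sphere_fates(t, s):
    """Photon fate fractions for the hot (black, a=0) sphere,
    for starlight and for the sphere's thermal emission."""
    starlight = {"star": 0.0, "sphere": 1.0 - t, "escape": t}
    thermal = {
        "star": 0.5 * (1.0 - s),
        "sphere": 0.5 * s * (1.0 - t),
        "escape": 0.5 * (1.0 + s * t),
    }
    return starlight, thermal

# Photon number conservation: each column sums to 1.
for fates in hot_sphere_fates(t=0.3, s=0.95):
    assert abs(sum(fates.values()) - 1.0) < 1e-12
```

The thermal column sums to $\frac{1}{2}[(1-s) + s(1-t) + (1+st)] = 1$ identically, confirming the algebra.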
We apply these to Equations 32-33 in \cite{dsreview} to calculate luminosity ratios required for energy balance:
\begin{equation}
\frac{L_*}{\widetilde{L}} = \frac{2-s(1-t)}{1+t},
\end{equation}
\begin{equation} \label{eq:LsLtilde}
\frac{L_{\rm s}}{\widetilde{L}} = \frac{2(1-t)}{1+t},
\end{equation}
\noindent where $L_*$ is the star's total luminosity, $\widetilde{L}$ is the star's luminosity due to power generated in its core (equivalent to L[2] used in {\tt MESA} and the above sections), and $L_{\rm s}$ is the luminosity of the Dyson sphere. We then plug these into Equations 34-35 to calculate our AGENT observability parameters:
\begin{equation} \label{eq:gamma_hot}
\alpha = \gamma = \frac{(1-t)(1+st)}{1+t}
\end{equation}
where $\alpha$ is the fraction of the star's nuclear luminosity which does not escape the system as starlight and $\gamma$ is the fraction that ultimately escapes the system as thermal emission from the Dyson sphere. From our above equations, our feedback parameter relates to this formalism as:
\begin{equation}
f = \frac{(1-s)(1-t)}{1+t}.
\end{equation}
For each of our stars at their selected point in time, we calculate a Dyson sphere's temperature and size for each of our feedback fractions $f$, for $t$ values from 0 to 0.99 (or the maximum possible for non-negative $s$ values). From Equations 37-40 in \cite{dsreview}, we calculate the temperature $T_{\rm s}$ of the Dyson sphere. Using our $L_{\rm s}$ from Equation \ref{eq:LsLtilde}, our effective temperature is:
\begin{equation}
T_{\rm s} = \left( \frac{(1-s^2) \widetilde{L}}{4\pi (1+t) \sigma R_*^2} \right)^{1/4},
\end{equation}
where $R_*$ is the star's radius. For this configuration, our Dyson sphere has the same internal and external temperature, so we can rewrite Equation \ref{eq:review44}, given Equations 45-47 of \cite{dsreview} as:
\begin{equation}
\Phi_\gamma = \frac{\pi}{d^2}(t R^2_* \Phi_{*,\gamma} + (1-t)(1+st) R^2_\mathrm{s} B_\gamma(T_\mathrm{s,eff})).
\end{equation}
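Putting the hot-sphere relations together, a short Python sketch (the constant and function names and the solar-like inputs are our own illustrations) evaluates $s$, $\alpha$, $f$, and $T_{\rm s}$ for a given geometry and transmittance:

```python
import math

SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def hot_sphere_observables(L_nuc, r_star, r_sphere, t):
    """Observables for a hot (black, a=0) Dyson sphere.

    L_nuc    -- nuclear luminosity L-tilde of the star [W]
    r_star   -- stellar radius [m]
    r_sphere -- sphere radius [m]
    t        -- transmittance (0 <= t < 1)
    """
    s = math.sqrt(1.0 - (r_star / r_sphere) ** 2)
    alpha = (1.0 - t) * (1.0 + s * t) / (1.0 + t)  # = gamma
    f = (1.0 - s) * (1.0 - t) / (1.0 + t)          # feedback fraction
    T_s = ((1.0 - s**2) * L_nuc
           / (4.0 * math.pi * (1.0 + t) * SIGMA_SB * r_star**2)) ** 0.25
    return s, alpha, f, T_s

# Illustrative: a fully covering (t=0) black sphere at 1 AU around a
# solar-luminosity star. Because s is so close to 1, the feedback
# fraction f is tiny, while T_s lands in the classic warm-sphere regime.
s, alpha, f, T_s = hot_sphere_observables(3.828e26, 6.957e8, 1.496e11, t=0.0)
```

This makes concrete the point below: the classical 1 AU sphere has very low feedback, so its effect on the star is negligible even though its infrared signature is strong.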
\subsubsection{Case 2: Cold, Mirrored Dyson Spheres}
For case 2, we examine the opposite extreme, where the Dyson sphere is a spherical mirror returning starlight to the star without heating up. The Dyson sphere itself does not then have any significant temperature or luminosity. This corresponds to $e=0$ and $a=1-t$ in our equations. Photon fates in this case, assuming purely specular reflection are shown in Table \ref{tab:fs} with those of case 1.
For this case, our energy balance luminosity equations become:
\begin{equation}
\frac{L}{\widetilde{L}} = \frac{1}{t},
\end{equation}
\begin{equation} \label{eq:LsLtilde2}
\frac{L_{\rm s}}{\widetilde{L}} = 0.
\end{equation}
Our AGENT observability parameters are:
\begin{equation} \label{eq:alpha_cold}
\alpha = \gamma = 0 .
\end{equation}
With no Dyson sphere luminosity, the luminosity poured onto our star is simply the fraction of its light reflected:
\begin{equation}
f = a = 1-t .
\end{equation}
For each of our stellar models, we just see the luminosity of the star scaled by the factor $t$, with no additional thermal component, so our total luminosity equation is simply:
\begin{equation}
\Phi_\gamma = \frac{\pi}{d^2} t R_*^2 \Phi_{*,\gamma}.
\end{equation}
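The cold-sphere case thus reduces to a gray dimming of the star, which the following minimal sketch (our own illustrative function, not part of our modeling pipeline) makes explicit:

```python
def cold_sphere_observables(t):
    """Cold, mirrored sphere: a = 1 - t, e = 0.

    Returns the feedback fraction f, the AGENT parameter alpha (= gamma),
    and the ratio L/L-tilde set by energy balance.
    """
    f = 1.0 - t        # all intercepted light is reflected back to the star
    alpha = 0.0        # no waste heat: alpha = gamma = 0
    L_ratio = 1.0 / t  # star's surface luminosity is boosted by trapped light
    return f, alpha, L_ratio

# A half-covering mirror (t = 0.5) returns half the starlight and doubles
# the star's surface luminosity relative to its nuclear output.
f, alpha, L_ratio = cold_sphere_observables(0.5)
```
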
\section{Discussion}
\subsection{Color-magnitude Diagrams}
We take a closer look at selected {\it Gaia} and {\it WISE} color-magnitude diagrams (CMDs), shown for low-mass stars in Figure \ref{fig:lm_cmds} and for intermediate-mass stars in Figure \ref{fig:hm_cmds}. From {\it Gaia}, we use the G (green), G$_{\rm BP}$ (blue), and G$_{\rm RP}$ (red) filters to characterize the systems at optical wavelengths. From {\it WISE}, we use the W4 (22 $\mu$m) filter to include an infrared component.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{cmd_02M.pdf}
\includegraphics[width=0.9\textwidth]{cmd_04M.pdf}
\caption{Low-mass star color-magnitude diagrams for combined star and Dyson sphere systems. Top: 0.2M$_\odot$, Bottom: 0.4M$_\odot$. The bare star is shown as a black dot, and the extending black dotted line shows a range of cold, reflective Dyson spheres from $f=0$ to $f=0.50$. Colored lines are drawn at intervals from $t=0$ to $t=0.99$ to show the hot Dyson sphere models. Light gray marks a Dyson sphere at radius 0.01 AU and medium gray at 0.1 AU. }
\label{fig:lm_cmds}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{cmd_1M.pdf}
\includegraphics[width=0.9\textwidth]{cmd_2M.pdf}
\caption{Intermediate-mass star color-magnitude diagrams for combined star and Dyson sphere systems. Top: 1M$_\odot$, Bottom: 2M$_\odot$. The bare star is shown as a black dot, and the extending black dotted line shows a range of cold, reflective Dyson spheres, from $f=0$ to $f=0.50$. Colored lines are drawn at intervals from $t=0$ to $t=0.99$ to show the hot Dyson sphere models. Medium gray marks a Dyson sphere at radius 0.1 AU and dark gray at 1 AU. }
\label{fig:hm_cmds}
\end{figure}
In each CMD, we show the bare star and a line tracing cold, mirrored sphere systems with reflection fractions from 0 to 0.50. We also show a series of curves tracing the magnitudes of hot, non-reflective Dyson spheres with different transmission levels. Along these curves, the feedback parameter $f$ can range from 10$^{-6}$ to 0.50, though unphysical $f$ and $t$ combinations which would require a Dyson sphere smaller than its host star are excluded. In addition, we trace select lines of constant hot Dyson sphere radius. The data behind the figures are available in machine-readable form.
For the hot-to-warm Dyson sphere configurations, high values of $f$ begin in the upper left of each CMD and extend downward toward lower $f$. For each stellar mass, we see similar general trends. In the left-side diagrams, moving toward higher $f$, we see the stars dim in absolute G magnitude, as they first redden and then become bluer again in G$_{\rm BP}$-G$_{\rm RP}$. This happens because at very high feedback levels, the Dyson sphere's temperature is very close to that of the star. As feedback decreases, we see the Dyson sphere begin to cool, reddening the system. At very low feedback levels, the Dyson spheres become very cool and no longer contribute significantly to optical magnitudes, so we begin to see the star's natural color again. On the right-side figures, we see that each constant $t$ line generally dims and reddens as we move toward lower feedback levels. We see continuous reddening here as we move to cooler and cooler Dyson spheres, as the G-W4 color can pick up the infrared output of these relatively low-temperature objects.
Moving from high $t$ lines to lower ones, we see this effect exaggerated, as the Dyson spheres intercept more starlight and contribute more to the overall appearance. As an example of the classical idea of a Dyson sphere, we can examine a solar-mass star with low transmission of starlight through the sphere and a Dyson sphere radius of roughly 1 AU. We see that the feedback levels are very low and that the systems will appear, relative to a bare solar-mass star, to be dimmed in the optical range and reddened in both optical and infrared colors.
Next, we examine the CMDs for cold, mirrored spheres. Low feedback levels (high transmission values) begin at the bare star point. For the 0.2M$_\odot$ star, we see dimming in the G band and no significant change in color, as the star's effective temperature does not significantly change, and there is no Dyson sphere emission. For the 0.4M$_\odot$ star, we see dimming in the G band and the star getting slightly bluer, as the star very slightly increases in effective temperature. The 1M$_\odot$ star does not significantly dim in the G band and becomes bluer. Feedback significantly increases its effective temperature, making it bluer, but not appearing brighter due to decreased transmission. The 2M$_\odot$ star dims and becomes bluer. Its spectrum peaks in UV wavelengths, so the effective temperature increase is not enough to counteract the decreased transmission by the higher coverage mirror systems.
\subsection{When Feedback Matters}
The feedback of energy back onto a stellar surface resulting from a warm and/or reflective Dyson sphere can strongly affect the appearance of a star under certain circumstances, particularly for low-mass stars and high feedback values $f$. For low-mass stars, we have demonstrated that nuclear luminosity is significantly affected. To cause a 1\% change in the star's nuclear luminosity, 0.2 and 0.4 M$_\odot$ stars require 1.2\% and 1.3\% feedback, respectively. Interestingly, we see a partial cancellation in the effects on the star's effective temperature: feedback increases the temperature of the star's exterior, increasing luminosity, while the reduction in nuclear burning decreases luminosity. Ultimately, their effective temperatures do not change noticeably for the feedback levels explored ($\leq$50\%).
For high mass stars, feedback cannot penetrate far into the star, so a very large amount is required to affect nuclear burning, often above the limit of 50\% explored here. To produce a 1\% change in the 1M$_\odot$ star's nuclear luminosity, 45\% feedback is required. None of our models were able to significantly change the 2M$_\odot$ star's nuclear burning. Since there is not a significant cooling effect in these stars, we see that their effective temperatures can be significantly impacted. For a 1\% change in effective temperature for 1 and 2 M$_\odot$ stars, feedback levels of 5.8\% and 5.5\%, respectively, are required. For a cold, mirrored sphere, for example, these would be the fraction of the star's solid angle ($f = a = 1-t$) required to be covered in mirrors for the feedback to significantly affect stars' effective temperatures.
To demonstrate how these feedback levels translate to physical properties of hot Dyson spheres, we plot $\alpha$ (the fraction of stellar luminosity which does not escape as starlight) and $T_{\rm sphere}$ for each stellar mass at the $f$ values that cause significant changes to stars' appearances in Figure \ref{fig:alpha_ts}. Ultimately, these significant observable changes only occur for Dyson spheres on the order of a few thousand Kelvin and not for those of typically assumed Dyson sphere temperatures of a few hundred Kelvin.
\begin{figure}
\centering
\includegraphics{alpha_teff.pdf}
\caption{Lines of T$_{\rm sphere}$ versus $\alpha$ for hot Dyson spheres at which 1\% changes in stellar properties occur. For the low mass stars, we trace the range of possible sphere properties at which L$_{\rm nuc}$ changes by 1\%, as stellar T$_{\rm eff}$ does not significantly change. For the higher mass stars, L$_{\rm nuc}$ does not significantly change, so we trace the limit for T$_{\rm eff}$ to change by 1\%. The changes become stronger as one moves toward the upper right of the lines (hotter spheres that convert more starlight). Ultimately, none of the stars change significantly in nuclear burning or effective temperature for the typically assumed Dyson sphere temperatures of a few hundred Kelvin. }
\label{fig:alpha_ts}
\end{figure}
\subsection{Stellar Engineering}
We have limited our analysis to $f<0.5$ because this represents an extreme outer limit to what might be expected from Dyson spheres used as energy collectors, and in part because higher values required additional modifications to {\tt MESA} to capture the physics correctly.
But higher values, even up to $f=1$, might be interesting to consider as components of a stellar engineering project. As we have shown, returning starlight to a star can increase its lifetime, especially if a significant fraction of the star's outer mass is convective.
Whether such a project could succeed on a star with a substantial radiative component is unclear, but in principle ``bottling up'' a star completely should unbind it on a Kelvin-Helmholtz timescale and quench the nuclear activity in its core. Such a project might be desirable and thus be done intentionally, either to extend the star's life, prevent it from experiencing post-main-sequence evolution, or even extract its mass. Exploring such possibilities is a topic for future work.
\section{Conclusion} \label{sec:concl}
Irradiated stars expand and cool. A Dyson sphere may send a fraction of a star's light back toward it, either by direct reflection or thermal re-emission. This returning energy can be effectively transported through convective zones but not radiative zones, so it can have strong impacts on low-mass main-sequence stars with deep convective zones that extend to the surface. It causes them to expand and cool, slowing fusion and increasing main-sequence lifetimes. For higher-mass stars with little to no convective exterior, the returned energy cannot penetrate far into the star and therefore has little effect on the star's structure and evolution, besides some surface heating.
We have used \citetalias{mesa1} to model the structure and evolution of stars with masses from 0.2 to 2 M$_\odot$ with returned luminosity fractions from 0.01 to 0.50. We have incorporated the effects of feedback on stars and calculated Dyson sphere properties using the AGENT formalism of \cite{ghat2} and the feedback equations of \cite{dsreview}. We have compiled absolute magnitudes in {\it WISE} and {\it Gaia} filters for a variety of combined star-Dyson sphere systems. These are shown in color-magnitude diagrams in this work and are also available as machine-readable tables.
For our 0.2 and 0.4 M$_\odot$ stars, feedback levels above roughly 1\% cause at least a 1\% change in nuclear luminosity; their effective temperatures do not significantly change. For our 1 and 2 M$_\odot$ stars, feedback levels above roughly 6\% cause at least a 1\% change in the star's effective temperature; their nuclear luminosities do not significantly change. Physically, these limits may correspond with a cold, mirrored surface covering the specified fraction of the star's solid angle. For light-absorbing, non-reflective Dyson spheres, these feedback levels correspond to very hot spheres, with temperatures of thousands of Kelvin. This is well above the few hundred Kelvin temperatures typically assumed in Dyson sphere studies, but such hot spheres have been considered in the literature before \citep{Osmanov2018}.
Although the circumstances under which this feedback affects stellar structures are rather extreme, we have demonstrated that it can have a significant effect on the observable properties of certain systems, including mirrored stellar engines and hot Dyson spheres. Mirrored spheres might be relevant for stellar engineering projects designed to extend a star's lifetime, reduce its luminosity, or extract its mass.
We have also compiled expected absolute magnitudes for Dyson spheres at a wide range of feedback levels, including at significantly lower, perhaps more likely levels of feedback, to help guide future searches.
\begin{acknowledgements}
We thank Noah Tuchow and Josiah Schwab for helpful advice regarding the {\tt MESA} modelling process.
The Penn State Extraterrestrial Intelligence Center and Center for Exoplanets and Habitable Worlds are supported by the Pennsylvania State University, the Eberly College of Science, and the Pennsylvania Space Grant Consortium. This research has made use of NASA's Astrophysics Data System Bibliographic Services.
\end{acknowledgements}
\vspace{5mm}
\software{ MESA \citep{mesa1,mesa2,mesa3,mesa4,mesa5}, Astropy \citep{astropy1, astropy2}, Matplotlib \citep{matplotlib}, NumPy \citep{numpy}, SciPy \citep{scipy}}
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 [325] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 [352] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 [379] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 [406] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 [433] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 [460] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 [487] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 > ## use the tapply function to break up > ## into 2 groups, and then find the maximum > ## within each group > > tapply(coin.rle$lengths, coin.rle\\$values, max)\nH T\n9 8\n\n\n\nSo in this case the longest run of heads is 9 and the longest run of tails is 8. The tapply function was discussed in a previous R Function of the Day article.\n\n### Summary of rle\n\nThe rle function performs run length encoding. Although it is not used terribly often when programming in R, there are certain situations, such as time series and longitudinal data analysis, where knowing how it works can save a lot of time and give you insight into your data.","date":"2014-12-21 14:32:00","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.44808223843574524, \"perplexity\": 1736.414894043837}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": 
{\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-52\/segments\/1418802771374.156\/warc\/CC-MAIN-20141217075251-00080-ip-10-231-17-201.ec2.internal.warc.gz\"}"}
| null | null |
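The run-length encoding the article above demonstrates in R is easy to reproduce elsewhere. Here is a minimal Python sketch; the `rle` helper name is mine, chosen to mirror the R function, and is not part of any standard library:

```python
from itertools import groupby

def rle(seq):
    """Run-length encode a sequence, mirroring R's rle():
    returns (lengths, values) describing each consecutive run."""
    lengths, values = [], []
    for value, run in groupby(seq):
        values.append(value)
        lengths.append(sum(1 for _ in run))
    return lengths, values

# The article's 10-flip example: "H T T H H H H H T H"
lengths, values = rle("HTTHHHHHTH")
print(lengths)  # [1, 2, 5, 1, 1]
print(values)   # ['H', 'T', 'H', 'T', 'H']

# Longest run per side, like tapply(coin.rle$lengths, coin.rle$values, max) in R
longest = {v: max(n for n, w in zip(lengths, values) if w == v)
           for v in set(values)}
print(longest)  # {'H': 5, 'T': 2}
```

As in the R version, the same two-step pattern (encode runs, then take a per-group maximum) scales unchanged from 10 flips to 100,000.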
Hines investment and development
About Hines
Hines is a privately owned global real estate investment, development and management firm that has historically developed, redeveloped or acquired 1,348 properties. Hines represents the global real estate benchmark for value creation, integrity, services and quality.
Usage: Mixed-use
Size: 130,000 sq m
Salesforce Tower, a 130,000 sq m, 61-story icon, is adjacent to the Transbay Transit Center in San Francisco. Developed by Hines, the tower is a landmark addition to the San Francisco skyline and is the second tallest building on the West Coast.
Location: Milan
Porta Nuova is a 290,000 sq m mixed-use regeneration project near the centre of Milan. The development comprises 140,000 sq m of office space, 403 residential units, 40,000 sq m of retail and cultural venues. Porta Nuova is one of the largest urban developments in Europe.
CityCenter DC
Location: Washington D.C
CityCenter DC is a mixed-use regeneration development located in downtown Washington D.C. It consists of 77,248 sq m of residential, 20,587 sq m of retail and 48,495 sq m of office space, a 370-key hotel, 1,800 car parking spaces and 1.5 acres of public space.
Size: 15,284 sq m
In 2017, Hines Ireland, in partnership with the Peterson Group in Hong Kong, acquired an impressive portfolio of five buildings located on Dame Street in the heart of Dublin. The re-development will bring more than 15,284 sq m of retail, office and restaurant space to Dublin's city centre.
South Dublin's new retail and leisure destination
Renders are illustrative only, subject to planning.
© Cherrywood / Hines 2019
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 6,516
|
package cn.cerc.mis.tools;

import cn.cerc.db.core.ServiceException;

/**
 * Thrown when a data update operation fails. Wrapping an existing
 * exception keeps its message and attaches the original as a
 * suppressed exception so the root cause is not lost.
 */
public class DataUpdateException extends ServiceException {
    private static final long serialVersionUID = -8184184817999373005L;

    public DataUpdateException(Exception e) {
        super(e.getMessage());
        this.addSuppressed(e);
    }

    public DataUpdateException(String message) {
        super(message);
    }
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 4,619
|
{"url":"https:\/\/lsinsight.org\/john-wallis","text":"# John Wallis\n\nHow much is John Wallis worth? \u2013 Wondering how wealthy & rich is John Wallis? Or maybe you\u2019re just curious about John Wallis\u2019s age, body measurements, height, weight, hair color, eye color, bra & waist size, bio, wiki, wealth and salary?\n\nJohn Wallis (Ashford, November 23, 1616 \u2013 Oxford, October 28, 1703) was an English mathematician who is partly credited with the development of modern calculus. He was a precursor of the infinitesimal calculus (he introduced the use of the symbol $\\infty$ to represent the notion of infinity). Between 1643 and 1689 he was a cryptographer of Parliament and later of the Royal Court. He was also one of the founders of the Royal Society and a professor at the University of Oxford.\n\n## John Wallis\u2019s Biography\n\nBorn in Ashford (Kent), he was the third of the five children of Reverend John Wallis and Joanna Chapman. He started his education at the local Ashford school, but moved to the James Movat school in Tenterden in 1625 due to the outbreak of a plague epidemic. He had his first contact with mathematics in 1631 at the Martin Holbeach school in Felsted; he liked them but his study of them was erratic, \u201cthe mathematics that we currently have, are rarely seen as academic studies, but as something mechanical\u201d (Scriba 1970).\n\nWith the intention of obtaining a PhD, in 1632 he was sent to Emmanuel College in Cambridge. There, he defended an argument about the doctrine of the circulation of blood; it is considered that it was the first time in Europe that this theory was publicly maintained in a discussion. In any case, his interests remained focused on mathematics. He obtained a degree in Arts in 1637, and a master\u2019s degree in 1640, later he joined the priesthood. 
He was granted a scholarship to study at Queen\u2019s College (Cambridge) in 1644, which did not stop him from continuing with his plans for his wedding with Susana Glyde, held on March 14, 1645.\n\nDuring this time, Wallis remained close to the Puritan party, which he helped to decipher the messages of the monarchists. The quality of the cryptography of the time was not uniform; Despite the individual successes of mathematicians such as Fran\u00e7ois Vi\u00e8te, the principles underlying the design and analysis of encryption were vaguely understood. Most of the ciphers were made with ad-hoc methods that relied on secret algorithms, as opposed to systems based on a variable key. Wallis made the latter much more secure and even described them as indecipherable.\n\nHe was also concerned about the use that foreign powers might make of encryption; rejected, for example, a request to teach cryptography to students of Hannover conducted in 1697 by Gottfried Leibniz.\n\nBack in London (in 1643 he had been appointed chaplain of San Gabriel on Fenchurch Street), Wallis joins the group of scientists who would later form the Royal Society. At last he was able to satisfy his mathematical interests, and in a few weeks in 1647 he succeeded in dominating the book Clavis Mathematicae by William Oughtred. In a short time, he began writing his own treatises on a wide range of subjects: throughout his life, Wallis made significant contributions to trigonometry, calculus, geometry and the analysis of infinite series.\n\nJohn Wallis joined the moderate Presbyterians in supporting the proposal against the execution of Charles I, which earned him the permanent hostility of the Independents. Despite his opposition, he was proposed in 1649 to occupy the Savilian Chair of Geometry at the University of Oxford, where he lived until his death on October 28, 1703. 
Apart from his work in mathematics, he also wrote about theology, logic, English grammar and philosophy; He was also one of the pioneers in the introduction in England of a teaching system for deaf-mutes, inspired by the method of the Spanish Juan de Pablo Bonet.\n\n## More Facts about John Wallis\n\nThe John Wallis\u2019s statistics like age, body measurements, height, weight, bio, wiki, net worth posted above have been gathered from a lot of credible websites and online sources. But, there are a few factors that will affect the statistics, so, the above figures may not be 100% accurate.","date":"2020-09-22 11:43:42","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 1, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.41427117586135864, \"perplexity\": 2116.7038757152914}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-40\/segments\/1600400205950.35\/warc\/CC-MAIN-20200922094539-20200922124539-00318.warc.gz\"}"}
| null | null |
Mr. David Sinclair received a prestigious honor from the Institute of Demolition Engineers (IDE) earlier this month. Mr. David Darsey, President of the IDE, awarded Mr. Sinclair with an Honorary Fellowship of the Institute on November 16, 2018 at Drapers Hall, City of London, U.K.
Mr. Sinclair currently holds several honorary titles, including Honorary Life Vice President of the National Federation of Demolition Contractors and Honorary Life Vice President of the European Demolition Association. Additionally, he is a member of the European Federation of Explosives Engineers, was appointed as Demolition Engineer to the United Nations in 2011, and was appointed as a Director of the National Demolition Association (NDA) in early 2018.
Presently, Mr. Sinclair is a Technical Director and Subject Matter Expert (SME) for Decommissioning and Demolition for Envirocon. He continues to provide expertise and technical support to our demolition teams on projects across the United States.
Please join us in congratulating Mr. Sinclair on another fine accomplishment.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 3,768
|
Bersih in the Media
'Stay away or face arrest'
By Andrew Sagayam (The Star)
KUALA LUMPUR: Anyone who takes part in the illegal gathering on Nov 10 at Dataran Merdeka to demand for a "fair and clean" general election will be arrested.
Police will not hesitate to pick up anyone seen in the area where 100,000 people are expected to congregate from 3pm.
The plan was for those attending the gathering to march to various parts of the city before heading to Istana Negara where a memorandum would be submitted to the Yang di-Pertuan Agong.
City police chief Deputy Comm Datuk Zul Hasnan Najib Baharudin said police would act to ensure peace and order, and advised the public not to believe claims made by the organisers that the gathering was legitimate, as applications for the permit would not be approved.
"I am responsible for the safety of the public here in the city. As a law enforcer, I will not compromise in this matter.
"The law must be observed and I advise the public to stay away from the gathering," DCP Zul Hasnan said.
He said he met with the organisers of the gathering yesterday and advised them not to go ahead as the permit would not be issued.
"If they want to submit a memorandum, they have the right to do so, but they don't need a large group to do that.
"They can go to Istana Negara to hand over the memorandum but there must be fewer than 10 people," DCP Zul Hasnan said, adding that a large number of people would cause unnecessary traffic congestion.
He said those who participated in the illegal assembly would be arrested under Section 27(5) of the Police Act 1967.
Parti Keadilan Rakyat is said to be the main organiser of the gathering.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 3,651
|
The Best CMovies Alternatives for 2022
Maria James, August 14, 2022, Entertainment
Because movies and cinema are such a big part of people's lives, everyone wants to have fun and enjoy their free time. Only a few movies are being shown in theaters because of the ongoing pandemic. As a result, over-the-top (OTT) services like Netflix, Hulu, and others are becoming more popular. People use different websites and apps for streaming movies online to watch the same movies on their phones and TVs at home. Several free movie streaming sites, like cmovies, rainierland, putlocker, and 123movies, are becoming more popular because they offer different benefits.
One of the best websites is CMovies, which is a streaming platform where you can watch TV shows and movies for free (usually by making an account on the site) or for a small fee. This CMovies website is popular in part because it has a wide variety of genres and is completely legal. The website doesn't have its own storage, so it sends people to other Streaming websites.
Pirated movie sites like CMovies and others have popped up to serve people who want to watch free movies and TV episodes on their computers. Even though they are popular, they are neither safe nor legal ways to stream. This article talks about one of these options, CMovies, as well as other options you could use if you can't get to CMovies or if it doesn't work right on your computer. But before I talk about the other options, I'd like to explain the most important thing about this website.
This page talks about alternatives to CMovies and includes the following:
Even if the user interface, the number of TV shows and movies available, and the occasional legal problems make you want to look for another movie streaming site, it can be hard to know where to start. Many EU and Western countries consider it illegal to use some free movie streaming services, so it is best to look at some CMovies alternatives.
Here is a list of the best alternatives to CMovies, where you can watch everything from the latest blockbusters to masterpieces from the early 20th century. Keep in mind that not all websites offer the same services, and read each description carefully.
1: VIPRow
Sports enthusiasts may view their preferred matches in high quality from all over the world for free on VIPRow Sports. Anywhere on the planet should be able to visit this website if you have a stable connection to the internet. Compared to VipLeague, VIPRow has more sports and is easier to use on a mobile device.
The website lets you watch a wide range of sports, including games from the National Football League, the English Premier League, Major League Baseball, and the National Basketball Association. This page has links to streaming sites and other useful information, like trivia. For instance, a query like "Do you know who the smallest NBA player was?" can appear on an NBA feed. Trivia contests don't affect the quality of the streaming, but they do make the customer experience better and build trust.
2: GoMovies
GoMovies is a well-known website where you can watch movies. The best thing about the GoMovies website is that it is easy to use and gives you quick access to a wide range of movies and TV shows. The faster people learn how to use the GoMovies online platform, the sooner they will sign up.
The information has also been put into groups so that clients can easily find what they need. The number of categories keeps going up, which is good for users.
If you've used GoMovies before and are looking for alternatives, here are some options to think about. Not only do these competitors look like GoMovies, but they also have more features.
3: Amazon Prime
Amazon Mobile LLC created the wildly popular home entertainment program Amazon Prime Video, which allows customers to watch and download The Man in the High Castle, The Grand Tour, and a slew of other popular movies and television shows. It makes use of a variety of popular news articles to present its consumers with an engaging experience.
Users can obtain all freely accessible stuff for free with this software. It offers Bollywood and regional Indian music and is well-liked in the world's most populous countries.
The Amazon Prime Video app, like other similar apps, requires a subscription to access the 100+ premium channels and top films. Amazon Prime Video maintains a large library of content and frequently updates its database with the most latest and cutting-edge additions.
This application allows you to watch more channels, documentaries, and uncensored television shows. Amazon Prime Video has a variety of categories, such as New Release, TV Show, Documentaries, and Sports Channel.
You can also use the search box or the categories to find what you're looking for (Action, War, Comedy, Love Stories, etc.). Users of Android and iOS may easily access Amazon Prime Video.
4: Movie4u
A "Movie4u alternative" is a website that provides clients with free HD streaming services for TV episodes and movies without requiring them to download or register. Movie4u's target demographic consists of users who enjoy HD movies, TV series, and full-length content. This website allows users to watch both the top movies on IMDB and the most watched movies.
Future films are frequently updated on Movie4u, keeping viewers informed. Movie4u's film categories contain films from all over the world. The only ways to find a movie on Movie4u were to use the advanced search box or to browse the available categories. The user can find the movie by typing its tag or title into the search box.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 3,889
|
~ The only things we truly possess are those things that we are continually rediscovering.
Brothers, you came from our own people. You are killing your own brothers. Any human order to kill must be subordinate to the law of God, which says, 'Thou shalt not kill'. No soldier is obliged to obey an order contrary to the law of God. No one has to obey an immoral law. It is high time you obeyed your consciences rather than sinful orders. The church cannot remain silent before such an abomination. …In the name of God, in the name of this suffering people whose cry rises to heaven more loudly each day, I implore you, I beg you, I order you: stop the repression!
Osama Is Dead, but Was Justice Done?
This work by Amanti is licensed under a Creative Commons Attribution-No Derivative Works 3.0 United States License.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 4,794
|
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8" >
<title>Warsaw 2015
- Proposals</title>
<meta name="author" content="" >
<link rel="alternate" type="application/rss+xml" title="devopsdays RSS Feed" href="http://www.devopsdays.org/feed/" >
<script type="text/javascript" src="https://www.google.com/jsapi"></script>
<script type="text/javascript">
google.load('jquery', '1.3.2');
</script>
<!-- This is a combined jAmpersand, jqwindont, jPullquote -->
<script type="text/javascript" src="/js/devops.js"></script>
<!-- Blueprint CSS Framework Screen + Fancytype-Screen + jedi.css -->
<link rel="stylesheet" href="/css/devops.min.css" type="text/css" media="screen, projection">
<link rel="stylesheet" href="/css/blueprint/print.css" type="text/css" media="print">
<!--[if IE]>
<link rel="stylesheet" href="/css/blueprint/ie.css" type="text/css" media="screen, projection">
<![endif]-->
</head>
<body onload="initialize()">
<div class="container ">
<div class="span-24 last" id="header">
<div class="span-16 first">
<img src="/images/devopsdays-banner.png" title="devopsdays banner" width="801" height="115" alt="devopdays banner" ><br>
</div>
<div class="span-8 last">
</div>
</div>
<div class="span-24 last">
<div class="span-15 first">
<div id="headermenu">
<table >
<tr>
<td>
<a href="/"><img alt="home" title="home" src="/images/home.png"></a>
<a href="/">Home</a>
</td>
<td>
<a href="/contact/"><img alt="contact" title="contact" src="/images/contact.png"></a>
<a href="/contact/">Contact</a>
</td>
<td>
<a href="/events/"><img alt="events" title="events" src="/images/events.png"></a>
<a href="/events/">Events</a>
</td>
<td>
<a href="/presentations/"><img alt="presentations" title="presentations" src="/images/presentations.png"></a>
<a href="/presentations/">Presentations</a>
</td>
<td>
<a href="/blog/"><img alt="blog" title="blog" src="/images/blog.png"></a>
<a href="/blog/">Blog</a>
</td>
</tr>
</table>
</div>
</div>
<div class="span-8 last">
</div>
<div class="span-24 last" id="title">
<div class="span-15 first">
<h1>Warsaw 2015
- Proposals </h1>
</div>
<div class="span-8 last">
</div>
<h1>Gold partners</h1>
</div>
<div class="span-15 ">
<div class="span-15 last ">
<div class="submenu">
<h3>
<a href="/events/2015-warsaw/">Welcome</a>
<a href="/events/2015-warsaw/location">Location</a>
<a href="https://rejestracja.proidea.org.pl/registration/form.html?conferenceId=9A66580322513117A49E03F6A43C884E&utm_campaign=website&utm_source=orgsite_nav&utm_medium=link">Register</a>
<a href="/events/2015-warsaw/program">Program</a>
<a href="/events/2015-warsaw/propose">Ignites</a>
<a href="/events/2015-warsaw/partners">Partners</a>
<a href="/events/2015-warsaw/contact">Contact</a>
<a href="/events/2015-warsaw/conduct">Code of Conduct</a>
</h3>
</div>
<center>
<strong><em>Note:</em></strong><br/>
We are no longer accepting new talk proposals!<br/>
Check our current <a href='/events/2015-warsaw/program'>program</a>, it's ready!
</center>
<br/>
<hr/>
<p>This page list the suggested topics and/or speakers, proposed talks and ignites we have received.<br/><br/>
<strong><em>Note:</em></strong> Some talks will most likely provide an opportunity for further discussion
in form of an <em>Open Spaces</em> style collaboration for people to talk about various ideas or whatever
they want.<br/></p>
<p>Help the organisers and presenters with your feedback!<br/>
Feel free to contact us by e-mail at: <a href='mailto:organizers-warsaw-2015@devopsdays.org'>organizers-warsaw-2015@devopsdays.org</a></p>
<h2>Conference Talks</h2>
<ol>
<li><a href="/events/2015-warsaw/proposals/BartoszChrabski_IBMBlueMixIntroduction/">IBM BlueMix Introduction</a> - Bartosz Chrabski
<li><a href="/events/2015-warsaw/proposals/PiotrBaranowski_HitchhikersGuideToCloudComputing/">Hitchhiker's guide to cloud computing</a> - Piotr Baranowski
<li><a href="/events/2015-warsaw/proposals/PiotrSzwed_GitGerritJenkinsFreeOpenSourceDevelopmentPlatform/">Git, Gerrit, Jenkins - Free & Open Source Development Platform</a> - Piotr Szwed
<li><a href="/events/2015-warsaw/proposals/RafalKuc_FromZeroToHeroWithLogstashAndElasticsearch/">From zero to hero - easy log centralization with Logstash and Elasticsearch</a> - Rafal Kuc
<li><a href="/events/2015-warsaw/proposals/MichaelDucy_ChangingTheBehaviorOfIT/">Changing the Behavior of IT</a> - Michael Ducy
<li><a href="/events/2015-warsaw/proposals/JanSvoboda_BuildingDeploymentAutomationPlatformsForContinuousDelivery/">Building Deployment Automation Platforms for Continuous Delivery</a> - Jan Svoboda
<li><a href="/events/2015-warsaw/proposals/DanielPijanowski_ATypicalSysadminDayWithRalph/">A typical sysadmin day with ralph</a> - Daniel Pijanowski
</ol>
<h2>Ignite Talks</h2>
<ol>
<li><a href="/events/2015-warsaw/proposals/GrzegorzNosek_SovietMilitaryHardwarePrimerForDevOps/">Soviet military hardware primer for DevOps</a> - Grzegorz Nosek
<li><a href="/events/2015-warsaw/proposals/GrzegorzNosek_SelfAwareMonitoring/">Self-aware monitoring</a> - Grzegorz Nosek
<li><a href="/events/2015-warsaw/proposals/LucaGibelli_ScalingOutYourStorageOneBrickAtATime/">Scaling out your storage, one brick at a time</a> - Luca Gibelli
<li><a href="/events/2015-warsaw/proposals/GrzegorzNosek_RemoveTheGuessworkFromDiagnosticsWithSysdig/">Remove the guesswork from diagnostics with sysdig</a> - Grzegorz Nosek
<li><a href="/events/2015-warsaw/proposals/PavelStepanov_ImprovingDeveloperExperienceInInteractionWithCISystems/">Improving developer experience in interaction with CI systems</a> - Pavel Stepanov
<li><a href="/events/2015-warsaw/proposals/BartoszGorczynski_ITSMInAgileTimes/">ITSM in Agile times</a> - Bartosz Górczyński
<li><a href="/events/2015-warsaw/proposals/BartoszChrabski_IBMBlueMixIntroduction/">IBM BlueMix Introduction</a> - Bartosz Chrabski
<li><a href="/events/2015-warsaw/proposals/ZoltanTothJuliannaGobolosSzabo_HowToBuildAPetabyteScaleDataInfrastructure/">How to build a petabyte-scale data infrastructure</a> - Zoltán Tóth and Julianna Göbölös-Szabó
<li><a href="/events/2015-warsaw/proposals/JoernBarthel_DockerOffshorePackagingApplicationsForHardToReachDatacenter/">Docker offshore - packaging applications for hard to reach datacenter</a> - Joern Barthel
<li><a href="/events/2015-warsaw/proposals/GrzegorzNosek_DevOpsJFDI/">DevOps: JFDI</a> - Grzegorz Nosek
<li><a href="/events/2015-warsaw/proposals/GarethWorkman_DevOpsInUKGovernment/">DevOps in UK Government</a> - Gareth Workman
<li><a href="/events/2015-warsaw/proposals/RomanPavlyuk_DevOpsAtHomeMakingItWorking/">DevOps at Home: making it working</a> - Roman Pavlyuk
<li><a href="/events/2015-warsaw/proposals/PiotrSzwed_DevOpsKnowledgeBase/">DevOps Knowledge Base</a> - Piotr Szwed
<li><a href="/events/2015-warsaw/proposals/AlexLomau_CloudFoundryHowItscooked/">Cloud Foundry: How it's cooked</a> - Alex Lomau
<li><a href="/events/2015-warsaw/proposals/AndrzejGrzesik_CheffingADepartmentOneDevAtATime/">Cheffing a department, one Dev at a time</a> - Andrzej Grzesik
<li><a href="/events/2015-warsaw/proposals/MattHarasymczuk_BranchManagementContinuousIntegrationAtAlmostNoCost/">Branch Management: Continuous Integration at Almost No Cost</a> - Matt Harasymczuk
<li><a href="/events/2015-warsaw/proposals/MattHarasymczuk_AgileSoftwareDevelopment/">Agile Software Development</a> - Matt Harasymczuk
<li><a href="/events/2015-warsaw/proposals/DanielPijanowski_ATypicalSysadminDayWithRalph/">A typical sysadmin day with ralph</a> - Daniel Pijanowski
</ol>
<h2>Open Sessions</h2>
<ol>
<li>To be added. <i><a href='/events/2015-warsaw/propose'>Be the first to propose!</a></i>
</ol>
</div>
</div>
<div class="span-8 last">
<div class="span-8 last">
<h1>Main Partner</h1>
<a href='http://allegro.tech/'><img border=0 alt='allegro.tech' title='allegro.tech' width=100px height=100px src='/events/2015-warsaw/logos/allegrotech.png'></a>
<h1>Platinum Partner</h1>
<a href='http://www.uniteam.pl/'><img border=0 alt='Uniteam' title='Uniteam' width=100px height=100px src='/events/2015-warsaw/logos/uniteam.png'></a>
<h1>Gold Partner</h1>
<a href='https://www.kainos.pl/'><img border=0 alt='Kainos' title='Kainos' width=100px height=100px src='/events/2015-warsaw/logos/kainos.png'></a>
<h1>Regeneration Zone Partner</h1>
<a href='http://www.connectis.pl/'><img border=0 alt='Connectis' title='Connectis' width=100px height=100px src='/events/2015-warsaw/logos/connectis.png'></a>
<h1>Sponsor</h1>
<a href='http://coders-mill.com/'><img border=0 alt='Codersmill' title='Codersmill' width=100px height=100px src='/events/2015-warsaw/logos/codersmill.png'></a>
<h1>Partners</h1>
<a href='http://eventory.cc/'><img border=0 alt='EventoryApp' title='EventoryApp' width=100px height=100px src='/events/2015-warsaw/logos/eventory.png'></a>
<a href='http://kamea.travel.pl/'><img border=0 alt='Kamea' title='Kamea' width=100px height=100px src='/events/2015-warsaw/logos/kamea.png'></a>
<a href='https://teetbee.com/'><img border=0 alt='Teetbee' title='Teetbee' width=100px height=100px src='/events/2015-warsaw/logos/teetbee.png'></a>
<i> <a href='/events/2015-warsaw/partners'>Become one!</a></i>
</div>
<div class="span-8 last">
</div>
</div>
</div>
</div>
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-9713393-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</body>
</html>
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 2,769
|
Q: 12V Relay Actuation Depending on PWM or Constant DCV Input I am looking to actuate a 12V SPDT relay depending on whether or not a 12V PWM signal is received. Headlamps use a reduced power PWM signal for daytime running lamps. I want to be able to detect that signal separately from a 12VDC source (normal headlamp operation) and actuate a relay based on it to relocate that daytime running lamp to another location. I am an ME by trade but have limited experience with circuits.
I have been able to use a simple RC circuit to turn that PWM into a lower constant voltage. Trouble is, I don't know how to compare the full 12V and the reduced voltage and also still carry enough amperage to control a relay coil.
Looking for something as simple as possible. Some have suggested using a 555 as a retriggerable monostable missing-pulse detector, or an RC circuit with a comparator. I don't know if either of these would work or how to specifically design them. Any help would be greatly appreciated!
A: You are worrying about a condition which is not necessarily a problem. Basically, you're concerned that you might not be able to reliably drive the relay when the 12 volts is being PWM'd, right?
Well, that has its own answer: Don't. That is, only actually drive the relay when PWM is not present. You need to be aware that relay contacts come in two flavors, normally open or normally closed, where "normally" means with the relay not activated.
So, use your filter/comparator to detect the presence/absence of PWM, and only drive the relay when the input is DC. Select your contacts to produce the result you want.
A: I'd recommend breaking your design into two stages:
1 - Detect the state of your PWM signal,
2 - Drive your relay coil accordingly
For the first stage, I'd recommend an op-amp comparator to determine the state of the PWM signal. Converting the PWM signal to a DC level is a good start towards accomplishing this. Basically, you want your comparator to give a 'high' output if the PWM signal is above some threshold, or a 'low' state if it's below the threshold. You can set the threshold voltage using a simple 2-resistor voltage divider from your 12V supply.
For the second stage, I'd recommend an N-type FET to serve as a low-side switch for the relay coil. When the output of the comparator goes 'high', the NFET conducts, and current flows through the relay coil to activate it. When the comparator goes 'low', the NFET stops conducting, and the relay is deactivated.
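The two-stage design above can be sanity-checked numerically. The sketch below is illustrative only: the 10k/30k divider values and the resulting 9 V threshold are assumptions chosen to sit between a filtered 50% PWM level (about 6 V) and full 12 V DC, not values taken from the answer.

```python
def divider_voltage(v_supply: float, r_top: float, r_bottom: float) -> float:
    """Output of a two-resistor voltage divider (the comparator threshold)."""
    return v_supply * r_bottom / (r_top + r_bottom)

def comparator(v_in: float, v_threshold: float) -> bool:
    """True ('high') turns on the NFET low-side switch and energizes the relay coil."""
    return v_in > v_threshold

V_SUPPLY = 12.0
# Hypothetical divider: 10k over 30k off the 12 V supply gives a 9 V threshold,
# roughly midway between a filtered 50% PWM (~6 V) and full 12 V DC.
V_TH = divider_voltage(V_SUPPLY, r_top=10e3, r_bottom=30e3)

print(V_TH)                    # 9.0
print(comparator(12.0, V_TH))  # True  -> steady DC headlamps, relay driven
print(comparator(6.0, V_TH))   # False -> PWM daytime running lamps, relay off
```

This matches the first answer's suggestion to drive the relay only when the input is DC and to pick the normally-open or normally-closed contacts for the desired behavior.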
M-estimators for single-index model using B-spline

Qingming Zou and Zhongyi Zhu

Metrika, Volume 77 (2014), Issue 2 (February), pages 225-246. DOI: 10.1007/s00184-013-0434-z

Abstract: The single-index model is an important tool in multivariate nonparametric regression. This paper deals with M-estimators for the single-index model. Unlike the existing M-estimator for the single-index model, the unknown link function is approximated by B-spline and M-estimators for the parameter and the nonparametric component are obtained in one step. The proposed M-estimator of the unknown function is shown to attain the optimal global rate of convergence of estimators for nonparametric regression according to Stone (Ann Stat 8:1348-1360, 1980; Ann Stat 10:1040-1053, 1982), and the M-estimator of the parameter is √n-consistent and asymptotically normal. A small-sample simulation study showed that the M-estimators proposed in this paper are robust. An application to real data illustrates the estimator's usefulness. Copyright Springer-Verlag Berlin Heidelberg 2014.

Full text: http://hdl.handle.net/10.1007/s00184-013-0434-z (access restricted to subscribers).
Tag Archives: Leases
Fiji parties urged to outline land policy ahead of poll – ABC News 24
June 25, 2014 | Customary Land, Fiji, Fiji Elections, iTLTB, Leases, Property Rights | Aiyaz Sayad-Khaiyum, customary land, Fiji, Fiji Election, iTaukei, Leases, Property rights | Customary Land Solutions
Following on from his 7th June editorial in the Fiji Times, Professor Spike Boydell has been interviewed by ABC correspondent Sean Dorney for this item on ABC News 24 'The World', which first aired on 24 June 2014. It also includes comments from the Attorney General Aiyaz Sayad-Khaiyum and Prof Satish Chand. The link to the piece on the ABC website is available here.
"What people want is stability and land is central to that stability." (Spike Boydell)
Call for Land Policies ahead of Fiji Elections – ABC Pacific Beat
June 25, 2014 | Customary Land, Fiji, Fiji Elections, iTLTB, Leases, Property Rights | Fiji Election, iTaukei, Land, Leases, political parties, Property rights | Customary Land Solutions
https://customarylandsolutions.files.wordpress.com/2014/06/boydell-on-pacific-beat-on-fiji-land-and-elections-20140624.mp3
Professor Spike Boydell has been interviewed on ABC Radio Australia about the need for political parties in Fiji to explain their land policies ahead of the elections. A transcript of the interview is available here. The full piece on ABC Radio Australia – Pacific Beat – first aired on 24 June 2014 is available here.
Why land is central to Fiji's future stability
June 7, 2014 | Customary Land, Fiji, Fiji Elections, iTLTB, NLTB | Fiji, Fiji Election, Fiji Land, iTaukei, iTaukei Land Trust Board, iTLTB, Leases, political parties, Property rights, Vanua | Customary Land Solutions
With 100 days to go to the Fiji elections in September, none of the political parties have yet explained in their manifestos how they will deal with land (indeed, where are the manifestos?). In his feature editorial in the Fiji Times on Saturday 7th June, Spike Boydell highlights that being clear on land issues, having equitable leases that are fit for purpose at market rents, and respecting the paramountcy of iTaukei land – the vanua – is central to long term economic and political stability in Fiji.
Pacific Regional Symposium – Land and Property Rights in the South Pacific – Honiara 5-7 August 2014
April 24, 2014 | Customary Land | Carbon Property Rights, CASLE, Climate Change, Honiara, IAAPLPR, Land, Land Trusts, Leases, Property rights, Resource Compensation, Solomon Islands, South Pacific, Symposium, UTS: APCCRPR | Customary Land Solutions
CLS members Spike Boydell, Ulai Baya and John Sheehan are co-facilitating the Pacific Regional Symposium – Land and Property Rights in the South Pacific – Honiara 5-7 August 2014 (flyer & registration form) with Mike McDermott.
This Symposium is a joint initiative of the Commonwealth Association of Surveying and Land Economy (CASLE), the University of Technology, Sydney: Asia-Pacific Centre for Complex Real Property Rights (UTS: APCCRPR) and the International Academic Association for Planning Law and Property Rights (IAAPLPR). It is being hosted by the Solomon Islands Ministry of Lands, Housing and Surveys. It has been made possible through a small grant from the Commonwealth Foundation and the support of the Ministry of Lands, Housing and Surveys.
This is the second regional Land and Property Rights symposium co-facilitated by the UTS: APCCRPR and the IAAPLPR.
Please click on the highlighted text above, or the image on the left for more information.
The CLS site will be the digital repository for the symposium resources and video record of the event. If you are unable to attend, but would like to be notified when the resources are online, please complete the following contact form:
INDIGENOUS PEOPLE, NOT AUSTRALIANS, SHOULD DETERMINE VANUATU'S FUTURE
We agree with much of Joel Simo's useful news article published in the Sydney Morning Herald last week (available here). However, whilst we know and have respect for Joel Simo, and accept that leases have been inappropriately drafted in Vanuatu, we remain of the view that appropriately drafted leases are part of the solution as well (see our guidance on leaseholds). This view is not shared by the Melanesian Indigenous Land Defence Alliance (MILDA), but we hope that they will come to realise that leases are the only way that surplus customary land can be accessed and made economically productive by third parties. What is important is ensuring that the leases are at commercial rents (as opposed to unimproved capital value), regularly reviewed, have an appropriate duration (for the customary landowners), and that any improvements are returned with the land to the customary landowners at lease expiry (or renewal) in good and tenantable repair. The problem, as we see it, is that historically in the Pacific leases have been drafted to benefit external investors and colonial interests rather than the customary landowners.
It is time to reverse this trend.
Chat on December 1, 2013 | Uncategorized | Joel Simo, Leases, MILDA, Vanuatu | Customary Land Solutions
The Synagogues
Exibitions and Events
Internships and Traineeships
The Jewish Quarter
The Great Synagogue
In 1870, with the breach of Porta Pia, the Italian Army conquered Rome and the city, with all its territory, was absorbed into the Kingdom of Italy: the temporal power of the popes had ended. Later on, Rome was declared the capital of the Kingdom.
In the nineteenth century the Jews achieved, as in the rest of Europe, full emancipation and equal civil rights.
From then on, the Jewish communities were able to erect, after centuries of limitations, their monumental synagogues.
The Jews decided to build the most impressive synagogue in the city, the Great Synagogue, in the same neighborhood where, for centuries, they had been confined. The building stands in the area of the former ghetto, which had been demolished and redeveloped following the 1888 town plan.
The winners of the public competition for the new synagogue project were Osvaldo Armanni and Vincenzo Costa. The Synagogue was inaugurated in 1904.
The monumental building is surmounted by a square-based dome, which is covered in aluminum.
The interior, with its bimah (pulpit) placed in a way that does not exactly conform to the tradition of the "Roman rite", is richly decorated in art nouveau style.
The Spanish Synagogue
At the end of the nineteenth century, the Jewish Community of Rome wanted to replace the Ghetto's five ancient synagogues (Cinque Scole) with a monumental one.
The community intended to reserve an oratory for the Spanish rite, officiated in Rome at least since 1492, with the arrival of the Jews expelled from Spain.
In 1932 the Spanish Synagogue was placed inside the Great Synagogue monumental building.
In 1948 it was embellished with the marble furnishings belonging to the Cinque Scole, thus recreating the atmosphere of the ghetto's ancient synagogues with their splendid marbles and fabrics.
Via Catalana (Sinagoga), Roma
info@test-wp.museoebraico.roma.it
Copyright © 2023 Jewish Museum of Rome - Privacy policy
Newsletter - Jewish Museum of Rome
The subscription to the newsletter of the Jewish Museum of Rome is free and can be deactivated at any time by clicking on the appropriate link within the newsletter itself.
By receiving the Museum newsletter, I will have the opportunity to be always updated on events, exhibitions and anything else happening at the Museum.
Donatas Banionis (28 April 1924 – 4 September 2014) was a Lithuanian and Soviet actor. He is best known for his leading role in the film Solaris.
He began his career in Lithuanian-language films, but later moved to Russian-language films. He was also a theatre actor.
References
External links
1924 births
2014 deaths
Lithuanian actors
Soviet actors
A counterexample in balanced parentheses

In this program I had to analyse whether the parentheses given are well balanced. For instance, input (()) is correct and input ())(() is incorrect. I've tested with other similar inputs, so my question is: can my code fail with some weird expression (using parentheses) or will it always work correctly?

    public class Main {

        public static void main(String[] args) {
            Scanner x = new Scanner(System.in);
            Stack<String> parenthesis = new Stack<>();

            System.out.println("Introduce the length of your expression");
            int l = x.nextInt();

            x.nextLine();
            for (int i = 0; i < l; i++) {
                System.out.println("Introduce an element");
                String e = x.next();
                parenthesis.push(e);
            }

            analysis(parenthesis);
        }

        public static void analysis(Stack<String> st) {

            Stack<String> closed = new Stack<>(); // only for parentheses like this: )

            int ww = 0;
            while (!st.isEmpty() && ww != 1) {
                String s = st.pop();
                if (0 == s.compareTo("(") && !closed.isEmpty()) {
                    closed.pop();
                } else if ((0 == s.compareTo("(")) && closed.isEmpty()) {
                    System.out.println("incorrect expression.");
                    ww = 1;
                } else {
                    closed.push(s);
                }
            }
            if (ww == 1) {
                System.out.println();
            } else if (!closed.isEmpty()) {
                System.out.println("incorrect espression");
            } else {
                System.out.println("correct");
            }
        }
    }

Comments:
• It seems to me that ((x) would pass, when it shouldn't (Sep 18, 2017)
• @janos no, it works correctly. I didn't consider arguments inside the parentheses, but if you just put (() as input the output will be 'incorrect expression' (Sep 18, 2017)
• A better exercise, which you might consider trying right now: modify your code so that it detects correct vs incorrect strings in the language of balanced parentheses with {}[]() parens. So ({()[]}[]) would be legal but ({())} would not be. Now it is not so simple as just counting how many opens and closes you see; you have to actually use the contents of the stack to solve this problem, and not, as others have noted, simply use the stack as a complicated integer. (Sep 18, 2017)
• Hint: the way you've written the code is not wrong, but it is more complicated than it needs to be. Try this: Make an empty stack. Go through the inputs. When you see a (, push it on the stack. When you see a ), check to see if the stack is empty; if it is, you have a bad input. If not, pop the stack. If you end up with a non-empty stack, you have a bad input. Otherwise you have a good input. This program should be much shorter and simpler than yours. (Sep 18, 2017)
• @Michelle do you know why? Because it doesn't improve readability and is more prone to causing NullPointerExceptions (when s == null) than the equals in my comment. (Sep 19, 2017)

Answer: Inside the analysis method, you only use the variable ww as 0 or 1. This would be better as a boolean type instead of an int type.

Currently analysis has two actions:

1. Validate the input
2. Write the response to the console.

The analysis would be more reusable if you removed the printing and put it in a different function. Then analysis would return String instead of void. This:

    public static void analysis(Stack<String> st) {
        ...<snip>...
        if (ww == 1) {
            System.out.println();
        } else if (!closed.isEmpty()) {
            System.out.println("incorrect espression");
        } else {
            System.out.println("correct");
        }
    }

becomes

    public static String analysis(Stack<String> st) {
        ...<snip>...
        if (ww == 1) {
            return "\n";
        } else if (!closed.isEmpty()) {
            return "incorrect espression\n";
        } else {
            return "correct\n";
        }
    }

Some of your variable names are really good (such as parenthesis), but some are only a single letter or two, which can be hard to understand by someone else reading your code. l would be better as length, x could be scanner or input, etc. I still haven't figured out what ww is, other than a flag of some kind.

Is there a reason the user needs to type each character in the parentheses one at a time instead of all at once?

You have a typo in one of your "incorrect espression" blocks. And one ends with a period while the other does not.

Comments on this answer:
• why would it be better to use ww as a boolean type variable? (Sep 18, 2017)
• I feel more comfortable typing the characters in the parentheses one at a time; it's just an option, and I don't see disadvantages in doing it (Sep 18, 2017)
• @Michelle That's too generic. The name of a flag variable should say what condition it's flagging. E.g. in a search function you might call it found; in a validation function it might be called valid. (Sep 22, 2017)
• But in general, flags are boolean, since they're either on or off. (Sep 22, 2017)
• I see, well in my program the flag shows when an error appears, so maybe it should be named found, as in found a mistake. @Barmar (Sep 22, 2017)

Answer: May I suggest a simpler solution: You need only a counter. It is incremented by 1 whenever you encounter an opening parenthesis "(" and decreased by 1 whenever you encounter a closing parenthesis ")". The input is invalid if the counter drops below zero or is not zero by the end of scanning.

EDIT: since the OP specified they must use a stack, we can follow @JollyJoker's advice: the counter increment is implemented by a push operation, the decrement by a pop. The counter falling below zero corresponds to an attempt to pop when the stack is empty. By the end of scanning the stack has to be empty.

Comments on this answer:
• what if you receive ')(' as input? it shouldn't be correct (Sep 18, 2017)
• on the first ")" the counter gets below zero... (Sep 18, 2017)
• in my answer I meant: if (the counter drops below zero) or (it is not zero by the end of scanning) then the input is invalid, so the input is invalid the instant the counter gets below zero, which really means an extra closing parenthesis (Sep 18, 2017)
• very nice/short/good answer but it's part of the exercise to use Stacks :( (that's why the tag) (Sep 18, 2017)
• @Michelle You can think of the stack size as the counter. (Sep 19, 2017)
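The simpler stack discipline suggested in the comments (push on '(', pop on ')', fail on a pop from an empty stack or a non-empty stack at the end) can be sketched in Python; this is an illustrative re-implementation, not the asker's Java:

```python
def balanced(expr: str) -> bool:
    """Return True if the parentheses in expr are balanced.

    Scans left to right, pushing '(' and popping on ')'; a ')' with an
    empty stack, or leftover '(' at the end, means unbalanced. Other
    characters are ignored.
    """
    stack = []
    for ch in expr:
        if ch == "(":
            stack.append(ch)
        elif ch == ")":
            if not stack:
                return False  # closing paren with no matching open
            stack.pop()
    return not stack  # leftover '(' means unbalanced

# Examples from the question and comments:
print(balanced("(())"))    # True
print(balanced("())(()"))  # False
print(balanced("((x)"))    # False: 'x' is ignored, one '(' remains
```

As one comment notes, the stack size here plays exactly the role of the counter in the counter-based answer.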
Presentation on Q1 2009 Earning Report of Ebay Inc.
Remember the Razorfish Facebook Connect scenario making e-commerce truly social? Well, it's not a scenario anymore: digital agency Fluid launched their FluidSocial product, enabling Facebook Friend-based shopping and merchandising; and implemented it on Vans.com and Jansport.com.
FluidSocial enables conversations (comments and live chat) with your friends directly on a product page.
- For online retailers, these friend interactions allows for shoppers to never leave the site to get feedback from friends.
- For shoppers, it lets them see ratings, comments and real-time chat on specific products from friends whose opinions they value and trust more than from strangers.
Ju's data suggests the total slice of U.S. e-commerce through Amazon was about 34% in Q4, up from 27% a year earlier.
Could Sold Out Products Increase Email Click Through?
Chad White from the Retail Email Blog recently spotted this email from TigerDirect that dynamically updates image files when a product sells out. This practice prevents the frustration and disappointment when one clicks to a product that's no longer available, creates urgency for other products and may prompt the recipient to open TigerDirect emails right away in the future.
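The mechanism behind this works because the email's images are fetched when the message is opened, not when it is sent, so the server can swap the creative after the send. A minimal sketch of such an image-selection handler (the function name, paths, and inventory data are hypothetical, not TigerDirect's actual implementation):

```python
def product_image(product_id: str, stock: dict) -> str:
    """Pick the image the email's <img> endpoint should serve.

    Because the image is resolved at open time, a product that sells
    out after the send is shown with a 'sold out' creative instead,
    avoiding clicks to an unavailable product.
    """
    if stock.get(product_id, 0) > 0:
        return f"/img/products/{product_id}.png"
    return f"/img/products/{product_id}-sold-out.png"

inventory = {"laptop-123": 4, "gpu-456": 0}
print(product_image("laptop-123", inventory))  # /img/products/laptop-123.png
print(product_image("gpu-456", inventory))     # /img/products/gpu-456-sold-out.png
```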