\section{Introduction}
\label{s:intro}
The demand for automatic modeling of city-scale urban environments has recently attracted increasing interest \citep{remondino2017critical}. Specifically, high-quality 3D reconstruction of road networks, which form the skeleton of urban scenes, is useful in a variety of applications, including navigation maps, autonomous driving, and urban planning \citep{yang2017computing,chen2019higher,wenzel2019simultaneous}. Recently, massive airborne datasets have been collected for several cities around the world \citep{google2020earth,sugarbaker20143d,isenburg2020open}. Although airborne light detection and ranging (LiDAR) has been widely used in the last two decades \citep{kada20093d,haala2010update}, advances in large-scale structure-from-motion (SFM) \citep{schonberger2016structure} and multi-view stereo (MVS) \citep{vu2009towards,jancosek2011multi} pipelines have enabled the automatic generation of city-scale triangular surface models from aerial oblique images, which are also enriched with high-resolution textures.
Aerial oblique images are arguably the most widely used datasets for 3D modeling of urban environments \citep{google2020earth}; however, photogrammetric point clouds and meshes are geometrically less accurate and regular than LiDAR points at similar resolutions \citep{nex2014photogrammetric, hu2016texture}. In addition, photogrammetric meshes are also commonly contaminated by occluded objects and topological defects \citep{verdie2015lod, hu2016stable}, particularly in road areas. These issues require a practical approach to improve the quality of existing photo-realistic meshes from aerial oblique images. In particular, the major objective of this study is to remove vehicles from both the geometries and textures of mesh models so that clean results can be obtained for further applications. Despite the recent progress in the processing of textures of mesh models \citep{prada2018gradient}, MVS pipelines for photogrammetric meshes of road regions cause certain problems that should be addressed.
\textit{1) Occlusion and dynamic objects.} Although penta-view aerial oblique camera systems can capture ground objects from multiple viewpoints \citep{remondino2015oblique}, occlusions on the ground, particularly in road areas, are inevitable in urban environments with dense high-rise buildings. The geometries of photogrammetric meshes are generally noise-laden, and the textures are severely blurred (Figure \ref{fig:mesh_problems}a). In addition, existing MVS pipelines cannot handle dynamic objects, \textit{e.g., } vehicles. We argue that it is hardly possible to resolve these issues within an MVS pipeline for aerial oblique images because the aforementioned defects are inherently embedded in the images. Therefore, direct editing of the meshes may be the most practical way to improve their quality.
\textit{2) Discrete and discontinuous atlases for textures.} A photogrammetric surface mesh, as a 2D manifold embedded in 3D space, must be unwrapped to 2D planar space before textures can be mapped onto it. This step is also termed surface parametrization \citep{levy2002least}. Although the existence of a continuous conformal map for any 2D manifold has been proved, in practice, photogrammetric meshes must be separated into different segments (\textit{e.g., } charts) and packed into one or multiple texture images (\textit{e.g., } atlases) (Figure \ref{fig:mesh_problems}b). Segmentation is generally used to alleviate distortion in the surface parametrization of large patches and to fit the charts into images of limited size \citep{waechter2014let}. Such texture images are discrete and discontinuous and are therefore quite difficult to process directly.
\begin{figure}[htb]
\centering
\subcaptionbox{Defects of meshes.}[0.49\linewidth]{\includegraphics[width=\linewidth]{mesh_defects.pdf}}
\subcaptionbox{Discontinuous UV atlases.}[0.49\linewidth]{\includegraphics[width=\linewidth]{discontinuous_texture.pdf}}
\caption{Common problems of photogrammetric meshes.}
\label{fig:mesh_problems}
\end{figure}
To resolve these issues, we propose a structure-aware completion method for the textures of photogrammetric meshes to improve the mesh quality of urban roads.
The intuitive principle is to detect vehicles, directly flatten the geometries, and replace the textures according to repeated patterns of urban roads.
Specifically, we first transform the discrete texture atlas to a continuous screen space through the graphics pipeline.
Subsequently, vehicles are detected and masked as voids using publicly available deep learning approaches \cite{ren2015faster}.
Then, the void areas are automatically completed using the proposed structure-aware image completion method.
Finally, the completed textures are remapped to the original texture atlases, and the triangles corresponding to the voids are flattened.
In summary, the major advantages of the proposed mesh correction approaches are the following: 1) an efficient strategy to handle discontinuous texture atlases by directly rendering photogrammetric meshes, and 2) structure-aware image completion methods to remove vehicles on the roads of urban environments.
The rest of this paper is organized as follows: Section \ref{s:related_work} provides a brief review of related work. Section \ref{s:method_overview} introduces the mesh completion workflow. Sections \ref{s:texture_editing}, \ref{s:detection}, and \ref{s:image_completion} elaborate the details of the proposed method. Experimental evaluations are presented in Section \ref{s:results}. Finally, the last section concludes the paper.
\section{Related work}
\label{s:related_work}
The most relevant literature includes 1) texture mapping and processing \citep{waechter2014let,prada2018gradient}, 2) object detection \citep{ren2015faster,redmon2016you}, and 3) image completion \citep{he2012statistics}.
\paragraph{1) Texture mapping and processing}
Currently, massive collections of city-scale aerial oblique images \citep{remondino2015oblique} have been obtained.
With the advances in bundle adjustment \citep{hu2015reliable,verykokou2018oblique} and dense image matching \citep{hirschmuller2008stereo,hu2016texture} for aerial oblique images, high-density photogrammetric point clouds can be obtained, from which surface meshes can be constructed \citep{kazhdan2013screened,jancosek2011multi}.
These meshes are then enriched with high-resolution textures by mapping the color information from the original images \citep{zhou2014color,bi2017patch,waechter2014let}.
In theory, any manifold surface immersed in 3D space can be continuously unwrapped in 2D space \citep{crane2013digital}; this process is termed surface parametrization \citep{levy2002least}.
However, a continuous parametrization inevitably leads to significantly distorted shapes in 2D space \citep{sorkine2002bounded}, particularly for large and irregular meshes.
Therefore, the texture mapping of a photogrammetric mesh is intentionally segregated into different segments (\textit{charts}), and different parts are then packed into a single or several texture images (\textit{atlases}) \citep{levy2002least,liu2019atlas,limper2018box}.
The boundaries for different charts are commonly known as seam lines.
Several similar strategies are available to generate seam lines or, equivalently, charts, including variational shape approximation \citep{cohen2004variational}, local geometries \citep{allene2008seamless,zhang2020robust}, and multi-view geometric and photometric consistencies \citep{waechter2014let,bi2017patch}.
The processing of discontinuous texture atlases is quite difficult \citep{yuksel2019rethinking}.
To alleviate color differences in the continuous image space (\textit{i.e., } image mosaicking), we could either estimate the color transfer function based on the overlapping region \citep{yu2017auto,hu2019color}, or reformulate the problem as a gradient-domain blending problem \citep{agarwala2007efficient,kazhdan2010distributed}.
However, in discontinuous texture atlases, it is difficult to detect or even define the overlaps and gradient breaks on the seam lines.
Thus, \cite{waechter2014let} only locally blends the color in a region with a fixed width along seam lines.
To resolve this discontinuity, \cite{prada2018gradient} and \cite{liu2017seamless} formulate the problem in a meticulously selected continuous function space based on the finite element approach.
However, these approaches cannot handle varying mesh topologies and large texture sizes (e.g., $8192\times8192$) \citep{waechter2014let}, and therefore, they are probably not suitable for editing photogrammetric meshes.
Instead, we propose a practical approach to edit mesh textures by efficiently generating a continuous mapping through orthogonal rendering of the meshes.
\paragraph{2) Object detection}
Traditional object detection methods generally use low-level features, such as Haar-like features \citep{papageorgiou1998general}, local binary patterns \citep{ojala2002multiresolution}, and histograms of oriented gradients \citep{dalal2005histograms}.
Some notable approaches include the V-J detector \citep{viola2004robust} and deformable part models \citep{felzenszwalb2009object}.
However, owing to the semantic gap between low-level vision features and high-level semantic information, only low performance has been achieved for a considerable time \citep{girshick2014rich}.
With the re-invention of deep neural networks, particularly deep convolutional neural networks, learned features \citep{simonyan2014very} pre-trained on large datasets have demonstrated superior performance compared with previously known shallow methods.
A typical approach, known as the regional convolutional neural network (RCNN) \citep{girshick2014rich}, first performs unsupervised detection of the bounding boxes of salient objects \citep{uijlings2013selective}, then aggregates the features in each bounding box through regional pooling, and finally classifies each bounding box into specified categories.
This strategy is further improved by reusing the feature maps \citep{girshick2015fast} and learning the generation of bounding boxes \citep{ren2015faster}; this strategy is known as Faster RCNN.
These approaches consist of two separate stages: detection and classification of bounding boxes.
To improve efficiency, a more concise one-stage approach is proposed \citep{redmon2016you}, namely ``you only look once.''
Its principle is to tessellate an image into regular grids, and regress each grid to the corresponding locations and most likely classes.
Similar approaches have been proposed to further improve effectiveness \citep{liu2016ssd, redmon2017yolo9000, redmon2018yolov3}.
Owing to the superior performance of deep learning in object detection, we also use a standard approach \citep{ren2015faster} to detect vehicles on roads.
\paragraph{3) Image completion}
With regard to filling void image regions, extensive research has been conducted on image inpainting and completion. The corresponding methods are based either on partial differential equations (PDEs) or on sampling.
\cite{bertalmio2000image} repaired images by diffusing the edge information of the region of interest (ROI) to unknown regions.
A similar approach was proposed using total variation \citep{shen2002mathematical}, and was improved through an anti-aliasing strategy \citep{aubert2006mathematical}.
However, PDE-based methods cannot fill large voids, as they are only driven by local information.
Regarding sample-based approaches, \cite{criminisi2004region} selected the best matching patch according to the isophote. This method was improved by considering the weighted averaging of multiple patches \citep{wong2008nonlocal}.
A milestone on sample-based image completion is patch matching \citep{barnes2009patchmatch}, which significantly accelerates the search for the most similar patches through random guess and expansion.
Several state-of-the-art methods are based on patch matching: \cite{he2012statistics} extracted translational regularities using the statistics of patch offsets, and \cite{huang2014image} further considered affine deformations.
Although sample-based methods can fill large missing regions, they struggle with structured linear patterns \citep{iizuka2017globally}, which are probably the most common patterns on urban roads.
Recently, image completion based on deep learning \citep{graves2013generating, yeh2017semantic, iizuka2017globally} has made progress by using generative models \citep{iizuka2017globally, yu2018generative, nazeri2019edgeconnect}. However, deep learning approaches rely on massive training data, which are difficult to obtain; otherwise, it is difficult to synthesize high-resolution images \citep{wu2017survey}. In this paper, we propose a structure-aware image completion that directly uses linear features to guide the search for similar patches.
\section{Structure-aware completion of photogrammetric meshes for urban roads}
\label{approach}
\subsection{Overview and problem setup}
\label{s:method_overview}
\subsubsection{Overview of the approach}
To overcome the problem caused by discrete and discontinuous textures, we establish a one-to-one mapping between the atlas and the orthogonally rendered image, whereby vehicle regions are extracted and filled.
In addition, we apply a linear feature, which is the most common feature in urban roads, to guide and constrain image completion and thus improve the output linear structures.
The overall workflow of the proposed method is shown in Figure \ref{fig:flow_of_method}; it consists of five parts.
Beginning with the textured meshes and an input ROI, we first render the geometry primitives (e.g., triangles) in the ROI to two buffers: an ID buffer that records the primitive IDs, and a color buffer that records the color information. Subsequently, we establish a mapping between a texel (a pixel in the texture atlas) and a pixel of color buffer using the corresponding primitive IDs.
After texture integration, we apply Faster R-CNN to the integrated image to detect vehicles and generate a mask according to the bounding boxes of the detected objects. Subsequently, we complete the image with a mask using the proposed algorithm, which involves two sequential steps: detecting the road direction from translational regularities by RANSAC, and extracting edges from the road image. The proposed algorithm uses this information to guide and constrain the completion process. Finally, the completion result is deintegrated through the rendering pipeline to update the texture atlas, so that automatic correction of the textured model is achieved.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{workflow.pdf}
\caption{Workflow of photogrammetric mesh correction in road areas. First, the discontinuous texture of the selected ROI is integrated by rendering, vehicles are detected, and a mask image is generated using Faster R-CNN. Then, regularities are extracted and used to guide and constrain the image completion process. Subsequently, the corrected integrated image is deintegrated by rendering to obtain the corrected texture atlas. Finally, the texture atlas is replaced with a corrected texture atlas to achieve texture correction of the mesh.}
\label{fig:flow_of_method}
\end{figure}
\subsubsection{Problem setup}
More formally, the inputs consist of a 2-manifold geometry mesh $ \mathcal{M}(V,F) $ and a 2D UV mesh $ \mathcal{M}'(V',F') $ \citep{prada2018gradient}, as shown in Figures \ref{fig:mesh}a and b.
$ V\in\mathbb{R}^{3 \times N} $ and $V'\in\mathbb{R}^{2 \times N'}$ are the vertices of the meshes.
In general, the numbers of vertices are not equal, that is, $N \ne N'$, because of the seam lines in the mesh $\mathcal{M}$.
$F\in\mathbb{Z}^{3\times M}$ and $F'\in\mathbb{Z}^{3\times M'}$ are the facets; each column of $F$ and $F'$ records three indices into vertices $V$ and $V'$, respectively.
Although the indices may be different, the order and number of the facets recorded in $F$ and $F'$ are the same, that is, $M=M'$.
There is also a texture image $\mathcal{I}$ associated with the UV mesh $\mathcal{M}'$. To avoid confusion, we term the coordinates on the texture image \textit{texel} $\boldsymbol{t}(u,v)$ (Figure \ref{fig:mesh}c) \citep{prada2018gradient} rather than \textit{pixel} $\boldsymbol{p}(x,y)$ for normal images.
The purpose of this study is to flatten the geometries by modifying the vertices $V$ of the mesh $\mathcal{M}$, and to correct the texture image $\mathcal{I}$ so that cleaner street scenes can be obtained in an urban environment.
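The setup above can be sketched in code; the following is a minimal illustration assuming numpy arrays, with the hypothetical container name \textit{TexturedMesh} (not part of the paper's implementation):

```python
import numpy as np

# Hypothetical container mirroring the paper's notation: V (3 x N) geometry
# vertices, V' (2 x N') UV vertices, F and F' (3 x M) facet index columns.
class TexturedMesh:
    def __init__(self, V, F, V_uv, F_uv):
        assert F.shape == F_uv.shape          # M = M': same facet count and order
        self.V, self.F = V, F                 # geometry mesh M(V, F)
        self.V_uv, self.F_uv = V_uv, F_uv     # UV mesh M'(V', F')

# One triangle whose UV copy has an extra (seam-duplicated) vertex, so N != N'.
V = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])              # 3 x 3: N = 3 geometry vertices
V_uv = np.array([[0.1, 0.9, 0.1, 0.5],
                 [0.1, 0.1, 0.9, 0.5]])      # 2 x 4: N' = 4 UV vertices
F = np.array([[0], [1], [2]])                # 3 x 1 facet indices into V
F_uv = np.array([[0], [1], [3]])             # same facet, indices into V'
mesh = TexturedMesh(V, F, V_uv, F_uv)
```

The sketch only encodes the invariant $M=M'$ while allowing $N \ne N'$; the texture image $\mathcal{I}$ is stored separately alongside the UV mesh.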
\begin{figure}[htb]
\centering
\subcaptionbox{2-Manifold geometry mesh $\mathcal{M}$}[0.32\linewidth]{\includegraphics[width=\linewidth]{images/mesh/mesh.pdf}}
\subcaptionbox{2D UV mesh $\mathcal{M}'$}[0.32\linewidth]{\includegraphics[width=\linewidth]{images/mesh/uvmesh.pdf}}
\subcaptionbox{Texture image $\mathcal{I}$}[0.32\linewidth]{\includegraphics[width=\linewidth]{images/mesh/texel.pdf}}
\caption{A textured mesh in this paper consists of three parts: a geometry mesh $\mathcal{M}$, a UV mesh $\mathcal{M}'$, and a texture image $\mathcal{I}$.}
\label{fig:mesh}
\end{figure}
\subsection{Integration and deintegration of texture image}
\label{s:texture_editing}
As discontinuous texture images are difficult to process, we first integrate the texels in a specified ROI into a continuous image, by directly rendering the textured mesh models \citep{zhu2020leveraging}.
After performing the correction steps in the continuous image space, we deintegrate the modified pixels to the corresponding texels using the methods described below.
Figure \ref{fig:tex_integration} shows the integration and deintegration processes of the texture image.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{images/texture_integration_deintegration.pdf}
\caption{Integration and deintegration of texture image. Beginning with a mesh $\mathcal{M}$, a UV mesh $\mathcal{M}'$, and the texture $\mathcal{I}$, we first render the mesh in the ROI to two rasters: the color raster $\mathcal{R}_c$ and the facet raster $\mathcal{R}_f$, which record the grayscale values and facet indices of the original meshes, respectively. After editing $\mathcal{R}_c$, we estimate a mapping $\boldsymbol{t}=T(\boldsymbol{p})$ using $\mathcal{R}_f$ and deintegrate the edited pixels to the texture image.}
\label{fig:tex_integration}
\end{figure}
\subsubsection{Texture integration}
\label{subs:texture_integration}
The graphics render pipeline can efficiently project the mesh models to the screen space and shade the fragments on the screen from the texture.
In addition to the textured mesh models, two matrices are required to define the projection from the model space to the screen viewport: the projection matrix $\mathbf{P}$ and the view matrix $\mathbf{V}$. As in \citep{zhu2020leveraging}, we use the \textit{ortho} and \textit{perspective} routines in GLM \citep{glm2019opengl} for the projection matrix $\mathbf{P}$ and the view matrix $\mathbf{V}$, respectively.
The parameters of the \textit{ortho} and \textit{perspective} routines can be intuitively determined through the bounding box of the mesh models in the selected ROI.
In addition, the geometries outside the ROI are discarded in the following processing.
The direct output of the render pipeline is a color raster $\mathcal{R}_c$, which samples the texture by bilinear interpolation of the texels.
However, if only $\mathcal{R}_c$ is available, the information required to map between $\mathcal{R}_c$ and the original texture image $\mathcal{I}$ is insufficient.
Therefore, we also allocate another raster $\mathcal{R}_f$ in the render pipeline.
Each pixel $\boldsymbol{p}(x,y)$ of $\mathcal{R}_f$ stores the facet index $f\in\mathbb{Z}$ of the mesh facets $F\in\mathcal{M}$ or, equivalently, the UV mesh facets $F'\in\mathcal{M}'$.
It should be noted that $\mathcal{R}_f$ and the color raster $\mathcal{R}_c$ are obtained simultaneously in a single rendering frame.
The \textit{gl\_PrimitiveID} variable in the fragment language of OpenGL is a built-in input that contains the index of the current primitive in the rendering pipeline, that is, $f=gl\_PrimitiveID$.
Similar inputs are also available in other graphics application programming interfaces.
Therefore, both $\mathcal{R}_c$ and $\mathcal{R}_f$, as shown in Figure \ref{fig:tex_integration}, can be obtained in real time.
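A CPU sketch of this two-buffer render pass may clarify the idea; it is a stand-in for the actual OpenGL pipeline, with \textit{render\_buffers}, the flat-shaded colors, and the point-in-triangle test being illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    m = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]], float)
    u, v = np.linalg.solve(m, np.asarray(p, float) - np.asarray(a, float))
    return np.array([1.0 - u - v, u, v])

def render_buffers(tris_xy, colors, size):
    """CPU stand-in for the render pass: rasterize screen-space triangles
    into a color buffer R_c and a facet-ID buffer R_f (-1 = background)."""
    h, w = size
    R_c = np.zeros((h, w), dtype=np.float32)
    R_f = np.full((h, w), -1, dtype=np.int32)
    for fid, (tri, col) in enumerate(zip(tris_xy, colors)):
        a, b, c = tri
        for y in range(h):
            for x in range(w):
                bc = barycentric((x + 0.5, y + 0.5), a, b, c)
                if np.all(bc >= 0.0):   # pixel center inside the facet
                    R_c[y, x] = col     # flat color stands in for texture sampling
                    R_f[y, x] = fid     # analogue of gl_PrimitiveID
    return R_c, R_f
```

In the actual pipeline both buffers are written in a single rendering frame; the sketch only illustrates why $\mathcal{R}_f$ suffices to recover the facet under every pixel.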
\subsubsection{Texture deintegration}
\label{subs:model_update}
To map the edited image $\mathcal{R}_c$ back to the texture image $\mathcal{I}$, a one-to-one mapping should be established between the two images, that is, $\boldsymbol{t}=T(\boldsymbol{p})$, as shown in Figure \ref{fig:tex_integration}. Although the transformation from mesh $\mathcal{M}$ to UV mesh $\mathcal{M}'$ is generally assumed to be conformal \citep{levy2002least}, we relax this assumption to an affine transformation because of numerical rounding or other factors.
Therefore, we can use the trilinear coordinates $\boldsymbol{c}(a,b,c)$ \citep{wolfram2020trilinear} and the interpolation $\eta(\cdot)$ to transfer positions inside the same facet $f$ among the mesh $\mathcal{M}$, the UV mesh $\mathcal{M}'$, and the rendered image $\mathcal{R}_c$.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{images/trilinear.pdf}
\caption{Mapping between rendered image $\mathcal{R}_c$ and texture image $\mathcal{I}$ using trilinear coordinates.}
\label{fig:trilinear}
\end{figure}
Figure \ref{fig:trilinear} shows the process to establish the transformation $\boldsymbol{t}=T(\boldsymbol{p})$.
For each modified pixel $\boldsymbol{p}\in\mathcal{R}_c$, the corresponding facet index is directly loaded from $\mathcal{R}_f$ as $f=\mathcal{R}_f(\boldsymbol{p})$.
The trilinear coordinates of pixel $\boldsymbol{p}$ inside the corresponding triangle $f$ can be computed in closed form as $\boldsymbol{c}=\eta_f(\boldsymbol{p})$ \citep{wolfram2020trilinear}.
Then, in the same facet of the UV mesh $\mathcal{M}'$, the inverse mapping directly yields the position of the texel as $\boldsymbol{t}=\eta_f^{-1}(\boldsymbol{c})$.
In fact, inside each facet, the transformation $\boldsymbol{t}=T(\boldsymbol{p})$ is equivalent to an affine transformation.
In the implementation, the mapping $T(\cdot)$ is pre-computed for each triangle and stored pixel-wise.
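The per-facet mapping $\boldsymbol{t}=T(\boldsymbol{p})$ can be sketched as follows; here barycentric weights serve as an affine-equivalent stand-in for the trilinear coordinates above, and the function names are hypothetical:

```python
import numpy as np

def bary(p, tri):
    """Role of eta_f: coordinates of 2D point p inside triangle tri (3 x 2)."""
    a, b, c = (np.asarray(v, float) for v in tri)
    m = np.column_stack([b - a, c - a])
    u, v = np.linalg.solve(m, np.asarray(p, float) - a)
    return np.array([1.0 - u - v, u, v])

def pixel_to_texel(p, fid, screen_tris, uv_tris):
    """T: map an edited pixel p of R_c to its texel t in the atlas.
    The weighted sum over the UV triangle plays the role of eta_f^{-1};
    per facet, this composition is exactly an affine transformation."""
    w = bary(p, screen_tris[fid])        # coordinates inside the screen facet
    A = np.asarray(uv_tris[fid], float)  # 3 x 2 UV vertices of the same facet
    return w @ A
```

For example, with the screen facet $\{(0,0),(10,0),(0,10)\}$ mapped to the UV facet $\{(0,0),(1,0),(0,1)\}$, the pixel $(5,0)$ transfers to the texel $(0.5,0)$.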
Another practical issue for the integration and deintegration processes is related to the data structure of the mesh models.
The mesh models are tiled into small fragments and organized in a tree structure for different levels of detail (LODs).
Each small tile is associated with a texture.
In the implementation, the UV mesh $\mathcal{M}'$ also contains an array to store the atlas index.
Moreover, multiple transformations $T(\cdot)$ are used to account for multiple texture atlases.
In fact, even without tiling, a single mesh can also contain multiple texture images; the same strategy is used to handle the above issue.
\subsection{Vehicle detection and mask generation using Faster R-CNN}
\label{s:detection}
To remove the vehicles from the photogrammetric mesh models, we use Faster R-CNN \citep{ren2015faster} for vehicle detection in the rendered image $\mathcal{R}_c$, as shown in Figure \ref{fig:faster_rcnn}.
As Faster R-CNN \citep{ren2015faster} is a well-established and industrially proven method, we only briefly introduce the training process in the following.
\begin{figure}[h]
\centering
\subcaptionbox{Rendered image $\mathcal{R}_c$}[0.32\linewidth]{\includegraphics[width=\linewidth]{images/rcnn/image.pdf}}
\subcaptionbox{Detected objects}[0.32\linewidth]{\includegraphics[width=\linewidth]{images/rcnn/detection.pdf}}
\subcaptionbox{Dilated mask $\mathcal{R}_m$}[0.32\linewidth]{\includegraphics[width=\linewidth]{images/rcnn/mask.pdf}}
\caption{Vehicle detection and mask generation. Faster R-CNN is used to detect objects in (a) the rendered color image, which are represented as (b) axis-aligned bounding boxes; the regions are dilated to account for shadows and other defects, and (c) are masked for further completion.}
\label{fig:faster_rcnn}
\end{figure}
We use a pre-trained VGG16 model \citep{simonyan2014very} as the backbone for feature extraction owing to its simplicity.
Although existing datasets \citep{everingham2010pascal,lin2014microsoft} already contain training samples for vehicles, they are generally in perspective view rather than orthogonal view from the top.
Therefore, we interactively label a small dataset for object detection using LabelImg \citep{tzutalin2015labelimg}.
The datasets are exported in Pascal VOC format \citep{everingham2010pascal}.
We use photogrammetric mesh models covering both the campus of Southwest Jiaotong University (SWJTU) and part of Shenzhen to generate the training samples.
The images were obtained using an unmanned aerial vehicle (UAV) for SWJTU and a manned aircraft for Shenzhen.
Typical road areas are selected and rendered from the tiled mesh models.
In summary, approximately 250 patches from these two datasets are collected and interactively labeled for training.
During training, data augmentation by rotation and mirroring is used for improved generalizability.
After the detection of the bounding boxes, each object is enlarged by approximately 10\% to account for shadows and other possible defects in the textured models. The output is a masked raster $\mathcal{R}_m$, as shown in Figure \ref{fig:faster_rcnn}c.
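The mask-generation step can be sketched as follows; this is a minimal illustration assuming axis-aligned boxes in pixel coordinates, with \textit{boxes\_to\_mask} a hypothetical name and the enlargement split evenly per side:

```python
import numpy as np

def boxes_to_mask(boxes, shape, enlarge=0.10):
    """Build the masked raster R_m: each detected box (x0, y0, x1, y1) is
    enlarged by ~10% (split evenly per side) to absorb shadows and other
    defects, then burned into a binary mask (1 = void region to complete)."""
    h, w = shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        dx = (x1 - x0) * enlarge / 2.0
        dy = (y1 - y0) * enlarge / 2.0
        xs = max(0, int(np.floor(x0 - dx)))
        ys = max(0, int(np.floor(y0 - dy)))
        xe = min(w, int(np.ceil(x1 + dx)))
        ye = min(h, int(np.ceil(y1 + dy)))
        mask[ys:ye, xs:xe] = 1
    return mask
```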
\subsection{Extraction of linear regularities}
\label{subs:regularity_extraction}
Structured scenes, such as regularly arranged objects, are probably the most challenging cases for image completion \citep{he2012statistics}.
Unfortunately, scenes of urban roads featuring repeated line markers are highly structured.
It is desirable to explicitly consider regularities in the image completion process for improved performance in structured environments \citep{liu2010computational}.
Therefore, in this study, we first extract two types of linear regularities before the completion of the void regions in the rendered images, that is, translational regularities and linear features.
More specifically, this study adopts a PatchMatch-based approach \citep{barnes2009patchmatch}, building on \cite{huang2014image}, to complete the void regions left by vehicles.
PatchMatch is an effective approach to generate the nearest neighbor field (NNF) $\mathcal{N}$.
Each pixel of the NNF $\boldsymbol{v}=\mathcal{N}(\boldsymbol{p})$ denotes the offset $\boldsymbol{v}$ to the correspondence in the same image.
The completion is performed by filling the void region with the most self-similar patch.
The regularities are injected in this step to guide the generation of the NNF, which not only accelerates the convergence speed but also makes the NNF structure-aware.
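The NNF and the patch similarity it optimizes can be sketched as follows (hypothetical helper names; the fixed seed is only for reproducibility):

```python
import numpy as np

def init_nnf(shape, max_offset, seed=0):
    """Randomly initialized NNF N: each pixel stores an integer offset
    v = N(p) pointing to a candidate source patch in the same image."""
    rng = np.random.default_rng(seed)
    return rng.integers(-max_offset, max_offset + 1, size=(*shape, 2))

def patch_ssd(img, p, q, half=2):
    """Sum of squared differences between the (2*half+1)^2 patches centred
    at p and q; the quantity that is minimized when refining N."""
    py, px = p
    qy, qx = q
    a = img[py - half:py + half + 1, px - half:px + half + 1]
    b = img[qy - half:qy + half + 1, qx - half:qx + half + 1]
    return float(np.sum((a - b) ** 2))
```

Completion then copies, for each void pixel, the content at the offset stored in $\mathcal{N}$ that minimizes this distance.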
\paragraph{Translational Regularity}
Inspired by previous work \citep{he2012statistics,huang2014image}, we also detect translational regularities using matched image features.
However, we found that for urban roads, the offsets between matched features exhibit clear linear regularities.
Specifically, we first extract SIFT \citep{lowe2004distinctive} features in the rendered image and then obtain matches using the standard ratio check.
Unlike in the case of feature matching between two images, no further outlier filtering using random sample consensus (RANSAC) \citep{fischler1981random} is used.
Three typical results for the feature matches are shown in Figure \ref{fig:offsets}.
It should be noted that the offsets of the feature matches exhibit a clear pattern of pointwise and linear clusters.
The pointwise clustering center indicates that similar patches are generally located at fixed intervals, such as equal distances between the road markings.
On the other hand, the linear pattern indicates that self-similar patches are quite likely to be retrieved by searching along the corresponding direction.
In addition, the orthogonal direction commonly coexists.
Although pointwise clusters are also highly common, in this study, we only enforce linear regularities for two reasons: (1) The point centers are generally aligned along the same line, and (2) as there are an excessive number of point centers, considering all of them significantly affects the efficiency of patch-matching completion \citep{barnes2009patchmatch}.
In summary, the orientations $\theta$ detected by RANSAC \citep{fischler1981random} and the orthogonal directions are then used in structure-aware image completion, that is, a set of $n$ angles $\Theta=\{\theta_1,...,\theta_n\}$.
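A sketch of this RANSAC-style orientation detection is given below; the inlier tolerance and iteration count are illustrative choices, not the paper's parameters:

```python
import numpy as np

def dominant_direction(offsets, iters=200, tol=2.0, seed=0):
    """RANSAC-style search for the orientation theta of the linear offset
    cluster: hypothesize a direction from one sampled offset, then count
    offsets lying near the line through the origin with that direction."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(offsets, float)
    best_theta, best_inliers = 0.0, -1
    for _ in range(iters):
        dx, dy = pts[rng.integers(len(pts))]
        theta = float(np.arctan2(dy, dx))
        n = np.array([-np.sin(theta), np.cos(theta)])  # unit normal of the line
        inliers = int(np.sum(np.abs(pts @ n) < tol))
        if inliers > best_inliers:
            best_theta, best_inliers = theta, inliers
    return best_theta

def guidance_angles(theta):
    """The orthogonal direction commonly coexists, so both are returned."""
    return [theta, theta + np.pi / 2.0]
```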
\begin{figure}[H]
\centering
\subcaptionbox{}[0.32\linewidth]{
\includegraphics[width=\linewidth]{images/offset/1_match.pdf}\vspace{1em}
\includegraphics[width=\linewidth]{images/offset/1.pdf}
}
\subcaptionbox{}[0.32\linewidth]{
\includegraphics[width=\linewidth]{images/offset/2_match.pdf}\vspace{1em}
\includegraphics[width=\linewidth]{images/offset/2.pdf}
}
\subcaptionbox{}[0.32\linewidth]{
\includegraphics[width=\linewidth]{images/offset/3_match.pdf}\vspace{1em}
\includegraphics[width=\linewidth]{images/offset/3.pdf}
}
\caption{Translational regularities of the matches for three typical scenarios. The top row shows the feature matches of the images to be completed, where the blue lines indicate the offsets of two matched points (red and yellow rectangles). The bottom row shows scatter plots of the offsets for the corresponding match results, which exhibit obvious linear patterns.}
\label{fig:offsets}
\end{figure}
\paragraph{Edge Feature}
Translational regularities are a good indicator of mid-level knowledge for the scenes, and they are used in guided completion.
In addition, we directly consider low-level vision features, such as image edges, which also form the skeleton of road scenes.
Although there are several choices for detecting edge features, such as the line segment detector (LSD) \citep{von2012lsd,zhu2020interactive}, we found that LSD and other approaches designed for contour extraction are more sensitive to scene noise (as shown in Figure \ref{fig:edge_detection}).
Unfortunately, the rendered image from the textured meshes is inevitably noise-laden owing to defects in the MVS pipeline.
Therefore, we directly adopt an efficient gradient filter to extract binary edges $\mathcal{R}_l$ from the rendered image $\mathcal{R}_c$, namely, the Prewitt filter.
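A minimal Prewitt edge extractor consistent with this description is sketched below; the threshold is an illustrative parameter, and a pure-Python loop is used for clarity rather than speed:

```python
import numpy as np

def prewitt_edges(img, thresh):
    """Binary edge map R_l from the rendered image R_c using the Prewitt
    gradient filter (3x3 derivative kernels with built-in smoothing)."""
    kx = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], float)   # horizontal derivative
    ky = kx.T                            # vertical derivative
    h, w = img.shape
    gx = np.zeros((h, w), float)
    gy = np.zeros((h, w), float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(win * kx)
            gy[y, x] = np.sum(win * ky)
    mag = np.hypot(gx, gy)               # gradient magnitude
    return (mag > thresh).astype(np.uint8)
```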
\begin{figure}[H]
\centering
\subcaptionbox{LSD}[0.3\linewidth]{
\includegraphics[width=\linewidth]{images/LSD.pdf}
}
\subcaptionbox{Prewitt}[0.3\linewidth]{
\includegraphics[width=\linewidth]{images/Prewitt.pdf}
}
\subcaptionbox{Canny}[0.3\linewidth]{
\includegraphics[width=\linewidth]{images/Canny.pdf}
}
\caption{Detected edges by different methods. (a) Lines extracted by LSD. (b) and (c) show the detected edges using Prewitt and Canny operators, respectively.}
\label{fig:edge_detection}
\end{figure}
\subsection{Structure-aware completion of urban roads}
\label{s:image_completion}
The objective of the completion of urban roads is to recover void regions in $\mathcal{R}_m$ (Figure \ref{fig:faster_rcnn}c) using the orientations $\Theta$ of translational regularities (Figure \ref{fig:offsets}) and edge maps $\mathcal{R}_l$.
A pyramid scheme is established based on the PatchMatch strategy \citep{barnes2009patchmatch} to search for the best NNF $\mathcal{N}$ progressively (Figure \ref{fig:image_completion}).
Namely, beginning with the coarsest level, a random $\mathcal{N}$ is initialized, and the edge map $\mathcal{R}_l$ is generated from the coarsest image, together with the mask $\mathcal{R}_m$.
We first establish a priority queue $Q$, which prefers pixels with higher edge scores. The order of the PatchMatch-based expansion is determined by the priority queue $Q$ rather than the original scanline-based strategy \citep{barnes2009patchmatch}.
For each pixel $\boldsymbol{p}$ in $Q$, the NNF $\mathcal{N}(\boldsymbol{p})$ is refined using an improved updating strategy guided by the linear regularities $\Theta$.
An example of the progressively refined NNF $\mathcal{N}$ and completed image $\mathcal{R}_c$ is shown in Figure \ref{fig:NNF}.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{guided_completion.pdf}
\caption{Pyramid scheme for the structure-aware completion of urban roads. A priority queue is generated from the edge maps to prioritize the structured region. In addition, a guided random expansion is considered with translational regularities to retrieve more structured regions. The NNF $\mathcal{N}$ determines the offset between the most self-similar source $S$ and the target $T$ patches.}
\label{fig:image_completion}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{updated_nnf.pdf}
\caption{Updated NNF $\mathcal{N}$ and completed image $\mathcal{R}_c$ using a pyramid scheme. The white masks in the first row indicate the masked regions.}
\label{fig:NNF}
\end{figure}
\subsubsection{Generation of priority queue with edge maps}
\label{subs:priority}
Beginning with a randomly initialized NNF $\mathcal{N}$, the vanilla PatchMatch refines $\mathcal{N}$ in the canonical scanline direction in odd iterations, that is, top to down and left to right, and in the reverse direction in even iterations \citep{barnes2009patchmatch}.
However, this scanline order may introduce artifacts in structured regions: if a patch is expanded from a textureless region into a structured region, the regularity embedded in the structures may not be preserved.
We therefore propose an effective remedy.
We argue that structured regions generally have higher edge responses, that is, a larger number of edge pixels in a local patch.
Specifically, the sum of the center-aligned square patch on the binary edge map $\mathcal{R}_l$ is used to represent the responses of the edge.
Then, the pixels in the void region are sorted in descending order according to the edge responses in the priority queue $Q = \{\boldsymbol{p}_1, \boldsymbol{p}_2, ..., \boldsymbol{p}_n\}$.
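For illustration, the edge responses and the resulting ordering could be computed as below (a sketch; the summed-area table is merely one efficient way to obtain the box sums):

```python
import numpy as np

def build_priority_queue(edge_map, mask, W=21):
    """Order the void pixels by descending edge response.

    edge_map : binary edge map R_l
    mask     : boolean array, True inside the region to be completed
    Returns a list of (row, col) pixels, highest edge response first.
    """
    r = W // 2
    H, Wd = edge_map.shape
    pad = np.pad(edge_map.astype(int), r, mode="constant")
    # summed-area table: box sum of the W x W patch in O(1) per pixel
    sat = pad.cumsum(0).cumsum(1)
    sat = np.pad(sat, ((1, 0), (1, 0)), mode="constant")
    response = (sat[W:, W:] - sat[:-W, W:]
                - sat[W:, :-W] + sat[:-W, :-W])
    ys, xs = np.nonzero(mask)
    order = np.argsort(-response[ys, xs], kind="stable")
    return [(int(ys[k]), int(xs[k])) for k in order]
```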
\subsubsection{Structure-aware similarity measure of patches}
To refine the NNF $\mathcal{N}$, \cite{barnes2009patchmatch} compares the similarity of the current pixel $\boldsymbol{p}$ with that of its 4-neighborhood with regard to the offset $\boldsymbol{v}=\mathcal{N}(\boldsymbol{p})$. Two square patches are generated as follows:
\begin{equation}
\begin{split}
T(\boldsymbol{p})&=\{\mathcal{R}_c(\boldsymbol{p}+\boldsymbol{s})|\boldsymbol{s}\in[-\tfrac{W}{2},\tfrac{W}{2}]\times[-\tfrac{W}{2},\tfrac{W}{2}] \} \\
S(\boldsymbol{p}, \boldsymbol{v})&=T(\boldsymbol{p}+\boldsymbol{v})
\end{split}
\end{equation}
where $T$ and $S$ are the target and source patches centered around $\boldsymbol{p}$ and $\boldsymbol{p}+\boldsymbol{v}$, respectively.
The target patch $T$ is inside the region to be completed, and the source patch $S$ is outside the region.
It should be noted that during patch expansion, the offset $\boldsymbol{v}$ may not be sampled from the corresponding pixel $\boldsymbol{p}$ of NNF $\mathcal{N}$; it can also be sampled from the 4-neighborhood $N_4(\boldsymbol{p})$ of $\boldsymbol{p}$ or even from a random location \citep{barnes2009patchmatch}.
$W$ is the patch size, and $W=21$ is generally used.
The similarity (or matching cost) can be intuitively computed from the grayscale values of the two patches \citep{barnes2009patchmatch}.
However, we also consider the regularities $\Theta$ in the computation of the similarity measure.
Specifically, the similarity measure consists of three terms: appearance $E_a$, proximity $E_p$, and regularity $E_r$. That is,
\begin{equation}
E=E_a+\lambda_1 E_p + \lambda_2 E_r
\end{equation}
where $\lambda_1=5\times10^{-4}$ and $\lambda_2=0.5$ are chosen empirically.
\paragraph{1) Appearance cost $E_a(\boldsymbol{p},\boldsymbol{v})$.}
We use the sum of the absolute differences between the target and source patches to measure the appearance difference as follows:
\begin{equation}
E_a(\boldsymbol{p},\boldsymbol{v})=\sum_{i} w_i\,| T_i(\boldsymbol{p}) - S_i(\boldsymbol{p},\boldsymbol{v})|
\end{equation}
where $T_i$ and $S_i$ denote the grayscale values of the $i$-th pixel of the patch,
and $w_i$ is an isotropic weight generated from a Gaussian kernel \citep{huang2014image}.
\paragraph{2) Proximity cost $E_p(\boldsymbol{p},\boldsymbol{v})$.}
Generally, nearby pixels are preferred over distant pixels \citep{kopf2012quality}. Therefore, we add an additional penalty $E_p$ to prevent selecting distant patches in the NNF $\mathcal{N}$, as in \citep{huang2014image}:
\begin{equation}
E_p(\boldsymbol{p,v})=\frac{||\boldsymbol{v}||^2}{\sigma_d(\boldsymbol{p})^2+\sigma_c^2}
\end{equation}
where $\sigma_d$ and $\sigma_c$ are normalizers. Specifically, $\sigma_d(\boldsymbol{p})$ is the distance to the nearest border of the invalid regions, and $\sigma_c=\max(w,h)/8$ accounts for the size of image $(w,h)$.
$E_p(\boldsymbol{p},\boldsymbol{v})$ thus favors small offsets in the NNF $\mathcal{N}$.
\paragraph{3) Regularity cost $E_r(\boldsymbol{v})$.}
In structured scenes, the offset direction $\theta_{\boldsymbol{v}}$ in the NNF should be consistent with the detected regularities $\Theta$. We use the minimum angle difference to measure the regularity cost.
\begin{equation}
E_r(\boldsymbol{v}) = \min_{\theta \in \Theta}\left(1-\cos(\theta_{\boldsymbol{v}} - \theta)\right)
\end{equation}
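The three terms can be combined as sketched below (the Gaussian weighting width and the $1-\cos$ form of the regularity term are our assumptions; the weights $\lambda_1$, $\lambda_2$ and the normalizer $\sigma_c$ follow the text). The caller is assumed to keep both patches inside the image:

```python
import numpy as np

def patch_cost(Rc, p, v, theta_set, sigma_d, W=21,
               lam1=5e-4, lam2=0.5):
    """Structure-aware matching cost E = E_a + lam1*E_p + lam2*E_r."""
    h, w = Rc.shape[:2]
    sigma_c = max(w, h) / 8.0
    r = W // 2
    py, px = p
    qy, qx = py + v[0], px + v[1]
    T = Rc[py - r:py + r + 1, px - r:px + r + 1].astype(float)
    S = Rc[qy - r:qy + r + 1, qx - r:qx + r + 1].astype(float)
    # appearance: Gaussian-weighted sum of absolute differences
    g = np.exp(-(np.arange(-r, r + 1) ** 2) / (2.0 * (W / 6.0) ** 2))
    wgt = np.outer(g, g)
    E_a = float(np.sum(wgt * np.abs(T - S)))
    # proximity: penalize distant source patches
    E_p = (v[0] ** 2 + v[1] ** 2) / (sigma_d ** 2 + sigma_c ** 2)
    # regularity: minimum angular deviation from the detected directions
    theta_v = np.arctan2(v[0], v[1])
    E_r = min(1.0 - np.cos(theta_v - t) for t in theta_set)
    return E_a + lam1 * E_p + lam2 * E_r
```

An offset aligned with a detected direction pays no regularity penalty, while an orthogonal offset pays the full $\lambda_2$ penalty.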
\subsubsection{Guided random expansion}
\label{subs:direction_constraint}
During the expansion of the NNF $\mathcal{N}$, PatchMatch \citep{barnes2009patchmatch} tests the similarities of both the 4-neighborhood $N_4(\boldsymbol{p})$ and a set of random pixels centered around $\boldsymbol{p}$, $R(\boldsymbol{p})=\{\boldsymbol{q}_1,\boldsymbol{q}_2, ..., \boldsymbol{q}_r\}$.
The random set helps the expansion escape local minima.
In this study, the directions determined by the translational regularities $\Theta$ are also used to guide the random expansion, as in the case of the regularity cost.
Specifically, the random pixels should be selected in a rectangular buffer along the direction $\theta \in \Theta$ and the orthogonal direction $\theta + \tfrac{\pi}{2}$, as indicated by the shaded blue region in Figure \ref{fig:image_completion}.
In addition, a series of $r$ random pixels with decreasing radii is selected; the radius for the $k$-th pixel $\boldsymbol{q}_k$ is determined by $\max(w,h) \times (\tfrac{1}{2})^k$.
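The buffered directional sampling could look like the following sketch (the buffer width and the rotation convention are illustrative assumptions):

```python
import math
import random

def guided_random_candidates(p, theta_set, w, h, buffer_width=8):
    """Sample candidate pixels along the regularity directions (or their
    orthogonals) with exponentially decreasing radii."""
    r_max = max(w, h)
    candidates = []
    k = 1
    while r_max * (0.5 ** k) >= 1.0:
        radius = r_max * (0.5 ** k)
        # pick a regularity direction theta or its orthogonal
        theta = random.choice(theta_set) + random.choice([0.0, math.pi / 2])
        along = random.uniform(-radius, radius)          # along the direction
        across = random.uniform(-buffer_width / 2, buffer_width / 2)
        qy = int(round(p[0] + along * math.sin(theta) + across * math.cos(theta)))
        qx = int(round(p[1] + along * math.cos(theta) - across * math.sin(theta)))
        if 0 <= qy < h and 0 <= qx < w:                  # keep in-image samples
            candidates.append((qy, qx))
        k += 1
    return candidates
```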
In summary, a single iteration of the refinement process for the proposed structure-aware PatchMatch is shown in Algorithm \ref{algo:patch-match}.
\begin{algorithm}[H]
\caption{Structure-aware PatchMatch.}
\label{algo:patch-match}
\begin{algorithmic}[2]
\Procedure{PatchMatch}{$\mathcal{N},\mathcal{R}_c,Q,R$}
\For{$\boldsymbol{p} \in Q$ }
\For{$\boldsymbol{q} \in N_4(\boldsymbol{p})$} \Comment{Neighborhood Expansion}
\If{$E(\boldsymbol{p}, \mathcal{N}(\boldsymbol{q})) < E(\boldsymbol{p}, \mathcal{N}(\boldsymbol{p}))$}
\State $\mathcal{N}(\boldsymbol{p}) \gets \mathcal{N}(\boldsymbol{q})$
\EndIf
\EndFor
\For{$\boldsymbol{q} \in R(\boldsymbol{p})$} \Comment{Guided Random Expansion}
\If{$E(\boldsymbol{p}, \mathcal{N}(\boldsymbol{q})) < E(\boldsymbol{p}, \mathcal{N}(\boldsymbol{p}))$}
\State $\mathcal{N}(\boldsymbol{p}) \gets \mathcal{N}(\boldsymbol{q})$
\EndIf
\EndFor
\EndFor
\State \Return $\mathcal{N}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
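One refinement pass of the algorithm above maps to Python roughly as follows (a sketch with the cost $E$ and the neighbor generators supplied as callbacks; not the authors' implementation):

```python
def patchmatch_iteration(N, Q, cost, neighbors4, random_set):
    """One refinement pass of structure-aware PatchMatch.

    N          : dict mapping pixel -> current offset (the NNF)
    Q          : priority-ordered list of pixels to refine
    cost       : cost(p, v) -> matching cost of offset v at pixel p
    neighbors4 : neighbors4(p) -> 4-neighborhood of p
    random_set : random_set(p) -> guided random candidate pixels
    """
    for p in Q:
        # neighborhood expansion: try offsets of the 4-neighbors
        for q in neighbors4(p):
            if q in N and cost(p, N[q]) < cost(p, N[p]):
                N[p] = N[q]
        # guided random expansion: try offsets of the random candidates
        for q in random_set(p):
            if q in N and cost(p, N[q]) < cost(p, N[p]):
                N[p] = N[q]
    return N
```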
\section{Experimental evaluation and analysis}
\label{s:results}
\subsection{Dataset description}
\label{subs:data}
To evaluate the proposed methods, we use three datasets, which consist of highways, parking lots, and roads with and without marker lines (Figure \ref{fig:dataset}).
The first was collected from the campus of Southwest Jiaotong University (SWJTU) in Chengdu, China.
Two typical regions are considered: a road with no-parking signs and a cross intersection.
The second was collected from a block in Shenzhen, China.
We select an alley with numerous parked vehicles and a residential area for the experiments.
The third, covering Dortmund, Germany, was provided courtesy of ISPRS \citep{nex2015isprs}.
The road areas on a bridge and a parking lot on a building roof were selected for the experiments.
\begin{figure}[h]
\centering
\subcaptionbox{SWJTU}[\linewidth]{
\includegraphics[width=0.48\linewidth]{images/dataset/swjtu.png}
\includegraphics[width=0.25\linewidth]{images/dataset/swjtu1.png}
\includegraphics[width=0.25\linewidth]{images/dataset/swjtu2.png}
}
\subcaptionbox{Shenzhen}[\linewidth]{
\includegraphics[width=0.48\linewidth]{images/dataset/shenzhen.png}
\includegraphics[width=0.25\linewidth]{images/dataset/shenzhen1.png}
\includegraphics[width=0.25\linewidth]{images/dataset/shenzhen2.png}
}
\subcaptionbox{Dortmund}[\linewidth]{
\includegraphics[width=0.48\linewidth]{images/dataset/dort.png}
\includegraphics[width=0.25\linewidth]{images/dataset/dort1.png}
\includegraphics[width=0.25\linewidth]{images/dataset/dort2.png}
}
\caption{Datasets used for the experimental evaluations. Different road scenarios are considered. In addition, the three datasets were captured by different sensors at different ground resolutions.}
\label{fig:dataset}
\end{figure}
In addition to the different road scenarios, the three datasets were also captured by different sensors at different relative flight heights on different platforms.
Table \ref{tab:datasets} lists the type of sensor, relative flight height, ground sample distance (GSD), and the number of images for the three datasets.
It should also be noted that the SWJTU dataset was captured by a UAV, and the other two by a manned aircraft.
\begin{table}[h]
\centering
\caption{Detailed description of the datasets}
\label{tab:datasets}
\begin{tabular}{c|l|ccc}
\hline
\multicolumn{2}{c|}{Dataset} & SWJTU & Shenzhen & Dortmund \\ \hline
\multicolumn{2}{c|}{Sensor} & SONY ICLE-5100 & PHASE ONE IQ180 & IGI PentaCam \\
\multicolumn{2}{c|}{Relative flight height (m)} & 85.23 & 918.66 & 831.97 \\
\multicolumn{2}{c|}{Ground sample distance (cm)} & 1.6-2.0 & 6.0-8.0 & 8.0-12.0 \\ \hline
\end{tabular}
\end{table}
\subsection{Results}
\label{subs:qualitative}
\subsubsection{Texture integration}
To resolve the issue caused by the discontinuous texture image, we first integrate the texels in different charts into a unified image through orthogonal rendering of the textured mesh models.
Figure \ref{fig:res_integration} shows the integrated image for the three datasets.
The red rectangle in the top-left part of each subfigure is interactively selected by the operator, and only the triangles inside the selected ROI are considered for further processing.
The yellow highlighted regions in the bottom-left part of each subfigure indicate the facets involved in the UV mesh.
Even though the ROI only accounts for a small rectangular area, several small charts are involved; direct processing is difficult owing to the fragments in the UV mesh.
This is resolved through efficient rendering of the mesh models (Figure \ref{fig:res_integration}, right part).
The resolution of the viewport is selected according to the average GSD of the model; this ensures that the texture information is preserved during the rendering process.
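Matching the viewport to the average GSD amounts to the following (the function name and the rounding choice are illustrative):

```python
import math

def viewport_size(roi_width_m, roi_height_m, gsd_m):
    """Viewport dimensions in pixels so that one rendered pixel covers
    roughly one ground sample distance of the mesh model."""
    return (math.ceil(roi_width_m / gsd_m),
            math.ceil(roi_height_m / gsd_m))
```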
\begin{figure}[H]
\centering
\subcaptionbox{SWJTU}[0.32\linewidth]{\includegraphics[width=\linewidth]{images/integration/swjtu.pdf}}
\subcaptionbox{Shenzhen}[0.32\linewidth]{\includegraphics[width=\linewidth]{images/integration/shenzhen.pdf}}
\subcaptionbox{Dortmund}[0.32\linewidth]{\includegraphics[width=\linewidth]{images/integration/dort.pdf}}
\caption{Integration of the texture by orthogonal rendering of the mesh models. The red rectangle in the top-left of each subfigure is the selected ROI in the mesh model. The yellow highlight in the bottom-left part indicates the corresponding facets selected on the UV mesh. The right part of each subfigure shows the rendered image.}
\label{fig:res_integration}
\end{figure}
\subsubsection{Image and mesh completion}
After obtaining the rendered image $\mathcal{R}_c$, we directly detect the vehicles using Faster R-CNN \citep{ren2015faster}, from which the mask image $\mathcal{R}_m$ is obtained.
Then, the void regions are completed using the proposed methods, yielding $\mathcal{R}_c'$, which is de-integrated back to the mesh texture.
In addition, we flatten the triangles inside the masked regions for the completed mesh $\mathcal{M}'$.
Figures \ref{fig:results_swjtu} to \ref{fig:results_dort} show the completed results for both the images and mesh models.
Two regions for each dataset are separated into the top and bottom halves.
Except for the shadow regions in the Dortmund dataset, the results are quite satisfactory.
The road markers are recovered reasonably well, such as the crosswalk and X-cross in the SWJTU dataset, and the structured labels in the parking lot of the Dortmund dataset.
For the Shenzhen dataset, the faint structure patterns on the textureless road are also preserved.
The right part of each figure shows the mesh models before and after completion from four different viewpoints.
The proposed methods can produce cleaner road scenes after flattening the geometries and completing the textures of the mesh models. More supplementary video demonstrations involving large regions are available at \url{https://vrlab.org.cn/~hanhu/projects/mesh}.
\begin{figure}[H]
\centering
\subcaptionbox{Image completion}[0.26\linewidth]{
\includegraphics[width=\linewidth]{images/results/imageCompletion/swjtu1.pdf}\vspace{0.2em}
\includegraphics[width=\linewidth]{images/results/imageCompletion/swjtu2.pdf}
}
\subcaptionbox{Mesh completion}[0.5\linewidth]{
\includegraphics[width=\linewidth]{images/results/meshCompletion/swjtu1.pdf}\vspace{0.2em}
\includegraphics[width=\linewidth]{images/results/meshCompletion/swjtu2.pdf}
}
\caption{Completed mesh models for the two regions of the SWJTU dataset. (a) Completion of images; the four columns show the rendered image $\mathcal{R}_c$, indicated bounding boxes, masked image $\mathcal{R}_m$, and completed image $\mathcal{R}_c'$. (b) Completed mesh models; the top and bottom rows for each region show the original models $\mathcal{M}$ and completed models $\mathcal{M}'$, respectively.}
\label{fig:results_swjtu}
\end{figure}
\begin{figure}[H]
\centering
\subcaptionbox{Image completion}[0.26\linewidth]{
\includegraphics[width=\linewidth]{images/results/imageCompletion/shenzhen1.pdf}\vspace{0.2em}
\includegraphics[width=\linewidth]{images/results/imageCompletion/shenzhen2.pdf}
}
\subcaptionbox{Mesh completion}[0.5\linewidth]{
\includegraphics[width=\linewidth]{images/results/meshCompletion/shenzhen1.pdf}\vspace{0.2em}
\includegraphics[width=\linewidth]{images/results/meshCompletion/shenzhen2.pdf}
}
\caption{Completed mesh models for the two regions of the Shenzhen dataset. (a) Completion of images; the four columns show the rendered image $\mathcal{R}_c$, indicated bounding boxes, masked image $\mathcal{R}_m$, and completed image $\mathcal{R}_c'$. (b) Completed mesh models; the top and bottom rows for each region show the original models $\mathcal{M}$ and completed models $\mathcal{M}'$, respectively.}
\label{fig:results_shenzhen}
\end{figure}
\begin{figure}[H]
\centering
\subcaptionbox{Image completion}[0.26\linewidth]{
\includegraphics[width=\linewidth]{images/results/imageCompletion/dort1.pdf}\vspace{0.2em}
\includegraphics[width=\linewidth]{images/results/imageCompletion/dort2.pdf}
}
\subcaptionbox{Mesh completion}[0.5\linewidth]{
\includegraphics[width=\linewidth]{images/results/meshCompletion/dort1.pdf}\vspace{0.2em}
\includegraphics[width=\linewidth]{images/results/meshCompletion/dort2.pdf}
}
\caption{Completed mesh models for the two regions of the Dortmund dataset. (a) Completion of images; the four columns show the rendered image $\mathcal{R}_c$, indicated bounding boxes, masked image $\mathcal{R}_m$, and completed image $\mathcal{R}_c'$. (b) Completed mesh models; the top and bottom rows for each region show the original models $\mathcal{M}$ and completed models $\mathcal{M}'$, respectively.}
\label{fig:results_dort}
\end{figure}
\subsection{Comparison of image completion}
\subsubsection{Qualitative comparison}
To evaluate the performance of image completion against state-of-the-art approaches, two approaches are considered: methods based on statistics of patch offsets \citep{he2012statistics}, and planar structure guidance \citep{huang2014image}. Figures \ref{fig:comparisons_swjtu} to \ref{fig:comparisons_dort} show the image completion results of the algorithms mentioned above. The first column shows images to be completed; the second column shows reference results edited manually using Adobe Photoshop \citep{adobe2020photoshop}. Zoomed regions are also shown to provide details of the completion results.
Comparing the results across the datasets, those of \cite{he2012statistics} exhibit artificial noise, particularly under the simplest scenario in Shenzhen. Although the method of \cite{huang2014image} yields better results than \cite{he2012statistics}, blurry areas occur, and linear features cannot be perfectly preserved in some experiments. The proposed method outperforms both: distinct linear features, such as the zebra crossing and parking lines, are preserved to achieve desirable completion results.
\begin{figure}[H]
\centering
\subcaptionbox{Input}[0.19\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_methods/swjtu-ori.pdf}
}
\subcaptionbox{Reference}[0.19\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_methods/swjtu-ps.pdf}
}
\subcaptionbox{\cite{he2012statistics}}[0.19\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_methods/swjtu-he.pdf}
}
\subcaptionbox{\cite{huang2014image}}[0.19\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_methods/swjtu-planar.pdf}
}
\subcaptionbox{Proposed}[0.19\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_methods/swjtu-proposed.pdf}
}
\caption{Comparisons of different completion methods on the SWJTU dataset. The red rectangles indicate enlarged regions.}
\label{fig:comparisons_swjtu}
\end{figure}
\begin{figure}[H]
\centering
\subcaptionbox{Input}[0.19\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_methods/shenzhen-ori.pdf}
}
\subcaptionbox{Reference}[0.19\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_methods/shenzhen-ps.pdf}
}
\subcaptionbox{\cite{he2012statistics}}[0.19\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_methods/shenzhen-he.pdf}
}
\subcaptionbox{\cite{huang2014image}}[0.19\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_methods/shenzhen-planar.pdf}
}
\subcaptionbox{Proposed}[0.19\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_methods/shenzhen-proposed.pdf}
}
\caption{Comparisons of different completion methods on the Shenzhen dataset. The red rectangles indicate enlarged regions.}
\label{fig:comparisons_shenzhen}
\end{figure}
\begin{figure}[H]
\centering
\subcaptionbox{Input}[0.19\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_methods/dort-ori.pdf}
}
\subcaptionbox{Reference}[0.19\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_methods/dort-ps.pdf}
}
\subcaptionbox{\cite{he2012statistics}}[0.19\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_methods/dort-he.pdf}
}
\subcaptionbox{\cite{huang2014image}}[0.19\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_methods/dort-planar.pdf}
}
\subcaptionbox{Proposed}[0.19\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_methods/dort-proposed.pdf}
}
\caption{Comparisons of different completion methods on the Dortmund dataset. The red rectangles indicate enlarged regions.}
\label{fig:comparisons_dort}
\end{figure}
\subsubsection{Quantitative comparison}
The peak signal-to-noise ratio (PSNR) is the most widely used objective evaluation index for images. Structural similarity (SSIM) is another image quality index, which measures similarity in terms of brightness, contrast, and structure. Herein, we evaluate the completion results on the integrated road-area texture images in terms of PSNR and SSIM, as in \citep{hore2010image}.
We consider a manually repaired image using Adobe Photoshop as a reference to calculate the PSNR and SSIM of different methods. Table \ref{tab:completion_evaluation} shows these evaluation indexes \citep{hore2010image}. It can be seen that the proposed method has better average PSNR and SSIM than the other two state-of-the-art methods. Regarding PSNR, the proposed method achieved the best results for all the testing samples; regarding SSIM, it achieved the best results for four out of six samples.
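For reference, PSNR against the manually repaired image can be computed as follows (an 8-bit peak value is assumed):

```python
import numpy as np

def psnr(reference, completed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal size."""
    mse = np.mean((reference.astype(float) - completed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")      # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```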
\begin{table}[h]
\centering
\caption{Comparison with other approaches in terms of PSNR and SSIM on the six datasets. The best results are highlighted in bold.}
\label{tab:completion_evaluation}
\resizebox{\textwidth}{!}{
\begin{tabular}{c|c|cccccc|c}
\hline
\multicolumn{2}{c|}{Dataset} & SWJTU1 & SWJTU2 & Shenzhen1 & Shenzhen2 & Dortmund1 & Dortmund2 & Average \\ \hline
\multirow{3}{*}{PSNR} & Proposed method & \textbf{27.49} & \textbf{25.40} & \textbf{26.56} & \textbf{24.98} & \textbf{29.67} & \textbf{33.88} & \textbf{28.00} \\
& \cite{he2012statistics} & 26.92 & 25.30 & 26.20 & 23.16 & 29.45 & 32.94 & 27.33 \\
& \cite{huang2014image} & 25.38 & 24.51 & 22.54 & 22.84 & 27.09 & 32.42 & 25.80 \\ \hline
\multirow{3}{*}{SSIM} & Proposed method & \textbf{0.932} & \textbf{0.888} & 0.918 & \textbf{0.894} & 0.950 & \textbf{0.975} & \textbf{0.926} \\
& \cite{he2012statistics} & 0.919 & 0.878 & \textbf{0.926} & 0.883 & \textbf{0.954} & 0.969 & 0.921 \\
& \cite{huang2014image} & 0.925 & 0.882 & 0.920 & 0.866 & 0.947 & 0.954 & 0.916 \\ \hline
\end{tabular}
}
\end{table}
\subsection{Analysis of linear guidance and directional constraint}
The proposed method prioritizes the target patches and constrains the random searching area to improve the completion result.
To evaluate the effect of the priority setting by linear guidance and directionally constrained searching, we conducted ablation studies. Figure \ref{fig:if_buffer_sort} shows typical comparisons between methods under different settings.
\begin{table}[H]
\centering
\caption{Completion results evaluation under different algorithm settings. The best results are highlighted in bold.}
\label{tab:evaluation_indexes}
\resizebox{\textwidth}{!}{
\begin{tabular}{c|c|cccccc}
\hline
Evaluation & Method & SWJTU1 & SWJTU2 & Shenzhen1 & Shenzhen2 & Dortmund1 & Dortmund2 \\ \hline
\multirow{4}{*}{PSNR} & Proposed & \textbf{27.27} & \textbf{25.67} & \textbf{25.24} & \textbf{28.96} & 28.36 & \textbf{34.11} \\
& w/o. Directional Guidance & 26.41 & 24.67 & 25.03 & 28.77 & \textbf{28.49} & 32.28 \\
& w/o. Linear Ordering & 26.53 & 24.86 & 25.10 & 28.55 & 28.48 & 32.33 \\
& w/o. Both & 26.15 & 25.00 & 24.94 & 28.45 & 28.48 & 34.07 \\ \hline
\multirow{4}{*}{SSIM} & Proposed & \textbf{0.930} & \textbf{0.893} & 0.901 & \textbf{0.944} & 0.935 & \textbf{0.976} \\
& w/o. Directional Guidance & 0.912 & 0.880 & 0.899 & \textbf{0.944} & 0.945 & 0.966 \\
& w/o. Linear Ordering & 0.915 & 0.876 & \textbf{0.902} & 0.941 & 0.945 & 0.966 \\
& w/o. Both & 0.910 & 0.878 & 0.900 & 0.942 & \textbf{0.947} & 0.975 \\ \hline
\end{tabular}
}
\end{table}
By comparing the results in Figure \ref{fig:if_buffer_sort}, it can be concluded that the two strategies remarkably improve the completion results in complex scenarios with numerous linear structures, whereas the settings yield similar results in simple scenarios, for example, Shenzhen1. From Table \ref{tab:evaluation_indexes}, it can be seen that, except for Shenzhen1 and Dortmund1, the proposed method yields the best completion results. The slightly inferior results on these two regions can be attributed to the fact that both have few artificial linear structures. This also demonstrates that linear-feature guidance is effective for image completion of urban road areas, which typically contain numerous linear features.
\begin{figure}[H]
\centering
\subcaptionbox{Proposed method}[0.24\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_settings/ours.pdf}
}
\subcaptionbox{w/o. Directional guidance}[0.24\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_settings/no_dir.pdf}
}
\subcaptionbox{w/o. Linear ordering}[0.24\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_settings/no_ord.pdf}
}
\subcaptionbox{w/o. Both}[0.24\linewidth]{
\includegraphics[width=\linewidth]{images/comparison/different_settings/no_both.pdf}
}
\caption{Completion results under different settings. Highlighted regions are where the errors appear.}
\label{fig:if_buffer_sort}
\end{figure}
\subsection{Discussion and limitations}
It can intuitively be seen that the proposed mesh completion method yields superior results, which are difficult to obtain using the UV editing tools in modeling software. After completion, regions with no useful texture, for example, regions occluded by cars, obtain a plausible texture. The quantitative experiments demonstrate that the synthetic texture is sufficient for model exhibition. Moreover, the proposed structure-aware image completion algorithm achieves effects similar to those of traditional manual repair with Photoshop, with little interaction and higher automation. It is also more effective than state-of-the-art methods \citep{he2012statistics, huang2014image} for image completion of urban roads.
Despite the good completion results, there are certain limitations. First, the proposed image completion algorithm is not always stable because of the random search, although it improves on state-of-the-art methods \citep{barnes2009patchmatch, huang2014image} in this respect. Second, when the blank regions have no connection with the surrounding pixels, the proposed completion algorithm cannot be applied. In the future, we will conduct related research on texture generation in semantic models.
\section{Conclusion}
\label{s:conclusion}
In this paper, we presented a method for completing road areas in oblique photogrammetric 3D models with minimal interaction. It removes undesirable vehicles while retaining a reasonable geometric structure and texture appearance, particularly for artificial repetitive structures. Because traditional methods cannot efficiently edit 3D models with fragmented textures, we also proposed a texture editing method based on orthogonal rendering. In addition, a structure-aware image completion algorithm with superior performance on road-area images was proposed for automatic texture completion. Experimental evaluation and analysis demonstrated both the advantages and the limitations of the proposed method.
\section*{Acknowledgments}
This work was supported in part by the National Key Research and Development Program of China (Project No. 2018YFC0825803) and by the National Natural Science Foundation of China (Project No. 41631174, 42071355, 41871291). In addition, the authors gratefully acknowledge the provision of the datasets by ISPRS and EuroSDR, which were released in conjunction with the ISPRS Scientific Initiatives 2014 and 2015, led by ISPRS ICWG I/Vb.
\bibliographystyle{model2-names}
\section{Introduction}
Synchronous Phasor Measurement Units (PMUs) are being massively deployed throughout the grid
and provide the dispatcher with time-stamped measurements relevant to the state of grid health, as well as the data required for controlling and monitoring the system. Currently, PMUs provide the fastest measurements of grid status.
As a result, recent monitoring and control schemes rely primarily on PMU measurements.
For example,~\cite{diao} tries to increase voltage resilience to avoid voltage collapse by using synchronized PMU measurements and decision trees.
In addition,
\cite{Giannakis,Wiesel,hezhangJ}~
rely on phase angle measurements for fault detection and localization.
Nevertheless, it is not economically feasible to place PMUs at every node; therefore, state estimators will still be used at some nodes in the system. PMUs are prone to false data injection attacks, and even setting that aside, the parts of the grid relying on state estimators remain a back door for such attacks. Consequently, the aforementioned methods
can be deceived by false data injection attacks. Thus, it is crucial to have a mechanism for fast and accurate discovery of malicious tampering, both for preventing attacks that may lead to blackouts and for routine monitoring and control tasks of the smart grid, including state estimation and optimal power flow. It should be noted that our scheme does not depend on the use of PMUs in the network and is applicable to both the current and the future grid.\\
\subsection{Summary of Results and Related Work}
We have designed a decentralized false data injection attack detection mechanism that utilizes the Markov graph of bus phase angles. We utilize the Conditional Covariance Test (CCT)~\cite{AnimaW} to learn the structure of the smart grid.
We show that under normal circumstances, and because of the grid structure, the Markov graph of voltage angles can be determined by the power grid graph; therefore, a discrepancy between the calculated Markov graph and the learned structure triggers an alarm.
\begin{wrapfigure}{r}{.3\textwidth}
\centering
\includegraphics[width=2in]{figures/flowchart_improved.pdf}
\caption{\small Flowchart of our detection algorithm}
\label{fig:flowchart}
\end{wrapfigure}
Because of the connection between Markov graph of bus angle measurements and grid topology, our method can be implemented in a decentralized manner, i.e. at each sub-network. Currently, sub-network topology is available online and global network structure is available hourly~\cite{Giannakis}.
Decentralization not only increases speed, bringing detection closer to online operation, but also improves accuracy and stability by avoiding the communication delays and synchronization problems that arise when measurement data must be sent between locations far apart.
It also noticeably decreases the amount of exchanged data, addressing privacy concerns as much as possible.\\
We show that our method can detect the most recently designed attack on the power grid that can deceive the State Estimator~\cite{Teixreia2011}. The attacker is equipped with vital data and assumes the knowledge of bus-branch model of the grid. To the best of our knowledge, our method is the first to detect such a sophisticated attack comprehensively and efficiently with any number of attacked nodes. It should be noted that our method can detect that the system is under attack as well as the set of nodes under the attack. The flowchart is shown in Figure~\ref{fig:flowchart}.
Although the authors of~\cite{berkeley} suggest a PMU placement algorithm that makes this attack observable, they present an algorithm only for the 2-node attack and empirical approaches for 3-, 4-, and 5-node attacks. According to~\cite{berkeley}, for cases where more than two nodes are under attack, the complexity of the approach is said to be \textit{``disheartening."} Considering that finding the number of needed PMUs is NP-hard and that~\cite{berkeley} gives an upper bound and uses a heuristic method for PMU placement, we emphasize that our algorithm has no hardware requirements and that its complexity does not depend on the number of attacked nodes, so it works for any number of attacked nodes. It is worth mentioning that even in the original paper presenting the attack, seven measurements from five nodes of a relatively small network (IEEE-30) are manipulated, so the 2-node attack does not seem to be the most probable one.\\
There has been another line of work towards computing the ``security index" for different nodes to find the set of nodes that are more vulnerable to false data injection attacks~\cite{Sandberg}. Although these attempts are appreciated, our method differs greatly from such perspectives as such methods do not detect the attack state when it happens and they cannot find the set of attack nodes.\\
A dependency graph approach is used in~\cite{hezhangJ} for topology fault detection in the grid. However, since attacks on state estimators are not considered, such methods can be deceived by false data injection. Furthermore,~\cite{hezhangJ} uses a constrained maximum likelihood optimization for finding the information matrix, whereas here an advanced structure learning method is used that better captures the power grid structure, in which the edges are distributed over the network. This is discussed in Section~\ref{sec:CCT}.\\
In addition, we show that our method can detect the case where the attacker manipulates reactive power data to lead the state estimator to wrong voltage estimates. Such an attack can be designed to fake a voltage collapse or to trick the operator into causing one. Detection proceeds by linearization of the AC power flow, considering the fluctuations around steady state; such an attack can then be detected by the same approach we develop here for bus phase angles and active power.\\
\paragraph{Paper Outline: } The paper is organized as follows. In Section~2, we show that bus phase angles form a Gaussian Markov Random Field (GMRF) and discuss that their Markov graph can be determined by the grid structure.
In Section~3, we explain the Conditional Covariance Test (CCT)~\cite{AnimaW}, which we use for obtaining the Markov graph of bus phase angles, and discuss how we leverage it to perform optimally for the power grid.
The stealthy deception attack on the State Estimator is introduced in Section~4. We elaborate on our detection scheme in Section~5. Simulations are presented in Section~6. We elaborate on the reactive power counterpart in Section~7, and Section~8 concludes the paper.
\section{Preliminaries and problem formulation}
\subsection{Preliminaries}
A Gaussian Markov Random Field (GMRF) is a family of jointly Gaussian distributions, which factor
according to a given graph. Given a graph $G = (V,E)$, with $V = \{1, . . . , p\}$, consider
a vector of Gaussian random variables $X = [X_1,X_2, . . . ,X_p]^T$ , where each node $i\in V$ is
associated with a scalar Gaussian random variable $X_i$. A Gaussian Markov Random Field
on $G$ has a probability density function (pdf) that can be parametrized as
\begin{align}
\label{user prob_distr}
f_X(x)\propto \exp\left[-\frac{1}{2} x^TJx+h^Tx\right],
\end{align}
where $J$ is a positive-definite symmetric matrix whose sparsity pattern corresponds to that
of the graph $G$. More precisely,
\begin{align*}
J(i,j)=0 \Longleftrightarrow (i,j)\notin E.
\end{align*}
The matrix $J = \Sigma ^ {-1}$ is known as the \textit{potential} or \textit{information} matrix, the non-zero entries $J(i, j)$
as the edge potentials, and the vector $h$ as the vertex potential vector. In general, the graph $G=(V,E)$ is called the Markov graph (graphical model) underlying the joint probability distribution $f_X(x)$,
where the node set $V$ represents the random variables $\{X_i\}$ and the edge set $E$ is defined so that the local Markov property holds.
For a Markov Random Field, local Markov property states that $X_i \perp {X}_{-\lbrace i,N(i)\rbrace}|X_{N(i)}$,
where $X_{N(i)}$ represents all random variables associated with the neighbors of $i$ in graph $G$
and ${X}_{-\lbrace i,N(i)\rbrace}$ denotes all variables except for $X_i$ and $ X_{N(i)}$.
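To make the relationship between the sparsity of $J$ and the local Markov property concrete, the following Python sketch (a hypothetical 3-node chain with illustrative potentials, not a power network) verifies that a zero entry of $J$ yields zero conditional covariance between the corresponding variables given the separating node:

```python
import numpy as np

# Hypothetical chain graph 1 - 2 - 3: J(1,3) = 0 because (1,3) is not an edge.
J = np.array([[2.0, -0.5, 0.0],
              [-0.5, 2.0, -0.5],
              [0.0, -0.5, 2.0]])
Sigma = np.linalg.inv(J)

# Conditional covariance of X1 and X3 given X2:
# Sigma(1,3|2) = Sigma[0,2] - Sigma[0,1] * Sigma[1,1]^{-1} * Sigma[1,2]
cond_cov = Sigma[0, 2] - Sigma[0, 1] / Sigma[1, 1] * Sigma[1, 2]
print(abs(cond_cov) < 1e-10)  # X1 independent of X3 given X2, matching J(1,3) = 0
```

This is exactly the local Markov property stated above: conditioning on the neighbor set $N(i)$ removes all dependence on the remaining variables.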
\subsection{Bus phase angles GMRF} \label{sec:Bmatrix}
We now apply the preceding to bus phase angles.
The DC power flow model~\cite{state}
is often used for analysis of power systems in normal operation. When the system is stable, the phase angle differences are small, so $\sin(\theta_i-\theta_j) \approx \theta_i-\theta_j$.
By the DC power flow model, system state $X$ can be described using bus phase angles.
The active power flow on the transmission line connecting bus $i$ to bus $j$ is given by
\begin{align}
\label{one}
P_{ij} = b_{ij}(X_i - X_j),
\end{align}
where $X_i$ and $X_j$ denote the phasor angles at bus $i$ and
$j$ respectively, and $b_{ij}$ denotes the inverse of the line inductive reactance.
The power injected at bus $i$ equals the algebraic sum of the powers flowing away from bus $i$:
\begin{align}
\label{two}
P_{i} =\sum_{j\neq i}P_{ij}=\sum_{j\neq i} b_{ij}(X_i - X_j).
\end{align}
When buses $i$ and $j$ are not connected, $b_{ij} = 0$. Thus, the phasor angle at bus $i$ can be represented as
\begin{align}
\label{fund}
X_{i} =\sum_{j\neq i}\frac{b_{ij}}{\sum_{k\neq i}b_{ik}} X_j+\frac{1}{\sum_{k\neq i} b_{ik}} P_i.
\end{align}
Equation~\eqref{two} can also be rewritten in matrix form as
\begin{align}
\label{PBX}
P=BX,
\end{align}
where $P=[P_1,P_2,...,P_p]$ is the vector of injected active powers, $X=[X_1,X_2,...,X_p]$ is the vector of bus phase angles and
\begin{align}
\label{Bdef}
B=\left\lbrace \begin{array}{lcl}
-b_{ij} & \mbox{if} & i\neq j, \\
\sum_{j\neq i} b_{ij} & \mbox{if} &i=j.
\end{array}
\right.
\end{align}
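As an illustration of \eqref{PBX} and \eqref{Bdef}, the following Python sketch (the paper's experiments use MATLAB; this toy 4-bus network and its susceptance values are hypothetical) builds the $B$ matrix and checks that it is a weighted graph Laplacian, so injected powers sum to zero:

```python
import numpy as np

# Toy 4-bus network; b_ij = 1/x_ij are hypothetical line susceptances
lines = {(0, 1): 2.0, (1, 2): 1.5, (2, 3): 1.0, (0, 3): 0.5}
p = 4
B = np.zeros((p, p))
for (i, j), b in lines.items():
    B[i, j] = B[j, i] = -b   # off-diagonal entries: -b_ij
    B[i, i] += b             # diagonal entries: sum of incident susceptances
    B[j, j] += b

X = np.array([0.0, -0.05, -0.08, -0.03])  # bus phase angles (bus 0 as slack)
P = B @ X                                  # injected powers, P = B X

# B is a weighted Laplacian: every row sums to zero, so total injection is zero
print(np.allclose(B.sum(axis=1), 0), np.isclose(P.sum(), 0))
```

The zero row sums are also why $B$ (and hence $H$ later in the paper) has rank $p-1$ and requires a slack-bus reference.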
\paragraph{Remark:} Note that, because of the linearity of the DC power flow model, the above equations hold both for the phase angles $X$ and injected powers $P$ themselves and for their {\it fluctuations} around steady-state values. Specifically, if we let $\widetilde{P}$ denote the vector of active power fluctuations and $\widetilde{X}$ the vector of phase angle fluctuations, we have $\widetilde{P}=B\widetilde{X}$. In the sequel, the focus is on the DC power flow equations; nevertheless, our analysis remains valid if we consider {\it fluctuations} around the steady-state values.
Because of load uncertainty, an injected power can be modeled as a random variable~\cite{Luettgen1993}, and since it is the superposition of many independent factors (e.g., loads, fluctuations of power outputs of wind turbines, etc.), it can be modeled as a Gaussian random variable. Thus, the linear relationship in~\eqref{PBX} implies that the difference of phasor angles across
a line can be approximated by a Gaussian random variable truncated within $[0, 2\pi)$. Considering the fixed phasor at the slack bus,
it can be assumed that under steady-state conditions, phasor angle measurements are Gaussian random variables~\cite{hezhangJ}.
The next step is to determine whether the $X_i$'s satisfy the local Markov property and, in the affirmative, to discover the neighbor set corresponding to each node. We do this by analyzing Equation~\eqref{fund}. If there were only the first term, we would conclude that the set of nodes electrically connected to node $i$ satisfies the local Markov property, but the second term makes a difference.
Below we argue that an analysis of the second term of \eqref{fund} shows that this term causes some second-neighbors of $X_i$ to have a nonzero term in the $J$ matrix. In addition, for nodes that are more than two hop distance apart, $J_{ij}=0$. Therefore, as opposed to the claim in~\cite{hezhangJ}, a second-neighbor relationship {\it does exist} in the $J$-matrix. \\
As stated earlier, the power injections at different buses have Gaussian distributions. We assume that they are independent and, without loss of generality, zero-mean. Therefore, the probability density function of the vector $P$ is
\begin{align*}
f_P(P) \propto e^{-\frac{1}{2}P^T P}.
\end{align*}
Since $P=BX$, we have
\begin{align*}
f_X(X) \propto e^{-\frac{1}{2}X^T B^T B X}.
\end{align*}
Recalling the parametrization of the pdf of jointly Gaussian random variables in~\eqref{user prob_distr}, we get $J=B^T B$. Let $d(i,j)$ denote the hop distance between nodes $i$ and $j$. By the definition of the $B$ matrix, this leads to some nonzero entries $J_{ij}$ for $d(i,j)=2$. In addition, we state the following.
\begin{proposition}
Let $d(i,j)$ denote the hop distance between nodes $i$, $j$ on the power grid graph $G$.
\\Assume that the fluctuations of the powers injected at the nodes are Gaussian and mutually independent. Then
\begin{align*}
J_{ij}=0, \quad \quad \forall ~ d(i,j) >2.
\end{align*}
\end{proposition}
\begin{proof}
We argue by contradiction.
Assume $J_{ij} \neq 0$ for some $d(i,j) >2$. Since $J_{ij}=\sum_k B_{ik}B_{jk}$, it follows that
$\exists ~ k ~\text{s.t.} ~B_{ik}\neq 0, B_{jk} \neq 0$. By~\eqref{Bdef}, $B_{ik} \ne 0$ implies $d(i,k)=1$ and, similarly, $B_{jk} \ne 0$ implies $d(j,k)=1$.
The triangle inequality then implies that
$d(i,j) \leq d(i,k)+d(k,j)=1+1=2$,
which contradicts the assumption $d(i,j) >2$.
\end{proof}
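The proposition can be checked numerically. This sketch uses a hypothetical unit-susceptance path network: second neighbors acquire nonzero entries in $J=B^TB$, while entries for nodes more than two hops apart vanish exactly:

```python
import numpy as np

# Path graph 0-1-2-3-4 with unit susceptances; hop distance d(i,j) = |i - j|
p = 5
B = np.zeros((p, p))
for i in range(p - 1):
    B[i, i + 1] = B[i + 1, i] = -1.0
    B[i, i] += 1.0
    B[i + 1, i + 1] += 1.0

J = B.T @ B  # information matrix under independent Gaussian injections

# Second neighbors (d = 2) have nonzero entries, but d > 2 is exactly zero
print(J[0, 2] != 0, np.isclose(J[0, 3], 0), np.isclose(J[0, 4], 0))
```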
It was shown in~\cite{sedghi:bookchapter} that for some graphs, the second-neighbor terms are smaller than the terms corresponding to the immediate electrical neighbors of $X_i$. More precisely, it was shown that for lattice structured grids,
this approximation falls under the generic fact of the tapering off of Fourier coefficients~\cite{sedghi:bookchapter}.
Therefore, we can approximate each neighborhood by the immediate electrical neighbors only.
We can also proceed with the exact relationships.
For simplicity, we opt for the first-neighbor analysis. We explain shortly why CCT best fits this approximation.\\
Note that our detection method relies on the graphical model of the variables. It is based on the fact that the Markov graph of bus phase angles changes under an attack. CCT is tuned with correct data, and we prove that in case of an attack, the Markov graph of the compromised data does not follow the Markov graph of the correct data. Hence, we can tune CCT with either the exact relationships or the approximate Markov graph; in both cases, the output under attack differs from the output tuned with correct data.
Therefore, our method works for both approximate and exact neighborhoods.
\section{Structure Learning}
In the context of graphical models, model selection means finding the exact underlying Markov graph of a group of random variables based on samples of those random variables.
There are two main classes of methods for learning the structure of the underlying graphical model: convex and non-convex methods. $\ell_1$-regularized maximum likelihood estimators are the main class of convex methods~\cite{Friedman&etal:07,Ravikumar&etal:08Arxiv,JanzaminAnandkumar:CovDecomp2012ArXiv}. In these methods, the inverse covariance matrix is penalized with a convex $\ell_1$-regularizer in order to encourage sparsity in the estimated Markov graph structure. The other class consists of non-convex or greedy methods~\cite{AnimaW}.
Since we are faced with a GMRF in our problem, it is useful to exploit one of these structure learning methods.
\subsection{Conditional Covariance Test} \label{sec:CCT}
In order to learn the structure of the power grid, we utilize a recent Gaussian graphical model selection method called the {\it Conditional Covariance Test (CCT)}~\cite{AnimaW}. The CCT method estimates the structure of the underlying graphical model given i.i.d. samples of the random variables.
The CCT method is shown in Algorithm~\ref{CCT}.
\begin{algorithm}[t]
\caption{$CCT(x^n; \xi_{n,p},\eta)$ for structure learning using samples $x^n$~\cite{AnimaW}}
\label{CCT}
\begin{algorithmic}
\State \textbf{Initialize} $\widehat{G}^n_p=(V,\emptyset)$
\For{each $(i,j) \in V^2$}
\If{$\min_{\substack{{S \subset V\setminus \{i,j\}}\\{|S| \leq \eta}}} |\widehat{\Sigma}(i, j|S)| > \xi_{n,p}$}
\State add $(i,j)$ to the edge set of $\widehat{G}^n_p$
\EndIf
\EndFor
\State \textbf{Output}: $\widehat{G}^n_p$
\end{algorithmic}
\end{algorithm}
In Algorithm~\ref{CCT}, the output is an edge set corresponding to graph $G$ given
$n$ i.i.d. samples $x^n$, each of which has $p$ variables, a threshold $\xi_{n,p}$ (that depends on both $p$ and $n$) and a constant $\eta \in \mathbb{N}$, which is related to the local vertex separation property (described later). In our case, each bus phase angle represents one of the $p$ variables.\\
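A brute-force sketch of Algorithm~\ref{CCT} in Python may clarify the procedure (the paper's experiments use MATLAB with YALMIP/SDPT3; the threshold, sample count, and test network below are illustrative assumptions). For each pair $(i,j)$ it minimizes the empirical conditional covariance over all conditioning sets of size at most $\eta$ and declares an edge when no small set explains the dependence away:

```python
import itertools
import numpy as np

def cond_cov(S_hat, i, j, S):
    """Empirical conditional covariance Sigma(i, j | S)."""
    if len(S) == 0:
        return S_hat[i, j]
    S = list(S)
    return S_hat[i, j] - S_hat[i, S] @ np.linalg.solve(S_hat[np.ix_(S, S)], S_hat[S, j])

def cct(samples, xi, eta):
    """Brute-force CCT sketch: samples is an (n, p) array of i.i.d. observations."""
    S_hat = np.cov(samples, rowvar=False)
    p = S_hat.shape[0]
    edges = set()
    for i, j in itertools.combinations(range(p), 2):
        others = [k for k in range(p) if k not in (i, j)]
        score = min(abs(cond_cov(S_hat, i, j, c))
                    for r in range(eta + 1)
                    for c in itertools.combinations(others, r))
        if score > xi:   # no small conditioning set explains (i, j) away
            edges.add((i, j))
    return edges

# Chain 0 - 1 - 2: CCT should recover exactly the two chain edges
rng = np.random.default_rng(0)
J = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
x = rng.multivariate_normal(np.zeros(3), np.linalg.inv(J), size=4000)
print(cct(x, xi=0.1, eta=1))
```

The exhaustive minimization over conditioning sets is what gives the $O(p^{\eta+2})$ complexity quoted below.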
A sufficient condition for the output of CCT to be structurally consistent with the underlying Markov graph of the variables is that the graph satisfy the
local separation property and walk-summability~\cite{AnimaW}. An ensemble of graphs has the $(\eta,\gamma)$-local separation property if,
for any $(i,j) \notin E(G)$, the number of paths between $i$ and $j$ of length at most $\gamma$ does not exceed $\eta$. A Gaussian model is said to be $\alpha$-walk summable
if
$||\bar{\textbf{R}}|| \leq \alpha < 1$
where $\bar{\textbf{R}}=[|r_{ij}|]$ and $||\cdot||$ denotes the spectral (2-)norm of a matrix, which for symmetric matrices is given by the maximum absolute eigenvalue~\cite{AnimaW}. $\textbf{R}$ is the matrix of partial correlation coefficients; it is zero on the diagonal, and for off-diagonal entries we have
\begin{align}
\label{partialr}
r_{ij}\triangleq & \frac{\Sigma(i,j|V \setminus{ \lbrace i,j \rbrace })}{\sqrt{\Sigma(i,i|{V\setminus { \lbrace i,j \rbrace}})\Sigma(j,j|{V \setminus { \lbrace i,j \rbrace}})}}\nonumber \\
=&-\frac{J(i,j)}{\sqrt{J(i,i)J(j,j)}}.
\end{align}
$r_{ij}$, the \textit{partial correlation coefficient} between variables $X_i$ and $X_j$ for $i \neq j$, measures their conditional covariance given all other variables~\cite{Lauritzen:book}.\\
Whether we use the exact or the approximate neighborhood relationship, the Markov graph of bus phase angles is an example of bounded local path graphs, which satisfy the local separation property. We also checked the analyzed networks for the walk-summability condition. As shown in \eqref{partialr} and the definition of $\alpha$-walk summability, this property depends only on the $J$ matrix and thus on the topology of the grid; $\alpha$-walk summability does not depend on the operating point of the grid. \\
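Since the check depends only on $J$, it can be scripted directly; this sketch (with an illustrative chain-structured $J$, not one of the IEEE test systems) computes $||\bar{\textbf{R}}||$ from \eqref{partialr}:

```python
import numpy as np

def walk_summability_alpha(J):
    """Spectral norm of |R|, where R holds the partial correlations r_ij."""
    d = np.sqrt(np.diag(J))
    R = -J / np.outer(d, d)   # r_ij = -J_ij / sqrt(J_ii * J_jj)
    np.fill_diagonal(R, 0.0)  # R is zero on the diagonal
    return np.linalg.norm(np.abs(R), 2)

# Hypothetical chain-structured information matrix
J = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
alpha = walk_summability_alpha(J)
print(alpha < 1)  # alpha-walk-summable for some alpha < 1
```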
It is shown in~\cite{AnimaW} that under walk summability, the effect of faraway nodes on covariance decays exponentially with distance and the error in approximating the covariance by local neighboring decays exponentially with distance. So by correct tuning of the threshold $\xi_{n,p}$ and with enough samples, we expect the output of CCT method to follow the grid structure.\\
It is worth mentioning that when we use the CCT method for structure learning of phasor data, our method is robust against measurement noise. The reason is that CCT analyzes the conditional covariance of its input data. Measurement noise is white and, in addition, uncorrelated with the data. As a result, it does not change the conditional correlations between phasor data, and our method is thus immune to measurement noise.\\
CCT distributes the edges fairly uniformly across the nodes, while the $\ell_1$ method tends to cluster all the edges together between the ``dominant'' variables, leading to a densely connected component and several isolated points~\cite{AnimaW}. Therefore, CCT is more suitable for detecting the structure of the power grid, where the edges are distributed over the network. It should be noted that the computational complexity of CCT is $O(p^{\eta+2})$, which is efficient for small $\eta$~\cite{AnimaW}; $\eta$ is the parameter of the local separation property described above. \\
The sample complexity associated with CCT method is $n=\Omega(J_{min}^{-2}\log{p})$, where $J_{min}$ is the minimum absolute edge potential in the model~\cite{AnimaW}.
\subsection{Decentralization}
We want to find the Markov graph of our bus phasor measurements. The connection we have established between electrical connectivity and correlation helps us to decentralize our method to a great extent.
We consider the power network in its normal operating condition.
It consists of different areas connected together via border nodes.
Therefore, we decompose our network into these sub-areas.
Our method can be performed locally in the sub-networks.
The sub-network connection graph is available online from the protection system at each sub-network
and can be readily compared with the bus phase angle Markov graph.
In addition, only for border nodes do we need to consider their out-of-area neighbors as well.
This can be done either by solving the power flow equations for that border link
or by receiving measurements from neighbor sub-networks.
Therefore, we run CCT for each sub-graph to figure out its Markov graph. Then we compare it with online network graph information to detect false data injection attack.\\
This decentralization reduces complexity and increases speed. Our decentralized method is a substitute for considering all measurements throughout the power grid, which requires a huge amount of data exchange and computation. In addition to leaving fewer nodes to analyze, this decentralization leads to a smaller $\eta$ and greatly reduces computational complexity, which makes our method executable on very large networks.
Furthermore, since structure learning is performed locally, non-linear faraway relationships intrinsic to power systems do not play a role and our assumptions remain valid.
Moreover, utility companies are unwilling to expose their information for reasons of economic competition, and several works have addressed this privacy concern~\cite{Sankar}. It is thus desirable to reduce the amount of data exchanged between different areas, and our method adequately fulfills this requirement.\\
\subsection{Online calculations}
For fast monitoring of the power grid, we need an online algorithm.
As we show in this section, our algorithm can be formulated as an iterative method that processes new data without reprocessing earlier data.
Here, we derive an iterative formulation for the sample covariance matrix.
Then we use it to calculate the conditional covariance using
\begin{align*}
\widehat{\Sigma}(i, j|S) := \widehat{\Sigma}(i, j)- \widehat{\Sigma}(i, S)\widehat{\Sigma}^{-1}
(S, S)\widehat{\Sigma}(S, j).
\end{align*}
As we know, in general
\begin{align*}
\Sigma= E[(X-\mu)(X-\mu)^T]=E[XX^T]-\mu\mu^T.
\end{align*}
Let $\widehat{\Sigma}^{(n)}(X)$ denote the sample covariance matrix for a vector $X$ of $p$ elements from $n$ samples and let $\widehat{\mu}^{(n)}(X)$ be the corresponding sample mean.
In addition, let $X^{(i)}$ be the $i$th sample of our vector. Then we have
\begin{align}
\label{iter}
\widehat{\Sigma}^{(n)}(X)=\frac{1}{n}\sum_{i=1}^n X^{(i)}{X^{(i)}}^T-\widehat{\mu}^{(n)}{\widehat{\mu}^{(n)}}^T.
\end{align}
Therefore,
\begin{align}
\label{update}
\widehat{\Sigma}^{(n+1)}(X)=\frac{1}{n+1}\left[\sum_{i=1}^n X^{(i)}{X^{(i)}}^T+X^{(n+1)}{X^{(n+1)}}^T\right] -\widehat{\mu}^{(n+1)}{\widehat{\mu}^{(n+1)}}^T,
\end{align}
where
\begin{align}
\label{samplemean}
{\widehat{\mu}^{(n+1)}} =\frac{1}{n+1}[n{\widehat{\mu}^{(n)}}+X^{(n+1)}].
\end{align}
By keeping the first term in~\eqref{iter} and the sample mean~\eqref{samplemean}, our updating rule is~\eqref{update}. Thus, we revise the sample covariance as soon as any bus phasor measurement changes and use it to obtain the conditional covariances needed for CCT.
It goes without saying that if the system demand and structure do not change and the system is not subject to a false data injection attack, the voltage angles at the nodes remain the same and there is no need to run any algorithm.
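The updating rule above can be packaged as a small online estimator; this Python sketch (illustrative, in place of the paper's MATLAB implementation) keeps only the running sum of $X^{(i)}{X^{(i)}}^T$ and the sample mean, never the raw history:

```python
import numpy as np

class OnlineCovariance:
    """Iterative update of the sample mean and covariance:
    each new phasor sample is folded in without reprocessing earlier data."""

    def __init__(self, p):
        self.n = 0
        self.mu = np.zeros(p)
        self.M = np.zeros((p, p))  # running sum of X X^T

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.n += 1
        self.M += np.outer(x, x)
        self.mu += (x - self.mu) / self.n  # sample-mean recursion

    def cov(self):
        # sum(X X^T)/n - mu mu^T, matching the updating rule in the text
        return self.M / self.n - np.outer(self.mu, self.mu)
```

Feeding in $n$ samples one by one reproduces the batch sample covariance exactly, so the online and offline versions of CCT see the same input.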
\section{Stealthy Deception Attack}
The most recent and most dreaded false data injection attack on the power grid was introduced in~\cite{Teixreia2011}. It assumes knowledge of the bus-branch model of the system and is capable of deceiving the state estimator, damaging power network observability, control, monitoring, demand response, and pricing schemes~\cite{kosut2010malicious}.
For a $p$-bus electric power network, the $l=2p-1$ dimensional
state vector $x$ is $(\theta^T,V^T)^T$, where $V=(V_1,...,V_p)$ is the vector of voltage bus magnitudes and $\theta=(\theta_2,...,\theta_p)$ the vector of phase angles. It is assumed that the nonlinear measurement model for state estimation is defined by $z=h(x)+\epsilon$,
where $h(.)$ is the measurement function, $z=(z_P,z_Q)$ is the measurement vector consisting of active and reactive power flow measurements and $\epsilon$ is the measurement error. $H(x^k):=\frac{dh(x)}{dx}|_{x=x^k}$ denotes the Jacobian matrix of the measurement model $h(x)$ at $x^k$.\\
The goal of the stealthy deception attacker is to compromise the measurements available to the State Estimator (SE) such that
$z^a=z+a$,
where $z^a$ is the corrupted measurement and $a$ is the attack vector. Vector $a$ is designed such that the SE algorithm converges and the attack $a$ is undetected by the Bad Data Detection scheme.
Then it is shown that, under the DC power flow model,
such an attack can only be performed locally with $a \in \mathrm{Im}(H)$,
where $H=H_{P\theta}$ is the matrix connecting the vector of bus injected powers to the vector of bus phase angles, i.e., $P=H_{P\theta} \theta$. The attack is shown in Figure~\ref{fig:SE}.
\begin{figure}[t]
\centering
\captionsetup{type=figure}
\includegraphics[width=3.5in]{figures/whole.pdf}
\caption{\small Power grid under a cyber attack}
\label{fig:SE}
\end{figure}
\section{Stealthy Deception Attack Detection}
As mentioned earlier, the fundamental idea behind our detection scheme is that of structure learning.
Our learner, the CCT method, is first tuned with correct data representing the data structure,
which corresponds to the grid graph. Therefore, any attack that changes the structure alters the output of CCT method and this triggers the alarm.
Let us consider the aforementioned attack more specifically. Since we consider the DC power flow model
and all voltage magnitudes are assumed to be 1 p.u.,
the state vector introduced in~\cite{Teixreia2011} reduces to the vector of voltage angles, $X$. Since
$a \in \mathrm{Im}(H)$, $\exists d$ such that $a=Hd$ and
\begin{align*}
z^a=z+a=H(X+d)=HX^a,
\end{align*}
where $X^a$ represents the vector of angles when the system is under attack,
$z^a$ is the attacked measurement vector and $X$ is the actual phasor angle vector. Considering~\eqref{two}, we have $H_{ij}=-b_{ij}$ for $i \neq j$ and $H_{ii}=\sum_{i\neq j}b_{ij}$, where $b_{ij}$ denotes the inverse of the line inductive reactance. We have
\begin{align}
\label{Xa}
X^a=X+d=H^{-1}P+H^{-1}a=H^{-1}(P+a).
\end{align}
As the definition of the $H$ matrix shows, it is of rank $p-1$. Therefore, $H^{-1}$ above denotes the pseudo-inverse of the $H$ matrix. Another way to address this singularity is to remove the row and column associated with the slack bus. \\
From~\eqref{Xa},
\begin{align*}
\Sigma{(X^a,X^a)}= H^{-1}[\Sigma(P+a,P+a)]{H^{-1}}^T
= H^{-1}[\Sigma(P,P)+\Sigma(a,a)]{H^{-1}}^T.
\end{align*}
The above calculation assumes that the attack vector is independent of the current values in the network,
as in the definition of the attack~\cite{Teixreia2011}. \\
An attack is considered successful if it causes the operator to make a wrong decision. For that purpose, the attacker would not insert just a single wrong sample; moreover, a constant attack vector causes no reaction, which rules out constant attacks. Therefore, the attacker is expected to insert random vectors $a$ over some samples. Thus $ \Sigma(a,a) \neq 0 $ and
\begin{align}
\label{attack_d}
\Sigma{(X^a,X^a)} \neq \Sigma(X,X).
\end{align}
It is not difficult to show that if we remove the assumption on independence of attack vector and injected power, \eqref{attack_d} still holds.\\
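The covariance change in \eqref{attack_d} can be illustrated numerically. In this Python sketch, the reduced $H$ matrix (slack bus removed) and all distribution parameters are hypothetical; a random attack $a=Hd$ shifts the angle covariance by $\Sigma(d,d)$, which is what breaks the Markov graph:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical reduced H (slack bus removed) for a small 3-bus system
H = np.array([[3.0, -1.0],
              [-1.0, 2.0]])
Hinv = np.linalg.inv(H)

n = 5000
P = rng.normal(size=(n, 2))             # injected-power fluctuations
d = rng.normal(scale=0.5, size=(n, 2))  # attacker's random degrees of freedom

X = P @ Hinv.T   # honest angles:   X   = H^{-1} P
Xa = X + d       # attacked angles: X^a = H^{-1}(P + H d) = X + d

gap = np.linalg.norm(np.cov(Xa, rowvar=False) - np.cov(X, rowvar=False))
print(gap > 0.1)  # Sigma(X^a, X^a) != Sigma(X, X), as in Eq. (attack_d)
```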
Considering~\eqref{attack_d} and the uniqueness of the matrix inverse, in case of an attack the new $\Sigma^{-1}$ will not be the same as the network's $J$ matrix in normal conditions, i.e.,
$\Sigma^{-1}{(X^a,X^a)} \neq J_{normal}$,
and as a result,
the output of the CCT method will not follow the grid structure.
We use this mismatch to trigger the alarm. It should be noted that acceptable load changes do not change the Markov graph and therefore do not lead to false alarms: such changes do not invalidate the DC power flow assumption, so the Markov graph continues to follow the defined information matrix.\\
After the alarm is triggered, the next step is to find which nodes are under attack.
\subsection{Detecting the Set of Attacked Nodes}
We use the \textit{correlation anomaly} metric~\cite{anomaly} to find the attacked nodes. This metric quantifies the contribution of each random variable to the difference between the probability densities in the two cases while accounting for the sparsity of the structure. The Kullback-Leibler (KL) divergence is used as the measure of difference. As soon as an attack is detected, we use the attacked information matrix and the information matrix corresponding to the current topology of the grid to compute the anomaly score for each node. The nodes with the highest anomaly scores are announced as the nodes under attack. We investigate the implementation details in the next section.
It should be noted that the attack is performed locally, and because of the local Markov property, we are certain that no nodes from other sub-graphs contribute to the attack.
We should emphasize that the considered attack assumes knowledge of the system's bus-branch model.
Therefore, the attacker is equipped with very critical information. Yet, we can mitigate such an ``intelligent'' attack.
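The idea of per-node scoring can be sketched with a simplified proxy (this is not the exact KL-based score of~\cite{anomaly}; the scoring rule and the example matrices are illustrative): score each node by how much its row of partial correlations differs between the normal and attacked information matrices:

```python
import numpy as np

def node_anomaly_scores(J_ref, J_att):
    """Per-node proxy for the correlation-anomaly score: largest change in a
    node's partial correlations between reference and attacked models."""
    def pcorr(J):
        d = np.sqrt(np.diag(J))
        R = -J / np.outer(d, d)   # r_ij = -J_ij / sqrt(J_ii * J_jj)
        np.fill_diagonal(R, 0.0)
        return R
    return np.abs(pcorr(J_ref) - pcorr(J_att)).max(axis=1)

# Hypothetical chain 0-1-2-3; the attacker weakens the (1,2) relationship
J_ref = np.array([[2.0, -1.0, 0.0, 0.0],
                  [-1.0, 2.0, -1.0, 0.0],
                  [0.0, -1.0, 2.0, -1.0],
                  [0.0, 0.0, -1.0, 2.0]])
J_att = J_ref.copy()
J_att[1, 2] = J_att[2, 1] = -0.2
scores = node_anomaly_scores(J_ref, J_att)
print(scores.round(2))  # nodes 1 and 2 stand out
```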
\subsection{Reactive power versus voltage amplitude} \label{sec:AC}
As mentioned before, with similar calculations, we can consider the case where the attacker manipulates reactive power data to lead the state estimator to wrong voltage estimates. Such an attack can be designed to fake a voltage collapse or to trick the operator into changing the normal state of the grid. For example, if the attacker fakes a decreasing trend in the voltage magnitude of a part of the grid, the operator will send more reactive power to that part, causing a voltage overload/underload. At this point, the protection system disconnects the corresponding lines. This can lead to outages in some areas and, in worse cases, to overloading in other parts of the grid that might cause blackouts and cascading events. Detection proceeds by linearization of the AC power flow, considering the fluctuations around steady state; such an attack can then be detected by the same approach we developed here for bus phase angles and active power.
In this section we show how this analogy can be established.
The AC power flow states that the real power and the reactive power flowing from bus $i$ to bus $j$ are, respectively,
\begin{align*}
&P_{ij} =G_{ij}V^2_i -G_{ij}V_iV_j\cos (\theta _i - \theta _j) + b_{ij}V_iV_j\sin( \theta _i - \theta _j ),\\
&Q_{ij} = b_{ij}V^2_i -b_{ij}V_iV_j\cos(\theta _i -\theta _j) -G_{ij}V_iV_j \sin(\theta _i-\theta _j),
\end{align*}
where $V_i$ and $\theta_i$ are the voltage magnitude and phase angle, respectively, at bus $i$, and
$G_{ij}$ and $b_{ij}$ are the conductance and susceptance, respectively, of line $ij$.
From ~\cite{Reza2010}, we obtain the following approximation of the AC {\it fluctuating} power flow:
\begin{align*}
&\widetilde{P}_{ij}=(b_{ij}\overline{V}_i \overline{V}_j\cos \overline{\theta}_{ij})(\widetilde{\theta}_i-\widetilde{\theta}_j),\\
&\widetilde{Q}_{ij}=(2b_{ij}\overline{V}_i -b_{ij}\overline{V}_j\cos \overline{\theta}_{ij})\widetilde{V}_i-(b_{ij}\overline{V}_i\cos\overline{\theta}_{ij})\widetilde{V}_j,
\end{align*}
where bar denotes steady-state value, tilde means fluctuation around the steady-state value, and $\overline{\theta}_{ij}=\overline{\theta}_{i}-\overline{\theta}_{j}$. These fluctuating values
due to renewables and variable loads justify the utilization of probabilistic methods in power grid problems.\\
Now assuming that the steady-state voltages satisfy $\overline{V}_i=\overline{V}_j \simeq 1$~p.u.
(per unit) and that the phase angle differences are small so that $\cos \overline{\theta}_{ij}\approx 1$, we have
\sublabon{equation}
\begin{align}
\label{P}
\widetilde{P}_{ij}=b_{ij}(\widetilde{\theta}_i-\widetilde{\theta}_j),\\
\label{Q}
\widetilde{Q}_{ij}=b_{ij}(\widetilde{V}_i-\widetilde{V}_j).
\end{align}
\sublaboff{equation}
It is clear from~\eqref{P}-\eqref{Q} that we can follow the same discussions we had about real power and voltage angles, with reactive power and voltage magnitudes.\\
It can be argued that, as a result of uncertainty, the aggregate reactive power at each bus can be approximated as a Gaussian random variable and, because of Equation \eqref{Q},
voltage fluctuations around the steady-state value can be approximated as Gaussian random variables.
Therefore, the same path of approach as for phase angles can be followed to show the GMRF property for voltage amplitudes.
Comparing \eqref{Q} with \eqref{one} makes it clear that the same matrix, i.e.,
the $B$ matrix developed in Section~\ref{sec:Bmatrix},
is playing the role of correlating the voltage amplitudes;
therefore, assuming that the statistics of the active and reactive power fluctuations are similar,
the underlying graph is the same. This can be readily seen by comparing \eqref{P} and \eqref{Q}.
\section{Simulation}
In this section, we present experimental results. We consider the IEEE 14-bus system as well as the IEEE 30-bus system. First, we feed the system with Gaussian demand and simulate the power grid.
We use MATPOWER~\cite{MATPOWER} for solving the DC power flow equations for various demands and use the resulting angle measurements as the input to the CCT algorithm. We leverage YALMIP~\cite{YALMIP} and SDPT3~\cite{sdpt3} to perform the CCT method in MATLAB.
With the right choice of parameters and threshold, and enough measurements, the Markov graph follows the grid structure. We use the edit distance metric to tune the threshold value. The edit distance between two graphs is the number of edges that exist in only one of the two graphs.
After the threshold is set, our detection algorithm works in the following manner. Each time the procedure is initiated, i.e., any PMU angle measurement or state estimator output changes, it updates the conditional covariances based on the new data, runs CCT, and checks the edit distance between the Markov graph of the phasor data and the grid structure. A discrepancy triggers the alarm. Subsequently, the system uses the anomaly metric to find all the buses under attack. The flowchart of our method is shown in Figure~\ref{fig:flowchart}.
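The edit distance used for threshold tuning is simply the size of the symmetric difference of the two edge sets; a minimal sketch (with hypothetical edge sets) in Python:

```python
def edge_edit_distance(E1, E2):
    """Number of edges present in exactly one of the two graphs."""
    norm = lambda E: {tuple(sorted(e)) for e in E}   # treat edges as undirected
    return len(norm(E1) ^ norm(E2))                  # symmetric difference

grid = {(1, 2), (2, 3), (3, 4)}        # online grid topology
learned = {(1, 2), (2, 3), (2, 4)}     # hypothetical CCT output
print(edge_edit_distance(grid, learned))  # -> 2: edges (3,4) and (2,4) disagree
```

A nonzero distance between the learned Markov graph and the grid structure is exactly the discrepancy that triggers the alarm.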
Next we introduce the stealthy deception attack on the system. The attack is designed according to the description above, i.e., it is a random vector such that $a \in \mathrm{Im}(H)$.
The attack is claimed to be successful only if performed locally on connected nodes. With this constraint in mind, the maximum number of attacked nodes is six for the IEEE 14-bus test case and eight for the IEEE 30-bus system. For the IEEE 14-bus network, we consider the cases where two to six nodes are under attack; for the IEEE 30-bus network, two to eight nodes. For each case and each network, we simulate all possible attack combinations, to make sure we have checked our detection scheme against all possible stealthy deception attacks. Each case is repeated 1000 times with different attack vector values.
When the attacker starts tampering with the data, the corrupted samples are added to the sample bin of CCT and are therefore used in calculating the sample covariance matrix. With enough corrupted samples, our algorithm can be arbitrarily close to $100\%$ successful in detecting all cases and types of attacks discussed above, for both the IEEE 14-bus and IEEE 30-bus systems.
The reason behind the trend shown in Figure~\ref{fig:drate} is that the Markov graph shifts from following the true information matrix towards reflecting some other random relationship that the attacker is producing. As the number of compromised samples increases, they gain more weight in the sample covariance; hence, the chance of a change in the Markov graph, and thus of detection, increases.
The minimum number of corrupted samples for an almost $100\%$ detection rate is 130 for the IEEE 14-bus system and 50 for the IEEE 30-bus system. Since the IEEE 30-bus system is sparser than the IEEE 14-bus system, our method performs better on the former. Yet,
for a 60~Hz system, the detection speed for the IEEE 14-bus system is quite remarkable as well.
Another interesting observation is the detection rate trend as the number of corrupted measurements increases, shown in Figure~\ref{fig:drate} for the IEEE 14-bus system. The detection rate is averaged over all possible attack scenarios. Even for a small number of corrupted measurements our method performs well: the detection rate is $90\%$ with $30$ corrupted samples.
\begin{figure}[t]
\centering
\captionsetup{type=figure}
\includegraphics[width=3.7in]{figures/14nem_improved.pdf}
\caption{\small Detection rate for IEEE 14-bus system}
\label{fig:drate}
\end{figure}
The next step is to find which nodes are attacked.
As stated earlier, we use the anomaly score metric~\cite{anomaly} to detect such nodes.
As an example, Figure~\ref{fig:point7} shows the anomaly score plot for
the case where nodes $4$, $5$ and $6$ are under attack\footnote{The numbering system employed here is that of the published IEEE 14-bus test case available at \url{https://www.ee.washington.edu/research/pstca/pf14/pg_tca14bus.htm}}. This means that a random vector is added to the measurements at these nodes. The attack is repeated $1000$ times with different values, forming an attack of size $0.7$. Attack size refers to the expected value of the Euclidean norm of the attack vector.
\begin{figure}[b]
\centering
\captionsetup{type=figure}
\begin{psfrags}
\psfrag{Node Number}{Node Number}
\includegraphics[width=6in]{figures/point7.pdf}
\end{psfrags}
\caption{\small Anomaly score for IEEE 14-bus system, nodes 4, 5, 6 are under attack; Attack size is 0.7.}
\label{fig:point7}
\end{figure}
Simulation results show that as the attack size increases, the difference between anomaly score of the nodes under the attack and the uncompromised nodes increases and, as a result, it is easier to pinpoint the attacked nodes. For example, Figure~\ref{fig:3cases} compares the cases where the attack size is $1$, $0.7$ and $0.5$ for the previous attack scenario, i.e., nodes 4, 5, 6 are under attack.
It should be noted that, in order for an attack to succeed in misleading the power system operator, the attack size must not be too small. More specifically, the attacker wants to change the system state noticeably, so that the change results in a wrong estimate in part of the grid or triggers a reaction in the system. If the value of the system state under attack is close to its real value, the system is not considered under attack, as it continues its normal operation.
It can be seen that, even for the smallest attack size that would normally not lead the operator to react, the anomaly score plot remains reliable. For example, in the considered attack scenario, the anomaly plot performs well even for an attack size of $0.3$, whereas a potentially successful attack under normal standards appears to require a larger attack size.
\begin{figure}[t]
\begin{center}
\includegraphics[width=6in]{figures/3cases.pdf}
\caption{\small Anomaly score for IEEE 14-bus system for different attack sizes. Nodes 4, 5, 6 are under attack. Attack sizes are 0.5, 0.7, 1.}
\label{fig:3cases}
\end{center}
\end{figure}
There is another important issue in setting the threshold for the anomaly score.
As can be seen in the above plots, a threshold of $0.3$ gives a detection rate of $100\%$. Increasing the threshold decreases both the detection rate and the false alarm rate; for this application, we want a design that achieves the $100\%$ detection rate with a very low false alarm rate, and this expectation is readily met. Simulation results for different attacks on the IEEE 14-bus system show that this threshold guarantees a $100\%$ detection rate with a very low false alarm rate of $3.82\times 10^{-5}$.
\section{Discussion and Conclusion}
We proposed a decentralized false data injection attack detection scheme that is capable of detecting the most recent stealthy deception attack on smart grid SCADA. To the best of our knowledge, our remedy is the first to detect this sophisticated attack comprehensively. In addition to detecting the attack state, our algorithm can pinpoint the set of attacked nodes.
As stated earlier, the computational complexity of our method is polynomial, and its decentralized nature makes the scheme suitable for very large networks with tractable complexity and run time.
Consequently, our approach can be extended to larger networks, such as the IEEE 118-bus and IEEE 300-bus systems.
We also argued that our method is capable of detecting attacks that manipulate reactive power measurements to cause inaccurate voltage amplitude data. The latter attack scenario is also important as it can lead to,
or mimic, a voltage collapse.
In conclusion, our method fortifies the power system against a large class of false data injection attacks, and our technique could become essential for current and future grid reliability, security and stability.
We introduced change detection for the graphical model of the system and showed that it can be leveraged to detect attacks on the system and its control scheme. Such a fortifying scheme is crucial for maintaining reliable control and stability. Change detection is a well-known concept in the control literature; nevertheless, change detection in graphical models, and its use for detecting attacks, is a new concept and our contribution to the field.
\subsection*{Acknowledgment}
We acknowledge detailed discussions with Majid Janzamin and thank him for valuable comments on graphical model selection and Conditional Covariance Test. The authors thank Reza Banirazi for discussions about power network operation and Phasor Measurement Units. We thank Anima Anandkumar for valuable insights into local separation property and walk summability.
\section{Introduction}
Synchronous Phasor Measurement Units (PMUs) are being massively deployed throughout the grid
and provide the dispatcher with time-stamped measurements relevant to the state of grid health, as well as the data required for controlling and monitoring the system. Currently, PMUs provide the fastest measurements of grid status.
As a result, recent monitoring and control schemes rely primarily on PMU measurements.
For example,~\cite{diao} tries to increase voltage resilience to avoid voltage collapse by using synchronized PMU measurements and decision trees.
In addition,
\cite{Giannakis,Wiesel,hezhangJ}~
rely on phase angle measurements for fault detection and localization.
Nevertheless, it is not economically feasible to place PMUs at every node, so state estimators will still be used at some nodes in the system. PMUs are prone to false data injection attacks and, even setting that aside, the part of the grid relying on state estimators is a back door for such attacks. Therefore, the aforementioned methods can be deluded by false data injection. Thus, it is crucial to have a mechanism for fast and accurate discovery of malicious tampering, both for preventing attacks that may lead to blackouts and for the routine monitoring and control tasks of the smart grid, including state estimation and optimal power flow. It should be noted that our immunization scheme does not depend on the use of PMUs in the network and is applicable to both the current and the future grid.
\subsection{Summary of Results and Related Work}
We have designed a decentralized false data injection attack detection mechanism that utilizes the Markov graph of bus phase angles. We use the Conditional Covariance Test (CCT)~\cite{AnimaW} to learn the structure of the smart grid.
We show that under normal circumstances, and because of the grid structure, the Markov graph of voltage angles is determined by the power grid graph; therefore, a discrepancy between the calculated Markov graph and the learned structure triggers the alarm.
\begin{wrapfigure}{r}{.3\textwidth}
\centering
\includegraphics[width=2in]{figures/flowchart_improved.pdf}
\caption{\small Flowchart of our detection algorithm}
\label{fig:flowchart}
\end{wrapfigure}
Because of the connection between Markov graph of bus angle measurements and grid topology, our method can be implemented in a decentralized manner, i.e. at each sub-network. Currently, sub-network topology is available online and global network structure is available hourly~\cite{Giannakis}.
Not only does decentralization increase speed and bring us closer to online detection, it also increases accuracy and stability by avoiding the communication delays and synchronization problems of sending measurement data between distant locations.
It also noticeably decreases the amount of exchanged data, addressing privacy concerns as much as possible.\\
We show that our method can detect the most recently designed attack on the power grid that can deceive the state estimator~\cite{Teixreia2011}. The attacker is equipped with vital data and assumes knowledge of the bus-branch model of the grid. To the best of our knowledge, our method is the first to detect such a sophisticated attack comprehensively and efficiently for any number of attacked nodes. It should be noted that our method can detect both that the system is under attack and the set of nodes under attack. The flowchart is shown in Figure~\ref{fig:flowchart}.
Although the authors of~\cite{berkeley} suggest an algorithm for PMU placement such that this attack becomes observable, they only provide an algorithm for the 2-node attack and empirical approaches for 3-, 4- and 5-node attacks. According to~\cite{berkeley}, for cases where more than two nodes are under attack, the complexity of their approach is said to be \textit{``disheartening.''} Considering that finding the required number of PMUs is NP-hard and that~\cite{berkeley} gives an upper bound and uses a heuristic method for PMU placement, we note that our algorithm has no hardware requirements, that its complexity does not depend on the number of nodes under attack, and that it works for any number of attacked nodes. It is worth mentioning that even in the original paper presenting the attack, for a relatively small network (IEEE-30), seven measurements from five nodes are manipulated; so the 2-node attack does not seem to be the most probable one.\\
There has been another line of work on computing a ``security index'' for different nodes in order to find the set of nodes most vulnerable to false data injection attacks~\cite{Sandberg}. Although these attempts are valuable, our method differs greatly from such perspectives, as those methods neither detect the attack when it happens nor find the set of attacked nodes.\\
A dependency graph approach is used in~\cite{hezhangJ} for topology fault detection in the grid. However, since attacks on state estimators are not considered, such methods can be deceived by false data injection. Furthermore,~\cite{hezhangJ} uses a constrained maximum likelihood optimization for finding the information matrix, whereas here an advanced structure learning method is used that captures the power grid structure better, since in the power grid the edges are distributed over the network. This is discussed in Section~\ref{sec:CCT}.\\
In addition, we show that our method can detect the case where the attacker manipulates reactive power data to lead the state estimator to wrong voltage estimates. Such an attack can be designed to fake a voltage collapse or to trick the operator into causing one. Detection proceeds by linearizing the AC power flow and considering the fluctuations around steady state; it then readily follows that such an attack can be detected with the same approach we use here for bus phase angles and active power.\\
\paragraph{Paper Outline: } The paper is organized as follows. In Section~2, we show that bus phase angles form a Gaussian Markov Random Field (GMRF) and argue that their Markov graph is determined by the grid structure.
In Section~3, we explain the Conditional Covariance Test (CCT)~\cite{AnimaW}, which we use to obtain the Markov graph of bus phase angles, and discuss how we leverage it to perform optimally for the power grid.
The stealthy deception attack on the state estimator is introduced in Section~4. We elaborate on our detection scheme in Section~5. Simulations are presented in Section~6. We elaborate on the reactive power counterpart in Section~7, and Section~8 concludes the paper.
\section{Preliminaries and problem formulation}
\subsection{Preliminaries}
A Gaussian Markov Random Field (GMRF) is a family of jointly Gaussian distributions, which factor
according to a given graph. Given a graph $G = (V,E)$, with $V = \{1, . . . , p\}$, consider
a vector of Gaussian random variables $X = [X_1,X_2, . . . ,X_p]^T$ , where each node $i\in V$ is
associated with a scalar Gaussian random variable $X_i$. A Gaussian Markov Random Field
on $G$ has a probability density function (pdf) that can be parametrized as
\begin{align}
\label{user prob_distr}
f_X(x)\propto \exp\!\big[-\tfrac{1}{2} x^TJx+h^Tx\big],
\end{align}
where $J$ is a positive-definite symmetric matrix whose sparsity pattern corresponds to that
of the graph $G$. More precisely,
\begin{align*}
J(i,j)=0 \Longleftrightarrow (i,j)\notin E.
\end{align*}
The matrix $J = \Sigma ^ {-1}$ is known as the \textit{potential} or \textit{information} matrix, the non-zero entries $J(i, j)$
as the edge potentials, and the vector $h$ as the vertex potential vector. In general, the graph $G=(V,E)$ is called the Markov graph (graphical model) underlying the joint probability distribution $f_X(x)$,
where node set $V$ represents random variable set $\{X_i\}$ and the edge set $E$ is defined in order to satisfy local Markov property.
For a Markov Random Field, local Markov property states that $X_i \perp {X}_{-\lbrace i,N(i)\rbrace}|X_{N(i)}$,
where $X_{N(i)}$ represents all random variables associated with the neighbors of $i$ in graph $G$
and ${X}_{-\lbrace i,N(i)\rbrace}$ denotes all variables except for $X_i$ and $ X_{N(i)}$.
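The correspondence between the sparsity of $J$ and conditional independence can be illustrated numerically; the 3-node chain and its $J$ matrix below are our own small example, not from the networks studied later:

```python
import numpy as np

# For a 3-node chain 1-2-3, J(1,3) = 0, so X1 and X3 should be conditionally
# independent given X2, i.e., their conditional covariance vanishes.

J = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])   # positive definite, chain sparsity
Sigma = np.linalg.inv(J)

# Conditional covariance of (X1, X3) given X2 via the Schur complement:
idx, S = [0, 2], [1]
cond = (Sigma[np.ix_(idx, idx)]
        - Sigma[np.ix_(idx, S)]
        @ np.linalg.inv(Sigma[np.ix_(S, S)])
        @ Sigma[np.ix_(S, idx)])
print(np.isclose(cond[0, 1], 0.0))   # True: X1 independent of X3 given X2
```

The conditional covariance of $(X_1,X_3)$ given $X_2$ is exactly the inverse of the corresponding $2\times 2$ sub-block of $J$, which is diagonal here.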
\subsection{Bus phase angles GMRF} \label{sec:Bmatrix}
We now apply the preceding to bus phase angles.
The DC power flow model~\cite{state}
is often used for the analysis of power systems in normal operation. When the system is stable, the phase angle differences are small, so $\sin(\theta_i-\theta_j) \approx \theta_i-\theta_j$.
By the DC power flow model, system state $X$ can be described using bus phase angles.
The active power flow on the transmission line connecting bus $i$ to bus $j$ is given by
\begin{align}
\label{one}
P_{ij} = b_{ij}(X_i - X_j),
\end{align}
where $X_i$ and $X_j$ denote the phasor angles at bus $i$ and
$j$ respectively, and $b_{ij}$ denotes the inverse of the line inductive reactance.
The power injected at bus $i$ equals the algebraic sum of the powers flowing away from bus $i$:
\begin{align}
\label{two}
P_{i} =\sum_{j\neq i}P_{ij}=\sum_{j\neq i} b_{ij}(X_i - X_j).
\end{align}
When buses $i$ and $j$ are not connected, $b_{ij} = 0$. It thus follows that the phasor angle at bus $i$ can be represented as
\begin{align}
\label{fund}
X_{i} =\sum_{j\neq i}\frac{b_{ij}}{\sum_{k\neq i}b_{ik}}\, X_j+\frac{1}{\sum_{k\neq i} b_{ik}}\, P_i.
\end{align}
Equation~\eqref{two} can also be rewritten in matrix form as
\begin{align}
\label{PBX}
P=BX,
\end{align}
where $P=[P_1,P_2,...,P_p]$ is the vector of injected active powers, $X=[X_1,X_2,...,X_p]$ is the vector of bus phase angles and
\begin{align}
\label{Bdef}
B=\left\lbrace \begin{array}{lcl}
-b_{ij} & \mbox{if} & i\neq j, \\
\sum_{j\neq i} b_{ij} & \mbox{if} &i=j.
\end{array}
\right.
\end{align}
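Assembling $B$ from a branch list can be sketched as follows; the 4-bus network and susceptance values are illustrative:

```python
import numpy as np

# Building the B matrix of Eq. (Bdef) from a branch list: b_ij is the inverse
# line reactance, and the result is a weighted graph Laplacian.

def build_B(p, branches):
    """branches: list of (i, j, b_ij) with 0-based bus indices."""
    B = np.zeros((p, p))
    for i, j, b in branches:
        B[i, j] -= b          # off-diagonal: -b_ij
        B[j, i] -= b
        B[i, i] += b          # diagonal: sum of incident susceptances
        B[j, j] += b
    return B

branches = [(0, 1, 1.5), (1, 2, 2.0), (2, 3, 1.0), (0, 3, 0.5)]
B = build_B(4, branches)

print(np.allclose(B.sum(axis=1), 0))    # True: Laplacian row sums vanish
X = np.array([0.0, 0.05, 0.02, -0.01])  # bus phase angles (rad)
P = B @ X                               # injected powers, P = BX
print(np.isclose(P.sum(), 0.0))         # True: injections balance
```

The vanishing row sums reflect the Laplacian structure of $B$, and the zero total injection is the DC power balance implied by $P=BX$.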
\paragraph{Remark:} Note that, because of linearity of the DC power flow model, the above equations are valid for both the phase angles $X$ and powers injected $P$ and for {\it fluctuations} of phase angles $X$ and powers injected $P$ around their steady-state values. Specifically, if we let $\widetilde{P}$ refer to the vector of active power fluctuations and $\widetilde{X}$ to the vector of phase angles fluctuations, we have $\widetilde{P}=B\widetilde{X}$. In the sequel, the focus is on the DC power flow equations. Nevertheless, our analysis remains valid if we consider {\it fluctuations} around the steady-state values.
Because of load uncertainty, an injected power can be modeled as a random variable~\cite{Luettgen1993} and since an injected power models the superposition of many independent factors (e.g., loads, fluctuations of power outputs of wind turbines, etc.), it can be modeled as a Gaussian random variable. Thus, the linear relationship in \eqref{PBX} implies that the difference of phasor angles across
a line could be approximated by a Gaussian random variable truncated within $[0, 2\pi)$. Considering the fixed phasor at the slack bus,
it can be assumed that under steady-state conditions, phasor angle measurements are Gaussian random variables~\cite{hezhangJ}.
The next step is to determine whether the $X_i$'s satisfy the local Markov property and, in the affirmative, to discover the neighbor set corresponding to each node. We do this by analyzing Equation~\eqref{fund}. If only the first term were present, we would conclude that the set of nodes electrically connected to node $i$ satisfies the local Markov property, but the second term makes a difference.
Below we argue that an analysis of the second term of \eqref{fund} shows that this term causes some second-neighbors of $X_i$ to have a nonzero term in the $J$ matrix. In addition, for nodes that are more than two hop distance apart, $J_{ij}=0$. Therefore, as opposed to the claim in~\cite{hezhangJ}, a second-neighbor relationship {\it does exist} in the $J$-matrix. \\
As stated earlier, the power injections at different buses have Gaussian distributions. We assume that they are independent and, without loss of generality (after normalization), zero mean with unit variance. Therefore, the probability density function of the $P$ vector is
\begin{align*}
f_P(P) \propto e^{-\frac{1}{2}P^T P}
\end{align*}
Since $P=BX$, we have
\begin{align*}
f_X(X) \propto e^{-\frac{1}{2}X^T B^T B X}
\end{align*}
Recalling the definition of the probability density function of jointly Gaussian random variables in~\eqref{user prob_distr}, we get $J=B^T B$. Let $d(i,j)$ denote the hop distance between nodes $i$ and $j$. By the definition of the $B$ matrix, this clearly leads to some nonzero entries $J_{ij}$ with $d(i,j)=2$. In addition, we state that
\begin{proposition}
Let $d(i,j)$ denote the hop distance between nodes $i$, $j$ on the power grid graph $G$.
\\Assume that the fluctuations of the powers injected at the nodes are Gaussian and mutually independent. Then
\begin{align*}
J_{ij}=0, \quad \quad \forall ~ d(i,j) >2.
\end{align*}
\end{proposition}
\begin{proof}
We argue by contradiction.
Assume $J_{ij} \neq 0$ for some pair with $d(i,j) >2$. Since $J_{ij}=\sum_k B_{ik}B_{jk}$, there exists $k$ such that $B_{ik}\neq 0$ and $B_{jk} \neq 0$. By~\eqref{Bdef}, $B_{ik} \ne 0$ implies $d(i,k)\leq 1$ (with $d(i,k)=0$ for $k=i$), and similarly $d(j,k)\leq 1$. The triangle inequality then implies that
$d(i,j) \leq d(i,k)+d(k,j)\leq 1+1=2$,
which contradicts the assumption $d(i,j) >2$.
\end{proof}
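The proposition can be checked numerically on a small example; here a 5-bus path network with unit susceptances (an illustrative network, not one of the IEEE test cases):

```python
import numpy as np

# Numerical check of J = B^T B on a 5-bus path 0-1-2-3-4: second neighbors
# get nonzero entries, while nodes more than two hops apart do not.

p = 5
B = np.zeros((p, p))
for i in range(p - 1):                  # unit susceptance on each line
    B[i, i] += 1.0
    B[i + 1, i + 1] += 1.0
    B[i, i + 1] -= 1.0
    B[i + 1, i] -= 1.0

J = B.T @ B
print(J[0, 2] != 0)                     # True: d(0,2) = 2, entry appears
print(np.isclose(J[0, 3], 0)
      and np.isclose(J[0, 4], 0))       # True: d > 2 entries vanish
```

The nonzero $J_{02}$ entry is exactly the second-neighbor term discussed above, while $J_{03}=J_{04}=0$ as the proposition predicts.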
It was shown in~\cite{sedghi:bookchapter} that for some graphs the second-neighbor terms are smaller than the terms corresponding to the immediate electrical neighbors of $X_i$. More precisely, it was shown that for lattice-structured grids this approximation falls under the generic fact of the tapering off of Fourier coefficients~\cite{sedghi:bookchapter}. Therefore, we can approximate each neighborhood by the immediate electrical neighbors only. We could also proceed with the exact relationships; for simplicity, we opt for the first-neighbor analysis. We explain shortly why CCT best suits this approximation.\\
Note that our detection method relies on the graphical model of the variables: it is based on the fact that the Markov graph of bus phase angles changes under an attack. CCT is tuned with correct data, and we prove that, in case of an attack, the Markov graph of the compromised data does not follow the Markov graph of the correct data. Hence, we can tune CCT with either the exact relationships or the approximate Markov graph; in both cases, the output under attack differs from the output tuned with correct data. Therefore, the method works for both approximate and exact neighborhoods.
\section{Structure Learning}
In the context of graphical models, model selection means finding the exact underlying Markov graph of a group of random variables based on samples of those variables.
There are two main classes of methods for learning the structure of the underlying graphical model: convex and non-convex. $\ell_1$-regularized maximum likelihood estimators are the main class of convex methods~\cite{Friedman&etal:07,Ravikumar&etal:08Arxiv,JanzaminAnandkumar:CovDecomp2012ArXiv}. In these methods, the inverse covariance matrix is penalized with a convex $\ell_1$-regularizer in order to encourage sparsity in the estimated Markov graph. The other class comprises non-convex or greedy methods~\cite{AnimaW}.
Since our problem involves a GMRF, it is natural to exploit one of these structure learning methods.
\subsection{Conditional Covariance Test} \label{sec:CCT}
In order to learn the structure of the power grid, we utilize the recent Gaussian graphical model selection method called the {\it Conditional Covariance Test (CCT)}~\cite{AnimaW}, which estimates the structure of the underlying graphical model given i.i.d. samples of the random variables.
The method is shown in Algorithm~\ref{CCT}.
\begin{algorithm}[t]
\caption{$CCT(x^n; \xi_{n,p},\eta)$ for structure learning using samples $x^n$~\cite{AnimaW}}
\label{CCT}
\begin{algorithmic}
\State \textbf{Initialize} $\widehat{G}^n_p=(V,\emptyset)$
\For{each $(i,j) \in V^2$}
\If{$\min_{\substack{S \subset V\setminus \{i,j\}\\ |S| \leq \eta}} \widehat{\Sigma}(i, j|S) > \xi_{n,p}$}
\State add $(i,j)$ to the edge set of $\widehat{G}^n_p$
\EndIf
\EndFor
\State \textbf{Output}: $\widehat{G}^n_p$
\end{algorithmic}
\end{algorithm}
In Algorithm~\ref{CCT}, the output is an edge set corresponding to graph $G$ given
$n$ i.i.d. samples $x^n$, each of which has $p$ variables, a threshold $\xi_{n,p}$ (that depends on both $p$ and $n$) and a constant $\eta \in \mathbb{N}$, which is related to the local vertex separation property (described later). In our case, each bus phase angle represents one of the $p$ variables.\\
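A minimal Python sketch of Algorithm~\ref{CCT}, using an exhaustive search over separators of size at most $\eta$; the threshold and the chain-graph test data below are illustrative:

```python
import numpy as np
from itertools import combinations

def cond_cov(Sig, i, j, S):
    """Empirical conditional covariance Sigma(i,j|S) via the Schur complement."""
    if not S:
        return Sig[i, j]
    S = list(S)
    return Sig[i, j] - Sig[i, S] @ np.linalg.inv(Sig[np.ix_(S, S)]) @ Sig[S, j]

def cct(samples, xi, eta):
    """CCT sketch. samples: n x p array of i.i.d. observations; returns edges."""
    Sig = np.cov(samples, rowvar=False)
    p = Sig.shape[0]
    edges = set()
    for i, j in combinations(range(p), 2):
        rest = [v for v in range(p) if v not in (i, j)]
        # minimize |conditional covariance| over separators S, |S| <= eta
        m = min(abs(cond_cov(Sig, i, j, S))
                for k in range(eta + 1) for S in combinations(rest, k))
        if m > xi:
            edges.add((i, j))
    return edges

# Chain 0-1-2: CCT should recover exactly the two chain edges.
rng = np.random.default_rng(0)
J = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
x = rng.multivariate_normal(np.zeros(3), np.linalg.inv(J), size=5000)
print(cct(x, xi=0.05, eta=1))   # expected: {(0, 1), (1, 2)}
```

For the chain, conditioning on the middle node drives the empirical conditional covariance of the non-edge $(0,2)$ near zero, while the true edges stay well above the threshold.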
The sufficient condition for the output of CCT to be structurally consistent with the underlying Markov graph is that the graph satisfy the local separation property and walk-summability~\cite{AnimaW}. An ensemble of graphs has the $(\eta,\gamma)$-local separation property if, for any $(i,j) \notin E(G)$, the number of paths between $i$ and $j$ of length at most $\gamma$ does not exceed $\eta$. A Gaussian model is said to be $\alpha$-walk summable if
$\|\bar{\textbf{R}}\| \leq \alpha < 1$,
where $\bar{\textbf{R}}=[|r_{ij}|]$ and $\|\cdot\|$ denotes the spectral or 2-norm of a matrix, which for symmetric matrices is the maximum absolute eigenvalue~\cite{AnimaW}. $\textbf{R}$ is the matrix of partial correlation coefficients; it is zero on the diagonal, and for the off-diagonal entries we have
\begin{align}
\label{partialr}
r_{ij}\triangleq & \frac{\Sigma(i,j|V \setminus{ \lbrace i,j \rbrace })}{\sqrt{\Sigma(i,i|{V\setminus { \lbrace i,j \rbrace}})\Sigma(j,j|{V \setminus { \lbrace i,j \rbrace}})}}\nonumber \\
=&-\frac{J(i,j)}{\sqrt{J(i,i)J(j,j)}}.
\end{align}
$r_{ij}$, the \textit{partial correlation coefficient} between variables $X_i$ and $X_j$ for $i \neq j$, measures their conditional covariance given all other variables~\cite{Lauritzen:book}.\\
Whether we use the exact or the approximate neighborhood relationship, the Markov graph of bus phase angles is an example of a bounded local path graph, which satisfies the local separation property. We also checked the analyzed networks for the walk-summability condition. As shown in \eqref{partialr} and the definition of $\alpha$-walk summability, this property depends only on the $J$ matrix and thus on the topology of the grid; it does not depend on the operating point of the grid. \\
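The walk-summability check can be computed directly from an information matrix; the $3\times 3$ chain $J$ below is a hypothetical example:

```python
import numpy as np

# Check alpha-walk-summability: form the partial-correlation magnitudes
# |r_ij| = |J_ij| / sqrt(J_ii J_jj) and test the spectral norm against 1.

def walk_summable(J):
    d = np.sqrt(np.diag(J))
    R = -J / np.outer(d, d)          # r_ij = -J_ij / sqrt(J_ii J_jj)
    np.fill_diagonal(R, 0.0)         # R is zero on the diagonal
    alpha = np.linalg.norm(np.abs(R), 2)   # spectral norm of [|r_ij|]
    return alpha, alpha < 1

J = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
alpha, ok = walk_summable(J)
print(round(alpha, 3), ok)           # 0.707 True
```

For this chain the partial correlations are all $0.5$ in magnitude, giving $\alpha = \sqrt{2}/2 \approx 0.707 < 1$, so the model is walk summable.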
It is shown in~\cite{AnimaW} that under walk-summability the effect of faraway nodes on the covariance decays exponentially with distance, and so does the error in approximating the covariance by the local neighborhood. So, with correct tuning of the threshold $\xi_{n,p}$ and enough samples, we expect the output of CCT to follow the grid structure.\\
It is worth mentioning that when we use CCT for structure learning of phasor data, our method is robust against measurement noise. The reason is that CCT analyzes the conditional covariances of its input data; measurement noise is white and uncorrelated with the data, so it does not change the conditional correlations between the phasor data. Thus, our method is immune to measurement noise.\\
CCT distributes the edges fairly uniformly across the nodes, while the $\ell_1$ method tends to cluster all the edges together between the ``dominant'' variables, leading to a densely connected component and several isolated points~\cite{AnimaW}. Therefore, CCT is more suitable for detecting the structure of the power grid, where the edges are distributed over the network. It should be noted that the computational complexity of CCT is $O(p^{\eta+2})$, which is efficient for small $\eta$~\cite{AnimaW}; $\eta$ is the parameter of the local separation property described above. \\
The sample complexity associated with CCT method is $n=\Omega(J_{min}^{-2}\log{p})$, where $J_{min}$ is the minimum absolute edge potential in the model~\cite{AnimaW}.
\subsection{Decentralization}
We want to find the Markov graph of the bus phasor measurements. The connection we have established between electrical connectivity and correlation lets us decentralize our method to a great extent.
We consider the power network in its normal operating condition. It consists of different areas connected via border nodes, so we decompose the network into these sub-areas, and our method can be performed locally in each sub-network.
The sub-network connection graph is available online from the protection system at each sub-network and can readily be compared with the bus phase angle Markov graph.
Only for border nodes do we need to consider their out-of-area neighbors as well. This can be done either by solving the power flow equations for the border link or by receiving measurements from neighboring sub-networks.
Therefore, we run CCT on each sub-graph to obtain its Markov graph and then compare it with the online network graph information to detect false data injection attacks.\\
This decentralization reduces complexity and increases speed. Our decentralized method substitutes for considering all measurements throughout the power grid, which would require a huge amount of data exchange and computation. In addition to having fewer nodes to analyze, decentralization leads to a smaller $\eta$ and greatly reduced computational complexity, making our method executable on very large networks.
Furthermore, since structure learning is performed locally, the non-linear faraway relationships intrinsic to power systems do not play a role and our assumptions remain valid.
Moreover, utility companies are reluctant to expose their information for reasons of economic competition, and there have been several attempts to address this~\cite{Sankar}. It is therefore desirable to reduce the amount of data exchanged between different areas, a requirement our method adequately fulfills.\\
\subsection{Online calculations}
For fast monitoring of the power grid, we need an online algorithm.
As we show in this section, our algorithm can be implemented as an iterative method that processes new data without reprocessing earlier data.
Here we derive an iterative formulation for the sample covariance matrix and then use it to calculate the conditional covariances via
\begin{align*}
\widehat{\Sigma}(i, j|S) := \widehat{\Sigma}(i, j)- \widehat{\Sigma}(i, S)\widehat{\Sigma}^{-1}
(S, S)\widehat{\Sigma}(S, j).
\end{align*}
As we know, in general
\begin{align*}
\Sigma= E[(X-\mu)(X-\mu)^T]=E[XX^T]-\mu\mu^T.
\end{align*}
Let $\widehat{\Sigma}^{(n)}(X)$ denote the sample covariance matrix for a vector $X$ of $p$ elements from $n$ samples and let $\widehat{\mu}^{(n)}(X)$ be the corresponding sample mean.
In addition, let $X^{(i)}$ be the $i$th sample of our vector. Then we have
\begin{align}
\label{iter}
\widehat{\Sigma}^{(n)}(X)=\frac{1}{n-1}\Big[\sum_{i=1}^n X^{(i)}{X^{(i)}}^T-n\,\widehat{\mu}^{(n)}{\widehat{\mu}^{(n)}}^T\Big].
\end{align}
Therefore,
\begin{align}
\label{update}
\widehat{\Sigma}^{(n+1)}(X)=\frac{1}{n}\Big[\sum_{i=1}^n X^{(i)}{X^{(i)}}^T+X^{(n+1)}{X^{(n+1)}}^T-(n+1)\,\widehat{\mu}^{(n+1)}{\widehat{\mu}^{(n+1)}}^T\Big],
\end{align}
where
\begin{align}
\label{samplemean}
{\widehat{\mu}^{(n+1)}} =\frac{1}{n+1}[n{\widehat{\mu}^{(n)}}+X^{(n+1)}].
\end{align}
By keeping the first term in~\eqref{iter} and the sample mean~\eqref{samplemean}, our updating rule is~\eqref{update}. Thus, we revise the sample covariance as soon as any bus phasor measurement changes and use it to obtain the conditional covariances needed for CCT.
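The iterative rule can be verified against the batch estimate; the sketch below keeps the running sum of outer products and the sample mean (the data are synthetic, and the mean term carries the sample-count factor of the unbiased estimator):

```python
import numpy as np

# Verifying the iterative covariance update: maintaining the running sum of
# outer products and the sample mean reproduces the batch sample covariance.

rng = np.random.default_rng(1)
p, n = 3, 200
data = rng.standard_normal((n, p))

mu = np.zeros(p)
outer_sum = np.zeros((p, p))
for k, x in enumerate(data, start=1):
    mu = ((k - 1) * mu + x) / k                  # running sample mean
    outer_sum += np.outer(x, x)                  # running sum of X X^T
    if k > 1:
        Sigma_iter = (outer_sum - k * np.outer(mu, mu)) / (k - 1)

Sigma_batch = np.cov(data, rowvar=False)         # unbiased batch estimate
print(np.allclose(Sigma_iter, Sigma_batch))      # True
```

Each update touches only the new sample, so the per-step cost is $O(p^2)$ regardless of how many samples have been processed.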
It goes without saying that if the system demand and structure do not change and the system is not subject to a false data injection attack, the voltage angles at the nodes remain the same and there is no need to run the algorithm.
\section{Stealthy Deception Attack}
The most recent, and most dreaded, false data injection attack on the power grid was introduced in~\cite{Teixreia2011}. It assumes knowledge of the bus-branch model of the system and is capable of deceiving the state estimator, damaging power network observability, control, monitoring, demand response and pricing schemes~\cite{kosut2010malicious}.
For a $p$-bus electric power network, the $l=2p-1$ dimensional
state vector $x$ is $(\theta^T,V^T)^T$, where $V=(V_1,...,V_p)$ is the vector of voltage bus magnitudes and $\theta=(\theta_2,...,\theta_p)$ the vector of phase angles. It is assumed that the nonlinear measurement model for state estimation is defined by $z=h(x)+\epsilon$,
where $h(.)$ is the measurement function, $z=(z_P,z_Q)$ is the measurement vector consisting of active and reactive power flow measurements and $\epsilon$ is the measurement error. $H(x^k):=\frac{dh(x)}{dx}|_{x=x^k}$ denotes the Jacobian matrix of the measurement model $h(x)$ at $x^k$.\\
The goal of the stealthy deception attacker is to compromise the measurements available to the State Estimator (SE) such that
$z^a=z+a$,
where $z^a$ is the corrupted measurement and $a$ is the attack vector. Vector $a$ is designed such that the SE algorithm converges and the attack $a$ is undetected by the Bad Data Detection scheme.
Then it is shown that, under the DC power flow model,
such an attack can only be performed locally with $a \in \mathrm{Im}(H)$,
where $H=H_{P\theta}$ is the matrix connecting the vector of bus injected powers to the vector of bus phase angles, i.e., $P=H_{P\theta} \theta$. The attack is illustrated in Figure~\ref{fig:SE}.
\begin{figure}[t]
\centering
\captionsetup{type=figure}
\includegraphics[width=3.5in]{figures/whole.pdf}
\caption{\small Power grid under a cyber attack}
\label{fig:SE}
\end{figure}
\section{Stealthy Deception Attack Detection}
As mentioned earlier, the fundamental idea behind our detection scheme is structure learning.
Our learner, the CCT method, is first tuned with correct data representing the data structure, which corresponds to the grid graph. Therefore, any attack that changes the structure alters the output of CCT, and this triggers the alarm.
Let us consider the aforementioned attack more specifically. As we are considering the DC power flow model
and all voltage magnitudes are considered to be 1 p.u.,
the state vector introduced in~\cite{Teixreia2011} reduces to the vector of voltage angles, $X$. Since
$a \in \mathrm{Im}(H)$, $\exists d$ such that $a=Hd$ and
\begin{align*}
z^a=z+a=H(X+d)=HX^a,
\end{align*}
where $X^a$ represents the vector of angles when the system is under attack,
$z^a$ is the attacked measurement vector and $X$ is the actual phasor angle vector. Considering~\eqref{two}, we have $H_{ij}=-b_{ij}$ for $i \neq j$ and $H_{ii}=\sum_{i\neq j}b_{ij}$, where $b_{ij}$ denotes the inverse of the line inductive reactance. We have
\begin{align}
\label{Xa}
X^a=X+d=H^{-1}P+H^{-1}a=H^{-1}(P+a).
\end{align}
As the definition of the $H$ matrix shows, it has rank $p-1$. Therefore, $H^{-1}$ above denotes the pseudo-inverse of $H$. Another way to address this singularity is to remove the row and column associated with the slack bus. \\
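As an illustration of this rank deficiency, for a 3-bus network with line parameters $b_{12}$, $b_{13}$, $b_{23}$, the definition above gives
\begin{align*}
H=\begin{pmatrix}
b_{12}+b_{13} & -b_{12} & -b_{13}\\
-b_{12} & b_{12}+b_{23} & -b_{23}\\
-b_{13} & -b_{23} & b_{13}+b_{23}
\end{pmatrix},
\end{align*}
whose rows sum to zero; hence $\mathrm{rank}(H)=p-1=2$.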
From~\eqref{Xa},
\begin{align*}
\Sigma{(X^a,X^a)} &= H^{-1}\,\Sigma(P+a,P+a)\,{H^{-1}}^T \\
&= H^{-1}\left[\Sigma(P,P)+\Sigma(a,a)\right]{H^{-1}}^T.
\end{align*}
The above calculation assumes that the attack vector is independent of the current values in the network, as in the definition of the attack~\cite{Teixreia2011}. \\
An attack is considered successful if it causes the operator to make a wrong decision. For that purpose, the attacker would not insert just one wrong sample. Moreover, a constant attack vector does not trigger any reaction, which rules out constant attacks. Therefore, the attacker is expected to insert random vectors $a$ over a number of samples. Thus $\Sigma(a,a) \neq 0$ and
\begin{align}
\label{attack_d}
\Sigma{(X^a,X^a)} \neq \Sigma(X,X).
\end{align}
It is not difficult to show that \eqref{attack_d} still holds if the assumption of independence between the attack vector and the injected power is removed.\\
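Indeed, expanding the covariance without the independence assumption gives
\begin{align*}
\Sigma{(X^a,X^a)}-\Sigma{(X,X)}=H^{-1}\left[\Sigma(P,a)+\Sigma(a,P)+\Sigma(a,a)\right]{H^{-1}}^T,
\end{align*}
and for a nonconstant random attack vector the cross terms do not cancel $\Sigma(a,a)$ except in degenerate cases, so the difference remains nonzero.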
Considering~\eqref{attack_d} and the uniqueness of the matrix inverse, in case of an attack the new $\Sigma^{-1}$ will differ from the network's $J$ matrix under normal conditions, i.e.,
$\Sigma^{-1}{(X^a,X^a)} \neq J_{normal}$,
and as a result
the output of the CCT method will not follow the grid structure.
We use this mismatch to trigger the alarm. It should be noted that acceptable load changes do not change the Markov graph and therefore do not lead to false alarms: such changes do not invalidate the DC power flow assumptions, so the Markov graph continues to follow the defined information matrix.\\
After the alarm is triggered, the next step is to find which nodes are under attack.
\subsection{Detecting the Set of Attacked Nodes}
We use the \textit{correlation anomaly} metric~\cite{anomaly} to find the attacked nodes. This metric quantifies the contribution of each random variable to the difference between the probability densities in the two cases while accounting for the sparsity of the structure. The Kullback-Leibler (KL) divergence is used as the measure of difference. As soon as an attack is detected, we use the attacked information matrix and the information matrix corresponding to the current topology of the grid to compute the anomaly score for each node. The nodes with the highest anomaly scores are declared as the nodes under attack. We investigate the implementation details in the next section.
It should be noted that the attack is performed locally and, because of the local Markov property, we are certain that no node from other sub-graphs contributes to the attack.
We should emphasize that the considered attack assumes the knowledge of the system's bus-branch model.
Therefore, the attacker is equipped with very critical information. Yet, we can mitigate such an ``intelligent'' attack.
\subsection{Reactive power versus voltage amplitude} \label{sec:AC}
As mentioned before, with similar calculations, we can consider the case where the attacker manipulates reactive power data to lead the state estimator to wrong estimates of the voltage. Such an attack can be designed to fake a voltage collapse or to trick the operator into changing the normal state of the grid. For example, if the attacker fakes a decreasing trend in the voltage magnitude of a part of the grid, the operator will send more reactive power to that part, causing a voltage overload/underload. At this point, the protection system disconnects the corresponding lines. This can lead to outages in some areas and, in worse cases, to overloading in other parts of the grid, which may cause blackouts and cascading events. Detection can be done by linearizing the AC power flow and considering the fluctuations around steady state. Following our algorithm, such an attack can then be detected with the same approach we developed for bus phase angles and active power.
In this section we show how this analogy is established.
The AC power flow states that the real power and the reactive power flowing from bus $i$ to bus $j$ are, respectively,
\begin{align*}
&P_{ij} =G_{ij}V^2_i -G_{ij}V_iV_j\cos (\theta _i - \theta _j) + b_{ij}V_iV_j\sin( \theta _i - \theta _j ),\\
&Q_{ij} = b_{ij}V^2_i -b_{ij}V_iV_j\cos(\theta _i -\theta _j) -G_{ij}V_iV_j \sin(\theta _i-\theta _j),
\end{align*}
where $V_i$ and $\theta_i$ are the voltage magnitude and phase angle, resp., at bus \#i and
$G_{ij}$ and $b_{ij}$ are the conductance and susceptance, resp., of line $ij$.
From ~\cite{Reza2010}, we obtain the following approximation of the AC {\it fluctuating} power flow:
\begin{align*}
&\widetilde{P}_{ij}=(b_{ij}\overline{V}_i \overline{V}_j\cos \overline{\theta}_{ij})(\widetilde{\theta}_i-\widetilde{\theta}_j),\\
&\widetilde{Q}_{ij}=(2b_{ij}\overline{V}_i -b_{ij}\overline{V}_j\cos \overline{\theta}_{ij})\tilde{V}_i-(b_{ij}\overline{V}_i\cos\overline{\theta}_{ij})\widetilde {V}_j,
\end{align*}
where bar denotes steady-state value, tilde means fluctuation around the steady-state value, and $\overline{\theta}_{ij}=\overline{\theta}_{i}-\overline{\theta}_{j}$. These fluctuating values
due to renewables and variable loads justify the utilization of probabilistic methods in power grid problems.\\
Now assuming that the steady-state voltage magnitudes satisfy $\overline{V}_i=\overline{V}_j \simeq 1$~p.u.
(per unit), and that the angle differences are small enough that $\cos \overline{\theta}_{ij}\simeq 1$, we have
\begin{align}
\sublabon{equation}
\label{P}
\widetilde{P}_{ij}=b_{ij}(\widetilde{\theta}_i-\widetilde{\theta}_j),\\
\label{Q}
\widetilde{Q}_{ij}=b_{ij}(\widetilde{V}_i-\widetilde{V}_j).
\end{align}
\sublaboff{equation}
It is clear from~\eqref{P}-\eqref{Q} that we can follow the same discussions we had about real power and voltage angles, with reactive power and voltage magnitudes.\\
It can be argued that, as a result of uncertainty, the aggregate reactive power at each bus can be approximated as a Gaussian random variable and, because of Equation \eqref{Q},
voltage fluctuations around the steady-state value can be approximated as Gaussian random variables.
Therefore, the same path of approach as for phase angles can be followed to show the GMRF property for voltage amplitudes.
Comparing \eqref{Q} with \eqref{one} makes it clear that the same matrix, i.e.,
the $B$ matrix developed in Section~\ref{sec:Bmatrix},
is playing the role of correlating the voltage amplitudes;
therefore, assuming that the statistics of the active and reactive power fluctuations are similar,
the underlying graph is the same. This can be readily seen by comparing \eqref{P} and \eqref{Q}.
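In bus-injection form, summing the line flows \eqref{P}-\eqref{Q} at each bus yields (with $B$ the matrix of Section~\ref{sec:Bmatrix})
\begin{align*}
\widetilde{P}=B\,\widetilde{\theta}, \qquad \widetilde{Q}=B\,\widetilde{V},
\end{align*}
so the same matrix couples the angle fluctuations to active power and the voltage-magnitude fluctuations to reactive power, and the two induced Markov graphs coincide.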
\section{Simulation}
In this section, we present experimental results. We consider the IEEE 14-bus system as well as the IEEE 30-bus system. First, we feed the system with Gaussian demand and simulate the power grid.
We use MATPOWER~\cite{MATPOWER} to solve the DC power flow equations for various demands and use the resulting angle measurements as the input to the CCT algorithm. We leverage YALMIP~\cite{YALMIP} and SDPT3~\cite{sdpt3} to run the CCT method in MATLAB.
With the right choice of parameters and threshold, and enough measurements, the Markov graph follows the grid structure. We use the edit distance metric for tuning the threshold value. The edit distance between two graphs is the number of differing edges, i.e., edges that exist in only one of the two graphs.
After the threshold is set, our detection algorithm works in the following manner. Each time the procedure is initiated, i.e., any PMU angle measurement or state estimator output changes, it updates the conditional covariances based on the new data, runs CCT, and checks the edit distance between the Markov graph of the phasor data and the grid structure. A discrepancy triggers the alarm. Subsequently, the system uses the correlation anomaly metric to find all the buses under attack. The flowchart of our method is shown in Figure~\ref{fig:flowchart}.
Next we introduce the stealthy deception attack on the system. The attack is designed according to the description above, i.e., it is a random vector such that $a \in \mathrm{Im}(H)$.
The attack is claimed to be successful only if performed locally on connected nodes. With this constraint in mind, for the IEEE 14-bus test case the maximum number of attacked nodes is six, and for the IEEE 30-bus system this number is eight. For the IEEE 14-bus network, we consider the cases where two to six nodes are under attack. For the IEEE 30-bus network, we consider the cases where two to eight nodes are under attack. For each case and each network, we simulate all possible attack combinations, to make sure our detection scheme is checked against every possible stealthy deception attack. Each case is repeated 1000 times with different attack vector values.
When the attacker starts tampering with the data, the corrupted samples are added to the sample bin of CCT and are therefore used in calculating the sample covariance matrix. With a sufficient number of corrupted samples, our algorithm can be arbitrarily close to $100\%$ successful in detecting all cases and types of attacks discussed above, for both the IEEE 14-bus and IEEE 30-bus systems.
The reason behind the trend shown in Figure~\ref{fig:drate} is that the Markov graph shifts from following the true information matrix towards reflecting whatever relationship the attacker is producing. As the number of compromised samples increases, they gain more weight in the sample covariance; hence, the chance of a change in the Markov graph, and thus of detection, increases.
The minimum number of corrupted samples needed for an almost $100\%$ detection rate is 130 for the IEEE 14-bus system and 50 for the IEEE 30-bus system. Since the IEEE 30-bus system is sparser than the IEEE 14-bus system, our method performs better in the former case. Yet,
for a 60~Hz system, the detection speed for the IEEE 14-bus system is quite fast as well.
Another interesting fact is the detection rate trend as the number of corrupted measurements increases. This is shown in Figure~\ref{fig:drate} for the IEEE 14-bus system. The detection rate is averaged over all possible attack scenarios. It can be seen that our method performs well even for a small number of corrupted measurements: the detection rate is $90\%$ with $30$ corrupted samples.
\begin{figure}[t]
\centering
\captionsetup{type=figure}
\includegraphics[width=3.7in]{figures/14nem_improved.pdf}
\caption{\small Detection rate for IEEE 14-bus system}
\label{fig:drate}
\end{figure}
The next step is to find which nodes are attacked.
As stated earlier, we use anomaly score metric~\cite{anomaly} to detect such nodes.
As an example, Figure~\ref{fig:point7} shows the anomaly score plot for
the case where nodes $4$, $5$ and $6$ are under attack\footnote{The numbering system employed here is that of the published IEEE 14-bus system available at \url{https://www.ee.washington.edu/research/pstca/pf14/pg_tca14bus.htm}}. That is, a random vector is added to the measurements at these nodes. This attack is repeated $1000$ times with different values, with an attack size of $0.7$. The attack size refers to the expected value of the Euclidean norm of the attack vector.
\begin{figure}[b]
\centering
\captionsetup{type=figure}
\begin{psfrags}
\psfrag{Node Number}{Node Number}
\includegraphics[width=6in]{figures/point7.pdf}
\end{psfrags}
\caption{\small Anomaly score for IEEE 14-bus system, nodes 4, 5, 6 are under attack; Attack size is 0.7.}
\label{fig:point7}
\end{figure}
Simulation results show that as the attack size increases, the difference between anomaly score of the nodes under the attack and the uncompromised nodes increases and, as a result, it is easier to pinpoint the attacked nodes. For example, Figure~\ref{fig:3cases} compares the cases where the attack size is $1$, $0.7$ and $0.5$ for the previous attack scenario, i.e., nodes 4, 5, 6 are under attack.
It should be noted that, in order for an attack to successfully mislead the power system operator, the attack size should not be too small. More specifically, the attacker wants to make a change in the system state that is noticeable, results in wrong estimates in part of the grid, or triggers a reaction in the system. If the value of the system state under attack is close to its real value, the system is not considered under attack, as it continues its normal operation.
It can be seen that, even for the smallest attack size
that would normally not lead the operator to react, the anomaly score plot remains reliable.
For example, in the considered attack scenario, the anomaly plot performs well even for an attack size of $0.3$, whereas a potentially successful attack under normal standards would require a larger attack size.
\begin{figure}[t]
\begin{center}
\includegraphics[width=6in]{figures/3cases.pdf}
\caption{\small Anomaly score for IEEE 14-bus system for different attack sizes. Nodes 4, 5, 6 are under attack. Attack sizes are 0.5, 0.7, 1.}
\label{fig:3cases}
\end{center}
\end{figure}
There is another important issue in setting the threshold for anomaly score.
As can be seen in the above plots, a threshold of $0.3$ gives a detection rate of $100\%$. If we increase the threshold, both the detection rate and the false alarm rate decrease; for this application, we want the design to achieve a $100\%$ detection rate with a very low false alarm rate. It can readily be seen that these expectations are met. Simulation results for different attacks on the IEEE 14-bus system show that this threshold guarantees a $100\%$ detection rate with a very low false alarm rate of $3.82\times 10^{-5}$.
\section{Discussion and Conclusion}
We proposed a decentralized false data injection attack detection scheme that is capable of detecting the most recent stealthy deception attack on the smart grid SCADA. To the best of our knowledge, our scheme is the first to comprehensively detect this sophisticated attack. In addition to detecting the attack, our algorithm is capable of pinpointing the set of attacked nodes.
As stated earlier, the computational complexity of our method is polynomial, and its decentralized nature makes it suitable for very large networks with tractable complexity and run time.
Consequently, our approach can be extended to larger networks, namely the IEEE 118-bus and IEEE 300-bus systems.
We also argued that our method is capable of detecting attacks that manipulate reactive power measurements to cause inaccurate voltage amplitude data. The latter attack scenario is also important as it can lead to,
or mimic, a voltage collapse.
In conclusion, our method fortifies the power system against a large class of false data injection attacks,
and our technique could become essential for current and future grid reliability, security and stability.
We introduced change detection for the graphical model of the system and showed that it can be leveraged to detect attacks on the system and its control scheme. This fortifying scheme is crucial for maintaining reliable control and stability. Change detection is a well-known concept in the control literature; nevertheless, change detection in graphical models, and its use for attack detection, is a new concept and our contribution to the field.
\subsection*{Acknowledgment}
We acknowledge detailed discussions with Majid Janzamin and thank him for valuable comments on graphical model selection and Conditional Covariance Test. The authors thank Reza Banirazi for discussions about power network operation and Phasor Measurement Units. We thank Anima Anandkumar for valuable insights into local separation property and walk summability.
# Generate an object containing the data in “Aphid_longevity_A06_405.txt” using read.table()
Aphids <- read.table("Aphid_longevity_A06_405.txt", header = TRUE)
# Check that the data have been imported properly using str()
# str(Aphids)
# SUCCESS! RESULT => 'data.frame': 55 obs. of 2 variables:
# $ subline : Factor w/ 7 levels "H30","H323","H402",..: 2 7 5 3 1 4 6 2 1 6 ...
# $ age.days: int 22 27 20 26 21 34 29 14 25 27 ...
# Draw a boxplot with the factor levels on the x-axis and longevity on the y-axis
boxplot(age.days ~ subline,
data = Aphids,
main = "Longevity of Aphids Infected with Different Strains of H. defensa",
xlab = "Factor Levels",
ylab = "Longevity"
)
# Error: $ operator is invalid for atomic vectors (Aphids$age.days)
# Is there anything that might make an ANOVA unsuitable for analysing these data as they are?
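# One way to probe that question (a sketch; assumes Aphids was imported as above):
# an ANOVA assumes roughly equal group variances and normal residuals
table(Aphids$subline)                            # are the group sizes balanced?
tapply(Aphids$age.days, Aphids$subline, var)     # are the variances roughly equal?
bartlett.test(age.days ~ subline, data = Aphids) # formal test of variance homogeneity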
###############################################################
# oml4spark_function_create_balanced_input.r
#
# Function to create a Balanced Dataset based on Input
# Dataset and formula.
#
# Input can be HDFS ID, HIVE, IMPALA, Spark DF or R dataframe
#
# It allows for a range to be chosen
# for the TARGET proportion so that if one thinks the
# proportion is within that range, then the returned Spark DF
# is the original input
#
# Usage: createBalancedInput( input_bal ,
# formula_bal ,
# feedback = FALSE ,
# rangeForNoProcess = c(0.45,0.55)
# )
#
#
# Copyright (c) 2020 Oracle Corporation
# The Universal Permissive License (UPL), Version 1.0
#
# https://oss.oracle.com/licenses/upl/
#
###############################################################
#######################################
### GENERATE A BALANCED SAMPLE
### FROM ANY INPUT
### HDFS ID, SPARK, HIVE, R Dataframe
#######################################
createBalancedInput <- function(input_bal, formula_bal, reduceToFormula=FALSE,
feedback = FALSE, rangeForNoProcess = c(0.45,0.55),
sampleSize = 0) {
# Extract the Target variable from the formula
targetFromFormula <- strsplit(deparse(formula_bal), " ")[[1]][1]
# If the Target has an "as.factor", remove it for processing
if (startsWith(targetFromFormula,"as.factor("))
{ targetFromFormula <- regmatches(targetFromFormula,
gregexpr("(?<=\\().+?(?=\\))",
targetFromFormula,
perl = T))[[1]]
}
# If the user wants to run a full verbose mode, store the info
# (grepl() takes the pattern first; test whether feedback contains "FULL")
verbose_user <- grepl("FULL", feedback, fixed = TRUE)
# Find the ideal number of Partitions to use when creating the Spark DF
# To Maximize Spark parallel utilization
sparkXinst <- as.numeric(spark.property('spark.executor.instances'))
sparkXcores <- as.numeric(spark.property('spark.executor.cores'))
ideal_partitions <- sparkXinst*sparkXcores
# Push the INPUT DATA to Spark (if it's not already)
# In Case it is a Spark DF already we don't do anything
if (!((spark.connected()) && (class(input_bal)[1]=="jobjRef"))) {
# Check if the input if a DFS ID (HDFS)
if (is.hdfs.id(input_bal)) {
if (grepl(feedback, "FULL|TRUE", fixed = TRUE)) print('Input is HDFS...processing')
dat_df <- orch.df.fromCSV(input_bal,
minPartitions = ideal_partitions,
verbose = FALSE ) # Convert the input HDFS to Spark DF
} else
# Check if the input is HIVE and load it into Spark DF
if ( ore.is.connected(type='HIVE') && (is.ore.frame(input_bal)) ) {
if (grepl(feedback, "FULL|TRUE", fixed = TRUE)) print('Input is HIVE Table...processing')
dat_df <- ORCHcore:::.ora.getHiveDF(table=input_bal@sqlTable)
} else
# Check if the input is IMPALA
if ( ore.is.connected(type='IMPALA') && (is.ore.frame(input_bal)) ) {
if (grepl(feedback, "FULL|TRUE", fixed = TRUE)) print('Input is IMPALA Table...processing')
dat_df <- ORCHcore:::.ora.getHiveDF(table=input_bal@sqlTable)
} else
# For R Dataframe it is a two-step process for now
if (is.data.frame(input_bal)){
if (grepl(feedback, "FULL|TRUE", fixed = TRUE)) print('Input is R Dataframe...processing')
dat_hdfs <- hdfs.put(input_bal)
dat_df <- orch.df.fromCSV(dat_hdfs,
minPartitions = ideal_partitions,
verbose = FALSE ) # Convert the input HDFS to Spark DF
}
} else
# If it's already a Spark DF then just point to it
{ if (grepl(feedback, "FULL|TRUE", fixed = TRUE)) print('Input is already Spark DF')
dat_df <- input_bal}
# Persist the Spark DF for added performance
orch.df.persist(dat_df, storageLevel = "MEMORY_ONLY", verbose = verbose_user)
# Extract Original terms from formula to reduce the original Dataset (if indicated)
formulaTerms <- terms(x=formula_bal, data=orch.df.collect(dat_df$limit(1L)))
# Extract Var names from formula
tempVars <- gsub(".*~","",Reduce(paste, deparse(formulaTerms)))
tempVars <- gsub(" ", "", tempVars)
tempVars <- gsub("-1","", tempVars)
# Final list
finalVarList <- strsplit( tempVars , "+", fixed = TRUE)[[1]]
# In case the user added "as.factor()" to the variables
removeAsFactor <- function(x) {
if (startsWith(x,"as.factor(")) {
regmatches(x, gregexpr("(?<=\\().+?(?=\\))", x, perl = T))[[1]]
} else x
}
finalVarList <- unlist(lapply(finalVarList,removeAsFactor))
if (grepl(feedback, "FULL|TRUE", fixed = TRUE)) {
print('List of Variables from Formula that are going to be in the output Spark DataFrame:')
print(c(targetFromFormula,finalVarList))
}
if (reduceToFormula==TRUE) {
# Select only the columns used by the formula plus the target
dat_df <- dat_df$selectExpr(append(targetFromFormula,finalVarList))
# Persist the Reduced Spark DF for added performance
orch.df.persist(dat_df, storageLevel = "MEMORY_ONLY", verbose = verbose_user)
}
# Prepare to create a SQL View with random name for the Spark DF
op <- options(digits.secs = 6)
time <- as.character(Sys.time())
options(op)
tempViewName <- paste0("tmp_view_",
paste(regmatches(time,
gregexpr('\\(?[0-9]+',
time))[[1]],
collapse = ''),
collapse = " ")
orch.df.createView(dat_df , tempViewName)
# Capture the proportion of Target=1 in order to balance the Data into 50/50
targetInfo <- orch.df.collect(orch.df.sql(paste0("select ",targetFromFormula,
" as target, count(*) as num_rows from ",
tempViewName," group by ",targetFromFormula,
" order by ",targetFromFormula)))
proportionTarget <- targetInfo[2,2]/sum(targetInfo$num_rows)
# Not needed, maybe future use: names(proportionTarget) <- c(as.character(targetInfo[2,1]),as.character(targetInfo[1,1]))
# Only need to Sample from Target = 0 if the proportion is outside of the Range given by the user.
# Default is 0.45 to 0.55, so if the proportion is already close enough to 0.5 we should not waste time sampling
if (findInterval(proportionTarget, rangeForNoProcess)) {
if (grepl(feedback, "FULL|TRUE", fixed = TRUE))
cat(paste0('\nTarget proportion in Input Data already within range ',
paste0(rangeForNoProcess, collapse = ' <-> '),' . No change done. \nTarget proportion is : ',
format(proportionTarget,digits=6), '\nNum Rows is : ',format(sum(targetInfo$num_rows),big.mark=',')))
balanced <- dat_df
} else {
cat(paste0('\nTarget proportion is outside the range ',paste0(rangeForNoProcess, collapse = ' <-> '),
' . Processing...\nTarget proportion is : ', format(proportionTarget,digits=6)))
# Select all Target = 1 records and put them into a Spark DF "input_1"
input_1 <- dat_df$filter(c(paste0(targetFromFormula," == '",targetInfo$target[which.min(targetInfo$num_rows)],"'")))
target_1_count <- targetInfo[2,2] # or input_1$count()
if (grepl(feedback, "FULL|TRUE", fixed = TRUE)) cat(paste0('\nTarget = ',targetInfo[2,1],' count : ',
format(target_1_count,big.mark = ",")))
# Select a sample of Target = 0 records and put them into a Spark DF "input_0"
input_0 <- dat_df$filter(c(paste0(targetFromFormula," == '",targetInfo$target[which.max(targetInfo$num_rows)],"'")))
if (grepl(feedback, "FULL|TRUE", fixed = TRUE)) cat(paste0('\nTarget = ',targetInfo[1,1],' count : ',
format(targetInfo[1,2],big.mark = ","))) # or input_0$count()
# Prepare the settings needed for the Sampling function of Spark on Data Frames "$sample"
samp_rate <- min(1,targetInfo[2,2]/targetInfo[1,2])
seed_long <- .jlong(12345L)
# Runs the sample of Target = 0 records a little bigger (with an offset) to avoid limitations
# by Spark DF "$sample" function sometimes sampling smaller than desired samples
offset <- 10*(1/target_1_count)
if ((samp_rate+offset)>=1) {offset <- 1 - samp_rate -0.01}
sample_0 <- input_0$sample(FALSE,samp_rate+offset,seed_long)
# Trims the sample of Target = 0 records to the ideal size (same as the Target = 1 count) using the function "$limit"
input_0_samp <- sample_0$limit(as.integer(target_1_count))
if (grepl(feedback, "FULL|TRUE", fixed = TRUE)) cat(paste0('\nSampled Target = ',
targetInfo[1,1],' count : ',
format(input_0_samp$count(),big.mark = ",")))
# Use the function "$union" from Spark Data Frame to join both Target and non-Target portions
balanced <- input_1$union(input_0_samp)
if (sampleSize >0) {
newSampRate <- (sampleSize/balanced$count())
if (newSampRate < 1) {
if (grepl(feedback, "FULL|TRUE", fixed = TRUE)) cat(paste0('\nSampling final balanced count from ',
format(balanced$count(),big.mark = ","),' down to ',
format(sampleSize,big.mark = ","),' records'))
sample_final <- balanced$sample(FALSE,newSampRate+offset,seed_long)
balanced <- sample_final$limit(as.integer(sampleSize))
orch.df.unpersist(sample_final)
} else {
if (grepl(feedback, "FULL|TRUE", fixed = TRUE)) cat(paste0('\nSampling requested of ',
format(sampleSize,big.mark = ","),
' is larger than final balanced count and was ignored.'
))
}
}
orch.df.unpersist(dat_df)
orch.df.unpersist(input_1)
orch.df.unpersist(input_0)
orch.df.unpersist(input_0_samp)
orch.df.persist(balanced, storageLevel = 'MEMORY_ONLY', verbose = FALSE)
if (grepl(feedback, "FULL|TRUE", fixed = TRUE)) cat(paste0('\nBalanced Final count : ',
format(balanced$count(),big.mark = ","),
'\n'))
}
return(balanced)
}
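# Example call (hypothetical object names; assumes an active Spark session via ORCH):
# balancedDF <- createBalancedInput(input_bal = myInputData,
#                                   formula_bal = target ~ var1 + var2,
#                                   reduceToFormula = TRUE,
#                                   feedback = TRUE,
#                                   rangeForNoProcess = c(0.45, 0.55))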
writexy <- function(file, x, y, xprecision=3, yprecision=3) {
fx <- formatC(x, digits=xprecision, format="g", flag="-")
fy <- formatC(y, digits=yprecision, format="g", flag="-")
dfr <- data.frame(fx, fy)
write.table(dfr, file=file, sep="\t", row.names=FALSE, col.names=FALSE, quote=FALSE)
}
x <- c(1, 2, 3, 1e11)
y <- sqrt(x)
writexy("test.txt", x, y, yprecision=5)
#' Make display maps for a species
#'
#' Make display maps for a species.
#' @param species Name of species
#' @param family Family of species
#' @param focus2Sp Spatial data frame object in equal-area projection
#' @param gadm1Sp Spatial object with states, does not have to be cropped to focus2Sp
#' @param q Vector of posterior samples of false positive rate of detection
#' @param waic WAIC object or \code{NULL}
#' @param loo LOO object or \code{NULL}
#' @param minUnconverged Proportion of unconverged parameters that is acceptable for a model
#' @param outDir Directory to which to save image
#' @param isOk TRUE/FALSE if model is sufficiently converged.
#' @param appendToName \code{NULL} or character to append to file name.
#' @param footer \code{NULL} or character to append to add to footer.
#' @param ... Arguments to pass to text()
#' @return Write an image to disk.
#' @export
mapBayesODM <- function(
species,
family,
focus2Sp,
gadm1Sp,
q,
waic,
loo,
minUnconverged,
outDir,
isOk,
appendToName = NULL,
footer = NULL,
...
) {
### formatting
occCol <- 'forestgreen' # green--map color for occupancy
occUncertCol <- 'red' # map color for uncertainty in occupancy
knownOccCol <- 'steelblue3' # map color for counties with known occurrences
effortCol <- '#6a51a3' # blue--map color number of collections
lwd <- 0.4 # line width of political borders
### assess convergence
######################
# if (TRUE) {
# ignore <- c('z[[]', 'falseDetect[[]', 'constraintFalseDetect[[]')
# conv <- rhatStats(mcmc$samples, rhatThresh = 1.1, minConv = minUnconverged, ignore = ignore)
# } else {
# if (FALSE) conv <- list(); conv$sufficient <- TRUE
# }
### prepare plot elements
#########################
# centroids of counties with occurrences
occ2Sp <- focus2Sp[focus2Sp@data$detect > 0, ]
centsSp <- rgeos::gCentroid(focus2Sp[focus2Sp@data$detect > 0, ], byid=TRUE)
# fix any potentially orphaned holes
focus2Sp <- rgeos::gBuffer(focus2Sp, width=0, byid=TRUE)
# create plotting extent: focal region plus a buffer
ext <- raster::extent(focus2Sp)
extSp <- as(ext, 'SpatialPolygons')
raster::projection(extSp) <- raster::projection(focus2Sp)
extCentSp <- rgeos::gCentroid(extSp)
maxDist_m <- rgeos::gDistance(extCentSp, extSp, hausdorff=TRUE)
extSp <- rgeos::gBuffer(extSp, width=0.05 * maxDist_m)
ext <- raster::extent(extSp)
extSp <- as(ext, 'SpatialPolygons')
raster::projection(extSp) <- raster::projection(focus2Sp)
# crop GADM1
extSp_inGadm <- sp::spTransform(extSp, sp::CRS(raster::projection(gadm1Sp)))
gadm1SpCrop <- raster::crop(gadm1Sp, extSp_inGadm)
gadm1SpCrop <- sp::spTransform(gadm1SpCrop, sp::CRS(raster::projection(focus2Sp)))
### plot
########
# ok <- if (isOk) { 'ok' } else { 'notOk'}
ok <- 'notAssessed'
filename <- paste0(tolower(family), '_', species)
if (!is.null(appendToName)) filename <- paste0(filename, '_', appendToName)
png(paste0(outDir, '/', filename, '.png'), width=1800, height=1100, res=300)
# layout
par(bg='white', oma=c(0.7, 0.5, 1.3, 0), mar=c(0, 2, 0, 0))
lay <- matrix(c(1, 1, 2, 1, 1, 3), nrow=2, byrow=TRUE)
layout(lay)
### occupancy
col <- occCol
plot(gadm1SpCrop, col='gray90', border=NA)
x <- focus2Sp@data$psi
if (any(focus2Sp@data$detect == 0)) x[focus2Sp@data$detect > 0] <- NA
xMaxNonDetect <- max(x, na.rm=TRUE)
cols <- scales::alpha(col, x / xMaxNonDetect)
plot(focus2Sp, col='white', border=NA, add=TRUE)
plot(focus2Sp, col=cols, border='gray80', lwd=lwd, add=TRUE)
plot(occ2Sp, col=knownOccCol, border='gray80', lwd=lwd, add=TRUE)
plot(gadm1SpCrop, col=NA, border='gray60', lwd=lwd, add=TRUE)
# legend
labels <- seq(0, xMaxNonDetect, length.out=5)
labels <- round(labels, 2)
labels <- sprintf('%0.2f', labels)
labels[1] <- '0'
width <- 0.04
height <- 0.9
legTitle <- 'Occupancy\nprobability'
legCex <- 0.62
swatches <- list(
list(swatchAdjY=c(0, 0.02), col=knownOccCol, border='gray', labels='Specimens')
)
legendary::legendGrad('left', inset=-0.01, title=legTitle, col=c('white', col), labels=labels, width=width, height=height, border='gray', boxBorder=NA, adjX=c(0.6, 1), adjY=c(0.06, 0.90), titleAdj=c(1.39, 0.95), labAdj=0.82, boxBg=NA, cex=legCex, lwd=0.5 * lwd, swatches=swatches, pos=2)
### uncertainty in occupancy
col <- occUncertCol
plot(gadm1SpCrop, col='gray90', border=NA)
x <- focus2Sp$psi90CI
if (any(focus2Sp@data$detect == 0)) x[focus2Sp@data$detect > 0] <- NA
xMaxNonDetect <- max(x, na.rm=TRUE)
cols <- scales::alpha(col, x / xMaxNonDetect)
plot(focus2Sp, col='white', border=NA, add=TRUE)
plot(focus2Sp, col=cols, border='gray80', lwd=lwd, add=TRUE)
plot(occ2Sp, col=knownOccCol, border='gray80', lwd=lwd, add=TRUE)
plot(gadm1SpCrop, col=NA, border='gray60', lwd=lwd, add=TRUE)
# legend
labels <- seq(0, xMaxNonDetect, length.out=5)
labels <- round(labels, 2)
labels <- sprintf('%0.2f', labels)
labels[1] <- '0'
legCex <- 0.58
width <- 0.12
height <- 0.3
legTitle <- 'Occupancy\nuncertainty\n(90% CI)'
swatches <- list(
list(swatchAdjY=c(0, 0.04), col=knownOccCol, border='gray', labels='Specimens')
)
legendary::legendGrad('left', inset=-0.01, title=legTitle, col=c('white', col), labels=labels, width=width, height=0.925, border='gray', boxBorder=NA, adjX=c(0, 0.29), adjY=c(0.07, 0.81), titleAdj=c(0.55, 0.93), labAdj=0.2, boxBg=NA, cex=legCex, lwd=0.5 * lwd, swatches=swatches, pos=2)
### effort
title <- expression(paste('Total '*italic(family.)))
legTitle <- 'Specimens'
legCex <- 0.35
col <- effortCol
plot(gadm1SpCrop, col='gray90', border=NA)
x <- focus2Sp@data$effort
x <- log10(x + 1)
cols <- scales::alpha(col, x / max(x))
plot(focus2Sp, col='white', border=NA, add=TRUE)
plot(focus2Sp, col=cols, border='gray80', lwd=lwd, add=TRUE)
plot(gadm1SpCrop, col=NA, border='gray60', lwd=lwd, add=TRUE)
points(centsSp, pch=16, cex=0.25)
# legend
labels <- seq(0, 1, by=0.2) * max(x)
labels <- round(10^labels - 1, digits=1)
labels <- sprintf('%.1f', labels)
labels[1] <- '0'
width <- 0.12
height <- 0.3
legCex <- 0.58
legTitle <- paste('Total\n', family)
legendary::legendGrad('left', inset=-0.01, title=legTitle, col=c('white', col), labels=labels, width=width, height=0.925, border='gray', boxBorder=NA, adjX=c(0, 0.29), adjY=c(0, 0.84), titleAdj=c(0.55, 0.93), labAdj=0.2, boxBg=NA, cex=legCex, lwd=0.5 * lwd, pos=2)
### titles
main <- substitute(paste(family, ': ', italic(species)), env=list(species=species, family=family))
mtext(text=main, at=c(0.01), outer=TRUE, cex=1, line=-0.3, adj=0)
if (!isOk) mtext(text='Insufficient convergence', at=c(0.01), outer=TRUE, cex=1, line=-1.3, adj=0, col='red')
text <- paste(footer, ' | ', date())
mtext(text=text, side=1, at=0.985, outer=TRUE, cex=0.35, line=-0.27, adj=1)
qMean <- round(mean(q), 3)
qLow <- round(quantile(q, 0.05), 3)
qHigh <- round(quantile(q, 0.95), 3)
# msg1 <- paste0('WAIC: ', round(waic$estimates['waic', 1], 2), ' | LOO: ', round(loo$estimates['looic', 1], 2))
msg1 <- ''
msg2 <- paste0('Probability all specimens in a county in which ', species, ' is truly absent are mistaken identifications: ', sprintf('%0.3f', qMean), ' (90% CI: ', sprintf('%0.3f', qLow), '-', sprintf('%0.3f', qHigh), ')')
mtext(msg1, side=1, at=0.005, outer=TRUE, cex=0.35, line=-0.8, adj=0)
mtext(msg2, side=1, at=0.005, outer=TRUE, cex=0.35, line=-0.3, adj=0)
dev.off()
}
|
|
#Data Frames
x <- 10:1
y <- -4:5
q <- c("Hockey", "Football", "Baseball", "Curling", "Rugby", "Lacrosse", "Basketball", "Tennis", "Cricket", "Soccer")
theDF <- data.frame(x,y,q)
theDF
theDF <- data.frame(First = x, Second = y, Sport = q)
str(theDF)
class(theDF$Sport)
theDF <- data.frame(First = x, Second = y, Sport = q, stringsAsFactors = FALSE)
class(theDF$Sport)
dim(theDF)
NROW(theDF)
nrow(theDF)
nrow(x)
length(x)
NROW(x)
names(theDF)
names(theDF)[3]
rownames(theDF)
rownames(theDF) <- c("One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine", "Ten")
rownames(theDF)
theDF
rownames(theDF) <- NULL
rownames(theDF)
head(theDF)
tail(theDF)
class(theDF)
theDF$Sport
theDF[3, 2] #3rd row, 2nd column
theDF[3, 2:3] #3rd row, 2nd and 3rd column
theDF[c(3,5),2] #3rd and 5th row, 2nd column
theDF[c(2,4,5), c(1,2)]
theDF[,3]
theDF[1,]
theDF[,3,drop=FALSE]
theDF[,c("Sport","First")]
theDF[,"Sport", drop = FALSE]
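A compact recap of the `drop` behaviour shown above (a self-contained sketch; `df` is a throwaway frame, not the `theDF` built earlier):

```r
# rebuilt here so the snippet stands alone
df <- data.frame(First = 1:3, Second = 3:1,
                 Sport = c("Hockey", "Tennis", "Rugby"),
                 stringsAsFactors = FALSE)
class(df[, "Sport"])                # single-column [ drops to a character vector
class(df[, "Sport", drop = FALSE])  # drop=FALSE keeps the data.frame structure
```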
#Lists
list1 <- list(1,2,3)
list1
list2 <- list(c(1,2,3))
list2
list3 <- list(c(1,2,3), 3:7)
list3
theDF <- data.frame(First=1:5,
Second=5:1,
Sport=c("Hockey","Lacrosse","Football",
"Curling","Tennis"),stringsAsFactors = FALSE)
theDF
list4 <- list(theDF, 1:10)
list4
list5 <- list(theDF, 1:10, list3)
list5
names(list5)
names(list5) <- c("data.frame", "vector", "list")
list5
list6 <- list(TheDataFrame = theDF, TheVector = 1:10, TheList = list3)
list6
#creating an empty list
emptyList <- vector(mode="list", length=4)
emptyList
emptyList[[1]] <- 5
emptyList
list5[[1]]
names(list5)
list5[["data.frame"]]$Sport
list5[[1]][,"Second"]
list5[[1]][,"Second", drop=FALSE]
length(list5)
list5
list5[[4]]
list5[[4]] <- 2
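The `list5[[4]] <- 2` assignment above works because R grows a list on out-of-range assignment; a self-contained sketch:

```r
lst <- list(a = 1, b = "two")
lst[["c"]] <- 1:3   # assigning to a new name appends an element
length(lst)         # now 3
lst[[5]] <- TRUE    # assigning past the end grows the list...
lst[[4]]            # ...and pads the gap with NULL
```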
#Matrices
A <- matrix(1:10, nrow=5)
A
B <- matrix(21:30, nrow=5)
B
C <- matrix(21:30, nrow=2)
C
nrow(A)
ncol(A)
dim(A)
A+B
A*B
A==B
ncol(A)
nrow(B)
t(B) #transpose the matrix
A %*% t(B) #matrix product of A and the transpose of B
A %*% C
colnames(A)
rownames(A)
colnames(A) <- c("Left", "Right")
rownames(A) <- c("1st", "2nd", "3rd", "4th", "5th")
A
colnames(B) <- c("First", "Second")
rownames(B) <- c("One", "Two", "Three", "Four", "Five")
LETTERS
letters
colnames(C) <- LETTERS[1:5]
rownames(C) <- c("Top", "Bottom")
C
A
t(A)
A%*%C
# matrix
first.matrix <- matrix(1:12, ncol=4)
first.matrix
second.matrix <- matrix(1:12, ncol=4, byrow=TRUE)
second.matrix
str(first.matrix)
str(second.matrix)
dim(first.matrix)
length(first.matrix)
# +vectors
vector1 <- c(12,4,5,6,9,3)
vector1
vector2 <- c(5,4,2,4,12,9)
vector2
matrix.one <- rbind(vector1,vector2) # each vector becomes a row in the matrix
matrix.one
cbind(1:3,4:6)
cbind(1:3,4:6,matrix(7:12,ncol=2)) # +matrix
# indices
first.matrix
first.matrix[1:2,2:3]
first.matrix[2:3,]
first.matrix[-2,-3]
#nr <- nrow(first.matrix) # num rows
#id <- nr * 2 + 2 # calculate the position of entry we want to exclude
#first.matrix[-id] # note the 8 is missing from the results
#
# replacing values
first.matrix.orig <- first.matrix
first.matrix
first.matrix[3,2] <- 4
first.matrix
first.matrix[2,] <- c(1,3)
first.matrix
first.matrix[1:2,3:4] <- c(8,4,2,1)
first.matrix
first.matrix <- first.matrix.orig
first.matrix
first.matrix + 4
first.matrix <- matrix(1:12, ncol=4)
first.matrix
second.matrix <- matrix(1:3, nrow=3, ncol=4)
second.matrix
first.matrix + second.matrix
# adding the vector 1:3 recycles it column-wise, so the code below yields the same result as above
first.matrix <- matrix(1:12, ncol=4)
first.matrix
first.matrix + 1:3
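The equivalence claimed in the comment above can be checked directly (base R, no assumptions):

```r
m <- matrix(1:12, ncol = 4)
rep3 <- matrix(1:3, nrow = 3, ncol = 4)  # 1:3 repeated down each column
identical(m + rep3, m + 1:3)             # TRUE: the vector recycles column-wise
```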
#
# Dataframes - stop here for next week...
#
baskets.of.granny <- c(12,4,5,6,9,3)
baskets.of.gerry <- c(5,4,2,4,12,9)
baskets.of.team <- rbind(baskets.of.granny,baskets.of.gerry)
baskets.of.team
baskets.df <- as.data.frame(t(baskets.of.team))
baskets.df
str(baskets.df)
nrow(baskets.df)
length(baskets.df)
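Note that `length()` on a data frame counts columns, not rows — a quick self-contained check:

```r
df <- data.frame(a = 1:4, b = letters[1:4])
length(df)  # for a data.frame, length() counts columns: 2
nrow(df)    # number of rows: 4
```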
#
# combining data
employee <- c("John Doe", "Jim Doe", "Joe Doe")
salary <- c(21000,23500,150000)
startdate <- as.Date(c("2010-11-2","2008-3-25","2002-5-11"))
employee.data <- data.frame(employee,salary,startdate)
str(employee.data)
names(employee.data)
names(employee.data)[3] <- "first day"
employee.data
rownames(employee.data) <- c("Sous Chef","Chef","Executive Chef")
employee.data
# dataset info
#
library(help="datasets")
|
|
#!/usr/bin/Rscript
library(DESeq2)
# script arguments
args <- commandArgs(trailingOnly=T)
rda_to_load <- args[1]
rda_to_save <- args[2]
# loading
load(rda_to_load)
rlog <- data
# let DESeq do the PCA for us
pca <- DESeq2::plotPCA(rlog, intgroup="sample", ntop=1000, returnData=T)
# saving
data <- pca
save(data, file=rda_to_save)
|
|
incarceration_trends <- read.csv("https://raw.githubusercontent.com/vera-institute/incarceration-trends/master/incarceration_trends.csv")
incarceration_trends_jail_jurisdiction <- read.csv("https://raw.githubusercontent.com/vera-institute/incarceration-trends/master/incarceration_trends_jail_jurisdiction.csv")
# what is the total jail population over time
# female vs male
# % female in jail in each state
library(ggplot2)
library(dplyr)
library(stringr)
library(tibble)
library(scales)
library(maps)
library(leaflet)
library(tidyverse)
library(usmap)
#summary table
#total population in jail every year
total <- incarceration_trends_jail_jurisdiction %>%
group_by(year) %>%
summarise(total = sum(total_jail_pop, na.rm = TRUE))
#highest total population in jail
high_total <- max(total$total)
# highest total population in jail year
high_total_year <- total %>%
filter(total == max(total)) %>%
pull(year)
#lowest total population in jail
low_total <- min(total$total)
#lowest total population in jail year
low_total_year <- total %>%
filter(total == min(total)) %>%
pull(year)
#female population in jail every year
females <- incarceration_trends_jail_jurisdiction %>%
group_by(year) %>%
summarise(female = sum(female_jail_pop, na.rm = TRUE))
#highest female population in jail
high_total_female <- max(females$female)
# highest female population in jail year
high_female_year <- females %>%
filter(female == max(female, na.rm=TRUE)) %>%
pull(year)
#lowest female population in jail
low_female <- min(females$female)
#lowest female population in jail year
low_female_year <- females %>%
filter(female == min(female)) %>%
pull(year)
#male population in jail every year
males <- incarceration_trends_jail_jurisdiction %>%
group_by(year) %>%
summarise(male = sum(male_jail_pop, na.rm = TRUE))
#highest male population in jail
high_total_male <- max(males$male)
# highest male population in jail year
high_male_year <- males %>%
filter(male == max(male, na.rm=TRUE)) %>%
pull(year)
#lowest male population in jail
low_male <- min(males$male)
#lowest male population in jail year
low_male_year <- males %>%
filter(male == min(male)) %>%
pull(year)
#jail admissions every year
variables <- incarceration_trends_jail_jurisdiction %>%
  group_by(year) %>%
  summarise(total_adm = sum(total_jail_adm, na.rm = TRUE))
#highest jail admissions
high_total_adm <- max(variables$total_adm)
# year with the highest jail admissions
high_adm_year <- variables %>%
  filter(total_adm == max(total_adm, na.rm=TRUE)) %>%
  pull(year)
#lowest jail admissions
low_adm <- min(variables$total_adm)
# year with the lowest jail admissions
low_adm_year <- variables %>%
  filter(total_adm == min(total_adm)) %>%
  pull(year)
#jail population in each state in year 1970
maps <- incarceration_trends %>%
  filter(year == "1970") %>%
  group_by(state) %>%
  summarize(state_pop = sum(total_jail_pop, na.rm = TRUE))
total_population <- sum(maps$state_pop)
maps <- maps %>%
mutate(percent = state_pop/total_population *100)
#state that has highest percent in 1970
high_state <- maps %>%
filter(percent == max(percent, na.rm=TRUE)) %>%
pull(state)
#highest state percent
high_state_pop <- max(maps$percent)
#state that has lowest percent
low_state <- maps %>%
filter(percent == min(percent,na.rm=TRUE)) %>%
pull(state)
#lowest state percent
low_state_pop <- min(maps$percent)
# trend over time chart of populations of genders in jail
df1 <- merge(total,females, sort = TRUE)
summarys <- merge(df1, males, sort = TRUE)
line_chart <- ggplot(summarys, aes(x=year, y= total)) +
geom_line(aes(y=total, color = "total")) +
geom_line(aes(y=male, color = "male")) +
geom_line(aes(y=female, color = "female")) +
  xlab("year") +
  ylab("number of people in jail") +
ggtitle("total population in jail based on gender")
#variable comparison chart of jail population vs jail admissions
df2 <- merge(total, variables, sort = TRUE)
comparison <- ggplot(df2, aes(x=total, y=total_adm)) +
  geom_point() +
  xlab("total jail population") +
  ylab("total jail admissions") +
  ggtitle("total population in jail vs total admissions in jail")
#map
map_chart <- plot_usmap(data = maps, values = "percent", color = "blue") +
  scale_fill_continuous(
    low = "white", high = "blue", name = "percent of US jail population (1970)", label = scales::comma
  ) + labs(title = "percentage of jail population by state") + theme(legend.position = "right")
|
|
wesley_safadao <- function(day, month, year){
safadeza <- (sum(seq(1, month)) + (year / 100) * (50 - day))
anjo <- 100 - safadeza
return(c(safadeza, anjo))
}
say_safadeza <- function(day, month, year){
grau_de_safadeza <- data.frame(safadeza=NA, anjo=NA)
grau_de_safadeza[c('safadeza', 'anjo')] <- wesley_safadao(day, month, year)
sprintf("%.2f%% safado e %.2f%% anjo", grau_de_safadeza$safadeza, grau_de_safadeza$anjo)
}
|
|
REBOL [
Title: "Kaj-fossil-download"
Date: 23-Jul-2013/18:18:25+2:00
Name: none
Version: 0.0.1
Comment: "just a very quickly written version so far!"
File: none
Home: none
Author: "Oldes"
]
;Fossil server does not serve JS without this:
system/schemes/http/user-agent: {Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.10) Gecko/20100914 Firefox/3.6.10}
download-fossil-dir: func[dir url /local page wasDir subdir][
print ["!!!!!!!!!!!!! " url]
page: read url
if not exists? dir [make-dir dir]
wasDir: what-dir
change-dir dir
parse/all page [
any [
thru {gebi("} copy id to {"} thru {).href="} copy path to {";} (
;print [id tab path]
if path/1 = #"/" [
tmp: rejoin [{a id='} id {'>}]
if parse page [thru tmp copy name to {</a>} to end][
print [id tab name tab path]
parse path [
thru "artifact/" copy artifact to end (
file: to file! name
if not exists? file [
if not error? try [
bin: read/binary rejoin [http://red.esperconsultancy.nl/ dir {raw/} name {?name=} artifact]
][
write/binary file bin
]
]
)
|
thru {dir?ci=} copy dirId to "&" thru {name=} copy dirName to end (
subdir: to-file dirName
if not exists? subdir [
download-fossil-dir subdir rejoin [http://red.esperconsultancy.nl/ dir {dir?ci=} dirId {&name=} dirName]
]
)
]
]
]
)
]
]
change-dir wasDir
]
foreach dir [
%Red-C-library
%Red-common
][
dir: dirize dir
download-fossil-dir dir rejoin [http://red.esperconsultancy.nl/ dir {dir?ci=tip}]
]
|
|
suppressPackageStartupMessages({
library(data.table)
library(quanteda)
library(spacyr)
library(parallel)
library(textclean)
library(fastTextR)
library(dbscan)
library(digest)
library(FNN)
library(igraph)
})
source('../tree-functions.r')
progressbar <- function(n, length = 50L) {
if (length > n) {
length <- n
}
if (length >= 3L) {
cat(sprintf("|%s|\n", paste(rep("-", length - 2L), collapse = "")))
}
else {
cat(paste(c(rep("|", length), "\n"), collapse = ""))
}
s <- seq_len(n)
sp <- s/n * length
target <- seq_len(length)
ids <- sapply(target, function(x) which.min(abs(sp - x)))
return(ids)
}
## * CORPUS
import.corpus <- function(infile = opt('corpus.original'),
text_var = opt('text.var'),
id_var = opt('id.var'),
group_id_var = opt('group.id.var'),
covariates = opt('covariates'),
outfile = projfile(opt('corpus.imported'))) {
state <- check.state(infile, text_var, id_var, group_id_var, covariates, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
cat('Importing corpus...\n')
if (!file.exists(infile))
stop('Corpus file not found')
corpus <- readRDS(infile)
if (is.null(text_var))
stop('Argument `text_var` is empty')
if (is.null(id_var)) {
corpus[, text_id := sprintf(sprintf('txt_%%0.%ii', nchar(.N)), .I)]
} else {
corpus[, text_id := get(id_var)]
}
if (is.null(group_id_var)) {
corpus[, text_group_id := text_id]
} else {
corpus[, text_group_id := get(group_id_var)]
}
corpus[, text := get(text_var)]
if (!is.null(covariates)) {
if ('ALL' %in% covariates) covariates <- names(corpus)
covariates <- setdiff(covariates, c('text_group_id', 'text_id', 'text',
text_var, id_var, group_id_var))
}
vars2keep <- c('text_group_id', 'text_id', 'text', covariates)
d <- corpus[, ..vars2keep]
saveRDS(corpus[, .N, .(text_group_id, text_id)], replace.file(projfile(opt('text.id.lookup'))))
saveRDS(d, replace.file(outfile))
success(state)
return(invisible(d))
}
preclean.corpus <- function(corpus = NULL, rules = opt('rules'),
infile = projfile(opt('corpus.imported')),
outfile = projfile(opt('corpus.precleaned'))
) {
state <- check.state(corpus, rules, infile, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
cat('Pre-cleaning text...\n')
if (is.null(corpus)) {
if (!file.exists(infile)) stop('Corpus not yet imported. Call `import.corpus()` first.')
corpus <- readRDS(infile)
}
if (!is.null(rules)) {
rules <- as.data.table(matrix(rules, byrow = T, ncol = 2))
setnames(rules, c('from', 'to'))
for (i in seq_len(nrow(rules))) { # iterate over every rule, not just the last index
corpus[, text := gsub(rules[i, from], rules[i, to], text)]
}
}
corpus[, text := gsub('</*p>|</*span>|</*em>', ' ', text, perl=T)]
corpus[, text := gsub('"', '', text, fixed=T)]
corpus[, text := gsub('[()]', '', text, perl=T)]
corpus[, text := textclean::replace_contraction(text)]
saveRDS(corpus, replace.file(outfile))
success(state)
return(invisible(corpus))
}
## * SPACY PARSING
consolidate.entity <- function(parsed_corpus=NULL, entities=NULL,
infiles = c(parsed_corpus = projfile(opt('parsed.corpus')),
entities = projfile(opt('entities'))),
outfile = projfile(opt('parsed.corpus.cons'))) {
state <- check.state(parsed_corpus, entities, infiles, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
cat('Consolidating entities...\n')
if (is.null(parsed_corpus)) {
if (!file.exists(infiles[['parsed_corpus']]))
stop('Parsed corpus not yet parsed. Call `parse.corpus()` first.')
parsed_corpus <- readRDS(infiles[['parsed_corpus']])
}
if (is.null(entities)) {
if (!file.exists(infiles[['entities']])) stop('Entities file not found. Call `parse.corpus()` first.')
ents <- readRDS(infiles[['entities']])
} else ents <- entities
parsed <- entity_consolidate(parsed_corpus)
setDT(parsed)
parsed[ents, entities := T, on = c('token' = 'entity', 'doc_id', 'sentence_id')]
torepl <- ents[entity_type %in% c('CARDINAL', 'DATE', 'PERCENT', 'ORDINAL', 'TIME', 'QUANTITY', 'MONEY') &
grepl('_', entity, fixed=T)]
parsed[torepl, repl := T, on = c('token' = 'entity')]
parsed[repl == T, token := gsub('_', ' ', token, fixed=T)]
parsed[, repl := NULL]
saveRDS(parsed, replace.file(outfile))
success(state)
return(invisible(parsed))
}
parse.corpus <- function(corpus = NULL,
consolidate_entities = opt('consolidate.entities', T),
ncores = getOption("mc.cores", 2L),
infile = projfile(opt('corpus.precleaned')),
outfile = projfile(opt('parsed.corpus'))) {
state <- check.state(corpus, consolidate_entities, infile, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
quanteda_options(threads = ncores)
cat('Parsing text... May take some time!\n')
if (is.null(corpus)) {
if (!file.exists(infile)) stop('Corpus not yet pre-cleaned. Call `preclean.corpus()` first.')
corpus <- readRDS(infile)
}
txt <- corpus$text
names(txt) <- corpus$text_id
## ** Parse with Spacy
spacy_initialize(model = "en_core_web_md", python_executable = '/usr/bin/python3')
fun <- function(t) {
d <- spacy_parse(t, tag=F, lemma=F, dependency = F, nounphrase = F, additional_attributes = c("text"),
multithread = T)
return(d)
}
s <- split(txt, rep(seq_len(ncores), length.out = length(txt)))
system.time({
p <- mclapply(s, fun, mc.cores = ncores)
})
parsed <- do.call('rbind', p)
## ** Entities
ents <- entity_extract(parsed, type = 'all')
setDT(ents)
saveRDS(ents, replace.file(projfile(opt('entities'))))
## ** Consolidate entities
if (consolidate_entities) {
parsed <- consolidate.entity(parsed)
}
saveRDS(parsed, replace.file(outfile))
## ** Data.table version
parsedt <- copy(parsed)
setDT(parsedt)
setnames(parsedt, 'doc_id', 'text_id')
saveRDS(parsedt, replace.file(projfile(opt('parsed.corpus.dt'))))
success(state)
return(invisible(parsed))
}
reform.corpus <- function(parsed_corpus=NULL, from_consolidated = opt('consolidate.entities', T),
infile = ifelse(opt('consolidate.entities'),
projfile(opt('parsed.corpus.cons')),
projfile(opt('parsed.corpus'))),
outfile = projfile(opt('reformed.corpus'))) {
## parsed_corpus=NULL; from_consolidated = opt('consolidate.entities', T); infile = ifelse(opt('consolidate.entities'), projfile(opt('parsed.corpus.cons')), projfile(opt('parsed.corpus'))); outfile = projfile(opt('reformed.corpus'))
state <- check.state(parsed_corpus, from_consolidated, infile, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
cat(sprintf('Re-forming corpus from %sconsolidated parsed corpus...\n',
fifelse(from_consolidated, '', 'non-')))
if (is.null(parsed_corpus)) {
if (!file.exists(infile)) stop('Parsed corpus not available. Call `parse.corpus()` or `consolidate.entity()` first.')
parsed_corpus <- readRDS(infile)
}
parsed_corpus[grep('^[0-9,]{5,}$', token, perl=T), token := gsub(',', '', token, fixed=T)]
parsed_corpus[grep(' %', token, perl=T), token := gsub(' +%', '%', token, perl=T)]
parsed_corpus[
, token_clean := trimws(gsub(' {2,}', ' ',
fifelse(entities %in% T &
!(entities %in% c('MONEY', 'CARDINAL', 'DATE', 'PERCENT',
'ORDINAL', 'TIME', 'QUANTITY')), token,
tolower(gsub('[^a-zA-Z0-9_ āēīōūĀĒĪŌŪ%$]', ' ', token, perl=T))), perl=T))]
corp <- parsed_corpus[!grepl('^ *$', token_clean, perl=T)
, .(text=paste0(token_clean, collapse=' '))
, .(text_id=doc_id)]
saveRDS(corp, replace.file(outfile))
success(state)
return(invisible(corp))
}
## * UNSUPERVISED TRAINING
unsupervised.training <- function(corpus = NULL,
modeltype = opt('modeltype'),
bucket = opt('bucket'),
dim = opt('dim'),
epoch = opt('epoch'),
label = opt('label'),
loss = opt('loss'),
lr = opt('lr'),
lrUpdate = opt('lrUpdate'),
maxn = opt('maxn'),
minCount = opt('minCount'),
minn = opt('minn'),
neg = opt('neg'),
t = opt('t'),
verbose = opt('verbose'),
wordNgrams = opt('wordNgrams'),
ws = opt('ws'),
pretrained.vec = opt('pretrained.vec'),
ncores = getOption("mc.cores", 2L),
infile = projfile(opt('reformed.corpus')),
outfile = projfile(opt('model.file'))) {
## corpus = NULL; modeltype = opt('modeltype'); bucket = opt('bucket'); dim = opt('dim'); epoch = opt('epoch'); label = opt('label'); loss = opt('loss'); lr = opt('lr'); lrUpdate = opt('lrUpdate'); maxn = opt('maxn'); minCount = opt('minCount'); minn = opt('minn'); neg = opt('neg'); t = opt('t'); verbose = opt('verbose'); wordNgrams = opt('wordNgrams'); ws = opt('ws'); ncores = getOption("mc.cores", 2L); infile = projfile(opt('reformed.corpus')); outfile = projfile(opt('model.file'))
state <- check.state(corpus, modeltype, bucket, dim, epoch, label, loss, lr, lrUpdate, maxn,
minCount, minn, neg, t, verbose, wordNgrams, ws, infile,
outfile = outfile, readfun = NULL, pretrained.vec)
if (state$status == 0) return(invisible(state$output))
cat('Fitting unsupervised model...\n')
if (is.null(corpus)) {
if (!file.exists(infile)) stop('Corpus not yet cleaned. Call `clean_corpus()` first.')
corpus <- readRDS(infile)
}
tmpfile <- tempfile(tmpdir='.', fileext = '.txt')
fwrite(corpus, tmpfile, col.names = F)
cntrl <- ft_control(
loss = loss,
learning_rate = lr,
learn_update = lrUpdate,
word_vec_size = dim,
window_size = ws,
epoch = epoch,
min_count = minCount,
neg = neg,
min_ngram = minn,
max_ngram = maxn,
max_len_ngram = wordNgrams,
nbuckets = bucket,
threshold = t,
label = label,
verbose = verbose,
output = outfile,
save_output = T,
nthreads = ncores,
pretrained_vectors = pretrained.vec
)
model <- ft_train(tmpfile, method = modeltype, control = cntrl)
ft_save(model, outfile)
success(state)
return(invisible(outfile))
}
tfidf <- function(corpus=NULL, model=NULL,
infiles = c(corpus = projfile(opt('reformed.corpus')),
modelfile = projfile(opt('model.file'))),
outfile = projfile(opt('text.tfidf'))) {
state <- check.state(corpus, model, infiles, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
cat('Calculating TF-IDF...\n')
if (is.null(corpus)) {
if (!file.exists(infiles[['corpus']]))
stop('Corpus not yet cleaned. Call `clean_corpus()` first')
corpus <- readRDS(infiles[['corpus']])
}
if (is.null(model)) {
if (!file.exists(infiles[['modelfile']]))
stop('Unsupervised model not yet trained. Call `unsupervised.training()` first.')
model <- ft_load(infiles[['modelfile']])
} else model <- load_model(model)
toks <- tokenize_fastestword(corpus$text)
names(toks) <- corpus$text_id
tdt <- utils::stack(toks)
setDT(tdt)
tdt[, ind := as.integer(ind)]
setnames(tdt, c('word', 'text_id'))
lkup <- readRDS(projfile(opt('text.id.lookup')))
tdt[lkup, text_group_id := i.text_group_id, on = 'text_id']
tfs <- tdt[, .(indoc = .N), .(word, text_group_id)]
tfs[, tot_occ := sum(indoc), word]
tfs[, totdoc := sum(indoc), text_group_id]
tfs[, tf := indoc / totdoc]
tot.docs <- tdt[, uniqueN(text_group_id)]
idfs <- tdt[, .(ndocs = uniqueN(text_group_id)), word]
idfs[, totdocs := tot.docs]
idfs[, idf := log(totdocs / ndocs)]
tfs[idfs, idf := i.idf, on = 'word']
tfs[, tf_idf := tf * idf]
saveRDS(tfs, replace.file(outfile))
success(state)
return(invisible(tfs))
}
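The arithmetic in `tfidf()` — term frequency as a word's share of its document, idf as the log of total documents over documents containing the word — can be verified on a toy corpus in base R; `docs`, `tf`, and `idf` here are illustrative names, not part of the pipeline:

```r
# toy corpus: two already-tokenised documents
docs <- list(d1 = c("cat", "sat"), d2 = c("cat", "cat", "mat"))
ndocs <- length(docs)
idf <- function(w) log(ndocs / sum(vapply(docs, function(d) w %in% d, logical(1))))
tf  <- function(w, d) sum(docs[[d]] == w) / length(docs[[d]])
tf("cat", "d2") * idf("cat")  # "cat" occurs in every document, so idf = log(1) = 0
tf("mat", "d2") * idf("mat")  # (1/3) * log(2)
```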
aggregate.embeddings <- function(corpus=NULL, model=NULL,
method = opt('aggregate.method'), tfidf=NULL,
infiles = c(corpus = projfile(opt('reformed.corpus')),
model = projfile(opt('model.file')),
idlkup = projfile(opt('text.id.lookup'))),
outfile = projfile(opt('embeddings'))) {
## corpus=NULL; model=NULL; method = opt('aggregate.method'); tfidf=NULL; infiles = c(corpus = projfile(opt('reformed.corpus')), model = projfile(opt('model.file')), idlkup = projfile(opt('text.id.lookup'))); outfile = projfile(opt('embeddings'))
state <- check.state(corpus, model, method, tfidf, infiles, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
if (!(method %in% c('basic', 'tf-idf', 'idf')))
stop('Aggregating method should be one of `basic`, `tf-idf`, `idf`')
cat(sprintf('Aggregating embeddings (%s method)...\n', method))
if (is.null(corpus)) {
if (!file.exists(infiles[['corpus']]))
stop('Corpus not yet cleaned. Call `clean_corpus()` first')
corpus <- readRDS(infiles[['corpus']])
}
if (is.null(model)) {
if (!file.exists(infiles[['model']]))
stop('Unsupervised model not yet trained. Call `unsupervised.training()` first.')
model <- ft_load(infiles[['model']])
} else model <- ft_load(model)
if (method == 'basic') {
l <- lapply(corpus$text, function(x) {
v <- ft_word_vectors(model, unlist(tokenize_fastestword(x)))
colMeans(v)
})
m <- do.call(rbind, l)
rownames(m) <- corpus$text_id
txtvec <- m
} else if (method == 'tf-idf') {
## ** Get tf-idf
if (is.null(tfidf)) {
if (!file.exists(projfile(opt('text.tfidf')))) {
tfidf <- tfidf(corpus, model)
} else tfidf <- readRDS(projfile(opt('text.tfidf')))
}
## ** Weighted average of word vectors based on tf-idf
    toks <- tokenize_fastestword(corpus$text)
    txtvec <- do.call('rbind', mclapply(seq_along(corpus$text), function(i) {
      vec <- ft_word_vectors(model, toks[[i]])
normvec <- apply(vec, 1, function(x) sqrt(sum(x^2)))
normvec <- fifelse(normvec > 0, normvec, 1)
y <- vec / normvec
tfs1 <- tfidf[text_group_id == corpus$text_group_id[i]]
wt <- tfs1[match(rownames(y), word), tf_idf]
return(colSums(y * wt) / sum(wt))
}, mc.cores = 6))
rownames(txtvec) <- corpus$text_id
} else if (method == 'idf') {
## ** Calculate idf
tdt <- corpus[, .(word = unlist(tokenize_fastestword(text))),
.(text_id = as.integer(as.character(text_id)))]
lkup <- readRDS(infiles[['idlkup']])
tdt[lkup, text_group_id := i.text_group_id, on = 'text_id']
tot.docs <- tdt[, uniqueN(text_group_id)]
idfs <- tdt[, .(ndocs = uniqueN(text_group_id)), word]
idfs[, totdocs := tot.docs]
idfs[, idf := log(totdocs / ndocs)]
wv <- ft_word_vectors(model, ft_words(model))
wvdt <- as.data.table(wv, keep.rownames = T)
wvdt <- melt(wvdt, id.vars = 'rn')
wvdt[, variable := as.integer(variable)]
wvdt[, normv := sqrt(sum(value^2)), rn]
wvdt[, y := value / normv]
tdt[idfs, idf := i.idf, on = 'word']
tdt1 <- tdt
wvdt1 <- wvdt[rn %in% unique(tdt1$word)]
tdt1[, chunk := text_id %% 24]
txtvec3 <- do.call(rbind, mclapply(tdt1[, unique(chunk)], function(ch) {
w <- wvdt1[tdt1[chunk == ch], on = c('rn' = 'word'), allow.cartesian=T, nomatch=0]
w[, .(value = sum(y * idf) / sum(idf)), .(variable, text_id)]
}, mc.cores = 3))
txtvec <- dcast(txtvec3, text_id ~ variable)
txtvec <- as.matrix(txtvec, rownames=1)
} else stop('Method not recognised')
saveRDS(txtvec, replace.file(outfile))
success(state)
return(invisible(txtvec))
}
aggregate.embeddings.docs <- function(embeddings=NULL, idlkup=NULL,
infiles = c(embeddings = projfile(opt('embeddings')),
idlkup = projfile(opt('text.id.lookup'))),
outfile = projfile(opt('embeddings.docs'))) {
## embeddings=NULL; infiles = c(embeddings = projfile(opt('embeddings')), idlkup = projfile(opt('text.id.lookup'))); outfile = projfile(opt('embeddings.docs'))
state <- check.state(embeddings, infiles, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
cat('Aggregating embeddings by document...\n')
if (is.null(embeddings)) {
if (!file.exists(infiles[['embeddings']]))
stop('Embeddings not yet calculated. Call `aggregate.embeddings()` first')
embs <- readRDS(infiles[['embeddings']])
}
if (is.null(idlkup)) {
idlkup <- readRDS(infiles[['idlkup']])
}
idlkup <- idlkup[match(as.integer(rownames(embs)), text_id)]
stopifnot(identical(idlkup$text_id, as.integer(rownames(embs))))
e <- as.data.table(embs, keep.rownames=T)
e <- melt(e, id.vars = 'rn')
e[, rn := as.integer(rn)]
e[idlkup, text_group_id := i.text_group_id, on = c('rn' = 'text_id')]
e <- e[, mean(value), .(text_group_id, variable)]
e <- dcast(e, text_group_id ~ variable, value.var = 'V1')
e <- as.matrix(e, rownames = 'text_group_id')
saveRDS(e, replace.file(outfile))
}
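The melt/dcast round-trip in `aggregate.embeddings.docs()` is just a per-group mean of embedding rows; a base-R sketch of the same operation on a toy matrix (names `emb`, `grp`, `agg` are illustrative):

```r
emb <- rbind(t1 = c(1, 2), t2 = c(3, 4), t3 = c(5, 6))
grp <- c(t1 = "g1", t2 = "g1", t3 = "g2")
# per-group sums divided by group sizes give the group means
agg <- rowsum(emb, grp[rownames(emb)]) / as.vector(table(grp[rownames(emb)]))
agg["g1", ]  # mean of rows t1 and t2: 2 3
```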
calc.distances <- function(embeddings = NULL, corpus=NULL,
infiles = c(embs = projfile(opt('embeddings')),
corp = projfile(opt('corpus.imported'))),
outfile = projfile(opt('doc.distances'))) {
## embeddings = NULL; corpus=NULL; infiles = c(embs = projfile(opt('embeddings')), corp = projfile(opt('corpus.imported'))); outfile = projfile(opt('tree.data'))
state <- check.state(embeddings, corpus, infiles, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
if (is.null(embeddings)) {
if (!file.exists(infiles[['embs']]))
stop('Embeddings not yet aggregated. Call `aggregate.embeddings()` first')
embs <- readRDS(infiles[['embs']])
} else embs <- embeddings
if (is.null(corpus)) {
if (!file.exists(infiles[['corp']]))
stop('Corpus not yet created.')
corpus <- readRDS(infiles[['corp']])
}
Matrix <- as.matrix(embs)
sim <- Matrix / sqrt(rowSums(Matrix * Matrix))
sim <- sim %*% t(sim)
D_sim <- as.dist(1 - sim)
dm <- D_sim
saveRDS(dm, replace.file(outfile))
success(state)
return(invisible(dm))
}
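`calc.distances()` builds cosine distance by L2-normalising each row and subtracting the Gram matrix from 1; a minimal base-R check on a toy embedding matrix:

```r
emb <- rbind(a = c(1, 0), b = c(0, 2), c = c(3, 0))
s <- emb / sqrt(rowSums(emb * emb))          # unit-normalise each row
cosdist <- as.matrix(as.dist(1 - s %*% t(s)))
cosdist["a", "c"]  # same direction -> distance 0
cosdist["a", "b"]  # orthogonal vectors -> distance 1
```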
most.similar <- function(distances = NULL, corpus = NULL, n.simils = 5L,
infiles = c(distances = projfile(opt('doc.distances')),
corp = projfile(opt('corpus.imported'))),
outfile = projfile(opt('doc.simils'))) {
state <- check.state(distances, n.simils, infiles, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
if (is.null(distances)) {
if (!file.exists(infiles[['distances']]))
stop('Distances not yet calculated. Call `calc.distances()` first')
dists <- readRDS(infiles[['distances']])
} else dists <- distances
dists <- as.matrix(dists)
if (is.null(corpus)) {
if (!file.exists(infiles[['corp']]))
stop('Corpus not yet created.')
corpus <- readRDS(infiles[['corp']])
}
rownames(dists) <- corpus[match(as.integer(rownames(dists)), text_id),
sub('\\.md$', '', text_group_id)]
colnames(dists) <- corpus[match(as.integer(colnames(dists)), text_id),
sub('\\.md$', '', text_group_id)]
most.simils <- sapply(rownames(dists), function(x) {
names(head(sort(dists[x,]), n.simils+1L)[-1])
}, simplify=F)
  out <- jsonlite::write_json(most.simils, replace.file(outfile))
success(state)
return(invisible(out))
}
deck2d <- function(d, xvar, yvar, labvar=NULL, colvar=NULL, zoom = 3, width = 1400, height = 800,
opacity = 0.5) {
library(deckgl)
if (!is.data.table(d)) {
d <- as.data.table(d)
}
if (is.null(colvar)) {
d[, col := "#5B3456"]
} else d[, col := get(colvar)]
properties <- list(
getPosition = get_position(xvar, yvar),
getColor = get_color_to_rgb_array('col'),
getFillColor = get_color_to_rgb_array('col')
)
if (!is.null(labvar)) {
d[, lab := get(labvar)]
properties$getTooltip = JS("object =>`${object.lab}`")
}
xm <- d[, median(get(xvar), na.rm=T)]
ym <- d[, median(get(yvar), na.rm=T)]
deck <- deckgl(latitude = ym, longitude = xm, zoom = zoom, width=width, height=height,
style = list(background = "#F5F5F5")) %>%
add_scatterplot_layer(data = d,
properties = properties,
getRadius = 500,
radiusScale = 18,
radiusMinPixels = 3,
radiusMaxPixels = 10,
opacity = opacity
)
deck
}
umap.plot <- function(distances = NULL, corpus = NULL,
infiles = c(distances = projfile(opt('doc.distances')),
corp = projfile(opt('corpus.imported'))),
outfile = projfile(opt('umap.plot'))) {
state <- check.state(distances, corpus, infiles, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
if (is.null(distances)) {
if (!file.exists(infiles[['distances']]))
stop('Distances not yet calculated. Call `calc.distances()` first')
dists <- readRDS(infiles[['distances']])
} else dists <- distances
dists <- as.matrix(dists)
if (is.null(corpus)) {
if (!file.exists(infiles[['corp']]))
stop('Corpus not yet created.')
corpus <- readRDS(infiles[['corp']])
}
corpus[, text_group_id := gsub('\\.md$', '', text_group_id)]
labs <- corpus[match(as.numeric(rownames(dists)), text_id),
sprintf('<b>%s</b><br><i>%s</i><br>%s',
text_group_id, title, text)]
u <- uwot::tumap(dists)
d <- as.data.table(u)
d[, lab := labs]
deck <- deck2d(d, 'V1', 'V2', 'lab', zoom = 5)
htmlwidgets::saveWidget(deck, replace.file(outfile))
success(state)
return(invisible(deck))
}
make.tree <- function(distances = NULL, corpus = NULL, square.edges = opt('square.edges'),
infiles = c(distances = projfile(opt('doc.distances')),
corp = projfile(opt('corpus.imported'))),
outfile = projfile(opt('tree'))) {
state <- check.state(distances, corpus, square.edges, infiles, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
if (is.null(distances)) {
if (!file.exists(infiles[['distances']]))
stop('Distances not yet calculated. Call `calc.distances()` first')
dists0 <- readRDS(infiles[['distances']])
} else dists0 <- distances
dists <- as.matrix(dists0)
if (is.null(corpus)) {
if (!file.exists(infiles[['corp']]))
stop('Corpus not yet created.')
corpus <- readRDS(infiles[['corp']])
}
corpus[, text_group_id := gsub('\\.md$', '', text_group_id)]
library(ape)
## library(phangorn)
labs <- corpus[match(as.numeric(rownames(dists)), text_id),
sprintf('<b>%s</b><br><i>%s</i><br>%s',
text_group_id, title, text)]
tree <- bionj(dists)
tree$tip.label <- corpus[match(as.numeric(tree$tip.label), text_id), sub('\\.md$', '', text_group_id)]
edges <- as.data.table(tree$edge)
setnames(edges, c('from', 'to'))
edges[, eid := .I]
tips <- data.table(label = tree$tip.label)[, id := .I]
edges[, to.tip := to %in% tips$id]
## ** Get coordinates without plotting
p <- plot2(tree, type = 'phylogram', plot = F,
show.tip.label = F, show.node.label = F,
edge.color = 'black', tip.color = 'red', cex = 0,
root.edge = F, use.edge.length = T,
no.margin = T, direction = 'rightwards')
ed <- data.table(p$coords$edge)
xy <- data.table(x = p$coords$xx, y = p$coords$yy)
xy[, id := .I]
edges[xy, `:=`(from.x = i.x, from.y = i.y), on = c('from' = 'id')]
edges[xy, `:=`(to.x = i.x, to.y = i.y), on = c('to' = 'id')]
## ** Make square segments
if (square.edges) {
e <- edges[from.x != to.x & from.y != to.y]
ed <- e[, .(to.x = c(from.x, to.x), from.y = c(from.y, to.y), to.tip = c(F, first(to.tip)),
extra = c(T, F))
, .(from, to, eid)]
ed <- merge(ed, e[, c('eid', setdiff(names(e), names(ed))), with=F], by = 'eid')
ed <- rbind(ed, edges[!(from.x != to.x & from.y != to.y)], fill = T)
} else ed <- copy(edges)
te <- copy(ed)
te[, seq_id := rowid(eid)]
## ** Deck tree
te[, col := '#43A1C9']
te[tips, text_group_id := i.label, on = c('to' = 'id')]
te[corpus, `:=`(title = i.title, text = i.text), on = 'text_group_id']
setnames(te, c('title', 'text'), c('pubtitle', 'pubtext'))
deck <- tree2deck(tedges=te, edges.colour.by = 'col', points.colour.by = 'col', tooltip = T,
edge.vars = c('eid'),
edge.lab.glue = '',
                    point.vars = c('eid', 'pubtitle', 'text_group_id', 'pubtext'),
node.lab.glue = '{eid}: <b>{text_group_id}</b><br><i>{pubtitle}</i><br>{pubtext}')
htmlwidgets::saveWidget(deck, replace.file(outfile))
success(state)
return(invisible(deck))
}
make.tree2 <- function(distances = NULL, corpus = NULL,
infiles = c(distances = projfile(opt('doc.distances')),
corp = projfile(opt('corpus.imported'))),
outfile = projfile(opt('tree2'))) {
state <- check.state(distances, corpus, infiles, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
if (is.null(distances)) {
if (!file.exists(infiles[['distances']]))
stop('Distances not yet calculated. Call `calc.distances()` first')
dists <- readRDS(infiles[['distances']])
} else dists <- distances
dists <- as.matrix(dists)
if (is.null(corpus)) {
if (!file.exists(infiles[['corp']]))
stop('Corpus not yet created.')
corpus <- readRDS(infiles[['corp']])
}
corpus[, text_group_id := gsub('\\.md$', '', text_group_id)]
library(visNetwork)
rownames(dists) <- corpus[match(as.numeric(rownames(dists)), text_id), sub('\\.md$', '', text_group_id)]
colnames(dists) <- corpus[match(as.numeric(colnames(dists)), text_id), sub('\\.md$', '', text_group_id)]
  g <- igraph::graph_from_adjacency_matrix(dists, mode = 'directed', weighted = TRUE, diag = FALSE)
## g <- simplify(g, remove.multiple=TRUE, remove.loops=TRUE)
## E(g)$weight <- abs(E(g)$weight)
## g <- delete_edges(g, E(g)[which(E(g)$weight<0.8)])
## g <- delete.vertices(g, degree(g)==0)
mst <- igraph::mst(g, algorithm="prim")
## visIgraph(mst)
visnet <- toVisNetworkData(mst)
visnet$nodes$title <- corpus[match(visnet$nodes$id, sub('\\.md$', '', text_group_id)),
sprintf('<b>%s</b><br><i>%s</i><br>%s', text_group_id, title, text)]
  vis <- visNetwork(nodes=visnet$nodes, edges=visnet$edges, width='1600px', height='1000px') %>%
## visIgraphLayout(layout = 'layout_in_circle') %>%
## visLayout(hierarchical = T) %>%
visHierarchicalLayout() %>%
visEdges(width = 5, color = "#9DC4A9") %>%
visNodes(size = 20, color = "#50AD8577") %>%
## visHierarchicalLayout() %>%
## visClusteringByHubsize(size = 3) %>%
visOptions(highlightNearest = F, #list(hover=T),
collapse = F, clickToUse = F) %>%
visPhysics(stabilization = F, #maxVelocity = 5000, minVelocity = 0.01, timestep=0.8,
## repulsion = list(nodeDistance = 2000),
## solver = 'hierarchicalRepulsion',
hierarchicalRepulsion = list(nodeDistance = 20
## , centralGravity = -0.01
, springLength = 1
, springConstant = 0.00001
))
htmlwidgets::saveWidget(vis, replace.file(outfile))
success(state)
return(invisible(vis))
}
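## make.tree2() reduces the dense distance graph to a minimum spanning tree
## via igraph's Prim algorithm. The idea can be sketched without igraph;
## prim_mst below is a hypothetical base-R helper, not part of this project:

```r
## Prim's algorithm on a symmetric distance matrix (base-R sketch).
## Returns a data.frame of MST edges; toy data only.
prim_mst <- function(d) {
  n <- nrow(d)
  in_tree <- c(TRUE, rep(FALSE, n - 1))
  edges <- data.frame(from = integer(), to = integer(), w = numeric())
  while (sum(in_tree) < n) {
    ## cheapest edge crossing the (tree, non-tree) cut
    sub <- d[in_tree, !in_tree, drop = FALSE]
    k <- which(sub == min(sub), arr.ind = TRUE)[1, ]
    from <- which(in_tree)[k['row']]
    to <- which(!in_tree)[k['col']]
    edges <- rbind(edges, data.frame(from = from, to = to, w = d[from, to]))
    in_tree[to] <- TRUE
  }
  edges
}
d <- as.matrix(dist(c(0, 1, 3, 7)))  # 4 points on a line
mst <- prim_mst(d)
sum(mst$w)
```

## For four points on a line the MST is the chain of consecutive gaps, so the
## total weight is 1 + 2 + 4 = 7.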
do.umap <- function(embeddings = NULL, dims = opt('dims', 2L), pkg = opt('umap.pkg','uwot'),
normalise = opt('normalise',F),
infiles = projfile(opt('embeddings')),
outfile = projfile(opt('embeddings.umap'))) {
## embeddings = NULL; dims = opt('dims', 2L); pkg = opt('umap.pkg','uwot'); normalise = opt('normalise',F); infiles = projfile(opt('embeddings')); outfile = projfile(opt('embeddings.umap')); run_suff=NULL
## embeddings = NULL; dims = 2L; pkg = opt('umap.pkg','uwot'); normalise = opt('normalise',F); infiles = projfile(opt('embeddings')); outfile = projfile(opt('embeddings.umap')); run_suff=NULL
state <- check.state(dims, pkg, normalise, infiles, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
cat(sprintf('UMAP fitting to %i dimensions...\n', dims))
if (is.null(embeddings)) {
if (!file.exists(infiles))
stop('Embeddings not yet aggregated. Call `aggregate.embeddings()` first')
embs <- readRDS(infiles)
} else embs <- embeddings
if (normalise) {
normvec <- apply(embs, 1, function(x) sqrt(sum(x^2)))
normvec <- fifelse(normvec > 0, normvec, 1)
embs <- embs / normvec
}
if (pkg == 'umap') {
library(umap)
custom.config <- umap.defaults
custom.config$n_components <- dims
custom.config$n_neighbors <- opt('umap.n.neighbours')
custom.config$n_epochs <- opt('umap.n.epochs')
custom.config$spread <- opt('umap.spread')
um <- umap::umap(embs, custom.config, method = 'umap-learn')
dt <- as.data.frame(um$layout)
} else if (pkg == 'uwot') {
print(system.time({
um <- uwot::umap(embs, n_components = dims,
n_neighbors = opt('umap.n.neighbours'),
n_epochs = opt('umap.n.epochs'),
n_threads = getOption('mc.cores'),
n_sgd_threads = 'auto')
}))
dt <- as.data.frame(um)
} else stop('Only `umap` and `uwot` packages are supported')
setDT(dt)
setnames(dt, paste0('umap_v', seq_len(ncol(dt))))
dt[, text_id := rownames(embs)]
saveRDS(dt, replace.file(outfile))
success(state)
return(invisible(dt))
}
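## The normalise branch above rescales every embedding row to unit L2 length,
## leaving all-zero rows untouched so no division by zero occurs. A base-R
## sketch of the same guard (toy 2-D embeddings; ifelse stands in for
## data.table's fifelse):

```r
## Row-wise L2 normalisation as used in do.umap(), with the zero-row guard.
embs <- rbind(c(3, 4), c(0, 0), c(1, 1))
normvec <- apply(embs, 1, function(x) sqrt(sum(x^2)))
normvec <- ifelse(normvec > 0, normvec, 1)  # leave all-zero rows untouched
normed <- embs / normvec                    # recycles down columns: row i / normvec[i]
apply(normed, 1, function(x) sqrt(sum(x^2)))
```

## Rows 1 and 3 now have norm 1; the zero row stays all zeros instead of
## becoming NaN.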
do.umap.2d <- function(embeddings = NULL, dims = opt('dims', 2L), pkg = opt('umap.pkg','uwot'),
normalise = opt('normalise',F),
infiles = projfile(opt('embeddings.umap')),
outfile = projfile(opt('embeddings.umap.2d'))) {
## embeddings = NULL; dims = opt('dims', 2L); pkg = opt('umap.pkg','uwot'); normalise = opt('normalise',F); infiles = projfile(opt('embeddings')); outfile = projfile(opt('embeddings.umap')); run_suff=NULL
## embeddings = NULL; dims = 2L; pkg = opt('umap.pkg','uwot'); normalise = opt('normalise',F); infiles = projfile(opt('embeddings.umap')); outfile = projfile(opt('embeddings.umap.2d')); run_suff=NULL
state <- check.state(dims, pkg, normalise, infiles, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
cat(sprintf('UMAP fitting to %i dimensions...\n', dims))
if (is.null(embeddings)) {
if (!file.exists(infiles))
stop('Embeddings not yet aggregated. Call `aggregate.embeddings()` first')
embs <- readRDS(infiles)
embs <- as.matrix(embs, rownames = 'text_id')
} else embs <- embeddings
if (normalise) {
normvec <- apply(embs, 1, function(x) sqrt(sum(x^2)))
normvec <- fifelse(normvec > 0, normvec, 1)
embs <- embs / normvec
}
if (pkg == 'umap') {
library(umap)
custom.config <- umap.defaults
custom.config$n_components <- dims
custom.config$n_neighbors <- opt('umap.n.neighbours')
custom.config$n_epochs <- opt('umap.n.epochs')
custom.config$spread <- opt('umap.spread')
um <- umap::umap(embs, custom.config, method = 'umap-learn')
dt <- as.data.frame(um$layout)
} else if (pkg == 'uwot') {
print(system.time({
um <- uwot::umap(embs, n_components = dims,
n_neighbors = opt('umap.n.neighbours'),
n_epochs = opt('umap.n.epochs'),
n_threads = getOption('mc.cores'),
n_sgd_threads = 'auto')
}))
dt <- as.data.frame(um)
} else stop('Only `umap` and `uwot` packages are supported')
setDT(dt)
setnames(dt, paste0('umap_v', seq_len(ncol(dt))))
dt[, text_id := rownames(embs)]
saveRDS(dt, replace.file(outfile))
success(state)
return(invisible(dt))
}
do.umap.2d.doc <- function(embeddings = NULL, dims = opt('dims', 2L), pkg = opt('umap.pkg','uwot'),
n_neighbors = opt('umap.n.neighbours'),
n_epochs = opt('umap.n.epochs'),
normalise = opt('normalise',F),
infiles = projfile(opt('embeddings.docs')),
outfile = projfile(opt('embeddings.docs.umap.2d'))) {
## embeddings = NULL; dims = opt('dims', 2L); pkg = opt('umap.pkg','uwot'); normalise = opt('normalise',F); infiles = projfile(opt('embeddings.docs')); outfile = projfile(opt('embeddings.docs.umap.2d'))
state <- check.state(dims, pkg, n_neighbors, n_epochs, normalise, infiles, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
cat(sprintf('UMAP fitting to %i dimensions by document...\n', dims))
if (is.null(embeddings)) {
if (!file.exists(infiles))
stop('Embeddings not yet aggregated. Call `aggregate.embeddings()` first')
embs <- readRDS(infiles)
} else embs <- embeddings
if (normalise) {
normvec <- apply(embs, 1, function(x) sqrt(sum(x^2)))
normvec <- fifelse(normvec > 0, normvec, 1)
embs <- embs / normvec
}
if (pkg == 'umap') {
library(umap)
custom.config <- umap.defaults
custom.config$n_components <- dims
custom.config$n_neighbors <- n_neighbors
custom.config$n_epochs <- n_epochs
custom.config$spread <- opt('umap.spread')
um <- umap::umap(embs, custom.config, method = 'umap-learn')
dt <- as.data.frame(um$layout)
} else if (pkg == 'uwot') {
print(system.time({
um <- uwot::umap(embs, n_components = dims,
n_neighbors = n_neighbors,
n_epochs = n_epochs,
n_threads = getOption('mc.cores'),
n_sgd_threads = 'auto')
}))
dt <- as.data.frame(um)
} else stop('Only `umap` and `uwot` packages are supported')
setDT(dt)
setnames(dt, paste0('umap_v', seq_len(ncol(dt))))
dt[, text_group_id := rownames(embs)]
saveRDS(dt, replace.file(outfile))
success(state)
return(invisible(dt))
}
clustering <- function(umapdt = NULL, eps = opt('eps'),
infile = projfile(opt('embeddings.umap')),
outfile = projfile(opt('text.clusters'))) {
state <- check.state(umapdt, eps, infile, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
cat('Clustering of embeddings...\n')
if (is.null(umapdt)) {
if (!file.exists(infile))
stop('UMAP not yet performed. Call `do.umap()` first')
umapdt <- readRDS(infile)
}
dt <- umapdt
dt_matrix <- as.matrix(dt[, -'text_id'])
  k <- pmax(round(0.001 * nrow(dt)), 3) # number of neighbours (at least 3)
cat('\tCalculating KNN distances...\n')
knn.norm <- get.knn(dt_matrix, k = k)
knn.norm <- data.frame(from = rep(1:nrow(knn.norm$nn.index), k),
to = as.vector(knn.norm$nn.index), weight = 1/(1 + as.vector(knn.norm$nn.dist)))
cat('\tCalculating graph from data...\n')
nw.norm <- graph_from_data_frame(knn.norm, directed = FALSE)
cat('\tSimplifying graph...\n')
nw.norm <- simplify(nw.norm)
cat('\tFinding clusters...\n')
lc.norm <- cluster_louvain(nw.norm)
dt[, cluster := as.integer(membership(lc.norm))]
resdt <- dt[, .(text_id, cluster)]
  cat('\tPoint density...\n')
if (is.null(eps))
eps <- 0.3
resdt[, dens := pointdensity(dt_matrix, eps, type = 'frequency')]
cat(resdt[, sprintf('%i clusters found. %0.1f%% of text elements not in clusters\n',
uniqueN(cluster), 100 * mean(cluster == 0))])
saveRDS(resdt, replace.file(outfile))
success(state)
return(invisible(resdt))
}
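## clustering() connects each point to its k nearest neighbours and weights
## edges by 1/(1 + distance) before Louvain community detection. The
## neighbour search and weighting can be sketched in base R, without the
## FNN and igraph dependencies (toy points, k = 1):

```r
## k-nearest-neighbour edges with 1/(1 + distance) weights, mirroring the
## graph construction in clustering(). Base R only.
pts <- matrix(c(0, 0,  0, 1,  5, 5,  5, 6), ncol = 2, byrow = TRUE)
k <- 1L
d <- as.matrix(dist(pts))
diag(d) <- Inf  # a point is not its own neighbour
nn <- apply(d, 1, function(row) order(row)[seq_len(k)])
from <- rep(seq_len(nrow(pts)), each = k)
to <- as.vector(nn)
edges <- data.frame(from = from, to = to,
                    weight = 1 / (1 + d[cbind(from, to)]))
edges
```

## Each point's single neighbour lies at distance 1, so every edge weight is
## 1/(1 + 1) = 0.5.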
label.clusters <- function(textclusters=NULL, corpus=NULL, umap2d=NULL,
add.coords = opt('create.map'), max.ngram = opt('max.ngram'),
n.tokens = opt('lab.n.tokens'), n.tokens.max = opt('lab.n.tokens.max'),
sep = opt('lab.sep'),
infiles = c(txt_cl = projfile(opt('text.clusters')),
corp = projfile(opt('reformed.corpus')),
um2d = projfile(opt('embeddings.umap.2d'))),
outfile = projfile(opt('clusters'))) {
state <- check.state(textclusters, corpus, umap2d, add.coords, max.ngram, n.tokens, n.tokens.max, sep,
infiles, outfile = outfile)
if (state$status == 0) return(invisible(state$output))
cat('Clusters labelling...\n')
if (is.null(textclusters)) {
if (!file.exists(infiles[['txt_cl']]))
stop('Clustering not yet performed. Call `clustering()` first')
textclusters <- readRDS(infiles[['txt_cl']])
}
if (is.null(corpus)) {
if (!file.exists(infiles[['corp']]))
stop('Corpus not yet created.')
corpus <- readRDS(infiles[['corp']])
}
corp <- quanteda::corpus(corpus, docid_field = 'text_id', text_field = 'text')
## ** Get tokens
toksn <- tokens(corp) %>%
tokens_remove('^[0-9]+$', valuetype = 'regex', verbose=T) %>%
tokens_select(min_nchar = 3) %>%
tokens_remove(c(stopwords('english'), letters))
## tokens_compound(phrase(c("new zealand", "united kingdom", "abortion law", "crime act", "crimes act",
## "law commission", "zea lander", "human life", "current law",
## "law change", "proposed law", "abortion service",
## "right life", "pregnant woman", "woman abortion",
## "zea land", "new zea land", "new zea lander", "new zea lander s",
## "de criminalise", "mental health", "unborn child", "et al",
## "legal issue", "health issue", "health care", "health system",
## "moral issue", "woman s", "woman 's", "p re",
## "mother's", "mother 's", "late term", "don t",
## "safe area", "safe areas"))) %>%
## tokens_compound('[0-9]+[ -]weeks?', valuetype = 'regex') %>%
## tokens_compound('[0-9]+[- ]months?', valuetype = 'regex')
## ** Remove sequentially duplicated words
tl <- as.list(toksn)
tl <- rapply(tl, function(x) x[!(shift(x, fill='') == x)], how = 'replace')
toksn <- as.tokens(tl)
## ** N-grams
toks12 <- tokens_ngrams(toksn, seq_len(max.ngram))
tl <- as.list(toks12)
tdt <- utils::stack(tl)
setDT(tdt)
setnames(tdt, c('word', 'text_id'))
tdt[textclusters, cluster := i.cluster, on = 'text_id']
## ** Tf-Idf
tfs <- tdt[, .(indoc = .N), .(word, cluster)]
tfs[, tot_occ := sum(indoc), word]
tfs[, totdoc := sum(indoc), cluster]
tfs[, tf := indoc / totdoc]
tot.docs <- tdt[, uniqueN(cluster)]
idfs <- tdt[, .(ndocs = uniqueN(cluster)), word]
idfs[, totdocs := tot.docs]
idfs[, idf := log(totdocs / ndocs)]
tfs[idfs, idf := i.idf, on = 'word']
tfs[, tf_idf := tf * idf]
setorder(tfs, cluster, -tf_idf)
tfsel <- tfs[, head(.SD, n.tokens.max), cluster] # only consider top tokens
## ** Remove 1-grams that are part of n-grams within best tokens
tfsel[, within_others := {
words <- .SD[, word]
sapply(words, function(w) any(grepl(w, setdiff(words, w), fixed = T)))
}, cluster]
tfsel <- tfsel[within_others == F]
## ** Get label for each cluster
labels <- tfsel[, .(label = paste(head(word, n.tokens), collapse=sep)), cluster]
labels <- merge(labels,
textclusters[, .(dens=mean(dens), n_txt = .N), cluster], by = 'cluster', sort=F)
setorder(labels, -dens)
labels[, cluster_i := .I]
labels[cluster == 0L, cluster_i := 0L]
## ** Add coordinates (run umap on 2D if necessary)
if (add.coords) {
if (is.null(umap2d)) {
umap2d <- readRDS(infiles[['um2d']])
}
umap2d[textclusters, cluster := i.cluster, on = 'text_id']
clcoords <- umap2d[, .(x = mean(umap_v1), y = mean(umap_v2)), cluster]
labels[clcoords, `:=`(x = i.x, y = i.y), on = 'cluster']
}
saveRDS(labels, replace.file(outfile))
success(state)
return(invisible(labels))
}
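## label.clusters() scores candidate n-grams with a tf-idf computed over
## clusters rather than documents: tf is a term's share of tokens within a
## cluster, idf penalises terms that occur in many clusters. A toy base-R
## version of the same quantities:

```r
## Cluster-level tf-idf as in label.clusters().
toks <- data.frame(
  word = c('law', 'law', 'health', 'law', 'court', 'court'),
  cluster = c(1, 1, 1, 2, 2, 2))
tab <- table(toks$cluster, toks$word)
tf <- tab / rowSums(tab)                   # term share within each cluster
idf <- log(nrow(tab) / colSums(tab > 0))   # rarity across clusters
tfidf <- sweep(tf, 2, idf, `*`)
round(tfidf, 3)
```

## 'law' appears in both clusters, so its idf (and tf-idf) is zero; 'health'
## labels cluster 1 and 'court' labels cluster 2.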
plot.clusters <- function(clusters=NULL, width=opt('plot.width'), height=opt('plot.height'),
infile = projfile(opt('clusters')),
outfile = projfile(opt('plot.file'))) {
state <- check.state(clusters, width, height, infile, outfile = outfile, readfun = NULL)
if (state$status == 0) return(invisible(state$output))
if (is.null(clusters)) {
if (!file.exists(infile))
stop('Clusters not formed yet.')
clusters <- readRDS(infile)
}
if (!all(c('x', 'y') %in% names(clusters))) {
    warning('Cluster coordinates not in data. Specify `add.coords=TRUE` in `label.clusters()` or set `create.map = TRUE` in project.options. Not plotting.\n')
return()
}
cat('Plotting clusters...\n')
clusters[, lab := gsub('_', ' ', gsub(', ', '\n', label))]
clusters[, col := fifelse(cluster != 0, rev(gplots::rich.colors(nrow(clusters))), '#999999FF')]
library(ggplot2)
library(ggrepel)
g <- ggplot(clusters, aes(x = x, y = y, size = n_txt, label = lab, colour = col)) +
geom_point() +
scale_fill_identity() +
geom_label_repel(size = 2, fill = NA, lineheight = 0.8) +
theme_void() +
theme(legend.position = 'none')
ggsave(replace.file(outfile), g, width=width, height=height)
success(state)
return(invisible(g))
}
make.deck <- function(clusters=NULL, text_clusters=NULL, umap2d=NULL, corpus=NULL,
p.shown=opt('deck.p.shown'),
zoom = opt('deck.zoom'), width = opt('deck.width'), height = opt('deck.height'),
opacity = opt('deck.opacity'),
deck_labelling = opt('deck.labelling'),
infiles = c(clust = projfile(opt('clusters')),
txtcl = projfile(opt('text.clusters')),
um2d = projfile(opt('embeddings.umap.2d')),
corp = projfile(opt('corpus.imported'))),
outfile = projfile(opt('deck.file'))) {
state <- check.state(clusters, text_clusters, umap2d, corpus, p.shown, zoom, width,
height, opacity, deck_labelling, infiles,
outfile = outfile, readfun = NULL)
if (state$status == 0) return(invisible(state$output))
if (is.null(clusters)) {
clusters <- readRDS(infiles[['clust']])
}
if (is.null(text_clusters)) {
text_clusters <- readRDS(infiles[['txtcl']])
}
if (is.null(umap2d)) {
umap2d <- readRDS(infiles[['um2d']])
}
if (is.null(corpus)) {
corpus <- readRDS(infiles[['corp']])
}
if (p.shown < 1) {
corpus <- corpus[sample(seq_len(.N), p.shown*.N)]
}
text_clusters[, text_id := as.integer(text_id)]
corpus[text_clusters, cluster := i.cluster, on = 'text_id']
umap2d[, text_id := as.integer(text_id)]
corpus[umap2d, `:=`(x = i.umap_v1, y = i.umap_v2), on = 'text_id']
clusters[, col := fifelse(cluster != 0, rev(gplots::rich.colors(nrow(clusters))), '#999999FF')]
corpus[clusters, `:=`(cluster_i = i.cluster_i, cl_label = i.label, col = i.col), on = 'cluster']
corpus[, lab := glue::glue(deck_labelling, .envir=.SD)]
dat <- corpus[, .(x, y, lab, col)]
dat <- dat[is.finite(x) & is.finite(y)]
library(deckgl)
properties <- list(
getPosition = get_position('x', 'y'),
getColor = get_color_to_rgb_array('col'),
getFillColor = get_color_to_rgb_array('col'),
getTooltip = JS("object =>`${object.lab}`")
)
xm <- dat[, median(x, na.rm=T)]
ym <- dat[, median(y, na.rm=T)]
deck <- deckgl(latitude = ym, longitude = xm, zoom = zoom, width=width, height=height,
style = list(background = "#F5F5F5")) %>%
add_scatterplot_layer(data = dat,
properties = properties,
getRadius = 500,
radiusScale = 18,
radiusMinPixels = 3,
radiusMaxPixels = 10,
opacity = opacity
)
htmlwidgets::saveWidget(deck, replace.file(basename(outfile)))
invisible(replace.file(outfile))
file.copy(basename(outfile), dirname(outfile), overwrite=T)
success(state)
return(deck)
}
make.deck.docs <- function(clusters=NULL, text_clusters=NULL, umap2d=NULL, corpus=NULL,
p.shown=opt('deck.p.shown'),
zoom = opt('deck.zoom'), width = opt('deck.width'), height = opt('deck.height'),
opacity = opt('deck.opacity'),
deck_labelling = opt('deck.labelling'),
infiles = c(clust = projfile(opt('clusters')),
txtcl = projfile(opt('text.clusters')),
um2d = projfile(opt('embeddings.docs.umap.2d')),
corp = projfile(opt('corpus.imported'))),
outfile = projfile(opt('deck.docs.file'))) {
## clusters=NULL; text_clusters=NULL; umap2d=NULL; corpus=NULL; p.shown=opt('deck.p.shown'); zoom = opt('deck.zoom'); width = opt('deck.width'); height = opt('deck.height'); opacity = opt('deck.opacity'); deck_labelling = opt('deck.labelling'); infiles = c(clust = projfile(opt('clusters')), txtcl = projfile(opt('text.clusters')), um2d = projfile(opt('embeddings.docs.umap.2d')), corp = projfile(opt('corpus.imported'))); outfile = projfile(opt('deck.file'))
state <- check.state(clusters, text_clusters, umap2d, corpus, p.shown, zoom, width,
height, opacity, deck_labelling, infiles,
outfile = outfile, readfun = NULL)
if (state$status == 0) return(invisible(state$output))
if (is.null(clusters)) {
clusters <- readRDS(infiles[['clust']])
}
if (is.null(text_clusters)) {
text_clusters <- readRDS(infiles[['txtcl']])
}
text_clusters[clusters, cluster_i := i.cluster_i, on = 'cluster']
if (is.null(umap2d)) {
umap2d <- readRDS(infiles[['um2d']])
}
if (is.null(corpus)) {
corpus <- readRDS(infiles[['corp']])
}
if (p.shown < 1) {
corpus <- corpus[sample(seq_len(.N), p.shown*.N)]
}
text_clusters[, text_id := as.integer(text_id)]
text_clusters[corpus, text_group_id := i.text_group_id, on = 'text_id']
tcl <- text_clusters[, .(.N, dens = max(dens)), .(text_group_id, cluster_i)]
setorder(tcl, text_group_id, -N)
doc_clusters <- tcl[, .(cluster_i = cluster_i[1L], dens = dens[1L]), text_group_id]
corpus <- corpus[, .(text = paste(text, collapse='<br>')), .(text_group_id, asset_id, source)]
corpus[doc_clusters, `:=`(cluster_i = i.cluster_i),
on = 'text_group_id']
umap2d[, text_group_id := as.integer(text_group_id)]
corpus[umap2d, `:=`(x = i.umap_v1, y = i.umap_v2), on = 'text_group_id']
clusters[, col := fifelse(cluster != 0, rev(gplots::rich.colors(nrow(clusters))), '#999999FF')]
corpus[clusters, `:=`(cl_label = i.label, col = i.col), on = 'cluster_i']
corpus[, lab := glue::glue(deck_labelling, .envir=.SD)]
dat <- corpus[, .(x, y, lab, col)]
dat <- dat[is.finite(x) & is.finite(y)]
library(deckgl)
properties <- list(
getPosition = get_position('x', 'y'),
getColor = get_color_to_rgb_array('col'),
getFillColor = get_color_to_rgb_array('col'),
getTooltip = JS("object =>`${object.lab}`")
)
xm <- dat[, median(x, na.rm=T)]
ym <- dat[, median(y, na.rm=T)]
deck <- deckgl(latitude = ym, longitude = xm, zoom = zoom, width=width, height=height,
style = list(background = "#F5F5F5")) %>%
add_scatterplot_layer(data = dat,
properties = properties,
getRadius = 500,
radiusScale = 18,
radiusMinPixels = 3,
radiusMaxPixels = 10,
opacity = opacity
)
htmlwidgets::saveWidget(deck, replace.file(basename(outfile)))
invisible(replace.file(outfile))
file.copy(basename(outfile), dirname(outfile), overwrite=T)
success(state)
return(deck)
}
comparison <- function(projects, option.files='plot.file', tmpfold=tempdir(),
rmdfile=file.path(tmpfold, 'project-comparison.Rmd'),
htmlfile=file.path(tmpfold, 'project-comparison.html')) {
library(data.table)
library(knitr)
library(ggplot2)
rmd <- sprintf('---
title: Comparison of %s across projects
date: "%s"
output:
rmdformats::downcute:
lightbox: true
thumbnails: false
gallery: true
toc_depth: 3
toc_float:
collapsed: false
      smooth_scroll: true
mode: selfcontained
---
\n', paste(option.files, collapse = ', '), Sys.time())
cat(rmd, file=rmdfile)
cat('\n\n# Projects compared\n\n', file = rmdfile, append = T)
pdt <- as.data.table(projects, keep.rownames =T)
tab <- kable(pdt, col.names = c('Folder', 'Label'), format = 'html', caption = 'Models being compared')
cat(tab, file = rmdfile, append = T)
  ## ** Loop through outputs to compare
fil=option.files[1]
for (fil in option.files) {
cat(sprintf('\n\n# %s\n\n', fil), file = rmdfile, append=T)
## *** Loop through project folders
proji=1
for (proji in seq_along(projects)) {
project <- projects[proji]
cat(sprintf('\n\n## %s (%s)\n\n', project, names(project)), file = rmdfile, append = T)
initialise.project(proj.dir = names(project))
pfile <- projfile(opt(fil))
      if (!file.exists(pfile)) {
        warning('File `', pfile, '` not found')
        next
      }
ext <- tolower(sub('.*\\.(.*)$', '\\1', pfile))
if (ext %in% c('svg', 'png', 'jpg')) {
cat(sprintf('<img src="%s" />\n', pfile), file = rmdfile, append = T)
} else if (ext %in% 'rds') {
dat <- readRDS(pfile)
if (is.data.frame(dat)) {
tab <- kable(dat, format = 'html')
cat(tab, file = rmdfile, append = T)
} else {
          s <- capture.output(str(dat))
cat(s, file = rmdfile, append = T)
}
} else if (ext %in% 'html') {
cat(sprintf('<iframe width="1000" height="900" src="%s" frameborder="1" allowfullscreen></iframe>\n', pfile), file = rmdfile, append = T)
} else {
cat(pfile, file = rmdfile, append = T)
}
}
}
rmarkdown::render(input=rmdfile, output_file = htmlfile, clean=T)
system(sprintf('xdg-open %s', htmlfile), wait=F)
return(invisible(htmlfile))
}
# Phase 2: Data import
sl <- locale("sl", decimal_mark=",", grouping_mark=".")
# The bike-trip history data start in 2010. Up to and including 2017 the
# data were uploaded to AWS yearly or quarterly; later data are uploaded
# monthly.
source('lib/funkcije.r', encoding='UTF-8')
PrenosCB(2011, 2021)
ImenaStolpcevCB <- ImenaStolpcev('tripdata')
# A quick look at the column names suggests that the columns are named
# consistently from table 1 (2011) up to and including table 53 (202003).
# The columns contain the trip duration, the start and end date of the trip,
# the name and number of the start and end station, the bike number and the
# membership type.
#
# After that the column naming changes, consistently for all subsequent
# tables. Columns for the ride ID, the bike type and the latitude and
# longitude of the start and end station are added, while the trip-duration
# column is removed.
#
# The check below verifies that the columns are indeed consistent from
# table 1 to table 53, and likewise from table 54 to the last table (73).
consistent <- sapply(seq_along(ImenaStolpcevCB), function(i) {
  ref <- if (i <= 53) ImenaStolpcevCB[[53]] else ImenaStolpcevCB[[length(ImenaStolpcevCB)]]
  all(ImenaStolpcevCB[[i]] == ref)
})
all(consistent)
# First read and tidy tables 54 to 73. Add a duration column giving the
# trip length in seconds, and drop the column of unique ride IDs (ride_id).
tbl2 <-
list.files(
path = './podatki',
pattern = 'tripdata',
full.names = TRUE
)[54:73] %>%
map_df(
~read_csv(
.,
show_col_types = FALSE,
col_types = cols(
.default = col_guess(),
start_station_id = col_double(),
end_station_id = col_double(),
member_casual = col_factor(),
rideable_type = col_factor()
)
)
) %>%
mutate(
duration = (ended_at - started_at) %>%
as.duration() %>%
as.numeric('seconds')
) %>%
select(
-ride_id
)
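## The duration column above is computed with lubridate's as.duration(); the
## same value falls out of base R's difftime, which is a quick way to
## sanity-check the pipeline (toy timestamps):

```r
## Trip duration in seconds from POSIXct start/end times, the base-R
## equivalent of as.duration() %>% as.numeric('seconds').
started_at <- as.POSIXct('2020-05-01 08:00:00', tz = 'UTC')
ended_at <- as.POSIXct('2020-05-01 08:23:30', tz = 'UTC')
duration <- as.numeric(difftime(ended_at, started_at, units = 'secs'))
duration  # 1410 seconds
```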
# Tidy tables 1 to 53. Rename the columns to match tbl2 and add a
# rideable_type column describing the bike type. We set all its values to
# docked_bike: in the first few months of tbl2 every ride is docked_bike,
# and other values only appear later.
tbl1 <-
list.files(
path = './podatki',
pattern = 'tripdata',
full.names = TRUE
)[1:53] %>%
map_df(
~read_csv(
.,
show_col_types = FALSE,
col_types = cols(
.default = col_guess(),
'Start station number' = col_double(),
'End station number' = col_double(),
'Member type' = col_factor()
)
)
) %>%
rename(
duration = 'Duration',
started_at = 'Start date',
ended_at = 'End date',
start_station_id = 'Start station number',
start_station_name = 'Start station',
end_station_id = 'End station number',
end_station_name = 'End station',
member_casual = 'Member type'
) %>%
select(
-'Bike number'
) %>%
mutate(
rideable_type = 'docked_bike'
)
# We want a separate table containing all the relevant station data.
# Check whether tbl2 contains every station that appears in tbl1.
postaje_tbl1 <- tbl1$start_station_id %>%
unique()
postaje_tbl2 <- tbl2$start_station_id %>%
unique()
manjkajoce_postaje_id <- setdiff(postaje_tbl1,postaje_tbl2)
manjkajoce_postaje <- tbl1 %>%
select(
start_station_id,
start_station_name
) %>%
  filter(
    start_station_id %in% manjkajoce_postaje_id
  ) %>%
unique() %>%
arrange(start_station_id)
# It turns out that 6 stations are missing. On closer inspection all of them
# have a fair number of rides, so we cannot simply ignore them. We merge the
# stations 22nd & H St NW and 22nd & H St NW (disabled) into one, since they
# share station_id 0.
# (Note that the second station name has a double space between St and NW.)
# Luckily the stations are named after street intersections and only 5 remain
# missing, so we look up their coordinates on Google Maps and add them to the
# table of missing stations by hand.
tbl1 <- tbl1 %>%
mutate(
start_station_name = replace(
start_station_name,
start_station_name == '22nd & H NW (disabled)',
'22nd & H St NW'
),
end_station_name = replace(
end_station_name,
end_station_name == '22nd & H NW (disabled)',
'22nd & H St NW'
)
)
manjkajoce_postaje <- tbl1 %>%
select(
start_station_id,
start_station_name
) %>%
  filter(
    start_station_id %in% manjkajoce_postaje_id
  ) %>%
unique() %>%
arrange(start_station_id) %>%
add_column(
start_lat = c(38.89959, 38.86292, 38.88362, 39.09420, 38.92360),
start_lng = c(-77.04884, -77.05183, -76.95754, -77.13259, -77.23131)
)
# From tbl2 extract the id, name and coordinates of each station. Since the
# coordinates of the same station differ by a few decimals, take their mean.
# Finally append the missing stations.
postaje <- tbl2 %>%
select(
station_name = start_station_name,
station_id = start_station_id,
start_lat,
start_lng
) %>%
group_by(
station_id,
station_name
) %>%
summarise(
lat = mean(start_lat),
lng = mean(start_lng)
) %>%
ungroup(
station_id,
station_name
) %>%
add_row(
station_name = manjkajoce_postaje$start_station_name,
station_id = manjkajoce_postaje$start_station_id,
lat = manjkajoce_postaje$start_lat,
lng = manjkajoce_postaje$start_lng
) %>%
arrange(station_id)
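## Coordinates of the same station differ by a few decimals between rides,
## so the pipeline averages them per station. The base-R analogue of that
## group_by()/summarise() step (toy rows only):

```r
## Per-station mean coordinates with aggregate().
rides <- data.frame(
  station_id = c(1, 1, 2),
  start_lat = c(38.900, 38.902, 38.950),
  start_lng = c(-77.040, -77.042, -77.000))
stations <- aggregate(cbind(lat = start_lat, lng = start_lng) ~ station_id,
                      data = rides, FUN = mean)
stations
```

## Station 1's two slightly different fixes average to (38.901, -77.041).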
# We also notice a few suspicious stations that stick out:
#
# 32900 (Motivate BX Tech office) 38.96441, -77.01076
# 32901 (6035 Warehouse)          38.96381, -77.01027
# 32902 (Motivate Tech Office)    NA, NA
# NA    (MTL-ECO5-03)             NA, NA
# NA    (NA)                      38.91626, -77.02439
#
# The last station, with neither an id nor a name, is easy to explain. On
# 17 June 2020 an electric bike appears in the database for the first time;
# it does not have to be picked up or returned at a specific station, but
# anywhere within a designated area of the city. NA NA represents the
# average pick-up location of an electric bike.
#
# We inspect the remaining ones in more detail by counting the number of
# rides to or from the station and computing the maximum, minimum and mean
# trip duration (in seconds).
sumljive_postaje <- bind_rows(
tbl2 %>%
filter(
start_station_id == 32900 |
end_station_id == 32900
) %>%
mutate(
count = n()
) %>%
transmute(
station_id = 32900,
station_name = 'Motivate BX Tech office',
count = count,
max_duration = max(duration),
min_duration = min(duration),
avg_duration = mean(duration)
) %>%
unique(),
tbl2 %>%
filter(
start_station_id == 32901 |
end_station_id == 32901
) %>%
mutate(
count = n()
) %>%
transmute(
station_id = 32901,
station_name = '6035 Warehouse',
count = count,
max_duration = max(duration),
min_duration = min(duration),
avg_duration = mean(duration)
) %>%
unique(),
tbl2 %>%
filter(
start_station_id == 32902 |
end_station_id == 32902
) %>%
mutate(
count = n()
) %>%
transmute(
station_id = 32902,
station_name = 'Motivate Tech Office',
count = count,
max_duration = max(duration),
min_duration = min(duration),
avg_duration = mean(duration)
) %>%
unique(),
tbl2 %>%
filter(
start_station_name == 'MTL-ECO5-03' |
end_station_name == 'MTL-ECO5-03'
) %>%
mutate(
count = n()
) %>%
transmute(
station_id = NA,
station_name = 'MTL-ECO5-03',
count = count,
max_duration = max(duration),
min_duration = min(duration),
avg_duration = mean(duration)
) %>%
unique()
)
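The four near-identical filter/transmute pipelines above could be collapsed into one helper. A minimal, dependency-free sketch (hypothetical `station_summary` helper; `rides` stands in for `tbl2` with only the columns used here):

```r
# Hypothetical helper: summarise all rides that touch one suspicious station.
station_summary <- function(rides, id, name) {
  hit <- rides[rides$start_station_id %in% id | rides$end_station_id %in% id, ]
  data.frame(
    station_id   = id,
    station_name = name,
    count        = nrow(hit),
    max_duration = max(hit$duration),
    min_duration = min(hit$duration),
    avg_duration = mean(hit$duration)
  )
}

# Toy data: two of the three rides touch station 32900
rides <- data.frame(
  start_station_id = c(32900, 1, 2),
  end_station_id   = c(5, 32900, 3),
  duration         = c(60, 120, 30)
)
res <- station_summary(rides, 32900, "Motivate BX Tech office")
```

With dplyr available, the four blocks could then be produced by mapping this helper over the suspicious ids and row-binding the results.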
# Looking at the table of suspicious stations we see that all of them really
# are suspicious, so we drop rides to and from these stations from tbl2, and
# remove them from the station table by recomputing all stations.
tbl2 <- tbl2 %>%
filter(
!(start_station_name %in% sumljive_postaje$station_name) &
!(end_station_name %in% sumljive_postaje$station_name)
)
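To exclude rides touching a suspicious station at either end, the keep-condition must negate both memberships and combine them with `&`; a toy check of that logic:

```r
# A ride is dropped if EITHER endpoint is suspicious,
# i.e. kept only when BOTH endpoints are clean.
start <- c("A", "B", "S")  # "S" marks a suspicious station
end   <- c("S", "B", "C")
bad   <- "S"
keep  <- !(start %in% bad) & !(end %in% bad)
# only the second ride (B -> B) survives
```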
postaje <- tbl2 %>%
select(
station_name = start_station_name,
station_id = start_station_id,
start_lat,
start_lng
) %>%
group_by(
station_id,
station_name
) %>%
summarise(
lat = mean(start_lat),
lng = mean(start_lng)
) %>%
ungroup() %>%
add_row(
station_name = manjkajoce_postaje$start_station_name,
station_id = manjkajoce_postaje$start_station_id,
lat = manjkajoce_postaje$start_lat,
lng = manjkajoce_postaje$start_lng
) %>%
arrange(station_id)
# Once we have the table of stations and their coordinates, we can drop the
# coordinates from tbl2, rearrange the table and bind it with tbl1.
tbl_CB <- bind_rows(
tbl1,
tbl2 %>%
select(
duration,
started_at,
ended_at,
start_station_id,
start_station_name,
end_station_id,
end_station_name,
member_casual,
rideable_type
)
)
# Drop the tables and variables we no longer need and free memory.
rm(
ImenaStolpcevCB,
postaje_tbl1,
postaje_tbl2,
manjkajoce_postaje_id,
manjkajoce_postaje,
sumljive_postaje,
tbl1,
tbl2
)
gc()
# The weather data come with a few annoyances. FMTM and PGTM are only recorded
# up to 1 Apr 2013, but we can drop them since they are unimportant wind data.
# WESD contains nothing but NA, so we remove it. TAVG (Average Temperature) is
# only recorded from 1 Apr 2013 onwards, which will noticeably affect the
# analysis. The WT** (Weather type) columns are recorded as 1 when true and NA
# when not, so we read them as logical and turn NA into FALSE.
noaa <- read_csv(
'./podatki/NOAA.csv',
show_col_types = FALSE,
col_types = cols(
.default = col_guess(),
WT01 = col_logical(),
WT02 = col_logical(),
WT03 = col_logical(),
WT04 = col_logical(),
WT05 = col_logical(),
WT06 = col_logical(),
WT08 = col_logical(),
WT09 = col_logical(),
WT11 = col_logical(),
WT13 = col_logical(),
WT14 = col_logical(),
WT15 = col_logical(),
WT16 = col_logical(),
WT17 = col_logical(),
WT18 = col_logical(),
WT21 = col_logical(),
WT22 = col_logical()
)
) %>%
select(
-STATION,
-FMTM,
-PGTM,
-WESD
) %>%
replace_na(
list(
WT01 = FALSE,
WT02 = FALSE,
WT03 = FALSE,
WT04 = FALSE,
WT05 = FALSE,
WT06 = FALSE,
WT08 = FALSE,
WT09 = FALSE,
WT11 = FALSE,
WT13 = FALSE,
WT14 = FALSE,
WT15 = FALSE,
WT16 = FALSE,
WT17 = FALSE,
WT18 = FALSE,
WT21 = FALSE,
WT22 = FALSE
)
)
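The seventeen explicit `replace_na()` entries all follow one pattern. A dependency-free sketch of the same NA-to-FALSE fix applied to every WT** column at once (with dplyr, `mutate(across(starts_with("WT"), ~ replace_na(.x, FALSE)))` expresses the same idea):

```r
# Toy frame: two weather-type flags plus one unrelated column whose NA must survive.
df <- data.frame(WT01 = c(TRUE, NA), WT02 = c(NA, NA), PRCP = c(0.1, NA))
wt <- grepl("^WT", names(df))
# FALSE & NA is FALSE, so this maps NA -> FALSE while keeping TRUE as TRUE
df[wt] <- lapply(df[wt], function(x) !is.na(x) & x)
```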
|
|
# Phase 2: data import
sl <- locale("sl", decimal_mark=",", grouping_mark=".")
pretvornik.regij <- function(){ # two regions (Posavska and Jugovzhodna Slovenija) are named differently across sources
regije.slo = tibble(
regija = c(
"Gorenjska",
"Goriška",
"Jugovzhodna",
"Koroška",
"Obalno-kraška",
"Osrednjeslovenska",
"Podravska",
"Pomurska",
"Spodnjeposavska",
"Primorsko-notranjska",
"Savinjska",
"Zasavska",
"SLOVENIJA"
),
statisticna_regija = c(
"Gorenjska",
"Goriška",
"Jugovzhodna Slovenija",
"Koroška",
"Obalno-kraška",
"Osrednjeslovenska",
"Podravska",
"Pomurska",
"Posavska",
"Primorsko-notranjska",
"Savinjska",
"Zasavska",
"SLOVENIJA"
)
)
return(regije.slo)
}
# IMPORT
## MUNICIPALITIES BY REGION
uvoz.obcine.regije <- function(){
link <- "http://sl.wikipedia.org/wiki/Seznam_ob%C4%8Din_v_Sloveniji"
stran <- session(link) %>%
read_html()
tabela <- stran %>%
html_nodes(xpath="//table[@class='wikitable sortable']") %>%
.[[1]] %>% html_table(dec=",") %>%
select("Statistična regija", "Občina")
tabela[162, 2] <- "Sveta Trojica v Slov. goricah*"
tabela[163, 2] <- "Sveti Andraž v Slov. goricah"
tabela[165, 2] <- "Sveti Jurij v Slov. goricah"
tabela[62, 2] <- "Kanal"
names(tabela)[1] <- "regija"
names(tabela)[2] <- "obcine"
tabela <- right_join(pretvornik.regij(),
tabela,
by="regija") %>%
select(-regija) # two regions renamed - now consistent with the rest of the data
return(tabela)
}
## POPULATION
uvoz.prebivalstvo <- function(){
prebivalstvo <- read_csv2("podatki/prebivalstvo.csv", skip=2,
locale=locale(encoding="Windows-1250"),
col_types = cols(
.default = col_guess(),
SPOL = col_skip()
))
names(prebivalstvo)[1] <- "statisticna_regija"
prebivalstvo <- prebivalstvo %>%
pivot_longer(-c("statisticna_regija"), names_to = "leto", values_to = "stevilo_prebivalcev") %>%
mutate(leto = str_replace_all(leto, " Starost - SKUPAJ" , ""))
return(prebivalstvo)
}
## NUMBER OF BUILDING PERMITS
uvoz.stevilo.gradbenih.dovoljenj <- function(){
stevilo.gradbenih.dovoljenj <- read_csv2("podatki/dovoljenja-za-gradnjo.csv",
skip=2,
locale=locale(encoding="Windows-1250"))
names(stevilo.gradbenih.dovoljenj)[1] <- "statisticna_regija"
names(stevilo.gradbenih.dovoljenj)[3] <- "tip_stavbe"
stevilo.gradbenih.dovoljenj <- stevilo.gradbenih.dovoljenj %>%
pivot_longer(-c(statisticna_regija, INVESTITOR, tip_stavbe), names_to = "x", values_to = "vrednost") %>%
separate(col = "x",
into = c("leto", "stevilo/povrsina_v_m2", "tip"),
sep = " ")
stevilo.gradbenih.dovoljenj <- select(stevilo.gradbenih.dovoljenj, -c(INVESTITOR, tip_stavbe))
return(stevilo.gradbenih.dovoljenj)
}
## RESIDENTIAL PROPERTY PRICE INDICES
uvoz.indeksi.cen.stan.nepremicnin <- function(){
indeksi.cen.stan.nepremicnin <- read_csv2("podatki/indeksi-cen-stanovanjskih-nepremicnin.csv",
skip=2,
locale=locale(encoding="Windows-1250"),
na= "...")
names(indeksi.cen.stan.nepremicnin)[1] <- "stanovanjske_nepremicnine"
indeksi.cen.stan.nepremicnin <- indeksi.cen.stan.nepremicnin %>%
pivot_longer(!stanovanjske_nepremicnine,
names_to = "leto",
values_to = "povprecje_cetrtletij_glede_na_2015") %>%
separate(col = "leto",
into = c("leto", "x"),
sep = " ") %>%
select(-x)
return(indeksi.cen.stan.nepremicnin)
}
## CONSTRUCTION COST INDICES
uvoz.indeksi.gradbenih.stroskov <- function(){
indeksi.gradbenih.stroskov <- read_csv2("podatki/indeksi-gradbenih-stroskov.csv",
skip=2,
locale=locale(encoding="Windows-1250"))
return(indeksi.gradbenih.stroskov)
}
## ESTIMATE OF COMPLETED DWELLINGS BY MUNICIPALITY AND BY REGION
uvoz.ocena.dokoncanih.stanovanj.po.obcinah <- function(){
ocena.dokoncanih.stanovanj.po.obcinah <- read_csv2("podatki/ocena-dokoncanih-stanovanj-po-obcinah.csv",
skip=2,
locale=locale(encoding="Windows-1250"),
na = "-") %>%
pivot_longer(-c("OBČINE", "MERITVE"),
names_to = "x",
values_to = "vrednosti") %>%
separate(col = "OBČINE",
into = c("obcine", "y"),
sep = "/") %>%
separate(col = "x",
into = c("leto", "vrsta"),
sep = " ") %>%
select(-y)
return(ocena.dokoncanih.stanovanj.po.obcinah)
}
uvoz.ocena.dokoncanih.stanovanj.skupno.regije <- function(){
ocena.dokoncanih.stanovanj.po.obcinah <- uvoz.ocena.dokoncanih.stanovanj.po.obcinah() %>%
filter(vrsta == "Stanovanja") %>%
select(-vrsta)
ocena.dokoncanih.stanovanj.skupno.regije <- right_join(ocena.dokoncanih.stanovanj.po.obcinah,
uvoz.obcine.regije(),
by="obcine")
ocena.dokoncanih.stanovanj.skupno.regije <- ocena.dokoncanih.stanovanj.skupno.regije %>%
group_by(statisticna_regija, leto, MERITVE) %>%
summarise(vrednosti = sum(vrednosti,
na.rm = TRUE))
return(ocena.dokoncanih.stanovanj.skupno.regije)
}
## POPULATION MIGRATION
uvoz.selitve.prebivalstva <- function(){
selitve.prebivalstva <- read_csv2("podatki/selitve-prebivalstva.csv",
skip=2,
locale=locale(encoding="Windows-1250"))
names(selitve.prebivalstva)[1] <- "statisticna_regija"
selitve.prebivalstva <- selitve.prebivalstva %>%
pivot_longer(!statisticna_regija,
names_to = "x",
values_to = "stevilo") %>%
separate(col = "x",
into = c("leto", "priseljeni/odseljeni"),
sep = " ")
selitve.prebivalstva <- selitve.prebivalstva %>%
pivot_wider(names_from = "priseljeni/odseljeni",
values_from = "stevilo")
return(selitve.prebivalstva)
}
# TABLES
## TABLE 1
tabela11 <- function(){
tabela1 <- full_join(uvoz.stevilo.gradbenih.dovoljenj(),
uvoz.prebivalstvo(),
by = c("statisticna_regija", "leto"))
stevilo_stanovanj <- tabela1 %>%
filter(tip == "stanovanj") %>%
select(c(statisticna_regija, leto, vrednost, stevilo_prebivalcev))
names(stevilo_stanovanj)[3] <- "stevilo_stanovanj"
povrsina_stanovanj <- tabela1 %>%
filter(`stevilo/povrsina_v_m2` == "Površina") %>%
select(c(statisticna_regija, leto, vrednost, stevilo_prebivalcev))
names(povrsina_stanovanj)[3] <- "povrsina_stanovanj"
tabela1 <- full_join(stevilo_stanovanj,
povrsina_stanovanj,
by= c("statisticna_regija", "leto", "stevilo_prebivalcev"))
return(tabela1)
}
tabela111 <- function(){
tabela11 <- full_join(uvoz.ocena.dokoncanih.stanovanj.skupno.regije(),
uvoz.prebivalstvo(),
by = c("statisticna_regija", "leto")) %>%
pivot_wider(names_from = MERITVE, values_from = vrednosti) %>%
select(-"NA")
names(tabela11)[4] <- "povrsina_ocena_dokoncanih"
names(tabela11)[5] <- "stevilo_ocena_dokoncanih"
return(tabela11)
}
shrani.tabela1 <- full_join(tabela11(),
tabela111(),
by = c("leto", "statisticna_regija", "stevilo_prebivalcev")) %>%
filter(statisticna_regija != "SLOVENIJA") %>%
write_csv("podatki/shrani-st-izdanih-gradb-dovoljenj-in-ocena-dokoncanih-stanovanj.csv",
na= "NA",
append = FALSE,
col_names = TRUE)
# TABLE 2
tabela2 <- function(){
tabela2 <- full_join(uvoz.ocena.dokoncanih.stanovanj.po.obcinah(),
uvoz.obcine.regije(),
by = c("obcine")) %>%
filter(obcine != "SLOVENIJA") %>%
select(-obcine) %>%
group_by(MERITVE, leto, vrsta, statisticna_regija) %>%
summarise(vrednosti = sum(vrednosti, na.rm = TRUE))
return(tabela2)
}
shrani.tabela2 <- full_join(tabela2(),
uvoz.prebivalstvo(),
by = c("statisticna_regija", "leto")) %>%
filter(statisticna_regija != "SLOVENIJA") %>%
write_csv("podatki/shrani-ocena-dokoncanih-stanovanj-po-vrstah-stanovanj.csv",
na= "NA",
append = FALSE,
col_names = TRUE) %>%
pivot_wider(names_from = MERITVE, values_from = vrednosti)
# TABLE 3
# TABLE 4
shrani.tabela4 <- uvoz.indeksi.cen.stan.nepremicnin() %>%
write_csv("podatki/shrani-indeksi-stan-nepremicnin.csv", na= "NA", append = FALSE, col_names = TRUE)
# TABLE 5
tabela5 <- function(){
full_join(tabela11(),
tabela111(),
by = c("leto", "statisticna_regija", "stevilo_prebivalcev")) %>%
select(-stevilo_prebivalcev)
}
shrani.tabela5 <- full_join(tabela5(), uvoz.selitve.prebivalstva(), by = c("statisticna_regija", "leto")) %>%
filter(statisticna_regija != "SLOVENIJA") %>%
write_csv("podatki/shrani-migracije-med-regijami.csv",
na= "NA",
append = FALSE,
col_names = TRUE)
|
|
#' Merge two lists
#'
#' Additive and non-additive merging of lists. \code{add} merges lists additively: named items
#' shared by both lists are concatenated. \code{merge} merges non-additively: named items from
#' the list in the first argument are replaced by items from the list in the second argument.
#'
#' @param list1 a list
#' @param list2 a list
#' @return merged list
merge = function(list1, list2){
if( !( (is.list(list1) || is.null(list1)) && (is.list(list2) || is.null(list2)) ) )
stop("Both elements must be lists or NULL")
if(is.empty(list1) && is.empty(list2))
return(list())
if(is.empty(list1))
return(list2)
if(is.empty(list2))
return(list1)
rlist::list.merge(list1, list2)
}
#' @rdname merge
add = function(list1, list2){
if( !( (is.list(list1) || is.null(list1)) && (is.list(list2) || is.null(list2)) ) )
stop("Both elements must be lists or NULL")
if(is.empty(list1) && is.empty(list2))
return(list())
if(is.empty(list1))
return(list2)
if(is.empty(list2))
return(list1)
merge = function(x){c(list1[[x]], list2[[x]])}
names = unique(c(names(list1), names(list2)))
merged = lapply(names, merge)
names(merged) = names
merged
}
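A standalone illustration of the additive semantics of `add()`, re-implemented inline so the snippet does not depend on `is.empty` or `rlist`: names shared by both lists are concatenated, unique names pass through unchanged.

```r
# Cut-down stand-in for add(), without the empty/NULL guards
add_demo <- function(list1, list2) {
  nms <- unique(c(names(list1), names(list2)))
  merged <- lapply(nms, function(x) c(list1[[x]], list2[[x]]))
  names(merged) <- nms
  merged
}
res <- add_demo(list(a = 1, b = 2), list(a = 3, c = 4))
# res$a is c(1, 3); b and c are carried over as-is
```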
|
|
############################################################
# create_text_strings()
#
# Returns a list containing Slovak text strings.
#
# These are used by the following functions:
# * Import_Export_plot()
#
create_text_strings <- function(plottype='Exports',country='MZM_WORLD',fuel='oil',units='bbl',conprod='consumption') {
# Country names for special 'MZM_~' groupings
if (length(grep('MZM_',country)) == 1) {
country_list = c(
MZM_NSE='Severné more',
MZM_CRB='Karibik',
MZM_PRG='Perzský záliv',
MZM_WAF='Západná Afrika',
MZM_WORLD='Svet',
MZM_FSU='Bývalý Sovietsky zväz',
MZM_EU0='Európa (-BSZ)',
MZM_EU1='Európa (+BSZ)',
MZM_OPEC='OPEC',
MZM_OPEC10='OPEC-10',
MZM_GCC='Rada pre spoluprácu krajín Perzského zálivu',
MZM_GECF='Krajiny vyvážajúce zemný plyn',
MZM_GECF11='KVZP-11',
MZM_NON_OPEC='Mimo-OPEC',
MZM_OECD='OECD',
MZM_G7='G7',
MZM_O5='O5',
MZM_G75='G7 + O5',
MZM_BELU='Belgicko a Luxembursko',
MZM_TNA='Severná Amerika',
MZM_TSCA='Juž. a Str. Amerika',
MZM_TEE='Európa',
MZM_TME='Stredný Východ',
MZM_TAF='Afrika',
MZM_TAP='Ázia-Pacifik'
)
country = country_list[[country]]
}
title_coal = 'Uhlie'
title_oil = 'Ropa'
title_gas = 'Zemný Plyn'
title_nuclear = 'Jadro'
title_hydro = 'Voda'
title_all = 'Všetko'
title_consumption = 'Spotreba'
title_production = 'Produkcia'
units_mto = 'milióny ton za rok'
units_mtoe = 'milióny ton ekv. ropy za rok'
units_bbl = 'milióny barelov za deň'
units_ft3 = 'miliárd kubických stôp za deň'
units_m3 = 'miliárd kubických metrov za rok'
units_twh = 'Terawatt-hodín za rok'
units_joule = 'Exajoulov za rok'
# Resource
if (fuel == 'oil') {
resource = title_oil
} else if (fuel == 'gas') {
resource = title_gas
} else if ( fuel == 'coal') {
resource = title_coal
} else if ( fuel == 'nuclear') {
resource = title_nuclear
} else if ( fuel == 'hydro') {
resource = title_hydro
} else if ( fuel == 'all') {
resource = title_all
}
# Units
if (units == 'mtoe') {
if (fuel == 'oil') {
text_units = units_mto
} else {
text_units = units_mtoe
}
} else if (units == 'bbl') {
text_units = units_bbl
} else if (units == 'ft3') {
text_units = units_ft3
} else if (units == 'm3') {
text_units = units_m3
} else if (units == 'twh') {
text_units = units_twh
} else if (units == 'joule') {
text_units = units_joule
}
# Main titles
if (plottype == 'Sources') {
if (conprod == 'consumption') {
main1 = paste(country,': ',title_consumption)
} else if (conprod == 'production') {
main1 = paste(country,': ',title_production)
}
main2 = 'main2'
main3 = 'main3'
} else {
main1 = paste(country,': ',resource)
main2 = 'main2'
main3 = 'main3'
}
# TODO: Rename subtitle,fromto,earned?,spent?
# Assemble the list
txt = list(
main1 = main1,
main2 = main2,
main3 = main3,
subtitle = 'Údaje: Štatistický prehľad BP_2012 Grafika: mazamascience.com',
year = 'Rok',
units = text_units,
consumption = 'Spotreba',
production = 'Produkcia',
imports = 'čistý dovoz',
exports = 'čistý vývoz',
consumption_increased = 'spotreba stúpla o',
consumption_decreased = 'spotreba klesla o',
production_increased = 'produkcia stúpla o',
production_decreased = 'produkcia klesla o',
imports_increased = 'dovoz stúpol o',
imports_decreased = 'dovoz klesol o',
exports_increased = 'vývoz stúpol o',
exports_decreased = 'vývoz klesol o',
note_nodata = '* nedostupné údaje',
note_minvalue = '* minimálna hodnota',
msg_nodata = 'Nedostupné údaje',
country = country,
earned = 'zisk',
spent = 'útrata',
billion = 'miliarda',
missing = 'chýbajúce údaje',
net_0 = 'nula',
coal = 'uhlie',
oil = 'ropa',
gas = 'zemný plyn',
nuclear = 'jadro',
hydro = 'voda',
US = 'USA',
World = 'Svet',
percent = '% z celku',
energy_consumed_increased = 'Celková spotrebovaná energia stúpla o',
energy_consumed_decreased = 'Celková spotrebovaná energia klesla o',
energy_produced_increased = 'Celková vyrobená energia stúpla o',
energy_produced_decreased = 'Celková vyrobená energia klesla o',
percent_title = 'Percento výroby z každého zdroja.',
efficiency = '( 38% účinnosť )'
)
return(txt)
}
|
|
# Eric R. Gamazon
# Create genotype table for eQTL mapping
"%&%" = function(a,b) paste(a,b,sep="")
mydir = "/nas40t0/egamazon/VANDY/PREDIXCAN/";
for (i in (1:22))
{
a <- read.table(gzfile(mydir %&% 'DGN.imputed_maf0.05_R20.8.hapmapSnpsCEU.chr' %&% i %&% '.mldose.gz'), header=F)
a <- a[,-1]
a <- a[,-1]
a.t <- t(a)
dim(a.t)
snps <- read.table(gzfile(mydir %&% 'DGN.imputed_maf0.05_R20.8.hapmapSnpsCEU.chr' %&% i %&% '.mlinfo.gz'), header=T)
row.names(a.t) = t(snps[,1])
write.table(a.t, file=mydir %&% "DGN.imputed" %&% ".SNPxID" %&% i, sep="\t", row.names=T, quote=F, col.names=F)
}
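The `%&%` operator defined above is just zero-separator string concatenation via `paste()`; a quick check of how the file names in the loop are assembled:

```r
"%&%" <- function(a, b) paste(a, b, sep = "")
fname <- "DGN.imputed" %&% ".SNPxID" %&% 7
```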
|
|
#' Creates a Watershed object
#' @param stream Stream network raster, required
#' @param drainage Drainage direction raster, required
#' @param elevation Optional elevation raster
#' @param accumulation Optional flow accumulation raster
#' @param catchmentArea Optional catchment area raster
#' @param otherLayers RasterStack of other data layers to add to the Watershed object
#' @details All raster maps will be cropped to the stream network. The values in `stream` will
#' be automatically assigned to a reachID field in the Watershed object.
#' @return A watershed object
#' @export
Watershed <- function(stream, drainage, elevation, accumulation, catchmentArea, otherLayers) {
## drainage will be added later, after it is fixed by WSConnectivity
dataRasters <- list()
if(!missing(elevation)) dataRasters$elevation <- elevation
if(!missing(accumulation)) dataRasters$accumulation <- accumulation
if(!missing(catchmentArea)) dataRasters$catchmentArea <- catchmentArea
if(!missing(otherLayers)) dataRasters$otherLayers <- otherLayers
layerStack <- lapply(dataRasters, function(x) {
if(!raster::compareRaster(stream, x, stopiffalse = FALSE))
x <- raster::crop(x, stream)
raster::mask(x, stream)
})
## create pixel IDs and add other layers, if present
allRasters <- raster::stack(stream, stream)
names(allRasters) <- c('reachID', 'id')
if(length(layerStack) > 0) {
layerStack <- raster::stack(layerStack)
allRasters <- raster::addLayer(allRasters, layerStack)
}
maskIndices <- which(!is.na(raster::values(stream)))
allRasters$id[maskIndices] <- 1:length(maskIndices)
allSPDF <- rasterToSPDF(allRasters, complete.cases = TRUE)
if(!raster::compareRaster(allRasters, drainage, stopiffalse = FALSE))
drainage <- raster::crop(drainage, allRasters)
adjacency <- WSConnectivity(drainage, allRasters$id)
allSPDF <- sp::merge(allSPDF, adjacency$drainage, by = 'id', all.x = TRUE)
allSPDF$length <- WSComputeLength(allSPDF$drainage, raster::res(drainage))
allSPDF$vReachNumber <- allSPDF$reachID
wsobj <- list(data = allSPDF, adjacency = adjacency$adjacency)
class(wsobj) <- c("Watershed", class(wsobj))
wsobj = .rebuild_reach_topology(wsobj)
attr(wsobj, "version") <- packageVersion("WatershedTools")
return(wsobj)
}
#' Compute connectivity matrix
#'
#' @param drainage Drainage direction raster
#' @param stream Stream network raster; see `details`
#'
#' @details The stream network raster should be NA in all cells not considered a part of the
#' river network. The pixel values of the raster must be unique IDs representing individual
#' stream reaches to model. At present, the only supported reach size is a single pixel, thus
#' each pixel must have a unique value.
#' @return A list with two elements, the first containing corrected drainage directions, the
#' second with A [Matrix::sparseMatrix()] representation of the river network. For a `stream`
#' input raster with `n` non-NA cells, the dimensions of this matrix will be n by n. Dimnames
#' of the matrix will be the pixel IDs from the `stream` input raster. Values of the
#' matrix cells are either 0 or 1; a zero indicates no flow, a one in cell i,j indicates
#' that reach `i` receives water from reach `j`.
#' @keywords internal
WSConnectivity <- function(drainage, stream) {
ids <- raster::values(stream)
inds <- which(!is.na(ids))
ids <- ids[inds]
if(any(duplicated(ids)))
stop("Stream IDs must be all unique")
rowMat <- matrix(1:raster::nrow(drainage), nrow=raster::nrow(drainage),
ncol=raster::ncol(drainage))
colMat <- matrix(1:raster::ncol(drainage), nrow=raster::nrow(drainage),
ncol=raster::ncol(drainage), byrow=TRUE)
coordRas <- raster::stack(list(x = raster::raster(colMat, template = drainage),
y = raster::raster(rowMat, template = drainage), drainage = drainage, id = stream))
coordRas <- raster::mask(coordRas, stream)
xy <- WSFlowTo(coordRas[inds])
res <- xy[,c('fromID', 'drainage')]
colnames(res)[1] <- 'id'
list(drainage = res, adjacency = Matrix::sparseMatrix(xy[,'toID'], xy[,'fromID'],
dims=rep(length(inds), 2), dimnames = list(ids, ids), x = 1))
}
#' Compute which pixels flow into which other pixels
#' @param mat A matrix with minimum three columns, the first being the x-coordinate, second the y,
#' and third the ID.
#' @return A matrix of IDs, the first column the source, the second column the destination
#' @keywords internal
WSFlowTo <- function(mat) {
newy <- mat[,2]
newx <- mat[,1]
ind <- which(mat[,3] > 0)
xoffset <- c(1, 0, -1, -1, -1, 0, 1, 1)
yoffset <- c(-1, -1, -1, 0, 1, 1, 1, 0)
newx[ind] <- newx[ind] + xoffset[mat[ind,3]]
newy[ind] <- newy[ind] + yoffset[mat[ind,3]]
na_ind <- which(mat[,3] < 0 | newx < 1 | newy < 1 | newx > max(mat[,1]) | newy > max(mat[,2]))
newx[na_ind] <- newy[na_ind] <- mat[na_ind, 'drainage'] <- NA
resMat <- cbind(mat, newx, newy)
resMat <- merge(resMat[,c('newx', 'newy', 'id', 'drainage')], resMat[,c('x', 'y', 'id')], by = c(1,2), all.x = TRUE)
resMat <- resMat[,c(1,2,4,3,5)]
colnames(resMat)[4:5] <- c('fromID', 'toID')
resMat <- WSCheckDrainage(resMat, mat)
resMat <- resMat[order(resMat[,'fromID']),]
resMat <- resMat[complete.cases(resMat),]
return(resMat)
}
#' Check and fix problems with drainage direction
#' @param connMat preliminary connectivity matrix
#' @param drainMat Drainage direction matrix
#' @param prevProbs Previous number of problems, to allow stopping if no improvement on
#' subsequent calls
#' @details In some cases, drainage direction rasters don't agree with flow accumulation, resulting
#' in a delineated stream that doesn't have the right drainage direction. This function
#' attempts to detect and fix this in the adjacency matrix and the drainage layer
#' @keywords internal
#' @return A corrected connectivity matrix
WSCheckDrainage <- function(connMat, drainMat, prevProbs = NA) {
probs <- which(is.na(connMat[,'toID']) & connMat[,'drainage'] > 0)
if(length(probs) == 0 || (!is.na(prevProbs) && length(probs) == prevProbs))
return(connMat)
prFix <- do.call(rbind,
lapply(connMat[probs,'fromID'], WSFixDrainage, drainMat = drainMat, connMat = connMat))
connMat <- connMat[-probs,]
connMat <- rbind(connMat, prFix)
WSCheckDrainage(connMat, drainMat, prevProbs = length(probs))
}
#' Fix drainage direction for a single pixel
#' @param id ID of the problematic pixel
#' @param connMat preliminary connectivity matrix
#' @param drainMat Drainage direction matrix
#' @keywords internal
WSFixDrainage <- function(id, drainMat, connMat) {
i <- which(drainMat[,'id'] == id) # problem cell index
j <- which(connMat[,'toID'] == id) # upstream of problem cell
upID <- connMat[j,'fromID']
x <- drainMat[i,'x']
y <- drainMat[i,'y']
downInd <- which(drainMat[,'x'] >= x-1 & drainMat[,'x'] <= x+1 & drainMat[,'y'] >= y-1 & drainMat[,'y'] <= y+1 & !(drainMat[,'id'] %in% c(id, upID)))
out <- connMat[connMat[,'fromID'] == id,]
if(length(downInd) == 1) {
out[,'newx'] <- drainMat[downInd,'x']
out[,'newy'] <- drainMat[downInd,'y']
out[,'toID'] <- drainMat[downInd,'id']
xo <- out[,'newx'] - drainMat[i,'x']
yo <- out[,'newy'] - drainMat[i,'y']
xoffset <- c(1, 0, -1, -1, -1, 0, 1, 1)
yoffset <- c(-1, -1, -1, 0, 1, 1, 1, 0)
out[,'drainage'] <- which(xoffset == xo & yoffset == yo)
}
out
}
#' Compute length to next pixel given drainage direction
#' @param drainage drainage direction vector
#' @param cellsize size of each cell (vector of length 2)
#' @keywords internal
#' @return vector of lengths
WSComputeLength <- function(drainage, cellsize) {
cellLength <- rep(cellsize[1], length(drainage))
if(abs(cellsize[1] - cellsize[2]) > 1e-4) {
vertical <- which(drainage %in% c(2,6))
cellLength[vertical] <- cellsize[2]
}
diagonal <- which(drainage %in% c(1,3,5,7))
cellLength[diagonal] <- sqrt(cellsize[1]^2 + cellsize[2]^2)
cellLength
}
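The length rule can be sanity-checked directly; the function body is restated here so the snippet is self-contained (per the drainage offsets used in `WSFlowTo()`, codes 2 and 6 are vertical moves and odd codes are diagonals):

```r
WSComputeLength <- function(drainage, cellsize) {
  cellLength <- rep(cellsize[1], length(drainage))
  if (abs(cellsize[1] - cellsize[2]) > 1e-4) {
    vertical <- which(drainage %in% c(2, 6))
    cellLength[vertical] <- cellsize[2]
  }
  diagonal <- which(drainage %in% c(1, 3, 5, 7))
  cellLength[diagonal] <- sqrt(cellsize[1]^2 + cellsize[2]^2)
  cellLength
}
# 10 x 20 cells: horizontal (4) -> 10, vertical (2) -> 20, diagonal (1) -> sqrt(500)
len <- WSComputeLength(c(4, 2, 1), c(10, 20))
```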
#' Get data from all confluences of a watershed
#'
#' @param ws Watershed object
#' @return A `data.frame` containing data for all confluences
#' @export
confluences <- function(ws) {
as.data.frame(ws$data[Matrix::rowSums(ws$adjacency) > 1,])
}
#' Get data from all headwaters of a watershed
#'
#' @param ws Watershed object
#' @return a `data.frame` containing data for all headwaters
#' @export
headwaters <- function(ws) {
as.data.frame(ws$data[Matrix::rowSums(ws$adjacency) == 0,])
}
#' Get data from all outlets of a watershed
#'
#' @param ws Watershed object
#' @param rid vector of reach IDs, if NA returns outlet for entire network
#' @param output Output type to return
#' @return a `data.frame` or a `SpatialPixelsDataFrame` containing data for all outlets
#' @export
outlets <- function(ws, rid, output = c("data.frame", "Spatial")) {
output = match.arg(output)
if(!missing(rid)) {
out_ind = sapply(rid, function(i) {
ii = which(ws$data$reachID == i)
mat = ws$adjacency[ii,ii, drop=FALSE]
pix = which(Matrix::colSums(mat) == 0)
as.integer(rownames(mat)[pix])
})
} else {
out_ind = which(Matrix::colSums(ws$adjacency) == 0)
}
res = ws$data[out_ind,]
if(output == "data.frame") {
res = as.data.frame(res)
}
res
}
#' Get the pixel ID of the next downstream pixel for each pixel in the watershed
#' @param ws A watershed object
#' @return A vector of pixel IDs
#' @export
downstreamPixelIds <- function(ws) {
mat <- Matrix::which(ws$adjacency == 1, arr.ind = TRUE)
endpt <- Matrix::which(Matrix::colSums(ws$adjacency) == 0)
mat <- rbind(mat, c(NA, endpt))
# rearrange so the UPSTREAM pixels (second column) indicate the row number
mat <- mat[order(mat[,2]),]
if(!all(mat[,2] == 1:nrow(mat)))
stop("There is an error with the topology")
mat[,1]
}
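A toy run of the same adjacency convention (assumes the recommended `Matrix` package): for a three-pixel chain 1 → 2 → 3, each pixel's downstream neighbour comes out as 2, 3, NA, with NA marking the outlet.

```r
library(Matrix)
# adjacency[i, j] = 1 means pixel i receives water from pixel j
adj  <- sparseMatrix(i = c(2, 3), j = c(1, 2), dims = c(3, 3), x = 1)
down <- Matrix::which(adj == 1, arr.ind = TRUE)
# the outlet has no downstream pixel, so it maps to NA
down <- rbind(down, c(NA, Matrix::which(Matrix::colSums(adj) == 0)))
# order by the upstream pixel (second column) so row k is pixel k
down <- down[order(down[, 2]), ]
dsp  <- down[, 1]
```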
#' Extract pixelIDs from a watershed at spatial locations
#' @param ws A watershed object
#' @param x An object inheriting from `sp::SpatialPoints()`
#' @return A vector of pixel IDs
#' @export
extract <- function(ws, x) {
ras <- raster::rasterFromXYZ(ws[,c('x', 'y', 'id')])
raster::extract(ras, x)
}
#' Compute a site by pixel accumulation matrix
#'
#' The default behavior computes distance, where positive numbers indicate downstream
#' distances and negative numbers indicate upstream distances. Other variables can also
#' be used, but in all cases the values will be summed to compute the 'distance'
#'
#' Upstream distances do NOT include intermediate pixels; they only include pixels in `x`
#'
#' @param ws A Watershed
#' @param x A vector of pixel ids from which to compute the distance
#' @param variable The variable to use for the distance
#' @return A matrix with dimensions `length(x)` by `nrow(ws)`
#' @export
siteByPixel <- function(ws, x, variable = 'length') {
dsPixes <- downstreamPixelIds(ws)
dm <- dmat(x, dsPixes, nrow(ws$data), ws[,variable])
rownames(dm) <- x
colnames(dm) <- ws[,'id']
dm
}
|
|
# packages ----
library(knitr)
library(lubridate)
library(progress)
library(tidyverse)
library(xml2)
# functions ----
source("get_xml.r")
source("extract_overview.r")
source("extract_historic.r")
source("extract_dividends.r")
source("clean_overview.r")
source("clean_price.r")
source("clean_dividends.r")
source("download_ishares.r")
# parameters ----
dir_ishares <- "/path/to/folder"
dir_overview <- file.path(dir_ishares, "ishares_overview")
dir_price <- file.path(dir_ishares, "ishares_price")
dir_dividends <- file.path(dir_ishares, "ishares_dividends")
data_etf <- read_tsv(file.path(dir_ishares, "data_ishares_url.tsv"))
# get xml data ----
data_xml <- get_xml(data_etf$url[1])
# extract and clean data ----
map2(data_etf$name, data_etf$url, download_ishares)
# get exchange rate data ----
file_zip <- tempfile()
download.file("https://www.ecb.europa.eu/stats/eurofxref/eurofxref-hist.zip", file_zip)
data_fx <- read_csv(unz(file_zip, "eurofxref-hist.csv")) %>%
select(date = Date, usd_rate = USD, gbp_rate = GBP)
# aggregate data ----
ishares_data <- map(data_etf$name, ~{
# load files
overview <- read_tsv(file.path(dir_overview, str_c(.x, "_overview.tsv")))
price <- read_tsv(file.path(dir_price, str_c(.x, "_price.tsv")))
dividends <- read_tsv(file.path(dir_dividends, str_c(.x, "_dividends.tsv")))
# get metadata
name <- .x
isin <- overview$value[overview$parameter == "ISIN"]
# combine data
out <- tibble(name, isin) %>%
mutate(id = TRUE) %>%
left_join(mutate(price, id = TRUE), by = "id") %>%
select(-id)
# dividends
if(dim(dividends)[1] > 0) {
out <- out %>%
left_join(dividends, by = "date") %>%
mutate(dividend = coalesce(dividend * 0.725, 0)) %>%
mutate(dividend = cumsum(dividend))
} else {
out$dividend <- 0
}
out <- out %>%
mutate(price = price + dividend) %>%
select(-dividend)
return(out)
}) %>% bind_rows() %>%
filter(!is.na(date))
# convert to Euro returns ----
ishares_data <- ishares_data %>%
left_join(data_fx, by = "date") %>%
mutate(price = case_when(currency == "USD" ~ price / usd_rate,
currency == "GBP" ~ price / gbp_rate,
TRUE ~ price)) %>%
select(-currency, -usd_rate, -gbp_rate)
# trailing monthly returns ----
dates <- tibble(date = seq(from = min(ishares_data$date), to = max(ishares_data$date), by = 1))
dates$i <- rep(1:28, ceiling(nrow(dates) / 28))[seq(nrow(dates))]
data_returns <- map(unique(ishares_data$name), ~{
xprices <- ishares_data %>%
filter(name == .x) %>%
right_join(dates, by = "date") %>%
fill(name) %>%
filter(!is.na(name))
xreturns <- map(1:28, ~{
out <- xprices %>%
filter(i == .x) %>%
mutate(start_px = lag(price)) %>%
mutate(diff_px = price - start_px) %>%
mutate(return = diff_px / start_px) %>%
select(isin, name, date, return)
return(out)
}) %>%
bind_rows() %>%
arrange(date) %>%
filter(!is.na(return))
return(xreturns)
}) %>% bind_rows()
# average returns, variance, & Sharpe ratio ----
kpi <- data_returns %>%
group_by(isin, name) %>%
summarise(risk = var(return),
returns = mean(return)) %>%
ungroup() %>%
mutate(sharpe = (returns / risk) * sqrt(365 / 28)) %>%
mutate(risk = risk * sqrt(365 / 28) * 100,
returns = returns * (365 / 28) * 100)
kpi %>%
arrange(desc(sharpe)) %>%
select(ISIN = isin, Name = name, Return = returns, Risk = risk, Sharpe = sharpe) %>%
head(10) %>%
kable(digits = 2)
kpi %>%
select(Return = returns, Risk = risk, Sharpe = sharpe) %>%
ggplot() +
geom_abline(intercept = 0, slope = 1) +
geom_abline(intercept = 0, slope = max(kpi$sharpe), colour = "darkgreen") +
geom_point(aes(x = Risk, y = Return, colour = Sharpe))
# share of months with positive returns ----
data_returns %>%
mutate(return = return > 0) %>%
group_by(isin, name) %>%
summarise(week_pos = sum(return),
week_tot = n()) %>%
mutate(share_pos = week_pos / week_tot * 100) %>%
arrange(desc(share_pos)) %>%
select(ISIN = isin, Name = name, Share_Positives = share_pos, Weeks_Positive = week_pos, Weeks_Total = week_tot) %>%
head(10) %>%
kable(digits = 2)
|
|
#' @title kml.polygon
#' @description Builds KML polygon fragments (placemark header/footer, style and stylemap blocks) as strings.
#' @family abysmally documented
#' @author unknown, \email{<unknown>@@dfo-mpo.gc.ca}
#' @export
kml.polygon = function( item='', name='', label='', style.id='', colour='ff0000cc', line.colour='4c7fffff', line.width="1", fill="0", outline="1", con='', style.id.highlight='', style.id.normal='' ) {
switch( item,
header = paste('
<Placemark>
<name>', name, '</name>
<description>', label, '</description>
<styleUrl>#', style.id, '</styleUrl>
<Polygon>
<gx:altitudeMode>relativeToGround</gx:altitudeMode>
<tessellate>1</tessellate>
<outerBoundaryIs>
<LinearRing>
<coordinates>', sep='' ),
footer = paste('
</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
</Placemark>', sep='' ),
style = paste('
<Style id="', style.id,'">
<LineStyle>
<color>', line.colour, '</color>
<width>' ,line.width, '</width>
</LineStyle>
<PolyStyle>
<color>', colour, '</color>
<fill>', fill, '</fill>
<outline>', outline, '</outline>
</PolyStyle>
</Style>', sep='' ),
stylemap = paste('
<StyleMap id="', style.id, '">
<Pair>
<key>normal</key>
<styleUrl>#', style.id.normal, '</styleUrl>
</Pair>
<Pair>
<key>highlight</key>
<styleUrl>#', style.id.highlight, '</styleUrl>
</Pair>
</StyleMap>', sep='')
)
}
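A hedged usage sketch assembling a complete fragment from the pieces above (the style id, name, and coordinates are invented for illustration):

```r
# build the fragments; "redPoly" and the coordinates are made-up examples
style  <- kml.polygon("style", style.id = "redPoly")
head_  <- kml.polygon("header", name = "Area 1", label = "demo polygon", style.id = "redPoly")
coords <- "-63.00,44.00,0 -63.00,44.10,0 -62.90,44.10,0 -63.00,44.00,0"
foot   <- kml.polygon("footer")
# concatenate into one Placemark ready to write inside a KML Document
cat(style, head_, coords, foot, sep = "\n")
```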
|
|
#' Plot a 3D bifurcation diagram
#'
#' Simulates the model along a gradient of the parameter given in \code{over}
#' and renders equilibria and trajectories in an rgl scene.
#'
#' @param model the model object passed to \code{sim_bifurcations()}
#' @param parms list of model parameters; defaults to \code{model$parms}
#' @param over name of the parameter varied along the bifurcation axis
#' @param xrange range of the bifurcation parameter, as \code{c(min, max)}
#' @param res number of steps along the bifurcation parameter
#' @param ini initial cover values used to start the simulations
#' @param t_max simulation end time
#' @param method ODE solver method passed to the simulation routine
#' @param colors colours used for drawing
#' @param new logical; currently unused
#'
#' @return called for its side effect of drawing an rgl plot
#' @export
#'
#' @examples
#' library(rgl)
#' p <- set_parms(livestock$defparms, set = list(b = 0.9, c = 0.2, f = 0, p = 0, alpha = 0.2))
#' plot_bifurcation3D(livestock, parms = p, res = 31)
#'
plot_bifurcation3D <- function(
model,
parms = model$parms,
over = "b",
xrange = c(0,1),
res = 21,
ini = c(0.9, 0.0001),
t_max = 150,
method = "ode45",
colors = c("#000000","#009933"),
new = FALSE
) {
plot3d(NA,NA,NA,
xlim = c(0,1), ylim = c(0,1), zlim = c(0,1),
xlab = "pressure", ylab = "vegetation cover", zlab = "local vegetation cover",
type = "n", box = TRUE)
rgl.bg(fogtype = "exp2", color = "white")
equilibria <- sim_bifurcations(model, over = over, xrange = xrange, ini = ini, t_max = t_max, res = res, parms = parms, method = method)
rgl.points(equilibria[,over],
equilibria$rho_1,
q_11(ini_rho(equilibria$rho_1, equilibria$rho_11)), color = "black", size = 8)
parms[[over]] <- seq(xrange[1],xrange[2],length = res)
parms$rho_ini <- ini
iterations <- expand.grid(parms)
iterations <- cbind(ID = 1:dim(iterations)[1],iterations)
iterations$b <- as.numeric(as.character(iterations$b))
iterations$L <- as.numeric(as.character(iterations$L))
foreach(iteration = iterations$ID, .packages = c("deSolve", "livestock", "foreach")) %dopar% {
sim_trajectories(model, parms = iterations[iteration,],
rho_1_ini = seq(0,0.99, length = 11),
times = c(0,150))
} -> trajectories
for(iteration in seq(iterations$ID[1], tail(iterations$ID,1),2)) {
sapply(trajectories[[iteration]], function(x){
rgl.linestrips(rep(iterations[iteration,]$b, times = length(x$rho_1)),
x$rho_1,
q_11(x$rho_1, x$rho_11),
col = "black")
}
)
}
}
|
|
myTestRule {
#Input parameters are:
# *VaultPath - physical vault path to scan
# *Host - resource server on which clamscan is run
# *OutputObj, *Resource - output object prefix and the resource name
#Output parameter is:
# Status
#Execute the clamscan (Clam AntiVirus) utility "clamscan -ri VAULT_DIR"
# Note that the *VaultPath is the physical path for *Resource on *Host
# clamscan looks at the physical files in the iRODS vault
msiExecCmd("scanvault.py",*VaultPath,*Host,"null","null",*CmdOut);
#Extract result of the scan of the files in the vault on the specified host
msiGetStdoutInExecCmdOut(*CmdOut,*StdoutStr);
msiGetSystemTime(*Time, "human");
#Write result to an iRODS file
msiDataObjCreate(*OutputObj ++ "." ++ *Time ++ ".txt","renci-vault1",*D_FD);
msiDataObjWrite(*D_FD,*StdoutStr,*W_LEN);
msiDataObjClose(*D_FD,*Status);
writePosInt("stdout",*Status); writeline("stdout","");
#Execute the routine to extract information from the output file
msiFlagInfectedObjs(*OutputObj, *Resource, *Status);
}
INPUT *VaultPath="/home/rodsdev/loadingVault/", *Host="yellow.ils.unc.edu",*OutputObj="/tempZone/home/rods/loading/SCAN_RESULT", *Resource = "loadingResc"
OUTPUT ruleExecOut
|
|
#format of args: <cran mirror> <number of default repos> <indices of default repos> <user's repos>
args <- commandArgs(TRUE)
chooseCRANmirror(graphics = FALSE, ind = as.numeric(args[1]))
number.defaults = as.numeric(args[2])
defaults = if(number.defaults>0){
args[3:(2+number.defaults)]
}
additional = if((length(args)-2-number.defaults)>0){
args[(3+number.defaults):length(args)]}
setRepositories(FALSE,defaults,additional)
p = available.packages()[,c("Package","Version","Repository")]
for (i in seq_len(nrow(p))) {
  print(paste(p[i, 1], p[i, 2], p[i, 3], sep = " "))
}
|
|
# clean up
rm(list = ls())
# libraries
library(shiny)
library(magrittr)
library(plotly)
library(lubridate)
# module
source("HeatModule.R")
# data
load("heatmap.RData")
# small helper function to show time as nice string
GetTimeString <- function(){as.character(ymd_hms(paste(today(),format(Sys.time(), "%H-%M-%S"))))}
# code to render plotly heatmap
GetHeatmap <- function(data, width = 1000, height = 1000){
p <- plot_ly(z = data,
x = rownames(data),
y = colnames(data),
type = "heatmap",
hoverinfo = "all",
text = data) %>%
layout(font = list(size = 12, color = "gray"),
margin = list(l = 400, b = 400),
xaxis = list(title = "indicator", titlefont = list(size = 0.1, color = "blue")),
yaxis = list(title = "indicator", titlefont = list(size = 0.1, color = "blue")),
width = width,
height = height
)
return(p)
}
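A hedged usage sketch of GetHeatmap on a small named matrix (the row/column names are invented; it assumes the libraries loaded at the top of this script):

```r
set.seed(1)
# 3x3 matrix with made-up indicator names on both axes
m <- matrix(runif(9), nrow = 3,
            dimnames = list(paste0("ind", 1:3), paste0("ind", 1:3)))
GetHeatmap(m, width = 500, height = 500)  # returns a plotly htmlwidget
```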
|
|
#!/usr/bin/env Rscript
# Parse the --file= argument out of command line args and
# determine where base directory is so that we can source
# our common sub-routines
arg0 <- sub("--file=(.*)", "\\1", grep("--file=", commandArgs(), value = TRUE))
dir0 <- dirname(arg0)
source(file.path(dir0, "common.r"))
theme_set(theme_grey(base_size = 17))
# Setup parameters for the script
params = matrix(c(
'help', 'h', 0, "logical",
'width', 'x', 2, "integer",
'height', 'y', 2, "integer",
'outfile', 'o', 2, "character",
'indir', 'i', 2, "character",
'tstart', '1', 2, "integer",
'tend', '2', 2, "integer",
'ylabel1stgraph', 'Y', 2, "character",
'title', 't', 2, "character"
), ncol=4, byrow=TRUE)
# Parse the parameters
opt = getopt(params)
if (!is.null(opt$help))
{
cat(paste(getopt(params, command = basename(arg0), usage = TRUE)))
q(status=1)
}
# Initialize defaults for opt
if (is.null(opt$width)) { opt$width = 1500 }
if (is.null(opt$height)) { opt$height = 2500 }
if (is.null(opt$indir)) { opt$indir = "current"}
if (is.null(opt$outfile)) { opt$outfile = file.path(opt$indir, "summary.png") }
if (is.null(opt$ylabel1stgraph)) { opt$ylabel1stgraph = "Ops/sec" }
if (is.null(opt$title)) { opt$title = "Throughput" }
# Load the benchmark data, passing the time-index range we're interested in
b = load_benchmark(opt$indir, opt$tstart, opt$tend)
# If there is no actual data available, bail
if (nrow(b$latencies) == 0)
{
stop("No latency information available to analyze in ", opt$indir)
}
png(file = opt$outfile, width = opt$width, height = opt$height)
# First plot req/sec from summary
thr <- qplot(elapsed,
successful / window,
data = b$summary,
geom = c("smooth", "point"),
xlab = "Time (secs)",
ylab = opt$ylabel1stgraph,
main = opt$title) +
geom_smooth(aes(y = successful / window, colour = "Committed"), size=0.5) +
geom_point(aes(y = successful / window, colour = "Committed"), size=4.0) +
geom_smooth(aes(y = failed / window, colour = "Aborted"), size=0.5) +
geom_point(aes(y = failed / window, colour = "Aborted"), size=3.0) +
scale_colour_manual("Response", values = c("#FF665F", "#188125")) +
# increase legend point size
guides(colour = guide_legend(override.aes = list(size=5))) +
# set tick sequence
# set tick thousand mark
scale_y_continuous(breaks = seq(0,1000000,by=500),
labels=function(x) format(x, big.mark = ",", scientific = FALSE)) +
theme(plot.title = element_text(size=30),
plot.margin = margin(t = 10, r = 50, b = 0, l = 20),
axis.title.x = element_text(size=25),
axis.title.y = element_text(size=25),
axis.text.x = element_text(size=20),
axis.text.y = element_text(size=20),
legend.justification = c(1,0),
legend.position = c(1,0),
legend.box.margin = margin(t = 0, r = 50, b = 100, l = 0),
legend.title = element_text(size=30),
legend.text = element_text(size=25),
legend.key.size = unit(35, 'pt'))
# Setup common elements of the latency plots
latency_plot <- ggplot(b$latencies, aes(x = elapsed)) +
facet_grid(. ~ op) +
labs(x = "Time (secs)", y = "Latency (ms)") +
guides(colour = guide_legend(override.aes = list(size=5))) +
theme(plot.title = element_text(size=30),
plot.margin = margin(t = 10, r = 50, b = 0, l = 20),
axis.title.x = element_text(size=25),
axis.title.y = element_text(size=25),
axis.text.x = element_text(size=20),
axis.text.y = element_text(size=20),
legend.justification = c(1,1),
legend.position = c(1,1),
legend.box.margin = margin(t = 50, r = 50, b = 0, l = 0),
legend.title = element_text(size=30),
legend.text = element_text(size=25),
legend.key.size = unit(35, 'pt'))
# Plot median, mean and 95th percentiles
lat_all <- latency_plot +
labs(title = "Mean, Median, and 95th Percentile Latency") +
geom_smooth(aes(y = median, color = "median"), size=1) +
geom_point(aes(y = median, color = "median"), size=4.0) +
geom_smooth(aes(y = mean, color = "mean"), size=1) +
geom_point(aes(y = mean, color = "mean"), size=4.0) +
geom_smooth(aes(y = X95th, color = "95th"), size=1) +
geom_point(aes(y = X95th, color = "95th"), size=4.0) +
scale_colour_manual("Percentile", values = c("#FF665F", "#009D91", "#FFA700"))
# Plot 99th percentile
lat_99 <- latency_plot +
labs(title = "99th Percentile Latency") +
geom_smooth(aes(y = X99th, color = "99th"), size=1.5) +
geom_point(aes(y = X99th, color = "99th"), size=4.0) +
scale_colour_manual("Percentile", values = c("#FF665F", "#009D91", "#FFA700")) +
theme(plot.margin = margin(t = 10, r = 50, b = 10, l = 20),
legend.position="none")
grid.newpage()
pushViewport(viewport(layout = grid.layout(3, 1)))
vplayout <- function(x,y) viewport(layout.pos.row = x, layout.pos.col = y)
print(thr, vp = vplayout(1,1))
print(lat_all, vp = vplayout(2,1))
print(lat_99, vp = vplayout(3,1))
dev.off()
|
|
# Morse decoder: the string below is a binary-tree ("dichotomic search")
# encoding of the Morse alphabet -- a dot descends left (index * 2),
# a dash descends right (index * 2 + 1).
morse <- "ETIANMSURWDKGOHVF L PJBXCYZQ 54 3 2 16 7 8 90"
# Read the file named by the last command-line argument; each line holds
# space-separated Morse letters, decoded one by one.
cat(sapply(readLines(tail(commandArgs(), n=1)), function(s) {
  paste(lapply(strsplit(s, " ")[[1]], function(x) {
    if (x == "") {
      " "  # consecutive spaces in the input mark a word break
    } else {
      # walk the tree: start at 1, dot -> 2a, dash -> 2a + 1
      p <- Reduce(function(a, b) {
        if (b == '.') {
          a * 2
        } else {
          a * 2 + 1
        }
      }, strsplit(x, "")[[1]], 1)
      substr(morse, p-1, p-1)  # final index maps to position p - 1 in the table
    }
  }), collapse="")
}), sep="\n")
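The tree walk can be checked on single letters: decoding "...." ends at index 16, whose (index − 1)th character in the table is "H". A minimal sketch (the helper name is ours, not from the source):

```r
morse <- "ETIANMSURWDKGOHVF L PJBXCYZQ 54 3 2 16 7 8 90"
# decode one Morse letter by walking the binary-tree table
decode_letter <- function(x) {
  p <- Reduce(function(a, b) if (b == ".") a * 2 else a * 2 + 1,
              strsplit(x, "")[[1]], 1)
  substr(morse, p - 1, p - 1)
}
decode_letter("....")  # "H"
decode_letter("..")    # "I"
```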
|
|
# Master Thesis Project - Extreme Value Theory
# Getting the experimental data above a high quantile
# (99 % for BNP, 99.5 % for the others) and fitting a GEV distribution to it
# Clear the environment
rm(list=ls())
# Close all already open graphic windows
graphics.off()
# Sourcing the auxiliary files
source("loadStockData_plain.r")
# gev.fit / gev.diag below come from the ismev package
library(ismev)
# Remark : the csv uses the metastock data format
# 7 columns : - Ticker (identifier of the stock
# and stockmarket on which it is listed)
# - Date (yyyymmdd)
# - Open
# - High
# - Low
# - Close
# - Volume
# By default, the highest price is used i.e. column 3.
CHOICE <- 3
# Loading the original data for the five stocks then
# getting the LVMH and the Total data
df_plain <- loadStockData_plain(CHOICE)
# Getting the log-returns
data_bnp <- df_plain[[1]]
data_bnp <- diff(log(data_bnp))
data_carrefour <- df_plain[[2]]
data_carrefour <- diff(log(data_carrefour))
data_lvmh <- df_plain[[3]]
data_lvmh <- diff(log(data_lvmh))
data_sanofi <- df_plain[[4]]
data_sanofi <- diff(log(data_sanofi))
data_total <- df_plain[[5]]
data_total <- diff(log(data_total))
# Computing the high quantiles (99 % for BNP, 99.5 % for the others) for each of the stocks
quantiles <- c(quantile(data_bnp,0.99), quantile(data_carrefour,0.995), quantile(data_lvmh,0.995),
quantile(data_sanofi,0.995), quantile(data_total,0.995))
# Getting the data above the threshold
data_bnp <- data_bnp[data_bnp > quantiles[1]]
data_carrefour <- data_carrefour[data_carrefour > quantiles[2]]
data_lvmh <- data_lvmh[data_lvmh > quantiles[3]]
data_sanofi <- data_sanofi[data_sanofi > quantiles[4]]
data_total <- data_total[data_total > quantiles[5]]
# Plotting the above-threshold data
x_label <- ""
y_label <- "Log-return"
title1 <- "BNP Paribas log-returns"
title2 <- "Carrefour log-returns"
title3 <- "LVMH log-returns"
title4 <- "Sanofi log-returns"
title5 <- "Total log-returns"
quartz()
png(file = "aboveThresholdData.png")
par(mfrow = c(3,2))
plot(data_bnp, pch = 1, col = "black", type = 'p', xlab = x_label, ylab = y_label, main = title1)
plot(data_carrefour, pch = 1, col = "blue", type = 'p', xlab = x_label, ylab = y_label, main = title2)
plot(data_lvmh, pch = 1, col = "green", type = 'p', xlab = x_label, ylab = y_label, main = title3)
plot(data_sanofi, pch = 1, col = "purple", type = 'p', xlab = x_label, ylab = y_label, main = title4)
plot(data_total, pch = 1, col = "red", type = 'p', xlab = x_label, ylab = y_label, main = title5)
dev.off()
graphics.off()
print("##################### BNP Paribas #####################")
quartz()
png(file = "gev.diag BNP Paribas")
gev.diag(gev.fit(data_bnp))
dev.off()
graphics.off()
print("##################### Carrefour #####################")
quartz()
png(file = "gev.diag Carrefour")
gev.diag(gev.fit(data_carrefour))
dev.off()
graphics.off()
print("##################### LVMH #####################")
quartz()
png(file = "gev.diag LVMH")
gev.diag(gev.fit(data_lvmh))
dev.off()
graphics.off()
print("##################### Sanofi #####################")
quartz()
png(file = "gev.diag Sanofi")
gev.diag(gev.fit(data_sanofi))
dev.off()
graphics.off()
print("##################### Total #####################")
quartz()
png(file = "gev.diag Total")
gev.diag(gev.fit(data_total))
dev.off()
graphics.off()
print("##################### LVMH 2 #####################")
data_lvmh <- diff(log(df_plain[[3]]))
data_lvmh <- data_lvmh[data_lvmh > quantile(data_lvmh,0.95)]
quartz()
png(file = "gev.diag LVMH2")
gev.diag(gev.fit(data_lvmh))
dev.off()
graphics.off()
print("##################### BNP 2 #####################")
data_bnp <- diff(log(df_plain[[1]]))
data_bnp <- data_bnp[data_bnp > quantile(data_bnp,0.99)]
quartz()
png(file = "gev.diag BNP2")
gev.diag(gev.fit(data_total))
dev.off()
graphics.off()
print("#################################################")
print("#################################################")
print("#################################################")
data_lvmh <- as.numeric((read.csv("daily_LVMH_table.csv", header = FALSE))[[3]])
quartz()
png(file = "LVMH_daily.png")
plot(1:length(data_lvmh), data_lvmh)
dev.off()
graphics.off()
lvmh_quant <- quantile(data_lvmh, 0.99)
data_lvmh <- data_lvmh[data_lvmh > lvmh_quant]
data_lvmh <- diff(log(data_lvmh))
quartz()
png(file = "LVMH_daily_99quant.png")
plot(1:length(data_lvmh), data_lvmh)
dev.off()
graphics.off()
quartz()
png(file = "gev.diag LVMH3")
gev.diag(gev.fit(data_lvmh))
dev.off()
graphics.off()
|
|
# Name:
# Date:
# Course: Métodos Cuantitativos (Quantitative Methods).
# Prof. Héctor Bahamonde.
# 1. This assignment combines the small exercises #2, #3 and #4. After the break, we will do the combined major assignments #1 and #2.
# 2. It is due Friday at 5 pm on uCampus.
# 3. If you want to work in a group, you may, but grades are individual.
# 4. This assignment is "multi-grade": each section is graded according to the corresponding exercise.
# 5. Remember to comment with "#" as you go, explaining your reasoning.
#################################################################
# SMALL EXERCISE 2
#################################################################
# install the library
install.packages("dslabs")
# load the library
library("dslabs")
# load the dataset
data(admissions)
#################################################################
# 2.1: Changing labels and renaming variables
#################################################################
# 1. What type of variable is "gender"? Use a command to answer this question.
# 2. Rename the variable "gender" to "genero".
# 3. Convert "genero" to a factor/qualitative variable. Hint: you need to use "levels".
#################################################################
# 2.2: Transformations and creation of new variables
#################################################################
# 1. Using the same dataset, create a new variable giving the % of applicants accepted in each "major".
#    A "major" is a degree programme (in the US); yours would be, e.g., "economics". Here they are simply coded "A", "B", "C", etc. Note: this new variable must be part of the dataset.
# 2. Based on the above, what can you conclude?
#################################################################
# SMALL EXERCISE 3
#################################################################
# Load the dataset titled base_mociones.xlsx
#################################################################
# 3.1
#################################################################
# For this exercise we will use a dataset with information on the members of the Chamber of Deputies during the 1990-2018 period, built by Florencia Olivares, Master's student at the Pontificia Universidad Católica. This dataset was used in her undergraduate thesis and later published in an academic journal. It contains the "pro-women" motions introduced by each member of the chamber during their respective 4-year term.
#################################################################
# 3.2
#################################################################
# Load the dataset, which is in .xlsx format. Remember to give it a good name!
# How many observations and variables (columns) do we have?
# What is the observational unit of the dataset?
# What does each variable mean?
#################################################################
# 3.3
#################################################################
# Create a new dataset, renaming one of the variables to something you find more readable or more correct. Remember to choose good names! Use that dataset for the rest of the exercise.
#################################################################
# 3.4
#################################################################
# Filter the observations so the dataset only contains female deputies (no need to create a new object).
#################################################################
# 3.5
#################################################################
# Produce a summary, separated by sex and political affiliation, that includes the sum of their pro-women motions.
# Who has introduced the most pro-women motions among the women?
# Who has introduced the most pro-women motions among the men?
#################################################################
# SMALL EXERCISE 4
#################################################################
# Load the dataset titled cow.dta
# In this exercise we will think about the following question:
# What makes countries rich? Is it democracy? And/or stability? And/or population size?
#################################################################
# 4.1
#################################################################
# What is the dependent variable?
# Answer:
#################################################################
# 4.2
#################################################################
# State a hypothesis and an alternative hypothesis.
# In this section you should think of bivariate hypotheses.
# (a) Hypothesis: I believe X1 causes Y because...?
# (b) Alternative hypothesis: I believe X2 causes Y because...?
#################################################################
# 4.3
#################################################################
# Estimate two multivariate linear models (y ~ X1, but including controls X2, X3, etc.). You must estimate one model per hypothesis: two models in total (one for the hypothesis, one for the alternative). Both must include controls. Present your results using "summary", interpreting the coefficients correctly. What can we learn from these two models?
# Model 1
# Model 2
# What can we learn from these two models?
|
|
# Agave Platform Science API
#
# Power your digital lab and reduce the time from theory to discovery using the Agave Science-as-a-Service API Platform. Agave provides hosted services that allow researchers to manage data, conduct experiments, and publish and share results from anywhere at any time.
#
# Agave Platform version: 2.2.14
#
# Generated by: https://github.com/swagger-api/swagger-codegen.git
#' TransferTaskProgressSummary Class
#'
#' This represents a rollup of a data transfer operation.
#'
#' @field averageRate Average transfer rate in bytes/sec
#' @field source The source URL of the transfer operation
#' @field totalActiveTransfers The number of concurrent transfers behind this transfer task
#' @field totalBytes The total number of bytes to be transferred
#' @field totalBytesTransferred The total number of bytes transferred thus far
#' @field totalFiles The total number of files transferred to date
#' @field uuid The id of the transfer task associated with this object
#'
#' @importFrom R6 R6Class
#' @importFrom jsonlite fromJSON toJSON
#' @export
TransferTaskProgressSummary <- R6::R6Class(
'TransferTaskProgressSummary',
public = list(
`averageRate` = NULL,
`source` = NULL,
`totalActiveTransfers` = NULL,
`totalBytes` = NULL,
`totalBytesTransferred` = NULL,
`totalFiles` = NULL,
`uuid` = NULL,
initialize = function(`averageRate`, `source`, `totalActiveTransfers`, `totalBytes`, `totalBytesTransferred`, `totalFiles`, `uuid`){
if (!missing(`averageRate`)) {
stopifnot(is.numeric(`averageRate`), length(`averageRate`) == 1)
self$`averageRate` <- `averageRate`
}
if (!missing(`source`)) {
stopifnot(is.character(`source`), length(`source`) == 1)
self$`source` <- `source`
}
if (!missing(`totalActiveTransfers`)) {
stopifnot(is.numeric(`totalActiveTransfers`), length(`totalActiveTransfers`) == 1)
self$`totalActiveTransfers` <- `totalActiveTransfers`
}
if (!missing(`totalBytes`)) {
stopifnot(is.numeric(`totalBytes`), length(`totalBytes`) == 1)
self$`totalBytes` <- `totalBytes`
}
if (!missing(`totalBytesTransferred`)) {
stopifnot(is.numeric(`totalBytesTransferred`), length(`totalBytesTransferred`) == 1)
self$`totalBytesTransferred` <- `totalBytesTransferred`
}
if (!missing(`totalFiles`)) {
stopifnot(is.numeric(`totalFiles`), length(`totalFiles`) == 1)
self$`totalFiles` <- `totalFiles`
}
if (!missing(`uuid`)) {
stopifnot(is.character(`uuid`), length(`uuid`) == 1)
self$`uuid` <- `uuid`
}
},
asJSON = function() {
self$toJSON()
},
toJSON = function() {
TransferTaskProgressSummaryObject <- list()
if (!is.null(self$`averageRate`)) {
TransferTaskProgressSummaryObject[['averageRate']] <- self$`averageRate`
}
else {
TransferTaskProgressSummaryObject[['averageRate']] <- NULL
}
if (!is.null(self$`source`)) {
TransferTaskProgressSummaryObject[['source']] <- self$`source`
}
else {
TransferTaskProgressSummaryObject[['source']] <- NULL
}
if (!is.null(self$`totalActiveTransfers`)) {
TransferTaskProgressSummaryObject[['totalActiveTransfers']] <- self$`totalActiveTransfers`
}
else {
TransferTaskProgressSummaryObject[['totalActiveTransfers']] <- NULL
}
if (!is.null(self$`totalBytes`)) {
TransferTaskProgressSummaryObject[['totalBytes']] <- self$`totalBytes`
}
else {
TransferTaskProgressSummaryObject[['totalBytes']] <- NULL
}
if (!is.null(self$`totalBytesTransferred`)) {
TransferTaskProgressSummaryObject[['totalBytesTransferred']] <- self$`totalBytesTransferred`
}
else {
TransferTaskProgressSummaryObject[['totalBytesTransferred']] <- NULL
}
if (!is.null(self$`totalFiles`)) {
TransferTaskProgressSummaryObject[['totalFiles']] <- self$`totalFiles`
}
else {
TransferTaskProgressSummaryObject[['totalFiles']] <- NULL
}
if (!is.null(self$`uuid`)) {
TransferTaskProgressSummaryObject[['uuid']] <- self$`uuid`
}
else {
TransferTaskProgressSummaryObject[['uuid']] <- NULL
}
TransferTaskProgressSummaryObject
},
fromJSON = function(TransferTaskProgressSummaryObject) {
if (is.character(TransferTaskProgressSummaryObject)) {
TransferTaskProgressSummaryObject <- jsonlite::fromJSON(TransferTaskProgressSummaryObject)
}
if ("result" %in% names(TransferTaskProgressSummaryObject)) {
TransferTaskProgressSummaryObject <- TransferTaskProgressSummaryObject$result
}
if (!is.null(TransferTaskProgressSummaryObject$`averageRate`)) {
self$`averageRate` <- TransferTaskProgressSummaryObject$`averageRate`
}
if (!is.null(TransferTaskProgressSummaryObject$`source`)) {
self$`source` <- TransferTaskProgressSummaryObject$`source`
}
if (!is.null(TransferTaskProgressSummaryObject$`totalActiveTransfers`)) {
self$`totalActiveTransfers` <- TransferTaskProgressSummaryObject$`totalActiveTransfers`
}
if (!is.null(TransferTaskProgressSummaryObject$`totalBytes`)) {
self$`totalBytes` <- TransferTaskProgressSummaryObject$`totalBytes`
}
if (!is.null(TransferTaskProgressSummaryObject$`totalBytesTransferred`)) {
self$`totalBytesTransferred` <- TransferTaskProgressSummaryObject$`totalBytesTransferred`
}
if (!is.null(TransferTaskProgressSummaryObject$`totalFiles`)) {
self$`totalFiles` <- TransferTaskProgressSummaryObject$`totalFiles`
}
if (!is.null(TransferTaskProgressSummaryObject$`uuid`)) {
self$`uuid` <- TransferTaskProgressSummaryObject$`uuid`
}
},
toJSONString = function() {
sprintf(
'{
"averageRate": %s,
"source": %s,
"totalActiveTransfers": %s,
"totalBytes": %s,
"totalBytesTransferred": %s,
"totalFiles": %s,
"uuid": %s
}',
ifelse( is.null(self$`averageRate`), "null", paste0('"', self$`averageRate`, '"')),
ifelse( is.null(self$`source`), "null", paste0('"', self$`source`, '"')),
ifelse( is.null(self$`totalActiveTransfers`), "null", paste0('"', self$`totalActiveTransfers`, '"')),
ifelse( is.null(self$`totalBytes`), "null", paste0('"', self$`totalBytes`, '"')),
ifelse( is.null(self$`totalBytesTransferred`), "null", paste0('"', self$`totalBytesTransferred`, '"')),
ifelse( is.null(self$`totalFiles`), "null", paste0('"', self$`totalFiles`, '"')),
ifelse( is.null(self$`uuid`), "null", paste0('"', self$`uuid`, '"'))
)
},
fromJSONString = function(TransferTaskProgressSummaryJson) {
TransferTaskProgressSummaryObject <- jsonlite::fromJSON(TransferTaskProgressSummaryJson)
self$fromJSON(TransferTaskProgressSummaryObject)
}
)
)
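A hedged usage sketch of the generated class (field values, the source URL, and the uuid are illustrative, not from a real transfer):

```r
# construct a progress summary with made-up numbers
summary <- TransferTaskProgressSummary$new(
  `averageRate` = 1048576,
  `source` = "agave://storage-system/inputs/data.tar.gz",  # made-up URL
  `totalBytes` = 10485760,
  `totalBytesTransferred` = 5242880,
  `uuid` = "1234567890-transfer"                           # made-up id
)
# serialise the populated fields to a named list
str(summary$toJSON())
```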
|
|
# qnoise R API
#
# mat - input data matrix
# noiseType -
# 0: Missing
# 1: Inconsistency
# 2: Outlier
# 3: Error
# 4: Duplicate
# percentage - percentage of the noise seed
# granularity -
# 0: ROW
# 1: CELL
# model - tuple picking model
# 0: Random
# 1: Histogram
# filteredColumns - the columns to be filtered, a list of strings.
# seed - seed for duplications [0 - 1]
# distance - distance from active domain, a list of double.
# constraints - string constraints
# logFile - logFile
inject <- function(
mat, noiseType, percentage,
granularity=0L, model=0L, filteredColumns=NULL,
seed=0.0, distance=NULL, constraints=NULL, logFile=NULL
) {
library(rJava)
.jinit()
.jaddClassPath("out/bin/qnoise.jar")
.jaddClassPath("out/bin/guava-14.0.1.jar")
.jaddClassPath("out/bin/opencsv-2.3.jar")
.jaddClassPath("out/bin/javatuples-1.2.jar")
# convert data
mat[] <- as.character(mat)
data <- .jarray(mat, dispatch=TRUE)
data <- .jcast(data, "[[Ljava/lang/String;")
# filtered columns
if (is.null(filteredColumns)) {
filteredColumns <- .jcast(.jnull(), "[Ljava/lang/String;")
} else {
filteredColumns <- .jarray(filteredColumns)
}
# distance
if (is.null(distance)) {
distance <- .jcast(.jnull(), "[D")
} else {
distance <- .jarray(distance)
}
# constraints
if (is.null(constraints)) {
constraints <- .jcast(.jnull(), "[Ljava/lang/String;")
} else {
constraints <- .jarray(constraints)
}
# logFile
if (is.null(logFile)) {
logFile <- .jnull("java/lang/String")
}
# convert mat
result <- .jcall(
"qa.qcri.qnoise.QnoiseFacade",
"[[Ljava/lang/String;",
"inject",
data,
noiseType,
granularity,
percentage,
model,
filteredColumns,
seed,
distance,
constraints,
logFile
)
return (lapply(result, .jevalArray))
}
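A hedged usage sketch of the wrapper (it assumes the qnoise jars listed in the function are present under out/bin/ and a JVM is available; the matrix is made up):

```r
# 5x4 character matrix standing in for a small data table
m <- matrix(as.character(1:20), nrow = 5)
# inject 10 % missing values (noiseType = 0) at cell granularity (granularity = 1)
noisy <- inject(m, noiseType = 0L, percentage = 0.1, granularity = 1L)
```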
|
|
#' Effective number of codons
#'
#' This function calculates the effective number of codons (ENC) using Wright's or Fuglsang's formula.
#'
#' @param x a list of KZsqns objects
#' @param spp.list an array indicating the species identity of each KZsqns object in the list \emph{x}. The default is to calculate an ENC for each KZsqns object. When \emph{spp.list} is specified, an ENC is calculated for each unique element of \emph{spp.list}
#' @param y a number selecting the codon table used for translation. 0 = standard codon table, 2 = vertebrate mitochondrial (see dna2aa for additional options). If no number is specified, or an unsupported option is given, a warning is issued and the standard codon table is used
#' @param formula a character array denoting the formula for calculating ENC. Options include: 'w': Wright's formula, 'f4-': Fuglsang's Ncf4- formula.
#' @return a numeric array of the ENCs
#' @section Warning:
#' According to Wright's formula, the ENC is the sum of the heterogeneity of all the degeneracy classes. When all the codons belonging to a degeneracy class are missing, the ENC cannot be calculated, and a warning is issued when this happens. The most likely cause is that the sequence of codons is too short; with short sequences the calculated ENC is likely to deviate substantially from the underlying value. It is recommended to increase the sample size when Wright's formula is preferred. Alternatively, Fuglsang's Ncf4- formula can be used instead.
#' @references
#' Fuglsang, A., 2006 Estimating the "Effective Number of Codons": The Wright way of determining codon homozygosity leads to superior estimates. Genetics 172:1301-1307
#' Wright, F., 1990 The 'effective number of codons' used in a gene. Gene 87: 23-29.
#' @examples
#' dnalist = vector('list', 2)
#' dnalist[[1]] = CodonTable2[sample(1:64, 1e4, TRUE),1]
#' attr(dnalist[[1]], 'class') <- 'KZsqns'
#' dnalist[[2]] = CodonTable2[sample(1:64, 1e4, TRUE),1]
#' attr(dnalist[[2]], 'class') <- 'KZsqns'
#' enc(x = dnalist, y = 2) # Both numbers should be around 60, as there are 60 amino-acid-coding codons and 4 stop codons in codon table 2
#'
#' dnalist = vector('list', 2)
#' dnalist[[1]] = CodonTable2[sample(1:64, 1e4, TRUE),1]
#' attr(dnalist[[1]], 'class') <- 'KZsqns'
#' dnalist[[2]] = CodonTable2[sample(1:64, 1e4, TRUE),1]
#' attr(dnalist[[2]], 'class') <- 'KZsqns'
#' enc(x = dnalist, y = 4) # Both numbers should be around 62, as there are 62 amino-acid-coding codons and 2 stop codons in codon table 4
#' @export
enc <- function(x, y=0, spp.list='n', formula = 'w'){
if(!is.list(x)) x = list(x)
cTable = arg_check(x[[1]], "KZsqns", y)
calc = nc_wright
switch(formula,
'f4-' = {calc = nc_fuglsang4m},
'w' = {},
{
warning("Not a valid formula, and Wright's formula will be used.")
})
if(length(spp.list)==1 && spp.list=='n'){
cat("No species list is provided\nCalculating ENCs for each KZsqns object.\n")
return(sapply(x, function(i){calc(table(i), cTable)}))
} else {
if (length(spp.list)!=length(x)) stop("The lengths of the species list and the list of KZsqns do not match")
spp = unique(spp.list)
fixx = function(i){
jobs = which(spp.list==spp[i])
ans = rep(0, 64)
for(j in jobs){
ans = rbind(ans, table(x[[j]])[cTable[,1]])
}
ans = colSums(ans, na.rm = TRUE)
names(ans) <- cTable[,1]
return(ans)
}
ans = sapply(1:length(spp), function(i){calc(fixx(i), cTable)})
names(ans) <- spp
return(ans)
}
}
|
|
# Towers of Hanoi: move ndisks from peg `from` to peg `to`, using `via` as spare.
hanoimove <- function(ndisks, from, to, via) {
  if ( ndisks == 1 )
    cat("move disk from", from, "to", to, "\n")
  else {
    hanoimove(ndisks-1, from, via, to)  # park the top n-1 disks on the spare peg
    hanoimove(1, from, to, via)         # move the largest disk
    hanoimove(ndisks-1, via, to, from)  # bring the n-1 disks back on top
  }
}
hanoimove(4,1,2,3)
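The recursion above performs 2^n − 1 moves for n disks, since each call makes one move plus two subproblems of size n − 1. A small sketch counting the moves (the counter function is ours, not from the source):

```r
# number of moves the Hanoi recursion makes: T(n) = 2*T(n-1) + 1, T(1) = 1
count_moves <- function(n) {
  if (n == 1) 1 else 2 * count_moves(n - 1) + 1
}
count_moves(4)  # 15 moves for four disks
```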
|
|
library(shiny)
library(hackR)
shinyServer(function(input, output, session){
localstate <- reactiveValues(text="")
observeEvent(input$button_leetify, {
leethandle <- leetify(handle=input$inputbox, case.type=input$case.type, sub.type=input$sub.type, leetness=input$leetness, include.unicode=input$include.unicode)
localstate$text <- capture.output(cat(leethandle))
})
observeEvent(input$button_random, {
leethandle <- rleet(case.type=input$case.type, sub.type=input$sub.type, leetness=input$leetness, include.unicode=input$include.unicode)
localstate$text <- capture.output(cat(leethandle))
})
output$text <- renderUI({
localstate$text
})
})
# shinyServer(function(input, output, session){
# localstate <- reactiveValues()
# localstate$ng <- NULL
# localstate$tb <- capture.output(cat("Input the text you want to process in the box on the left."))
#
# observeEvent(input$button_process, {
# ### Fit
# localstate$ng <- NULL
# invisible(gc())
#
# localstate$ng <- ngram(str=input$inputbox, n=input$the_n_in_ngram)
# out <- gsub(x=capture.output(print(localstate$ng)), pattern="\\[1\\]|\"", replacement="")
# localstate$tb <- capture.output(cat(paste0("PROCESSED: ", out, ".")))
# })
#
# ### Inspect
# observeEvent(input$button_inspect, {
# validate(
# need(!is.null(localstate$ng), "You must first process some input text.")
# )
#
# localstate$tb <- HTML(paste(capture.output(print(localstate$ng, output="truncated")), collapse="<br/>"))
# })
#
# ### Babble
# observeEvent(input$button_babble, {
# validate(
# need(!is.null(localstate$ng), "You must first process some input text."),
# need(input$ng_seed == "" || suppressWarnings(!is.na(as.integer(input$ng_seed))), "Seed must be a number or blank.")
# )
#
# seed <- input$ng_seed
# if (seed == "")
# seed <- ngram:::getseed()
# else
# seed <- as.integer(seed)
#
# localstate$tb <- capture.output(cat(babble(ng=localstate$ng, genlen=input$babble_len, seed=seed)))
# })
#
#
# output$text <- renderUI({
# localstate$tb
# })
# })
|
|
/*
** Resources for the sys.path initialization, the Python options
** and the preference filename
*/
#include "Types.r"
#include "patchlevel.h"
#include "pythonresources.h"
/* A few resource type declarations */
type 'Popt' {
literal byte version = POPT_VERSION_CURRENT;
byte noInspect = 0, inspect = 1;
byte noVerbose = 0, verbose = 1;
byte noOptimize = 0, optimize = 1;
byte noUnbuffered = 0, unbuffered = 1;
byte noDebugParser = 0, debugParser = 1;
byte unused_0 = 0, unused_1 = 1;
byte closeAlways = POPT_KEEPCONSOLE_NEVER,
noCloseOutput = POPT_KEEPCONSOLE_OUTPUT,
noCloseError = POPT_KEEPCONSOLE_ERROR,
closeNever = POPT_KEEPCONSOLE_ALWAYS;
byte interactiveOptions = 0, noInteractiveOptions = 1;
byte argcArgv = 0, noArgcArgv = 1;
byte newStandardExceptions = 0, oldStandardExceptions = 1;
byte sitePython = 0, noSitePython = 1;
byte navService = 0, noNavService = 1;
byte noDelayConsole = 0, delayConsole = 1;
byte noDivisionWarning = 0, divisionWarning = 1;
byte noUnixNewlines = 0, unixNewlines = 1;
};
type 'TMPL' {
wide array {
pstring;
literal longint;
};
};
/* The resources themselves */
/* Popt template, for editing them in ResEdit */
resource 'TMPL' (PYTHONOPTIONS_ID, "Popt") {
{
"preference version", 'DBYT',
"Interactive after script", 'DBYT',
"Verbose import", 'DBYT',
"Optimize", 'DBYT',
"Unbuffered stdio", 'DBYT',
"Debug parser", 'DBYT',
"Keep window on normal exit", 'DBYT',
"Keep window on error exit", 'DBYT',
"No interactive option dialog", 'DBYT',
"No argc/argv emulation", 'DBYT',
"Old standard exceptions", 'DBYT',
"No site-python support", 'DBYT',
"No NavServices in macfs", 'DBYT',
"Delay console window", 'DBYT',
"Warnings for old-style division", 'DBYT',
"Allow unix newlines on textfile input",'DBYT',
}
};
/* The default-default Python options */
resource 'Popt' (PYTHONOPTIONS_ID, "Options") {
POPT_VERSION_CURRENT,
noInspect,
noVerbose,
noOptimize,
noUnbuffered,
noDebugParser,
unused_0,
noCloseOutput,
interactiveOptions,
argcArgv,
newStandardExceptions,
sitePython,
navService,
noDelayConsole,
noDivisionWarning,
unixNewlines,
};
/* The sys.path initializer */
resource 'STR#' (PYTHONPATH_ID, "sys.path initialization") {
{
"$(PYTHON)",
"$(PYTHON):Lib",
"$(PYTHON):Lib:lib-dynload",
"$(PYTHON):Lib:plat-mac",
"$(PYTHON):Lib:plat-mac:lib-scriptpackages",
"$(PYTHON):Mac:Lib",
"$(PYTHON):Extensions:img:Mac",
"$(PYTHON):Extensions:img:Lib",
"$(PYTHON):Extensions:Imaging",
"$(PYTHON):Lib:lib-tk",
"$(PYTHON):Lib:site-packages",
}
};
/* The preferences filename */
resource 'STR ' (PREFFILENAME_ID, PREFFILENAME_PASCAL_NAME) {
$$Format("Python %s Preferences", PY_VERSION)
};
|
|
#!/usr/bin/env Rscript
# Making figures for AGU Fall Meeting presentation 9 Dec 2019
# Eric Barefoot
# Dec 2019
# load packages
library(tidyverse)
library(here)
library(broom)
library(lme4)
library(devtools) # for the next thing
library(colorspace)
library(paleohydror)
library(purrr)
library(rsample)
library(segmented)
if (interactive()) {
require(colorout)
}
# establish directories
figdir = here('figures', 'outputs', 'agu_2019_talk')
# load data
chamb_data = readRDS(file = here('data','derived_data', 'chamberlin_model_data.rds'))
chamb_model = readRDS(file = here('data','derived_data', 'chamberlin_stat_model.rds'))
model = readRDS(file = here('data','derived_data','piceance_3d_model_data.rds'))
field = readRDS(file = here('data','derived_data','piceance_field_data.rds'))
combined = readRDS(file = here('data', 'derived_data', 'piceance_field_model_data.rds'))
barpres = readRDS(file = here('data', 'derived_data', 'piceance_bar_preservation.rds'))
strat_model_data = readRDS(here('data', 'derived_data', 'strat_model_results_20191202.rds'))
# Define a color palette for the stratigraphy.
cpal = hcl(h = c(80, 120, 240), c = rep(100, 3), l = c(85, 65, 20))
# Get data from field organized into tables for plotting
field_depths = field %>%
  filter(structure %in% c('bar','channel') & meas_type %in% c('thickness', 'dimensions')) %>%
  mutate(meas_a = case_when(meas_type == 'thickness' ~ meas_a, meas_type == 'dimensions' ~ meas_b))
model_depths = model %>% filter(interpretations %in% c('full'))
all_depths = bind_rows(field_depths, model_depths, .id = 'data_source') %>% mutate(data_source = recode(.$data_source, `1` = 'field', `2` = 'model'))
all_depths = all_depths %>% mutate(formID = case_when(formation == 'ohio_creek' ~ 3, formation == 'atwell_gulch' ~ 2, formation == 'molina' ~ 1, formation == 'shire' ~ 0))
legend_ord = levels(with(all_depths, reorder(formation, formID)))
formation_labeller = as_labeller(c(`0` = 'Shire', `1` = 'Molina', `2` = 'Atwell Gulch'))
source_labeller = as_labeller(c('field' = 'Field Data', 'model' = '3D Model Data'))
bedloadSamples = field %>%
filter(meas_type == 'sample' & !(structure %in% c('outcrop', 'formation', 'bar_drape')) & meas_a < 2000) %>%
mutate(D50 = meas_a * 1e-6) %>%
mutate(formID = case_when(
formation == 'ohio_creek' ~ 3,
formation == 'atwell_gulch' ~ 2,
formation == 'molina' ~ 1,
formation == 'shire' ~ 0))
legend_ord = levels(with(bedloadSamples, reorder(formation, formID)))
samples_for_slope = bedloadSamples %>% select(set_id, D50, textures, composition, samp_ind, samp_code, samp_description)
depths_for_slope = all_depths %>% filter(data_source == 'field') %>% select(-c(textures, composition, samp_ind, samp_code, samp_description))
paleohydroset = inner_join(samples_for_slope, depths_for_slope, by = 'set_id') %>%
mutate(S = trampush_slp(D50, meas_a))
# modify and model data from suite of model runs.
strat_model_dataFilt = strat_model_data %>%
filter(between(IR, 0.3, 0.55)) %>%
filter(most_common_avulsion == 'compensational') %>%
mutate(datatype = 'Model Run')
strat_model_dataFilt2 = strat_model_dataFilt %>% mutate(presInv = 1 / (pres_percents-1.01), n2gInv = 1/n2g)
n2gMod = lm(n2gInv ~ presInv + latmob + IR, data = strat_model_dataFilt2, weights = 1-pres_percents)
n2gModSimple = lm(n2gInv ~ presInv, data = strat_model_dataFilt2, weights = 1-pres_percents)
reworkingMod = lm(reworking ~ pres_percents + latmob + IR, data = strat_model_dataFilt2)
reworkingModSeg = segmented(reworkingMod, seg.Z = ~pres_percents)
reworkingModSimple = lm(reworking ~ pres_percents, data = strat_model_dataFilt2)
reworkingModSegSimple = segmented(reworkingModSimple, seg.Z = ~pres_percents)
strat_model_dataFilt3 = strat_model_dataFilt2 %>% mutate(n2gPred = 1/predict(n2gModSimple)) %>%
mutate(reworkingPred = predict(reworkingModSegSimple))
avul = rep('compensational', nrow(barpres))
IR = rnorm(nrow(barpres), 0.4, 0.05)
latmob = rnorm(nrow(barpres), 8, 2)
barpres2 = bind_cols(barpres, tibble(avul, latmob, IR))
barpres3 = barpres2 %>% rename(pres_percents = 'perc_full') %>%
mutate(formation = recode(formation, shire = 'atwell_gulch', atwell_gulch = 'shire'), presInv = 1 / (pres_percents-1.01)) %>% ungroup()
barpres_pred = barpres3 %>% mutate(n2g = 1/predict(n2gMod, newdata = .), reworking = predict(reworkingModSeg, newdata = .)) %>%
mutate(formID = case_when(
formation == 'ohio_creek' ~ 3,
formation == 'atwell_gulch' ~ 2,
formation == 'molina' ~ 1,
formation == 'shire' ~ 0))
barpres_pred_means = barpres_pred %>% group_by(formation) %>% summarize(mean_reworking = mean(reworking), mean_elems = mean(pres_percents)) %>%
mutate(formID = case_when(
formation == 'ohio_creek' ~ 3,
formation == 'atwell_gulch' ~ 2,
formation == 'molina' ~ 1,
formation == 'shire' ~ 0))
d = barpres_pred_means
ann_data = tibble(x = c(rep(0,3), d$mean_elems), y = c(d$mean_reworking, rep(0.2,3)), xend = rep(d$mean_elems,2), yend = rep(d$mean_reworking,2), formation = rep(d$formation,2), formID = rep(d$formID,2))
#######################################################################
start_theme = theme_get()
pres_theme = theme_update(#line = element_line(color = '#bdbdbd'),
panel.background = element_rect(fill = '#bdbdbd', color = '#bdbdbd'),
legend.key = element_rect(fill = '#bdbdbd', color = '#bdbdbd'),
panel.grid = element_line(color = '#a79c99'),
strip.background = element_blank(),
strip.text.x = element_blank(),
plot.background = element_rect(fill = "transparent", color = 'transparent'),
legend.background = element_rect(fill = "transparent"))
depth_freq_field = ggplot() +
stat_bin(aes(x = meas_a, y = ..density.., fill = reorder(formation, formID)), geom = 'bar', position = "identity", bins = 20, data = filter(all_depths, data_source == 'field'), color = 'black', size = 0.5) +
scale_fill_manual(values = cpal, name = 'Member', labels = c('Shire','Molina','Atwell Gulch'), breaks = legend_ord) +
labs(x = 'Flow Depth (m)', y = 'Probability Density') +
facet_wrap(vars(formID), ncol = 1, labeller = formation_labeller)
slope_freq = paleohydroset %>% ggplot() + stat_bin(aes(x = S, y = ..density.., fill = reorder(formation, formID)), geom = 'bar', position = 'identity', bins = 8, color = 'black', size = 0.5) +
scale_x_log10(breaks = c(1e-04, 2e-04, 5e-04, 0.001, 0.003), labels = c(expression(10^-4), expression(2%*%10^-4), expression(5%*%10^-4), expression(10^-3), expression(3%*%10^-3))) +
scale_fill_manual(values = cpal, name = 'Member', labels = c('Shire','Molina','Atwell Gulch'), breaks = legend_ord) +
labs(x = 'Slope (-)', y = 'Probability Density') +
facet_wrap(vars(formID), ncol = 1)
plot_width = 5
ggsave(plot = slope_freq, filename = "petm_slope_piceance_frequency.png", path = figdir, width = plot_width, height = plot_width, units = "in", bg = 'transparent')
ggsave(plot = depth_freq_field, filename = "petm_depth_piceance_frequency.png", path = figdir, width = plot_width, height = plot_width, units = "in", bg = 'transparent')
strat_model_runs = ggplot(aes(x = pres_percents, y = reworking), data = strat_model_dataFilt3) +
geom_point() +
xlim(0,1) + ylim(0.4,1) +
scale_color_manual(values = cpal, name = 'Member', labels = c('Shire','Molina','Atwell Gulch'), breaks = legend_ord) +
scale_fill_manual(values = cpal, name = 'Member', labels = c('Shire','Molina','Atwell Gulch'), breaks = legend_ord) +
labs(x = '% fully preserved bars', y = '% undisturbed stratigraphy')
ggsave(plot = strat_model_runs, filename = "strat_model_runs.png", path = figdir, width = 7, height = 5, units = "in", bg = 'transparent')
strat_model_runs_trend = strat_model_runs +
geom_line(aes(x = pres_percents, y = reworkingPred), data = strat_model_dataFilt3, color = 'red3', size = 1.5)
ggsave(plot = strat_model_runs_trend, filename = "strat_model_runs_trend.png", path = figdir, width = 7, height = 5, units = "in", bg = 'transparent')
strat_model_runs_overlay = strat_model_runs +
geom_point(aes(x = pres_percents, y = reworking, color = formation), data = barpres_pred) +
theme(legend.position="none")
ggsave(plot = strat_model_runs_overlay, filename = "strat_model_runs_overlay.png", path = figdir, width = 7, height = 5, units = "in", bg = 'transparent')
strat_model_runs_overlay_legend = strat_model_runs +
geom_point(aes(x = pres_percents, y = reworking, color = formation), data = barpres_pred)
ggsave(plot = strat_model_runs_overlay_legend, filename = "strat_model_runs_overlay_legend.png", path = figdir, width = 7, height = 5, units = "in", bg = 'transparent')
bootstrap_plot = ggplot(aes(y = ..density.., fill = reorder(formation, formID)), data = barpres_pred) +
scale_fill_manual(values = cpal, name = 'Member', labels = c('Shire','Molina','Atwell Gulch'), breaks = legend_ord) +
scale_color_manual(values = cpal, name = 'Member', labels = c('Shire','Molina','Atwell Gulch'), breaks = legend_ord) +
facet_wrap(vars(formID), ncol = 1)
petm_bar_preservation = bootstrap_plot +
stat_bin(aes(x = pres_percents, color = reorder(formation, formID)), bins = 20, geom = 'bar', size = 0.5, alpha = 0.4) +
geom_vline(aes(xintercept = mean_elems, color = reorder(formation, formID)), data = d, size = 2) +
labs(x = '% fully preserved bars', y = 'Probability Density', color = 'Member')
ggsave(plot = petm_bar_preservation, filename = "petm_bar_preservation.png", path = figdir, width = plot_width*1.3, height = plot_width, units = "in", bg = 'transparent')
petm_sediment_retention = bootstrap_plot +
stat_bin(aes(x = reworking, color = reorder(formation, formID)), data = barpres_pred, bins = 20, geom = 'bar', size = 0.5, alpha = 0.4) +
geom_vline(aes(xintercept = mean_reworking, color = reorder(formation, formID)), data = d, size = 2) +
labs(x = '% undisturbed stratigraphy', y = 'Probability Density', color = 'Member')
ggsave(plot = petm_sediment_retention, filename = "petm_sediment_retention.png", path = figdir, width = plot_width*1.3, height = plot_width, units = "in", bg = 'transparent')
strat_model_runs_n2g = ggplot(aes(x = pres_percents, y = n2g), data = strat_model_dataFilt3) +
geom_point() +
xlim(0,1) + ylim(0,1) +
scale_color_manual(values = cpal, name = 'Member', labels = c('Shire','Molina','Atwell Gulch'), breaks = legend_ord) +
scale_fill_manual(values = cpal, name = 'Member', labels = c('Shire','Molina','Atwell Gulch'), breaks = legend_ord) +
labs(x = '% fully preserved bars', y = '% Sand')
ggsave(plot = strat_model_runs_n2g, filename = "strat_model_runs_n2g.png", path = figdir, width = 7, height = 5, units = "in", bg = 'transparent')
strat_model_runs_n2g_trend = strat_model_runs_n2g +
geom_line(aes(x = pres_percents, y = n2gPred), data = strat_model_dataFilt3, color = 'red3', size = 1.5)
ggsave(plot = strat_model_runs_n2g_trend, filename = "strat_model_runs_n2g_trend.png", path = figdir, width = 7, height = 5, units = "in", bg = 'transparent')
############### BONUS SLIDES ###############
strat_model_runs_n2g_overlay = strat_model_runs_n2g +
geom_point(aes(x = pres_percents, y = n2g, color = formation), data = barpres_pred) +
theme(legend.position="none")
ggsave(plot = strat_model_runs_n2g_overlay, filename = "strat_model_runs_n2g_overlay.png", path = figdir, width = 7, height = 5, units = "in", bg = 'transparent')
stratFilt = strat_model_data %>%
filter(between(IR, 0.3, 0.55))
reworking_by_avulsion_type = ggplot(aes(x = pres_percents, y = reworking, color = most_common_avulsion), data = stratFilt) +
geom_point() +
geom_line(aes(x = pres_percents, y = reworkingPred), data = strat_model_dataFilt3, color = '#009e2f', size = 1.5) +
xlim(0,1) + ylim(0.4,1) +
geom_abline() +
labs(x = '% fully preserved bars', y = '% undisturbed stratigraphy', color = 'Avulsion Rule')
ggsave(plot = reworking_by_avulsion_type, filename = "reworking_by_avulsion_type.png", path = figdir, width = 7, height = 5, units = "in", bg = 'transparent')
|
|
# Plot shaded lines for data series (after plot has been initiated)
#' Plot shaded polygons
#'
#' Plot shaded polygons between any number of input lines
#'
#' @param x Data frame with x values
#' @param y Data frame with y values
#' @param clr vector with colors
#' @param ... Other parameters passed to the polygon function
#'
#' @return Shaded areas in current plot
#' @export
plotRibbons <- function(x,y,clr,...){
for (i in 1:(ncol(x)-1)){
polygon(c(x[,i],rev(x[,i+1])),
c(y[,i],rev(y[,i+1])),
col=clr[i],
...)
}
}
#' @rdname plotRibbons
#' @export
pkShadeplot <- plotRibbons
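A minimal usage sketch for `plotRibbons` (hypothetical data, drawn to a null device so it runs headless; the definition is repeated so the chunk is standalone):

```r
# Definition repeated from above so this chunk runs standalone.
plotRibbons <- function(x, y, clr, ...) {
  for (i in 1:(ncol(x) - 1)) {
    polygon(c(x[, i], rev(x[, i + 1])),
            c(y[, i], rev(y[, i + 1])),
            col = clr[i], ...)
  }
}
# Hypothetical data: one shaded ribbon between a lower and an upper curve.
x <- data.frame(x1 = 1:10, x2 = 1:10)          # same x positions for both lines
y <- data.frame(lo = sin(1:10), hi = sin(1:10) + 1)
pdf(NULL)                                      # null device: nothing written to disk
plot(x$x1, y$hi, type = "n", ylim = range(y))
plotRibbons(x, y, clr = "grey80", border = NA)
invisible(dev.off())
```

Each adjacent pair of columns in `x`/`y` yields one polygon, so `n` columns give `n - 1` ribbons.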
|
|
# Merge the AERONET and the MetOffice data
aeronet <- import_aeronet_data("data/AERONET/lev20/920801_111119_Chilbolton.lev20")
meto <- import_meto_data("data/MetOffice/MetOData_All.csv")
index(meto) <- as.POSIXct(index(meto))
index(aeronet) <- as.POSIXct(index(aeronet))
ind <- index(aeronet)
rounded <- round_date(ind, 'hour')
# Force minute units: the default difftime unit depends on magnitude,
# which would make the `< 15` (minutes) comparison below unreliable.
time_diffs <- abs(as.numeric(ind - rounded, units = "mins"))
aeronet_sub <- aeronet[time_diffs < 15]
index(aeronet_sub) <- round_date(index(aeronet_sub), "hour")
# Options::
# Remove duplicates
#aeronet_sub <- aeronet_sub[!duplicated(aeronet_sub)]
# Use mean of observations at the same time
aeronet_sub <- aggregate(aeronet_sub, index(aeronet_sub), mean)
merged <- merge(aeronet_sub, meto, all=FALSE)
names(merged) <- c('aeronet', 'meto')
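The nearest-hour matching above can be sketched in base R (lubridate's `round_date` is emulated with `round.POSIXt`; the timestamp is hypothetical):

```r
# An observation 20 min away from the nearest hour fails the 15-min window.
ind      <- as.POSIXct("2019-06-01 10:40:00", tz = "UTC")
rounded  <- as.POSIXct(round(ind, "hours"))          # rounds to 11:00
diff_min <- abs(as.numeric(ind - rounded, units = "mins"))
keep     <- diff_min < 15                            # FALSE: dropped from the merge
```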
|
|
Rebol [
title: "SWF Examiner"
Author: "oldes"
Date: 21-11-2002
version: 0.0.20
File: %exam-swf.r
Email: oliva.david@seznam.cz
Purpose: {
A basic SWF parser that can
show all standard information from the file.
}
Category: [file util 3]
History: [
0.0.20 [21-11-2002 {Now using direct streaming}]
0.0.12 [17-12-2001 {Support for new tags} "oldes"]
0.0.7 [30-11-2001 {
Fixed converting numbers from binary.
Added support for some other tags, such as morphing} "oldes"]
0.0.2 [6-11-2001 "New start..." "oldes"]
0.0.1 [3-Sep-2000 "Initial version" "oldes"]
]
comment: {}
]
details?: true
system/options/binary-base: 16
;--------------------------------------
swf-tags: make block! [
0 ["end" [print ""]]
1 ["showFrame" [print ""]]
2 ["DefineShape" [either details? [parse-DefineShape][probe tag-bin]]]
4 ["PlaceObject" [parse-PlaceObject]]
5 ["RemoveObject" [parse-RemoveObject]]
22 ["DefineShape2" [either details? [parse-DefineShape][probe tag-bin]]] ;Extends the capabilities of DefineShape with the ability to support more than 255 styles in the style list and multiple style lists in a single shape. (SWF 2.0)
24 ["Protected file!"]
32 ["DefineShape3" [either details? [parse-DefineShape][probe tag-bin]]] ;Extends the capabilities of DefineShape2 by extending all of the RGB color fields to support RGBA with alpha transparency. (SWF 3.0)
9 ["setBackgroundColor" [print to-tuple tag-bin]]
10 ["DefineFont" [parse-defineFont]]
11 ["DefineText" [parse-defineText]]
12 ["DoAction Tag" [print "" parse-ActionRecord tag-bin]]
13 ["DefineFontInfo"]
14 ["DefineSound" [parse-defineSound]]
15 ["StartSound" [parse-startSound]]
18 ["SoundStreamHead" [parse-SoundStreamHead ]]
19 ["SoundStreamBlock" [parse-MP3STREAMSOUNDDATA]]
;19 ["SoundStreamBlock" [write/binary/append %/j/test/t.mp3 skip tag-bin 4]]
;19 ["SoundStreamBlock" [print [length? tag-bin to-integer tag-bin-part/rev 2 to-integer tag-bin-part/rev 2 length? tag-bin] probe tag-bin]]
20 ["DefineBitsLossless" [parse-DefineBitsLossless]]
21 ["DefineBitsJPEG2" [parse-DefineBitsJPEG2]]
26 ["PlaceObject2" [probe tag-bin parse-PlaceObject2]]
28 ["RemoveObject2" ]
34 ["DefineButton2" [parse-DefineButton2]]
36 ["DefineBitsLossless2" [parse-DefineBitsLossless]]
37 ["DefineEditText" [parse-DefineEditText]]
39 ["DefineSprite" [parse-sprite]]
40 ["SWT-CharacterName" [
print ["ID:" tag-bin-part/rev 2 "=" mold to-string copy/part tag-bin find tag-bin #{00}]
]]
43 ["FrameLabel" [print mold to-string head remove back tail tag-bin]]
45 ["SoundStreamHead2" [print ""]]
46 ["DefineMorphShape" [parse-DefineMorphShape]]
48 ["DefineFont2" [parse-DefineFont2]]
;swf 5
56 ["ExportAssets" [parse-Assets]]
57 ["ImportAssets" [parse-Assets/import]]
58 ["EnableDebugger" [prin "Password:" probe tag-bin]]
;swf 6
59 ["DoInitAction" [print "" parse-ActionRecord/init tag-bin]]
60 ["EmbeddedVideo" [probe tag-bin]]
62 ["DefineFontInfo2" [parse-DefineFontInfo/mx]]
]
tag: length: data: none
indent: 0
;help functions:
getpart: func[bytes /rev /local tmp][
tmp: copy/part swf-bin bytes
swf-bin: skip swf-bin bytes
either rev [reverse tmp][tmp]
]
roundTo: func[val digits /local i d][
either parse mold val [copy i to "." 1 skip copy d to end][
load rejoin [i #"." copy/part d digits]
][ val ]
]
slice-bin: func [
"Slices the binary data into parts whose lengths are specified in the bytes block"
bin [string! binary!]
bytes [block!]
/integers "Returns data as block of integers"
/local tmp b
][
tmp: make block! length? bytes
forall bytes [
b: copy/part bin bytes/1
append tmp either integers [to-integer debase/base refill-bits b 2][b]
bin: skip bin bytes/1
]
tmp
]
extract-data: func[type][
swf/chunk/data: slice-bin/integers swf/chunk/data select swf/chunk-bytes type
]
extend-int: func[num /local i][
i: num // 8
if i > 0 [num: num + 8 - i]
num
]
refill-bits: func[
"When an unsigned bit value is expanded into a larger word size the leftmost bits are filled with zeros."
bits
/local n
][
n: (length? bits) // 8
if n > 0 [
n: 8 - n
insert/dup head bits #"0" n
]
bits
]
;end of help functions
bin-to-decimal: func [
{convert a binary native into a decimal value - also accepts a binary
string representation in the format returned by REAL/SHOW}
in [binary!]
/local sign exponent fraction
][
in: copy in
insert tail in copy/part in 4
in: reverse remove/part in 4
sign: either zero? to integer! (first in) / 128 [1][-1]
exponent: (first in) // 128 * 16 + to integer! (second in) / 16
fraction: to decimal! (second in) // 16
in: skip in 2
loop 6 [
fraction: fraction * 256 + first in
in: next in
]
sign * either zero? exponent [
2 ** -1074 * fraction
][
2 ** (exponent - 1023) * (2 ** -52 * fraction + 1)
]
]
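For cross-checking, the sign / 11-bit exponent / 52-bit fraction split that `bin-to-decimal` performs can be reproduced in R on a plain big-endian byte string (the Rebol code above additionally undoes the SWF 4-byte word swap first; `decode_double` is an illustrative helper, not part of the parser):

```r
# Decode an IEEE-754 double from 8 big-endian bytes, mirroring the
# arithmetic in bin-to-decimal term by term.
decode_double <- function(bytes) {
  b    <- as.integer(bytes)                     # 8 big-endian bytes
  sign <- if (b[1] >= 128) -1 else 1            # bit 63
  expo <- (b[1] %% 128) * 16 + b[2] %/% 16      # 11-bit exponent
  frac <- b[2] %% 16                            # top 4 fraction bits
  for (k in 3:8) frac <- frac * 256 + b[k]      # remaining 48 bits
  if (expo == 0) sign * 2^-1074 * frac          # subnormal branch
  else sign * 2^(expo - 1023) * (2^-52 * frac + 1)
}
decode_double(writeBin(-2.5, raw(), size = 8, endian = "big"))  # -2.5
```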
SI16-to-int: func[i [binary!]][
i: reverse i
i: either #{8000} = and i #{8000} [
negate (32768 - to-integer (and i #{7FFF}))
][to integer! i]
]
UB-to-int: func[
"converts unsigned bits to integer"
bits [string! none!]
][
if none? bits [return 0]
to-integer debase/base refill-bits bits 2
]
SB-to-int: func[
"converts signed bits to integer"
bits [string! binary!]
][
if binary? bits [bits: enbase/base reverse bits 2]
to-integer debase/base head insert/dup bits bits/1 (32 - length? bits) 2
]
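`SB-to-int` sign-extends a bit field by repeating its leftmost bit before conversion; the same rule can be checked in R (`sb_to_int` is an illustrative helper, not part of the parser):

```r
# Sign-extend a "0"/"1" character vector to 32 bits by repeating the
# leftmost (sign) bit, then interpret as two's complement.
sb_to_int <- function(bits) {
  padded <- c(rep(bits[1], 32 - length(bits)), bits)
  u <- sum(as.integer(padded) * 2^(31:0))
  if (u >= 2^31) u - 2^32 else u
}
sb_to_int(strsplit("110", "")[[1]])   # 3-bit field "110" = -2
sb_to_int(strsplit("0110", "")[[1]])  # 4-bit field "0110" = 6
```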
FB-to-int: func [
"converts signed fixed-point bit value to integer"
bits [string!]
/local s p x y d i
][
s: either bits/1 = #"1" [-1][1]
bits: copy next bits
p: (length? bits) - 16
parse bits [copy x p skip copy y to end]
if none? x [x: ""]
if none? y [y: "0"]
d: to-integer (UB-to-int y) / .65535
i: load rejoin [UB-to-int x "." d]
if s = -1 [
i: -1 * either (i // 1) = 0 [i][((to-integer i) + (1 - (i // 1)))]
]
i
]
bin-to-int: func[bin][to-integer reverse bin]
str-to-int: func[str][bin-to-int to-binary str]
get-rect: func[
"Parses Rectangle Record => returns block: [xmin xmax ymin ymax]"
bin [string! binary!]
/integers "returns values converted to integers"
/local nbits rect
][
nbits: to-integer debase/base (refill-bits copy/part bin 5) 2
skip-val: 5 + (4 * nbits)
rect: slice-bin (skip bin 5) reduce [nbits nbits nbits nbits]
if integers [forall rect [rect/1: SB-to-int rect/1] rect: head rect]
rect
]
tabs: has [t][t: make string! indent insert/dup t tab indent t]
ind-: does [indent: indent - 1]
ind+: does [indent: indent + 1]
tag-bin-part: func[bytes /rev /twips "Converts the result to number in twips" /local tmp][
tmp: copy/part tag-bin bytes
tag-bin: copy skip tag-bin bytes
either rev [
reverse tmp
][ either twips [(to-integer reverse tmp) / 20][tmp]]
]
get-count: func["Gets the count value from tag-bin (used in some tags)" /local c][
c: tag-bin-part 1
to-integer either c = #{FF} [tag-bin-part/rev 2][c]
]
parse-Assets: func[ /import /local assets file id name][
assets: make block! 6
ind+
either import [
parse/all tag-bin [
copy file to #"^@" 3 skip
some [
copy id 2 skip copy name to #"^@" 1 skip
(append assets reduce [bin-to-int to-binary id name])
]
]
assets: reduce [file assets]
print ["ImportingAssets" mold assets/2 "from" assets/1]
][
parse/all tag-bin [
2 skip
some [
copy id 2 skip copy name to #"^@" 1 skip
(append assets reduce [bin-to-int to-binary id name])
]
]
print ["ExportingAssets" mold assets]
]
ind-
assets
]
parse-MP3STREAMSOUNDDATA: func[][
ind+
print [tabs "SampleCount:" to-integer tag-bin-part/rev 2]
parse-MP3SOUNDDATA
ind-
]
parse-MP3SOUNDDATA: func[/local frames][
ind+
;probe tag-bin
print [tabs "SeekSamples:" SI16-to-int x: tag-bin-part 2]
frames: 0
while [not empty? tag-bin ][
frames: frames + 1
parse-MP3FRAME
]
print [tabs "frames:" frames]
ind-
]
parse-MP3FRAME: func[
/local crc sdsize tmp
Syncword MpegVersion Layer ProtectionBit
Bitrate SamplingRate PaddingBit Reserved
ChannelMode ModeExtension Copyright Original Emphasis
][
ind+
xxx: tmp: enbase/base tag-bin-part 4 2
set [
Syncword MpegVersion Layer ProtectionBit
Bitrate SamplingRate PaddingBit Reserved
ChannelMode ModeExtension Copyright Original Emphasis
] yyy: slice-bin/integers tmp [11 2 2 1 4 2 1 1 2 2 1 1 2]
comment {
Bitrate: pick either MpegVersion = 3 [
["free" 32 40 48 56 64 80 96 112 128 160 192 224 256 320 "bad"]
][
["free" 8 16 24 32 40 48 56 64 80 96 112 128 144 160 "bad"]
] (1 + Bitrate)
SamplingRate: pick switch MpegVersion [
3 [[44100 48000 32000 "--"]]
2 [[22050 24000 16000 "--"]]
0 [[11025 12000 8000 "--"]]
] (1 + SamplingRate)
sdsize: to-integer ((((either MpegVersion = 3 [144][72]) * Bitrate * 1000) / SamplingRate) + PaddingBit - 4)
}
Bitrate: pick (switch layer either MpegVersion = 3 [[
3 [[32 64 96 128 160 192 224 256 288 320 352 384 416 448]]
2 [[32 48 56 64 80 96 112 128 160 192 224 256 320 384]]
1 [[32 40 48 56 64 80 96 112 128 160 192 224 256 320]]
]][[
3 [[32 48 56 64 80 96 112 128 144 160 176 192 224 256]]
2 [[ 8 16 24 32 40 48 56 64 80 96 112 128 144 160]]
1 [[ 8 16 24 32 40 48 56 64 80 96 112 128 144 160]]
]]) Bitrate
SamplingRate: pick switch MpegVersion [
3 [[44100 48000 32000 "--"]]
2 [[22050 24000 16000 "--"]]
0 [[11025 12000 8000 "--"]]
] (1 + SamplingRate)
;sdsize: to-integer ((((either MpegVersion = 3 [144][72]) * Bitrate * 1000) / SamplingRate) + PaddingBit - 4)
;comment {
sdsize: either MpegVersion = 3 [ ;version 1
((( either layer = 3 [48000][144000]) * bitrate) / SamplingRate) + PaddingBit - 4
][
((( either layer = 3 [24000][72000]) * bitrate) / SamplingRate) + PaddingBit - 4
]
;}
;comment {
print [tabs
"MpegVersion:" pick [2.5 "" 2 1] (1 + MpegVersion)
"Layer:" pick ["" "III" "II" "I"] (1 + Layer)
"Protected by CRC:" ProtectionBit = 1
]
print [tabs
"Bitrate:" Bitrate
"SamplingRate:" SamplingRate
"PaddingBit:" PaddingBit = 1
]
print [tabs
"ChannelMode:" pick ["Stereo" "Joint stereo (Stereo)" "Dual channel" "Single channel (Mono)"] (1 + ChannelMode)
"Copyright:" Copyright = 1
"Original:" Original = 1
"Emphasis:" pick [none "50/15 ms" "" "CCIT J.17"] (1 + Emphasis)
]
print [tabs "SampleDataSize:" sdsize]
;}
if ProtectionBit = 0 [crc: tag-bin-part 2]
data: tag-bin-part to integer! sdsize
;probe length? tag-bin
ind-
]
parse-SoundStreamHead: func[][
ind+
flags: enbase/base tag-bin-part 2 2
print [tabs "Flags:" mold flags]
set [
Reserved psRate psSize psType
StreamSoundCompression StreamSoundRate StreamSoundSize StreamSoundType
] slice-bin/integers flags [4 2 1 1 4 2 1 1]
StreamSoundSampleCount: to-integer tag-bin-part/rev 2
print [tabs "PlaybackSoundRate:" pick [5.5 11 22 44] (1 + psRate) "kHz"]
print [tabs "PlaybackSoundSize:" pick ["snd8Bit" "snd16Bit"] (1 + psSize)]
print [tabs "PlaybackSoundType:" pick ["sndMono" "sndStereo"] (1 + psType)]
print [tabs "StreamSoundCompression:" pick ["uncompressed" "ADPCM" "MP3" "uncompressed little-endian" "" "" "Nellymoser"] (1 + StreamSoundCompression)]
print [tabs "StreamSoundRate:" pick [5.5 11 22 44] (1 + StreamSoundRate) "kHz"]
print [tabs "StreamSoundSize:" pick ["snd8Bit" "snd16Bit"] (1 + StreamSoundSize)]
print [tabs "StreamSoundType:" pick ["sndMono" "sndStereo"] (1 + StreamSoundType)]
print [tabs "StreamSoundSampleCount:" StreamSoundSampleCount]
ind-
]
parse-defineSound: func[/local flags sID sFormat sRate sSize sType][
ind+ print ""
print [tabs "Sound ID:" sID: tag-bin-part/rev 2]
flags: enbase/base tag-bin-part 1 2
print [tabs "Flags:" mold flags]
set [sFormat sRate sSize sType] slice-bin/integers flags [4 2 1 1]
print [tabs "SoundFormat:" pick ["uncompressed" "ADPCM" "MP3" "uncompressed little-endian" "" "" "Nellymoser"] (1 + sFormat)]
print [tabs "SoundRate: " pick [5.5 11 22 44] (1 + sRate) "kHz"]
print [tabs "SoundSize: " pick ["snd8Bit" "snd16Bit"] (1 + sSize)]
print [tabs "SoundType: " pick ["sndMono" "sndStereo"] (1 + sType)]
print [tabs "SoundSampleCount:" to-integer tag-bin-part/rev 4]
print [tabs "SoundData:" length? tag-bin "bytes"]
;if sFormat = 2 [write/binary rejoin [%/j/test/mp3_ sID ".mp3"] tag-bin]
switch sFormat [
0 [ write/binary %/j/test/x.wav tag-bin]
2 [ parse-MP3SOUNDDATA]
]
ind-
]
parse-startSound: func[][
ind+ print ""
print [tabs "Sound ID:" tag-bin-part/rev 2]
probe flags: enbase/base tag-bin-part 1 2
ind-
]
parse-DefineBitsJPEG2: func[][
ind+
print ""
print [tabs "Bitmap ID:" tag-bin-part/rev 2]
;write/binary %/e/jpg.test tag-bin
ind-
]
parse-DefineBitsLossless: func[/local tmp][
tmp: make block! 5
insert tmp tag-bin-part/rev 2 ;id
insert tail tmp tag-bin-part 1 ;BitmapFormat
insert tail tmp to-pair reduce [
to-integer tag-bin-part/rev 2
to-integer tag-bin-part/rev 2
] ;size
insert tail tmp tag-bin-part 1 ;BitmapColorTableSize
;probe ( zlib/decompress tag-bin)
tmp
]
parse-DefineMorphShape: func[/local i end-bin][
ind+
print "" ;tag-bin
print [tabs "Char ID:" tag-bin-part/rev 2]
print [tabs "Rect start:" mold get-rect/integers enbase/base tag-bin 2]
tag-bin: skip tag-bin (extend-int skip-val) / 8
print [tabs "Rect end :" mold get-rect/integers enbase/base tag-bin 2]
tag-bin: skip tag-bin (extend-int skip-val) / 8
print [tabs "Offset:" i: to-integer tag-bin-part/rev 4]
end-bin: copy skip tag-bin i
print [tabs "MorphFillStyles:" i: get-count]
loop i [parse-MORPHFILLSTYLE]
print [tabs "MorphLineStyles:" i: get-count]
ind+
loop i [
print [tabs "StartWidth:" to-integer tag-bin-part/rev 2]
print [tabs "EndWidth :" to-integer tag-bin-part/rev 2]
print [tabs "StartColor:" to-tuple tag-bin-part 4]
print [tabs "EndColor :" to-tuple tag-bin-part 4]
]
ind-
print [tabs "StartEdges:"]
ind+ parse-SHAPE ind-
print [tabs "EndEdges:"]
tag-bin: end-bin
ind+ parse-SHAPE ind-
ind-
]
parse-MORPHFILLSTYLE: func[/local type i][
ind+
print [tabs "Type:" type: tag-bin-part 1]
if type = #{00} [
print [tabs "StartColor:" to-tuple tag-bin-part 4]
print [tabs "EndColor :" to-tuple tag-bin-part 4]
]
if type = #{10} [
print [tabs "StartGradientMatrix:" parse-matrix]
print [tabs "EndGradientMatrix :" parse-matrix]
print [tabs "Gradients:" i: tag-bin-part 1]
ind+
loop i [
print [tabs "StartRatio:" tag-bin-part 1]
print [tabs "StartColor:" to-tuple tag-bin-part 4]
print [tabs "EndRatio :" tag-bin-part 1]
print [tabs "EndColor :" to-tuple tag-bin-part 4]
]
ind-
]
if find #{4041} type [
print [tabs "BitmapId:" tag-bin-part/rev 2]
print [tabs "StartBitmapMatrix:" parse-matrix]
print [tabs "EndBitmapMatrix :" parse-matrix]
]
ind-
]
parse-DefineButton2: func[/local tmp ofs key menu? bshapes bactions][
ind+
obj-id: to-integer tag-bin-part/rev 2
menu?: #{01} = tag-bin-part 1
;Offset to the first Button2ActionCondition
ofs: to-integer tag-bin-part/rev 2
;print [tabs "Offset:" ofs]
ofs: either ofs = 0 [(length? tag-bin) - 1][ofs - 3]
bshapes: parse-BUTTONRECORD tag-bin-part ofs
tag-bin-part 1 ;ButtonEndFlag = #{00}
bactions: make block! []
if not empty? tag-bin [
while [not tail? tag-bin][
ofs: to-integer tag-bin-part/rev 2
parse (enbase/base tag-bin-part/rev 2 2) [
copy key 7 skip copy tmp to end
]
st: make block! []
either menu? [
if tmp/1 = #"1" [insert st 'DragOut]
if tmp/2 = #"1" [insert st 'DragOver]
][
if tmp/3 = #"1" [insert st 'ReleaseOutside]
if tmp/4 = #"1" [insert st 'DragOver]
]
if tmp/5 = #"1" [insert st 'DragOut]
if tmp/6 = #"1" [insert st 'Release]
if tmp/7 = #"1" [insert st 'Press]
if tmp/8 = #"1" [insert st 'RollOut]
if tmp/9 = #"1" [insert st 'RollOver]
k: to-char ub-to-int key
if k <> #"^@" [insert st compose [key (k)]]
append/only bactions st
tmp: tag-bin-part either ofs = 0 [length? tag-bin][ofs - 4]
;first 7bits are reserved
append/only bactions parse-ActionRecord tmp
]
ind-
]
ind-
compose/deep [shapes [(bshapes)] actions [(bactions)]]
]
parse-BUTTONRECORD: func[bin /local buff tmp states][
buff: copy tag-bin tag-bin: copy bin
brecords: make block! 8
while [not tail? tag-bin][
tmp: copy skip (enbase/base tag-bin-part 1 2) 4
states: make block! 4
repeat i 4 [
if tmp/:i = #"1" [insert states pick [hit down over up] i]
]
append/only brecords states
ButtonCharacter: to-integer tag-bin-part/rev 2
ButtonLayer: to-integer tag-bin-part/rev 2
matrix: parse-matrix
parse-CXFORMWITHALPHA
repend/only brecords [ButtonCharacter ButtonLayer matrix]
]
tag-bin: buff
brecords
]
parse-DefineFont2: func[/local flags NameLen glyphs ofsTable wideOfs ofsFST
FontShapeTable
][
ind+
print ""
;probe tag-bin
print [tabs "Font ID:" tag-bin-part/rev 2]
flags: enbase/base tag-bin-part 1 2
print [tabs "Flags:" flags]
print [tabs "LanguageCode:" to-integer tag-bin-part 1]
NameLen: to-integer tag-bin-part 1
print [tabs "Name:" to-string tag-bin-part NameLen]
print [tabs "Glyphs:" glyphs: to-integer tag-bin-part/rev 2]
wideOfs: either flags/5 = #"1" [4][2]
OffsetTable: make block! glyphs
ofs: to-integer tag-bin-part/rev wideOfs ;offset to the first glyph in the shapeTable
loop (glyphs - 1) [
append OffsetTable (to-integer tag-bin-part/rev wideOfs) - ofs
]
;print [tabs "OffsetTable:" ofsTable: tag-bin-part (glyphs * wideOfs)]
print [tabs "CodeOffset:" codeOffset: to-integer tag-bin-part/rev wideOfs]
append OffsetTable codeOffset - ofs
print [tabs "OffsetTable:" OffsetTable]
GlyphShapeTable: make block! glyphs
last-ofs: 0
foreach ofs OffsetTable [
append GlyphShapeTable tag-bin-part (ofs - last-ofs)
last-ofs: ofs
]
;ofsFST: codeOffset - (length? ofsTable) - wideOfs
;parse-SHAPE
;FontShapeTable: tag-bin-part ofsFST
;print [tabs "FontShapeTable:" length? FontShapeTable]
FontCodeTable: tag-bin-part (glyphs * wideOfs)
;print [tabs "FontCodeTable:" mold FontCodeTable]
if flags/1 = #"1" [
print [tabs "FontAscent:" SB-to-int tag-bin-part 2]
print [tabs "FontDescent:" SB-to-int tag-bin-part 2]
print [tabs "FontLeading:" SB-to-int tag-bin-part 2]
FontAdvanceTable: make block! glyphs
loop glyphs [
append FontAdvanceTable SB-to-int tag-bin-part 2
]
print [tabs "FontAdvanceTable:" mold FontAdvanceTable]
FontBoundsTable: make block! glyphs
loop glyphs [
append/only FontBoundsTable get-rect/integers enbase/base copy/part tag-bin 16 2
tag-bin: skip tag-bin (extend-int skip-val) / 8
]
print [tabs "FontBoundsTable:" mold FontBoundsTable]
print [tabs "KerningCount:" to-integer tag-bin-part/rev 2]
]
if not empty? tag-bin [
print [tabs "..." tag-bin]
]
ind-
]
parse-defineFont: func[/local id OffsetTable ofs GlyphShapeTable last-ofs][
ind+
print ""
print [tabs "Font ID:" id: tag-bin-part 2]
ofs: to-integer tag-bin-part/rev 2
OffsetTable: make block! ofs / 2
print [tabs "Glyphs: " ofs / 2]
loop (ofs / 2) - 1 [
append OffsetTable (to-integer tag-bin-part/rev 2) - ofs
]
append OffsetTable length? tag-bin
;print [tabs "OffsetTable:" mold OffsetTable]
GlyphShapeTable: make block! (ofs / 2)
last-ofs: 0
foreach ofs OffsetTable [
append GlyphShapeTable tag-bin-part (ofs - last-ofs)
last-ofs: ofs
]
;forall GlyphShapeTable [
; tag-bin: copy first GlyphShapeTable
; parse-SHAPE
;]
;print [tabs "...:" tag-bin]
ind-
return reduce [id GlyphShapeTable]
]
parse-DefineFontInfo: func[/mx /local nameLen flags CodeTable][
ind+
print ""
print [tabs "Font ID:" tag-bin-part 2]
NameLen: to-integer tag-bin-part 1
print [tabs "Name:" to-string tag-bin-part NameLen]
print [tabs "Flags:" flags: enbase/base tag-bin-part 1 2]
if mx [
print [tabs "LanguageCode:" to-integer tag-bin-part 1]
]
CodeTable: tag-bin
print [tabs "Glyphs in CodeTable:" (length? CodeTable) / 2]
ind-
]
parse-defineText: func[/local flags][
ind+
print ""
print [tabs "Text ID:" tag-bin-part/rev 2]
print [tabs "Rect:" mold get-rect/integers enbase/base tag-bin 2]
tag-bin: skip tag-bin (extend-int skip-val) / 8
print [tabs "Matrix:" tag-bin] parse-matrix
print [tabs "NglyphBits:" NglyphBits: to-integer tag-bin-part 1]
print [tabs "NadvanceBits:" NadvanceBits: to-integer tag-bin-part 1]
print [tabs "TextRecords:" tag-bin]
ind+
while [#{00} <> flags: tag-bin-part 1][
flags: enbase/base flags 2
either flags/1 = #"1" [
;Text Style Change Record
if flags/5 = #"1" [
print [tabs "TextFontID:" tag-bin-part/rev 2 ]
]
if flags/6 = #"1" [
print [tabs "TextColor:" to-tuple tag-bin-part either tagid = 11 [3][4]]
]
if flags/7 = #"1" [
print [tabs "TextXOffset:" to-integer tag-bin-part/rev 2 ]
]
if flags/8 = #"1" [
print [tabs "TextYOffset:" to-integer tag-bin-part/rev 2 ]
]
if flags/5 = #"1" [
print [tabs "TextHeight:" (to-integer tag-bin-part/rev 2) / 20 ]
]
][
;Glyph Record
print [tabs "TextGlyphCount:" nGlyphs: ub-to-int copy next flags]
bytes: (extend-int (nGlyphs * (NglyphBits + NadvanceBits))) / 8
bits: enbase/base tag-bin-part bytes 2
parse bits [any [
copy i NglyphBits skip
copy j NadvanceBits skip
(print [tabs "GlyphEntry:" ub-to-int i sb-to-int j])
]]
]
]
ind-
ind-
]
parse-DefineEditText: func[/local flags bits rect InitialText var][
ind+
print ""
probe tag-bin
print [tabs "TextID:" tag-bin-part/rev 2]
bits: enbase/base tag-bin 2
rect: get-rect bits
bits: skip bits extend-int (5 + (4 * length? rect/1))
forall rect [rect/1: SB-to-int rect/1] rect: head rect
print [tabs "Bounds:" rect]
flags: copy/part bits 16
tag-bin: load rejoin ["2#{" skip bits 16 "}"]
print [tabs "Flags:" flags]
ind+
if flags/8 = #"1" [
print [tabs "HasFont:" tag-bin-part/rev 2]
print [tabs "FontHeight:" tag-bin-part/twips 2]
]
if flags/6 = #"1" [print [tabs "TextColor:" tag-bin-part 4]]
if flags/7 = #"1" [print [tabs "MaxLength:" to-integer tag-bin-part/rev 2]]
if flags/11 = #"1" [
;HasLayout
print [tabs "Align:" pick ['left 'right 'center 'justify] 1 + to-integer tag-bin-part 1]
print [tabs "LeftMargin:" tag-bin-part/twips 2]
print [tabs "RightMargin:" tag-bin-part/twips 2]
print [tabs "Indent:" tag-bin-part/twips 2]
print [tabs "Leading:" tag-bin-part/twips 2]
]
ind-
parse/all tag-bin [copy var to #"^@" 1 skip InitialText: [to #"^@" | to end]]
print [tabs "VariableName:" var]
if flags/1 = #"1" [print [tabs "InitialText:" InitialText]]
probe debase/base bits 2
ind-
]
parse-PlaceObject: func[][
ind+
print "";tag-bin
print [tabs "CharID:" tag-bin-part 2]
print [tabs "Depth:" tag-bin-part 2]
parse-matrix
ind-
]
parse-RemoveObject: func[][
ind+
print "";tag-bin
print [tabs "CharID:" tag-bin-part 2]
print [tabs "Depth:" tag-bin-part 2]
ind-
]
parse-PlaceObject2: func[/local flags depth CharacterID r MXflags tmp][
ind+
print "";tag-bin
set [flags depth] slice-bin tag-bin-part 3 [1 2]
flags: enbase/base flags 2
print [tabs "Flags:" flags]
print [tabs "Depth:" to-integer reverse depth]
print [tabs either flags/8 = #"1" ["Character is already in the list"]["Placing new character"]]
if flags/7 = #"1" [
print [tabs "CharacterID:" to-integer tag-bin-part/rev 2]
]
if flags/6 = #"1" [
print [tabs "Matrix:" ]
parse-matrix
]
if flags/5 = #"1" [
print [tabs "ColorTransform:" tag-bin]
parse-CXFORM
]
if flags/4 = #"1" [
print [tabs "Ratio:" r: to-integer tag-bin-part/rev 2 rejoin ["( " roundTo (r / 65535 * 100) 2 "% )"]]
]
if flags/2 = #"1" [
print [tabs "ClipDepth:" to-integer tag-bin-part/rev 2]
]
if flags/3 = #"1" [
print [tabs "Name:" mold to-string tmp: copy/part tag-bin find tag-bin #{00}]
tag-bin: skip tag-bin 1 + length? tmp
]
if flags/1 = #"1" [
print [tabs "ClipActions:" tag-bin]
something: tag-bin-part 2
print [tabs "Flags:" flags: enbase/base tag-bin-part 2 2]
either swf/header/version > 5 [
probe something: tag-bin-part 2
while [#{00000000} <> type: tag-bin-part 4][
print [tabs "On" select [
#{01000000} "Load"
#{02000000} "EnterFrame"
#{04000000} "Unload"
#{10000000} "MouseDown"
#{20000000} "MouseUp"
#{08000000} "MouseMove"
#{40000000} "KeyDown"
#{80000000} "KeyUp"
#{00010000} "Data"
#{00040000} "Press"
#{00080000} "Release"
#{00100000} "ReleaseOutside"
#{00200000} "RollOver"
#{00400000} "RollOut"
#{00800000} "DragOver"
#{00000200} "Key"
] type ;"(" type enbase/base type 2")"
]
ofs: to-integer tag-bin-part/rev 4
tmp: tag-bin-part ofs
if type = #{00000200} [
print [tabs "<" select [
#{01} "keyLeft"
#{02} "keyRight"
#{03} "keyHome"
#{04} "keyEnd"
#{05} "keyInsert"
#{06} "keyDelete"
#{08} "keyBackspace"
#{0D} "keyEnter"
#{0E} "keyUp"
#{0F} "keyDown"
#{10} "keyPageUp"
#{11} "keyPageDown"
#{12} "keyTab"
#{13} "keyEscape"
#{20} "keySpace"
] copy/part tmp 1 ">"
]
tmp: next tmp
]
parse-ActionRecord tmp
]
][
while [#{0000} <> type: tag-bin-part 2][
print [tabs "On" select [
#{0100} "Load"
#{0200} "EnterFrame"
#{0400} "Unload"
#{1000} "MouseDown"
#{2000} "MouseUp"
#{0800} "MouseMove"
#{4000} "KeyDown"
#{8000} "KeyUp"
#{0001} "Data"
] type
]
ofs: to-integer tag-bin-part/rev 4
parse-ActionRecord tag-bin-part ofs
]
]
]
ind-
]
parse-CXFORM: func[/local bits flags nBits v1 v2 v3][
ind+
bits: enbase/base copy tag-bin 2
flags: copy/part bits 2
bits: skip bits 2
nBits: UB-to-int copy/part bits 4
bits: skip bits 4
used-bits: 6
if flags/2 = #"1" [
parse bits [copy v1 nBits skip copy v2 nBits skip copy v3 nBits skip copy bits to end]
print [tabs "Multiply:" SB-to-int v1 SB-to-int v2 SB-to-int v3]
used-bits: used-bits + (3 * nBits)
]
if flags/1 = #"1" [
parse bits [copy v1 nBits skip copy v2 nBits skip copy v3 nBits skip copy bits to end]
print [tabs "Addition:" SB-to-int v1 SB-to-int v2 SB-to-int v3]
used-bits: used-bits + (3 * nBits)
]
tag-bin: skip tag-bin (extend-int used-bits) / 8
ind-
]
parse-CXFORMWITHALPHA: func[/local bits flags nBits v1 v2 v3 v4][
ind+
bits: enbase/base copy tag-bin 2
flags: copy/part bits 2
bits: skip bits 2
nBits: UB-to-int copy/part bits 4
bits: skip bits 4
used-bits: 6
if flags/2 = #"1" [
parse bits [copy v1 nBits skip copy v2 nBits skip copy v3 nBits skip copy v4 nBits skip copy bits to end]
print [tabs "Multiply:" SB-to-int v1 SB-to-int v2 SB-to-int v3 SB-to-int v4]
used-bits: used-bits + (4 * nBits)
]
if flags/1 = #"1" [
parse bits [copy v1 nBits skip copy v2 nBits skip copy v3 nBits skip copy v4 nBits skip copy bits to end]
print [tabs "Addition:" SB-to-int v1 SB-to-int v2 SB-to-int v3 Sb-to-int v4]
used-bits: used-bits + (4 * nBits)
]
tag-bin: skip tag-bin (extend-int used-bits) / 8
ind-
]
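The SB-to-int and UB-to-int helpers used throughout this parser decode SWF signed/unsigned bit fields from bit strings. A minimal Python sketch of the same idea (function names are mine, not from this script): an SB value is two's complement in however many bits are present, so the leading bit is the sign bit.

```python
def sb_to_int(bits: str) -> int:
    """Decode a signed big-endian bit string (SWF 'SB' field).

    The value is two's complement in len(bits) bits,
    so the leading bit is the sign bit.
    """
    if not bits:
        return 0
    value = int(bits, 2)
    if bits[0] == "1":              # sign bit set -> negative
        value -= 1 << len(bits)     # two's-complement adjustment
    return value

def ub_to_int(bits: str) -> int:
    """Decode an unsigned bit string (SWF 'UB' field)."""
    return int(bits, 2) if bits else 0
```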
parse-matrix: func[
/local bits used-bits val
ScaleX ScaleY RotateSkew0 RotateSkew1 TranslateX TranslateY matrix
][
bits: enbase/base copy tag-bin 2
used-bits: 7
matrix: make block! []
parse bits [
[
#"0" ;(tabs print "Has no scale")
|
#"1"
copy val 5 skip (val: UB-to-int val)
copy ScaleX val skip
copy ScaleY val skip
(
append matrix compose/deep [scale [(FB-to-int ScaleX) (FB-to-int ScaleY)]]
used-bits: used-bits + 5 + (2 * val)
)
]
[
#"0" ;(print "Has no rotation")
|
#"1"
copy val 5 skip (val: UB-to-int val)
copy RotateSkew0 val skip
copy RotateSkew1 val skip
(
append matrix compose/deep [rotate [(FB-to-int RotateSkew0) (FB-to-int RotateSkew1)]]
used-bits: used-bits + 5 + (2 * val)
)
]
copy val 5 skip (val: UB-to-int val)
copy TranslateX val skip
copy TranslateY val skip
(
if val = 0 [TranslateX: TranslateY: "0"]
append matrix compose/deep [at (to-pair reduce [SB-to-int TranslateX SB-to-int TranslateY])]
used-bits: used-bits + (2 * val)
)
to end
]
tag-bin: skip tag-bin (extend-int used-bits) / 8
matrix
]
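The FB-to-int calls above decode the matrix scale and rotate/skew entries. Per the SWF specification, an FB value is a signed fixed-point number with 16 fraction bits; a self-contained Python sketch (name is mine):

```python
def fb_to_float(bits: str) -> float:
    """Decode an SWF 'FB' signed fixed-point bit field.

    Two's complement in len(bits) bits, with 16 fraction bits,
    so dividing by 2**16 yields the real value.
    """
    value = int(bits, 2)
    if bits[0] == "1":              # sign bit set -> negative
        value -= 1 << len(bits)
    return value / 65536.0
```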
parse-sprite: func[/local id frames tags][
tags: make block! 100
id: bin-to-int tag-bin-part 2 ;sprite ID
frames: bin-to-int tag-bin-part 2 ;FrameCount
foreach-tag tag-bin [insert tail tags reduce [tag data]]
foreach [id bin] tags [
tag-bin: copy bin
;probe reduce [id tag-bin]
;if id = 26 [probe parse-placeObject2]
]
return reduce [id frames tags]
]
parse-DefineShape: func [/local shapes][
ind+
obj-id: to-integer tag-bin-part/rev 2
obj-rect: get-rect/integers enbase/base tag-bin 2
tag-bin: skip tag-bin (extend-int skip-val) / 8
shapes: parse-SHAPE/WITHSTYLE
ind-
shapes
]
parse-SHAPE: func[/withstyle /local fills bits NumBits points b shapes linestyles][
linestyles: make block! 10
shapes: make block! 10
if withstyle [
parse-FILLSTYLEARRAY
linestyles: parse-LINESTYLEARRAY
]
NumBits: slice-bin/integers (enbase/base tag-bin-part 1 2) [4 4]
;print [tabs "NumFillBits:" NumBits/1]
;print [tabs "NumLineBits:" NumBits/2]
bits: enbase/base copy tag-bin 2
d-pos: 0x0 ;drawing position
d-type: none
points: make block! []
add-point: func[x y /curve /move /local type][
either move [
d-type: none type: none
d-pos: to-pair reduce [x y]
][
type: either curve ['curve]['line]
d-pos: to-pair reduce [
d-pos/1 + x
d-pos/2 + y
]
]
if all [d-type <> type not none? type][
if not none? d-type [
insert tail points last points
]
insert back tail points type
d-type: type
]
append points d-pos
d-pos
]
next-shape: func[][
append/only shapes copy points
clear points
d-type: none
]
while ["000000" <> copy/part bits 6 ][
either bits/1 = #"0" [
;STYLECHANGERECORD
states: next copy/part bits 6
bits: skip bits 6
;print [tabs "States:" states]
if states/5 = #"1" [
;Move bit count
MoveBits: UB-to-int copy/part bits 5
bits: skip bits 5
;print [tabs "MoveBits:" MoveBits]
MoveDeltaX: SB-to-int copy/part bits MoveBits
bits: skip bits MoveBits
MoveDeltaY: SB-to-int copy/part bits MoveBits
;print [tabs "MoveX:" MoveDeltaX / 20]
;print [tabs "MoveY:" MoveDeltaY / 20]
bits: skip bits MoveBits
]
if states/4 = #"1" [
;Fill style 0 change flag
FillStyle0: UB-to-int copy/part bits NumBits/1
bits: skip bits NumBits/1
;print [tabs "FillStyle0:" FillStyle0]
]
if states/3 = #"1" [
;Fill style 1 change flag
FillStyle1: UB-to-int copy/part bits NumBits/1
bits: skip bits NumBits/1
;print [tabs "FillStyle1:" FillStyle1]
]
if states/2 = #"1" [
;Line style change flag
LineStyle: UB-to-int copy/part bits NumBits/2
bits: skip bits NumBits/2
if states/1 <> #"1" [
append points either LineStyle = 0 [
[no edge]
][
st: linestyles/:LineStyle
compose/deep [edge [width (st/1) color (to-tuple st/2)]]
]
]
]
either states/1 <> #"1" [
add-point/move MoveDeltaX MoveDeltaY
][
;New styles flag
next-shape
print "NEW STYLES"
b: length? bits
bits: refill-bits copy bits
b: (length? bits) - b
tag-bin: debase/base bits 2
if b > 0 [tag-bin-part 1]
parse-FILLSTYLEARRAY
linestyles: parse-LINESTYLEARRAY
NumBits: slice-bin/integers (enbase/base tag-bin-part 1 2) [4 4]
;print [tabs "NumFillBits:" NumBits/1]
;print [tabs "NumLineBits:" NumBits/2]
bits: enbase/base copy tag-bin 2
]
][
;Edge Records
bits: next bits
Straight?: bits/1 = #"1"
bits: next bits
ind+
NBits: 2 + UB-to-int copy/part bits 4
bits: skip bits 4
;print [tabs "NBits:" NBits]
either Straight? [
;StraightFlag
;print [tabs "StraightFlag"]
LineFlag: bits/1
bits: next bits
either LineFlag = #"1" [
DeltaX: SB-to-int copy/part bits NBits
bits: skip bits NBits
DeltaY: SB-to-int copy/part bits NBits
bits: skip bits NBits
][
vertFlag?: #"1" = bits/1
bits: next bits
either vertFlag? [
DeltaY: SB-to-int copy/part bits NBits
DeltaX: 0
][
DeltaX: SB-to-int copy/part bits NBits
DeltaY: 0
]
bits: skip bits NBits
]
;print [tabs "X-Y:" DeltaX / 20 DeltaY / 20]
add-point DeltaX DeltaY
][
;print [tabs "CurvedFlag"]
CDeltaX: SB-to-int copy/part bits NBits
bits: skip bits NBits
CDeltaY: SB-to-int copy/part bits NBits
bits: skip bits NBits
add-point/curve CDeltaX CDeltaY
;print [tabs "Control X-Y:" CDeltaX / 20 CDeltaY / 20 ]
ADeltaX: SB-to-int copy/part bits NBits
bits: skip bits NBits
ADeltaY: SB-to-int copy/part bits NBits
bits: skip bits NBits
add-point/curve ADeltaX ADeltaY
;print [tabs "Anchor X-Y:" ADeltaX / 20 ADeltaY / 20 ]
]
ind-
]
]
next-shape
shapes
]
parse-FILLSTYLEARRAY: func[/local fills type color][
fills: get-count
;print [tabs "FillStyleCount:" fills ]
if fills > 0 [
ind+
loop fills [
type: tag-bin-part 1
;print [tabs "FillStyleType:" type]
if type = #{00} [
color: tag-bin-part either tagid = 32 [4][3]
;print [tabs "Color:" color]
]
if found? find #{1012} type [
;print [tabs "Gradient matrix:" ]
parse-matrix
i: to-integer tag-bin-part 1
;print [tabs "NumGradients:" i]
loop i [
print [tabs "Ratio:" to-integer tag-bin-part 1]
print [tabs "Color:" to-tuple tag-bin-part either tagid = 32 [4][3]]
]
]
if found? find #{4041} type [
print [tabs either type = #{40} ["tiled"]["clipped"] "bitmap" tag-bin-part/rev 2]
print [tabs "Bitmap matrix:" ] parse-matrix
]
]
ind-
]
]
parse-LINESTYLEARRAY: func[][
lines: get-count
linestyles: make block! lines
if tag-bin/1 > 0 [
ind+
loop lines [
append/only linestyles parse-LINESTYLE
]
ind-
]
linestyles
]
parse-LINESTYLE: func[/local width rgb][
width: bin-to-int tag-bin-part 2
either tagid = 32 [
rgb: tag-bin-part 4
][ rgb: tag-bin-part 3 ]
;print [tabs "width:" width "RGB:" rgb]
reduce [width rgb]
]
ConstantPool: make block! []
parse-ActionRecord: func[bin-data /init /local vals cp str pstr word dec reg logic i32 ofs undefined data codesize unknown][
ind+
;probe bin-data
if init [
print [tabs "For sprite:" copy/part bin-data 2]
bin-data: skip bin-data 2
]
actions: make block! []
aparsers: [
"ActionGetURL" [
print [tabs aname mold parse/all data "^@"]
]
"ActionConstantPool" [
ConstantPool: next parse/all data "^@"
print [tabs aname data mold ConstantPool]
]
"ActionIf" [
ofs: sb-to-int data
either ofs < 0 [
print [tabs aname data "(" ofs ")"]
][
print [tabs aname]
parse-ActionRecord bin-part ofs
]
]
"ActionDefineFunction" [
vals: make block! []
set [data codeSize] slice-bin data reduce [(length? data) - 2 2]
parse/all data [str word any [str]]
print [tabs aname rejoin [vals/1 mold skip vals 2]]
parse-ActionRecord bin-part bin-to-int codeSize
]
"???ActionDefineFunction2" [
vals: make block! []
use [name tmp][
parse/all data [copy name to #"^@" copy tmp to end]
print [tabs aname actionid name data]
set [data unknown codeSize] slice-bin data reduce [2 (length? data) - 4 2]
]
parse-ActionRecord bin-part bin-to-int codeSize
]
"ActionPush" [
vals: make block! []
parse/all data [some [cp | i32 | dec | pstr | logic | reg | null | undefined]]
print [tabs aname data mold vals]
]
]
cp: ["^H" copy v 1 skip
(append vals pick ConstantPool v: 1 + str-to-int v)
]
i32: ["^G" copy v 4 skip
(append vals v: str-to-int v)
]
pstr: ["^@" copy v to "^@" 1 skip
(append vals v)
]
logic: ["^E" copy v 1 skip
(append vals pick [false true] 1 + str-to-int v)
]
null: ["^B" ( append vals 'null )]
undefined: ["^C" (append vals 'undefined )]
dec: ["^F" copy v 8 skip
(append vals bin-to-decimal to-binary v)
]
reg: ["^D" copy v 1 skip
(append vals to-path join "register/" 1 + str-to-int v)
]
str: [copy v to "^@" 1 skip (append vals v) ]
word: [copy v 2 skip (append vals str-to-int v) ]
actionid: none
bin-part: func[bytes][b: copy/part bin-data bytes bin-data: skip bin-data bytes b]
while [all [actionid <> #{00} not empty? bin-data] ][
actionid: bin-part 1
length: to-integer either actionid >= #{80} [reverse bin-part 2][0] ;actions with the high bit set carry a length field
data: bin-part length
aname: select actionids actionid
switch/default aname aparsers [
print [tabs aname actionid data]
]
]
ind-
]
actionids: make block! [
#{00} "END of ActionRecord"
;SWF3 Actions
#{04} "ActionNextFrame"
#{05} "ActionPrevFrame"
#{06} "ActionPlay"
#{07} "ActionStop"
#{08} "ActionToggleQuality"
#{09} "ActionStopSounds"
#{81} "ActionGotoFrame"
#{83} "ActionGetURL"
#{8A} "ActionWaitForFrame"
#{8B} "ActionSetTarget"
#{8C} "ActionGoToLabel"
;Stack Operations
#{96} "ActionPush"
#{17} "ActionPop"
;Arithmetic Operators
#{0A} "ActionAdd"
#{0B} "ActionSubtract"
#{0C} "ActionMultiply"
#{0D} "ActionDivide"
;Numerical Comparison
#{0E} "ActionEquals"
#{0F} "ActionLess"
;Logical Operators
#{10} "ActionAnd"
#{11} "ActionOr"
#{12} "ActionNot"
;String Manipulation
#{13} "ActionStringEquals"
#{14} "ActionStringLength"
#{21} "ActionStringAdd"
#{15} "ActionStringExtract"
#{29} "ActionStringLess"
#{31} "ActionMBStringLength"
#{35} "ActionMBStringExtract"
;Type Conversion
#{18} "ActionToInteger"
#{32} "ActionCharToAscii"
#{33} "ActionAsciiToChar"
#{36} "ActionMBCharToAscii"
#{37} "ActionMBAsciiToChar"
;Control Flow
#{99} "ActionJump"
#{9D} "ActionIf"
#{9E} "ActionCall"
;Variables
#{1C} "ActionGetVariable"
#{1D} "ActionSetVariable"
;Movie Control
#{9A} "ActionGetURL2"
#{9F} "ActionGotoFrame2"
#{20} "ActionSetTarget2"
#{22} "ActionGetProperty"
#{23} "ActionSetProperty"
#{24} "ActionCloneSprite"
#{25} "ActionRemoveSprite"
#{27} "ActionStartDrag"
#{28} "ActionEndDrag"
#{8D} "ActionWaitForFrame2"
;Utilities
#{26} "ActionTrace"
#{34} "ActionGetTime"
#{30} "ActionRandomNumber"
;SWF 5
;ScriptObject Actions
#{3D} "ActionCallFunction"
#{52} "ActionCallMethod"
#{88} "ActionConstantPool"
#{9B} "ActionDefineFunction"
#{3C} "ActionDefineLocal"
#{41} "ActionDefineLocal2"
#{43} "ActionDefineObject" ;this was not in the specification!
#{3A} "ActionDelete"
#{3B} "ActionDelete2"
#{46} "ActionEnumerate"
#{49} "ActionEquals2"
#{4E} "ActionGetMember"
#{42} "ActionInitArray/Object"
#{53} "ActionNewMethod"
#{40} "ActionNewObject"
#{4F} "ActionSetMember"
#{45} "ActionTargetPath"
#{94} "ActionWith"
;Type Actions
#{4A} "ActionToNumber"
#{4B} "ActionToString"
#{44} "ActionTypeOf"
;Math Actions
#{47} "ActionAdd2"
#{48} "ActionLess2"
#{3F} "ActionModulo"
;Stack Operator Actions
#{60} "ActionBitAnd"
#{63} "ActionBitLShift"
#{61} "ActionBitOr"
#{64} "ActionBitRShift"
#{65} "ActionBitURShift"
#{62} "ActionBitXor"
#{51} "ActionDecrement"
#{50} "ActionIncrement"
#{4C} "ActionPushDuplicate"
#{3E} "ActionReturn"
#{4D} "ActionStackSwap"
#{87} "ActionStoreRegister"
;flashMX Actions
#{54} "ActionInstanceOf"
#{55} "ActionEnumerate2"
#{66} "ActionStrictEqual"
#{67} "ActionGreater"
#{68} "ActionStringGreater"
;flashMX2004 Actions ( guessing )
#{2A} "ActionThrow"
#{8E} "???ActionDefineFunction2"
#{8F} "ActionTry"
]
parse-swf-header: func[/local sig nbits rect version length tmp][
sig: stream-part 3
either sig <> #{465753} [
either sig = #{435753} [
print ["This file is compressed Flash MX file!"]
swf/header/version: to-integer stream-part 1
swf/header/length: to-integer stream-part/rev 4
tmp: copy swf-stream
error? try [close swf-stream]
if error? try [
swf-stream: copy to-binary decompress tmp
][
print "Cannot decompress the data :("
halt
]
][
print "Illegal swf header!"
close swf-stream
halt
]
][
swf/header/version: to-integer stream-part 1
swf/header/length: to-integer stream-part/rev 4
]
swf-buff: stream-part 1
nbits: to-integer debase/base (refill-bits copy/part (enbase/base swf-buff 2) 5) 2
insert tail swf-buff stream-part (((extend-int (5 + (4 * nbits))) / 8) - 1)
rect: slice-bin (skip enbase/base swf-buff 2 5) reduce [nbits nbits nbits nbits]
forall rect [rect/1: SB-to-int rect/1]
swf/header/frame-size: head rect
swf/header/frame-rate: to-integer stream-part 2
swf/header/frame-count: to-integer stream-part/rev 2
]
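parse-swf-header above reads the stage RECT that follows the 8-byte signature/version/length header: 5 bits give the field width, then four signed fields of that width, all in twips (1/20 px). A Python sketch of the same decode, assuming the helpers and layout described in the SWF specification:

```python
def parse_swf_rect(data: bytes):
    """Decode an SWF RECT from raw bytes.

    Layout: 5 bits 'nbits', then four signed nbits-wide fields
    (xmin, xmax, ymin, ymax), all in twips (1/20 px).
    """
    bits = "".join(f"{b:08b}" for b in data)
    nbits = int(bits[:5], 2)
    fields = []
    pos = 5
    for _ in range(4):
        chunk = bits[pos:pos + nbits]
        value = int(chunk, 2) if chunk else 0
        if chunk and chunk[0] == "1":   # two's-complement sign bit
            value -= 1 << nbits
        fields.append(value)
        pos += nbits
    return fields  # [xmin, xmax, ymin, ymax] in twips
```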
foreach-tag: func[bin action /local t][
bind action 'tag
while [not tail? bin][
t: copy/part bin 2
bin: skip bin 2
set [tag length] slice-bin (enbase/base (reverse t) 2) [10 6]
tag: to-integer debase/base refill-bits tag 2
length: to-integer debase/base refill-bits length 2
;print [tag length]
if length = 63 [length: to-integer reverse copy/part bin 4 bin: skip bin 4]
data: copy/part bin length
bin: skip bin length
do action
]
]
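foreach-tag splits each tag's 16-bit RECORDHEADER into a 10-bit tag code and a 6-bit length, falling back to a 32-bit length when the short length is 63. The same decoding in Python, as I understand the SWF spec (helper name is mine):

```python
import struct

def read_tag_header(buf: bytes, pos: int):
    """Decode one SWF RECORDHEADER at pos.

    The 16-bit little-endian value packs the tag code in the
    upper 10 bits and a short length in the lower 6; a short
    length of 0x3F means a 32-bit little-endian length follows.
    Returns (tag_code, length, offset_of_tag_body).
    """
    (code_and_len,) = struct.unpack_from("<H", buf, pos)
    tag = code_and_len >> 6
    length = code_and_len & 0x3F
    pos += 2
    if length == 0x3F:              # long form
        (length,) = struct.unpack_from("<I", buf, pos)
        pos += 4
    return tag, length, pos
```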
stream-part: func[bytes /rev /twips "Converts the result to number in twips" /local tmp][
tmp: copy/part swf-stream bytes
if binary? swf-stream [swf-stream: skip swf-stream bytes]
either rev [
reverse tmp
][ either twips [(to-integer reverse tmp) / 20][tmp]]
]
foreach-stream-tag: func[ action /local t rh-length][
bind action 'rh-length
while [all [not none? t: stream-part 2 not empty? t]][
rh-length: 2
set [tag length] slice-bin (enbase/base (reverse t) 2) [10 6]
tag: to-integer debase/base refill-bits tag 2
length: to-integer debase/base refill-bits length 2
if length = 63 [rh-length: 6 length: to-integer stream-part/rev 4]
data: either length > 0 [copy stream-part length][make binary! 0 ]
do action
]
]
show-info: make block! [
tagid: tag
use [ta][
ta: select swf-tags tag
either found? ta [
prin rejoin [tabs ta/1 "(" tagid "): "]
either none? ta/2 [
print [tag length data]
][
tag-bin: data
do ta/2
]
][
print [tabs tag length data]
]
]
;if tag = 12 [parse-ActionRecord data]
]
sysprint: get in system/words 'print
sysprin: get in system/words 'prin
set 'exam-swf func[
"Examines SWF file structure"
/file swf-file [file! url!] "the SWF source file"
/quiet "No visible output"
/store "If you want to store parsed tags in the swf/data block"
/local f info err
][
;--------[ global variables ]----------
swf: make object! [
header: make object! [
version: none
length: none
frame-size: make block! []
frame-rate: none
frame-count: none
]
rect: none
data: make block! 10
]
obj-id: 0
indent: 0
used-bits: 0
skip-val: none ;how many bits i'll have to skip
if none? swf-file [
swf-file: either empty? swf-file: ask "SWF file:" [%new.swf][
either "http://" = copy/part swf-file 7 [to-url swf-file][to-file swf-file]
]
]
if not exists? swf-file [
f: join swf-file ".swf"
either exists? f [swf-file: f][print ["Cannot find the file" swf-file "!"]]
]
swf-stream: open/direct/read/binary swf-file
if quiet [
prin: print: func[str][reduce str]
]
if error? err: try [
prin "Searching the binary file... "
parse-swf-header
print "-------------------------"
probe swf/header
info: make block! either store [[repend/only swf/data [tag length data]]][[]]
foreach-stream-tag append info show-info
print: :sysprint
prin: :sysprin
][
print: :sysprint
prin: :sysprin
if port? swf-stream [close swf-stream]
throw err
]
if port? swf-stream [close swf-stream]
swf
]
|
|
####### Final plots
source("https://raw.githubusercontent.com/Cdevenish/R-Material/master/Functions/w.xls.r")
## Data from Thompson_method scripts
load("sightings.rdata")
years
dataX <- data.frame(years = years, PXt, Basis = "All records")
head(dataX)
load("specimen.rdata")
dataX <- rbind(dataX,data.frame(years = years, PXt, Basis = "Specimens"))
colnames(dataX) <- c("years", "sd_lwr", "PXt", "sd_upr", "PXt_min", "PXt_max", "Basis")
head(dataX)
w.xls(dataX)
library(ggplot2)
ggplot(dataX, aes(x = years, y = PXt, col= Basis))+
geom_ribbon(aes(ymin=sd_lwr, ymax = sd_upr, fill = Basis), linetype = 0, alpha = 0.4)+
geom_ribbon(aes(ymin=PXt_min, ymax = PXt_max, fill = Basis), linetype = 0, alpha = 0.2)+
scale_color_manual(values = c("black", "darkred"))+
scale_fill_manual(aesthetics = "fill", values = c("grey10", "grey50"))+
geom_line(aes(linetype= Basis), size = 1)+
ylim(c(0,1))+
xlab("Years")+
scale_x_continuous(breaks = seq(1810, 2030,10))+
ylab("Probability that taxon is extant")+
theme_light()+
theme(legend.position = c(0.1,0.15))
ggsave("risk_2_on_1.png", width = 200, height = 150, units = "mm")
|
|
#' Scales for area or radius
#'
#' `scale_size()` scales area, `scale_radius()` scales radius. The size
#' aesthetic is most commonly used for points and text, and humans perceive
#' the area of points (not their radius), so this provides for optimal
#' perception. `scale_size_area()` ensures that a value of 0 is mapped
#' to a size of 0. `scale_size_binned()` is a binned version of `scale_size()` that
#' scales by area (but does not ensure 0 equals an area of zero). For a binned
#' equivalent of `scale_size_area()` use `scale_size_binned_area()`.
#'
#' @name scale_size
#' @inheritParams continuous_scale
#' @inheritParams binned_scale
#' @param range a numeric vector of length 2 that specifies the minimum and
#' maximum size of the plotting symbol after transformation.
#' @seealso [scale_size_area()] if you want 0 values to be mapped
#' to points with size 0.
#' @examples
#' p <- ggplot(mpg, aes(displ, hwy, size = hwy)) +
#' geom_point()
#' p
#' p + scale_size("Highway mpg")
#' p + scale_size(range = c(0, 10))
#'
#' # If you want zero value to have zero size, use scale_size_area:
#' p + scale_size_area()
#'
#' # Binning can sometimes make it easier to match the scaled data to the legend
#' p + scale_size_binned()
#'
#' # This is most useful when size is a count
#' ggplot(mpg, aes(class, cyl)) +
#' geom_count() +
#' scale_size_area()
#'
#' # If you want to map size to radius (usually a bad idea), use scale_radius
#' p + scale_radius()
NULL
#' @rdname scale_size
#' @export
#' @usage NULL
scale_size_continuous <- function(name = waiver(), breaks = waiver(), labels = waiver(),
limits = NULL, range = c(1, 6),
trans = "identity", guide = "legend") {
continuous_scale("size", "area", area_pal(range), name = name,
breaks = breaks, labels = labels, limits = limits, trans = trans,
guide = guide)
}
#' @rdname scale_size
#' @export
scale_size <- scale_size_continuous
#' @rdname scale_size
#' @export
scale_radius <- function(name = waiver(), breaks = waiver(), labels = waiver(),
limits = NULL, range = c(1, 6),
trans = "identity", guide = "legend") {
continuous_scale("size", "radius", rescale_pal(range), name = name,
breaks = breaks, labels = labels, limits = limits, trans = trans,
guide = guide)
}
#' @rdname scale_size
#' @export
scale_size_binned <- function(name = waiver(), breaks = waiver(), labels = waiver(),
limits = NULL, range = c(1, 6), n.breaks = NULL,
nice.breaks = TRUE, trans = "identity", guide = "bins") {
binned_scale("size", "area_b", area_pal(range), name = name,
breaks = breaks, labels = labels, limits = limits, trans = trans,
n.breaks = n.breaks, nice.breaks = nice.breaks, guide = guide)
}
#' @rdname scale_size
#' @export
#' @usage NULL
scale_size_discrete <- function(...) {
warn("Using size for a discrete variable is not advised.")
scale_size_ordinal(...)
}
#' @rdname scale_size
#' @export
#' @usage NULL
scale_size_ordinal <- function(..., range = c(2, 6)) {
force(range)
discrete_scale(
"size",
"size_d",
function(n) {
area <- seq(range[1] ^ 2, range[2] ^ 2, length.out = n)
sqrt(area)
},
...
)
}
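The ordinal palette above spaces the *squared* sizes evenly and takes square roots, so legend steps are equal in area rather than radius. A Python sketch of that interpolation (names and range default are mine, mirroring the R code):

```python
import math

def ordinal_size_palette(n, size_range=(2.0, 6.0)):
    """Interpolate n point sizes linearly in area, return radii.

    Equal perceptual steps come from equal area steps, so we
    space the squared sizes evenly and take square roots,
    as scale_size_ordinal does.
    """
    lo, hi = size_range
    if n == 1:
        return [lo]
    step = (hi * hi - lo * lo) / (n - 1)
    return [math.sqrt(lo * lo + i * step) for i in range(n)]
```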
#' @inheritDotParams continuous_scale -aesthetics -scale_name -palette -rescaler
#' @param max_size Size of largest points.
#' @export
#' @rdname scale_size
scale_size_area <- function(..., max_size = 6) {
continuous_scale("size", "area",
palette = abs_area(max_size),
rescaler = rescale_max, ...)
}
#' @export
#' @rdname scale_size
scale_size_binned_area <- function(..., max_size = 6) {
binned_scale("size", "area_b",
palette = abs_area(max_size),
rescaler = rescale_max, ...)
}
#' @rdname scale_size
#' @export
#' @usage NULL
scale_size_datetime <- function(..., range = c(1, 6)) {
datetime_scale("size", "time", palette = area_pal(range), ...)
}
#' @rdname scale_size
#' @export
#' @usage NULL
scale_size_date <- function(..., range = c(1, 6)) {
datetime_scale("size", "date", palette = area_pal(range), ...)
}
|
|
#load library
library(MethComp)
#import data
df<-read.csv("gatti.csv", header = TRUE, sep = ";", dec= ",", stringsAsFactors = FALSE)
#
# #ignore columns
# df<-df[,1:22]
#
# head(df)
# str(df)
# #delete record number 99
# df<-df[-c(393:396),]
#
# df_Meth<-Meth(df, 2,1,3,7)
#
# plot(df_Meth)
#convert data to Meth object type and save pdf with meth plot
pdf()
for(i in c(4:22)){
df_Meth<-Meth(df, 2,1,3,i)
plot(df_Meth)
title(main=list(colnames(df[i]), col="red"), line=3, adj=0.45)
print(i)
}
dev.off()
#Bland-Altman plot
BA.plot(df_Meth, repl.conn=FALSE, meth.names = TRUE)
#passing bablok regression
reg<-PBreg(df_Meth)
print(reg)
plot(reg,subtype= 1, xlim = c(0,50), ylim= c(0,50))
#concordance correlation coefficient: agreement = (target method measurements, test method measurements)
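BA.plot above draws a Bland-Altman plot of the two methods. The underlying summary statistics are simple; a Python sketch of the standard computation (function name and data are illustrative, not from this script):

```python
from statistics import mean, stdev

def bland_altman(x, y):
    """Limits-of-agreement summary behind a Bland-Altman plot.

    x, y: paired measurements from two methods. Returns the mean
    difference (bias) and the 95% limits of agreement,
    bias +/- 1.96 * SD of the paired differences.
    """
    diffs = [a - b for a, b in zip(x, y)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```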
|
|
#'@title calcEF_PM
#'
#'@description
#'Calculates the appropriate particulate matter (PM10 or PM2.5) emission factor
#'(g/kWh) for the given parameters.
#'
#'@param engineType Engine type (vector of strings) (see
#'\code{\link{calcEngineType}}). Valid values are: \itemize{
#'\item "SSD"
#'\item "MSD"
#'\item "MSD-ED"
#'\item "GT"
#'\item "GT-ED"
#'\item "ST"
#'\item "LNG"
#'\item "HSD" (auxiliary only)
#'\item "Boiler" (boiler only)
#'}
#'@param location Location of vessel (vector of strings). Valid values are:
#'\itemize{
#' \item "ECA"
#' \item "OutsideECA"
#' \item "GreatLakes"
#' }
#'@param loadFactor Fractional percentage (between 0 and 1) of main engine
#' required to propel vessel at given speed (vector of numericals) (see
#' ShipPowerModel library). This parameter is optional. By default, it is not
#' used and the resulting emission factor is independent of engine load.
#'@param ECAfuelSulfurPercentage Fuel sulfur cap (percentage by weight) for the
#'Emissions Control Area (ECA). Default = 0.1\% (in effect Jan. 1, 2015)
#'@param GlobalfuelSulfurPercentage Fuel sulfur cap (percentage by weight) for
#'outside the Emissions Control Area (ECA). Default = 0.5\% (in effect Jan. 1,
#'2020)
#'@param pmSize Indicates whether output is for PM10 or PM2.5. Valid values are:
#'\itemize{\item"pm10" \item"pm2.5"}
#'@param main_aux_boiler Is this calculation for a propulsive (main), auxiliary
#'(aux), or boiler engine? Options: \itemize{
#' \item "main" (Default)
#' \item "aux"
#' \item "boiler"
#'}
#'
#'@details
#'Location is important for determining the fuel being used, as fuel sulfur
#'requirements and type of fuel typically used vary by location.
#'
#'For more information about calculating PM emission factors, see Section 3.5.3
#'of the Port Emissions Inventory Guidance.
#'
#'@return \code{EF_PM} (g/kWh) (vector of numericals)
#'
#'@references
#' \href{https://nepis.epa.gov/Exe/ZyPDF.cgi?Dockey=P10102U0.pdf}{EPA. 2020.
#' "Ports Emissions Inventory Guidance: Methodologies for Estimating
#' Port-Related and Goods Movement Mobile Source Emissions." Ann Arbor, MI:
#' Office of Transportation and Air Quality. US Environmental Protection Agency.}
#'
#'@seealso \itemize{
#' \item \code{\link{calcEngineType}}
#' \item ShipPowerModel library
#'}
#'
#'@examples
#'calcEF_PM(engineType = c("SSD","MSD","MSD-ED","SSD"),
#' location = c("ECA","OutsideECA","GreatLakes","ECA"),
#' loadFactor = c(0.02,0.3,0.8,1),
#' pmSize = "pm2.5",
#' main_aux_boiler = "main")
#'calcEF_PM(engineType = c("SSD","MSD","MSD-ED","SSD"),
#' location = c("ECA","OutsideECA","GreatLakes","ECA"),
#' loadFactor = NULL,
#' pmSize = "pm2.5",
#' main_aux_boiler = "main")
#'calcEF_PM(engineType = c("HSD","MSD","LNG"),
#' location = c("ECA","ECA","OutsideECA"),
#' pmSize = "pm10",
#' main_aux_boiler = "aux")
#'calcEF_PM(engineType = c("MSD","Boiler"),
#' location = c("ECA","OutsideECA"),
#' pmSize = "pm10",
#' main_aux_boiler = "boiler")
#'
#'@import data.table
#'@importFrom utils data
#'@importFrom utils tail
#'@importFrom stats weighted.mean
#'@export
calcEF_PM<-function(engineType,
location,
loadFactor=NULL,
ECAfuelSulfurPercentage=0.1,
GlobalfuelSulfurPercentage=0.5,
pmSize="pm10",
main_aux_boiler="main"
)
{
#bind variables to make devtools::check() happy
MainBSFC<-MainFuelMixTable<-AuxBSFC<-AuxFuelMixTable<-BoilerBSFC<-BoilerFuelMixTable<-
BSFC_LoadFactor<-BSFC<-pm10<-PMnom<-fuelSulfurLevel<-
FSC<-MWR<-pm10EF<-.<-Proportion<-NULL
#Read In Emission Factor DataFrames
if(main_aux_boiler=="main"){
EFBSFC<-ShipEF::MainBSFC
fuelMixTable<-ShipEF::MainFuelMixTable
}else if(main_aux_boiler=="aux"){
EFBSFC<-ShipEF::AuxBSFC
fuelMixTable<-ShipEF::AuxFuelMixTable
}else if(main_aux_boiler=="boiler"){
EFBSFC<-ShipEF::BoilerBSFC
fuelMixTable<-ShipEF::BoilerFuelMixTable
}
EFSulfurEQCoefficients<-ShipEF::EFSulfurEQCoefficients
EF_PM_ST_GT_LNG<-ShipEF::EF_PM_ST_GT_LNG
#==================================================================
#EngineType doesn't matter for boilers
if(main_aux_boiler=="boiler"){engineType<-rep("Boiler",length(engineType))}
#join, fuel sulfur coefficients, with fuel mix table, and BSFC table
EF<-EFSulfurEQCoefficients[fuelMixTable[EFBSFC,
on=c("fuelType","engineType"),
allow.cartesian=TRUE],
on=c("fuelType"),
allow.cartesian=TRUE]
# create table mapping location to fuel sulfur percentage and join to the EF table
EF<-data.table::data.table(Location=c("ECA","OutsideECA","GreatLakes"),
fuelSulfurLevel=c(ECAfuelSulfurPercentage,GlobalfuelSulfurPercentage,ECAfuelSulfurPercentage)
)[EF, on=c("Location")]
#Create Table of Input Values for engine type and location
df<-data.table::data.table(ID=seq(1,length(engineType)),
engineType=engineType,
Location=location)
#add load factor to inputs table
#Use BSFC Baseline for auxiliary and boiler emission factors (by setting loadFactor to NA)
if(main_aux_boiler!="main"|is.null(loadFactor)){
df[,loadFactor:=NA]
}else{df[,loadFactor:=loadFactor]}
#join inputs to emission factor table (Adds ID, and load Factor)
EF<-EF[df,
on=c("engineType","Location"),
allow.cartesian=TRUE]
#Add BSFC_LoadFactor using BSFC and Load Factor, Using Jalkanen 2012/IMO GHG 3 Method
EF[,BSFC_LoadFactor:=calcLoadBSFC(loadFactor = loadFactor,
BSFC_Baseline = BSFC)
]
#Calculate EFs for MSD and SSD engines
# NOTE: This equation has been modified from the C3 RIA equation to appropriately respond to changing the BSFC
EF<-EF[,pm10:=PMnom + fuelSulfurLevel*BSFC_LoadFactor*FSC*MWR*0.0001]
#Fills In Emission Factors for Non-Calculated EFs
EF<-EF_PM_ST_GT_LNG[EF, on=c("engineType", "fuelType")]
EF<-EF[is.na(pm10EF)==FALSE, pm10:=pm10EF]
#Calculate Emission Factors Weighted By Engine Type and Location Fuel Use Proportions
EF<-EF[,.(pm10=weighted.mean(pm10,w=Proportion),
pm2.5=weighted.mean(0.92*pm10,w=Proportion)
),by=c("ID","Location","engineType")]
if(tolower(pmSize)=="pm10"){
return(EF[,c("pm10"),with=FALSE])
}else if(tolower(pmSize)=="pm2.5"){
return(EF[,c("pm2.5"),with=FALSE])
}else{
stop("pmSize must be 'pm10' or 'pm2.5'")
}
}
|
|
#Goal 1: resolve the NA handling and whether to sort ascending or descending
#Goal 2: apply to all variables
#clear memory
rm(list=ls())
gc()
library("data.table")
#load the data
dataset <- fread("201902.txt")
#create a class variable that is 1 when the class is BAJA+2, and 0 otherwise
#this simplifies the calculations
dataset[ , clase01:= as.numeric(clase_ternaria=="BAJA+2") ]
#create a random variable that will be useful later
#initialize the random number generator
set.seed( 102191 )
dataset[ , azar := runif(nrow(dataset)) ]
#basic calculations
universo <- nrow(dataset )
pos_total <- sum(dataset$clase01 )
neg_total <- universo - pos_total
#-------------------------------------------------------------
# Create a function to automate the plotting
graficar_init = function()
{
#basic calculations
universo <- nrow(dataset )
pos_total <- sum(dataset$clase01 )
neg_total <- universo - pos_total
#the diagonal
azar_neg <- c( 0, neg_total )
azar_pos <- c( 0, pos_total )
#plot
plot( azar_neg,
azar_pos,
type="n",
main=paste( "ROC Curve" ),
xlab="neg",
ylab="pos",
pch=19)
lines( azar_neg, azar_pos, type="l" , col="black", lwd=2)
}
#----------------------
columna_graficar = function(dataset, pcolumna )
{
#basic calculations
universo <- nrow(dataset )
pos_total <- sum(dataset$clase01 )
neg_total <- universo - pos_total
#sort by <pcolumna, azar>
univar <- dataset[ order(get(pcolumna), na.last=FALSE, azar), c("clase01", pcolumna), with=FALSE]
neg_acum <- cumsum( 1- univar$clase01 )
pos_acum <- cumsum( univar$clase01 )
lines( neg_acum, pos_acum, type="l" , col="red", lwd=2)
AUC_vector <- ( pos_acum*neg_acum + (pos_acum+pos_total)*(neg_total-neg_acum) ) / (2*pos_total*neg_total)
return( list( "variable"= pcolumna,
"valor" = univar[ which.max( AUC_vector ), get(pcolumna)],
"AUC_max" = max( AUC_vector)
) )
}
#----------------------
graficar_init()
columna_graficar( dataset, "Visa_cuenta_estado" )
#The first records are the NAs
#The long segment that later crosses the random diagonal corresponds to the value 10
#The last short segment rising to (1,1) corresponds to the values { 11, 12, 19 }
columna_metricas = function(pcolumna, dataset)
{
#basic calculations
universo <- nrow(dataset )
pos_total <- sum(dataset$clase01 )
neg_total <- universo - pos_total
pos_na <- sum( dataset[ is.na( get(pcolumna) ), clase01 ], na.rm=TRUE )
neg_na <- sum( 1- dataset[ is.na( get(pcolumna) ), clase01 ], na.rm=TRUE )
va_ultimo <- pos_na/( pos_na + neg_na + 1 ) < pos_total/neg_total
#sort ascending by <pcolumna, azar>
univar <- dataset[ order(get(pcolumna), na.last=va_ultimo, azar), c("clase01", pcolumna), with=FALSE]
neg_acum <- cumsum( 1- univar$clase01 )
pos_acum <- cumsum( univar$clase01 )
gan_acum <- 19500*pos_acum - 500*neg_acum
AUC_vector <- ( pos_acum*neg_acum + (pos_acum+pos_total)*(neg_total-neg_acum) ) / (2*pos_total*neg_total)
AUC_creciente_max <- max( AUC_vector)
gan_creciente_max <- max( gan_acum )
#sort DEscending by <pcolumna, azar>
univar <- dataset[ order(-get(pcolumna), na.last=va_ultimo, azar), c("clase01", pcolumna), with=FALSE]
neg_acum <- cumsum( 1- univar$clase01 )
pos_acum <- cumsum( univar$clase01 )
gan_acum <- 19500*pos_acum - 500*neg_acum
AUC_vector <- ( pos_acum*neg_acum + (pos_acum+pos_total)*(neg_total-neg_acum) ) / (2*pos_total*neg_total)
AUC_decreciente_max <- max( AUC_vector)
gan_decreciente_max <- max( gan_acum )
return( list( "columna" = pcolumna,
"AUC_max" = pmax( AUC_creciente_max, AUC_decreciente_max) ,
"gan_max" = pmax( gan_creciente_max, gan_decreciente_max)
)
)
}
#----------------------
columna_metricas( "mcuentas_saldo", dataset )
metricas <- lapply( colnames( dataset) , columna_metricas, dataset )
metricas <- rbindlist( metricas )
metricas <- metricas[ order( -AUC_max ), ]
metricas[ 1:10, ]
metricas <- metricas[ order( -gan_max ), ]
metricas[ 1:10, ]
columna_metricas( "ttarjeta_visa", dataset )
#------------------------------------------
columna_graficar_ganancia_n = function(dataset, pcolumna, pcantidad )
{
#basic calculations
universo <- nrow(dataset )
pos_total <- sum(dataset$clase01 )
neg_total <- universo - pos_total
#sort by <pcolumna, azar>
univar <- dataset[ order(get(pcolumna), na.last=FALSE, azar), c("clase01", pcolumna), with=FALSE]
#accumulate positives and negatives, vectorized operation
neg_acum <- cumsum( 1- univar$clase01 )
pos_acum <- cumsum( univar$clase01 )
gan_acum <- 19500*pos_acum - 500*neg_acum
#plot
plot( seq(pcantidad),
gan_acum[1:pcantidad],
type="n",
main=paste( "Ganancia ordenado por", pcolumna ),
xlab="registros",
ylab="Ganancia",
pch=19)
lines( seq(pcantidad), gan_acum[1:pcantidad], type="l" , col="blue", lwd=2)
return( list( "variable"= pcolumna,
"valor" = univar[ which.max( gan_acum ), get(pcolumna)],
"gan_max" = max( gan_acum),
"regis" = which.max( gan_acum )
)
)
}
#---------------------
columna_graficar_ganancia_n( dataset, "tmovimientos_ultimos90dias", 35000 )
columna_graficar_ganancia_n( dataset, "mcuentas_saldo", 1275.59 )
#And now the contingency table
ftable(dataset[ tmovimientos_ultimos90dias <= 20, clase_ternaria])
ftable(dataset[ tmovimientos_ultimos90dias > 20, clase_ternaria])
|
|
lesmis <- read.delim("lesmis.dat",header = F,sep = ",")
polbooks <- read.delim("polbooks.dat", header = F, sep = ",")
#d <- data.frame((lesmis[,1] + 1), (lesmis[,2] + 1))
d <- data.frame((polbooks[,1] + 1), (polbooks[,2] + 1))
n <- max(d) #No. of vertices
a <- matrix(0,n,n,dimnames = list(seq_len(n),seq_len(n)))
#Creating adjacency matrix from given list
for (i in 1:nrow(d)){
x <- d[i,1]
y <- d[i,2]
a[x,y] <- 1
a[y,x] <- 1
}
degree <- rowSums(a)
vertex <- seq(1,n)
show(data.frame(vertex,degree))
avg_degree <- sum(degree)/n
cat(sprintf("\nAverage degree is: %g", avg_degree))
cat(sprintf("\nDensity is: %g", avg_degree/(n-1))) #density = 2m/(n(n-1)) = avg_degree/(n-1)
#Local Clustering coeff (LCC)
LCC <- NULL #LCC of each vertex
id <- NULL #index of neighbours
kC2 <- NULL #Possible connections between neighbours
for(i in 1:n){
k <- 0
kC2 <- 0
nd <- sum(a[i,]) #degree of each vertex
if(nd > 1){
id <- which(a[i,] == 1)
for(j in 1:(nd-1)){
for(l in (j+1):nd){
if(a[id[j],id[l]] == 1){k <- k + 1}
}
}
kC2 <- nd * (nd - 1) / 2
}
lc <- 0
if(kC2 != 0) {lc <- k/kC2}
LCC = c(LCC, lc)
}
#Degree Distribution
dm <- max(degree)
dist <- NULL
for (i in 0:dm) {
dist1 <- length(which(degree==i))/n
dist <- c(dist,dist1)
}
cat(sprintf("\n"))
deg <- seq(0,dm)
show(data.frame(deg,dist))
#Matrix for cosine similarities
m <- a %*% a #Matrix for common neighbors between vertices
sig <- matrix(0,n,n,byrow = T,dimnames = list(seq_len(n),seq_len(n))) #Cosine similarity matrix
for (i in 1:n) {
for (j in 1:n) {
if (i != j) {
sig[i,j] <- m[i,j]/(sqrt(rowSums(a)[i])*sqrt(rowSums(a)[j]))
}
if(i == j){
sig[i,j] <- 1
}
}
}
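The nested loop above can be collapsed into one matrix expression, since `m <- a %*% a` already holds the common-neighbour counts. A sketch assuming a 0/1 adjacency matrix with no isolated vertices:

```r
# Vectorized cosine similarity: common-neighbour counts divided by the
# outer product of sqrt(degree); equivalent to the loop, with the
# diagonal forced to 1 as in the original code.
cosine_sim <- function(a) {
  deg <- rowSums(a)
  s <- (a %*% a) / (sqrt(deg) %o% sqrt(deg))
  diag(s) <- 1
  s
}
# e.g. on a triangle graph every off-diagonal similarity is 1/2
tri <- matrix(1, 3, 3) - diag(3)
cosine_sim(tri)
```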
#Matrix for Katz similarities
I <- diag(1,n)
e <- eigen(a)
alpha <- (1/max(e$values))* 0.7 #alpha < 1/k1 where k1 is the largest eigenvalue of a
ks <- solve(I - (alpha*a)) #Matrix of Katz Similarity
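As a sanity check on the closed form, `solve(I - alpha*a)` should match the Katz power series I + αA + α²A² + … whenever alpha is below 1/k1. A small sketch on a separate 3-node path graph (variables suffixed to avoid clobbering the script's `a` and `alpha`):

```r
# Closed-form Katz similarity vs its truncated power series, on a 3-node path.
a3 <- matrix(c(0,1,0, 1,0,1, 0,1,0), 3, 3)
alpha3 <- 0.9 / max(eigen(a3)$values)    # below 1/k1, so the series converges
ks_closed <- solve(diag(3) - alpha3 * a3)
ks_series <- diag(3); term <- diag(3)
for (k in 1:200) {
  term <- term %*% (alpha3 * a3)
  ks_series <- ks_series + term
}
max(abs(ks_closed - ks_series))          # near zero
```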
|
|
#' @export
on_load_called = 0L
.on_load = function (ns) {
ns$on_load_called = ns$on_load_called + 1L
}
#' @export
register_unload_callback = local({
unloaded = NULL
function (callback) {
unloaded <<- callback
}
}, envir = (callback = new.env()))
.on_unload = function (ns) {
if (! is.null(callback$unloaded)) {
callback$unloaded()
}
}
|
|
#include "AEConfig.h"
#ifndef AE_OS_WIN
#include "AE_General.r"
#endif
resource 'PiPL' (16000) {
{ /* array properties: 7 elements */
/* [1] */
Kind {
AEGP
},
/* [2] */
Name {
"Panelator"
},
/* [3] */
Category {
"General Plugin"
},
/* [4] */
Version {
196608
},
#ifdef AE_OS_WIN
#ifdef AE_PROC_INTELx64
CodeWin64X86 {"EntryPointFunc"},
#else
CodeWin32X86 {"EntryPointFunc"},
#endif
#else
#ifdef AE_OS_MAC
CodeMachOPowerPC {"EntryPointFunc"},
CodeMacIntel32 {"EntryPointFunc"},
CodeMacIntel64 {"EntryPointFunc"},
#endif
#endif
}
};
|
|
#' dmulti_ddirch_forecast.r
#' NOTE: this assumes the predictor "map" is mean annual precipitation in mm, and natural log transforms.
#'
#' @param mod #model output from ddirch fitting function.
#' @param cov_mu #dataframe of site-level covariates. Must include all covariates used in models.
#' @param cov_sd #associated sd's if present.
#' @param glob.covs #global level covariates and standard deviations to fill in missing data.
#' @param n.samp #number of times to sample covariates/parameters for forecast. Default 1000.
#' @param seq.depth #sequencing depth used in the multinomial draw for the predictive interval. Default 1000.
#' @param names #site names used as row names of the returned forecast matrices.
#' @param make_it_work #if TRUE, replaces infinite predictions with the largest finite value.
#' @param zero_parameter_uncertainty #turns off drawing from parameter distributions, keeps parameters fixed at means.
#' @param zero_covariate_uncertainty #turns off drawing from covariate distributions, keeps covariates fixed at means.
#' @param zero_process_uncertainty #turns off process draw from rdirichlet. Basically just makes predictive interval = credible interval.
#'
#' @return #returns a list of forecasts (mean, 95% CI, 95% PI) same length as model list.
#' @export
#'
#' @examples
source('NEFI_functions/precision_matrix_match.r')
dmulti_ddirch_forecast <- function(mod, seq.depth = 1000, cov_mu, names, cov_sd = NA, n.samp = 1000,
zero_parameter_uncertainty = F,
zero_covariate_uncertainty = F,
zero_process_uncertainty = F,
make_it_work = F){
#run some tests.----
if(is.list(mod) == F){
stop("Your model object isn't a list. It really needs to be.")
}
#grab the model out of list
j.mod <- mod$jags_model #separate out the jags model.
#### organize your covariates and covariate standard deviations.----
preds <- as.character(mod$species_parameter_output$other$predictor)
#grab covariates that were predictors.
covs <- cov_mu[,colnames(cov_mu) %in% preds]
#add intercept.
covs <- cbind(rep(1,nrow(covs)), covs)
colnames(covs)[1] <- 'intercept'
#grab uncertainties in sd, if present.
if(is.data.frame(cov_sd)){
cov.sd <- precision_matrix_match(covs, cov_sd)
}
#re-order to match predictor order. Conditional otherwise it breaks the intercept only case.
if(ncol(covs) > 1){
covs <- data.frame(covs[,preds])
if(is.data.frame(cov_sd)){
cov.sd <- data.frame(cov.sd[,preds])
}
}
#### Sample parameter and covariate space, make predictions.----
pred.out <- list()
cred.out <- list()
for(j in 1:n.samp){
#Sample parameters from mcmc output.----
mcmc <- do.call(rbind,j.mod$mcmc)
mcmc.sample <- mcmc[sample(nrow(mcmc),1),]
#if we are fixing parameter uncertainty to zero, do something different.----
if(zero_parameter_uncertainty == T){
mcmc.sample <- colMeans(mcmc)
}
#grab x.m values, convert to matrix.
x.m <- mcmc.sample[grep("^x\\.m\\[", names(mcmc.sample))]
x.m <- matrix(x.m, nrow = ncol(covs), ncol = length(x.m)/ncol(covs))
#Sample from covariate distributions.----
#fix covariate uncertainty? Do this by making sd zero.
if(zero_covariate_uncertainty==T){
cov_sd <- NA
}
#only sample if you supplied uncertainties.
if(is.data.frame(cov_sd)){
now.cov <- matrix(NA, ncol = ncol(covs), nrow = nrow(covs))
for(k in 1:ncol(covs)){now.cov[,k] <- rnorm(nrow(covs),covs[,k], cov.sd[,k])}
colnames(now.cov) <- preds
now.cov <- data.frame(now.cov)
#log transform map values if this is one of your covariates, put covariates back in matrix form.
#anti-logit relEM, multiply by 100 if this is one of your covariates.
if('relEM' %in% colnames(now.cov)){now.cov$relEM <- boot::inv.logit(now.cov$relEM) * 100}
if('map' %in% colnames(now.cov)){now.cov$map <- log(now.cov$map)}
now.cov <- as.matrix(now.cov)
}
#If you did not supply covariate uncertainties then now.cov is just covs.
if(!is.data.frame(cov_sd)){
now.cov <- as.matrix(covs)
colnames(now.cov) <- preds
now.cov <- data.frame(now.cov)
#log transform map values if this is one of your covariates, put covariates back in matrix form.
#anti-logit relEM, multiply by 100 if this is one of your covariates.
if('relEM' %in% colnames(now.cov)){now.cov$relEM <- boot::inv.logit(now.cov$relEM) * 100}
if('map' %in% colnames(now.cov)){now.cov$map <- log(now.cov$map)}
now.cov <- as.matrix(now.cov)
}
#Combine covariates and parameters to make a prediction.----
pred.x.m <- matrix(NA, ncol=ncol(x.m), nrow = nrow(covs))
for(k in 1:ncol(x.m)){pred.x.m[,k] <- exp(now.cov %*% x.m[,k])}
#Sometimes parameter draws generate infinities for groups that didn't fit well. Force it?
if(make_it_work == T){
pred.x.m[pred.x.m == Inf] <- max(pred.x.m[is.finite(pred.x.m)])
}
#get mean prediction and then draw from multinomial-dirichlet distribution.
cred.out[[j]] <- pred.x.m / rowSums(pred.x.m)
#prediction interval passes through Dirichlet and multinomial processes.
dirichlet_out <- DirichletReg::rdirichlet(nrow(pred.x.m), pred.x.m + 0.1)
multinom_out <- list()
for(i in 1:nrow(dirichlet_out)){
multinom_out[[i]] <- t(stats::rmultinom(1,seq.depth,dirichlet_out[i,]))
}
multinom_out <- do.call(rbind, multinom_out)
pred.out[[j]] <- multinom_out/rowSums(multinom_out)
#Turn off process uncertainty? If so pred.out is just cred.out.
if(zero_process_uncertainty == T){
pred.out[[j]] <- cred.out[[j]]
}
}
#Summarize prediction mean and confidence intervals.----
pred.mean <- apply(simplify2array(cred.out), 1:2, mean)
pred.ci.0.025 <- apply(simplify2array(cred.out), 1:2, quantile, probs = c(0.025))
pred.ci.0.975 <- apply(simplify2array(cred.out), 1:2, quantile, probs = c(0.975))
pred.pi.0.025 <- apply(simplify2array(pred.out), 1:2, quantile, probs = c(0.025))
pred.pi.0.975 <- apply(simplify2array(pred.out), 1:2, quantile, probs = c(0.975))
#batch into list, column names are group names, row names are siteIDs.
output <- list(pred.mean, pred.ci.0.025, pred.ci.0.975, pred.pi.0.025, pred.pi.0.975)
for(k in 1:length(output)){
colnames(output[[k]]) <- names(mod$species_parameter_output)
rownames(output[[k]]) <- names
}
names(output) <- c('mean','ci_0.025','ci_0.975','pi_0.025','pi_0.975')
#return output.----
return(output)
} #end function.----
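The predictive interval above passes through Dirichlet and multinomial stages (via `DirichletReg::rdirichlet` and `stats::rmultinom`). A base-R sketch of the same composition, using the standard normalized-gamma construction for the Dirichlet draw, shows why the predictive interval is wider than the credible interval:

```r
# Dirichlet draw via normalized gammas, then a multinomial draw at fixed
# depth; the second stage adds dispersion to the first.
rdirich1 <- function(alpha) { g <- rgamma(length(alpha), alpha); g / sum(g) }
set.seed(3)
alpha <- c(5, 3, 2)          # toy concentration parameters
n <- 5000; depth <- 100      # toy sample count and sequencing depth
p_dir  <- t(replicate(n, rdirich1(alpha)))
p_mult <- t(apply(p_dir, 1,
                  function(p) as.vector(rmultinom(1, depth, p)) / depth))
c(sd(p_dir[, 1]), sd(p_mult[, 1]))   # multinomial stage inflates the spread
```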
|
|
ef <- function(model='Schwab', from=NA, to=NA, efdata=NA, adjdates=NULL, period='months',
annualize=TRUE, addline=TRUE, col='black', lty=1, pch=3) {
## create Efficient Frontier points to assess TWR vs. risk
## model = 'Schwab' uses a blend of US L, US S, Inter, Fixed, and Cash
## = 'SSP' uses a blend of US L and Fixed
## duration defined from one of the following
## addline == FALSE:
## if false, then execution is assumed primarily to obtain output
## for subsequent call with ef(symbols, efdata=efdata, from=xx, to=yy)
## from, to, and period:
## from = end of day to start
## to = end of day to end
## period = 'months' (default), 'days', 'weeks', 'years'
## efdata:
## efdata = output from prior execution of ef
## used to extract twri for benchmark so no call needed to yahoo
## efdata$twri needs to contain incremental twr values for:
## 'SPY', 'IWM', 'EFA', 'AGG', 'SHV'
## where:
## US L = SPY
## US S = IWM (iShares Russell 2000)
## Inter = EFA (iShares MSCI EAFE of large and mid cap
## in developed countries excluding US and Canada)
## Fixed = AGG
## Cash = SHV (iShares Short Treasury Bond, < 1 yr)
## to just get twri data for subsequent use
## twrief <- ef(period='months', addline=FALSE)$twri
## then to create ef line
## ef(model='Schwab',
if (is.na(efdata[1])) {
## efdata is not provided so need to get it
symbol <- c('SPY', 'IWM', 'EFA', 'AGG', 'SHV')
if (isFALSE(addline)) {
## do not worry about dates and just get data for subsequent call to ef()
out <- equity.twri(symbol, period=period)
twri <- na.omit(out)
} else if (is.null(adjdates)) {
out <- equity.history(symbol, from=from, to=to, period=period)
twri <- na.omit( out$twri )
} else {
out <- equity.twri(symbol, adjdates=adjdates)
twri <- na.omit(out)
}
twri_in <- NA
} else {
## efdata is provided so can use it directly
if (class(efdata)[1] == 'xts') {
## input efdata is simply an xts object of twri values
twri_in <- efdata
} else {
## input efdata is full output of prior run of ef()
twri_in <- efdata$twri
}
twri <- twri_in
}
## restrict to duration
if (is.na(from)) from <- zoo::index(twri)[1]
if (is.na(to)) to <- zoo::index(twri)[nrow(twri)]
xtsrange <- paste(noquote(from), '/', noquote(to), sep='')
xtsrange
twri <- twri[xtsrange]
if (model == 'test') {
twri <- head(twri, 4)
twri[1,] <- c(0.1, 0.11, 0.1, 0.1, 0.1)
twri[2,] <- c(0.1, 0.12, 0.1, 0.1, 0.1)
twri[3,] <- c(0.1, 0.13, 0.1, 0.1, 0.1)
twri[4,] <- c(0.1, 0.14, 0.11, 0.1, 0.1)
}
## calculate twrc and standard deviation
## 1st date should have twrc = 0
## twrc_apply <- apply(twri, 2, function(x) { prod(x+1) - 1 }) / (twri[1,] + 1)
twrc <- xts::as.xts( t(t(cumprod(twri+1)) / as.vector(twri[1,]+1) - 1) )
twrcl <- tail(twrc, 1)
std <- apply(twri[2:nrow(twri)], 2, sd)
if (isTRUE(annualize)) {
## calculate annualized standard deviation from monthly standard deviation
## std(xi) = sqrt ( sum(xi-xbar)^2 / N ) for population which seems to be what finance world uses
## If have x = monthly TWR and want std for y = yearly TWR
## then set F * std(x) = std(y) and solve for F.
## If assume yi = 12*xi and give credit for the fact that 1 yearly entry is from 12 measurements,
## then can use Ny = 12*Nm and F = sqrt(12).
## std( TWR_monthly - avg_TWR_monthly )
std <- std * 12^0.5
## average annual return
days.held <- as.numeric(as.Date(to) - as.Date(from))
twrcl <- (1 + twrcl)^(365.25 / days.held) - 1
}
## define asset class weights for requested benchmark model
if (model == 'Schwab') {
## US L US S Inter Fixed Cash
## ----- ----- ----- ----- ----
schwab_95_5 <- c(0.50, 0.20, 0.25, 0.00, 0.05) # bench - aggressive (95 / 5)
schwab_80_20 <- c(0.45, 0.15, 0.20, 0.15, 0.05) # bench - mod agg (80 / 20)
schwab_60_40 <- c(0.35, 0.10, 0.15, 0.35, 0.05) # bench - moderate (60 / 40)
schwab_40_60 <- c(0.25, 0.05, 0.10, 0.50, 0.10) # bench - mod consv (40 / 60)
schwab_20_80 <- c(0.15, 0.00, 0.05, 0.50, 0.30) # bench - conserv (20 / 80)
schwab_0_100 <- c(0.00, 0.00, 0.00, 0.40, 0.60) # bench - short term ( 0 /100)
weight <- rbind(schwab_95_5, schwab_80_20, schwab_60_40,
schwab_40_60, schwab_20_80, schwab_0_100)
} else if (model == 'test') {
## US L US S Inter Fixed Cash
## ----- ----- ----- ----- ----
one <- c(0.2, 0.3, 0.2, 0.2, 0.2)
two <- c(0, 1, 0, 1, 0 )
weight <- rbind(one, two)
} else {
## US L US S Inter Fixed Cash
## ----- ----- ----- ----- ----
SSP_500 <- c(1.00, 0.00, 0.00, 0.00, 0.00)
bench_80_20 <- c(0.80, 0.00, 0.00, 0.20, 0.00)
bench_60_40 <- c(0.60, 0.00, 0.00, 0.40, 0.00)
bench_40_60 <- c(0.40, 0.00, 0.00, 0.60, 0.00)
bench_20_80 <- c(0.20, 0.00, 0.00, 0.80, 0.00)
US_bonds <- c(0.00, 0.00, 0.00, 1.00, 0.00)
weight <- rbind(SSP_500, bench_80_20, bench_60_40, bench_40_60, bench_20_80, US_bonds)
}
colnames(weight) <- c('US L', 'US S', 'Inter', 'Fixed', 'Cash')
## ## approximation (not too bad for twr;
## no good for sd since different benchmarks can compensate for each other)
## ## calculate benchmark twr to define efficient frontier twr
## eftwr <- weight %*% twrc # column vector
## colnames(eftwr) <- 'EF TWR'
## calculate twri for efficient frontier model
eftwri <- twri %*% t(weight) # matrix
## turn back into xts
eftwri <- xts::as.xts( zoo::as.zoo( eftwri, zoo::index(twri)))
## calculate cumulative twr and standard deviation
eftwrc <- t(t(cumprod(eftwri+1)) / as.vector(eftwri[1,]+1) - 1)
eftwrcl <- t( xts::last(eftwrc) )
colnames(eftwrcl) <- 'eftwrc'
efstd <- as.matrix( apply(eftwri[2:nrow(eftwri),], 2, sd) ) # column vector
colnames(efstd) <- 'efstd'
if (isTRUE(annualize)) {
## calculate annualized standard deviation from monthly standard deviation
eftwrcl <- (1 + eftwrcl)^(365.25 / days.held) - 1
efstd <- efstd * 12^0.5
}
ef <- as.data.frame( cbind(eftwrcl, efstd) )
if (isTRUE(addline)) {
## plot lines for whatever period the data was supplied for
lines(ef$efstd, ef$eftwrc, type='b', col=col, lty=lty, pch=pch)
}
return(list(model=model, weight=weight, twri_in=twri_in, twri=twri, twrc=twrc, std=std,
eftwri=eftwri, ef=ef, from=from, to=to))
}
## ef(from='2020-12-31', to='2021-11-11')
## ef(model='simple', from='2015-12-31', to='2021-11-30')
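The `sqrt(12)` annualization of the monthly standard deviation used in `ef()` can be sanity-checked by simulation; a sketch assuming i.i.d. monthly returns (not part of the original file):

```r
# For i.i.d. monthly returns, the sd of 12-month sums is sqrt(12) times
# the monthly sd, which is the factor F derived in the comments above.
set.seed(7)
monthly <- matrix(rnorm(12 * 10000, 0, 0.02), nrow = 12)  # 10000 simulated years
yearly_sd <- sd(colSums(monthly))
yearly_sd / 0.02        # approximately sqrt(12), i.e. about 3.46
```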
|
|
####################
##this is r developing code for running simulation to generate
##the DNA/protein microarray data
## the model (ref: smyth 2004, Bayesian hierarchical model and linear model)
## linear model:
## Y_ijk=alpha_i+beta_j+ gamma_ij+ epsilon_ijk
## see the doc named: "Microarray data simulation.doc"
## feng@BU 09/03/2016
##
## Note: Oct 1st 2016, the code listed here is the old one that before
## writing them up into package functions. We keep them for reference
##
####################################
library(limma)
set.seed(2004);
#define the variables
numTreatment<-2 #number of different beta
numGene<-1000 #number of alpha
sampleSize<-100 ##this is the number of repeats for each group
alpha0_sigma<-2 #variance for alpha prior
beta0_sigma<-3 #variance for beta prior
#v0<-10 # the unscaled factor for gamma given gamma<=>0
gamma0_diff_sigma<-3
#the effects are assumed to be non negative and also fixed effect
#but following a normal distribution
alpha0<-(rnorm(numGene, 0,alpha0_sigma))
beta0<-(rnorm(numTreatment,0,beta0_sigma))
#
p_diff<-0.01
numGene_diff<-floor(numGene*p_diff)
#priors for variance distribution
d0<-5
sSquare0<-2
gamma0<-matrix(rep(0,numGene*numTreatment),nrow=numGene, byrow=T)
##In the following code snippet, we randomly distribute the differentially
##expressed gamma values into different positions across the treatment groups
for(i in c(1:numTreatment))
{
pos_diff<-sample(numGene, size=numGene_diff, replace=T)
gamma0[pos_diff,i]<-(rnorm(numGene_diff,0,gamma0_diff_sigma))
}
##now ready to generate variance
gamma0_var<-matrix(rchisq(numGene*numTreatment, df=d0),nrow=numGene, byrow=T)
gamma0_var<-1/(d0*sSquare0*sSquare0)*gamma0_var
gamma0_var<-1/gamma0_var
#now we have everything ready, do the observation values
#put them into matrix first
groupInfo<-rep(0,sampleSize*numTreatment)
Y_ijk<-matrix(rep(0,numGene*numTreatment*sampleSize),nrow=numGene,byrow=T)
for(j in c(1:numTreatment)) #for different treatment
{
samplePos<-c(1:sampleSize)+(j-1)*sampleSize;
groupInfo[samplePos]<-j;
for(k in c(1:numGene))
{
#write the meta data first
Y_ijk[k,samplePos]<-alpha0[k]+beta0[j]+gamma0[k,j]
Y_ijk[k,samplePos]<-Y_ijk[k,samplePos]+rnorm(sampleSize,0,sqrt(gamma0_var[k,j]))
}
}
##==========>now testing the function code in the module
library(ARPPA)
nGene<-2000
nTreatment<-numTreatment #number of different beta
sampleSize<-20 ##this is the number of repeats for each group
alpha.mean<-0 #variance for alpha prior
alpha.sigma<-alpha0_sigma
beta.mean<-0
beta.sigma<-beta0_sigma #variance for beta prior
#v0<-10 # the unscaled factor for gamma given gamma<=>0
gammaN0.sigma<-gamma0_diff_sigma
p_diff<-0.01
#priors for variance distribution
d0<-10
s0<-2
#call it
set.seed(2004);
dataExp<-simulateExpression(nGene, nTreatment, sampleSize,
alpha.mean=alpha.mean, alpha.sigma=alpha.sigma,
beta.mean=beta.mean, beta.sigma=beta.sigma,
prob.nonZero=0.01, gamma.sigma=gammaN0.sigma,
epsilon.d0=d0, epsilon.s0=s0
)
##now start fitting the model to estimate the EBayes parameters
#defining a S3 function for calculating the prior variance and df
#based on the observed variance
calculatePriorVariance<-function (x,df)
{
eg<-log(x)-digamma(df/2)+log(df/2)
eg_bar<-mean(eg)
tg_d02<-mean((eg-eg_bar)*(eg-eg_bar)*length(x)/(length(x)-1)-trigamma(df/2))
d0<-trigammaInverse(tg_d02)*2
s02<-exp(eg_bar+digamma(d0/2)-log(d0/2))
prior<-list(d0=d0, s02=s02)
return(prior)
}
scaledChiSq_Pdf<-function(x, d0, s0)
{
x_t<-d0*s0*s0/(x*x)
#x_t<-sqrt(x_t)
1/(2^(d0/2)*gamma(d0/2))*x_t^(d0/2-1)*exp(-1*x_t/2)
}
####testing code
###following the example for R help for eBayes
# See also lmFit examples
# Simulate gene expression data:
# 100000 genes and 60 arrays, with one gene differentially expressed
set.seed(2004); invisible(runif(100))
M <- matrix(rnorm(100000*60,sd=0.3),100000,60)
M[1,] <- M[1,] + 1
fit <- lmFit(M)
# Moderated t-statistic
fit <- eBayes(fit)
x<-fit$sigma*fit$sigma
df<-fit$df.residual
r<-sampleVariancePrior(x,df)
#testing code 2
#testing the data generated by the Y_ijk model above
#first got the variance
Y_ijk<-dataExp$exp
tY<-data.frame(t(Y_ijk))
tY$group<-c(rep(1,sampleSize),rep(2,sampleSize))
varY<-aggregate(tY,by=list(tY$group),FUN=var) #by doing aggregate, the new dataframe is arranged to have first column for "by" criteria.
#that is why in the next step, we need to get rid of the first column and last one, since the last one is the one we added as the group info.
varK<-varY[,c(-1,-nGene-2)]
x<-as.numeric(c(varK[1,],varK[2,]))
df<-rep(sampleSize-1,length(x))
r2<-calculatePriorVariance(x,df)
|
|
# Util
standard_error <- function(X, y, robust){
if (robust %in% c("White", "white", "HC0")) {
std_error <- robust_stde0(X, y)
cov_type <- "heteroskedastic (White)"
} else if (robust %in% c("Hinkley", "hinkley", "HC1")) {
std_error <- robust_stde1(X, y)
cov_type <- "heteroskedastic (Hinkley)"
} else if (robust %in% c("Horn", "horn", "HC2")) {
std_error <- robust_stde2(X, y)
cov_type <- "heteroskedastic (Horn)"
} else if (robust %in% c("MacKinnon", "mackinnon", "HC3")) {
std_error <- robust_stde3(X, y)
cov_type <- "heteroskedastic (MacKinnon)"
} else {
std_error <- stde(X, y)
cov_type <- "homoskedastic"
}
return(list("value"=std_error, "cov_type"=cov_type))
}
# Formula
#' @export
homoskedastic_standard_error <- function(X, y){
diag(homoskedastic_covariance_estimator(X, y))^(1/2)
}
#' @export
white_heteroskedastic_standard_error <- function(X, y){
diag(white_heteroskedastic_covariance_estimator(X, y))^(1/2)
}
#' @export
hinkley_heteroskedastic_standard_error <- function(X, y){
diag(hinkley_heteroskedastic_covariance_estimator(X, y))^(1/2)
}
#' @export
horn_heteroskedastic_standard_error <- function(X, y){
diag(horn_heteroskedastic_covariance_estimator(X, y))^(1/2)
}
#' @export
mackinnon_heteroskedastic_standard_error <- function(X, y){
diag(mackinnon_heteroskedastic_covariance_estimator(X, y))^(1/2)
}
# Abbreviation
#' @export
stde <- homoskedastic_standard_error
#' @export
robust_stde0 <- white_heteroskedastic_standard_error
#' @export
robust_stde1 <- hinkley_heteroskedastic_standard_error
#' @export
robust_stde2 <- horn_heteroskedastic_standard_error
#' @export
robust_stde3 <- mackinnon_heteroskedastic_standard_error
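The `*_covariance_estimator` functions referenced above are defined elsewhere in the package. For reference, the homoskedastic branch reduces to the textbook s²(X'X)⁻¹ formula; a self-contained sketch (the name `ols_stde` here is illustrative, not the package's):

```r
# Homoskedastic OLS standard errors from first principles:
# se = sqrt(diag(s2 * (X'X)^-1)) with s2 = RSS / (n - k).
ols_stde <- function(X, y) {
  beta <- solve(crossprod(X), crossprod(X, y))
  resid <- y - X %*% beta
  s2 <- sum(resid^2) / (nrow(X) - ncol(X))
  sqrt(diag(s2 * solve(crossprod(X))))
}
# agrees with summary(lm(y ~ X - 1))$coefficients[, "Std. Error"]
set.seed(42)
X <- cbind(1, rnorm(30))
y <- drop(X %*% c(1, 2)) + rnorm(30)
ols_stde(X, y)
```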
|
|
# The functions require rchart-helper.R preloaded
getLineRangePlot =
function(seriesNames, lines, ranges, rangeName, yLabel, verticalLineDate=NULL, colors, timezone="UTC") {
series <- list()
minValue <- 0
for (i in 1:length(lines)) {
series[[2 * (i - 1) + 1]] <-
list(name=seriesNames[i], data=lines[[i]], zIndex=1, color=colors[i],
marker=list(fillColor="white", lineWidth=2, lineColor=colors[i]))
series[[2 * i]] <- list(name=rangeName, data=ranges[[i]], zIndex=0,
type="arearange", color=colors[i], lineWidth=0, linkedTo=":previous", fillOpacity=0.3)
minValue <- min(minValue, min(ranges[[i]][2,]))
}
chart <- Highcharts$new()
xAxis <- list(type="datetime")
if (!is.null(verticalLineDate)){
date <- as.POSIXlt(strptime(as.character(verticalLineDate), "%Y-%m-%d", tz=timezone))
xAxis[["plotLines"]] <- paste("[{color: 'red',",
"value: Date.UTC(", date$year + 1900, ",", date$mon, ",", date$mday, "),",
"width: 2}]", sep="")
}
chart$set(xAxis=xAxis)
chart$yAxis(title=list(text=yLabel), min=minValue)
chart$set(series=series)
return(chart)
}
getMean1SigmaTimelapsePlot = function(data, names, colors, yLabel, colName="x", verticalLineDate=NULL, timezone="UTC") {
col <- which(names(data[[1]])==colName)[1]
dateFactors <- list()
statsMean <- list()
statsStds <- list()
for (i in 1:length(data)) {
data[[i]]$x = data[[i]][, col]
stats =
group_by(data[[i]], date) %>%
summarize(mean=mean(x), stdLower=mean(x) - sd(x), stdUpper=(mean(x) + sd(x))) %>%
arrange(date)
stats = t(stats)
# Timestamp in milliseconds
unixTimestamps <-
1000 * as.numeric(as.POSIXct(sort(unique(data[[i]]$date)),
origin="1970-01-01"))
statsMean[[i]] <- as.data.frame(rbind(setNames(unixTimestamps, nm=NULL),
as.numeric(stats[2,])))
names(statsMean[[i]]) <- NULL
statsStds[[i]] <- as.data.frame(rbind(setNames(unixTimestamps, nm=NULL),
as.numeric(stats[3,]),
as.numeric(stats[4,])))
names(statsStds[[i]]) <- NULL
}
return(getLineRangePlot(names, statsMean, statsStds, "1 sigma", yLabel, verticalLineDate, colors, timezone))
}
# getQ2TimelapsePlot
# data[[]]$x: Stats
# data[[]]$date: Date
getQ2TimelapsePlot = function(data, names, colors, yLabel, colName="x", verticalLineDate=NULL, timezone="UTC") {
col <- which(names(data[[1]])==colName)[1]
dateFactors <- list()
statsMedian <- list()
statsQ2 <- list()
for (i in 1:length(data)) {
dateFactors[[i]] <- as.factor(data[[i]]$date)
boxplot <- boxplot(data[[i]][, col] ~ dateFactors[[i]],
data=data.frame(dateFactors[[i]], data[[i]][, col]), plot=FALSE)
stats <- setNames(as.data.frame(boxplot$stats), nm=NULL)
# Timestamp in milliseconds
unixTimestamps <-
1000 * as.numeric(as.POSIXct(sort(unique(data[[i]]$date)),
origin="1970-01-01"))
statsMedian[[i]] <- rbind(setNames(unixTimestamps, nm=NULL), stats[3,])
statsQ2[[i]] <- rbind(setNames(unixTimestamps, nm=NULL), stats[c(2, 4),])
}
return(getLineRangePlot(names, statsMedian, statsQ2, "interquartile range", yLabel, verticalLineDate, colors, timezone))
}
# Helper for creating histogram
getBinItemList = function(data, interval=100) {
binItemList <- c()
currentBin <- interval
maxBin <- max(data$count) + interval
while (currentBin < maxBin) {
items <- filter(data, currentBin - interval <= count & count < currentBin)
binItemList <- c(binItemList,
paste("< ", currentBin, "<br>",
paste(items$name, collapse="<br>, ")))
currentBin <- currentBin + interval
}
return(binItemList)
}
# getStackedHistogram
# data[[]]$x
getStackedHistogram = function(data,
names,
xLabel,
yLabel=NULL,
colName="x",
minBin=NULL,
maxBin = NULL,
interval=100,
logScale=FALSE,
logBase=exp(1),
normalize=FALSE,
colors = c("#7cb5ec", "#000000")) {
series <- list()
plotLines <- list()
col <- which(names(data[[1]])==colName)[1]
actualInterval <- interval
for (i in 1:length(data)) {
maxBin <- max(maxBin, max(data[[i]][, col], na.rm=TRUE), na.rm=TRUE)
minBin <- min(minBin, min(data[[i]][, col], na.rm=TRUE), na.rm=TRUE)
}
if (logScale) {
maxBin <- log(maxBin + 1, base=logBase)
minBin <- log(minBin + 1, base=logBase)
actualInterval <- log(interval, base=logBase)
}
for (i in 1:length(data)){
x <- as.vector(as.matrix(data[[i]][, col]))
if (logScale) {
x <- log(x + 1, base=logBase)
}
plotLines[[i * 2 - 1]] <-
list(color=colors[i],
value=mean(x),
width=2,
label=list(text="mean", style=list(color=colors[i]), verticalAlign="middle"))
plotLines[[i * 2]] <-
list(color=colors[i],
value=median(x),
dashStyle="dash",
width=2,
label=list(text="median", style=list(color=colors[i]), verticalAlign="middle"))
histogram <- hist(x, breaks=seq(minBin, maxBin + actualInterval, actualInterval), plot=FALSE)
histNames <- getBinItemList(data[[i]], interval=actualInterval)
nBins <- min(length(histogram$breaks), length(histogram$counts))
counts <- histogram$counts[1:nBins]
if (normalize) {
counts <- 100 * counts / nrow(data[[i]])
}
breaks <- c(histogram$breaks[2:nBins], histogram$breaks[nBins] + actualInterval)
bins <- getValues(
breaks,
counts,
name=histNames)
series[[i]] <- list(name=names[i], data=bins)
}
chart <- Highcharts$new()
chart$chart(type="column")
chart$plotOptions(
column="{ grouping: false, pointPadding: 0, borderWidth: 0, groupPadding: 0, shadow: false}")
chart$xAxis(title=paste("{text: '", xLabel, "'}", sep=""),
plotLines=plotLines)
if (is.null(yLabel)) {
if (normalize) {
yLabel <- "density (%)"
} else {
yLabel <- "count"
}
}
chart$yAxis(title=paste("{text: '", yLabel, "'}", sep=""))
chart$set(series=series)
return(chart)
}
# getTimelapseLinePlot
# data[[]]$x: Stats
# data[[]]$date: Date
getTimelapseLinePlot = function(data, names, yLabel, colName="x", verticalLineDate=NULL, timezone="UTC") {
series <- list()
col <- which(names(data[[1]])==colName)[1]
for (i in 1:length(data)){
timelapseValues <- getTimelapseValues(
as.POSIXlt(strptime(as.character(data[[i]]$date), "%Y-%m-%d", tz=timezone)),
data[[i]][, col])
series[[i]] <- list(name=names[i], data=timelapseValues)
}
chart <- Highcharts$new()
xAxis <- list(type="datetime")
if (!is.null(verticalLineDate)){
date <- as.POSIXlt(strptime(as.character(verticalLineDate), "%Y-%m-%d", tz=timezone))
xAxis[["plotLines"]] <- paste("[{color: 'red',",
"value: Date.UTC(", date$year + 1900, ",", date$mon, ",", date$mday, "),",
"width: 2}]", sep="")
}
chart$set(xAxis=xAxis)
chart$yAxis(title=paste("{text: '", yLabel, "'}", sep=""), gridLineColor="#FFFFFF")
chart$set(series=series)
return(chart)
}
# Difference-in-difference plot
# Use with DiffInDiffAggregate function
diffInDiffPlot = function(data,
idCol,
xCol,
idLabelCol=NULL,
xLabel="period",
yLabel="change",
periodNames=NULL,
legendStyle=list(align="right", verticalAlign="top", layout="vertical")
) {
dataChart <- Highcharts$new()
ids <- unique(data[, idCol])
numPeriod <- 0
if (is.null(idLabelCol)) {
idLabelCol = idCol
}
for (i in 1:length(ids)) {
current <- data[data[, idCol] == ids[i],]
numPeriod <- nrow(current)
name <- current[1,][, idLabelCol]
x <- seq(0, numPeriod - 1, 1)
y <- current[, xCol]
z <- current[, xCol]
seriesData <- getValues(x, y, z, name)
visible <- TRUE
dataChart$series(name=name,
data=seriesData,
showInLegend=TRUE,
visible=visible)
}
if (is.null(periodNames)) {
periodNames <- paste("period", seq(0, numPeriod - 1))
}
dataChart$xAxis(categories=periodNames)
dataChart$yAxis(title=list(text=yLabel), gridLineColor="#FFFFFF")
do.call(dataChart$legend, c(legendStyle))
dataChart$tooltip(pointFormat=getPointFormat(y=yLabel, z=NULL))
return (dataChart)
}
|
|
# Compare MAPE across validation windows
# of the best model to be used in forecasting
source('2020-08-31-jsa-type-v2-ch2/03-evaluate/init.r')
# Initialize parameters
model <- c("BGY-E1-C4-I20000-A0.99-T12-F50-V50")
dir_model_folder <- paste0(dir_analysis_edie_model, model, "/")
model_params <- parse_model_identifier(model)
# Original data
file <- paste0(dir_model_folder, "data.csv")
data_raw <- read.csv(file, na.strings=c(""), stringsAsFactors = T)
# Map model indices to original variables
mapping <- unique(data.frame(
groupname = as.character(data_raw$group),
group = as.numeric(data_raw$group)
))
# Read data
files <- dir(dir_model_folder, pattern = 'count_info.data.R', full.names = TRUE)
data <- read_rdump(files)
# Read reference data
files <- dir(
dir_model_folder,
pattern = 'count_info_ref.data.R',
full.names = TRUE
)
data_ref <- read_rdump(files)
# Read model
load(paste0(dir_model_folder, "fit.data"))
# Test
posterior_sim(data, fit)
posterior_forecast(data, 5, fit)
# Predict for 5 (and repeat for 10, 15, 20, 25)
n_samples <- 1000
pfile <- paste0(dir_model_folder, "ts.csv")
if(!file.exists(pfile)) {
dfs <- lapply(seq(5, model_params$va, 5), function(ptime) {
print(paste0("-------------- Forecast duration: ", ptime, " years"))
first_year <- max(data_raw$year) - model_params$va
# Using posterior_sim, predict using data itself
predictions_sim <- mclapply(1:n_samples, mc.cores = 1, function(ii) {
predictions_actual <- posterior_sim(data_ref, fit)
predictions_actual <- lapply(
predictions_actual, function(x) {
idx1 <- (length(x)-model_params$va+1)
idx2 <- (length(x)-model_params$va+ ptime)
x[idx1:idx2]
})
predictions_actual
})
# Using posterior_forecast, predict using naive method
predictions_forecast <- mclapply(
1:n_samples, mc.cores = 1, function(ii) {
posterior_forecast(data, ptime, fit)
})
# Coerce these lists of lists to data.table
sim_li <- lapply(1:n_samples, function(i) {
x <- predictions_sim[[i]]
list_to_df(x, i)
})
forecast_li <- lapply(1:n_samples, function(i) {
x <- predictions_forecast[[i]]
list_to_df(x, i)
})
sim_df <- rbindlist(sim_li)
sim_df$type <- "forecast"
forecast_df <- rbindlist(forecast_li)
forecast_df$type <- "naive forecast"
df <- rbind(forecast_df, sim_df)
# Wide to long
df_long <- melt(df, id.vars = c("group", "sim", "type"))
names(df_long)[which(names(df_long) == "variable")] <- "year"
names(df_long)[which(names(df_long) == "value")] <- "model"
df_long$year <-
as.integer(gsub("year_", "", df_long$year)) + first_year - 1
df_long$group <- mapping[match(df_long$group, mapping$group),]$groupname
# Merge with actual counts
df_obs <- data.table(data_raw)[,
list(obs = .N),
by = c("year", "group")
]
df_long <- merge(
df_long, df_obs,
by = c("year", "group")
)
df_long$ptime <- ptime
df_long
})
# Calculate MAPE
dfs <- rbindlist(dfs)
dfs$diff <- abs(dfs$model - dfs$obs)
dfs$perc <- dfs$diff / dfs$obs
fwrite(dfs, pfile) # persist
}
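# For reference, the MAPE computed above is the mean of the per-row absolute
# percentage errors (abs(model - obs) / obs). A toy check with hypothetical
# numbers, independent of the model output:
obs_demo  <- c(100, 200, 400)
pred_demo <- c(110, 180, 400)
mape_demo <- mean(abs(pred_demo - obs_demo) / obs_demo)  # (0.1 + 0.1 + 0) / 3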
dfs <- fread(pfile)
prop <- 1 # proportion of MAPE
summary <- dfs[, list(MAPE = mean(perc)), by = c("ptime", "type")]
summary$MAPE1pt5 <- summary$MAPE * prop
summary$MAPEplt <- ifelse(
summary$type == "naive forecast",
summary$MAPE1pt5,
summary$MAPE
)
# Plot MAPE against ptime
# "naive forecast": not using data
# "forecast": using data
# "best window": largest window where MAPE of "naive forecast" >= "forecast"
lab_naive <- ifelse(
prop == 1,
paste0("Naive forecast (no data)"),
paste0("Naive forecast (no data) * ", prop)
)
p1 <- ggplot(summary, aes(x = ptime, y = MAPEplt, fill = type)) +
geom_bar(stat = "identity", position = "dodge") + theme +
ylab("MAPE (%)\n") + xlab("\nPrediction duration") +
scale_fill_manual(
values = c('#999999','#E69F00'),
name = "",
labels = c("Validation forecast (using data)", lab_naive)
) + theme(legend.position="bottom")
output <- paste0(dir_model_folder, "ts.png")
ggsave(p1, file = output, width = 7, height = 4)
# Boxplot
# p2 <- ggplot(summary, aes(x = as.character(ptime), y = MAPE, fill = type)) +
# geom_boxplot() + theme +
# ylab("MAPE (%)\n") + xlab("\nPrediction duration") +
# scale_fill_manual(
# values = c('#999999','#E69F00'),
# name = "Type"
# )
# p2
|
|
#Packages to import
library(ggplot2)
library(reshape2)
library(cowplot)
library(lubridate)
#Import data
tsc <- read.table("timeseries_cluster-eps_0.1-num_26.csv",sep=",",header=TRUE)
tp.int <- read.table("timeseries_cluster-eps_0.1-num_26.csv",sep=",",header=FALSE)[1,6:ncol(tsc)]
#This is the first sample date
basedate <- date("2000-03-15")
#Make the rest of the dates offset from the base date
tp <- basedate + as.numeric(tp.int)
#Load this in for normalization
all <- read.table("all_clusters-eps_0.1.csv",sep=",",header=TRUE)
all.sum <- colSums(all[,7:ncol(all)])
#Normalize by sample sequence depth
tsc[,6:ncol(tsc)] <- t(t(tsc[,6:ncol(tsc)])/all.sum)
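# The t(t(M)/v) idiom used above divides each *column* of a matrix by the
# matching element of a vector. A toy example (hypothetical values): after
# dividing by column sums, every column sums to 1.
M_demo <- matrix(c(1, 3, 2, 2), nrow = 2)  # columns sum to 4 and 4
M_norm <- t(t(M_demo) / colSums(M_demo))
# colSums(M_norm) is now c(1, 1)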
#Read in the OTU
otu <- read.table("otu_num_464.csv", sep=",", header=TRUE)
#Normalize by sample sequence depth
otu[,6:ncol(otu)] <- t(t(otu[,6:ncol(otu)])/all.sum)
#Melt for ggplot
otu.m <- melt(t(otu[,6:ncol(otu)]))
otu.df <- data.frame(Sequence=otu.m$Var2, Abundance=otu.m$value, Time=tp, TSC=rep(otu$TimeClustNumber, each=length(tp)))
otu.df <- otu.df[otu.df$TSC %in% c("6","84"),]
otu.df$TSC <- paste("TSC", otu.df$TSC)
otu.df$TSC <- factor(otu.df$TSC, levels=c("TSC 6","TSC 84"))
#Melt for ggplot
tsc.m <- melt(t(tsc[,6:ncol(tsc)]))
nseqs <- dim(tsc.m)[1]/length(tp)
classifications <- rep(tsc$TaxonomicID, each=length(tp))
df <- data.frame(Sequence=tsc.m$Var2, Abundance=tsc.m$value, Taxonomy=classifications, Time=tp)
#Keep only the abundant taxa of interest so the plot is tidier
keeptax <- c("k__Bacteria;p__Cyanobacteria;c__Synechococcophycideae;o__Synechococcales;f__Synechococcaceae;g__Synechococcus;unclassified",
"k__Bacteria;p__Bacteroidetes;c__[Saprospirae];o__[Saprospirales];bacI;bacI-A;unclassified")
df <- df[df$Taxonomy %in% keeptax,]
levels(df$Taxonomy)[levels(df$Taxonomy)==keeptax[1]] <- "Synechococcus"
levels(df$Taxonomy)[levels(df$Taxonomy)==keeptax[2]] <- "bacI-A"
#Left panel
p1<-ggplot(df, aes(x=Time, y=Abundance, color=Taxonomy, group=Sequence))+geom_line(alpha=0.7)+facet_grid(Taxonomy~., scale="free_y")+
ylab("Relative Abundance")+theme(legend.position="none")+scale_x_date(date_labels="%Y", date_breaks="1 year")+
geom_vline(xintercept=as.numeric(date("2000-09-01")+years(0:10)),alpha=0.5, col="yellow")+
ggtitle("TSC 26")
ggsave("MendotaFig1.pdf", height=4, width=7)
#Right panel
p2<-ggplot(otu.df, aes(x=Time, y=Abundance, group=Sequence, colour=as.factor(TSC)))+geom_line(alpha=0.7)+facet_grid(TSC~., scale="free_y")+
ylab("Relative Abundance")+theme(legend.position="none")+scale_x_date(date_labels="%Y", date_breaks="1 year")+
geom_vline(xintercept=as.numeric(date("2000-09-01")+years(0:10)),alpha=0.5, col="yellow")+
geom_vline(xintercept=as.numeric(date("2000-06-01")+years(0:10)),alpha=0.5, col="lightblue")+
scale_color_brewer(palette="Dark2")+ggtitle("OTU 464")
ggsave("MendotaFig1.5.pdf", height=4, width=7)
# Source : http://www.cookbook-r.com/Graphs/Multiple_graphs_on_one_page_(ggplot2)/
# Multiple plot function
#
# ggplot objects can be passed in ..., or to plotlist (as a list of ggplot objects)
# - cols: Number of columns in layout
# - layout: A matrix specifying the layout. If present, 'cols' is ignored.
#
# If the layout is something like matrix(c(1,2,3,3), nrow=2, byrow=TRUE),
# then plot 1 will go in the upper left, 2 will go in the upper right, and
# 3 will go all the way across the bottom.
#
multiplot <- function(..., plotlist=NULL, file, cols=1, layout=NULL) {
library(grid)
# Make a list from the ... arguments and plotlist
plots <- c(list(...), plotlist)
numPlots = length(plots)
# If layout is NULL, then use 'cols' to determine layout
if (is.null(layout)) {
# Make the panel
# ncol: Number of columns of plots
# nrow: Number of rows needed, calculated from # of cols
layout <- matrix(seq(1, cols * ceiling(numPlots/cols)),
ncol = cols, nrow = ceiling(numPlots/cols))
}
if (numPlots==1) {
print(plots[[1]])
} else {
# Set up the page
grid.newpage()
pushViewport(viewport(layout = grid.layout(nrow(layout), ncol(layout))))
# Make each plot, in the correct location
for (i in 1:numPlots) {
# Get the i,j matrix positions of the regions that contain this subplot
matchidx <- as.data.frame(which(layout == i, arr.ind = TRUE))
print(plots[[i]], vp = viewport(layout.pos.row = matchidx$row,
layout.pos.col = matchidx$col))
}
}
}
#Plot both figures side-by-side as well
pdf("Figure5_revisions.pdf", width=14, height=4)
multiplot(p1,p2,cols=2)
dev.off()
|
|
## Weight by the margin probabilities ###
rm(list = ls())
# install.packages(c("tidyverse", "msm", "foreign", "boot", "car",
# "nnet", "haven", "lavaan", "cowplot", "readstata13", "reshape2" ),
# quiet=TRUE)
#detach("package:dplyr")
source("/Users/chrisweber/Desktop/Authoritarianism_V2/Authoritarianism_V2/configurations/BookFunctions.R")
### Helper functions for the calculations below
### Dependencies
require(dplyr)
library(msm)
library(boot)
library(ggplot2)
library(nnet)
library(lavaan)
library(cowplot)
require(readstata13)
|
|
# Fit SSD to Silver dataset
# Copyright 2017 Province of British Columbia
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# We compare fits for the distributions with AIC
# Refer to https://cran.r-project.org/web/packages/fitdistrplus/vignettes/FAQ.html
# for information on the fitdistrplus package.
# Refer to AICcmodavg package for details on creating the AIC tables and
# how to do model averaging.
# Load the necessary libraries
library(AICcmodavg)
library(actuar) # for the burr distribution
library(FAdist) # for log-logistic
library(fitdistrplus)
library(ggplot2)
library(grid)
library(gridGraphics)
library(plyr) # for doing a separate analysis by chemical
library(VGAM) # for the gompertz, pareto, gumbel distributions
options(width=200)
source("mygofstat.r") # corrects an error in the fitdistrplus package code (groan)
#load test data
endpoints<-read.csv("SSDsilver.csv",header=TRUE,as.is=TRUE, strip.white=TRUE)
head(endpoints)
xtabs(~Chemical, data=endpoints, exclude=NULL, na.action=na.pass)
# create a directory for plots
dir.create(file.path("Plots"))
#use the descdist function from the fitdistrplus package to identify candidate distributions - create a Cullen-Frey graph
sumstats <- plyr::ddply(endpoints, "Chemical", function(x, boot){
# create the individual plots
png(file.path("Plots",paste(x$Chemical,".png",sep="")), h=6, w=6, units="in", res=300)
res <- descdist(x$Conc,discrete=FALSE, boot=boot)
dev.off()
# return the summary statistics as a dataframe, rather than a list
attr(res, "class") <- NULL # remove class attribute from results
data.frame(res)
}, boot=100)
# summary statistics
sumstats
# create list of possible distributions
fit.list <- list( lnorm = list(data=NULL, distr="lnorm", method="mle"),
llog = list(data=NULL, distr="llog", method="mle"), # log logistic
gomp = list(data=NULL, distr="gompertz",method="mle"),
lgumbel= list(data=NULL, distr="lgumbel",method="mle"), # log gumbel
gamma = list(data=NULL, distr="gamma", method="mle"),
# pareto = list(data=NULL, distr="pareto", method="mle"),
weibull= list(data=NULL, distr="weibull",method="mle")
# burr = list(data=NULL, distr="burr", method="mle")
)
# define log-gumbel distribution
# These functions are needed because this is a non-standard distribution
dlgumbel <- function(x, location=0, scale=1, log=FALSE){
  fx <- dgumbel(log(x), location=location, scale=scale, log=FALSE)/x
  if(log) fx <- log(fx)
  fx}
qlgumbel <- function(p, location=0, scale=1, lower.tail=TRUE, log.p=FALSE){
  if(log.p) p <- exp(p)
  if(!lower.tail) p <- 1-p
  exp( qgumbel(p, location=location, scale=scale))}
plgumbel <- function(q, location=0, scale=1, lower.tail=TRUE, log.p=FALSE){
  Fq <- pgumbel(log(q), location=location, scale=scale)
  if(!lower.tail) Fq <- 1-Fq
  if(log.p) Fq <- log(Fq)
  Fq}
rlgumbel <- function(n, location=0, scale=1){ exp(rgumbel(n, location=location, scale=scale))}
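# Quick sanity check of the log-gumbel round trip, as a sketch assuming the
# standard Gumbel forms (in this script pgumbel/qgumbel come from VGAM; the
# _std versions below are stand-ins so the check is self-contained):
# plgumbel and qlgumbel should be inverses of one another.
qgumbel_std <- function(p, location, scale) location - scale * log(-log(p))
pgumbel_std <- function(q, location, scale) exp(-exp(-(q - location) / scale))
q0 <- exp(qgumbel_std(0.05, location = 1, scale = 0.5))  # mirrors qlgumbel
p0 <- pgumbel_std(log(q0), location = 1, scale = 0.5)    # mirrors plgumbel; recovers 0.05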
#endpoints <- endpoints[ endpoints$Chemical=="Boron_CCME",] # for testing to fit only one chemical
# Fit each distribution from the list to each chemical
fit.all <- plyr::dlply(endpoints, "Chemical", function(x, probs=c(.05,.10), nboot=1000){
cat("Analyzing ", x[1,"Chemical"],"\n")
# fit all of the distributions in the list to this data. If
# a distribution fitting does not converge, then it returns NULL
dist.fits <- plyr::tryapply(fit.list, function(distr, x){
distr$data <- x$Conc
if(distr$distr=="burr"){
distr$start <-list(shape1=4, shape2=1, rate=1)
distr$method<- "mme"
distr$order <- 1:3
distr$memp <- function (x, order){sum(x^order)}
}
if(distr$distr=='gamma'){
distr$start <- list(scale=var(x$Conc)/mean(x$Conc),
shape=mean(x$Conc)^2/var(x$Conc)^2)
}
if(distr$distr == 'gompertz'){
# use the vgam to get the parameters of the fit
fit <- vglm(Conc~1, gompertz, data=x)
distr$start <- list(shape=exp(unname(coef(fit)[2])), scale=exp(unname(coef(fit)[1])) )
}
if(distr$distr=='lgumbel'){
distr$start <- list(location=mean(log(x$Conc)), scale=pi*sd(log(x$Conc))/sqrt(6))
}
if(distr$distr=='pareto'){ #use the vgam package to estimate the starting values
fit<- vglm(Conc~1, paretoff, data=x)
distr$start <- list(shape=exp(unname(coef(fit))))
distr$fix.arg<- list(scale=fit@extra$scale)
}
if(distr$distr=="llog" ){ # log-logistic
distr$start <- list(shape=mean(log(x$Conc)), scale=pi*sd(log(x$Conc))/sqrt(3))
}
cat(" Fitting ", distr$distr,"\n")
res <- do.call("fitdist", distr)
print(res)
res
},x=x)
# Get the aic table by hand.
aic.table <- plyr::ldply(dist.fits, function(x){
# extract the distribution name and AIC value
aic <- x$aic
distname <- x$distname
aicc <- AICcmodavg::AICc(x) # conflict with VGAM package
nparm <- length(x$estimate)
data.frame(distname=distname, aic=aic, k=nparm, aicc=aicc)
})
aic.table <- aic.table[order(aic.table$aicc),]
aic.table$delta.aicc <- aic.table$aicc - min(aic.table$aicc)
aic.table$weight <- round(exp(-aic.table$delta.aicc/2)/sum(exp(-aic.table$delta.aicc/2)),3)
# make predictions for HC5 based on all of the distributions
# Include a bootstrap approximation to the se of the estimates
pred.table <- plyr::ldply(dist.fits, function(x, probs=.05, nboot=1000){
distname <- x$distname
pred <- quantile(x, probs)
fit.boot <- bootdist(x, niter=nboot)
#browser()
se <- apply(quantile(fit.boot, probs=probs)$bootquant, 2, sd, na.rm=TRUE)
lcl <- quantile(fit.boot, probs=probs)$quantCI[1,]
ucl <- quantile(fit.boot, probs=probs)$quantCI[2,]
#cat("pred.table", distname, pred, se, lcl, ucl, "\n")
data.frame(distname=distname, quantile=probs, pred=unlist(pred$quantiles), se=se, lcl=unlist(lcl), ucl=unlist(ucl) )
}, probs=probs, nboot=nboot)
pred.table <- merge(pred.table, aic.table[,c("distname","weight")])
# compute the model averaged quantiles, the model averaged lcl and ucl, and unconditional se
q.modavg <- plyr::ddply(pred.table, "quantile", function(x){
avg.pred <- sum(x$pred * x$weight)
avg.lcl <- sum(x$lcl * x$weight)
avg.ucl <- sum(x$ucl * x$weight)
# get the unconditional se
avg.u.se <- sqrt(sum(x$weight * (x$se^2 + (x$pred-avg.pred)^2)))
data.frame(avg.pred=avg.pred, avg.u.se=avg.u.se, avg.lcl=avg.lcl, avg.ucl=avg.ucl)
})
q.modavg
# make the cdf plot comparing all of the fits
# capture the plot and save it
# get the list of distributions that were fit
dist.names <- plyr::laply(dist.fits, function(x) x$distname)
cdfcomp(dist.fits, xlogscale=TRUE, legendtext=dist.names,
main=paste("Empirical and theoretical CDF's - ",x[1,"Chemical"],sep=""))
gridGraphics::grid.echo()
cdf.comp.plot <- grid::grid.grab()
#compute the goodness of fit statistics - NOTE this does not work for the burr distribution, not sure why...
# must check to see if more than one distribution was fit - groan.
# Also had to fix an error in computing the p-value for the chi2 test when df=0
if(length(dist.fits) ==1) {gof.stat <- mygofstat(dist.fits[[1]], fitnames=dist.names)}
if(length(dist.fits) >1 ) {gof.stat <- mygofstat(dist.fits,fitnames =dist.names)}
# return the entire set of fitting results for this distribution
list(Chemical=x[1,"Chemical"], dist.fits=dist.fits, aic.table=aic.table, pred.table=pred.table,
q.modavg=q.modavg,
cdf.comp.plot =cdf.comp.plot,
gof.stat =gof.stat)
})
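# The weight column computed inside fit.all is the standard Akaike-weight
# formula w_i = exp(-delta_i/2) / sum_j exp(-delta_j/2). A toy check with
# hypothetical delta-AICc values: the weights must sum to 1 and the best
# model (delta = 0) gets the largest weight.
delta_demo <- c(0, 2, 7)
w_demo <- exp(-delta_demo / 2) / sum(exp(-delta_demo / 2))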
display.results <- function(x){
# display the results from the fitting functions
cat("\n\n\n\n\n\n\n\n\n *******************************************************************\n")
cat("\n\nResults when applied to ", x$Chemical, "\n")
cat("\n\nAICc table \n")
print(x$aic.table)
cat("\n\nPredictions of endpoints \n")
print(x$pred.table)
cat("\n\nModel averaged endpoint \n")
print(x$q.modavg)
cat("\n\nCDF comparative plot\n")
grid.newpage()
grid.draw(x$cdf.comp.plot)
cat("\n\nGoodness of fit statistics\n")
print(x$gof.stat)
}
# display all of the results from all of the Chemicals
plyr::l_ply(fit.all, display.results)
# save the comparative plot
plyr::l_ply(fit.all, function(x){
file.name=file.path("Plots",paste(x$Chemical,"-comparative-plot.png",sep=""))
png(file.name, h=6, w=6, units="in", res=300)
grid.newpage()
grid.draw(x$cdf.comp.plot)
dev.off()
})
|
|
#' Predict method for MBM objects
#'
#' @param x A previously-fit MBM object
#' @param newdata Optional dataset for prediction. If present, it should be a new dataset in
#' the same format used to fit the model (i.e., a site by covariate matrix). If missing,
#' predictions will be for the original data.
#' @param n_samples NA or integer; if NA, analytical predictions with standard deviation
#' are returned, otherwise posterior samples are returned.
#' @param type Whether to return predictions on the link or response scale
#' @details Prediction to new data is possible after the fact for mbm models, however
#' there can be performance penalties for doing so with large models. Thus, it is
#' sometimes preferable to predict during model fitting via the \code{predictX}
#' argument to the \code{\link{mbm}} function.
#'
#' Internally, all prediction is done on the link scale; use \code{type = "response"} to back-transform.
#'
#' This function caches to disk, thus it is important to ensure that adequate disk
#' space is available when using large prediction datasets.
#' @return A data frame of predictions and standard deviations (on the link scale); use
#' \code{x$y_rev_transform(x$rev_link(predictions$fit))} for the response scale.
#' @export
# predict.mbm <- function(x, newdata, n_samples = NA, GPy_location = NA, pyMsg = FALSE)
predict.mbm <- function(x, newdata, type=c("link", "response")) {
type <- match.arg(type)
if(missing(newdata)) {
newdata <- x$covariates
} else {
newdata <- as.matrix(newdata)
if(ncol(newdata) != ncol(x$x)) {
stop("newdata must have the same number of variables as the original data")
}
# parse newdata into dissimilarity format
newdata <- x$x_scaling(newdata)
newdata <- env_dissim(newdata)
newdata <- as.matrix(newdata[,-which(colnames(newdata) %in% c("site1", "site2"))])
}
pr <- x$pyobj$gp$predict_noiseless(newdata)
pr[[2]] <- sqrt(pr[[2]])
names(pr) <- c("mean", "sd")
if(type == "link") {
as.data.frame(pr)
	} else {
		# return the back-transformed mean explicitly (an assignment's value is returned invisibly)
		x$y_rev_transform(x$inv_link(pr[["mean"]]))
	}
}
#' Spatial MBM prediction
#'
#' @param x A previously-fit MBM object
#' @param prdat New dataset to be used for prediction; either a raster stack or data
#' frame. See 'Details'
#' @param coords matrix with 2 columns containing X-Y coordinates for \code{prdat},
#' required if prdat does not have a \code{coordinates} method.
#' @param method How to compute the spatial predictions; see 'Details'
#' @param ... Other named parameters to pass to \code{\link{predict.mbm}}.
#' @details \code{prdat} can either be a raster stack with new variables (and spatial
#' information) for prediction, or a data frame-like object with previous
#' predictions from \code{\link{predict.mbm}} with 4 columns: 1. site1, 2. site2,
#' 3. mean, and 4. sd.
#'
#' For rasters, if a layer named 'names' is included (recommended), this layer will
#' be used as sitenames, otherwise they will be assigned unique numbers.
#'
#' If \code{method} is "slow", spatial predictions will be computed by first
#' predicting dissimilarity to all pairs of raster cells, then performing an
#' ordination on the dissimilarity matrix to produce an RGB raster of spatial
#' predictions.
#'
#' For method == 'fast' (currently not implemented), computation is sped up by first
#' performing hierarchical clustering on the predicted dissimilarity matrix for the
#' calibration data (which will have already been computed when mbm was run) to
#' produce cell categories. Each raster cell will then be assigned the category of
#' the calibration data point that is closest environmentally. Then, we compute the
#' dissimilarity matrix of the categories (based on the mean environmental
#' values). The ordination is performed as with the slow method on this
#' dissimilarity matrix.
#' @return An object of class mbmSP, which is a list with three named items: \code{fits}
#' is a 3-band gridded SpatialPointsDataFrame giving the first three principal
#' components of predicted pairwise dissimilarity, stdev is a SpatialPointsDataFrame
#' giving the mean of pairwise dissimilarities among all other sites in a given site,
#' and pcoa is the principal coordinates analysis for the fits. Both fits and stdev
#' can be made into rasters using raster::stack() and raster::raster().
#' @export
spatial_predict <- function(x, prdat, coords, method = c('slow', 'fast'), ...)
{
method <- match.arg(method)
if(method == 'fast') {
stop('Fast method is not implemented, use method="slow"')
} else {
if(inherits(prdat, "RasterStack") | inherits(prdat, "RasterLayer") |
inherits(prdat, "RasterBrick"))
{
preds <- predict_mbm_raster(x, prdat)
} else {
if(missing(coords))
coords <- coordinates(prdat)
fitMat <- make_symmetric(prdat, site1 ~ site2, value.var = "fit")
sdMat <- make_symmetric(prdat, site1 ~ site2, value.var = "stdev")
preds <- list(fits = fitMat, stdev = sdMat, coords = coords)
}
fits <- x$y_rev_transform(x$rev_link(preds$fits))
# for fits, use a PCoA to collapse to 3 axes.
# For stdev, use rowmeans to collapse to one
fitPCoA <- ade4::dudi.pco(as.dist(fits), scannf = FALSE, nf = 3)
fitPCoA_scaled <- as.data.frame(apply(as.matrix(fitPCoA$l1), 2, function(x) {
x <- x - min(x)
x / max(x)
}))
sdMeans <- data.frame(sd = rowMeans(preds$stdev, na.rm = TRUE))
# make grids
sp::coordinates(fitPCoA_scaled) <- sp::coordinates(sdMeans) <- preds$coords
sp::gridded(fitPCoA_scaled) <- sp::gridded(sdMeans) <- TRUE
ret <- list(fits = fitPCoA_scaled, stdev = sdMeans, pcoa = fitPCoA)
class(ret) <- c("mbmSP", class(ret))
}
ret
}
#' Prediction for MBM from a raster dataset
#'
#' @param x A previously-fit MBM object
#' @param rasterdat Raster stack containing named layers matching the variable names in x (i.e., colnames(x$covariates)[-1]).
#' If a layer named 'names' is included, this layer will be used as sitenames, otherwise they will be assigned unique
#' numbers
#' @param ... Other named parameters to pass to \code{\link{predict.mbm}}.
#' @return A named list; \code{fits} is a cell by cell matrix of predictions (on the link scale; use \code{x$y_rev_transform(x$rev_link(predictions$fit))}
#' for the response scale), \code{stdev} is a cell by cell matrix of standard deviations, and \code{coords} is a matrix of coordinates. Row/column
#' names in \code{fits} and \code{stdev} match the rownames in \code{coords}.
#' @export
predict_mbm_raster <- function(x, rasterdat, ...)
{
newdata <- raster::getValues(rasterdat[[colnames(x$covariates)[-1]]]) # the -1 is to account for the fact that the first covariate name is always distance
rows <- complete.cases(newdata)
newdata <- newdata[rows,]
coords <- sp::coordinates(rasterdat)[rows,]
if("names" %in% names(rasterdat))
{
names <- raster::getValues(rasterdat[['names']])
names <- names[rows]
} else {
names <- 1:nrow(newdata)
}
rownames(newdata) <- rownames(coords) <- names
preds <- predict(x, newdata, ...)
diagSites <- unique(c(preds$site1, preds$site2))
predsDF <- rbind(preds, data.frame(site1 = diagSites, site2 = diagSites, fit = 0, stdev = NA))
# make site by site matrices and fill in lower triangle
fitMat <- make_symmetric(predsDF, site1 ~ site2, value.var = "fit")
sdMat <- make_symmetric(predsDF, site1 ~ site2, value.var = "stdev")
list(fits = fitMat, stdev = sdMat, coords = coords)
}
## deprecated - old hack, delete if nothing breaks
# #' Turn an MBM prediction dataframe into a symmetric matrix
# #' @param DF MBM prediction dataframe
# #' @param formula A formula to be passed to \code{link{reshape2::acast}}
# #' @param value.var Name of value variable for \code{acast}
# #' @param ... Additional parameters for \code{acast}
# #' @return A symmetric matrix of predictions
# #' @keywords internal
# make_symmetric <- function(DF, formula, value.var, ...)
# {
# mat <- reshape2::acast(DF, formula, value.var = value.var, ...)
# mat[lower.tri(mat)] <- t(mat)[lower.tri(mat)]
# mat
# }
|
|
#####################################################################
#                        PACKAGES/LIBRARIES                         #
#####################################################################
if (!require("RColorBrewer")) install.packages("RColorBrewer")
require("RColorBrewer")
library(ggplot2)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#
#####################################################################
# DEFAULT #
#####################################################################
options(scipen=10)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#
#####################################################################
#                             DATABASE                              #
#####################################################################
#Load the database and store it in the variable "terrorism"
#Replace the path below with the location of the file on your own computer
terrorism <- read.csv("C:/Users/ricar/Google Drive/BK_PC/EACH-MQAAE1/MQA_Trabalho/globalterrorismdb_0718dist.csv", header=FALSE)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#
#####################################################################
# MISSING DATA #
#####################################################################
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#
#####################################################################
#                       ANALYSES BY VARIABLE                        #
#####################################################################
#-------------------------------------------------------------------#
# #
#>>>>>>>>>>>>>>>>>>>>    Attacked Regions    <<<<<<<<<<<<<<<<<<<<#
# #
#-------------------------------------------------------------------#
#------------------------------Bar Plot-----------------------------#
#-------------------------------------------------------------------#
#Take column 11 of the database (dropping the header row), build a summary of it, and cap the number of elements in the summary
regioes <- summary(terrorism$V11[-1], maxsum = 10)
#Build a bar plot from the data in "regioes", setting the title and the color of each bar (given as a vector)
barplot(
regioes,
main = "Barplot de Atentados Terroristas por Região (1970-2017)",
las = 2,
col = c("darkred", "darkblue", "darkgreen", "darkgrey", "saddlebrown", "gold", "darkorange2", "darkorchid", "darkturquoise", "white"),
names.arg = ""
)
#Add a legend to the plot: set its position, pass the element names and then the matching color of each element (given as a vector). The legend width (3) and font size (0.8) are also set
legend(
"topright",
names(regioes),
fill = c("darkred", "darkblue", "darkgreen", "darkgrey", "saddlebrown", "gold", "darkorange2", "darkorchid", "darkturquoise", "white"),
text.width = 3,
cex = 0.8
)
#-------------------------------------------------------------------#
#-----------------------------Pie Chart-----------------------------#
#-------------------------------------------------------------------#
pie(
regioes,
main = "Gráfico de Setores de Atentados Terroristas por Região (1970-2017)",
col = c("darkred", "darkblue", "darkgreen", "darkgrey", "saddlebrown", "gold", "darkorange2", "darkorchid", "darkturquoise", "white")
)
#-------------------------------------------------------------------#
#------------------------------Boxplot------------------------------#
#-------------------------------------------------------------------#
regioes <- summary(terrorism$V11[-1])[-8]
boxplot(
regioes,
main = "Boxplot de Atentados Terroristas por Região (1970-2017)",
las = 2
)
#-------------------------------------------------------------------#
#-----------Min-1st Quartile-Median-Mean-3rd Quartile-Max-----------#
#-------------------------------------------------------------------#
summary(
regioes
)
#-------------------------------------------------------------------#
#--------------------Variance-Standard Deviation--------------------#
#-------------------------------------------------------------------#
writeLines(
paste(
"Variancia: ", var(regioes),
"\nDesvio Padrão: ", sd(regioes)
)
)
#------------------------------------------------------------------------------------------------#
#-------------------------------------------------------------------#
# #
#>>>>>>>>>>>>>>>>>>>>    Attacks per Year    <<<<<<<<<<<<<<<<<<<<#
# #
#-------------------------------------------------------------------#
#------------------------------Bar Plot-----------------------------#
#-------------------------------------------------------------------#
anos <- summary(terrorism$V2)[-48]
cores <- c()
gerar_grad <- colorRampPalette(c("darkgreen", "gold3", "gold4", "red4", "darkred"))
gradiente <- gerar_grad(47)
for (i in 1:47){
cores[i] = gradiente[min(47, max(1, ceiling((47/anos["2014"])*anos[i])))] # clamp the index into [1, 47]; the unclamped scaling can hit index 0 for small counts
}
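# The gradient loops in this script map a count onto a palette index by linear
# scaling, which can yield an index of 0 (an invalid subscript) for small
# counts. A clamped helper (hypothetical; not part of the original analysis)
# keeps every index inside [1, n]:
grad_index <- function(x, x_max, n) pmin(n, pmax(1, ceiling(n * x / x_max)))
# e.g.: cores <- gradiente[grad_index(as.numeric(anos), anos[["2014"]], 47)]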
barplot(
anos,
main = "Barplot de Atentados Terroristas por Ano (1970-2017)",
las = 2,
col = cores
)
#-------------------------------------------------------------------#
#-----------------------------Pie Chart-----------------------------#
#-------------------------------------------------------------------#
pie(
anos,
main = "Gráfico de Setores de Atentados Terroristas por Ano (1970-2017)",
col = c("darkred", "darkblue", "darkgreen", "darkgrey", "saddlebrown", "gold", "darkorange2", "darkorchid", "darkturquoise", "white")
)
#-------------------------------------------------------------------#
#------------------------------Boxplot------------------------------#
#-------------------------------------------------------------------#
boxplot(
anos,
main = "Boxplot de Atentados Terroristas por Ano (1970-2017)",
las = 2
)
#-------------------------------------------------------------------#
#-----------Min-1st Quartile-Median-Mean-3rd Quartile-Max-----------#
#-------------------------------------------------------------------#
summary(
anos
)
#-------------------------------------------------------------------#
#--------------------Variance-Standard Deviation--------------------#
#-------------------------------------------------------------------#
writeLines(
paste(
"Variancia: ", var(anos),
"\nDesvio Padrão: ", sd(anos)
)
)
#------------------------------------------------------------------------------------------------#
#-------------------------------------------------------------------#
# #
#>>>>>>>>>>>>>>>>>>>>    Methods Used    <<<<<<<<<<<<<<<<<<<<#
# #
#-------------------------------------------------------------------#
#------------------------------Bar Plot-----------------------------#
#-------------------------------------------------------------------#
metodos <- summary((terrorism[-1,])$V30, maxsum=7)
barplot(
metodos,
main = "Barplot de Métodos de Ataque (1970-2017)",
las = 2,
col = brewer.pal(n=7, name="Set1"),
names.arg = ""
)
legend(
"topright",
names(metodos),
fill = brewer.pal(n=7, name="Set1"),
text.width = 3,
cex = 1
)
#-------------------------------------------------------------------#
#-----------------------------Pie Chart-----------------------------#
#-------------------------------------------------------------------#
pie(
metodos,
main = "Gráfico de Setores de Métodos de Ataque (1970-2017)",
col = brewer.pal(n=7, name="Set1")
)
#-------------------------------------------------------------------#
#------------------------------Boxplot------------------------------#
#-------------------------------------------------------------------#
metodos <- summary((terrorism[-1,])$V30)[-3]
boxplot(
metodos,
main = "Boxplot de Métodos de Ataque (1970-2017)",
las = 2
)
#-------------------------------------------------------------------#
#-----------Min-1st Quartile-Median-Mean-3rd Quartile-Max-----------#
#-------------------------------------------------------------------#
summary(
metodos
)
#-------------------------------------------------------------------#
#--------------------Variance-Standard Deviation--------------------#
#-------------------------------------------------------------------#
writeLines(
paste(
"Variancia: ", var(metodos),
"\nDesvio Padrão: ", sd(metodos)
)
)
#------------------------------------------------------------------------------------------------#
#-------------------------------------------------------------------#
# #
#>>>>>>>>>>>>>>>>>>>>    Targets    <<<<<<<<<<<<<<<<<<<<#
# #
#-------------------------------------------------------------------#
#------------------------------Bar Plot-----------------------------#
#-------------------------------------------------------------------#
objetivos <- summary((terrorism[-1,])$V36, maxsum=14)
barplot(
objetivos[1:7],
main = "Barplot dos 7 Maiores Objetivos dos Atentados (1970-2017)",
las = 2,
col = brewer.pal(n=7, name="Paired"),
names.arg = ""
)
legend(
"topright",
names(objetivos[1:7]),
fill = brewer.pal(n=7, name="Paired"),
text.width = 1.5,
cex = 1
)
barplot(
c(objetivos[1],objetivos[8:14]),
main = "Barplot de Outros Objetivos dos Atentados Comparados ao Maior Objetivo (1970-2017)",
las = 2,
col = brewer.pal(n=8, name="Paired"),
names.arg = ""
)
legend(
"topright",
names(c(objetivos[1],objetivos[8:14])),
fill = brewer.pal(n=8, name="Paired"),
text.width = 2 ,
cex = 1
)
#-------------------------------------------------------------------#
#-----------------------------Pie Chart-----------------------------#
#-------------------------------------------------------------------#
pie(
c(objetivos[1],objetivos[8:14]),
main = "Gráfico de Setores de Outros Objetivos dos Atentados Comparados ao Maior Objetivo (1970-2017)",
col = brewer.pal(n=8, name="Set1")
)
#-------------------------------------------------------------------#
#------------------------------Boxplot------------------------------#
#-------------------------------------------------------------------#
objetivos <- summary((terrorism[-1,])$V36)[-16]
boxplot(
objetivos,
main = "Boxplot dos Objetivos dos Ataques (1970-2017)",
las = 2
)
#-------------------------------------------------------------------#
#-----------Min-1st Quartile-Median-Mean-3rd Quartile-Max-----------#
#-------------------------------------------------------------------#
summary(
objetivos
)
#-------------------------------------------------------------------#
#--------------------Variance-Standard Deviation--------------------#
#-------------------------------------------------------------------#
writeLines(
paste(
"Variancia: ", var(objetivos),
"\nDesvio Padrão: ", sd(objetivos)
)
)
#------------------------------------------------------------------------------------------------#
#-------------------------------------------------------------------#
# #
#>>>>>>>>>>>>>>>>>>>>    Deaths per Year    <<<<<<<<<<<<<<<<<<<<#
# #
#-------------------------------------------------------------------#
#------------------------------Bar Plot-----------------------------#
#-------------------------------------------------------------------#
mortesAno <- c()
for(i in 1970:2017){
mortesAno[i] = sum(as.numeric(as.character(terrorism[terrorism$V2==i,]$V99)), na.rm=TRUE) # as.character first: V99 is read as a factor, and as.numeric alone would return level codes, not death counts
}
cores <- c()
gerar_grad <- colorRampPalette(c("darkgreen", "gold3", "gold4", "red4", "darkred"))
gradiente <- gerar_grad(48)
for (i in 1:48){
set = ceiling((48/mortesAno[2014])*mortesAno[(i+1969)])
if (set != 0){
cores[i] = gradiente[set]
}
}
barplot(
mortesAno[1970:2017],
main = "Barplot de Mortes por Ano (1970-2017)",
las = 2,
col = cores,
names.arg = 1970:2017
)
#-------------------------------------------------------------------#
#-----------------------------Line Plot-----------------------------#
#-------------------------------------------------------------------#
plot(
mortesAno,
main = "Gráfico de Mortes por Ano (1970-2017)",
las = 2,
xlim = c(1970,2017),
ylab = "",
xlab = "",
type = "l",
lwd = 2
)
#-------------------------------------------------------------------#
#------------------------------Boxplot------------------------------#
#-------------------------------------------------------------------#
boxplot(
mortesAno[1970:2017],
main = "Boxplot de Mortes por Ano (1970-2017)",
las = 2
)
#-------------------------------------------------------------------#
#-----------Min-1st Quartile-Median-Mean-3rd Quartile-Max-----------#
#-------------------------------------------------------------------#
summary(
mortesAno[1970:2017]
)
#-------------------------------------------------------------------#
#--------------------Variance-Standard Deviation--------------------#
#-------------------------------------------------------------------#
writeLines(
paste(
"Variancia: ", var(mortesAno[1970:2017]),
"\nDesvio Padrão: ", sd(mortesAno[1970:2017])
)
)
#------------------------------------------------------------------------------------------------#
#-------------------------------------------------------------------#
# #
#>>>>>>>>>>>>    Attacks (mult, indiv, suc, doub)    <<<<<<<<<<<<#
# #
#-------------------------------------------------------------------#
#-----------------------------Pie Charts----------------------------#
#-------------------------------------------------------------------#
atkMult <- summary(terrorism$V26[-1])[-1]
pie(
atkMult,
main = "Gráfico de Setores de Atentados Conectados X Atentados Não Conectados (1970-2017)",
col = c("red3", "green4"),
labels = c(
paste(
"Não Conectados: ",
round(
atkMult[1]/(sum(atkMult[1],atkMult[2]))*100
), "%"
),
paste(
"Conectados: ",
round(
atkMult[2]/(sum(atkMult[1],atkMult[2]))*100
), "%"
)
)
)
atkIndiv <- summary(terrorism$V69[-1])
pie(
atkIndiv,
main = "Gráfico de Setores de Atentados Individuais X Atentados Coletivos (1970-2017)",
col = c("red3", "green4"),
labels = c(
paste(
"Coletivos: ",
round(
atkIndiv[1]/(sum(atkIndiv[1],atkIndiv[2]))*100, 3
), "%"
),
paste(
"Individuais: ",
round(
atkIndiv[2]/(sum(atkIndiv[1],atkIndiv[2]))*100, 3
), "%"
)
)
)
atkSuc <- summary(terrorism$V27[-1])
pie(
atkSuc,
main = "Gráfico de Setores de Atentados Bem Sucedidos X Atentados Mal Sucedidos (1970-2017)",
col = c("red3", "green4"),
labels = c(
paste(
"Mal Sucedidos: ",
round(
atkSuc[1]/(sum(atkSuc[1],atkSuc[2]))*100
), "%"
),
paste(
"Bem Sucedidos: ",
round(
atkSuc[2]/(sum(atkSuc[1],atkSuc[2]))*100
), "%"
)
)
)
atkDoub <- summary(terrorism$V23[-1])[-1]
pie(
atkDoub,
main = "Gráfico de Setores de Atentados Duvidosos de Serem Terrorismo (1970-2017)",
col = c("grey", "red3", "green4"),
labels = c(
paste(
"Não Avaliados: ",
round(
atkDoub[1]/(sum(atkDoub[1],atkDoub[2],atkDoub[3]))*100
), "%"
),
paste(
"Não há Dúvidas: ",
round(
atkDoub[2]/(sum(atkDoub[1],atkDoub[2],atkDoub[3]))*100
), "%"
),
paste(
"Há Dúvidas: ",
round(
atkDoub[3]/(sum(atkDoub[1],atkDoub[2],atkDoub[3]))*100
), "%"
)
)
)
#------------------------------------------------------------------------------------------------#
#-------------------------------------------------------------------#
# #
#>>>>>>>>>>>>>>>>    Target Nationalities    <<<<<<<<<<<<<<<<#
# #
#-------------------------------------------------------------------#
#------------------------------Bar Plot-----------------------------#
#-------------------------------------------------------------------#
natAlvos <- summary((terrorism[-1,])$V42, maxsum=9)[-9]
barplot(
natAlvos,
main = "Barplot das 8 Maiores Nacionalidades Alvos de Atentados (1970-2017)",
las = 2,
col = brewer.pal(n=8, name="Dark2"),
names.arg = ""
)
legend(
"topright",
names(natAlvos),
fill = brewer.pal(n=8, name="Dark2"),
text.width = 3,
cex = 0.85
)
#-------------------------------------------------------------------#
#-----------------------------Pie Chart-----------------------------#
#-------------------------------------------------------------------#
pie(
natAlvos,
main = "Gráfico de Setores das 8 Maiores Nacionalidades dos Alvos de Atentados (1970-2017)",
col = brewer.pal(n=8, name="Dark2")
)
#-------------------------------------------------------------------#
#------------------------------Boxplot------------------------------#
#-------------------------------------------------------------------#
natAlvos <- summary((terrorism[-1,])$V42, maxsum=220)[-131]
boxplot(
natAlvos,
main = "Boxplot das Nacionalidades dos Alvos de Atentados (1970-2017)",
las = 2
)
#-------------------------------------------------------------------#
#-----------Min-1st Quartile-Median-Mean-3rd Quartile-Max-----------#
#-------------------------------------------------------------------#
summary(
natAlvos
)
#-------------------------------------------------------------------#
#--------------------Variance-Standard Deviation--------------------#
#-------------------------------------------------------------------#
writeLines(
paste(
"Variancia: ", var(natAlvos),
"\nDesvio Padrão: ", sd(natAlvos)
)
)
#------------------------------------------------------------------------------------------------#
#-------------------------------------------------------------------#
# #
#>>>>>>>>>>>>>>>>>>>>    Responsible Groups    <<<<<<<<<<<<<<<<<<<<#
# #
#-------------------------------------------------------------------#
#------------------------------Bar Plot-----------------------------#
#-------------------------------------------------------------------#
grupos <- sort(summary((terrorism[-1,])$V59, maxsum=4000), decreasing=TRUE)[1:8]
barplot(
grupos,
main = "Barplot dos Grupos Responsáveis pelos Atentados (1970-2017)",
las = 2,
col = brewer.pal(n=8, name="Dark2"),
names.arg = ""
)
legend(
"topright",
names(grupos),
fill = brewer.pal(n=8, name="Dark2"),
text.width = 3,
cex = 0.85
)
#-------------------------------------------------------------------#
#-----------------------------Pie Chart-----------------------------#
#-------------------------------------------------------------------#
grupos <- sort(summary((terrorism[-1,])$V59, maxsum=4000), decreasing=TRUE)[-3538]
pie(
grupos,
main = "Gráfico de Setores dos Grupos Responsáveis pelos Atentados (1970-2017)",
col = brewer.pal(n=10, name="Paired")
)
#-------------------------------------------------------------------#
#------------------------------Boxplot------------------------------#
#-------------------------------------------------------------------#
boxplot(
grupos,
main = "Boxplot dos Grupos Responsáveis pelos Atentados (1970-2017)",
las = 2
)
#-------------------------------------------------------------------#
#-----------Min-1st Quartile-Median-Mean-3rd Quartile-Max-----------#
#-------------------------------------------------------------------#
summary(
grupos
)
#-------------------------------------------------------------------#
#--------------------Variance-Standard Deviation--------------------#
#-------------------------------------------------------------------#
writeLines(
paste(
"Variancia: ", var(grupos),
"\nDesvio Padrão: ", sd(grupos)
)
)
#------------------------------------------------------------------------------------------------#
#-------------------------------------------------------------------#
# #
#>>>>>>>>>>>>>>>>>>>>    Weapon Types    <<<<<<<<<<<<<<<<<<<<#
# #
#-------------------------------------------------------------------#
#------------------------------Bar Plot-----------------------------#
#-------------------------------------------------------------------#
armas <- sort(summary((terrorism[-1,])$V83), decreasing=TRUE)[-13]
barplot(
armas,
main = "Barplot dos Tipos de Armas Usadas (1970-2017)",
las = 2,
col = c(brewer.pal(n=10, name="Accent"),"orange","red"),
names.arg = ""
)
legend(
"topright",
names(armas),
fill = c(brewer.pal(n=10, name="Accent"),"orange","red"),
text.width = 5.5,
cex = 0.8
)
#-------------------------------------------------------------------#
#-----------------------------Pie Chart-----------------------------#
#-------------------------------------------------------------------#
pie(
armas,
main = "Gráfico de Setores dos Tipos de Armas Usadas (1970-2017)",
col = c(brewer.pal(n=10, name="Accent"),"orange","red")
)
#-------------------------------------------------------------------#
#------------------------------Boxplot------------------------------#
#-------------------------------------------------------------------#
boxplot(
armas,
main = "Boxplot dos Tipos de Armas Usadas (1970-2017)",
las = 2
)
#-------------------------------------------------------------------#
#-----------Min-1st Quartile-Median-Mean-3rd Quartile-Max-----------#
#-------------------------------------------------------------------#
summary(
armas
)
#-------------------------------------------------------------------#
#--------------------Variance-Standard Deviation--------------------#
#-------------------------------------------------------------------#
writeLines(
paste(
"Variancia: ", var(armas),
"\nDesvio Padrão: ", sd(armas)
)
)
#------------------------------------------------------------------------------------------------#
#-------------------------------------------------------------------#
# #
#>>>>>>>>>>>    Deaths/Year vs Attacks/Year    <<<<<<<<<<<<#
# #
#-------------------------------------------------------------------#
#---------------------------Scatter Plot----------------------------#
#-------------------------------------------------------------------#
anos <- summary(terrorism$V2)[-48]
anos2 <- c() # initialize before assigning into year-indexed positions
anos2[1970:2017] = c(anos[1:33],0,anos[34:47])
mortesAno <- c()
for(i in 1970:2017){
mortesAno[i] = sum(as.numeric(as.character(terrorism[terrorism$V2==i,]$V99)), na.rm=TRUE) # as.character first: V99 is read as a factor, and as.numeric alone would return level codes, not death counts
}
plot(
anos2[1970:2017],
mortesAno[1970:2017],
ylab = "Mortes por Ano",
xlab = "Atentados por Ano",
main = "Gráfico de Dispersão de Mortes/Anos por Atentados/Ano"
)
|
|
REBOL [
System: "REBOL [R3] Language Interpreter and Run-time Environment"
Title: "Build REBOL 3.0 boot extension module"
Rights: {
Copyright 2012 REBOL Technologies
REBOL is a trademark of REBOL Technologies
}
License: {
Licensed under the Apache License, Version 2.0
See: http://www.apache.org/licenses/LICENSE-2.0
}
Author: "Carl Sassenrath"
Needs: 2.100.100
Purpose: {
Collects host-kit extension modules and writes them out
to a .h file in a compilable data format.
}
]
print "--- Make Host Boot Extension ---"
secure none
do %form-header.r
;-- Conversion to C strings, depending on compiler ---------------------------
to-cstr: either system/version/4 = 3 [
; Windows format:
func [str /local out] [
out: make string! 4 * (length? str)
out: insert out tab
forall str [
out: insert out reduce [to-integer first str ", "]
if zero? ((index? str) // 10) [out: insert out "^/^-"]
]
;remove/part out either (pick out -1) = #" " [-2][-4]
head out
]
][
; Other formats (Linux, OpenBSD, etc.):
func [str /local out data] [
out: make string! 4 * (length? str)
forall str [
data: copy/part str 16
str: skip str 15
data: enbase/base data 16
forall data [
insert data "\x"
data: skip data 3
]
data: tail data
insert data {"^/}
append out {"}
append out head data
]
head out
]
]
;-- Collect Sources ----------------------------------------------------------
collect-files: func [
"Collect contents of several source files and return combined with header."
files [block!]
/local source data header
][
source: make block! 1000
foreach file files [
data: load/all file
remove-each [a b] data [issue? a] ; commented sections
unless block? header: find data 'rebol [
print ["Missing header in:" file] halt
]
unless empty? source [data: next next data] ; first one includes header
append source data
]
source
]
;-- Emit Functions -----------------------------------------------------------
out: make string! 10000
emit: func [d] [repend out d]
emit-cmt: func [text] [
emit [
{/***********************************************************************
**
** } text {
**
***********************************************************************/
}]
]
form-name: func [word] [
uppercase replace/all replace/all to-string word #"-" #"_" #"?" #"Q"
]
emit-file: func [
"Emit command enum and source script code."
file [file!]
source [block!]
/local title name data exports words exported-words src prefix
][
source: collect-files source
title: select source/2 to-set-word 'title
name: form select source/2 to-set-word 'name
replace/all name "-" "_"
prefix: uppercase copy name
clear out
emit form-header/gen title second split-path file %make-host-ext.r
emit ["enum " name "_commands {^/"]
; Gather exported words if exports field is a block:
words: make block! 100
exported-words: make block! 100
src: source
while [src: find src set-word!] [
if all [
<no-export> <> first back src
find [command func function funct] src/2
][
append exported-words to-word src/1
]
if src/2 = 'command [append words to-word src/1]
src: next src
]
if block? exports: select second source to-set-word 'exports [
insert exports exported-words
]
foreach word words [emit [tab "CMD_" prefix #"_" replace/all form-name word "'" "_LIT" ",^/"]]
emit "};^/^/"
if src: select source to-set-word 'words [
emit ["enum " name "_words {^/"]
emit [tab "W_" prefix "_0,^/"]
foreach word src [emit [tab "W_" prefix #"_" form-name word ",^/"]]
emit "};^/^/"
]
emit "#ifdef INCLUDE_EXT_DATA^/"
data: append trim/head mold/only/flat source newline
append data to-char 0 ; null terminator may be required
emit ["const unsigned char RX_" name "[] = {^/" to-cstr data "^/};^/^/"]
emit "#endif^/"
write rejoin [%../include/ file %.h] out
; clear out
; emit form-header/gen join title " - Module Initialization" second split-path file %make-host-ext.r
; write rejoin [%../os/ file %.c] out
]
|
|
library(Sleuth3)
library(ggplot2)
head(case0501)
summary(case0501$Diet)
qplot(Diet,Lifetime,data=case0501,geom="boxplot")
case0501.aov<-aov(Lifetime~Diet,data=case0501)
case0501.aov
anova(case0501.aov)
t.test(Depth~Year,data=case0201,var.equal=TRUE)
case0201.aov<-aov(Depth~Year,data=case0201)
anova(case0201.aov)
(-4.5833)^2
0.97304^2
166.638/176
nrow(case0501) # Find total sample size
length(unique(case0501$Diet)) # How many different groups?
2546.8/44.6
|
|
# random forest and naive bayes
library( randomForest )
library( e1071 )
library( caTools )
setwd( '/path/to/showofhands/data' )
data = read.csv( 'train.csv' )
# which columns are not factors?
cols = colnames( data )
for ( i in 1:length( cols )) {
col_class = class( data[,i] )
if ( col_class != 'factor' ) {
cat( cols[i], col_class, "\n" )
}
}
# Output:
#   UserID integer
#   YOB integer
#   Happy integer
#   votes integer
# clean-up
drops = c( 'UserID' )
data = data[, !( names( data ) %in% drops )]
data$Happy = as.factor( data$Happy )
# clean up YOB
data$YOB[data$YOB < 1930] = 0
data$YOB[data$YOB > 2004] = 0
data$YOB[is.na(data$YOB)] = 0
# train / test split
p_train = 0.8
n = nrow( data )
train_len = round( n * p_train )
test_start = train_len + 1
i = sample.int( n )
train_i = i[1:train_len]
test_i = i[test_start:n]
train = data[train_i,]
test = data[test_i,]
# random forest
y_test = as.factor( test$Happy )
ntree = 100
rf = randomForest( as.factor( Happy ) ~ ., data = train, ntree = ntree, do.trace = 10 )
p <- predict( rf, test, type = 'prob' )
probs = p[,2]
auc = colAUC( probs, y_test )
auc = auc[1]
cat( "Random forest AUC:", auc )
varImpPlot( rf, n.var = 20 )
# naive bayes
nb = naiveBayes( Happy ~ ., data = train )
# for predicting
drops = c( 'Happy' )
x_test = test[, !( names( test ) %in% drops )]
p = predict( nb, x_test, type = 'raw' )
probs = p[,2]
auc = colAUC( probs, y_test )
auc = auc[1]
cat( "\n\n" )
cat( "Naive Bayes AUC:", auc, "\n" )
|
|
library(shiny)
library(qvalue, lib.loc= .libPaths()[1])
library(shinythemes)
library(shinyjs)
library(markdown)
library(DT)
library(shinyjqui)###need to upload package to server
library(shinycssloaders)
options(shiny.maxRequestSize=300*1024^2)
files <- list.files("data")
shinyUI(fluidPage(
tags$head(
tags$head(includeScript("google-analytics.js")),
tags$link(rel="stylesheet", type="text/css",href="style.css"),
tags$script(type="text/javascript", src = "md5.js"),
tags$script('!function(d,s,id){var js,fjs=d.getElementsByTagName(s) [0],p=/^http:/.test(d.location)?\'http\':\'https\';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");')
),
useShinyjs(),
uiOutput("app"),
# Application title
headerPanel(
list(tags$head(tags$style("body {background-color: white; }")),
"FDRCLIM", HTML('<img src="picture2.png" height="100px"
style="float:left"/>','<p style="color:grey"> FDR control of ExG associations </p>' ))
),
theme = shinytheme("journal") ,
jqui_draggabled( sidebarPanel(
selectInput('file', 'Choose ExG association', setNames(files, files)),
tags$hr(),
checkboxInput('header', 'Header', TRUE),
wellPanel(a(h4('Please cite us in any publication that utilizes information from Arabidopsis CLIMtools:'), href = "https://www.nature.com/articles/s41559-018-0754-5", h6('Ferrero-Serrano, Á & Assmann SM. Phenotypic and genome-wide association with the local environment of Arabidopsis. Nature Ecology & Evolution. doi: 10.1038/s41559-018-0754-5 (2019)' ))),
wellPanel(
p( strong( HTML("π<sub>0</sub>"), "estimate inputs")),
selectInput("pi0method", p("Choose a", HTML("π<sub>0</sub>"), "method:"),
choices = c("smoother", "bootstrap")),
sliderInput(inputId = "lambda", label = p(HTML("λ"),"range"),
min = 0, max = 1, value = c(0, 0.95), step = 0.01),
numericInput("step", p(HTML("λ"),"step size:"), 0.05),
numericInput("sdf", "smooth df:", 3.0),
checkboxInput(inputId = "pi0log", label = p(HTML("π<sub>0</sub>"), "smoother log"), value = FALSE)
),
wellPanel(
p(strong("Local FDR inputs")),
selectInput("transf", "Choose a transformation method:",
choices = c("probit", "logit")),
checkboxInput(inputId = "trunc", label = "truncate local FDR values", value = TRUE),
checkboxInput(inputId = "mono", label = "monotone", value = TRUE),
numericInput("adj", "adjust:", 1.5),
numericInput("eps", "threshold:", 10^-8)
),
wellPanel(
p(strong("Output")),
sliderInput("fdr",
"FDR level:",
step = 0.01,
value = 0.05,
min = 0,
max = 1),
checkboxInput(inputId = "pfdr", label = "pfdr", value = FALSE)
), wellPanel(a("Tweets by @ClimTools", class="twitter-timeline"
, href = "https://twitter.com/ClimTools"), style = "overflow-y:scroll; max-height: 1000px"
),h6('Contact us: clim.tools.lab@gmail.com'))
),
mainPanel(
###add code to get rid of error messages on the app.
tags$style(type="text/css",
".shiny-output-error { visibility: hidden; }",
".shiny-output-error:before { visibility: hidden; }"
),
tabsetPanel(id="tabSelected",
tabPanel("About", h4("Using the App"), uiOutput("about"), h4("References"), uiOutput("ref")),
# tabPanel("Figures", h4("Plot"), plotOutput("qvaluePlot"), h4("Histogram"), plotOutput("qvalueHist"), h4("Summary"), verbatimTextOutput("summary") ),
tabPanel("Figures", uiOutput("subTabs")),
tabPanel("Help", uiOutput("help")))
)
))
|
|
getwd()
setwd("R")
system("../src/run.bat pet_all")
source("read.admb.R")
?read.table
mc=read.table("../src/arc/pet_all_s2_mcout.rep",skip=1,header=F)
mc=read.table("../src/arc/pet_all_mcout.rep",skip=1,header=F)
colnames(mc)=c("B","N","p_loss","Rep_Rate_F","Ref_Rate_SF","M","ER","Survival","ObjFun","ObjF_Tags","ObjF_Repr","ObjF_M","x","x","x")
plot(density(mc$B),xlim=c(0,800),xlab="Biomass")
mc=read.table("../src/arc/seg_all_mcout.rep",skip=1,header=F)
mc=read.table("../src/arc/tan_all_mcout.rep",skip=1,header=F)
mc=read.table("../src/arc/pet_011_mcout.rep",skip=1,header=F)
mc=read.table("../src/arc/seg_011_mcout.rep",skip=1,header=F)
mc=read.table("../src/arc/tan_011_mcout.rep",skip=1,header=F)
.MyPlot("../src/arc/pet_all_mcout.rep")
.MyLines("../src/arc/pet_011_mcout.rep",col="green")
.MyLines("../src/arc/pet_all_s3_mcout.rep",col="red")
.MyPlot <- function( repObj = "../src/arc/tan_011_mcout.rep",xlim=c(0,700),main="Biomass"){
mc=read.table(repObj,skip=1,header=F)
colnames(mc)=c("B","N","p_loss","Rep_Rate_F","Ref_Rate_SF","M","ER","Survival","ObjFun","ObjF_Tags","ObjF_Repr","ObjF_M","x","x","x")
plot(density(mc$B),xlim=xlim,main=main,xlab="Biomass")
}
.MyLines <- function( repObj = "../src/arc/tan_011_mcout.rep",col=1,lty=1,lwd=1 ){
mc=read.table(repObj,skip=1,header=F)
colnames(mc)=c("B","N","p_loss","Rep_Rate_F","Ref_Rate_SF","M","ER","Survival","ObjFun","ObjF_Tags","ObjF_Repr","ObjF_M","x","x","x")
lines(density(mc$B),lty=lty,lwd=lwd,col=col)
}
quantile(mc$B,.8)
lines(density(mc$B),lty=1,lwd=2,col="red")
density(mc$B)
names(mc)
dim(mc)
p_all=read.rep("../src/arc/pet_all.rep")
p_alls1=read.rep("../src/arc/pet_all_s1.rep")
p_alls2=read.rep("../src/arc/pet_all_s2.rep")
p_alls3=read.rep("../src/arc/pet_all_s3.rep")
p_alls4=read.rep("../src/arc/pet_all_s4.rep")
plot(p_all$Obs_Tags,ylim=c(0,170),xlab="Event",ylab="Tags returned",cex=1.5)
points(p_all$Pred_Tags,pch=19,cex=1.2)
points(p_alls1$Pred_Tags,pch=19,cex=1.2,col="red")
points(p_alls2$Pred_Tags,pch=19,cex=1.2,col="blue")
points(p_alls3$Pred_Tags,pch=2,cex=1.2,col="blue")
points(p_alls4$Pred_Tags,pch=18,cex=1.2,col="red")
# For 2011
p_011=read.rep("../src/arc/pet_011.rep")
plot(p_011$Obs_Tags,ylim=c(0,170),xlab="Event",ylab="Tags returned",cex=1.5)
points(p_011$Pred_Tags,pch=19,cex=1.2)
# Seguam
p_seg=read.rep("../src/arc/seg_all.rep")
p_segs1=read.rep("../src/arc/seg_all_s1.rep")
p_segs2=read.rep("../src/arc/seg_all_s2.rep")
p_segs3=read.rep("../src/arc/seg_all_s3.rep")
p_seg011=read.rep("../src/arc/seg_011.rep")
p_seg011s1=read.rep("../src/arc/seg_011_s1.rep")
plot(p_seg$Obs_Tags,ylim=c(0,170),xlab="Event",ylab="Tags returned",cex=1.5)
points(p_seg$Pred_Tags,pch=19,cex=1.2)
plot(p_segs1$Obs_Tags,ylim=c(0,170),xlab="Event",ylab="Tags returned",cex=1.5)
points(p_segs1$Pred_Tags,pch=19,cex=1.2,col="red")
points(p_segs2$Pred_Tags,pch=18,cex=1.2)
points(p_segs3$Pred_Tags,pch=5,cex=1.8)
plot(p_seg011$Obs_Tags,ylim=c(0,180),xlab="Event",ylab="Tags returned",cex=1.5)
points(p_seg011$Pred_Tags,pch=19,cex=1.2)
plot(p_seg011s1$Obs_Tags,ylim=c(0,180),xlab="Event",ylab="Tags returned",cex=1.5)
points(p_seg011s1$Pred_Tags,pch=19,cex=1.2)
# Tanaga
p_tan=read.rep("../src/arc/tan_all.rep")
p_all=read.rep("../src/arc/pet_all.rep")
r_2011=read.table("../src/mcout1.rep",header=T)
r_2011_od=read.table("../src/mcout.rep",header=T)
r_all=read.table("../src/mcout_all.rep",header=T)
r_all_od= r_all
#=read.table("../src/mcout.rep",header=T)
pairs(r_all[,1:7],pch=19,cex=.6)
plot(density(r_all$N),ylim=c(0,.013),xlim=c(0,1000),xlab="Population abundance",main="Seguam",col="green")
lines(density(r_2011$N),col="red",lwd=1)
lines(density(r_2011_od$N),col="red",lwd=2,lty=2)
lines(density(r_all_od$N),col="green",lwd=2,lty=2)
windows()
plot(density(r_all$M),xlim=c(0.0,0.4),xlab="M ",main="Seguam",col="green")
lines(density(r_2011$M),col="red",lwd=1)
lines(density(r_2011_od$M),col="red",lwd=2,lty=2)
lines(density(r_all_od$M),col="green",lwd=2,lty=2)
|
|
petsc_matprinter_fmt <- function(fmt="default")
{
fmt_choices <- c("default", "matlab", "dense", "impl", "info", "info_detail", "common", "index", "symmodu", "vtk", "native", "basic", "lg", "contour")
fmt <- match.arg(tolower(fmt), fmt_choices)
fmt_int <- .Call(sbase_petsc_printer_lookup_code, fmt)
.Call(sbase_petsc_matprinter_fmt, as.integer(fmt_int))
invisible()
}
petsc_matprinter <- function(dim, ldim, data, row_ptr, col_ind, fmt="default")
{
petsc_matprinter_fmt(fmt=fmt)
.Call(sbase_petsc_matprinter, dim, ldim, data, row_ptr, col_ind)
invisible()
}
|
|
# Example Usage
# irule -F project_collection_creation.r "*proj_name='alice-project'" "*owner='alice'" "*collab_list='bobby, eve , joe'" *lifetime=0.001 *object_quota=5 *size_quota=500
project_collection_creation {
create_project_collection(
*proj_name, *owner, *collab_list,
*lifetime, *object_quota, *size_quota)
}
INPUT *proj_name=$, *owner=$, *collab_list=$, *lifetime=$, *object_quota=$, *size_quota=$
OUTPUT ruleExecOut
|
|
#### Healthy Neighborhoods Project: Using Ecological Data to Improve Community Health
### Neville Subproject: Using Random Forests, Factor Analysis, and Logistic Regression to Screen Variables for Impacts on Public Health
## National Health and Nutrition Examination Survey: The R Project for Statistical Computing Script by DrewC!
# Detecting Prediabetes in those under 65
#### Section A: Prepare Code
### Step 1: Import Libraries and Import Dataset
## Open R Terminal
# R # open R in VS Code (any terminal) -- run at the shell, not inside this script
## Import Hadley Wickham Libraries
library(tidyverse) # All of the libraries above in one line of code
library(skimr) # Library used for easy summary of data
## Import Machine Learning Libraries
library(randomForest) # Popular random forest package for R
## Import Statistics Libraries
library(MASS) # Stepwise inclusion model with linear and logistic options
library(pROC) # ROC tests with AUC output
library(psych) # Survey analysis library with factor analysis
library(GPArotation) # Rotation options for factor analysis
## Import Data
setwd("C:/Users/drewc/GitHub/Healthy_Neighborhoods") # Set wd to project repository
df_nhanes = read.csv("_data/nhanes_1516_noRX_stage.csv") # Import dataset from _data folder
## Verify
dim(df_nhanes)
### Step 2: Prepare Data for Classification
## Subset for outcome of interest
df_nhanes$outcome <- 0 # Add new outcome column and set value to 0
df_nhanes$outcome[df_nhanes$LBXGH >= 5.7 & df_nhanes$LBXGH < 6.4 ] <- 1 # Create new column based on conditions
df_nh = subset(df_nhanes, select = -c(SEQN, LBXGH, DIQ010))
## Resolve missing data
df_nev = df_nh[, -which(colSums(is.na(df_nh)) > 3646)] # Remove variables with high missing values
df_nev = subset(df_nev, select = -c(AUATYMTL, SLQ300, SLQ310, AUATYMTR)) # Remove variables with factor value types
df_nev = na.omit(df_nev) # Omit rows with NA from Data Frame
df_nev = df_nev[which(df_nev$RIDAGEYR < 65), ]
## Verify
dim(df_nev)
colnames(df_nev)
#### Section B: Variable Selection using Quantitative and Qualitative Methods
### Step 3: Conduct Factor Analysis to Identify Latent Variables
## Subset by group with outcome
df_fa = subset(df_nev, outcome == 1) # Subset for outcome of interest
df_fa = subset(df_fa, select = -(outcome)) # Remove outcome variable
df_fa = na.omit(df_fa)
nrow(df_fa) # Check number of rows
## Perform Scree test and loadings
cor_fa = cor(df_fa)
cor_fa[!is.finite(cor_fa)] <- 0
model_scree = fa.parallel(cor_fa) # Create scree plot to determine number of latent variables
print(model_scree)
## Write Scree Test Output to File
result = model_scree # Save result df to variable
file = file("neville/neville_nhanes_pdm_under_results.txt") # Open result file in subproject repository
open(file, "w") # Open file with "w" to overwrite ("a" would append)
write("Factor Analysis", file) # Insert title
write(" ", file) # Insert space below title
write("Eigenvalues from Scree Test", file) # Insert title
write(" ", file) # Insert space below title
capture.output(result, file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
close(file) # Close file
## Identify Loadings
factors = fa(r = cor_fa, nfactors = 10)
df_load = as.data.frame(unclass(factors$loadings))
df_ld = df_load[, which(apply(df_load, 2, max) > 0.5)] # Remove variables with high missing values
colnames(df_ld)
df_ld$factor1[df_ld$MR1 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor2[df_ld$MR2 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor3[df_ld$MR3 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor4[df_ld$MR4 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor5[df_ld$MR5 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor6[df_ld$MR6 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor7[df_ld$MR7 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor8[df_ld$MR8 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor9[df_ld$MR9 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor10[df_ld$MR10 > 0.5] <- 1 # Create new column based on conditions
df_ld = subset(df_ld, select = -c(MR1, MR2, MR3, MR4, MR5, MR6, MR7, MR8, MR9, MR10))
df_f1 = subset(df_ld, factor1 == 1) # Subset for outcome of interest
df_f1 = df_f1["factor1"]
df_f2 = subset(df_ld, factor2 == 1) # Subset for outcome of interest
df_f2 = df_f2["factor2"]
df_f3 = subset(df_ld, factor3 == 1) # Subset for outcome of interest
df_f3 = df_f3["factor3"]
df_f4 = subset(df_ld, factor4 == 1) # Subset for outcome of interest
df_f4 = df_f4["factor4"]
df_f5 = subset(df_ld, factor5 == 1) # Subset for outcome of interest
df_f5 = df_f5["factor5"]
df_f6 = subset(df_ld, factor6 == 1) # Subset for outcome of interest
df_f6 = df_f6["factor6"]
df_f7 = subset(df_ld, factor7 == 1) # Subset for outcome of interest
df_f7 = df_f7["factor7"]
df_f8 = subset(df_ld, factor8 == 1) # Subset for outcome of interest
df_f8 = df_f8["factor8"]
df_f9 = subset(df_ld, factor9 == 1) # Subset for outcome of interest
df_f9 = df_f9["factor9"]
df_f10 = subset(df_ld, factor10 == 1) # Subset for outcome of interest
df_f10 = df_f10["factor10"]
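The ten near-identical `factorK` assignments above could be written as a loop. A self-contained sketch on a toy loadings frame (note it writes 0 rather than NA where a loading is at or below the cutoff, which is equivalent for the later `== 1` subsets):

```r
# Toy loadings frame standing in for df_load (hypothetical values)
df_load <- data.frame(MR1 = c(0.7, 0.2), MR2 = c(0.1, 0.9))
# One flag column per MR column, 1 where the loading exceeds 0.5
for (k in 1:2) {
  mr <- paste0("MR", k)
  df_load[[paste0("factor", k)]] <- as.integer(df_load[[mr]] > 0.5)
}
df_load
```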
## Write Scree Test Output to File
file = file("neville/neville_nhanes_pdm_under_results.txt") # Open result file in subproject repository
open(file, "a") # Open file and "a" to append
write("Factor Loadings", file) # Insert title
write(" ", file) # Insert space below title
capture.output(df_f1, file = file, append = TRUE) # write summary to file
capture.output(df_f2, file = file, append = TRUE) # write summary to file
capture.output(df_f3, file = file, append = TRUE) # write summary to file
capture.output(df_f4, file = file, append = TRUE) # write summary to file
capture.output(df_f5, file = file, append = TRUE) # write summary to file
capture.output(df_f6, file = file, append = TRUE) # write summary to file
capture.output(df_f7, file = file, append = TRUE) # write summary to file
capture.output(df_f8, file = file, append = TRUE) # write summary to file
capture.output(df_f9, file = file, append = TRUE) # write summary to file
capture.output(df_f10, file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
close(file) # Close file
### Step 4: Use a Random Forest to rank variables by importance
## Create a Random Forest
df_rf = df_nev
forest = randomForest(formula = outcome ~ ., data = df_rf, ntree = 1000, importance = TRUE) # This will take time
summary(forest) # Summary of forest model
## Tidy Results for Variable Classification
mat_forest = round(importance(forest), 2) # Round importance outcomes to two places,
df_forest = as.data.frame(mat_forest) # Save as data frame
df_forest = rownames_to_column(df_forest)
colnames(df_forest) <- c("Variable", "MSE", "Gini") # Change column names to be easily readable. Both values correspond to the summed increase in either Mean Squared Error or Gini as the variable is removed. Both are important and used elsewhere in random forests (scikit-learn).
## Create Importance Variable Lists
df_rank = arrange(df_forest, desc(Gini)) # Descend by variable in data frame
df_rank = df_rank[which(df_rank$Gini > 0 & df_rank$MSE > 0), ]
print(df_rank) # Print output
## Write Random Forest Output to File
result = print(df_rank) # Save result df to variable
file = file("neville/neville_nhanes_pdm_under_results.txt") # Open result file in subproject repository
open(file, "a") # Open file with "a" to append
write("Random Forest Variable Classification", file) # Insert title
write(" ", file) # Insert space below title
write.table(result, file, quote = FALSE, sep = " ") # write table of values to file
write(" ", file) # Insert space below result
close(file) # Close file
### Summary Statistics of Model Variables
file = file("neville/neville_nhanes_pdm_under_results.txt") # Open result file in subproject repository
open(file, "a") # Open file with "a" to append
write("Summary of Model Variables", file) # Insert title
write(" ", file) # Insert space below result
write("OHX19TC", file) # Insert space below title
capture.output(summary(df_nev$OHX19TC), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("BMDAVSAD", file) # Insert space below title
capture.output(summary(df_nev$BMDAVSAD), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("INDFMMPI", file) # Insert space below title
capture.output(summary(df_nev$INDFMMPI), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("INQ080", file) # Insert space below title
capture.output(summary(df_nev$INQ080), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("BMXLEG", file) # Insert space below title
capture.output(summary(df_nev$BMXLEG), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("LBXWBCSI", file) # Insert space below title
capture.output(summary(df_nev$LBXWBCSI), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("LBXRDW", file) # Insert space below title
capture.output(summary(df_nev$LBXRDW), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("BPXDI3", file) # Insert space below title
capture.output(summary(df_nev$BPXDI3), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("LBDRFO", file) # Insert space below title
capture.output(summary(df_nev$LBDRFO), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
close(file) # Close file
### Step 5: Linear Regression to Determine Direction of Impact
model_logistic <- glm(outcome~ OHX19TC + BMDAVSAD + INDFMMPI + INQ080 + BMXLEG + LBXWBCSI + LBXRDW + BPXDI3 + LBDRFO, data = df_nev)
summary(model_logistic)
## Write Model Output to File
result = summary(model_logistic) # Save result df to variable
file = file("neville/neville_nhanes_pdm_under_results.txt") # Open result file in subproject repository
open(file, "a") # Open file and "a" to append
write(" ", file) # Insert space below title
write("First Logistic Regression for Variable Direction", file) # Insert title
write(" ", file) # Insert space below title
capture.output(result, file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
close(file) # Close file
#### Section C: Build Regression Model and Risk Score for Validation
### Step 5: Create Dichotomous Variables Based on 3rd Quartile
## Create New Variables
df_nh$V1 <- 0 # Add new outcome column and set value to 0
df_nh$V2 <- 0 # Add new outcome column and set value to 0
df_nh$V3 <- 0 # Add new outcome column and set value to 0
df_nh$V4 <- 0 # Add new outcome column and set value to 0
df_nh$V5 <- 0 # Add new outcome column and set value to 0
df_nh$V1[df_nh$OHX01TC == 4 | df_nh$OHX02TC == 4 | df_nh$OHX03TC == 4 | df_nh$OHX04TC == 4 | df_nh$OHX05TC == 4 | df_nh$OHX06TC == 4 | df_nh$OHX07TC == 4 | df_nh$OHX08TC == 4 | df_nh$OHX09TC == 4 | df_nh$OHX10TC == 4 | df_nh$OHX11TC == 4 | df_nh$OHX12TC == 4 | df_nh$OHX13TC == 4 | df_nh$OHX14TC == 4 | df_nh$OHX15TC == 4 | df_nh$OHX16TC == 4 | df_nh$OHX17TC == 4 | df_nh$OHX18TC == 4 | df_nh$OHX19TC == 4 | df_nh$OHX20TC == 4 | df_nh$OHX21TC == 4 | df_nh$OHX22TC == 4 | df_nh$OHX23TC == 4 | df_nh$OHX24TC == 4 | df_nh$OHX25TC == 4 | df_nh$OHX26TC == 4 | df_nh$OHX27TC == 4 | df_nh$OHX28TC == 4 | df_nh$OHX29TC == 4 | df_nh$OHX30TC == 4 | df_nh$OHX31TC == 4 | df_nh$OHX32TC == 4] <- 1 # Create new column based on conditions
df_nh$V2[df_nh$BMDAVSAD > 25.1] <- 1 # Create new column based on conditions
df_nh$V3[df_nh$INDFMMPI > 5] <- 1 # Create new column based on conditions
df_nh$V4[df_nh$INQ080 == 1] <- 1 # Create new column based on conditions
df_nh$V5[df_nh$BMXLEG < 41.58] <- 1 # Create new column based on conditions
## Resolve missing data
df_lon = subset(df_nh, select = c(outcome, V1, V2, V3, V4, V5)) # Remove variables with factor value types
df_lon = na.omit(df_lon) # Omit rows with NA from Data Frame
dim(df_lon)
### Step 6: Logistic Regression with Stepwise Selection for Final Model
## Create training and validation set
sample = sample.int(n = nrow(df_lon), size = floor(.50*nrow(df_lon)), replace = F) # Create training and testing dataset with 50% split
train = df_lon[sample, ] # Subset data frame by sample
test = df_lon[-sample, ] # Subset data frame by removing sample
## Perform Logistic Regression on selected variables
model_logistic = glm(outcome~ V1 + V2 + V3 + V4 + V5, data = train)
summary(model_logistic)
## Stepwise backward selection
model_back = stepAIC(model_logistic, direction = "backward") # Stepwise backwards selection on model
summary(model_back) # Output model summary and check for variables to remove for final model
## Write Quantitative Selection Model Output to File
result1 = print(summary(model_logistic)) # Save result df to variable
result2 = print(summary(model_back)) # Save result df to variable
file = file("neville/neville_nhanes_pdm_under_results.txt") # Open result file in subproject repository
open(file, "a") # Open file and "a" to append
write("Training Logistic Regression Model", file) # Insert space below title
write(" ", file) # Insert space below title
capture.output(result1, file = file, append = TRUE) # write training model summary to file (result1, saved above)
write(" ", file) # Insert space below title
write("Stepwise Backwards Selection", file) # Insert space below title
write(" ", file) # Insert space below title
capture.output(result2, file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below title
close(file) # Close file
### Step 7: Create Score and Verify
# Create new column based on conditions
test$score <- 0 # Add new outcome column and set value to 0
test$score = (20*test$V2) + (15*test$V4) + (10*test$V5)
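A self-contained check of the weighted-score construction on toy data (weights 20/15/10 as in the line above; the rows are made up):

```r
# Toy rows standing in for the test set's dichotomous variables
toy <- data.frame(V2 = c(1, 0, 1), V4 = c(0, 1, 1), V5 = c(1, 1, 0))
# Weighted risk score: 20*V2 + 15*V4 + 10*V5
toy$score <- 20 * toy$V2 + 15 * toy$V4 + 10 * toy$V5
toy$score  # 30, 25, 35
```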
## Final Model and AUC Score
model_score = glm(outcome~ score, data = test) # Perform logistic regression model on selected variables on test data
roc_test = roc(model_score$y, model_score$fitted.values, ci = T, plot = T) # Perform ROC test on test data
auc(roc_test) # Print AUC score
## Write Quantitative Selection Model Output to File
result = print(auc(roc_test)) # Save result df to variable
file = file("neville/neville_nhanes_pdm_under_results.txt") # Open result file in subproject repository
open(file, "a") # Open file and "a" to append
write("Risk Score Validation", file) # Insert space below title
write(" ", file) # Insert space below title
capture.output(result, file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below title
close(file) # Close file
=======
#### Healthy Neighborhoods Project: Using Ecological Data to Improve Community Health
### Neville Subproject: Using Random Forestes, Factor Analysis, and Logistic regression to Screen Variables for Imapcts on Public Health
## National Health and Nutrition Examination Survey: The R Project for Statistical Computing Script by DrewC!
# Detecting Prediabetes in those under 65
#### Section A: Prepare Code
### Step 1: Import Libraries and Import Dataset
## Open R Terminal
R # open R in VS Code (any terminal)
## Import Hadley Wickham Libraries
library(tidyverse) # All of the libraries above in one line of code
library(skimr) # Library used for easy summary of data
## Import Machine Learning Libraries
library(randomForest) # Popular random forest package for R
## Import Statistics Libraries
library(MASS) # Stepwise inclusion model with linear and logistic options
library(pROC) # ROC tests with AUC output
library(psych) # Survey analysis library with factor analysis
library(GPArotation) # Rotation options for factor analysis
## Import Data
setwd("C:/Users/drewc/GitHub/Healthy_Neighborhoods") # Set wd to project repository
df_nhanes = read.csv("_data/nhanes_1516_noRX_stage.csv") # Import dataset from _data folder
## Verify
dim(df_nhanes)
### Step 2: Prepare Data for Classificaiton
## Subset for outcome of interest
df_nhanes$outcome <- 0 # Add new outcome column and set value to 0
df_nhanes$outcome[df_nhanes$LBXGH >= 5.7 & df_nhanes$LBXGH < 6.4 ] <- 1 # Create new column based on conditions
df_nh = subset(df_nhanes, select = -c(SEQN, LBXGH, DIQ010))
## Resolve missing data
df_nev = df_nh[, -which(colSums(is.na(df_nh)) > 3646)] # Remove variables with high missing values
df_nev = subset(df_nev, select = -c(AUATYMTL, SLQ300, SLQ310, AUATYMTR)) # Remove variables with factor value types
df_nev = na.omit(df_nev) # Omit rows with NA from Data Frame
df_nev = df_nev[which(df_nev$RIDAGEYR < 65), ]
## Verify
dim(df_nev)
colnames(df_nev)
#### Section B: Vairable Selection using Quantitative and Qualitative Methods
### Step 3: Conduct Factor Analysis to Identify Latent Variables
## Subset by group with outcome
df_fa = subset(df_nev, outcome == 1) # Subset for outcome of interest
df_fa = subset(df_fa, select = -(outcome)) # Remove outcome variable
df_fa = na.omit(df_fa)
nrow(df_fa) # Check number of rows
## Perform Scree test and loadings
cor_fa = cor(df_fa)
cor_fa[!is.finite(cor_fa)] <- 0
model_scree = fa.parallel(cor_fa) # Create scree plot to determine number of latent variables
print(model_scree)
## Write Scree Test Output to File
result = model_scree # Save result df to variable
file = file("neville/neville_nhanes_pdm_under_results.txt") # Open result file in subproject repository
open(file, "w") # Open file and "a" to append
write("Factor Analysis", file) # Insert title
write(" ", file) # Insert space below title
write("Eigenvalues from Scree Test", file) # Insert title
write(" ", file) # Insert space below title
capture.output(result, file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
close(file) # Close file
## Idenitfy Loadings
factors = fa(r = cor_fa, nfactors = 10)
df_load = as.data.frame(unclass(factors$loadings))
df_ld = df_load[, which(apply(df_load, 2, max) > 0.5)] # Remove variables with high missing values
colnames(df_ld)
df_ld$factor1[df_ld$MR1 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor2[df_ld$MR2 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor3[df_ld$MR3 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor4[df_ld$MR4 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor5[df_ld$MR5 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor6[df_ld$MR6 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor7[df_ld$MR7 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor8[df_ld$MR8 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor9[df_ld$MR9 > 0.5] <- 1 # Create new column based on conditions
df_ld$factor10[df_ld$MR10 > 0.5] <- 1 # Create new column based on conditions
df_ld = subset(df_ld, select = -c(MR1, MR2, MR3, MR4, MR5, MR6, MR7, MR8, MR9, MR10))
df_f1 = subset(df_ld, factor1 == 1) # Subset for outcome of interest
df_f1 = df_f1["factor1"]
df_f2 = subset(df_ld, factor2 == 1) # Subset for outcome of interest
df_f2 = df_f2["factor2"]
df_f3 = subset(df_ld, factor3 == 1) # Subset for outcome of
df_f3 = df_f3["factor3"]
df_f4 = subset(df_ld, factor4 == 1) # Subset for outcome of interest
df_f4 = df_f4["factor4"]
df_f5 = subset(df_ld, factor5 == 1) # Subset for outcome of interest
df_f5 = df_f5["factor5"]
df_f6 = subset(df_ld, factor6 == 1) # Subset for outcome of interest
df_f6 = df_f6["factor6"]
df_f7 = subset(df_ld, factor7 == 1) # Subset for outcome of interest
df_f7 = df_f7["factor7"]
df_f8 = subset(df_ld, factor8 == 1) # Subset for outcome of interest
df_f8 = df_f8["factor8"]
df_f9 = subset(df_ld, factor9 == 1) # Subset for outcome of interest
df_f9 = df_f9["factor9"]
df_f10 = subset(df_ld, factor10 == 1) # Subset for outcome of interest
df_f10 = df_f10["factor10"]
## Write Scree Test Output to File
file = file("neville/neville_nhanes_pdm_under_results.txt") # Open result file in subproject repository
open(file, "a") # Open file and "a" to append
write("Factor Loadings", file) # Insert title
write(" ", file) # Insert space below title
capture.output(df_f1, file = file, append = TRUE) # write summary to file
capture.output(df_f2, file = file, append = TRUE) # write summary to file
capture.output(df_f3, file = file, append = TRUE) # write summary to file
capture.output(df_f4, file = file, append = TRUE) # write summary to file
capture.output(df_f5, file = file, append = TRUE) # write summary to file
capture.output(df_f6, file = file, append = TRUE) # write summary to file
capture.output(df_f7, file = file, append = TRUE) # write summary to file
capture.output(df_f8, file = file, append = TRUE) # write summary to file
capture.output(df_f9, file = file, append = TRUE) # write summary to file
capture.output(df_f10, file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
close(file) # Close file
### Step 4: Use a Random Forest to rank variables by importance
## Create a Random Forest
df_rf = df_nev
forest = randomForest(formula = outcome ~ ., data = df_rf, ntree = 1000, importance = TRUE) # This will take time
summary(forest) # Summary of forest model
## Tidy Results for Variable Classification
mat_forest = round(importance(forest), 2) # Round importance outcomes to two places,
df_forest = as.data.frame(mat_forest) # Save as data frame
df_forest = rownames_to_column(df_forest)
colnames(df_forest) <- c("Variable", "MSE", "Gini") # Change column names to easily readible. Both values correspond to the summed increse in either Mean Squared Error or Gini as the variable is removed. Both are important and used elsewhere in randomforests (Scikit-learn).
## Create Importance Variable Lists
df_rank = arrange(df_forest, desc(Gini)) # Descend by variable in data frame
df_rank = df_rank[which(df_rank$Gini > 0 & df_rank$MSE > 0), ]
print(df_rank) # Print output
## Write Random Forest Output to File
result = print(df_rank) # Save result df to variable
file = file("neville/neville_nhanes_pdm_under_results.txt") # Open result file in subproject repository
open(file, "a") # Open file and "w" to overwrite
write("Random Forest Variable Classification", file) # Insert title
write(" ", file) # Insert space below title
write.table(result, file, quote = FALSE, sep = " ") # write table of values to filee
write(" ", file) # Insert space below result
close(file) # Close file
### Summary Statistics of Model Variables
file = file("neville/neville_nhanes_pdm_under_results.txt") # Open result file in subproject repository
open(file, "a") # Open file and "w" to overwrite
write("Summary of Model Variables", file) # Insert title
write(" ", file) # Insert space below result
write("OHX19TC", file) # Insert space below title
capture.output(summary(df_nev$OHX19TC), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("BMDAVSAD", file) # Insert space below title
capture.output(summary(df_nev$BMDAVSAD), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("INDFMMPI", file) # Insert space below title
capture.output(summary(df_nev$INDFMMPI), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("INQ080", file) # Insert space below title
capture.output(summary(df_nev$INQ080), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("BMXLEG", file) # Insert space below title
capture.output(summary(df_nev$BMXLEG), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("LBXWBCSI", file) # Insert space below title
capture.output(summary(df_nev$LBXWBCSI), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("LBXRDW", file) # Insert space below title
capture.output(summary(df_nev$LBXRDW), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("BPXDI3", file) # Insert space below title
capture.output(summary(df_nev$BPXDI3), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
write("LBDRFO", file) # Insert space below title
capture.output(summary(df_nev$LBDRFO), file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
close(file) # Close file
### Step 5: Liner Regression to Determine Direction of Impact
model_logistic <- glm(outcome~ OHX19TC + BMDAVSAD + INDFMMPI + INQ080 + BMXLEG + LBXWBCSI + LBXRDW + BPXDI3 + LBDRFO, data = df_nev)
summary(model_logistic)
## Write Model Output to File
result = summary(model_logistic) # Save result df to variable
file = file("neville/neville_nhanes_pdm_under_results.txt") # Open result file in subproject repository
open(file, "a") # Open file and "a" to append
write(" ", file) # Insert space below title
write("First Logistic Regression for Variable Direction", file) # Insert title
write(" ", file) # Insert space below title
capture.output(result, file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below result
close(file) # Close file
#### Section C: Build Regression Model and Risk Score for Validation
### Step 5: Create Dichotomous Variables Based on 3rd Quartile
## Create New Variables
df_nh$V1 <- 0 # Add new outcome column and set value to 0
df_nh$V2 <- 0 # Add new outcome column and set value to 0
df_nh$V3 <- 0 # Add new outcome column and set value to 0
df_nh$V4 <- 0 # Add new outcome column and set value to 0
df_nh$V5 <- 0 # Add new outcome column and set value to 0
df_nh$V1[df_nh$OHX01TC == 4 | df_nh$OHX02TC == 4 | df_nh$OHX03TC == 4 | df_nh$OHX04TC == 4 | df_nh$OHX05TC == 4 | df_nh$OHX06TC == 4 | df_nh$OHX07TC == 4 | df_nh$OHX08TC == 4 | df_nh$OHX09TC == 4 | df_nh$OHX10TC == 4 | df_nh$OHX11TC == 4 | df_nh$OHX12TC == 4 | df_nh$OHX13TC == 4 | df_nh$OHX14TC == 4 | df_nh$OHX15TC == 4 | df_nh$OHX16TC == 4 | df_nh$OHX17TC == 4 | df_nh$OHX18TC == 4 | df_nh$OHX19TC == 4 | df_nh$OHX20TC == 4 | df_nh$OHX21TC == 4 | df_nh$OHX22TC == 4 | df_nh$OHX23TC == 4 | df_nh$OHX24TC == 4 | df_nh$OHX25TC == 4 | df_nh$OHX26TC == 4 | df_nh$OHX27TC == 4 | df_nh$OHX28TC == 4 | df_nh$OHX29TC == 4 | df_nh$OHX30TC == 4 | df_nh$OHX31TC == 4 | df_nh$OHX32TC == 4] <- 1 # Create new column based on conditions
df_nh$V2[df_nh$BMDAVSAD > 25.1] <- 1 # Create new column based on conditions
df_nh$V3[df_nh$INDFMMPI > 5] <- 1 # Create new column based on conditions
df_nh$V4[df_nh$INQ080 == 1] <- 1 # Create new column based on conditions
df_nh$V5[df_nh$BMXLEG < 41.58] <- 1 # Create new column based on conditions
## Resolve missing data
df_lon = subset(df_nh, select = c(outcome, V1, V2, V3, V4, V5)) # Remove variables with factor value types
df_lon = na.omit(df_lon) # Omit rows with NA from Data Frame
dim(df_lon)
### Step 6: Logistic Regression with Stepwise Selection for Final Model
## Create training and validation set
sample = sample.int(n = nrow(df_lon), size = floor(.50*nrow(df_lon)), replace = F) # Create training and testing dataset with 50% split
train = df_lon[sample, ] # Susbset data frame by sample
test = df_lon[-sample, ] # Subset data frame by removing sample
## Perform Logisitc Regression on selected variables
model_logistic = glm(outcome~ V1 + V2 + V3 + V4 + V5, data = train)
summary(model_logistic)
## Stepwise backward selection
model_back = stepAIC(model_logistic, direction = "backward") # Stepwise backwards selection on model
summary(model_back) # Output model summary and check for variables to remove for final model
## Write Quantitative Selection Model Output to File
result1 = print(summary(model_logistic)) # Save result df to variable
result2 = print(summary(model_back)) # Save result df to variable
file = file("neville/neville_nhanes_pdm_under_results.txt") # Open result file in subproject repository
open(file, "a") # Open file and "a" to append
write("Training Logistic Regression Model", file) # Insert space below title
write(" ", file) # Insert space below title
capture.output(result, file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below title
write("Stepwise Backwards Selection", file) # Insert space below title
write(" ", file) # Insert space below title
capture.output(result2, file = file, append = TRUE) # write summary to file
write(" ", file) # Insert space below title
close(file) # Close file
### Step 7: Create Score and Verify
# Create new column based on conditions
test$score <- 0 # Add new outcome column and set value to 0
test$score = (20*test$V2) + (15*test$V4) + (10*test$V5)
## Final Model and AUC Score
model_score = glm(outcome~ score, data = test, family = binomial) # Logistic regression of outcome on the risk score in the test data
roc_test = roc(model_score$y, model_score$fitted.values, ci = T, plot = T) # Perform ROC test on test data
auc(roc_test) # Print AUC score
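# The AUC reported above can be read as the probability that a randomly chosen
# positive case receives a higher fitted score than a randomly chosen negative
# case. A minimal, self-contained sketch of that rank-based definition on toy
# scores (illustration only; not the pROC implementation):

```r
# Rank-based AUC: pairwise comparison of positive vs negative scores
auc_manual <- function(scores, labels) {
  pos <- scores[labels == 1]
  neg <- scores[labels == 0]
  comp <- outer(pos, neg, ">") + 0.5 * outer(pos, neg, "==")  # ties count half
  mean(comp)
}
auc_manual(c(0.9, 0.8, 0.3, 0.2), c(1, 1, 0, 0))  # perfectly separated: 1
```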
## Write Quantitative Selection Model Output to File
result = print(auc(roc_test)) # Save result df to variable
file = file("neville/neville_nhanes_pdm_under_results.txt") # Open result file in subproject repository
open(file, "a") # Open file and "a" to append
write("Risk Score Validation", file) # Write section title
write(" ", file) # Insert blank line below title
capture.output(result, file = file, append = TRUE) # Write AUC result to file
write(" ", file) # Insert blank line at end of section
close(file) # Close file
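# The repeated open/write/capture.output/close pattern above could be wrapped
# in a small helper; a sketch under the assumption that each section is a title
# followed by a printed object (append_section is a hypothetical name):

```r
# Append a titled section (title, blank line, printed object) to a text file
append_section <- function(path, title, obj) {
  con <- file(path, open = "a")
  on.exit(close(con))              # close the connection even on error
  writeLines(c(title, ""), con)    # title plus blank line
  capture.output(print(obj), file = con)
  writeLines("", con)              # blank line at end of section
}
```

# e.g. append_section("neville/neville_nhanes_pdm_under_results.txt",
#                     "Risk Score Validation", auc(roc_test))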
|
|
library(randomForest)
library(ROCR)
train_tf_idf = read.csv("/Users/ouyamei/Documents/GitHub/kaggle-crisis/data/kaggle_train_tf_idf.csv")
train_wc = read.csv("/Users/ouyamei/Documents/GitHub/kaggle-crisis/data/kaggle_train_wc.csv")
test_wc = read.csv("/Users/ouyamei/Documents/GitHub/kaggle-crisis/data/kaggle_test_wc.csv")
features = train_tf_idf[1:3000,c(-1,-502)]
label = as.factor(train_tf_idf$Predict[1:3000])
features_v = train_tf_idf[3001:4000,c(-1,-502)] # hold out rows 3001:4000 so validation does not overlap training
label_v = as.factor(train_tf_idf$Predict[3001:4000])
# find a best mtry parameter
bestmtry <- tuneRF(features,label, ntreeTry=100,
stepFactor=1.5,improve=0.01, trace=TRUE, plot=TRUE, doBest=FALSE) # the argument is doBest, not dobest
# use parameter to train a random forest model
rf <- randomForest(x=features, y=label, mtry=373, ntree=500,
keep.forest=TRUE, importance=TRUE)
# create more models to compare; more variants than this were tried in practice
rf2 <- randomForest(x=features, y=label, mtry=373, ntree=500,
classwt=c(3293,707), importance=TRUE)
# get the statistic results
rf.pr = predict(rf,newdata=features_v)
error = mean(rf.pr!=label_v)
library(Epi)
ROC(form=label_v~rf.pr, plot="ROC")
important_variables = importance(rf,type=1)
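# importance(rf, type = 1) returns a matrix of mean-decrease-in-accuracy values;
# ranking features from it can be sketched on a toy matrix standing in for the
# real output (the values below are invented for illustration):

```r
# Toy importance matrix standing in for importance(rf, type = 1)
imp_toy <- matrix(c(0.2, 0.7, 0.1), ncol = 1,
                  dimnames = list(c("word1", "word2", "word3"),
                                  "MeanDecreaseAccuracy"))
# Feature names sorted by decreasing importance
top_features <- rownames(imp_toy)[order(imp_toy[, 1], decreasing = TRUE)]
top_features  # "word2" "word1" "word3"
```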
|
|
# export password: alibaba
openssl genrsa -out private-rsa.key 1024 # generate a 1024-bit RSA private key
openssl req -new -x509 -key private-rsa.key -out public-rsa-10years.cer -days 3650 # issue a self-signed certificate valid for 10 years
openssl pkcs12 -export -out private-rsa.pfx -inkey private-rsa.key -in public-rsa-10years.cer # bundle key and certificate into a PKCS#12 file
|
|
jmp = c(6, 1.5, 1.5, 0.25, 0.75, 1.5, 3, 0.5, 1.5)
iter = 1e8
out.length = 10000
burn.length = 1e8
resp.f = c(1.21,1.37,1.2,1.47,1.47,1.49,1.47,1.39,1.52)
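# Assuming these are tuning constants for a random-walk Metropolis sampler
# (jmp as per-parameter proposal standard deviations, iter/burn.length as run
# lengths), the core update loop can be sketched on a standard-normal target.
# This is an illustration of the mechanism only, not the sampler these
# constants were written for:

```r
# Random-walk Metropolis on a standard-normal target (illustrative only)
set.seed(1)
n_keep  <- 2000
jump_sd <- 1.5                 # plays the role of one entry of jmp
chain   <- numeric(n_keep)
x <- 0
for (i in seq_len(n_keep)) {
  prop <- x + rnorm(1, sd = jump_sd)  # symmetric proposal
  # accept with probability min(1, target(prop)/target(x)), on the log scale
  if (log(runif(1)) < dnorm(prop, log = TRUE) - dnorm(x, log = TRUE)) x <- prop
  chain[i] <- x
}
```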
|
|
library(testthat)
source("../src/data_structures.r")
source("../src/estimators.r")
source("../src/losses.r")
source("../src/constrained_boosting.r")
test_that("fit_const_pair",{
data <- dummyData()
expect_error(fit_const_pair(4,4),
"is(object = data, class2 = \"data\") is not TRUE",fixed=TRUE)
expect_error(fit_const_pair(data,"string"),
"synth_effect is not a numeric or integer vector",fixed=TRUE)
leaf_dictionary = fit_const_pair(data,2)
expect_equal(leaf_dictionary$"FALSE"$value,3)
expect_equal(leaf_dictionary$"TRUE"$value,5)
leaf_dictionary = fit_const_pair(data,-4)
expect_equal(leaf_dictionary$"FALSE"$value,6)
expect_equal(leaf_dictionary$"TRUE"$value,2)
})
test_that("predict_counterfactuals",{
estimator_pair <- TreatmentDictionary(5,6)
data <- dummyData()
expect_error(predict_counterfactuals(4,data),
"is(object = estimator_pair, class2 = \"treatment_dictionary\") is not TRUE",fixed=TRUE)
expect_error(predict_counterfactuals(estimator_pair,data),
"is(object = estimator_pair$`TRUE`, class2 = \"leaf\") is not TRUE",fixed=TRUE)
estimator_pair <- TreatmentDictionary( Leaf(5,bool_id = c(TRUE,TRUE)),
Leaf(6,bool_id = c(FALSE,TRUE)))
expect_error(predict_counterfactuals(estimator_pair,data$X),
"is(object = data, class2 = \"data\") is not TRUE",fixed=TRUE)
counterfactuals = predict_counterfactuals(estimator_pair,data)
expect_equal(length(counterfactuals$treated$'TRUE'),2)
expect_equal(counterfactuals$treated$'TRUE'[1],5)
expect_equal(length(counterfactuals$treated$'FALSE'),2)
expect_equal(counterfactuals$treated$'FALSE'[1],6)
v<-TreatmentDictionary(Dict$new(),Dict$new())
v$`TRUE`$set("2",0.2)
v$`TRUE`$set("3",0.4)
v$`FALSE`$set("2",0.6)
v$`FALSE`$set("3",0.8)
c<-predict_counterfactuals(estimator_pair,data,v)
expect_equal(c$treated$`TRUE`,c(0.4,0.4))
expect_equal(c$treated$`FALSE`,c(0.6,0.6))
})
test_that("obs_gradient",{
estimator_pair <- TreatmentDictionary(Leaf(5),Leaf(6))
data <- dummyData()
counterfactuals = predict_counterfactuals(estimator_pair,data)
residuals <- obs_gradient(data$Y,data$W,counterfactuals)
expect_equal(residuals$`TRUE`,-2)
expect_equal(residuals$`FALSE`,-1)
})
test_that("build_leaf_index",{
X1<-101:200
X2<-rep(0,times=100)
X<-cbind(X1,X2)
tree<-Node(c(TRUE),1,151,
Node(c(TRUE,TRUE),1,126,
Leaf(10,bool_id = c(TRUE,TRUE,TRUE)),
Leaf(20,bool_id = c(FALSE,TRUE,TRUE))),
Leaf(30,bool_id = c(FALSE,TRUE)))
tree_pair<-TreatmentDictionary(tree,tree)
dict<-build_leaf_index(tree_pair,X)
expect_equal(dict$`TRUE`,dict$`FALSE`)
expect_equal(dict$`TRUE`$keys(),c("2","6","7"))
expect_equal(dict$`TRUE`$get("2"),51:100)
expect_equal(dict$`TRUE`$get("6"),26:50)
expect_equal(dict$`TRUE`$get("7"),1:25)
})
test_that("filter_index",{
indices<-Dict$new()
indices$set("1",1:50)
indices$set("2",51)
indices$set("3",52:75)
indices$set("4",76:100)
filtered<-filter_index(indices,which(1:100%%2==0))
expect_equal(length(filtered$keys()),4)
expect_equal(filtered$get("1"),seq(2,50,by = 2))
expect_equal(filtered$get("2"),numeric(0))
expect_equal(filtered$get("3"),seq(52,74,by = 2))
expect_equal(filtered$get("4"),seq(76,100,by = 2))
filtered<-filter_index(indices,c(1,100))
expect_equal(length(filtered$keys()),4)
expect_equal(filtered$get("1"),1)
expect_equal(filtered$get("2"),numeric(0))
expect_equal(filtered$get("3"),numeric(0))
expect_equal(filtered$get("4"),100)
})
test_that("constrained_boost",{
N = 100
X1 = runif(N,min = -pi,max = pi)
X2 = seq(0,0,length.out = N)
X = cbind(X1,X2)
W = rbinom(n=N,size = 1,p=logistic(X1))
Y = sin(X1)+ rnorm(n = N, mean = 0,sd = 0.15)
data <- Data(X,W,Y)
# parameters for constrained boosting
synth_effect = -2 # the synthetic effect you want to have
n_trees=10 # max number of trees to use for constrained boosting
tree_depth = 2 # max tree depth
regularization_par = 2 # regularization parameter
s = 10 # minimum number of samples per leaf in each tree
F<-constrained_boost(data,synth_effect,
n_trees,tree_depth,regularization_par,
min_samples_leaf=s)
for(i in 1:(n_trees+1)){
data_synth_effect<-(sum(F[[i]]$treated$`TRUE`)-sum(F[[i]]$treated$`FALSE`))/N
expect_equal(data_synth_effect,synth_effect)
}
})
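# The invariant asserted in the loop above, on toy counterfactual predictions:
# the constrained fit keeps (sum of treated predictions minus sum of control
# predictions) / N pinned at the synthetic effect after every boosting round.

```r
# Toy counterfactual predictions for N = 4 units
yhat_treated <- c(1, 2, 3, 4)   # predictions under treatment
yhat_control <- c(3, 4, 5, 6)   # predictions under control
N <- 4
data_synth_effect <- (sum(yhat_treated) - sum(yhat_control)) / N  # (10 - 18) / 4 = -2
```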
|
|
func_num=function(x){
# Return the digits 0-9 that do not appear in x
num<-c(0,1,2,3,4,5,6,7,8,9)
a<-as.numeric(unlist(strsplit(as.character(x),"")))
return(num[which(is.na(match(num,a)))])
}
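# What func_num computes, as a self-contained sketch on an example input:
# the digits of 1234 are 1, 2, 3, 4, so the absent digits are 0, 5, 6, 7, 8, 9.

```r
x <- 1234
digits <- as.numeric(0:9)
# keep the digits that do not occur in the decimal representation of x
absent <- digits[!digits %in% as.numeric(strsplit(as.character(x), "")[[1]])]
absent  # 0 5 6 7 8 9
```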
|
|
nox<-2; noy<-3;
paper<-F # graphics on paper=file (TRUE) or on screen (FALSE)
cleanup()
first.year.on.plot<-1974
last.year.on.plot<-2016
doGrid<-T
first.pch<-1 # first pch symbol
first.color<-1 # first color
palette("default") # good for colorful plots
#palette(gray(seq(0,.9,len=6))) # gray scale for papers, use len =500 to get black only
# earlier comparison runs, kept for reference (superseded by the assignment below)
#dirs<-c("NS_63-10-sep-2014","NS_key-2014-ver15_codage3","NS_key-2014-ver17")
#labels<-c("2011", "2014-key","2015-key")
#dirs<-c("NS_key-2011","NS_key-2014-ver13-final-key-run", "NorthSeaKeyRun")
#labels<-c("2011-run", "2014-run","2015-run")
dirs<-c("NS_key-2014-ver17","NS_key-2017-ver02")
labels<-c("2014-run", "2017-run")
Init.function() # get SMS.control object including sp.names
for (dir in dirs) {
Init.function(dir=file.path(root,dir)) # get SMS.control object including sp.names
a<-Read.summary.data(dir=file.path(root,dir),read.init.function=F)
a<-subset(a,(Year>=first.year.on.plot & Year<=last.year.on.plot & Z>0))
a<-data.frame(scenario=labels[which(dirs==dir)],Variable="M2",Year=a$Year, quarter=a$Quarter,Species=a$Species, Age=a$Age,west=a$west)
if (dir==dirs[1]) alld<-a else alld<-rbind(alld,a)
}
head(a)
all<-subset(alld,Species=='Cod' & quarter==1) # subset the combined data, not just the last scenario
by(all,list(all$Species),function(x ) {
trellis.device(device = "windows",
color = T, width=5, height=9,pointsize = 2,
new = T, retain = FALSE)
print(xyplot( west~Year|paste('Age:',Age),groups=paste(quarter,scenario), data=x,
type='b',lwd=1 , layout=c(3,3), ylab='Weight in the stock',
strip = strip.custom( bg='white'),par.strip.text=list(cex=1, lines=1.5),
auto.key = list(space = "bottom", points = T, lines = F,cex=0.9, columns = 2) ,
xlim=c(1976,2017),
scales = list(x = list( cex=0.8), y= list(cex=0.8),alternating = 1,relation='free')
))
})
###################
# more species per plot
cleanup()
all$age<-paste(all$Species,", age ",all$Age,sep='')
all$ageRev<-paste("age ",all$Age,", ",all$Species,sep='')
all<-all[order(all$label,all$Age,all$Species.n,all$Year),]
all1<-subset(all,Species %in% c('Cod','Whiting','Haddock'))
trellis.device(device = "windows",
color = F, width=9, height=9,pointsize = 2,
new = T, retain = FALSE)
print(xyplot( M2~Year|age,groups=label, data=all1,
type='b',lwd=1 , layout=c(3,3), as.table=F, ylab='Predation mortality',
strip = strip.custom( bg='white'),par.strip.text=list(cex=1, lines=1.5),
auto.key = list(space = "bottom", points = T, lines = F,cex=0.9, columns = 3) ,
xlim=c(1976,2000),
scales = list(x = list( cex=0.8), y= list(cex=0.8),alternating = 1,relation='free')
))
###################
# more species per plot, used for paper
cleanup()
trellis.device(device = "windows",
color = F, width=10, height=11,pointsize = 2,
new = T, retain = FALSE)
#all1<-subset(all,Species %in% c('Cod','Whiting','Haddock'))
all1<-subset(all,Species %in% c('Herring','Nor. pout','Sandeel'))
xyplot( M2~Year|ageRev,groups=label, data=all1,
as.table=T, ylab='Predation mortality',
layout=c(3,3),
strip = strip.custom( bg='white'),par.strip.text=list(cex=1, lines=0),
auto.key = list(space = "bottom", points = T, lines = T,cex=1, columns = 2,title='Prey size selection', pch=c(1,2,3,4,5)) ,
xlim=c(1976,2000), # to allow relation='free' and having (0,0) included
scales = list(x = list( cex=0.8), y= list(cex=0.8),alternating = 1,relation='free'),
panel = function(x, y, subscripts, groups) {
ltext(x=1976,y=0,labels=all1[subscripts,'age'],pos=4,cex=1.1)
panel.superpose(x, y,subscripts=subscripts, groups=groups,type='b')
}
)
###################
# same as previous, but in color
cleanup()
trellis.device(device = "windows",
color = T, width=12, height=12,pointsize = 12,
new = T, retain = FALSE)
myCol<- c(1,2,3,4,5)
myLwd<-rep(3,5)
myLty<-c(1,2,3,4,5)
all1<-subset(all,Species %in% c('Cod','Whiting','Haddock'))
all1<-subset(all,Species %in% c('Herring','Nor. pout','Sandeel'))
all1<-subset(all,Species %in% c('Cod','Herring','Sandeel'))
print(xyplot( M2~Year|ageRev,groups=label, data=all1,
as.table=T, ylab='Predation mortality',
layout=c(3,3), between = list(y = c(1, 1),x = c(1, 1)),
strip = strip.custom( bg='white'),par.strip.text=list(cex=0, lines=0), main=NA,
key = list(space = "bottom", points = F, lines = T,cex=1, columns = 2,title='Prey size selection',col=myCol, lwd=myLwd*1.5,pch=NA,
text = list(lab = as.character(unique(all1$label)),col=1) ),
xlim=c(1976,2000), # to allow relation='free' and having (0,0) included
scales = list(x = list( cex=0.8), y= list(cex=0.8),alternating = 1,relation='free'),
panel = function(x, y, subscripts, groups) {
ltext(x=1976,y=0,labels=all1[subscripts,'age'],pos=4,cex=1.0 ,col=1,font=2)
panel.superpose(x, y,subscripts=subscripts, groups=groups,type='l',
col=myCol,lwd=myLwd)
}
))
|
|
# Mojang API
#
# No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator)
#
# OpenAPI spec version: 2020-06-05
#
# Generated by: https://openapi-generator.tech
#' CurrentPlayerIDs Class
#'
#' @field id character. The player's UUID
#' @field name character. The player's name
#' @field legacy logical. Legacy (unmigrated) account flag
#' @field demo logical. Demo account flag
#'
#' @importFrom R6 R6Class
#' @importFrom jsonlite fromJSON toJSON
#' @export
CurrentPlayerIDs <- R6::R6Class(
'CurrentPlayerIDs',
public = list(
`id` = NULL,
`name` = NULL,
`legacy` = NULL,
`demo` = NULL,
initialize = function(`id`, `name`, `legacy`, `demo`){
if (!missing(`id`)) {
stopifnot(is.character(`id`), length(`id`) == 1)
self$`id` <- `id`
}
if (!missing(`name`)) {
stopifnot(is.character(`name`), length(`name`) == 1)
self$`name` <- `name`
}
if (!missing(`legacy`)) {
self$`legacy` <- `legacy`
}
if (!missing(`demo`)) {
self$`demo` <- `demo`
}
},
toJSON = function() {
CurrentPlayerIDsObject <- list()
if (!is.null(self$`id`)) {
CurrentPlayerIDsObject[['id']] <- self$`id`
}
if (!is.null(self$`name`)) {
CurrentPlayerIDsObject[['name']] <- self$`name`
}
if (!is.null(self$`legacy`)) {
CurrentPlayerIDsObject[['legacy']] <- self$`legacy`
}
if (!is.null(self$`demo`)) {
CurrentPlayerIDsObject[['demo']] <- self$`demo`
}
CurrentPlayerIDsObject
},
fromJSON = function(CurrentPlayerIDsJson) {
CurrentPlayerIDsObject <- jsonlite::fromJSON(CurrentPlayerIDsJson)
if (!is.null(CurrentPlayerIDsObject$`id`)) {
self$`id` <- CurrentPlayerIDsObject$`id`
}
if (!is.null(CurrentPlayerIDsObject$`name`)) {
self$`name` <- CurrentPlayerIDsObject$`name`
}
if (!is.null(CurrentPlayerIDsObject$`legacy`)) {
self$`legacy` <- CurrentPlayerIDsObject$`legacy`
}
if (!is.null(CurrentPlayerIDsObject$`demo`)) {
self$`demo` <- CurrentPlayerIDsObject$`demo`
}
},
toJSONString = function() {
# quote string fields and lower-case logicals so the output is valid JSON
sprintf(
'{
"id": "%s",
"name": "%s",
"legacy": %s,
"demo": %s
}',
self$`id`,
self$`name`,
tolower(self$`legacy`),
tolower(self$`demo`)
)
},
fromJSONString = function(CurrentPlayerIDsJson) {
CurrentPlayerIDsObject <- jsonlite::fromJSON(CurrentPlayerIDsJson)
self$`id` <- CurrentPlayerIDsObject$`id`
self$`name` <- CurrentPlayerIDsObject$`name`
self$`legacy` <- CurrentPlayerIDsObject$`legacy`
self$`demo` <- CurrentPlayerIDsObject$`demo`
}
)
)
|
|
#Naive bayes
library(caTools)
library(rpart)
library(rpart.plot)
library(caret)
library(e1071)
set.seed(125)
myd=read.csv('diabetes.csv')
myd$Outcome=factor(myd$Outcome,
c(0,1))
split=sample.split(myd$Outcome,SplitRatio = 0.75) # split on the outcome vector so classes stay balanced
train_data=subset(myd,split==TRUE)
test_data=subset(myd,split==FALSE)
classifier=naiveBayes(Outcome~.,
                      data=train_data) # formula interface keeps the outcome out of the predictors
pred=predict(classifier,newdata = test_data)
cm=table(test_data$Outcome,pred)
cm
result=confusionMatrix(pred,test_data$Outcome)
result
#Decision Tree
fit=rpart(Outcome~.,
          data=train_data,
          method="class")
#rpart.plot(fit)
pred1=predict(fit,newdata = test_data,type = "class")
cm1=table(test_data$Outcome,pred1)
result1=confusionMatrix(pred1,test_data$Outcome)
result1
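# The confusionMatrix output above reports accuracy, sensitivity, and
# specificity; they can be computed directly from a 2x2 table. Toy counts
# below are invented for illustration, with class 1 taken as positive
# (note caret defaults to the first factor level, so its labels may differ):

```r
# rows = actual class, columns = predicted class (toy counts)
cm_toy <- matrix(c(50, 10,
                    5, 35), nrow = 2, byrow = TRUE,
                 dimnames = list(actual = c("0", "1"), predicted = c("0", "1")))
accuracy    <- (cm_toy["0", "0"] + cm_toy["1", "1"]) / sum(cm_toy)  # (TN+TP)/total
sensitivity <- cm_toy["1", "1"] / sum(cm_toy["1", ])                # TP/(TP+FN)
specificity <- cm_toy["0", "0"] / sum(cm_toy["0", ])                # TN/(TN+FP)
```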
|
|
offsetof(record, x) = 0
offsetof(record, y) = 0
offsetof(record, c) = 0
|
|
capwords <- function(s, strict = FALSE, sentence=T) {
cap <- function(s) paste(toupper(substring(s, 1, 1)),
{s <- substring(s, 2); if(strict) tolower(s) else s},
sep = "", collapse = " " )
if(sentence){
sapply(s, cap, USE.NAMES = !is.null(names(s)))
} else {
sapply(strsplit(s, split = " "), cap, USE.NAMES = !is.null(names(s)))
}
}
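# The two modes of capwords, reproduced as a self-contained sketch: sentence
# mode capitalizes only the first character of the whole string, word mode
# capitalizes every whitespace-separated word.

```r
# Capitalize the first character of a string
cap_first <- function(s) paste0(toupper(substring(s, 1, 1)), substring(s, 2))
sentence_mode <- cap_first("hello world")                            # "Hello world"
word_mode <- paste(vapply(strsplit("hello world", " ")[[1]],
                          cap_first, character(1)), collapse = " ")  # "Hello World"
```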
|
|
cat('fix_gridlines() now available as a function\n')
fix_gridlines = function(p){
pp = ggplot_build(p)$layout$panel_params[[1]] # panel parameters of the built plot
grid_coords = lst(
x_major = pp$x$breaks
, x_minor = pp$x$minor_breaks
, y_major = pp$y$breaks
, y_minor = pp$y$minor_breaks
)
(
p
# + theme(
# panel.grid = element_blank()
# )
# + geom_vline(
# xintercept = grid_coords$x_major
# , size = 1
# , colour = 'white'
# # , alpha = .5
# )
# + geom_vline(
# xintercept = grid_coords$x_minor
# , size = .5
# , colour = 'white'
# , alpha = .5
# )
+ geom_hline(
yintercept = grid_coords$y_major
, size = 1
, colour = 'white'
, alpha = .5
)
+ geom_hline(
yintercept = grid_coords$y_minor
, size = .5
, colour = 'white'
, alpha = .5
)
) ->
p
return(p)
}
|
|
#!/usr/bin/env Rscript
setwd("~/tmp/jmh-dscg-benchmarks-results")
timestamp <- "20150109_1718"
# install.packages("vioplot")
# install.packages("beanplot")
# install.packages("ggplot2")
# install.packages("reshape2")
# install.packages("functional")
# install.packages("plyr")
# install.packages("extrafont")
# install.packages("scales")
library(vioplot)
library(beanplot)
library(ggplot2)
library(reshape2)
library(functional)
library(plyr) # needed to access . function
library(extrafont)
library(scales)
loadfonts()
capwords <- function(s, strict = FALSE) {
cap <- function(s) paste(toupper(substring(s, 1, 1)),
{s <- substring(s, 2); if(strict) tolower(s) else s},
sep = "", collapse = " " )
sapply(strsplit(s, split = " "), cap, USE.NAMES = !is.null(names(s)))
}
calculateMemoryFootprintOverhead <- function(requestedDataType, dataStructureOrigin) {
###
# Load 32-bit and 64-bit data and combine them.
##
dss32_fileName <- paste(paste("/Users/Michael/Dropbox/Research/hamt-improved-results/map-sizes-and-statistics", "32bit", timestamp, sep="-"), "csv", sep=".")
dss32_stats <- read.csv(dss32_fileName, sep=",", header=TRUE)
dss32_stats <- within(dss32_stats, arch <- factor(32))
#
dss64_fileName <- paste(paste("/Users/Michael/Dropbox/Research/hamt-improved-results/map-sizes-and-statistics", "64bit", timestamp, sep="-"), "csv", sep=".")
dss64_stats <- read.csv(dss64_fileName, sep=",", header=TRUE)
dss64_stats <- within(dss64_stats, arch <- factor(64))
#
dss_stats <- rbind(dss32_stats, dss64_stats)
classNameTheOther <- switch(dataStructureOrigin,
Scala = paste("scala.collection.immutable.Hash", capwords(tolower(requestedDataType)), sep = ""),
Clojure = paste("clojure.lang.PersistentHash", capwords(tolower(requestedDataType)), sep = ""))
classNameOurs <- paste("io.usethesource.capsule.Trie", capwords(tolower(requestedDataType)), "_5Bits", sep = "")
###
# If there are more measurements for one size, calculate the median.
# Currently we only have one measurement.
##
dss_stats_meltByElementCount <- melt(dss_stats, id.vars=c('elementCount', 'className', 'dataType', 'arch'), measure.vars=c('footprintInBytes')) # measure.vars=c('footprintInBytes')
dss_stats_castByMedian <- dcast(dss_stats_meltByElementCount, elementCount + className + dataType + arch ~ "footprintInBytes_median", median, fill=0)
mapClassName <- "io.usethesource.capsule.TrieMap_5Bits"
setClassName <- "io.usethesource.capsule.TrieSet_5Bits"
# mapClassName <- "io.usethesource.capsule.TrieMap_BleedingEdge"
# setClassName <- "io.usethesource.capsule.TrieSet_BleedingEdge"
###
# Calculate different baselines for comparison.
##
dss_stats_castByBaselinePDBDynamic <- aggregate(footprintInBytes_median ~ elementCount + dataType + arch, dss_stats_castByMedian[dss_stats_castByMedian$className == mapClassName | dss_stats_castByMedian$className == setClassName,], min)
names(dss_stats_castByBaselinePDBDynamic) <- c('elementCount', 'dataType', 'arch', 'footprintInBytes_baselinePDBDynamic')
# dss_stats_castByBaselinePDB0To4 <- aggregate(footprintInBytes_median ~ elementCount + dataType + arch, dss_stats_castByMedian[dss_stats_castByMedian$className == "io.usethesource.capsule.TrieMap" | dss_stats_castByMedian$className == "io.usethesource.capsule.TrieSet",], min)
# names(dss_stats_castByBaselinePDB0To4) <- c('elementCount', 'dataType', 'arch', 'footprintInBytes_baselinePDB0To4')
#
# dss_stats_castByBaselinePDB0To8 <- aggregate(footprintInBytes_median ~ elementCount + dataType + arch, dss_stats_castByMedian[dss_stats_castByMedian$className == "io.usethesource.capsule.TrieMap0To8" | dss_stats_castByMedian$className == "io.usethesource.capsule.TrieSet0To8",], min)
# names(dss_stats_castByBaselinePDB0To8) <- c('elementCount', 'dataType', 'arch', 'footprintInBytes_baselinePDB0To8')
#
# dss_stats_castByBaselinePDB0To12 <- aggregate(footprintInBytes_median ~ elementCount + dataType + arch, dss_stats_castByMedian[dss_stats_castByMedian$className == "io.usethesource.capsule.TrieMap0To12" | dss_stats_castByMedian$className == "io.usethesource.capsule.TrieSet0To12",], min)
# names(dss_stats_castByBaselinePDB0To12) <- c('elementCount', 'dataType', 'arch', 'footprintInBytes_baselinePDB0To12')
###
# Merges baselines.
##
dss_stats_with_min <- merge(dss_stats_castByMedian, dss_stats_castByBaselinePDBDynamic)
# dss_stats_with_min <- merge(dss_stats_with_min, dss_stats_castByBaselinePDB0To4)
# dss_stats_with_min <- merge(dss_stats_with_min, dss_stats_castByBaselinePDB0To8)
# dss_stats_with_min <- merge(dss_stats_with_min, dss_stats_castByBaselinePDB0To12)
# http://www.dummies.com/how-to/content/how-to-add-calculated-fields-to-data-in-r.navId-812016.html
dss_stats_with_min <- within(dss_stats_with_min, memoryOverheadFactorComparedToPDBDynamic <- dss_stats_with_min$footprintInBytes_median / footprintInBytes_baselinePDBDynamic)
dss_stats_with_min <- within(dss_stats_with_min, memorySavingComparedToPDBDynamic <- 1 - (dss_stats_with_min$footprintInBytes_baselinePDBDynamic / dss_stats_with_min$footprintInBytes_median))
#
# dss_stats_with_min <- within(dss_stats_with_min, memoryOverheadFactorComparedToPDB0To8 <- dss_stats_with_min$footprintInBytes_median / footprintInBytes_baselinePDB0To8)
# dss_stats_with_min <- within(dss_stats_with_min, memorySavingComparedToPDB0To8 <- 1 - (dss_stats_with_min$footprintInBytes_baselinePDB0To8 / dss_stats_with_min$footprintInBytes_median))
###
# How well do our specializations score [map]?
##
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieMapDynamic",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieMap",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieMap0To8",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieMap0To12",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieMapDynamic" & dss_stats_with_min$arch == "32",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieMap" & dss_stats_with_min$arch == "32",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieMap0To8" & dss_stats_with_min$arch == "32",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieMap0To12" & dss_stats_with_min$arch == "32",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieMapDynamic" & dss_stats_with_min$arch == "64",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieMap" & dss_stats_with_min$arch == "64",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieMap0To8" & dss_stats_with_min$arch == "64",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieMap0To12" & dss_stats_with_min$arch == "64",]$memorySavingComparedToPDBDynamic)
###
# How well do our specializations score [set]?
##
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieSetDynamic",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieSet",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieSet0To8",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieSet0To12",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieSetDynamic" & dss_stats_with_min$arch == "32",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieSet" & dss_stats_with_min$arch == "32",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieSet0To8" & dss_stats_with_min$arch == "32",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieSet0To12" & dss_stats_with_min$arch == "32",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieSetDynamic" & dss_stats_with_min$arch == "64",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieSet" & dss_stats_with_min$arch == "64",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieSet0To8" & dss_stats_with_min$arch == "64",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "io.usethesource.capsule.TrieSet0To12" & dss_stats_with_min$arch == "64",]$memorySavingComparedToPDBDynamic)
###
# Compare generic data structure to competition.
##
# median(dss_stats_with_min[dss_stats_with_min$className == "clojure.lang.PersistentHashMap",]$memorySavingComparedToPDBDynamic)
# median(dss_stats_with_min[dss_stats_with_min$className == "scala.collection.immutable.HashMap",]$memorySavingComparedToPDBDynamic)
#
# median(dss_stats_with_min[dss_stats_with_min$className == "com.gs.collections.impl.map.mutable.UnifiedMap",]$memorySavingComparedToPDBDynamic)
# median(dss_stats_with_min[dss_stats_with_min$className == "java.util.HashMap",]$memorySavingComparedToPDBDynamic)
# median(dss_stats_with_min[dss_stats_with_min$className == "scala.collection.mutable.HashMap",]$memorySavingComparedToPDBDynamic)
# median(dss_stats_with_min[dss_stats_with_min$className == "com.google.common.collect.ImmutableMap",]$memorySavingComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "clojure.lang.PersistentHashMap",]$memoryOverheadFactorComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "clojure.lang.PersistentHashMap" & dss_stats_with_min$arch == "32",]$memoryOverheadFactorComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "clojure.lang.PersistentHashMap" & dss_stats_with_min$arch == "64",]$memoryOverheadFactorComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "scala.collection.immutable.HashMap",]$memoryOverheadFactorComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "scala.collection.immutable.HashMap" & dss_stats_with_min$arch == "32",]$memoryOverheadFactorComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "scala.collection.immutable.HashMap" & dss_stats_with_min$arch == "64",]$memoryOverheadFactorComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "clojure.lang.PersistentHashSet",]$memoryOverheadFactorComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "clojure.lang.PersistentHashSet" & dss_stats_with_min$arch == "32",]$memoryOverheadFactorComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "clojure.lang.PersistentHashSet" & dss_stats_with_min$arch == "64",]$memoryOverheadFactorComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "scala.collection.immutable.HashSet",]$memoryOverheadFactorComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "scala.collection.immutable.HashSet" & dss_stats_with_min$arch == "32",]$memoryOverheadFactorComparedToPDBDynamic)
median(dss_stats_with_min[dss_stats_with_min$className == "scala.collection.immutable.HashSet" & dss_stats_with_min$arch == "64",]$memoryOverheadFactorComparedToPDBDynamic)
# ###
# # Compare specialization to competition.
# ##
# median(dss_stats_with_min[dss_stats_with_min$className == "clojure.lang.PersistentHashMap",]$memoryOverheadFactorComparedToPDB0To8)
# median(dss_stats_with_min[dss_stats_with_min$className == "clojure.lang.PersistentHashMap" & dss_stats_with_min$arch == "32",]$memoryOverheadFactorComparedToPDB0To8)
# median(dss_stats_with_min[dss_stats_with_min$className == "clojure.lang.PersistentHashMap" & dss_stats_with_min$arch == "64",]$memoryOverheadFactorComparedToPDB0To8)
#
# median(dss_stats_with_min[dss_stats_with_min$className == "scala.collection.immutable.HashMap",]$memoryOverheadFactorComparedToPDB0To8)
# median(dss_stats_with_min[dss_stats_with_min$className == "scala.collection.immutable.HashMap" & dss_stats_with_min$arch == "32",]$memoryOverheadFactorComparedToPDB0To8)
# median(dss_stats_with_min[dss_stats_with_min$className == "scala.collection.immutable.HashMap" & dss_stats_with_min$arch == "64",]$memoryOverheadFactorComparedToPDB0To8)
#
# median(dss_stats_with_min[dss_stats_with_min$className == "clojure.lang.PersistentHashSet",]$memoryOverheadFactorComparedToPDB0To8)
# median(dss_stats_with_min[dss_stats_with_min$className == "clojure.lang.PersistentHashSet" & dss_stats_with_min$arch == "32",]$memoryOverheadFactorComparedToPDB0To8)
# median(dss_stats_with_min[dss_stats_with_min$className == "clojure.lang.PersistentHashSet" & dss_stats_with_min$arch == "64",]$memoryOverheadFactorComparedToPDB0To8)
#
# median(dss_stats_with_min[dss_stats_with_min$className == "scala.collection.immutable.HashSet",]$memoryOverheadFactorComparedToPDB0To8)
# median(dss_stats_with_min[dss_stats_with_min$className == "scala.collection.immutable.HashSet" & dss_stats_with_min$arch == "32",]$memoryOverheadFactorComparedToPDB0To8)
# median(dss_stats_with_min[dss_stats_with_min$className == "scala.collection.immutable.HashSet" & dss_stats_with_min$arch == "64",]$memoryOverheadFactorComparedToPDB0To8)
# sel.tmp <- dss_stats_with_min[dss_stats_with_min$className != mapClassName & dss_stats_with_min$className != setClassName,]
# dss.tmp <- melt(sel.tmp, id.vars=c('elementCount', 'arch', 'dataType', 'className'), measure.vars = c('memoryOverheadFactorComparedToPDBDynamic'))
#
# # dss.tmp.cast_Map <- dcast(dss.tmp[dss.tmp$dataType == "MAP",], elementCount ~ className + dataType + arch + variable)
# # dss.tmp.cast_Set <- dcast(dss.tmp[dss.tmp$dataType == "SET",], elementCount ~ className + dataType + arch + variable)
# sel.tmp <- dss_stats_with_min[dss_stats_with_min$className == classNameTheOther,]
# dss.tmp <- melt(sel.tmp, id.vars=c('elementCount', 'arch', 'dataType', 'className'), measure.vars = c('memoryOverheadFactorComparedToPDBDynamic'))
#
# res <- dcast(dss.tmp[dss.tmp$dataType == requestedDataType,], elementCount ~ className + arch + dataType + variable)
#
# # sort: first 32 then 64 bit, inside first Scala, then Clojure
# # res[,c(1,4,2,5,3)]
#
# res
theOther <- dss_stats_castByMedian[dss_stats_castByMedian$className == classNameTheOther & dss_stats_castByMedian$dataType == requestedDataType,]
ours <- dss_stats_castByMedian[dss_stats_castByMedian$className == classNameOurs & dss_stats_castByMedian$dataType == requestedDataType,]
###
# BE AWARE: hard-coded switching from 'memory savings in %' to 'speedup factor'.
##
# memorySavingComparedToTheOther <- 1 - (ours$footprintInBytes_median / theOther$footprintInBytes_median)
memorySavingComparedToTheOther <- (theOther$footprintInBytes_median / ours$footprintInBytes_median)
sel.tmp = data.frame(ours$elementCount, ours$arch, memorySavingComparedToTheOther)
colnames(sel.tmp) <- c('elementCount', 'arch', 'memorySavingComparedToTheOther')
dss.tmp <- melt(sel.tmp, id.vars=c('elementCount', 'arch'), measure.vars = c('memorySavingComparedToTheOther'))
res <- dcast(dss.tmp, elementCount ~ arch + variable)
# print(res)
res
}
# http://stackoverflow.com/questions/11340444/is-there-an-r-function-to-format-number-using-unit-prefix
formatCsUnits__ <- function (number,rounding=T)
{
lut <- c(1e-24, 1e-21, 1e-18, 1e-15, 1e-12, 1e-09, 1e-06,
0.001, 1, 1000, 1e+06, 1e+09, 1e+12, 1e+15, 1e+18, 1e+21,
1e+24)
pre <- c("y", "z", "a", "f", "p", "n", "u", "m", "", "K",
"M", "G", "T", "P", "E", "Z", "Y")
ix <- findInterval(number, lut)
if (lut[ix]!=1) {
# 'rounding' had no effect here: both branches formatted identically, so they are collapsed
sistring <- paste(formatC(number/lut[ix], digits=0, format="f"), pre[ix], sep="")
}
else {
sistring <- paste(round(number, digits=0))
}
return(sistring)
}
formatCsUnits <- Vectorize(formatCsUnits__)
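###
# Example (illustrative only, not part of the pipeline): formatCsUnits maps
# raw counts onto SI prefixes; values in [1, 1000) come back unprefixed.
##
# formatCsUnits(c(42, 1000, 1e6))   # e.g. "42" "1K" "1M"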
formatFactor__ <- function(arg,rounding=F) {
if (is.nan(arg)) {
x <- "0"
} else {
digits = 2
if (rounding==T) {
x <- format(round(arg, digits), nsmall=digits, digits=digits, scientific=FALSE)
} else {
x <- format(arg, nsmall=digits, digits=digits, scientific=FALSE)
}
}
# paste(x, "\\%", sep = "")
x
}
formatFactor <- Vectorize(formatFactor__)
formatPercent__ <- function(arg,rounding=F) {
if (is.nan(arg)) {
x <- "0"
} else {
argTimes100 <- as.numeric(arg) * 100
digits = 0
if (rounding==T) {
x <- format(round(argTimes100, digits), nsmall=digits, digits=digits, scientific=FALSE)
} else {
x <- format(argTimes100, nsmall=digits, digits=digits, scientific=FALSE)
}
}
# paste(x, "\\%", sep = "")
x
}
formatPercent <- Vectorize(formatPercent__)
formatNsmall2__ <- function(arg, rounding=T) {
if (is.nan(arg)) {
x <- "0"
} else {
# NOTE: the 'rounding' flag previously selected between two identical
# branches; both always rounded to two decimals.
x <- format(round(as.numeric(arg), 2), nsmall=2, digits=2, scientific=FALSE)
}
x
}
formatNsmall2 <- Vectorize(formatNsmall2__)
latexMath__ <- function(arg) {
paste("$", arg, "$", sep = "")
}
latexMath <- Vectorize(latexMath__)
# latexMathFactor__ <- function(arg) {
# if (as.numeric(arg) < 1) {
# paste("${\\color{red}", arg, "\\times}$", sep = "")
# } else {
# paste("$", arg, "\\times$", sep = "")
# }
# }
latexMathFactor__ <- function(arg) {
arg_fmt <- formatFactor(arg, rounding=T)
if (as.numeric(arg) < 1) {
paste("${\\color{red}", arg_fmt, "}$", sep = "")
} else {
paste("$", arg_fmt, "$", sep = "")
}
}
latexMathFactor <- Vectorize(latexMathFactor__)
latexMathPercent__ <- function(arg) {
arg_fmt <- formatPercent(arg)
postfix <- "\\%"
if (is.na(arg) | is.nan(arg)) { # | !is.numeric(arg)
paste("$", "--", "$", sep = "")
} else {
if (as.numeric(arg) < 0) {
paste("${\\color{red}", arg_fmt, postfix, "}$", sep = "")
} else {
paste("$", arg_fmt, postfix, "$", sep = "")
}
}
}
latexMathPercent <- Vectorize(latexMathPercent__)
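###
# Example (illustrative only): the latexMath* formatters wrap values in inline
# math and colour regressions red, e.g.
##
# latexMathFactor(0.5)   # e.g. "${\\color{red}0.50}$"  (factors < 1 are highlighted)
# latexMathPercent(0.25) # e.g. "$25\\%$"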
getBenchmarkMethodName__ <- function(arg) {
strsplit(as.character(arg), split = "[.]time")[[1]][2]
}
getBenchmarkMethodName <- Vectorize(getBenchmarkMethodName__)
benchmarksFileName <- paste(paste("/Users/Michael/Dropbox/Research/hamt-improved-results/results.all", timestamp, sep="-"), "log", sep=".")
benchmarks <- read.csv(benchmarksFileName, sep=",", header=TRUE, stringsAsFactors=FALSE)
colnames(benchmarks) <- c("Benchmark", "Mode", "Threads", "Samples", "Score", "ScoreError", "Unit", "Param_dataType", "Param_run", "Param_sampleDataSelection", "Param_size", "Param_valueFactoryFactory")
benchmarks$Benchmark <- getBenchmarkMethodName(benchmarks$Benchmark)
benchmarksCleaned <- benchmarks[benchmarks$Param_sampleDataSelection == "MATCH" & !grepl("@", benchmarks$Benchmark),c(-2,-3,-4,-7,-10)]
# benchmarksCleaned[benchmarksCleaned$Param_valueFactoryFactory == "VF_PDB_PERSISTENT_BLEEDING_EDGE", ]$Param_valueFactoryFactory <- "VF_PDB_PERSISTENT_CURRENT"
###
# If there are multiple measurements for one size, calculate the median.
# Currently we only have one measurement.
##
benchmarksCleaned <- ddply(benchmarksCleaned, c("Benchmark", "Param_dataType", "Param_size", "Param_valueFactoryFactory"),
function(x) c(Score = median(x$Score), ScoreError = median(x$ScoreError)))
#benchmarksByName <- melt(benchmarksCleaned, id.vars=c('Benchmark', 'Param_size', 'Param_dataType', 'Param_valueFactoryFactory')) # 'Param_valueFactoryFactory'
#ggplot(data=benchmarksByName, aes(x=variable, y=value, fill=as.factor(Param_valueFactoryFactory))) + geom_histogram(position="dodge", stat="identity") + xlab("node branching factor") + ylab("value") + scale_x_discrete(labels=as.character(seq(1, 64)))
ggplot(benchmarks[benchmarks$Param_size == 1000000,], aes(x=Param_valueFactoryFactory, y=Score, group=Benchmark, fill=Param_valueFactoryFactory)) + geom_bar(position="dodge", stat="identity") + facet_grid(Benchmark ~ Param_size, scales = "free")
#benchmarksCast <- dcast(benchmarksByName, Benchmark + Param_size ~ Param_valueFactoryFactory + Param_dataType + variable)
###
# Cache Statistics
##
benchmarksPerfStatFileName <- paste(paste("/Users/Michael/Dropbox/Research/hamt-improved-results/results.all", timestamp, sep="-"), "perf-stat", "log.xz", sep=".")
benchmarksPerfStat.all <- read.csv(benchmarksPerfStatFileName, sep=",", header=TRUE, stringsAsFactors=FALSE)
colnames(benchmarksPerfStat.all) <- c("Benchmark", "Param_size", "Param_valueFactoryFactory", "Param_dataType", "Param_run", "Param_sampleDataSelection", "L1_REF", "L1_MISSES", "L2_REF", "L2_HIT", "L3_REF", "L3_MISSES", "L3_MISSES_ALT")
#
benchmarksPerfStat <- subset(benchmarksPerfStat.all, select=-c(Param_sampleDataSelection))
benchmarksPerfStat$L2_MISSES <- benchmarksPerfStat$L2_REF - benchmarksPerfStat$L2_HIT
benchmarksPerfStat$L1_HIT <- benchmarksPerfStat$L1_REF - benchmarksPerfStat$L1_MISSES
benchmarksPerfStat$L3_HIT <- benchmarksPerfStat$L3_REF - benchmarksPerfStat$L3_MISSES
benchmarksPerfStat$L1_HIT_RATE <- benchmarksPerfStat$L1_HIT / benchmarksPerfStat$L1_REF
benchmarksPerfStat$L2_HIT_RATE <- benchmarksPerfStat$L2_HIT / benchmarksPerfStat$L2_REF
benchmarksPerfStat$L3_HIT_RATE <- benchmarksPerfStat$L3_HIT / benchmarksPerfStat$L3_REF
benchmarksPerfStat$L1_MISS_RATE <- 1 - benchmarksPerfStat$L1_HIT_RATE
benchmarksPerfStat$L2_MISS_RATE <- 1 - benchmarksPerfStat$L2_HIT_RATE
benchmarksPerfStat$L3_MISS_RATE <- 1 - benchmarksPerfStat$L3_HIT_RATE
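###
# Sanity check (illustrative, disabled): per cache level, HIT + MISSES == REF
# and HIT_RATE + MISS_RATE == 1 by construction of the derived columns above.
##
# with(benchmarksPerfStat, stopifnot(
#   all(L1_HIT + L1_MISSES == L1_REF),
#   all(L2_HIT + L2_MISSES == L2_REF),
#   all(L3_HIT + L3_MISSES == L3_REF)))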
benchmarksPerfStat.xxx <- ddply(benchmarksPerfStat, c("Benchmark", "Param_dataType", "Param_size", "Param_valueFactoryFactory"), function(x)
c(L1_REF = median(x$L1_REF), L1_HIT = median(x$L1_HIT), L1_MISSES = median(x$L1_MISSES),
L1_HIT_RATE = median(x$L1_HIT_RATE), L1_MISS_RATE = median(x$L1_MISS_RATE),
L2_REF = median(x$L2_REF), L2_HIT = median(x$L2_HIT), L2_MISSES = median(x$L2_MISSES),
L2_HIT_RATE = median(x$L2_HIT_RATE), L2_MISS_RATE = median(x$L2_MISS_RATE),
L3_REF = median(x$L3_REF), L3_HIT = median(x$L3_HIT), L3_MISSES = median(x$L3_MISSES),
L3_HIT_RATE = median(x$L3_HIT_RATE), L3_MISS_RATE = median(x$L3_MISS_RATE),
L3_MISSES_ALT = median(x$L3_MISSES_ALT)))
#ggplot(benchmarksPerfStat, aes(x=benchmarksPerfStat$benchmark, y=benchmarksPerfStat$L1.DCACHE.LOADS)) + geom_boxplot()
#ggplot(benchmarksPerfStat, aes(x=Param_valueFactoryFactory, y=Score, group=Benchmark, fill=Param_valueFactoryFactory)) + geom_bar(position="dodge", stat="identity") + facet_grid(Benchmark ~ Param_size, scales = "free")
benchmarksPerfStat.xxx.sub <- subset(benchmarksPerfStat.xxx, Param_dataType == "SET" & (Benchmark == "Iteration" | Benchmark == "EqualsRealDuplicate") & Param_size >= 1048576) # & (Param_size == 2097152 | Param_size == 8388608)
ggplot(benchmarksPerfStat.xxx.sub, aes(x=Param_valueFactoryFactory, y=L3_REF, group=Benchmark, fill=Param_valueFactoryFactory)) + geom_bar(position="dodge", stat="identity") + facet_grid(Benchmark + Param_dataType ~ Param_size, scales = "free")
ggplot(benchmarksPerfStat.xxx.sub, aes(x=Param_valueFactoryFactory, y=L3_REF, group=Benchmark, fill=Param_valueFactoryFactory)) + geom_bar(position="dodge", stat="identity") + facet_grid(Benchmark + Param_dataType ~ ., scales = "free")
ggplot(benchmarksPerfStat.xxx.sub, aes(x=Param_size, y=L3_REF, colour=Param_valueFactoryFactory)) + geom_line() + facet_grid(Benchmark + Param_dataType ~ ., scales = "free") + scale_x_log10()
ggplot(benchmarksPerfStat.xxx.sub, aes(x=Param_size, y=L3_MISSES, colour=Param_valueFactoryFactory)) + geom_line() + facet_grid(Benchmark + Param_dataType ~ ., scales = "free") + scale_x_log10()
ggplot(benchmarksPerfStat.xxx.sub, aes(x=Param_size, y=L3_MISSES_ALT, colour=Param_valueFactoryFactory)) + geom_line() + facet_grid(Benchmark + Param_dataType ~ ., scales = "free") + scale_x_log10()
ggplot(benchmarksPerfStat.xxx.sub, aes(x=Param_valueFactoryFactory, y=L3_REF, group=Benchmark, fill=Param_valueFactoryFactory)) + geom_bar(position="dodge", stat="identity") + facet_grid(Benchmark + Param_dataType ~ Param_size, scales = "free") # + scale_y_log10()
ggplot(benchmarksPerfStat.xxx.sub, aes(x=Param_valueFactoryFactory, y=L3_MISSES, group=Benchmark, fill=Param_valueFactoryFactory)) + geom_bar(position="dodge", stat="identity") + facet_grid(Benchmark + Param_dataType ~ Param_size, scales = "free") # + scale_y_log10()
ggplot(benchmarksPerfStat.xxx.sub, aes(x=Param_valueFactoryFactory, y=L3_MISSES_ALT, group=Benchmark, fill=Param_valueFactoryFactory)) + geom_bar(position="dodge", stat="identity") + facet_grid(Benchmark + Param_dataType ~ Param_size, scales = "free") # + scale_y_log10()
#data.frame(benchmarksCleaned, benchmarksPerfStat)
#benchmarksByName <- melt(benchmarksCleaned[benchmarksCleaned$Param_dataType == "MAP",], id.vars=c('Benchmark', 'Param_size', 'Param_dataType', 'Param_valueFactoryFactory'))
benchmarksByName <- melt(data.frame(benchmarksCleaned, benchmarksPerfStat), id.vars=c('Benchmark', 'Param_size', 'Param_dataType', 'Param_valueFactoryFactory'))
# benchmarksTmpCast <- dcast(benchmarksByName, Benchmark + Param_size + Param_dataType ~ Param_valueFactoryFactory + variable)
# benchmarksTmpCast$VF_CLOJURE_BY_VF_PDB_PERSISTENT_CURRENT_Score <- benchmarksTmpCast$VF_CLOJURE_Score / benchmarksTmpCast$VF_PDB_PERSISTENT_CURRENT_Score
# benchmarksTmpCast$VF_SCALA_BY_VF_PDB_PERSISTENT_CURRENT_Score <- benchmarksTmpCast$VF_SCALA_Score / benchmarksTmpCast$VF_PDB_PERSISTENT_CURRENT_Score
# benchmarksByName$value <- formatPercent(benchmarksByName$value, rounding=F)
# benchmarksByName$value <- format(benchmarksByName$value, nsmall=2, digits=3, scientific=TRUE)
# benchmarksByName$Param_sizeLog2 <- paste("2^", log2(benchmarksByName$Param_size), sep = "")
# benchmarksByName$Param_sizeLog2 <- latexMath(paste("2^", log2(benchmarksByName$Param_size), sep = ""))
benchmarksByNameOutput <- data.frame(benchmarksByName)
# benchmarksByNameOutput$value <- formatPercent(benchmarksByName$value, rounding=F)
benchmarksByNameOutput$Param_out_sizeLog2 <- latexMath(paste("2^{", log2(benchmarksByName$Param_size), "}", sep = ""))
# benchmarksByNameOutput$Param_size <- latexMath(benchmarksByName$Param_size)
# benchmarksByNameOutput$value <- latexMath(benchmarksByName$value)
###
# OLD CODE
##
# # TODO: ensure that Param_dataType is always the same for each invocation
# benchmarksCast_Map <- dcast(benchmarksByNameOutput[benchmarksByNameOutput$Param_dataType == "MAP",], Benchmark + Param_size ~ Param_valueFactoryFactory + variable)
# benchmarksCast_Set <- dcast(benchmarksByNameOutput[benchmarksByNameOutput$Param_dataType == "SET",], Benchmark + Param_size ~ Param_valueFactoryFactory + variable)
#
# benchmarksCast_Map$Param_out_sizeLog2 <- latexMath(paste("2^{", log2(benchmarksCast_Map$Param_size), "}", sep = ""))
# benchmarksCast_Map$VF_CLOJURE_Interval <- latexMath(paste(benchmarksCast_Map$VF_CLOJURE_Score, "\\pm", benchmarksCast_Map$VF_CLOJURE_ScoreError))
# benchmarksCast_Map$VF_PDB_PERSISTENT_CURRENT_Interval <- latexMath(paste(benchmarksCast_Map$VF_PDB_PERSISTENT_CURRENT_Score, "\\pm", benchmarksCast_Map$VF_PDB_PERSISTENT_CURRENT_ScoreError))
# benchmarksCast_Map$VF_SCALA_Interval <- latexMath(paste(benchmarksCast_Map$VF_SCALA_Score, "\\pm", benchmarksCast_Map$VF_SCALA_ScoreError))
# ###
# benchmarksCast_Map$VF_PDB_PERSISTENT_CURRENT_BY_VF_PDB_PERSISTENT_CURRENT_Score <- (benchmarksCast_Map$VF_PDB_PERSISTENT_CURRENT_Score / benchmarksCast_Map$VF_PDB_PERSISTENT_CURRENT_Score)
# benchmarksCast_Map$VF_SCALA_BY_VF_PDB_PERSISTENT_CURRENT_Score <- (benchmarksCast_Map$VF_SCALA_Score / benchmarksCast_Map$VF_PDB_PERSISTENT_CURRENT_Score)
# benchmarksCast_Map$VF_CLOJURE_BY_VF_PDB_PERSISTENT_CURRENT_Score <- (benchmarksCast_Map$VF_CLOJURE_Score / benchmarksCast_Map$VF_PDB_PERSISTENT_CURRENT_Score)
# benchmarksCast_Map$VF_PDB_PERSISTENT_CURRENT_BY_VF_SCALA_Score <- (benchmarksCast_Map$VF_PDB_PERSISTENT_CURRENT_Score / benchmarksCast_Map$VF_SCALA_Score)
# benchmarksCast_Map$VF_PDB_PERSISTENT_CURRENT_BY_VF_CLOJURE_Score <- (benchmarksCast_Map$VF_PDB_PERSISTENT_CURRENT_Score / benchmarksCast_Map$VF_CLOJURE_Score)
#
# benchmarksCast_Set$Param_out_sizeLog2 <- latexMath(paste("2^{", log2(benchmarksCast_Set$Param_size), "}", sep = ""))
# benchmarksCast_Set$VF_CLOJURE_Interval <- latexMath(paste(benchmarksCast_Set$VF_CLOJURE_Score, "\\pm", benchmarksCast_Set$VF_CLOJURE_ScoreError))
# benchmarksCast_Set$VF_PDB_PERSISTENT_CURRENT_Interval <- latexMath(paste(benchmarksCast_Set$VF_PDB_PERSISTENT_CURRENT_Score, "\\pm", benchmarksCast_Set$VF_PDB_PERSISTENT_CURRENT_ScoreError))
# benchmarksCast_Set$VF_SCALA_Interval <- latexMath(paste(benchmarksCast_Set$VF_SCALA_Score, "\\pm", benchmarksCast_Set$VF_SCALA_ScoreError))
# ###
# benchmarksCast_Set$VF_PDB_PERSISTENT_CURRENT_BY_VF_PDB_PERSISTENT_CURRENT_Score <- (benchmarksCast_Set$VF_PDB_PERSISTENT_CURRENT_Score / benchmarksCast_Set$VF_PDB_PERSISTENT_CURRENT_Score)
# benchmarksCast_Set$VF_SCALA_BY_VF_PDB_PERSISTENT_CURRENT_Score <- (benchmarksCast_Set$VF_SCALA_Score / benchmarksCast_Set$VF_PDB_PERSISTENT_CURRENT_Score)
# benchmarksCast_Set$VF_CLOJURE_BY_VF_PDB_PERSISTENT_CURRENT_Score <- (benchmarksCast_Set$VF_CLOJURE_Score / benchmarksCast_Set$VF_PDB_PERSISTENT_CURRENT_Score)
# benchmarksCast_Set$VF_PDB_PERSISTENT_CURRENT_BY_VF_SCALA_Score <- (benchmarksCast_Set$VF_PDB_PERSISTENT_CURRENT_Score / benchmarksCast_Set$VF_SCALA_Score)
# benchmarksCast_Set$VF_PDB_PERSISTENT_CURRENT_BY_VF_CLOJURE_Score <- (benchmarksCast_Set$VF_PDB_PERSISTENT_CURRENT_Score / benchmarksCast_Set$VF_CLOJURE_Score)
#benchmarksCast <- data.frame(benchmarksCast_Map, benchmarksCast_Set)
# formatPercent(benchmarksCast$VF_CLOJURE_Score, rounding=F)
#
# format(benchmarksCast$VF_CLOJURE_Score, nsmall=2, digits=3, scientific=TRUE)
# format(benchmarksCast$VF_CLOJURE_ScoreError, nsmall=2, digits=3, scientific=TRUE)
# write.table(benchmarksCast_Map[,c(1,9,13,14,15)], file = "results_latex_map.tex", sep = " & ", row.names = FALSE, col.names = TRUE, append = FALSE, quote = FALSE, eol = " \\\\ \n")
# write.table(benchmarksCast_Set[,c(1,9,13,14,15)], file = "results_latex_set.tex", sep = " & ", row.names = FALSE, col.names = TRUE, append = FALSE, quote = FALSE, eol = " \\\\ \n")
# orderedBenchmarkNames <- c("ContainsKey", "Insert", "RemoveKey", "Iteration", "EntryIteration", "EqualsRealDuplicate", "EqualsDeltaDuplicate")
# orderedBenchmarkIDs <- seq(1:length(orderedBenchmarkNames))
#
# orderingByName <- data.frame(orderedBenchmarkIDs, orderedBenchmarkNames)
# colnames(orderingByName) <- c("BenchmarkSortingID", "Benchmark")
# selectComparisionColumns <- Vectorize(function(castedData, benchmarkName) {
# data.frame(castedData[castedData$Benchmark == benchmarkName,])[,c(13,14,15)]
# })
selectComparisionColumns <- function(inputData, measureVars, orderingByName) {
tmp.m <- melt(data=join(inputData, orderingByName), id.vars=c('BenchmarkSortingID', 'Benchmark', 'Param_size'), measure.vars=measureVars)
#tmp.m$value <- formatNsmall2(tmp.m$value, rounding=T)
tmp.c <- dcast(tmp.m, Param_size ~ BenchmarkSortingID + Benchmark + variable)
# tmp.c$Param_size <- latexMath(paste("2^{", log2(tmp.c$Param_size), "}", sep = ""))
tmp.c
}
selectComparisionColumnsSummary <- function(inputData, measureVars, orderingByName) {
tmp.m <- melt(data=join(inputData, orderingByName), id.vars=c('BenchmarkSortingID', 'Benchmark', 'Param_size'), measure.vars=measureVars)
tmp.c <- dcast(tmp.m, Param_size ~ BenchmarkSortingID + Benchmark + variable)
mins.c <- apply(tmp.c, c(2), min) # as.numeric(formatNsmall2(apply(tmp.c, c(2), min), rounding=T))
maxs.c <- apply(tmp.c, c(2), max) # as.numeric(formatNsmall2(apply(tmp.c, c(2), max), rounding=T))
#mean.c <- apply(tmp.c, c(2), mean)
medians.c <- apply(tmp.c, c(2), median) # as.numeric(formatNsmall2(apply(tmp.c, c(2), median), rounding=T))
res <- data.frame(rbind(mins.c, maxs.c, medians.c))[-1]
rownames(res) <- c('minimum', 'maximum', 'median')
res
}
calculateMemoryFootprintSummary <- function(inputData) {
mins.c <- apply(inputData, c(2), min) # as.numeric(formatNsmall2(apply(inputData, c(2), min), rounding=T))
maxs.c <- apply(inputData, c(2), max) # as.numeric(formatNsmall2(apply(inputData, c(2), max), rounding=T))
#mean.c <- apply(inputData, c(2), mean)
medians.c <- apply(inputData, c(2), median) # as.numeric(formatNsmall2(apply(inputData, c(2), median), rounding=T))
res <- data.frame(rbind(mins.c, maxs.c, medians.c))[-1]
rownames(res) <- c('minimum', 'maximum', 'median')
res
}
# calculateMemoryFootprintSummary <- function(inputData) {
# mins.c <- as.numeric(formatNsmall2(apply(inputData, c(2), min), rounding=T))
# maxs.c <- as.numeric(formatNsmall2(apply(inputData, c(2), max), rounding=T))
# medians.c <- as.numeric(formatNsmall2(apply(inputData, c(2), median), rounding=T))
#
# res <- data.frame(rbind(mins.c, maxs.c, medians.c))[-1]
# rownames(res) <- c('minimum', 'maximum', 'median')
# res
# }
###
# OLD CODE
##
# tableMapAll_summary <- selectComparisionColumnsSummary(benchmarksCast_Map, c('VF_SCALA_BY_VF_PDB_PERSISTENT_CURRENT_Score', 'VF_CLOJURE_BY_VF_PDB_PERSISTENT_CURRENT_Score'))
# tableSetAll_summary <- selectComparisionColumnsSummary(benchmarksCast_Set, c('VF_SCALA_BY_VF_PDB_PERSISTENT_CURRENT_Score', 'VF_CLOJURE_BY_VF_PDB_PERSISTENT_CURRENT_Score'))
#
# tableMapAll <- selectComparisionColumns(benchmarksCast_Map, c('VF_SCALA_BY_VF_PDB_PERSISTENT_CURRENT_Score', 'VF_CLOJURE_BY_VF_PDB_PERSISTENT_CURRENT_Score'))
# tableSetAll <- selectComparisionColumns(benchmarksCast_Set, c('VF_SCALA_BY_VF_PDB_PERSISTENT_CURRENT_Score', 'VF_CLOJURE_BY_VF_PDB_PERSISTENT_CURRENT_Score'))
#
# memFootprintMap <- calculateMemoryFootprintOverhead("MAP")
# memFootprintMap_fmt <- data.frame(sapply(1:NCOL(memFootprintMap), function(col_idx) { memFootprintMap[,c(col_idx)] <- latexMathFactor(formatNsmall2(memFootprintMap[,c(col_idx)], rounding=T))}))
# colnames(memFootprintMap_fmt) <- colnames(memFootprintMap)
# #
# memFootprintSet <- calculateMemoryFootprintOverhead("SET")
# memFootprintSet_fmt <- data.frame(sapply(1:NCOL(memFootprintSet), function(col_idx) { memFootprintSet[,c(col_idx)] <- latexMathFactor(formatNsmall2(memFootprintSet[,c(col_idx)], rounding=T))}))
# colnames(memFootprintSet_fmt) <- colnames(memFootprintSet)
#
# tableMapAll <- data.frame(tableMapAll, memFootprintMap_fmt[,c(2,3,4,5)])
# tableSetAll <- data.frame(tableSetAll, memFootprintSet_fmt[,c(2,3,4,5)])
#
#
#
# tableMapAll_summary <- data.frame(tableMapAll_summary, calculateMemoryFootprintSummary(memFootprintMap))
# tableSetAll_summary <- data.frame(tableSetAll_summary, calculateMemoryFootprintSummary(memFootprintSet))
#
# tableMapAll_summary_fmt <- data.frame(sapply(1:NCOL(tableMapAll_summary), function(col_idx) { tableMapAll_summary[,c(col_idx)] <- latexMathFactor(tableMapAll_summary[,c(col_idx)]) }))
# rownames(tableMapAll_summary_fmt) <- rownames(tableMapAll_summary)
# tableSetAll_summary_fmt <- data.frame(sapply(1:NCOL(tableSetAll_summary), function(col_idx) { tableSetAll_summary[,c(col_idx)] <- latexMathFactor(tableSetAll_summary[,c(col_idx)]) }))
# rownames(tableSetAll_summary_fmt) <- rownames(tableSetAll_summary)
#
# write.table(tableMapAll_summary_fmt, file = "all-benchmarks-map-summary.tex", sep = " & ", row.names = TRUE, col.names = FALSE, append = FALSE, quote = FALSE, eol = " \\\\ \n")
# write.table(tableSetAll_summary_fmt, file = "all-benchmarks-set-summary.tex", sep = " & ", row.names = TRUE, col.names = FALSE, append = FALSE, quote = FALSE, eol = " \\\\ \n")
#
# # tableMapAll <- data.frame(sapply(1:NCOL(tableMapAll), function(col_idx) { tableMapAll[,c(col_idx)] <- paste("\\tableMapAll_c", col_idx, "{", tableMapAll[,c(col_idx)], "}", sep = "") })) # colnames(tableMapAll)[col_idx]
# # tableSetAll <- data.frame(sapply(1:NCOL(tableSetAll), function(col_idx) { tableSetAll[,c(col_idx)] <- paste("\\tableSetAll_c", col_idx, "{", tableSetAll[,c(col_idx)], "}", sep = "") })) # colnames(tableSetAll)[col_idx]
#
# write.table(tableMapAll, file = "all-benchmarks-map.tex", sep = " & ", row.names = FALSE, col.names = FALSE, append = FALSE, quote = FALSE, eol = " \\\\ \n")
# write.table(tableSetAll, file = "all-benchmarks-set.tex", sep = " & ", row.names = FALSE, col.names = FALSE, append = FALSE, quote = FALSE, eol = " \\\\ \n")
orderedBenchmarkNames <- function(dataType) {
candidates <- c("ContainsKey", "Insert", "RemoveKey", "Iteration", "EntryIteration", "EqualsRealDuplicate", "EqualsDeltaDuplicate")
if (dataType == "MAP") {
candidates
} else {
candidates[candidates != "EntryIteration"]
}
}
orderedBenchmarkNamesForBoxplot <- function(dataType) {
candidates <- c("Lookup\n", "Insert\n", "Delete\n", "Iteration\n(Key)", "Iteration\n(Entry)", "Equality\n(Distinct)", "Equality\n(Derived)", "Footprint\n(32-bit)", "Footprint\n(64-bit)")
if (dataType == "MAP") {
candidates
} else {
candidates[candidates != "Iteration\n(Entry)"]
}
}
createTable <- function(input, dataType, dataStructureOrigin, measureVars, dataFormatter) {
lowerBoundExclusive <- 1
benchmarksCast <- dcast(input[input$Param_dataType == dataType & input$Param_size > lowerBoundExclusive,], Benchmark + Param_size ~ Param_valueFactoryFactory + variable)
benchmarksCast$Param_out_sizeLog2 <- latexMath(paste("2^{", log2(benchmarksCast$Param_size), "}", sep = ""))
benchmarksCast$VF_CLOJURE_Interval <- latexMath(paste(benchmarksCast$VF_CLOJURE_Score, "\\pm", benchmarksCast$VF_CLOJURE_ScoreError))
benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Interval <- latexMath(paste(benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Score, "\\pm", benchmarksCast$VF_PDB_PERSISTENT_CURRENT_ScoreError))
benchmarksCast$VF_SCALA_Interval <- latexMath(paste(benchmarksCast$VF_SCALA_Score, "\\pm", benchmarksCast$VF_SCALA_ScoreError))
###
benchmarksCast$VF_PDB_PERSISTENT_CURRENT_BY_VF_PDB_PERSISTENT_CURRENT_Score <- (benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Score / benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Score)
benchmarksCast$VF_SCALA_BY_VF_PDB_PERSISTENT_CURRENT_Score <- (benchmarksCast$VF_SCALA_Score / benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Score)
benchmarksCast$VF_CLOJURE_BY_VF_PDB_PERSISTENT_CURRENT_Score <- (benchmarksCast$VF_CLOJURE_Score / benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Score)
###
benchmarksCast$VF_PDB_PERSISTENT_CURRENT_BY_VF_SCALA_Score <- (benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Score / benchmarksCast$VF_SCALA_Score)
benchmarksCast$VF_PDB_PERSISTENT_CURRENT_BY_VF_CLOJURE_Score <- (benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Score / benchmarksCast$VF_CLOJURE_Score)
###
benchmarksCast$VF_PDB_PERSISTENT_CURRENT_BY_VF_SCALA_ScoreSavings <- (1 - benchmarksCast$VF_PDB_PERSISTENT_CURRENT_BY_VF_SCALA_Score)
benchmarksCast$VF_PDB_PERSISTENT_CURRENT_BY_VF_CLOJURE_ScoreSavings <- (1 - benchmarksCast$VF_PDB_PERSISTENT_CURRENT_BY_VF_CLOJURE_Score)
orderedBenchmarkNames <- orderedBenchmarkNames(dataType)
orderedBenchmarkIDs <- seq(1:length(orderedBenchmarkNames))
orderingByName <- data.frame(orderedBenchmarkIDs, orderedBenchmarkNames)
colnames(orderingByName) <- c("BenchmarkSortingID", "Benchmark")
# selectComparisionColumns <- Vectorize(function(castedData, benchmarkName) {
# data.frame(castedData[castedData$Benchmark == benchmarkName,])[,c(13,14,15)]
# })
tableAll_summary <- selectComparisionColumnsSummary(benchmarksCast, measureVars, orderingByName)
memFootprint <- calculateMemoryFootprintOverhead(dataType, dataStructureOrigin)
memFootprint <- memFootprint[memFootprint$elementCount > lowerBoundExclusive,]
memFootprint_fmt <- data.frame(sapply(1:NCOL(memFootprint), function(col_idx) { memFootprint[,c(col_idx)] <- dataFormatter(memFootprint[,c(col_idx)])}))
colnames(memFootprint_fmt) <- colnames(memFootprint)
tableAll <- selectComparisionColumns(benchmarksCast, measureVars, orderingByName)
tableAll <- tableAll[tableAll$Param_size > lowerBoundExclusive,]
tableAll <- data.frame(tableAll, memFootprint[,c(2,3)])
tableAll_fmt <- data.frame(
latexMath(paste("2^{", log2(tableAll$Param_size), "}", sep = "")),
sapply(2:NCOL(tableAll), function(col_idx) { tableAll[,c(col_idx)] <- dataFormatter(tableAll[,c(col_idx)])}))
colnames(tableAll_fmt) <- colnames(tableAll)
tableAll_summary <- data.frame(tableAll_summary, calculateMemoryFootprintSummary(memFootprint))
tableAll_summary_fmt <- data.frame(sapply(1:NCOL(tableAll_summary), function(col_idx) { tableAll_summary[,c(col_idx)] <- dataFormatter(tableAll_summary[,c(col_idx)])}))
rownames(tableAll_summary_fmt) <- rownames(tableAll_summary)
fileNameSummary <- paste(paste("all", "benchmarks", tolower(dataStructureOrigin), tolower(dataType), "summary", sep="-"), "tex", sep=".")
write.table(tableAll_summary_fmt, file = fileNameSummary, sep = " & ", row.names = TRUE, col.names = FALSE, append = FALSE, quote = FALSE, eol = " \\\\ \n")
#write.table(t(tableAll_summary_fmt), file = fileNameSummary, sep = " & ", row.names = TRUE, col.names = FALSE, append = FALSE, quote = FALSE, eol = " \\\\ \n")
fileName <- paste(paste("all", "benchmarks", tolower(dataStructureOrigin), tolower(dataType), sep="-"), "tex", sep=".")
write.table(tableAll_fmt, file = fileName, sep = " & ", row.names = FALSE, col.names = FALSE, append = FALSE, quote = FALSE, eol = " \\\\ \n")
#write.table(t(tableAll_fmt), file = fileName, sep = " & ", row.names = FALSE, col.names = FALSE, append = FALSE, quote = FALSE, eol = " \\\\ \n")
###
# Create boxplots as well
##
outFileName <- paste(paste("all", "benchmarks", tolower(dataStructureOrigin), tolower(dataType), "boxplot", sep="-"), "pdf", sep=".")
fontScalingFactor <- 1.2
pdf(outFileName, family = "Times", width = 10, height = 3)
selection <- tableAll[2:NCOL(tableAll)]
names(selection) <- orderedBenchmarkNamesForBoxplot(dataType)
par(mar = c(3.5,4.75,0,0) + 0.1)
par(mgp=c(3.5, 1.75, 0)) # c(axis.title.position, axis.label.position, axis.line.position)
boxplot(selection, ylim=range(-0.1, 1.0), yaxt="n", las=0, ylab="savings (in %)",
cex.lab=fontScalingFactor, cex.axis=fontScalingFactor, cex.main=fontScalingFactor, cex.sub=fontScalingFactor)
z <- c(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
zz <- c("0%", "20%", "40%", "60%", "80%", "100%")
par(mgp=c(0, 0.75, 0)) # c(axis.title.position, axis.label.position, axis.line.position)
axis(2, at=z, labels=zz, las=2,
cex.lab=fontScalingFactor, cex.axis=fontScalingFactor, cex.main=fontScalingFactor, cex.sub=fontScalingFactor)
# abline(v = 5.5)
#abline(h = 0.75, lty=3)
#abline(h = 0.5, lty=3)
#abline(h = 0.25, lty=3)
abline(h = 0)
abline(h = -0.5, lty=3)
dev.off()
embed_fonts(outFileName)
}
# ###
# # Results as saving percentages
# ##
# measureVars_Scala <- c('VF_PDB_PERSISTENT_CURRENT_BY_VF_SCALA_ScoreSavings')
# measureVars_Clojure <- c('VF_PDB_PERSISTENT_CURRENT_BY_VF_CLOJURE_ScoreSavings')
# dataFormatter <- latexMathPercent
#
# createTable(benchmarksByNameOutput, "SET", "Scala", measureVars_Scala, dataFormatter)
# createTable(benchmarksByNameOutput, "SET", "Clojure", measureVars_Clojure, dataFormatter)
# createTable(benchmarksByNameOutput, "MAP", "Scala", measureVars_Scala, dataFormatter)
# createTable(benchmarksByNameOutput, "MAP", "Clojure", measureVars_Clojure, dataFormatter)
# ###
# # Results as speedup factors
# ##
# measureVars_Scala <- c('VF_SCALA_BY_VF_PDB_PERSISTENT_CURRENT_Score')
# measureVars_Clojure <- c('VF_CLOJURE_BY_VF_PDB_PERSISTENT_CURRENT_Score')
# dataFormatter <- latexMathFactor
#
# createTable(benchmarksByNameOutput, "SET", "Scala", measureVars_Scala, dataFormatter)
# createTable(benchmarksByNameOutput, "SET", "Clojure", measureVars_Clojure, dataFormatter)
# createTable(benchmarksByNameOutput, "MAP", "Scala", measureVars_Scala, dataFormatter)
# createTable(benchmarksByNameOutput, "MAP", "Clojure", measureVars_Clojure, dataFormatter)
createCacheStatTable <- function(input, dataType, dataStructureOrigin, measureVars, benchmarkName, dataFormatter) {
lowerBoundExclusive <- 1
benchmarksCast <- dcast(input[input$Benchmark == benchmarkName & input$Param_dataType == dataType & input$Param_size > lowerBoundExclusive,], Benchmark + Param_size ~ Param_valueFactoryFactory + variable)
my_ylim <- range(0, 1.0)
boxplot(ylim=my_ylim, benchmarksCast$VF_PDB_PERSISTENT_CURRENT_L1_HIT_RATE, benchmarksCast$VF_PDB_PERSISTENT_CURRENT_L2_HIT_RATE, benchmarksCast$VF_PDB_PERSISTENT_CURRENT_L3_HIT_RATE)
boxplot(ylim=my_ylim, benchmarksCast$VF_SCALA_L1_HIT_RATE, benchmarksCast$VF_SCALA_L2_HIT_RATE, benchmarksCast$VF_SCALA_L3_HIT_RATE)
boxplot(ylim=my_ylim, benchmarksCast$VF_CLOJURE_L1_HIT_RATE, benchmarksCast$VF_CLOJURE_L2_HIT_RATE, benchmarksCast$VF_CLOJURE_L3_HIT_RATE)
boxplot(ylim=my_ylim, benchmarksCast$VF_PDB_PERSISTENT_CURRENT_L1_HIT_RATE, benchmarksCast$VF_SCALA_L1_HIT_RATE, benchmarksCast$VF_CLOJURE_L1_HIT_RATE)
boxplot(benchmarksCast$VF_PDB_PERSISTENT_CURRENT_L1_HIT, benchmarksCast$VF_SCALA_L1_HIT, benchmarksCast$VF_CLOJURE_L1_HIT)
boxplot(benchmarksCast$VF_PDB_PERSISTENT_CURRENT_L1_REF, benchmarksCast$VF_SCALA_L1_REF, benchmarksCast$VF_CLOJURE_L1_REF)
boxplot(benchmarksCast$VF_PDB_PERSISTENT_CURRENT_L1_MISSES, benchmarksCast$VF_SCALA_L1_MISSES, benchmarksCast$VF_CLOJURE_L1_MISSES)
boxplot(ylim=my_ylim, benchmarksCast$VF_PDB_PERSISTENT_CURRENT_L2_HIT_RATE, benchmarksCast$VF_SCALA_L2_HIT_RATE, benchmarksCast$VF_CLOJURE_L2_HIT_RATE)
boxplot(benchmarksCast$VF_PDB_PERSISTENT_CURRENT_L2_HIT, benchmarksCast$VF_SCALA_L2_HIT, benchmarksCast$VF_CLOJURE_L2_HIT)
boxplot(benchmarksCast$VF_PDB_PERSISTENT_CURRENT_L2_REF, benchmarksCast$VF_SCALA_L2_REF, benchmarksCast$VF_CLOJURE_L2_REF)
boxplot(benchmarksCast$VF_PDB_PERSISTENT_CURRENT_L2_MISSES, benchmarksCast$VF_SCALA_L2_MISSES, benchmarksCast$VF_CLOJURE_L2_MISSES)
boxplot(ylim=my_ylim, benchmarksCast$VF_PDB_PERSISTENT_CURRENT_L3_HIT_RATE, benchmarksCast$VF_SCALA_L3_HIT_RATE, benchmarksCast$VF_CLOJURE_L3_HIT_RATE)
boxplot(benchmarksCast$VF_PDB_PERSISTENT_CURRENT_L3_HIT, benchmarksCast$VF_SCALA_L3_HIT, benchmarksCast$VF_CLOJURE_L3_HIT)
boxplot(benchmarksCast$VF_PDB_PERSISTENT_CURRENT_L3_REF, benchmarksCast$VF_SCALA_L3_REF, benchmarksCast$VF_CLOJURE_L3_REF)
boxplot(benchmarksCast$VF_PDB_PERSISTENT_CURRENT_L3_MISSES, benchmarksCast$VF_SCALA_L3_MISSES, benchmarksCast$VF_CLOJURE_L3_MISSES)
sel <- c("Param_size", "Param_valueFactoryFactory", "L3_MISSES")
tmp <- dcast(input[input$Benchmark == benchmarkName & input$Param_dataType == dataType & input$Param_size > lowerBoundExclusive,], Benchmark + Param_size + Param_valueFactoryFactory ~ variable)[sel]
# tmp[,3:NCOL(tmp)] <- round(tmp[,3:NCOL(tmp)], 2)
tmp[,3:NCOL(tmp)] <- formatCsUnits(tmp[,3:NCOL(tmp)])
benchmarksCast$Param_out_sizeLog2 <- latexMath(paste("2^{", log2(benchmarksCast$Param_size), "}", sep = ""))
benchmarksCast$VF_CLOJURE_Interval <- latexMath(paste(benchmarksCast$VF_CLOJURE_Score, "\\pm", benchmarksCast$VF_CLOJURE_ScoreError))
benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Interval <- latexMath(paste(benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Score, "\\pm", benchmarksCast$VF_PDB_PERSISTENT_CURRENT_ScoreError))
benchmarksCast$VF_SCALA_Interval <- latexMath(paste(benchmarksCast$VF_SCALA_Score, "\\pm", benchmarksCast$VF_SCALA_ScoreError))
###
benchmarksCast$VF_PDB_PERSISTENT_CURRENT_BY_VF_PDB_PERSISTENT_CURRENT_Score <- (benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Score / benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Score)
benchmarksCast$VF_SCALA_BY_VF_PDB_PERSISTENT_CURRENT_Score <- (benchmarksCast$VF_SCALA_Score / benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Score)
benchmarksCast$VF_CLOJURE_BY_VF_PDB_PERSISTENT_CURRENT_Score <- (benchmarksCast$VF_CLOJURE_Score / benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Score)
###
benchmarksCast$VF_PDB_PERSISTENT_CURRENT_BY_VF_SCALA_Score <- (benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Score / benchmarksCast$VF_SCALA_Score)
benchmarksCast$VF_PDB_PERSISTENT_CURRENT_BY_VF_CLOJURE_Score <- (benchmarksCast$VF_PDB_PERSISTENT_CURRENT_Score / benchmarksCast$VF_CLOJURE_Score)
###
benchmarksCast$VF_PDB_PERSISTENT_CURRENT_BY_VF_SCALA_ScoreSavings <- (1 - benchmarksCast$VF_PDB_PERSISTENT_CURRENT_BY_VF_SCALA_Score)
benchmarksCast$VF_PDB_PERSISTENT_CURRENT_BY_VF_CLOJURE_ScoreSavings <- (1 - benchmarksCast$VF_PDB_PERSISTENT_CURRENT_BY_VF_CLOJURE_Score)
orderedBenchmarkNames <- orderedBenchmarkNames(dataType)
orderedBenchmarkIDs <- seq(1:length(orderedBenchmarkNames))
orderingByName <- data.frame(orderedBenchmarkIDs, orderedBenchmarkNames)
colnames(orderingByName) <- c("BenchmarkSortingID", "Benchmark")
# selectComparisionColumns <- Vectorize(function(castedData, benchmarkName) {
# data.frame(castedData[castedData$Benchmark == benchmarkName,])[,c(13,14,15)]
# })
tableAll_summary <- selectComparisionColumnsSummary(benchmarksCast, measureVars, orderingByName)
tableAll <- selectComparisionColumns(benchmarksCast, measureVars, orderingByName)
tableAll <- tableAll[tableAll$Param_size > lowerBoundExclusive,]
# tableAll <- data.frame(tableAll, memFootprint[,c(2,3)])
tableAll_fmt <- data.frame(
  latexMath(paste("2^{", log2(tableAll$Param_size), "}", sep = "")),
  sapply(2:NCOL(tableAll), function(col_idx) dataFormatter(tableAll[, col_idx])))
colnames(tableAll_fmt) <- colnames(tableAll)
# tableAll_summary <- data.frame(tableAll_summary, calculateMemoryFootprintSummary(memFootprint))
tableAll_summary_fmt <- data.frame(sapply(1:NCOL(tableAll_summary), function(col_idx) dataFormatter(tableAll_summary[, col_idx])))
rownames(tableAll_summary_fmt) <- rownames(tableAll_summary)
fileNameSummary <- paste(paste("all", "benchmarks", tolower(dataStructureOrigin), tolower(dataType), "summary", sep="-"), "tex", sep=".")
write.table(tableAll_summary_fmt, file = fileNameSummary, sep = " & ", row.names = TRUE, col.names = FALSE, append = FALSE, quote = FALSE, eol = " \\\\ \n")
#write.table(t(tableAll_summary_fmt), file = fileNameSummary, sep = " & ", row.names = TRUE, col.names = FALSE, append = FALSE, quote = FALSE, eol = " \\\\ \n")
fileName <- paste(paste("all", "benchmarks", tolower(dataStructureOrigin), tolower(dataType), sep="-"), "tex", sep=".")
write.table(tableAll_fmt, file = fileName, sep = " & ", row.names = FALSE, col.names = FALSE, append = FALSE, quote = FALSE, eol = " \\\\ \n")
#write.table(t(tableAll_fmt), file = fileName, sep = " & ", row.names = FALSE, col.names = FALSE, append = FALSE, quote = FALSE, eol = " \\\\ \n")
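A minimal, self-contained sketch (toy data, temporary file) of how the `write.table` settings used above turn each data row into a LaTeX tabular row:

```r
# Toy data and a temp file, for illustration: " & " joins the cells and
# the eol string appends the LaTeX row terminator " \\" to each line.
df <- data.frame(size = c("2^{10}", "2^{20}"), score = c("1.23", "4.56"))
out <- tempfile(fileext = ".tex")
write.table(df, file = out, sep = " & ", row.names = FALSE,
            col.names = FALSE, quote = FALSE, eol = " \\\\ \n")
rows <- readLines(out)
rows[1]  # first cell, " & ", second cell, then the row terminator
```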
# ###
# # Create boxplots as well
# ##
# outFileName <-paste(paste("all", "benchmarks", tolower(dataStructureOrigin), tolower(dataType), "boxplot", sep="-"), "pdf", sep=".")
# fontScalingFactor <- 1.2
# pdf(outFileName, family = "Times", width = 10, height = 3)
#
# selection <- tableAll[2:NCOL(tableAll)]
# names(selection) <- orderedBenchmarkNamesForBoxplot(dataType)
#
# par(mar = c(3.5,4.75,0,0) + 0.1)
# par(mgp=c(3.5, 1.75, 0)) # c(axis.title.position, axis.label.position, axis.line.position)
#
# boxplot(selection, ylim=range(-0.1, 1.0), yaxt="n", las=0, ylab="savings (in %)",
# cex.lab=fontScalingFactor, cex.axis=fontScalingFactor, cex.main=fontScalingFactor, cex.sub=fontScalingFactor)
#
# z <- c(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
# zz <- c("0%", "20%", "40%", "60%", "80%", "100%")
# par(mgp=c(0, 0.75, 0)) # c(axis.title.position, axis.label.position, axis.line.position)
# axis(2, at=z, labels=zz, las=2,
# cex.lab=fontScalingFactor, cex.axis=fontScalingFactor, cex.main=fontScalingFactor, cex.sub=fontScalingFactor)
#
# # abline(v = 5.5)
#
# #abline(h = 0.75, lty=3)
# #abline(h = 0.5, lty=3)
# #abline(h = 0.25, lty=3)
# abline(h = 0)
# abline(h = -0.5, lty=3)
# dev.off()
# embed_fonts(outFileName)
}
###
# Results as speedup factors
##
measureVars_Scala <- c('VF_SCALA_BY_VF_PDB_PERSISTENT_CURRENT_Score')
measureVars_Clojure <- c('VF_CLOJURE_BY_VF_PDB_PERSISTENT_CURRENT_Score')
dataFormatter <- latexMathFactor
# createCacheStatTable(benchmarksByNameOutput, "MAP", "Scala", measureVars_Scala, "EntryIteration", dataFormatter)
createCacheStatTable(benchmarksByNameOutput, "MAP", "Scala", measureVars_Scala, "Iteration", dataFormatter)
createCacheStatTable(benchmarksByNameOutput, "MAP", "Scala", measureVars_Scala, "EqualsRealDuplicate", dataFormatter)
# createCacheStatTable(benchmarksByNameOutput, "SET", "Clojure", measureVars_Clojure, dataFormatter)
# createCacheStatTable(benchmarksByNameOutput, "MAP", "Scala", measureVars_Scala, dataFormatter)
# createCacheStatTable(benchmarksByNameOutput, "MAP", "Clojure", measureVars_Clojure, dataFormatter)
|
|
plot.width = {img_width}
plot.height = {img_height}
plot.range = {date_range}
plot.locationid = {locationid}
plot.title = '{plot_title}'
plot.path = '{plot_path}'
date.min <- Sys.Date() - plot.range
type_list <- c(16274, 17887, 17888, 17889)
plot.typeids.str <- paste(type_list, collapse=',')
## Event List ##
event.query <- paste0(
'SELECT datetime, eventName ',
'FROM event_list ',
'WHERE eventGroup in (\'patch\') ',
'AND datetime > \'', date.min, '\' '
)
event <- sqlQuery(emd, event.query)
event$datetime <- as.POSIXct(event$datetime, tz='GMT')  # POSIXct: ggplot2 cannot scale POSIXlt columns
do_lines <- TRUE
if(nrow(event)==0){{
do_lines <- FALSE
}}
## Get Data ##
ec.query <- paste0(
'SELECT price_date AS `date`, price_time AS `hour`, locationid, typeid, ',
'SUM(IF(buy_sell=1, price_best, NULL)) AS `SellOrder` ',
'FROM snapshot_evecentral ',
'WHERE locationid=', plot.locationid, ' ',
'AND typeid IN (', plot.typeids.str, ') ',
'AND price_date > \'', date.min, '\' ',
'GROUP BY price_date, price_time, typeid'
)
ec <- sqlQuery(emd, ec.query)
odbcClose(emd)
ec$date <- as.Date(ec$date)
ec$typeid <- as.factor(ec$typeid)
ec <- subset(ec, SellOrder > 0)
ec$datetime <- paste(ec$date, ec$hour, sep=' ')
ec$datetime <- as.POSIXct(ec$datetime, tz='GMT')  # POSIXct: ggplot2 cannot scale POSIXlt columns
## CREST Lookups ##
CREST_BASE = 'https://crest-tq.eveonline.com/'
solarsystem.addr <- paste0(CREST_BASE, 'solarsystems/', plot.locationid, '/')
solarsystem.json <- fromJSON(readLines(solarsystem.addr))
solarsystem.name <- solarsystem.json$name
ec$locationName <- solarsystem.name
type_list <- unique(ec$typeid)
ec$typeName <- NA
for(type.index in seq_along(type_list)){{
type.name <- ''
type.id <- type_list[type.index]
type.addr <- paste0(CREST_BASE, 'inventory/types/', type.id, '/')
type.json <- fromJSON(readLines(type.addr))
type.name <- type.json$name
ec$typeName[ec$typeid==type.id] <- type.name
}}
## Plot Theme ##
theme_dark <- function( ... ) {{
theme(
text = element_text(color="gray90"),
title = element_text(size=rel(2.5),hjust=0.05,vjust=3.5),
axis.title.x = element_text(size=rel(0.75),hjust=0.5, vjust=0),
axis.title.y = element_text(size=rel(0.75),hjust=0.5, vjust=1.5),
plot.margin = unit(c(2,1,1,1), "cm"),
plot.background=element_rect(fill="gray8",color="gray8"),
panel.background=element_rect(fill="gray10",color="gray10"),
panel.grid.major = element_line(colour="gray17"),
panel.grid.minor = element_line(colour="gray12"),
axis.line = element_line(color = "gray50"),
plot.title = element_text(color="gray80"),
axis.title = element_text(color="gray70"),
axis.text = element_text(color="gray50",size=rel(1.1)),
legend.key = element_rect(fill="gray8",color="gray8"),
legend.background = element_rect(fill="gray8"),
legend.title = element_text(size=rel(0.6)),
legend.text = element_text(size=rel(1.1)),
strip.background = element_rect(fill="#252525"),
strip.text = element_text(size=rel(1.2))
) + theme(...)
}}
uniqueID <- unique(ec$typeid)
ec$SellOrder.smooth <- NA
for(id in uniqueID){{
prices <- ec$SellOrder[ec$typeid==id]
smooth <- rollmean(prices, k=24, align='right', fill=NA)
ec$SellOrder.smooth[ec$typeid==id] <- smooth
}}
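For reference, the right-aligned rolling mean from `zoo::rollmean(..., align='right', fill=NA)` used above can be reproduced in base R with `stats::filter` (toy vector below, not market data):

```r
# Base-R equivalent of zoo::rollmean(x, k, align='right', fill=NA):
# a one-sided moving average over the current and previous k-1 points;
# the first k-1 positions have no full window and come back as NA.
x <- c(1, 2, 3, 4)
k <- 2
sm <- as.numeric(stats::filter(x, rep(1 / k, k), sides = 1))
sm  # NA 1.5 2.5 3.5
```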
plot.data <- ec
## Plot Data ##
plot <- ggplot(
plot.data,
aes(
x=datetime,
y=SellOrder.smooth,
color=typeName)
)
plot <- plot + geom_line(
size=1.5
)
plot <- plot + theme_dark()
plot <- plot + labs(
title=plot.title,
color='Item Name',
x='date',
y='price'
)
if(do_lines){{
plot <- plot + geom_vline(
xintercept=as.numeric(event$datetime),
linetype=2,
color='white'
)
plot <- plot + geom_text(
aes(
x=datetime,
y=Inf,
label=eventName),
color='white',
angle=-90,
vjust=1.2,
hjust=0,
data=event,
inherit.aes=FALSE
)
}}
plot <- plot + scale_y_continuous(
position='right'
)
plot <- plot + scale_color_manual(
values=c(
"Oxygen Isotopes"="#097686",
"Hydrogen Isotopes"="#B7090D",
"Nitrogen Isotopes"="#2169E0",
"Helium Isotopes"="#EA8B25")
)
## Print Plot To File ##
png(
plot.path,
width=plot.width,
height=plot.height
)
print(plot)
dev.off()
|
|
r=359.16
https://sandbox.dams.library.ucdavis.edu/fcrepo/rest/collection/sherry-lehmann/catalogs/d72013/media/images/d72013-013/svc:tesseract/full/full/359.16/default.jpg Accept:application/hocr+xml
|
|
CP009748.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP009748.1", 1L, "9.9", "questionable", 70L, 16L, "1114538-1124478", "PHAGE_Bacill_G_NC_023719(2)", "45.71", "Bacillus amyloliquefaciens ATCC 13952",
"CP009748.1", 2L, "16.3", "incomplete", 40L, 27L, "1129262-1145591", "PHAGE_Brevib_Abouo_NC_029029(4)", "41.24","Bacillus amyloliquefaciens ATCC 13952",
"CP009748.1", 3L, "27.2", "questionable", 90L, 34L, "1148512-1175756", "PHAGE_Deep_s_D6E_NC_019544(12)", "43.60","Bacillus amyloliquefaciens ATCC 13952",
"CP009748.1", 4L, "31.7", "intact", 150L, 43L, "1230901-1262661", "PHAGE_Brevib_Jimmer1_NC_029104(9)", "46.42","Bacillus amyloliquefaciens ATCC 13952",
"CP009748.1", 5L, "11.5", "incomplete", 20L, 11L, "1804241-1815772", "PHAGE_Bacill_AR9_NC_031039(3)", "42.00","Bacillus amyloliquefaciens ATCC 13952",
"CP009748.1", 6L, "109.2", "incomplete", 50L, 156L, "2061311-2170562", "PHAGE_Bacill_SPbeta_NC_001884(79)", "35.52","Bacillus amyloliquefaciens ATCC 13952",
"CP009748.1", 7L, "37.7", "intact", 150L, 42L, "2906936-2944703", "PHAGE_Bacill_PM1_NC_020883(7)", "37.06","Bacillus amyloliquefaciens ATCC 13952",
"CP009748.1", 8L, "26.3", "questionable", 80L, 32L, "3053567-3079887", "PHAGE_Paenib_Tripp_NC_028930(15)", "48.32","Bacillus amyloliquefaciens ATCC 13952",
"CP009748.1", 9L, "14.7", "questionable", 70L, 17L, "3640744-3655532", "PHAGE_Entero_phi92_NC_023693(4)", "50.03","Bacillus amyloliquefaciens ATCC 13952"
)
CP009692.1 <- tibble::tribble(
~V0,~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP009692.1", 1L, "9.9", "incomplete", 10L, 9L, "185005-194935 ", "PHAGE_Prochl_P_SSM2_NC_006883(4)", "38.42", "Bacillus mycoides ATCC 6462",
"CP009692.1", 2L, "8.6", "incomplete", 10L, 10L, "226578-235222 ", "PHAGE_Bacill_G_NC_023719(2)", "36.89","Bacillus mycoides ATCC 6462",
"CP009692.1", 3L, "10.1", "incomplete", 10L, 10L, "464406-474562 ", "PHAGE_Bacill_G_NC_023719(2)", "37.92","Bacillus mycoides ATCC 6462",
"CP009692.1", 4L, "6.9", "incomplete", 20L, 7L, "765537-772502 ", "PHAGE_Staphy_phiN315_NC_004740(1)", "33.95","Bacillus mycoides ATCC 6462",
"CP009692.1", 5L, "9.7", "incomplete", 20L, 7L, "2313139-2322898 ", "PHAGE_Clostr_phiCD211_NC_029048(2)", "37.14","Bacillus mycoides ATCC 6462",
"CP009692.1", 6L, "23.3", "incomplete", 10L, 10L, "3715128-3738447 ", "PHAGE_Bacill_phBC6A52_NC_004821(5)", "34.61","Bacillus mycoides ATCC 6462",
"CP009692.1", 7L, "7.8", "incomplete", 20L, 7L, "3887163-3894984 ", "PHAGE_Serrat_MyoSmar_NC_048800(1)", "35.35","Bacillus mycoides ATCC 6462",
"CP009692.1", 8L, "11", "incomplete", 20L, 14L, "4525996-4537009 ", "PHAGE_Escher_ESCO13_NC_047770(4)", "36.53","Bacillus mycoides ATCC 6462",
"CP009692.1", 9L, "8", "incomplete", 10L, 8L, "5060703-5068794 ", "PHAGE_Bacill_G_NC_023719(1)", "37.85","Bacillus mycoides ATCC 6462",
"CP009692.1", 10L, "37", "questionable", 90L, 31L, "52295-89324 ", "PHAGE_Bacill_IEBH_NC_011167(3)", "32.99","Bacillus mycoides ATCC 6462",
"CP009692.1", 11L, "26.7", "incomplete", 60L, 37L, "97956-124715 ", "PHAGE_Bacill_BtCS33_NC_018085(4)", "33.75","Bacillus mycoides ATCC 6462",
"CP009692.1", 12L, "44.8", "questionable", 80L, 32L, "190310-235138 ", "PHAGE_Staphy_SPbeta_like_NC_029119(2)", "33.53","Bacillus mycoides ATCC 6462"
)
MOEA01000001.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"MOEA01000001.1", 1L, "32.4", "incomplete", 30L, 12L, "94093-126496", "PHAGE_Bacill_PBS1_NC_043027(4)", "41.45", "Bacillus amyloliquefaciens K2",
"MOEA01000001.1", 2L, "21.6", "incomplete", 60L, 23L, "239866-261558", "PHAGE_Bacill_phi105_NC_004167(17)", "44.78", "Bacillus amyloliquefaciens K2",
"MOEA01000001.1", 3L, "30.5", "incomplete", 20L, 27L, "262570-293139", "PHAGE_Brevib_Jimmer1_NC_029104(2)", "42.32", "Bacillus amyloliquefaciens K2",
"MOEA01000001.1", 4L, "32.1", "intact", 100L, 40L, "747372-779473", "PHAGE_Brevib_Jimmer1_NC_029104(9)", "46.65", "Bacillus amyloliquefaciens K2",
"MOEA01000001.1", 5L, "50.4", "questionable", 90L, 45L, "829278-879766", "PHAGE_Bacill_SPbeta_NC_001884(11)", "37.57", "Bacillus amyloliquefaciens K2",
"MOEA01000001.1", 6L, "11.9", "incomplete", 10L, 23L, "229095-240995", "PHAGE_Bacill_1_NC_009737(3)", "44.06", "Bacillus amyloliquefaciens K2",
"MOEA01000001.1", 7L, "50.1", "intact", 110L, 39L, "231528-281651", "PHAGE_Deep_s_D6E_NC_019544(12)", "42.99", "Bacillus amyloliquefaciens K2",
"MOEA01000001.1", 8L, "47.5", "incomplete", 50L, 67L, "9689-57276", "PHAGE_Bacill_SPP1_NC_004166(15)", "42.91", "Bacillus amyloliquefaciens K2"
)
CM000732.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CM000732.1", 1L, "26.4", "incomplete", 10L, 36L, "3311944-3338391", "PHAGE_Bacill_vB_BhaS_171_NC_030904(8)", "35.07", "Bacillus anthracis Rock3-42",
"CM000732.1", 2L, "37.2", "incomplete", 40L, 25L, "3362523-3399761", "PHAGE_Bacill_PfEFR_5_NC_031055(9)", "34.63", "Bacillus anthracis Rock3-42"
)
JHCA01000001.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"JHCA01000001.1", 1L, "33.6", "intact", 110L, 53L, "50192-83807", "PHAGE_Brevib_Jimmer1_NC_029104(8)", "44.91", "Bacillus subtilis subsp. stercoris D7XPN1"
)
AYTO01000001.1 <- tibble::tribble(
~V0,~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"AYTO01000001.1", 1L, "19.3", "incomplete", 30L, 20L, "166071-185391", "PHAGE_Bacill_vB_BtS_BMBtp14_NC_048640(4)", "43.53", "Bacillus tequilensis ATCC BAA 819",
"AYTO01000001.1", 2L, "167.3", "intact", 150L, 217L, "1190592-1357943", "PHAGE_Brevib_Jenst_NC_028805(66)", "40.63", "Bacillus tequilensis ATCC BAA 819",
"AYTO01000001.1", 3L, "32.1", "intact", 110L, 48L, "1397607-1429744", "PHAGE_Brevib_Jimmer2_NC_041976(8)", "45.08", "Bacillus tequilensis ATCC BAA 819",
"AYTO01000001.1", 4L, "21.3", "incomplete", 40L, 15L, "1875054-1896408", "PHAGE_Bacill_vB_BsuM_Goe3_NC_048652(2)", "40.33", "Bacillus tequilensis ATCC BAA 819",
"AYTO01000001.1", 5L, "7.7", "incomplete", 10L, 18L, "2110921-2118694", "PHAGE_Bacill_SPbeta_NC_001884(4)", "33.80", "Bacillus tequilensis ATCC BAA 819",
"AYTO01000001.1", 6L, "12", "incomplete", 20L, 18L, "3271248-3283282", "PHAGE_Thermu_OH2_NC_021784(2)", "42.43", "Bacillus tequilensis ATCC BAA 819"
)
JHUD02000001.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"JHUD02000001.1", 1L, "48.8", "intact", 91L, 50L, "1005556-1054406", "PHAGE_Bacill_phi105_NC_004167(31)", "38.46", "Bacillus pumilus SH-B9",
"JHUD02000001.1", 2L, "28.2", "incomplete", 40L, 37L, "17344-45633", "PHAGE_Brevib_Jimmer1_NC_029104(6)", "41.22", "Bacillus pumilus SH-B9"
)
AFWM01000001.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"AFWM01000001.1", 1L, "8.2", "incomplete", 10L, 8L, "83502-91791", "PHAGE_Synech_S_SKS1_NC_020851(4)", "52.71", "Bacillus coagulans XZL4",
"AFWM01000001.1", 2L, "7.2", "incomplete", 10L, 14L, "12625-19840", "PHAGE_Bacill_Bobb_NC_024792(1)", "47.67", "Bacillus coagulans XZL4"
)
CM000725.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CM000725.1", 1L, "29", "incomplete", 20L, 16L, "2181046-2210047", "PHAGE_Bacill_phBC6A52_NC_004821(2)", "32.42", "Bacillus mycoides BDRD-ST196",
"CM000725.1", 2L, "13.6", "incomplete", 30L, 14L, "4130214-4143830", "PHAGE_Bacill_Fah_NC_007814(3)", "35.48", "Bacillus mycoides BDRD-ST196",
"CM000725.1", 3L, "46", "intact", 150L, 42L, "5337911-5384008", "PHAGE_Bacill_PfEFR_5_NC_031055(8)", "33.57", "Bacillus mycoides BDRD-ST196",
"CM000725.1", 4L, "17.7", "questionable", 80L, 18L, "5537133-5554905", "PHAGE_Staphy_SPbeta_like_NC_029119(2)", "30.61","Bacillus mycoides BDRD-ST196"
)
QVEJ01000001.1 <- tibble::tribble(
~V0,~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"QVEJ01000001.1", 1L, "32", "questionable", 90L, 42L, "348089-380173", "PHAGE_Brevib_Jimmer1_NC_029104(9)", "46.38","Bacillus amyloliquefaciens NRRL 942",
"QVEJ01000001.1", 2L, "11.5", "incomplete", 20L, 12L, "921791-933359", "PHAGE_Bacill_PBS1_NC_043027(4)", "41.98", "Bacillus amyloliquefaciens NRRL 942",
"QVEJ01000001.1", 3L, "27.2", "questionable", 90L, 39L, "3-27285", "PHAGE_Deep_s_D6E_NC_019544(12)", "43.71", "Bacillus amyloliquefaciens NRRL 942",
"QVEJ01000001.1", 4L, "26.6", "incomplete", 30L, 33L, "258615-285310", "PHAGE_Bacill_SPP1_NC_004166(15)", "42.74","Bacillus amyloliquefaciens NRRL 942",
"QVEJ01000001.1", 5L, "11.3", "incomplete", 20L, 16L, "9-11352", "PHAGE_Thermu_OH2_NC_021784(2)", "41.97", "Bacillus amyloliquefaciens NRRL 942"
)
AYTL01000001.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"AYTL01000001.1", 1L, "24.5", "incomplete", 30L, 13L, "313197-337719", "PHAGE_Bacill_SP_10_NC_019487(2)", "40.93", "Bacillus mojavensis KCTC 3706",
"AYTL01000001.1", 2L, "30.5", "questionable", 90L, 40L, "387079-417601", "PHAGE_Brevib_Jimmer1_NC_029104(7)", "44.99", "Bacillus mojavensis KCTC 3706",
"AYTL01000001.1", 3L, "14.2", "incomplete", 20L, 23L, "879788-894002", "PHAGE_Bacill_SPbeta_NC_001884(5)", "36.24", "Bacillus mojavensis KCTC 3706",
"AYTL01000001.1", 4L, "18.1", "incomplete", 20L, 11L, "17562-35697", "PHAGE_Bacill_SPbeta_NC_001884(2)", "39.39", "Bacillus mojavensis KCTC 3706"
)
CP048852.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP048852.1", 1L, "15.7", "incomplete", 40L, 7L, "496547-512284", "PHAGE_Bacill_phi4J1_NC_029008(2)", "39.74", "Bacillus tequilensis EA-CB0015",
"CP048852.1", 2L, "32.2", "incomplete", 60L, 45L, "1232009-1264252", "PHAGE_Brevib_Jimmer2_NC_041976(9)", "44.87", "Bacillus tequilensis EA-CB0015",
"CP048852.1", 3L, "122.6", "questionable", 70L, 166L, "1728714-1851340", "PHAGE_Bacill_SPbeta_NC_001884(89)", "35.06", "Bacillus tequilensis EA-CB0015",
"CP048852.1", 4L, "22", "incomplete", 30L, 10L, "1864421-1886494", "PHAGE_Staphy_SPbeta_like_NC_029119(2)", "38.41", "Bacillus tequilensis EA-CB0015"
)
CM000729.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CM000729.1", 1L, "55.8", "intact", 110L, 47L, "2270017-2325856", "PHAGE_Bacill_phBC6A52_NC_004821(13)", "33.63", "Bacillus cereus Rock1-15",
"CM000729.1", 2L, "55.2", "questionable", 80L, 52L, "3313531-3368825", "PHAGE_Bacill_1_NC_009737(10)", "33.02", "Bacillus cereus Rock1-15"
)
CM000747.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CM000747.1", 1L, "39.1", "incomplete", 30L, 52L, "623829-662940", "PHAGE_Staphy_SpaA1_NC_018277(13)", "38.22", "Bacillus thuringiensis Bt407",
"CM000747.1", 2L, "101", "intact", 150L, 128L, "3631992-3733010", "PHAGE_Bacill_phBC6A51_NC_004820(52)", "36.81", "Bacillus thuringiensis Bt407",
"CM000747.1", 3L, "32.9", "incomplete", 50L, 45L, "4619603-4652549", "PHAGE_Bacill_phBC6A52_NC_004821(18)", "35.46","Bacillus thuringiensis Bt407",
"CM000747.1", 4L, "77.9", "intact", 110L, 90L, "5289746-5367679", "PHAGE_Lister_2389_NC_003291(14)", "32.45","Bacillus thuringiensis Bt407",
"CM000747.1", 5L, "12.1", "incomplete", 10L, 29L, "5841084-5853264", "PHAGE_Bacill_G_NC_023719(3)", "31.14","Bacillus thuringiensis Bt407"
)
LQYG01000001.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"LQYG01000001.1", 1L, "27.5", "intact", 120L, 43L, "14058-41613", "PHAGE_Paenib_HB10c2_NC_028758(6)", "41.97", "Bacillus coagulans B4098"
)
AEFM01000028.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"AEFM01000028.1", 1L, "15.2", "incomplete", 20L, 23L, "70638-85846", "PHAGE_Bacill_SPbeta_NC_001884(6)", "34.38", "Bacillus atrophaeus ATCC 9372-1",
"AEFM01000028.1", 2L, "64.7", "incomplete", 30L, 118L, "91736-156521", "PHAGE_Bacill_SPbeta_NC_001884(44)", "34.07", "Bacillus atrophaeus ATCC 9372-1",
"AEFM01000028.1", 3L, "28.7", "incomplete", 10L, 21L, "90756-119478", "PHAGE_Bacill_SPbeta_NC_001884(4)", "40.63", "Bacillus atrophaeus ATCC 9372-1",
"AEFM01000028.1", 4L, "29.2", "incomplete", 20L, 22L, "251955-281193", "PHAGE_Bacill_Bobb_NC_024792(1)", "38.77", "Bacillus atrophaeus ATCC 9372-1",
"AEFM01000028.1", 5L, "32", "intact", 120L, 46L, "550660-582678", "PHAGE_Brevib_Jimmer1_NC_029104(9)", "44.31", "Bacillus atrophaeus ATCC 9372-1",
"AEFM01000028.1", 6L, "7.5", "incomplete", 20L, 16L, "32005-39578", "PHAGE_Clostr_phiCD505_NC_028764(2)", "35.83", "Bacillus atrophaeus ATCC 9372-1"
)
CM000722.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CM000722.1", 1L, "22", "questionable", 70L, 23L, "2386091-2408173", "PHAGE_Bacill_vB_BhaS_171_NC_030904(6)", "36.34", "Bacillus cereus m1550",
"CM000722.1", 2L, "23.7", "incomplete", 10L, 24L, "2410751-2434503", "PHAGE_Bacill_phBC6A52_NC_004821(8)", "32.22", "Bacillus cereus m1550"
)
CP000673.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP000673.1", 1L, "20.7", "incomplete", 50L, 32L, "448537-469264", "PHAGE_Clostr_c_st_NC_007581(3)", "28.50", "Clostridium kluyveri DSM 555",
"CP000673.1", 2L, "20.9", "incomplete", 40L, 26L, "1927476-1948379", "PHAGE_Clostr_phiCD27_NC_011398(9)", "33.16", "Clostridium kluyveri DSM 555",
"CP000673.1", 3L, "66.2", "intact", 110L, 104L, "1950518-2016752", "PHAGE_Clostr_phiMMP02_NC_019421(21)", "33.69", "Clostridium kluyveri DSM 555",
"CP000673.1", 4L, "47.5", "questionable", 70L, 86L, "2018891-2066482", "PHAGE_Clostr_phiMMP02_NC_019421(15)", "32.97", "Clostridium kluyveri DSM 555",
"CP000673.1", 5L, "24.6", "incomplete", 40L, 30L, "2711660-2736304", "PHAGE_Clostr_phiCT453A_NC_028991(5)", "42.47", "Clostridium kluyveri DSM 555",
"CP000673.1", 6L, "30.3", "incomplete", 60L, 22L, "2981232-3011604", "PHAGE_Clostr_phiCD211_NC_029048(3)", "28.30", "Clostridium kluyveri DSM 555",
"CP000673.1", 7L, "72", "intact", 150L, 102L, "3292608-3364698", "PHAGE_Clostr_phiMMP02_NC_019421(15)", "34.07", "Clostridium kluyveri DSM 555",
"CP000673.1", 8L, "40.5", "intact", 150L, 49L, "17409-57919", "PHAGE_Paenib_Harrison_NC_028746(12)", "33.44", "Clostridium kluyveri DSM 555"
)
CP00560.2 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP00560.2" ,1L, "12", "incomplete", 10L, 25L, "1215912-1228000", "PHAGE_Bacill_SPbeta_NC_001884(7)", "46.09", "Bacillus velezensis FZB42",
"CP00560.2" , 2L, "25.1", "incomplete", 20L, 11L, "1807798-1832992", "PHAGE_Bacill_PBS1_NC_043027(4)", "41.31", "Bacillus velezensis FZB42"
)
KB976672.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"KB976672.1", 1L, "34", "intact", 140L, 60L, "1-34079", "PHAGE_Bacill_IEBH_NC_011167(11)", "36.22", "Bacillus mycoides VD146",
"KB976672.1", 2L, "43.8", "intact", 120L, 64L, "24016-67883", "PHAGE_Bacill_BtCS33_NC_018085(6)", "32.50", "Bacillus mycoides VD146",
"KB976672.1", 3L, "35.5", "questionable", 70L, 44L, "115579-151169", "PHAGE_Brevib_Jimmer1_NC_029104(2)", "34.58", "Bacillus mycoides VD146",
"KB976672.1", 4L, "27.3", "incomplete", 30L, 26L, "2834608-2862000", "PHAGE_Bacill_vB_BhaS_171_NC_030904(8)", "34.33", "Bacillus mycoides VD146",
"KB976672.1", 5L, "11.7", "incomplete", 20L, 17L, "2862750-2874457", "PHAGE_Deep_s_D6E_NC_019544(2)", "34.49", "Bacillus mycoides VD146"
)
CM000753.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CM000753.1", 1L, "39.1", "incomplete", 30L, 53L, "611529-650639", "PHAGE_Staphy_SpaA1_NC_018277(13)", "38.22", "Bacillus thuringiensis ATCC 10792",
"CM000753.1", 2L, "13.7", "incomplete", 30L, 23L, "3537540-3551309", "PHAGE_Lister_vB_LmoS_188_NC_028871(2)", "33.96", "Bacillus thuringiensis ATCC 10792",
"CM000753.1", 3L, "73.2", "intact", 104L, 74L, "3611410-3684701", "PHAGE_Bacill_phBC6A51_NC_004820(52)", "37.48","Bacillus thuringiensis ATCC 10792",
"CM000753.1", 4L, "21.5", "incomplete", 30L, 36L, "5141141-5162662", "PHAGE_Bacill_Waukesha92_NC_025424(12)", "33.21","Bacillus thuringiensis ATCC 10792",
"CM000753.1", 5L, "28.6", "questionable", 90L, 32L, "5443822-5472459", "PHAGE_Bacill_vB_BhaS_171_NC_030904(6)", "35.20","Bacillus thuringiensis ATCC 10792",
"CM000753.1", 6L, "7.5", "incomplete", 10L, 13L, "5624719-5632310", "PHAGE_Bacill_phBC6A51_NC_004820(2)", "32.36","Bacillus thuringiensis ATCC 10792",
"CM000753.1", 7L, "35.2", "questionable", 80L, 26L, "5717043-5752280", "PHAGE_Bacill_PfEFR_5_NC_031055(3)", "34.37","Bacillus thuringiensis ATCC 10792",
"CM000753.1", 8L, "10.4", "incomplete", 20L, 13L, "5827724-5838214", "PHAGE_Bacill_phBC6A51_NC_004820(3)", "34.11","Bacillus thuringiensis ATCC 10792",
"CM000753.1", 9L, "33", "incomplete", 50L, 46L, "6083519-6116583", "PHAGE_Bacill_phBC6A52_NC_004821(18)", "35.46","Bacillus thuringiensis ATCC 10792",
"CM000753.1", 10L, "14.2", "incomplete", 60L, 16L, "6214844-6229130", "PHAGE_Lister_2389_NC_003291(13)", "35.72", "Bacillus thuringiensis ATCC 10792"
)
CM000758.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CM000758.1", 1L, "17.9", "incomplete", 30L, 30L, "379657-397629", "PHAGE_Bacill_phBC6A52_NC_004821(7)", "34.84", "Bacillus thuringiensis IBL 200",
"CM000758.1", 2L, "65.2", "intact", 130L, 55L, "1594932-1660189", "PHAGE_Brevib_Jenst_NC_028805(31)", "33.50", "Bacillus thuringiensis IBL 200",
"CM000758.1", 3L, "13.1", "incomplete", 20L, 11L, "2401318-2414454", "PHAGE_Bacill_phBC6A52_NC_004821(5)", "32.56","Bacillus thuringiensis IBL 200",
"CM000758.1", 4L, "12.2", "incomplete", 10L, 22L, "3149372-3161624", "PHAGE_Thermu_OH2_NC_021784(2)", "32.46","Bacillus thuringiensis IBL 200",
"CM000758.1", 5L, "43", "incomplete", 20L, 27L, "5445537-5488546", "PHAGE_Bacill_phBC6A52_NC_004821(4)", "31.53","Bacillus thuringiensis IBL 200",
"CM000758.1", 6L, "22.6", "incomplete", 50L, 21L, "5560789-5583413", "PHAGE_Clostr_phiMMP02_NC_019421(2)", "32.74","Bacillus thuringiensis IBL 200",
"CM000758.1", 7L, "53.8", "intact", 150L, 66L, "6106610-6160476", "PHAGE_Bacill_phi4J1_NC_029008(18)", "34.22","Bacillus thuringiensis IBL 200",
"CM000758.1", 8L, "36.9", "incomplete", 60L, 44L, "6169806-6206801", "PHAGE_Bacill_phBC6A52_NC_004821(17)", "33.84","Bacillus thuringiensis IBL 200",
"CM000758.1", 9L, "7.9", "incomplete", 50L, 10L, "6478759-6486730", "PHAGE_Bacill_phiNIT1_NC_021856(1)", "30.61","Bacillus thuringiensis IBL 200",
"CM000758.1", 10L, "11.7", "incomplete", 20L, 17L, "6719918-6731650", "PHAGE_Bacill_WBeta_NC_007734(8)", "34.73","Bacillus thuringiensis IBL 200"
)
CM000719.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CM000719.1", 1L, "46.1", "intact", 110L, 57L, "284900-331017", "PHAGE_Bacill_1_NC_009737(10)", "35.69", "Bacillus mycoides AH621",
"CM000719.1", 2L, "16.6", "incomplete", 20L, 18L, "2173831-2190470", "PHAGE_Plankt_PaV_LD_NC_016564(2)", "33.58", "Bacillus mycoides AH621",
"CM000719.1", 3L, "20.2", "incomplete", 30L, 34L, "5028218-5048479", "PHAGE_Bacill_phiCM3_NC_023599(5)", "32.63", "Bacillus mycoides AH621"
)
LLZC01000001.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"LLZC01000001.1", 1L, "19.2", "incomplete", 20L, 29L, "15-19236", "PHAGE_Bacill_Palmer_NC_028926(5)", "40.85", "Bacillus velezensis NRRL B-41580",
"LLZC01000001.1", 2L, "31.8", "incomplete", 30L, 39L, "19268-51081", "PHAGE_Clostr_phiCT453A_NC_028991(7)", "43.79", "Bacillus velezensis NRRL B-41580",
"LLZC01000001.1", 3L, "24.9", "incomplete", 50L, 26L, "7044-31996", "PHAGE_Bacill_phi105_NC_004167(17)", "43.72", "Bacillus velezensis NRRL B-41580",
"LLZC01000001.1", 4L, "18.5", "incomplete", 40L, 29L, "29750-48321", "PHAGE_Bacill_WBeta_NC_007734(3)", "39.90", "Bacillus velezensis NRRL B-41580",
"LLZC01000001.1", 5L, "32.3", "questionable", 90L, 42L, "523706-556036", "PHAGE_Brevib_Jimmer1_NC_029104(8)", "47.07", "Bacillus velezensis NRRL B-41580",
"LLZC01000001.1", 6L, "26.4", "incomplete", 20L, 9L, "111437-137851", "PHAGE_Bacill_PBS1_NC_043027(3)", "42.40", "Bacillus velezensis NRRL B-41580"
)
LSBB01000001.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"LSBB01000001.1", 1L, "34.6", "incomplete", 40L, 19L, "707223-741828", "PHAGE_Thermu_OH2_NC_021784(2)", "44.18", "Bacillus atrophaeus NRRL NRS 213",
"LSBB01000001.1", 2L, "58.3", "questionable", 76L, 61L, "83321-141654", "PHAGE_Bacill_phi105_NC_004167(26)", "42.11", "Bacillus atrophaeus NRRL NRS 213",
"LSBB01000001.1", 3L, "32.5", "questionable", 90L, 45L, "360580-393158", "PHAGE_Brevib_Jimmer1_NC_029104(8)", "44.52", "Bacillus atrophaeus NRRL NRS 213",
"LSBB01000001.1", 4L, "7.9", "incomplete", 20L, 19L, "948399-956334", "PHAGE_Bacill_SPbeta_NC_001884(4)", "35.38", "Bacillus atrophaeus NRRL NRS 213",
"LSBB01000001.1", 5L, "53", "questionable", 90L, 73L, "173630-226680", "PHAGE_Bacill_SPP1_NC_004166(13)", "41.66", "Bacillus atrophaeus NRRL NRS 213"
)
ANAQ01000001.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"ANAQ01000001.1", 1L, "11.1", "incomplete", 30L, 11L, "1338-12494", "PHAGE_Clostr_phiCD111_NC_028905(2)", "48.35", "Bacillus coagulans H-1"
)
CP007666.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP007666.1", 1L, "8.7", "incomplete", 30L, 13L, "1027408-1036133", "PHAGE_Bacill_WBeta_NC_007734(5)", "34.17", "Bacillus anthracis Vollum",
"CP007666.1", 2L, "65.1", "questionable", 90L, 86L, "2326972-2392129", "PHAGE_Bacill_1_NC_009737(11)", "34.66", "Bacillus anthracis Vollum",
"CP007666.1", 3L, "51.9", "incomplete", 50L, 77L, "2616643-2668553", "PHAGE_Bacill_PfEFR_5_NC_031055(31)", "35.14", "Bacillus anthracis Vollum",
"CP007666.1", 4L, "14.6", "questionable", 70L, 16L, "3719101-3733745", "PHAGE_Staphy_vB_SepS_SEP9_NC_023582(5)", "36.02", "Bacillus anthracis Vollum",
"CP007666.1", 5L, "45.8", "intact", 110L, 56L, "4544116-4589918", "PHAGE_Bacill_phBC6A52_NC_004821(10)", "35.28", "Bacillus anthracis Vollum"
)
AE016877.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"AE016877.1", 1L, "7", "incomplete", 40L, 8L, "1496153-1503241", "PHAGE_Clostr_c_st_NC_007581(3)", "36.27", "Bacillus cereus ATCC 14579",
"AE016877.1", 2L, "62.7", "intact", 146L, 94L, "1803335-1866036", "PHAGE_Bacill_phBC6A51_NC_004820(73)", "37.64", "Bacillus cereus ATCC 14579",
"AE016877.1", 3L, "52", "intact", 140L, 60L, "2523234-2575327", "PHAGE_Bacill_phBC6A52_NC_004821(46)", "34.09", "Bacillus cereus ATCC 14579",
"AE016877.1", 4L, "11.8", "incomplete", 10L, 19L, "3657003-3668825", "PHAGE_Bacill_phi4J1_NC_029008(4)", "34.45", "Bacillus cereus ATCC 14579",
"AE016877.1", 5L, "8.1", "incomplete", 40L, 10L, "5089874-5098042", "PHAGE_Clostr_c_st_NC_007581(3)", "36.10", "Bacillus cereus ATCC 14579",
"AE016877.1", 6L, "14.8", "intact", 98L, 24L, "360-15179", "PHAGE_Bacill_Bam35c_NC_005258(21)", "38.21", "Bacillus cereus ATCC 14579"
)
PVRE01000010.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"PVRE01000010.1", 1L, "30.7", "incomplete", 50L, 36L, "43-30775", "PHAGE_Bacill_phi105_NC_004167(20)", "39.08", "Bacillus pumilus MZGC1",
"PVRE01000010.1", 2L, "36.5", "questionable", 90L, 52L, "768760-805290", "PHAGE_Bacill_phBC6A52_NC_004821(7)", "40.13", "Bacillus pumilus MZGC1",
"PVRE01000010.1", 3L, "26.3", "incomplete", 40L, 36L, "937314-963701", "PHAGE_Brevib_Jimmer1_NC_029104(7)", "41.45", "Bacillus pumilus MZGC1",
"PVRE01000010.1", 4L, "43.2", "intact", 120L, 71L, "140945-184147", "PHAGE_Lister_B054_NC_009813(12)", "38.93", "Bacillus pumilus MZGC1",
"PVRE01000010.1", 5L, "14.2", "incomplete", 10L, 29L, "271524-285744", "PHAGE_Bacill_SPbeta_NC_001884(5)", "33.69", "Bacillus pumilus MZGC1",
"PVRE01000010.1", 6L, "112.6", "questionable", 90L, 161L, "3-112679", "PHAGE_Bacill_SPbeta_NC_001884(50)", "34.44", "Bacillus pumilus MZGC1"
)
FN597644.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"FN597644.1", 1L, "48.6", "incomplete", 60L, 76L, "572604-621293", "PHAGE_Bacill_SPP1_NC_004166(15)", "43.19", "Bacillus amyloliquefaciens DSM 7",
"FN597644.1", 2L, "43.7", "intact", 150L, 50L, "869860-913622", "PHAGE_Lactob_JCL1032_NC_019456(4)", "40.95", "Bacillus amyloliquefaciens DSM 7",
"FN597644.1", 3L, "66.6", "intact", 120L, 90L, "924916-991592", "PHAGE_Paenib_Tripp_NC_028930(27)", "48.23", "Bacillus amyloliquefaciens DSM 7",
"FN597644.1", 4L, "32.3", "intact", 100L, 43L, "1336684-1369017", "PHAGE_Brevib_Jimmer1_NC_029104(9)", "46.54", "Bacillus amyloliquefaciens DSM 7",
"FN597644.1", 5L, "11.5", "incomplete", 20L, 12L, "1926270-1937855", "PHAGE_Bacill_PBS1_NC_043027(4)", "41.98", "Bacillus amyloliquefaciens DSM 7",
"FN597644.1", 6L, "37.7", "intact", 140L, 50L, "2937280-2975047", "PHAGE_Bacill_PM1_NC_020883(7)", "37.06", "Bacillus amyloliquefaciens DSM 7",
"FN597644.1", 7L, "40.1", "intact", 130L, 49L, "3128282-3168441", "PHAGE_Bacill_phi105_NC_004167(11)", "44.48", "Bacillus amyloliquefaciens DSM 7"
)
CP002927.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP002927.1", 1L, "145.7", "incomplete", 60L, 187L, "1170280-1315979", "PHAGE_Bacill_SPbeta_NC_001884(87)", "36.34", "Bacillus amyloliquefaciens XH7",
"CP002927.1", 2L, "27.4", "incomplete", 20L, 12L, "1548037-1575480", "PHAGE_Bacill_PBS1_NC_043027(4)", "42.58", "Bacillus amyloliquefaciens XH7",
"CP002927.1", 3L, "32.8", "questionable", 90L, 40L, "2114870-2147683", "PHAGE_Brevib_Jimmer1_NC_029104(9)", "46.30", "Bacillus amyloliquefaciens XH7",
"CP002927.1", 4L, "27.8", "questionable", 90L, 36L, "2202744-2230573", "PHAGE_Deep_s_D6E_NC_019544(12)", "43.46", "Bacillus amyloliquefaciens XH7",
"CP002927.1", 5L, "17.2", "incomplete", 20L, 25L, "2234669-2251895", "PHAGE_Brevib_Abouo_NC_029029(4)", "43.49", "Bacillus amyloliquefaciens XH7",
"CP002927.1", 6L, "37.7", "intact", 140L, 46L, "2906108-2943875", "PHAGE_Bacill_PM1_NC_020883(7)", "37.06", "Bacillus amyloliquefaciens XH7",
"CP002927.1", 7L, "42", "intact", 110L, 50L, "3052737-3094763", "PHAGE_Paenib_Tripp_NC_028930(15)", "47.00", "Bacillus amyloliquefaciens XH7"
)
AE016879.1 <- tibble::tribble(
~V0,~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"AE016879.1", 1L, "45.4", "intact", 110L, 57L, "442049-487496", "PHAGE_Bacill_phBC6A52_NC_004821(10)", "35.27", "Bacillus anthracis Ames",
"AE016879.1", 2L, "8.7", "incomplete", 30L, 13L, "2154700-2163425", "PHAGE_Bacill_WBeta_NC_007734(5)", "34.16", "Bacillus anthracis Ames",
"AE016879.1", 3L, "65.1", "questionable", 90L, 86L, "3453701-3518849", "PHAGE_Bacill_1_NC_009737(11)", "34.66", "Bacillus anthracis Ames",
"AE016879.1", 4L, "51.9", "incomplete", 50L, 78L, "3743581-3795489", "PHAGE_Bacill_PfEFR_5_NC_031055(31)", "35.14", "Bacillus anthracis Ames",
"AE016879.1", 5L, "14.6", "questionable", 70L, 16L, "4845035-4859679", "PHAGE_Staphy_vB_SepS_SEP9_NC_023582(5)", "36.03", "Bacillus anthracis Ames"
)
CP017247.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP017247.1", 1L, "45.6", "intact", 150L, 62L, "640436-686045", "PHAGE_Bacill_phBC6A52_NC_004821(8)", "41.32", "Bacillus licheniformis BL1202",
"CP017247.1", 2L, "40.6", "incomplete", 50L, 56L, "974743-1015425", "PHAGE_Paenib_Tripp_NC_028930(18)", "47.14", "Bacillus licheniformis BL1202",
"CP017247.1", 3L, "24.8", "intact", 120L, 24L, "1015528-1040343", "PHAGE_Paenib_Harrison_NC_028746(5)", "45.05", "Bacillus licheniformis BL1202",
"CP017247.1", 4L, "35.3", "intact", 110L, 46L, "1437154-1472539", "PHAGE_Brevib_Jimmer2_NC_041976(7)", "46.91", "Bacillus licheniformis BL1202",
"CP017247.1", 5L, "42.6", "intact", 150L, 60L, "1544311-1586939", "PHAGE_Bacill_SPP1_NC_004166(11)", "43.33", "Bacillus licheniformis BL1202",
"CP017247.1", 6L, "43.7", "intact", 110L, 51L, "2035073-2078840", "PHAGE_Bacill_phi105_NC_004167(13)", "41.80", "Bacillus licheniformis BL1202",
"CP017247.1", 7L, "41.5", "intact", 108L, 50L, "2966053-3007609", "PHAGE_Bacill_phi105_NC_004167(37)", "41.94", "Bacillus licheniformis BL1202",
"CP017247.1", 8L, "53", "intact", 100L, 65L, "3641986-3695069", "PHAGE_Bacill_phi105_NC_004167(22)", "42.18", "Bacillus licheniformis BL1202"
)
CP023729.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP023729.1", 1L, "35.3", "intact", 100L, 43L, "1321013-1356397", "PHAGE_Brevib_Jimmer1_NC_029104(7)", "46.89", "Bacillus licheniformis ATCC 9789",
"CP023729.1", 2L, "29.4", "incomplete", 50L, 40L, "1418565-1448062", "PHAGE_Paenib_Harrison_NC_028746(3)", "42.84", "Bacillus licheniformis ATCC 9789",
"CP023729.1", 3L, "22.3", "incomplete", 30L, 29L, "1448202-1470529", "PHAGE_Bacill_SPP1_NC_004166(15)", "44.16","Bacillus licheniformis ATCC 9789",
"CP023729.1", 4L, "39", "intact", 100L, 46L, "2827027-2866093", "PHAGE_Bacill_phi105_NC_004167(23)", "41.70","Bacillus licheniformis ATCC 9789",
"CP023729.1", 5L, "42.4", "intact", 97L, 57L, "3223617-3266066", "PHAGE_Bacill_phi105_NC_004167(33)", "41.88","Bacillus licheniformis ATCC 9789"
)
CP006881.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP006881.1", 1L, "33.7", "intact", 110L, 46L, "1277487-1311219", "PHAGE_Brevib_Jimmer2_NC_041976(8)", "44.84", "Bacillus subtilis subsp. subtilis PY79",
"CP006881.1", 2L, "29.2", "incomplete", 30L, 37L, "2478956-2508167", "PHAGE_Clostr_phiCT9441A_NC_029022(7)", "40.33", "Bacillus subtilis subsp. subtilis PY79"
)
CP009749.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP009749.1", 1L, "11.9", "incomplete", 10L, 25L, "1251584-1263541", "PHAGE_Bacill_SPbeta_NC_001884(7)", "46.04", "Bacillus velezensis ATCC 19217",
"CP009749.1", 2L, "25.8", "incomplete", 20L, 10L, "1841297-1867154", "PHAGE_Bacill_PBS1_NC_043027(4)", "41.13", "Bacillus velezensis ATCC 19217",
"CP009749.1", 3L, "25.9", "incomplete", 50L, 22L, "2149009-2174921", "PHAGE_Bacill_phi105_NC_004167(15)", "42.18","Bacillus velezensis ATCC 19217",
"CP009749.1", 4L, "22", "incomplete", 30L, 20L, "2164628-2186694", "PHAGE_Clostr_phiCT453B_NC_029004(3)", "39.84","Bacillus velezensis ATCC 19217"
)
CP002905.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP002905.1", 1L, "24.2", "incomplete", 30L, 23L, "156042-180336", "PHAGE_Bacill_Pony_NC_022770(3)", "41.69", "Bacillus subtilis subsp. spizizenii TU-B-10",
"CP002905.1", 2L, "34.6", "incomplete", 40L, 42L, "184254-218854", "PHAGE_Clostr_phiCT453A_NC_028991(7)", "41.93", "Bacillus subtilis subsp. spizizenii TU-B-10",
"CP002905.1", 3L, "33.8", "incomplete", 40L, 23L, "215836-249734", "PHAGE_Bacill_Pony_NC_022770(3)", "40.85", "Bacillus subtilis subsp. spizizenii TU-B-10",
"CP002905.1", 4L, "34.6", "incomplete", 40L, 42L, "244731-279331", "PHAGE_Clostr_phiCT453A_NC_028991(7)", "41.93", "Bacillus subtilis subsp. spizizenii TU-B-10",
"CP002905.1", 5L, "35.2", "intact", 100L, 47L, "1419525-1454818", "PHAGE_Brevib_Jimmer2_NC_041976(8)", "44.76", "Bacillus subtilis subsp. spizizenii TU-B-10",
"CP002905.1", 6L, "21.5", "incomplete", 20L, 9L, "2007226-2028763", "PHAGE_Bacill_SPbeta_NC_001884(2)", "40.51", "Bacillus subtilis subsp. spizizenii TU-B-10"
)
CP002183.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP002183.1", 1L, "35.1", "intact", 100L, 46L, "1271618-1306739", "PHAGE_Brevib_Jimmer2_NC_041976(8)", "44.62","Bacillus subtilis subsp. spizizenii W23",
"CP002183.1", 2L, "19.9", "incomplete", 20L, 8L, "1851994-1871992", "PHAGE_Bacill_SPbeta_NC_001884(2)", "38.95", "Bacillus subtilis subsp. spizizenii W23",
"CP002183.1", 3L, "39.8", "incomplete", 60L, 59L, "1955050-1994948", "PHAGE_Bacill_phi105_NC_004167(25)", "39.64", "Bacillus subtilis subsp. spizizenii W23",
"CP002183.1", 4L, "29.5", "incomplete", 20L, 12L, "2903965-2933553", "PHAGE_Staphy_SPbeta_like_NC_029119(2)", "44.16","Bacillus subtilis subsp. spizizenii W23"
)
CP001176.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP001176.1", 1L, "8.6", "incomplete", 40L, 9L, "992721-1001410", "PHAGE_Bacill_JBP901_NC_027352(2)", "33.08", "Bacillus cereus B4264",
"CP001176.1", 2L, "59.2", "intact", 150L, 58L, "2503614-2562850", "PHAGE_Bacill_BtCS33_NC_018085(21)", "34.34", "Bacillus cereus B4264"
)
CP009709.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP009709.1", 1L, "8.7", "incomplete", 50L, 8L, "1668277-1677042", "PHAGE_Lactob_phiAT3_NC_005893(2)", "45.20", "Bacillus coagulans ATCC 7050",
"CP009709.1", 2L, "26", "incomplete", 30L, 32L, "2390789-2416834", "PHAGE_Lister_A006_NC_009815(5)", "41.42", "Bacillus coagulans ATCC 7050",
"CP009709.1", 3L, "23.8", "intact", 130L, 29L, "2411542-2435408", "PHAGE_Bacill_IEBH_NC_011167(11)", "41.16", "Bacillus coagulans ATCC 7050",
"CP009709.1", 4L, "20.9", "incomplete", 30L, 37L, "2445687-2466633", "PHAGE_Lister_A006_NC_009815(5)", "41.18", "Bacillus coagulans ATCC 7050",
"CP009709.1", 5L, "23.8", "intact", 130L, 26L, "2468857-2492727", "PHAGE_Bacill_IEBH_NC_011167(11)", "41.16", "Bacillus coagulans ATCC 7050"
)
CP011007.1 <- tibble::tribble(
~V0,~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP011007.1", 1L, "26.9", "questionable", 70L, 27L, "2507179-2534135", "PHAGE_Bacill_phi105_NC_004167(12)", "39.61", "Bacillus pumilus 7P",
"CP011007.1", 2L, "20.9", "incomplete", 30L, 24L, "2537418-2558321", "PHAGE_Bacill_WBeta_NC_007734(3)", "39.90", "Bacillus pumilus 7P",
"CP011007.1", 3L, "20.2", "incomplete", 20L, 27L, "2804168-2824417", "PHAGE_Brevib_Jimmer1_NC_029104(6)", "43.08", "Bacillus pumilus 7P",
"CP011007.1", 4L, "59.3", "intact", 150L, 84L, "2820204-2879562", "PHAGE_Brevib_Jimmer1_NC_029104(8)", "40.74", "Bacillus pumilus 7P"
)
CP001907.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP001907.1", 1L, "45", "incomplete", 50L, 67L, "689367-734428", "PHAGE_Bacill_phiCM3_NC_023599(15)", "37.69", "Bacillus thuringiensis CT-43",
"CP001907.1", 2L, "81.8", "intact", 120L, 94L, "1833618-1915435", "PHAGE_Bacill_phBC6A51_NC_004820(57)", "37.09", "Bacillus thuringiensis CT-43",
"CP001907.1", 3L, "28.5", "incomplete", 30L, 33L, "2342043-2370635", "PHAGE_Bacill_phBC6A52_NC_004821(5)", "32.88","Bacillus thuringiensis CT-43",
"CP001907.1", 4L, "59.4", "intact", 130L, 63L, "2591103-2650577", "PHAGE_Bacill_phBC6A52_NC_004821(19)", "33.70","Bacillus thuringiensis CT-43",
"CP001907.1", 5L, "43.6", "intact", 120L, 56L, "3773368-3817025", "PHAGE_Bacill_phBC6A52_NC_004821(9)", "35.28","Bacillus thuringiensis CT-43",
"CP001907.1", 6L, "41.6", "questionable", 70L, 61L, "4904766-4946374", "PHAGE_Bacill_phBC6A52_NC_004821(19)", "35.19","Bacillus thuringiensis CT-43",
"CP001907.1", 7L, "8.1", "incomplete", 20L, 10L, "20121-28232", "PHAGE_Bacill_G_NC_023719(2)", "34.16","Bacillus thuringiensis CT-43",
"CP001907.1", 8L, "19.5", "incomplete", 30L, 38L, "31945-51465", "PHAGE_Bacill_Waukesha92_NC_025424(13)", "33.97","Bacillus thuringiensis CT-43"
)
CP001283.1 <- tibble::tribble(
~V0,~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP001283.1", 1L, "54.9", "intact", 140L, 63L, "514264-569181", "PHAGE_Lister_2389_NC_003291(15)", "35.14", "Bacillus anthracis AH820",
"CP001283.1", 2L, "5.8", "incomplete", 20L, 7L, "1290814-1296627", "PHAGE_Bacill_Finn_NC_020480(4)", "40.94", "Bacillus anthracis AH820",
"CP001283.1", 3L, "48", "questionable", 70L, 58L, "4140140-4188182", "PHAGE_Bacill_vB_BceS_MY192_NC_048633(17)", "34.98","Bacillus anthracis AH820",
"CP001283.1", 4L, "28.3", "incomplete", 20L, 13L, "157038-185423", "PHAGE_Bacill_BalMu_1_NC_030945(3)", "35.58", "Bacillus anthracis AH820"
)
CP001186.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP001186.1", 1L, "48.1", "intact", 130L, 54L, "2470121-2518231", "PHAGE_Bacill_phBC6A52_NC_004821(12)", "34.10", "Bacillus cereus G9842",
"CP001186.1", 2L, "46.8", "incomplete", 50L, 51L, "2753573-2800411", "PHAGE_Bacill_phBC6A51_NC_004820(36)", "36.01", "Bacillus cereus G9842",
"CP001186.1", 3L, "20.8", "incomplete", 20L, 15L, "3622980-3643862", "PHAGE_Bacill_phi4J1_NC_029008(3)", "36.47", "Bacillus cereus G9842",
"CP001186.1", 4L, "23.7", "intact", 120L, 27L, "3747762-3771540", "PHAGE_Bacill_vB_BhaS_171_NC_030904(7)", "35.11", "Bacillus cereus G9842",
"CP001186.1", 5L, "10.4", "incomplete", 40L, 17L, "3775146-3785591", "PHAGE_Deep_s_D6E_NC_019544(2)", "34.24", "Bacillus cereus G9842",
"CP001186.1", 6L, "9.2", "questionable", 90L, 9L, "24595-33813", "PHAGE_Bacill_0305phi8_36_NC_009760(8)", "33.72", "Bacillus cereus G9842",
"CP001186.1", 7L, "8.1", "incomplete", 40L, 7L, "51054-59229", "PHAGE_Bacill_G_NC_023719(2)", "35.79", "Bacillus cereus G9842"
)
AP011541.2 <-tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"AP011541.2", 1L, "22.7", "incomplete", 30L, 8L, "306078-328833", "PHAGE_Bacill_CP_51_NC_025423(2)", "45.42", "Bacillus subtilis subsp. natto BEST195",
"AP011541.2", 2L, "18.8", "incomplete", 30L, 11L, "518758-537594", "PHAGE_Lister_vB_LmoS_188_NC_028871(2)", "37.52", "Bacillus subtilis subsp. natto BEST195",
"AP011541.2", 3L, "45.3", "incomplete", 60L, 50L, "1325270-1370661", "PHAGE_Brevib_Jimmer2_NC_041976(9)", "44.48", "Bacillus subtilis subsp. natto BEST195",
"AP011541.2", 4L, "14.2", "incomplete", 40L, 14L, "1791838-1806068", "PHAGE_Bacill_phi105_NC_048631(4)", "39.99", "Bacillus subtilis subsp. natto BEST195",
"AP011541.2", 5L, "37.3", "intact", 140L, 45L, "1822947-1860322", "PHAGE_Bacill_phi105_NC_004167(16)", "40.75", "Bacillus subtilis subsp. natto BEST195",
"AP011541.2", 6L, "15.6", "incomplete", 10L, 9L, "2657076-2672738", "PHAGE_Staphy_SPbeta_like_NC_029119(2)", "38.17", "Bacillus subtilis subsp. natto BEST195",
"AP011541.2", 7L, "26.2", "incomplete", 10L, 33L, "3240205-3266407", "PHAGE_Bacill_phi105_NC_004167(23)", "41.38", "Bacillus subtilis subsp. natto BEST195",
"AP011541.2", 8L, "40.7", "incomplete", 40L, 39L, "3260241-3301026", "PHAGE_Bacill_BtCS33_NC_018085(4)", "42.08", "Bacillus subtilis subsp. natto BEST195",
"AP011541.2", 9L, "36.3", "questionable", 80L, 28L, "3360579-3396962", "PHAGE_Paenib_Tripp_NC_028930(12)", "45.47", "Bacillus subtilis subsp. natto BEST195",
"AP011541.2", 10L, "14.6", "questionable", 70L, 14L, "4011807-4026430", "PHAGE_Lactob_phiAT3_NC_005893(2)", "34.02","Bacillus subtilis subsp. natto BEST195"
)
CP020102.1 <- tibble::tribble(
~V0,~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP020102.1", 1L, "21.7", "incomplete", 30L, 11L, "528169-549913", "PHAGE_Clostr_phi3626_NC_003524(2)", "36.28", "Bacillus subtilis subsp subtilis NCIB 3610",
"CP020102.1", 2L, "33.7", "intact", 130L, 48L, "1314483-1348214", "PHAGE_Brevib_Jimmer2_NC_041976(8)", "44.84", "Bacillus subtilis subsp subtilis NCIB 3610",
"CP020102.1", 3L, "136.4", "intact", 140L, 193L, "2151659-2288094", "PHAGE_Bacill_SPbeta_NC_001884(173)", "34.78", "Bacillus subtilis subsp subtilis NCIB 3610",
"CP020102.1", 4L, "40.6", "incomplete", 40L, 57L, "2661123-2701774", "PHAGE_Bacill_vB_BtS_BMBtp14_NC_048640(7)", "39.50", "Bacillus subtilis subsp subtilis NCIB 3610",
"CP020102.1", 5L, "69.6", "intact", 150L, 80L, "1158-70810", "PHAGE_Staphy_vB_SepS_SEP9_NC_023582(22)", "35.46", "Bacillus subtilis subsp subtilis NCIB 3610"
)
CP000903.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP000903.1", 1L, "56.6", "questionable", 80L, 66L, "3462572-3519240 ", "PHAGE_Bacill_1_NC_009737(11)", "34.71","Bacillus mycoides KBAB4",
"CP000903.1", 2L, "52.4", "intact", 120L, 51L, "3619791-3672243 ", "PHAGE_Bacill_vB_BhaS_171_NC_030904(9)", "35.69","Bacillus mycoides KBAB4",
"CP000903.1", 3L, "11.6", "incomplete", 30L, 13L, "123128-134814 ", "PHAGE_Bacill_IEBH_NC_011167(2)", "31.56","Bacillus mycoides KBAB4",
"CP000903.1", 4L, "5.4", "incomplete", 30L, 8L, "300589-306067 ", "PHAGE_Bacill_BceA1_NC_048628(4)", "34.92","Bacillus mycoides KBAB4",
"CP000903.1", 5L, "59.6", "intact", 130L, 76L, "60-59716", "PHAGE_Paenib_Tripp_NC_028930(28)", "43.38","Bacillus mycoides KBAB4",
"CP000903.1", 6L, "43.7", "intact", 110L, 58L, "3344-47085", "PHAGE_Bacill_IEBH_NC_011167(13)", "36.09","Bacillus mycoides KBAB4"
)
CP001598.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP001598.1", 1L, "45.4", "questionable", 90L, 48L, "441949-487396 ", "PHAGE_Bacill_1_NC_009737(9)", "35.27", "Bacillus anthracis A0248",
"CP001598.1", 2L, "65.1", "intact", 140L, 74L, "3453728-3518876 ", "PHAGE_Bacill_1_NC_009737(10)", "34.66", "Bacillus anthracis A0248",
"CP001598.1", 3L, "51.9", "incomplete", 60L, 73L, "3743608-3795516 ", "PHAGE_Bacill_PfEFR_4_NC_048641(33)", "35.14", "Bacillus anthracis A0248",
"CP001598.1", 4L, "14.6", "incomplete", 60L, 9L, "4845061-4859705 ", "PHAGE_Staphy_vB_SepS_SEP9_NC_023582(4)", "36.03", "Bacillus anthracis A0248",
"CP001598.1", 6L, "5.3", "incomplete", 50L, 8L, "112831-118163", "PHAGE_Bacill_SP_15_NC_031245(2)", "31.01", "Bacillus anthracis A0248",
"CP001598.1", 7L, "7.8", "incomplete", 10L, 8L, "76666-84505", "PHAGE_Staphy_phiRS7_NC_022914(1)", "32.02", "Bacillus anthracis A0248"
)
CP001903.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP001903.1", 1L, "28.2", "incomplete", 10L, 29L, "2485644-2513886", "PHAGE_Bacill_phBC6A52_NC_004821(11)", "33.23", "Bacillus cereus BMB171",
"CP001903.1", 2L, "19.4", "questionable", 80L, 20L, "2516310-2535766", "PHAGE_Bacill_vB_BhaS_171_NC_030904(6)", "35.69", "Bacillus cereus BMB171",
"CP001903.1", 3L, "11.7", "incomplete", 10L, 19L, "3581436-3593219", "PHAGE_Bacill_phi4J1_NC_029008(4)", "34.47", "Bacillus cereus BMB171",
"CP001903.1", 4L, "7.9", "incomplete", 30L, 7L, "28520-36480", "PHAGE_Bacill_Eyuki_NC_028944(1)", "34.96", "Bacillus cereus BMB171"
)
CP009611.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP009611.1", 1L, "31.8", "intact", 100L, 45L, "1216793-1248683", "PHAGE_Brevib_Osiris_NC_028969(8)", "46.99", "Bacillus velezensis Bs-916",
"CP009611.1", 2L, "23.3", "incomplete", 30L, 22L, "1797951-1821314", "PHAGE_Bacill_SP_10_NC_019487(4)", "44.34","Bacillus velezensis Bs-916",
"CP009611.1", 3L, "24.2", "incomplete", 20L, 12L, "1834541-1858827", "PHAGE_Bacill_PBS1_NC_043027(4)", "40.80", "Bacillus velezensis Bs-916",
"CP009611.1", 4L, "25.8", "incomplete", 50L, 28L, "2950851-2976659", "PHAGE_Bacill_SPP1_NC_004166(11)", "43.72","Bacillus velezensis Bs-916",
"CP009611.1", 5L, "17", "incomplete", 30L, 17L, "2980556-2997572", "PHAGE_Brevib_Abouo_NC_029029(4)", "44.21", "Bacillus velezensis Bs-916"
)
CP006890.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP006890.1", 1L, "12", "incomplete", 10L, 22L, "1244290-1256373", "PHAGE_Bacill_SPbeta_NC_001884(7)", "46.06","Bacillus velezensis SQR9",
"CP006890.1", 2L, "19.3", "incomplete", 20L, 9L, "1841744-1861092", "PHAGE_Bacill_PBS1_NC_043027(3)", "42.46","Bacillus velezensis SQR9",
"CP006890.1", 3L, "25.4", "questionable", 79L, 29L, "2001378-2026842", "PHAGE_Bacill_phi105_NC_004167(27)", "44.17","Bacillus velezensis SQR9",
"CP006890.1", 4L, "14.7", "incomplete", 40L, 27L, "2030223-2045008", "PHAGE_Bacill_WBeta_NC_007734(2)", "38.67","Bacillus velezensis SQR9",
"CP006890.1", 5L, "133.5", "questionable", 70L, 191L, "2130288-2263815", "PHAGE_Bacill_SPbeta_NC_001884(84)", "36.12","Bacillus velezensis SQR9",
"CP006890.1", 6L, "42.9", "questionable", 90L, 49L, "2315133-2358036", "PHAGE_Bacill_phi105_NC_004167(15)", "40.47","Bacillus velezensis SQR9"
)
CP029465.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP029465.1", 1L, "26.9", "incomplete", 30L, 12L, "524547-551466", "PHAGE_Bacill_phi4J1_NC_029008(2)", "36.04", "Bacillus subtilis subsp. inaquosorum KCTC 13429",
"CP029465.1", 2L, "30.8", "intact", 100L, 45L, "1377386-1408264", "PHAGE_Brevib_Jimmer2_NC_041976(8)", "45.00", "Bacillus subtilis subsp. inaquosorum KCTC 13429",
"CP029465.1", 3L, "18.3", "incomplete", 20L, 8L, "2054688-2073035", "PHAGE_Bacill_PBS1_NC_043027(2)", "40.88", "Bacillus subtilis subsp. inaquosorum KCTC 13429",
"CP029465.1", 4L, "18.1", "incomplete", 20L, 15L, "2269492-2287611", "PHAGE_Bacill_SPbeta_NC_001884(7)", "34.10", "Bacillus subtilis subsp. inaquosorum KCTC 13429",
"CP029465.1", 5L, "36.7", "intact", 150L, 52L, "2842721-2879443", "PHAGE_Bacill_phi105_NC_004167(8)", "41.63", "Bacillus subtilis subsp. inaquosorum KCTC 13429"
)
CP004405.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP004405.1", 1L, "33.7", "intact", 110L, 46L, "1277676-1311433", "PHAGE_Brevib_Jimmer2_NC_041976(8)", "44.86", "Bacillus subtilis subsp. subtilis BAB-1"
)
LT906438.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"LT906438.1", 1L, "23.6", "incomplete", 50L, 27L, "2372196-2395894", "PHAGE_Bacill_phi105_NC_004167(13)", "39.85","Bacillus pumilus NCTC 10337",
"LT906438.1", 2L, "15.4", "incomplete", 30L, 26L, "2400973-2416393", "PHAGE_Bacill_WBeta_NC_007734(2)", "38.45","Bacillus pumilus NCTC 10337",
"LT906438.1", 3L, "140", "intact", 150L, 218L, "2790850-2930912", "PHAGE_Paenib_Tripp_NC_028930(26)", "43.78","Bacillus pumilus NCTC 10337"
)
CP002468.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP002468.1", 1L, "29.3", "incomplete", 30L, 36L, "645918-675245 ", "PHAGE_Clostr_phiCT9441A_NC_029022(7)", "40.22", "Bacillus subtilis subsp. subtilis Bsn5",
"CP002468.1", 2L, "43.8", "intact", 140L, 69L, "2840142-2884005 ", "PHAGE_Lister_B054_NC_009813(14)", "42.19", "Bacillus subtilis subsp. subtilis Bsn5",
"CP002468.1", 3L, "35.1", "incomplete", 50L, 47L, "3526995-3562109 ", "PHAGE_Brevib_Jimmer2_NC_041976(9)", "44.96","Bacillus subtilis subsp. subtilis Bsn5"
)
# AL009126.3 <- tibble::tribble(
# ~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
# "AL009126.3", 1L, "21.7", "incomplete", 30L, 16L, "528129-549873", "PHAGE_Clostr_phiCD505_NC_028764(2)", "36.28","Bacillus subtilis subsp. subtilis 168",
# "AL009126.3", 2L, "33.1", "incomplete", 50L, 10L, "638092-671211", "PHAGE_Entero_vB_KleM_RaK2_NC_019526(1)", "41.48","Bacillus subtilis subsp. subtilis 168",
# "AL009126.3", 3L, "33.7", "intact", 100L, 45L, "1314453-1348182", "PHAGE_Brevib_Jimmer1_NC_029104(8)", "44.84","Bacillus subtilis subsp. subtilis 168",
# "AL009126.3", 4L, "9.5", "incomplete", 10L, 20L, "2055868-2065377", "PHAGE_Bacill_SPbeta_NC_001884(8)", "33.61","Bacillus subtilis subsp. subtilis 168",
# "AL009126.3", 5L, "136.4", "intact", 144L, 208L, "2151626-2288090", "PHAGE_Bacill_SPbeta_NC_001884(178)", "34.78","Bacillus subtilis subsp. subtilis 168",
# "AL009126.3", 6L, "48", "questionable", 90L, 49L, "2653062-2701098", "PHAGE_Clostr_phiCT9441A_NC_029022(7)", "38.86","Bacillus subtilis subsp. subtilis 168"
# ) # 3 regions in the detail data - on PH
AL009126.3 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"AL009126.3", 1L, "21.7", "incomplete", 30L, 10L, "528129-549873", "PHAGE_Clostr_phi3626_NC_003524(2)", "36.28", "Bacillus subtilis subsp. subtilis 168",
"AL009126.3", 2L, "33.7", "intact", 120L, 47L, "1314453-1348182", "PHAGE_Brevib_Jimmer2_NC_041976(8)", "44.84", "Bacillus subtilis subsp. subtilis 168",
"AL009126.3", 3L, "136.4", "intact", 139L, 192L, "2151626-2288090", "PHAGE_Bacill_SPbeta_NC_001884(172)", "34.78", "Bacillus subtilis subsp. subtilis 168",
"AL009126.3", 4L, "40.6", "incomplete", 40L, 57L, "2661102-2701754", "PHAGE_Bacill_vB_BtS_BMBtp14_NC_048640(7)", "39.50","Bacillus subtilis subsp. subtilis 168"
)
AE017333.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"AE017333.1", 1L, "35.2", "intact", 100L, 43L, "1314824-1350062", "PHAGE_Brevib_Jimmer1_NC_029104(7)", "47.07", "Bacillus licheniformis DSM 13",
"AE017333.1", 2L, "41.5", "questionable", 80L, 63L, "1422556-1464148", "PHAGE_Bacill_SPP1_NC_004166(12)", "42.51","Bacillus licheniformis DSM 13",
"AE017333.1", 3L, "40.2", "intact", 150L, 46L, "1507400-1547601", "PHAGE_Bacill_WBeta_NC_007734(6)", "44.36","Bacillus licheniformis DSM 13",
"AE017333.1", 4L, "58.8", "intact", 120L, 81L, "3421796-3480600", "PHAGE_Bacill_phi105_NC_004167(19)", "42.29","Bacillus licheniformis DSM 13"
)
CP007640.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP007640.1", 1L, "32", "incomplete", 50L, 46L, "695969-727988", "PHAGE_Brevib_Jimmer1_NC_029104(8)", "44.31", "Bacillus atrophaeus BSS",
"CP007640.1", 2L, "87.2", "incomplete", 30L, 119L, "1500018-1587271", "PHAGE_Bacill_SPbeta_NC_001884(45)", "35.02", "Bacillus atrophaeus BSS",
"CP007640.1", 3L, "28.8", "incomplete", 20L, 33L, "1826335-1855177", "PHAGE_Lactob_Lb338_1_NC_012530(2)", "39.64", "Bacillus atrophaeus BSS"
)
CP014795.1 <- tibble::tribble(
~V0, ~V1, ~V2, ~V3, ~V4, ~V5, ~V6, ~V7, ~V8, ~V9,
"CP014795.1", 1L, "14.1", "incomplete", 50L, 12L, "19877-34050", "PHAGE_Geobac_E3_NC_029073(4)", "36.62", "Bacillus licheniformis SCK B11",
"CP014795.1", 2L, "118", "questionable", 70L, 148L, "29703-147788", "PHAGE_Bacill_SPbeta_NC_001884(59)", "35.87","Bacillus licheniformis SCK B11",
"CP014795.1", 3L, "55.5", "intact", 100L, 70L, "1610845-1666428", "PHAGE_Bacill_phi105_NC_004167(19)", "42.26","Bacillus licheniformis SCK B11",
"CP014795.1", 4L, "35.3", "intact", 100L, 42L, "3708337-3743712", "PHAGE_Brevib_Jimmer1_NC_029104(7)", "46.90","Bacillus licheniformis SCK B11",
"CP014795.1", 5L, "41.5", "questionable", 80L, 65L, "3816193-3857701", "PHAGE_Bacill_SPP1_NC_004166(12)", "42.64","Bacillus licheniformis SCK B11"
)
summaries <- rbind(CP009748.1,CP009692.1,MOEA01000001.1,CM000732.1,JHCA01000001.1,AYTO01000001.1,JHUD02000001.1,AFWM01000001.1,CM000725.1,QVEJ01000001.1,AYTL01000001.1,CP048852.1,CM000729.1,CM000747.1,LQYG01000001.1,AEFM01000028.1,CM000722.1,CP000673.1,CP00560.2,KB976672.1,CM000753.1,CM000758.1,CM000719.1,LLZC01000001.1,LSBB01000001.1,ANAQ01000001.1,CP007666.1,AE016877.1,PVRE01000010.1,FN597644.1,CP002927.1,AE016879.1,CP017247.1,CP023729.1,CP006881.1,CP009749.1,CP002905.1,CP002183.1,CP001176.1,CP009709.1,CP011007.1,CP001907.1,CP001283.1,CP001186.1,AP011541.2,CP020102.1,CP000903.1,CP001598.1,CP001903.1,CP009611.1,CP006890.1,CP029465.1,CP004405.1,LT906438.1,CP002468.1,AL009126.3,AE017333.1,CP007640.1,CP014795.1)
library(tidyverse)
summaries %>%
rename(genome = V0,
region = V1,
region_length_kb = V2,
completeness = V3,
score = V4,
total_proteins = V5,
region_position = V6,
most_common_phage = V7,
gc_percentage = V8,
species = V9
) %>%
mutate(
genome = as.factor(genome),
species = as.factor(species),
region = as.factor(region),
completeness = as.factor(completeness),
gc_percentage = as.numeric(gc_percentage),
region_length_kb = as.numeric(region_length_kb)
) %>%
as_tibble() %>%
write_tsv('Data/2021-02-19_PHASTER-summaries.tsv')
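# Quick sanity check of the exported table (a sketch; assumes the TSV
# written above exists at the same path and readr/dplyr are attached):
phaster <- readr::read_tsv('Data/2021-02-19_PHASTER-summaries.tsv')
phaster %>%
  count(completeness, sort = TRUE) # prophage regions per completeness class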
|
|
#' Lists all players for a league on a given date, with full bio and other details.
#' @param sport league abbreviation: mlb | nfl | nba | etc.
#' @param date given date (defaults to today)
#' @param ... optional query parameters
#'
#' @examples
#' \dontrun{
#' j <- all_players("mlb", team = "bos", rosterstatus = c("assigned-to-roster", "assigned-to-injury-list"))
#' }
#' @export
all_players <- function(sport, date = Sys.Date(), ...) {
stopifnot(length(date) == 1L, length(sport) == 1L)
path <- sprintf("%s/players.json", sport)
query <- list(date = msf_date(date), ...)
query <- lapply(query, paste, collapse = ",")
result <- msf_api(path, query)
attr(result, "local_path") <- sprintf("%s/players/%s.json", sport, format(date, "%Y%m%d"))
result <- msf_class(result, "players")
result
}
|
|
volumes <- c(25, 45, 28, 79, 74, 61, 12, 68, 93, 39, 100)
soma <- sum(volumes)
# Sum of all elements of the vector
print(soma)
|
|
library(shiny)
library(ggplot2)
library(dplyr)
library(RCurl)
library(zoo)
us_counties = read.csv(text = getURL("https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv"))
us_counties$date = as.Date(us_counties$date)
us_counties = us_counties[order(us_counties$county),]
# Define UI for covid by county app
ui <- pageWithSidebar(
# App title ----
headerPanel("Covid-19 Daily Cases per County"),
# Sidebar panel for inputs ----
sidebarPanel(
#State selector
selectInput("state", "State:",
choices = sort(unique(us_counties$state))),
#county selector
selectInput("county", "County:",
choices = NULL)
),
# Main panel for displaying outputs ----
mainPanel(
# Output: Formatted text for caption ----
h3(textOutput("caption")),
plotOutput("countyPlot"),
#HTML(
## "<p>New York Times github<a href='https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv'></a>!</p>"
# )
tagList("Data Source:",a("New York Times GitHub", href="https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv"))
)
)
# Define server logic to plot cases by county
server <- function(session,input, output) {
observe({
counties_in_state <- us_counties %>%
filter(state == input$state) %>%
pull(county) # pull() returns a vector; select() would pass a data frame to selectInput
updateSelectInput(session, "county", "County:", choices = unique(counties_in_state))
})
filtered_data<-reactive({
us_counties %>%
filter(state==input$state, county==input$county) %>%
arrange(date) %>% # arrange("date") sorts by a string constant, i.e. does nothing
mutate(daily_cases = cases - lag(cases, default = first(cases))) %>%
mutate(rollavg=rollapply(daily_cases,7,mean, align='right',fill=NA))
})
# Compute the formula text ----
formulaText <- reactive({
paste("Cases in", input$county, "county", input$state)
})
# Return the formula text for printing as a caption ----
output$caption <- renderText({
formulaText()
})
# Generate a plot of daily cases by county
output$countyPlot <- renderPlot({
ggplot(filtered_data(), aes(x=date,y = daily_cases)) +
geom_col(fill = "dark blue") +
xlab("Date") +
ylab("New Cases") +
scale_x_date(date_breaks = "1 month", date_labels = "%b %Y") +
geom_line(data=filtered_data(), aes(x=date, y=rollavg), colour="red", size = 1.3)
})
}
shinyApp(ui, server)
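# The 7-day rolling average computed in filtered_data() can be sketched
# on synthetic daily counts (an illustration; assumes zoo is loaded, as above):
daily <- c(5, 8, 6, 10, 12, 9, 7, 11, 13, 10)
rollavg <- zoo::rollapply(daily, 7, mean, align = "right", fill = NA)
# entries 1-6 are NA; entry 7 equals mean(daily[1:7])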
|
|
data("PlantGrowth")
View(PlantGrowth)
anova_results <- aov(weight ~ group, data = PlantGrowth) # with data=, use bare column names in the formula
summary(anova_results)
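# If the ANOVA is significant, Tukey's HSD identifies which group pairs
# differ (a follow-up sketch on the same built-in dataset):
tukey_results <- TukeyHSD(aov(weight ~ group, data = PlantGrowth))
print(tukey_results)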
|
|
# downtime graph
Sys.setlocale("LC_ALL", "C")
library(reshape2)
library(ggplot2)
library(scales)
downtime <- function(){
DF <- read.table(text=iconv("ID Equipment Downtime Running Idle
1 设备1 1.7 6.15 1
2 设备2 1.0 6.6 0.5
3 设备3 0.9 7.6 0
4 设备4 1.8 6.6 1.5
5 设备5 1.5 4.9 2
6 设备6 0.45 5.55 1
7 设备7 1.25 4.55 2","UTF-8","UTF-8"), header=TRUE)
#print(DF)
DF1 <- melt(DF, id.var=c("ID","Equipment"))
#print(DF1)
plt <- ggplot(DF1, aes(x = ID, y = value, fill = variable )) + #, fill = variable
geom_bar(stat = "identity", width=0.7, alpha=0.7, position="fill") +
#ggtitle(iconv("月停线时间","UTF-8","UTF-8")) + # title: "Monthly line-stop time"
theme(plot.title = element_text(lineheight=3, face="bold", color="black", size=12) )+
scale_x_continuous(breaks=DF1$ID, labels= DF1$Equipment) +
#scale_y_continuous(breaks=DF1$value) +
ylab("时长百分比(%)") + # y-axis label: "Duration percentage (%)"
scale_y_continuous(labels = percent) +
scale_fill_manual(values=c("#FF4136", "#2ECC40", "#FFDC00"),
guide = guide_legend(title = "图例")) + # legend title: "Legend"
scale_alpha_manual(values = c(0.2,0.2, 0.2))+
theme(axis.text.x = element_text(angle=30, vjust=0.1, size=7, color="#BBBFC2"),
axis.title.x=element_blank(),
axis.text.y = element_text(size=7, color="#BBBFC2"),
axis.title.y = element_text(size=10, color="#BBBFC2"),
panel.grid.major=element_line(color="#444444"),
panel.grid.minor=element_line(color="#444444"),
panel.background=element_blank(),
#panel.border=element_rect(color= "#1F1F1F"),
legend.title = element_text(size=7, color="#BBBFC2"),
legend.text = element_text(size=7, color="#BBBFC2"),
legend.background = element_rect(colour = '#1F1F1F', fill = '#1F1F1F'),
plot.background=element_rect(fill = "#1F1F1F", color = "#1F1F1F")
)
ggsave("output/downtime1.png", plot = plt, width=6, height=2.5) # plt is never printed, so pass it explicitly
}
downtime()
|
|
# https://towardsdatascience.com/animating-your-data-visualizations-like-a-boss-using-r-f94ae20843e3
hand_data <- read.csv(file = "log_1616588440.log", header = TRUE, sep = " ")
hand_data <- transform(hand_data, timestamp = (timestamp - hand_data[1,1]))
hand_data <- transform(hand_data, height = (height * 0.001))
hand_data
library(plotly)
p <- hand_data %>%
plot_ly(
x = ~x,
y = ~y,
size = 10,
color = ~height,
frame = ~timestamp,
text = ~height,
hoverinfo = "text",
type = 'scatter',
mode = 'markers'
) %>%
layout(
xaxis = list(
type = "log"
)
  )
p # print the plot so the animation actually renders when the script is sourced
|
|
########################################################
##### Author: Diego Valle Jones
##### Website: www.diegovalle.net
##### Date Created: Fri Mar 19 07:53:06 2010
########################################################
#Run all scripts and save the charts in the output directories
#If you have a slow computer you might want to go get a cup of coffee
source("initialize/init.r")
source("accidents-homicides-suicides/accidents-homicides-suicides.r")
source("Benford/benford.r")
source("guns-executions/guns-executions.r")
source("trends/seasonal-decomposition.r")
source("predictions/predictions.r")
source("historic/homicide-historic.r")
source("missing-homicides/missing-homicides.r")
source("missing-homicides/massacres.r")
source("CIEISP/cieisp.r")
source("CIEISP/michoacan.r")
source("INEGIvsSNSP/inegi-vs-snsp.r")
source("INEGIvsSNSP/snsp-vs-cieisp.r")
source("drugs/druguse.r")
source("drugs/eradication.r")
source("most-violent-counties/most-violent.r")
source("timelines/timelines-mun.r")
source("timelines/ciudad-juarez.r")
source("states/homicide-bystate.r")
#You need the shp files for the next lines
testMapsExist(source("choropleths/county-maps-homicide.r"))
testMapsExist(source("most-violent-counties/cities-mun.r"))
|
|
#ID GWAS IN BIOVU 23K EA
#both the genotype and phenotype data are processed by Xue.
#genotype data dir: /data/g***/z***/biovu/23k/geno/ea_chr/chr*
#binary trait dir: /data/c***/z***/data/biovu/pheno/EA23k_plink.txt
#adj cov: AGE_in_days,GENDER,PC1,PC2,PC3,PC4,PC5,ARRAY
args<-as.numeric(commandArgs(TRUE)) #528
pheno_info<-read.table('/data/g***/z***/hihost_gwas/info/N_info.txt',header = T,stringsAsFactors = F,sep='\t')
pheno_info<-pheno_info[pheno_info$N_cases>=100,] #24
#write.table(pheno_info,'/data/g***/z***/hihost_gwas/info/BioVU_23kEA_ID_GWAS_trait.txt',quote = F,sep = '\t',row.names = F)
phecode=paste0('X',pheno_info$PheCode) #'X117'
run_i=1
run_list<-list()
for (i in 1:length(phecode)){
for (j in 1:22){
run_list[[run_i]]<-c(i,j)
run_i=run_i+1
}
}
run_list<-run_list[[args]]
phecode<-phecode[run_list[1]]
chr<-run_list[2]
biovu<-read.table('/data/c***/z***/data/biovu/pheno/EA23k_plink.txt',header = T,stringsAsFactors = F)
if(phecode %in% colnames(biovu)){
N_cases<-length(which(biovu[,phecode]==2))
N_controls<-length(which(biovu[,phecode]==1))
cmd=paste0('plink2 --bfile /data/g***/z***/biovu/23k/geno/ea_chr/chr',chr,' --allow-no-sex --logistic hide-covar --pheno /data/c***/z***/data/biovu/pheno/EA23k_plink.txt --pheno-name ',phecode,' --geno 0.95 --hwe 0.0001 --covar /data/c***/z***/data/biovu/pheno/EA23k_plink_array.txt --covar-name AGE_in_days,GENDER,PC1,PC2,PC3,PC4,PC5,ARRAY --covar-variance-standardize --out /data/g***/z***/hihost_gwas/raw_asso/',phecode,'_chr',chr)
system(cmd,wait = T)
#asso_path=paste0('/data/c***/z***/projects/biovu/gwas/raw/',phecode,'_chr',chr,'.',phecode,'.glm.logistic')
#asso<-read.table(asso_path,header = T,stringsAsFactors = F,comment.char = '&')
#asso<-asso[asso$TEST=='ADD',]
#write.table(asso,paste0('/data/c***/z***/projects/biovu/gwas/raw/',phecode,'_chr',chr),quote = F,sep = '\t',row.names = F)
}
|
|
#!/usr/bin/env Rscript
#
# adonis.r - R slave for two-way permanova analysis with R package vegan
#
# Version 1.0.0 (July, 15, 2016)
#
# Copyright (c) 2016-- Lela Andrews
#
# This software is provided 'as-is', without any express or implied
# warranty. In no event will the authors be held liable for any damages
# arising from the use of this software.
#
# Permission is granted to anyone to use this software for any purpose,
# including commercial applications, and to alter it and redistribute it
# freely, subject to the following restrictions:
#
# 1. The origin of this software must not be misrepresented; you must not
# claim that you wrote the original software. If you use this software
# in a product, an acknowledgment in the product documentation would be
# appreciated but is not required.
# 2. Altered source versions must be plainly marked as such, and must not be
# misrepresented as being the original software.
# 3. This notice may not be removed or altered from any source distribution.
#
## Receive input files from bash
args <- commandArgs(TRUE)
map=(args[1])
dmfile=(args[2])
factor1=(args[3])
factor2=(args[4])
f1temp=(args[5])
f2temp=(args[6])
outdir=(args[7])
perms0=(args[8])
perms <- as.integer(perms0)
## Load libraries
library(vegan)
## Read in data
mapfile <- read.csv(map, sep="\t", header=TRUE)
dm <- read.csv(dmfile, sep="\t", header=TRUE)
f1 <- mapfile[,factor1]
f2 <- mapfile[,factor2]
## Run permanova and print to screen
pm <- adonis(formula = dm ~ f1 * f2, permutations = perms)
pm
## End
q()
|
|
outpath1 = '../TCN/char_cnn/data/quora'
outpath2 = '../TCN/char_cnn/data/quora-small'
# Create data stream.
a = read.csv('train.csv')
x = paste(a$question_text, a$target, sep='ª')
# Partition and create large dataset.
n3 = length(x)
n2 = 9 * n3 %/% 10
n1 = 8 * n3 %/% 10
trn = x[1:n1]
vld = x[(n1+1):n2]
tst = x[(n2+1):n3]
dir.create(outpath1, showWarnings=F, recursive=T)
write.table(data.frame(x=trn), sprintf('%s/train.txt', outpath1), row.names=F, col.names=F, quote=F)
write.table(data.frame(x=vld), sprintf('%s/valid.txt', outpath1), row.names=F, col.names=F, quote=F)
write.table(data.frame(x=tst), sprintf('%s/test.txt', outpath1), row.names=F, col.names=F, quote=F)
# Thin by 10 to create small dataset.
x = x[1:(length(x) %/% 10)]
n3 = length(x)
n2 = 9 * n3 %/% 10
n1 = 8 * n3 %/% 10
trn = x[1:n1]
vld = x[(n1+1):n2]
tst = x[(n2+1):n3]
dir.create(outpath2, showWarnings=F, recursive=T)
write.table(data.frame(x=trn), sprintf('%s/train.txt', outpath2), row.names=F, col.names=F, quote=F)
write.table(data.frame(x=vld), sprintf('%s/valid.txt', outpath2), row.names=F, col.names=F, quote=F)
write.table(data.frame(x=tst), sprintf('%s/test.txt', outpath2), row.names=F, col.names=F, quote=F)
|
|
#' Get sea ice data.
#'
#' @export
#' @param year (numeric) a year
#' @param month (character) a month, as character abbreviation of a month
#' @param pole (character) one of S (south) or N (north)
#' @param format (character) one of shp (default), geotiff-extent (for geotiff
#' extent data), or geotiff-conc (for geotiff concentration data)
#' @param ... Further arguments passed on to `readshpfile()` if
#' `format="shp"` or `raster::raster()` if not
#' @return data.frame if `format="shp"` (a fortified sp object);
#' `raster::raster()` if not
#' @seealso [sea_ice_tabular()]
#' @references See the "User Guide" pdf at https://nsidc.org/data/g02135
#' @examples \dontrun{
#' if (requireNamespace("raster")) {
#'
#' ## one year, one month, one pole
#' sea_ice(year = 1990, month = "Apr", pole = "N")
#' sea_ice(year = 1990, month = "Apr", pole = "N", format = "geotiff-extent")
#' sea_ice(year = 1990, month = "Apr", pole = "N", format = "geotiff-conc")
#'
#' ## one year, one month, many poles
#' sea_ice(year = 1990, month = "Apr")
#'
#' ## one year, many months, many poles
#' sea_ice(year = 1990, month = c("Apr", "Jun", "Oct"))
#'
#' ## many years, one month, one pole
#' sea_ice(year = 1990:1992, month = "Sep", pole = "N")
#'
#' # get geotiff instead of shp data.
#' x <- sea_ice(year = 1990, month = "Apr", format = "geotiff-extent")
#' y <- sea_ice(year = 1990, month = "Apr", format = "geotiff-conc")
#' }
#'
#' }
sea_ice <- function(year = NULL, month = NULL, pole = NULL, format = "shp",
...) {
assert(year, c('integer', 'numeric'))
assert(month, 'character')
assert(pole, 'character')
assert(format, 'character')
if (!format %in% c("shp", "geotiff-extent", "geotiff-conc"))
stop("'format' must be one of: 'shp', 'geotiff-extent', 'geotiff-conc'")
urls <- seaiceeurls(yr=year, mo=month, pole, format)
if (format == "shp") {
check4pkg("rgdal")
lapply(urls, readshpfile, ...)
} else {
check4pkg("raster")
lapply(urls, function(w) suppressWarnings(raster::raster(w, ...)))
}
}
#' Make all urls for sea ice data
#'
#' @export
#' @keywords internal
#' @param yr (numeric) a year
#' @param mo (character) a month, as character abbreviation of a month
#' @param pole (character) one of S (south) or N (north)
#' @return A vector of urls (character)
#' @examples \dontrun{
#' # Get all urls
#' seaiceeurls()
#'
#' # for some range of years
#' seaiceeurls(yr = 1980:1983)
#' seaiceeurls(yr = 1980, mo = c("Jan", "Feb", "Mar"))
#' seaiceeurls(yr = 1980:1983, mo = c("Jan", "Apr", "Oct"))
#'
#' # Get urls for Feb of all years, both S and N poles
#' seaiceeurls(mo='Feb')
#'
#' # Get urls for Feb of all years, just S pole
#' seaiceeurls(mo='Feb', pole='S')
#'
#' # Get urls for Feb of 1980, just S pole
#' seaiceeurls(yr=1980, mo='Feb', pole='S')
#'
#' # GeoTIFF
#' seaiceeurls(yr=1980, mo='Feb', pole='S', format = "geotiff-extent")
#' }
seaiceeurls <- function(yr = NULL, mo = NULL, pole = NULL, format = "shp") {
type <- if (!grepl("geotiff", format)) NULL else strsplit(format, "-")[[1]][2]
urls <- generate_urls(format, type)
if (!is.null(pole)) {
pole <- switch(format, shp=sprintf("_%s_", pole), sprintf("%s_", pole))
}
if (!is.null(yr)) yr <- sprintf("_%s", yr)
ss <- urls
if (!is.null(yr) && is.null(mo) && is.null(pole))
ss <- grep2(yr, urls)
if (is.null(yr) && !is.null(mo) && is.null(pole))
ss <- grep2(mo, urls)
if (is.null(yr) && is.null(mo) && !is.null(pole))
ss <- grep2(pole, urls)
if (!is.null(yr) && !is.null(mo) && is.null(pole))
ss <- grep2(yr, grep2(mo, urls))
if (!is.null(yr) && is.null(mo) && !is.null(pole))
ss <- grep2(yr, grep2(pole, urls))
if (is.null(yr) && !is.null(mo) && !is.null(pole))
ss <- grep2(pole, grep2(mo, urls))
if (!is.null(yr) && !is.null(mo) && !is.null(pole))
ss <- grep2(yr, grep2(pole, grep2(mo, urls)))
return( ss )
}
grep2 <- function(pattern, x) {
grep(paste0(pattern, collapse = "|"), x, value = TRUE)
}
generate_urls <- function(format, type) {
fun <- if (format == "shp") make_urls_shp else make_urls_geotiff
if (!is.null(type)) type <- switch(type, extent = "extent", "concentration")
yrs_prev <- seq(1979, year(today()) - 1, 1)
months_prevyr <- c(paste0(0, seq(1, 9)), c(10, 11, 12))
yrs_months <- do.call(c, lapply(yrs_prev, function(x)
paste(x, months_prevyr, sep = '')))
urls <- fun(yrs_months, month.abb, type = type)
# this year
months_thisyr <- seq(1, as.numeric(format(Sys.Date(), "%m")))
months_thisyr <- months_thisyr[-length(months_thisyr)]
if (!length(months_thisyr) == 0) {
months_thisyr <- vapply(months_thisyr, function(z) {
if (nchar(z) == 1) paste0(0, z) else as.character(z)
}, "")
yrs_months_thisyr <- paste0(format(Sys.Date(), "%Y"), months_thisyr)
eachmonth_thisyr <- month.abb[1:grep(format(Sys.Date() - months(1), "%b"),
month.abb)]
urls_thisyr <- fun(yrs_months_thisyr, eachmonth_thisyr, type = type)
} else {
urls_thisyr <- c()
}
# all urls
c(urls, urls_thisyr)
}
make_urls_shp <- function(yrs_months, mos, type = NULL) {
do.call(
"c",
lapply(c('south', 'north'), function(x) {
mm <- paste(
vapply(seq_along(mos), function(z) {
if (nchar(z) == 1) paste0(0, z) else as.character(z)
}, ""),
mos,
sep = "_"
)
tmp <- sprintf(ftp_url_shp, x, mm)
route <- paste('extent_', switch(x, south = "S", north = "N"),
'_', yrs_months, '_polygon_v3.0.zip',
sep = '')
file.path(tmp, route)
})
)
}
make_urls_geotiff <- function(yrs_months, mos, type = "extent") {
do.call(
"c",
lapply(c('south', 'north'), function(x) {
mm <- paste(
vapply(seq_along(mos), function(z) {
if (nchar(z) == 1) paste0(0, z) else as.character(z)
}, ""),
mos,
sep = "_"
)
tmp <- sprintf(ftp_url_geotiff, x, mm)
route <- paste(switch(x, south = "S", north = "N"),
'_', yrs_months, sprintf('_%s_v3.0.tif', type),
sep = '')
file.path(tmp, route)
})
)
}
ftp_url_shp <-
'ftp://sidads.colorado.edu/DATASETS/NOAA/G02135/%s/monthly/shapefiles/shp_extent/%s'
ftp_url_geotiff <-
'ftp://sidads.colorado.edu/DATASETS/NOAA/G02135/%s/monthly/geotiff/%s'
#' Function to read shapefiles
#'
#' @export
#' @keywords internal
#' @param x A url
#' @param storepath Path to store data in
#' @return An object of class sp
readshpfile <- function(x, storepath = NULL) {
filename <- strsplit(x, '/')[[1]][length(strsplit(x, '/')[[1]])]
filename_noending <- strsplit(filename, "\\.")[[1]][[1]]
if (is.null(storepath)) {
storepath <- tempdir()
}
path_write <- paste0(storepath, '/', filename_noending)
path <- paste0(storepath, '/', filename)
bb <- try(download.file(x, path, quiet = TRUE), silent = TRUE)
if (inherits(bb, "try-error")) {
stop('Data not available, ftp server may be down')
}
dir.create(path_write, showWarnings = FALSE)
unzip(path, exdir = path_write)
my_layer <- rgdal::ogrListLayers(path.expand(path_write))
ggplot2::fortify(
suppressWarnings(rgdal::readOGR(path.expand(path_write), layer = my_layer,
verbose = FALSE, stringsAsFactors = FALSE)))
}
#' ggplot2 map theme
#' @export
#' @keywords internal
theme_ice <- function() {
list(theme_bw(base_size = 18),
theme(panel.border = element_blank(),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
axis.text = element_blank(),
axis.ticks = element_blank()),
labs(x = '', y = ''))
}
|
|
##############################################################################################
## Script to export single state-year municipal election returns with coalition aggregates. ##
## Output saved in ./xport/ directory with one column for each party. ##
##############################################################################################
# Edit your path to aymu1989-present.coalAgg.csv
setwd("/home/eric/Desktop/MXelsCalendGovt/elecReturns/data")
# if my machine use scripts in disk
pth <- ifelse (Sys.info()["user"] %in% c("eric", "magar"),
"~/Dropbox/data/useful-functions",
"https://raw.githubusercontent.com/emagar/useful-functions/master"
)
#pth <- "https://raw.githubusercontent.com/emagar/useful-functions/master" # debug
# Reads xport function
pth <- paste(pth, "xport-function.r", sep = "/")
source(pth)
## # if no internet connection, source from disk version
## source("~/dropbox/data/useful-functions/xport-function.r")
# Usage
xport(e=2, y=2019) # where e is edon or edo
# or be prompted for a year
xport(e=2)
|
|
%r
library(SparkR)
df <- sql("SELECT * FROM `v_7d`")
xt <- xtabs(~prior_type+current_type, data=collect(df))
print(xt)
chisq.test(xt)
|
|
linear.bounded <- function(x, a, b, minY = 0, maxY = 1) {
y = a * x + b
y[y>maxY] = maxY
y[y<minY] = minY
return(y)
}
|
|
\clearpage
\subsubsection{Case Statement} % (fold)
\label{sub:case_statement}
The case statement is the second kind of branching statement. It lets you create paths that execute based on matching the value of an expression, so a single case statement can handle many alternative paths.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{./topics/control-flow/diagrams/CaseStatement}
\caption{Case statement selectively runs multiple branches of code}
\label{fig:branching-case-statement}
\end{figure}
\mynote{
\begin{itemize}
\item The case statement is a kind of \textbf{action}. It allows you to command the computer to select a path based upon the value of an expression.
\item Each path within the Case Statement has a value. When the computer executes the case statement the path values are used to determine which path will be taken.
\item In C and Pascal the Case Statement only works with Ordinal Values. This limits you to using Character or Integer values within the Case Statement's Expression.
\item The Case Statement has one entry point, multiple paths, and then one exit point.
\end{itemize}
}
% section case_statement (end)
|
|
%!TEX root = ./jctt.tex
\newcommand{\rell}{^\ell} % raise to ellth power
\newcommand{\relll}{^{\ell+1}} % raise to ell + 1 th power
\newcommand{\rellh}{^{\ell+1/2}} % raise to ell + 1/2 power
\newcommand{\paren}[1]{\left(#1\right)}
\newcommand{\br}[1]{\left[#1\right]}
\newcommand{\curl}[1]{\left\{#1\right\}}
\newcommand{\eddphi}[1]{\edd_{#1}\phi_{#1}}
\newcommand{\ALPHA}[2]{\frac{#1}{\sigma_{t,#2} h_{#2}}}
\section{The VEF Method}
\subsection{The Algorithm}
Here, we describe the VEF method for a planar geometry, fixed-source problem:
\change{
\begin{equation}
\mu \pderiv{\psi}{x} \paren{x, \mu} + \sigma_t(x) \psi(x,\mu) =
\frac{\sigma_s(x)}{2} \phi(x) + \frac{Q(x)}{2} \,,
\end{equation}
where $\mu$ is the $x$--axis cosine of the direction of neutron flow, $\sigma_t(x)$ and $\sigma_s(x)$ are the total and scattering macroscopic cross sections, $Q(x)$ is the isotropic fixed source, $\psi(x, \mu)$ is the angular flux,
and $\phi(x)$ is the scalar flux:
\begin{equation}
\phi(x) = \int_{-1}^1 \psi(x,\mu') \ud \mu' \,.
\end{equation}
}
Applying the Discrete Ordinates (\SN) angular discretization yields the following set of $N$ coupled, ordinary differential equations:
\begin{equation} \label{eq:sn}
\mu_n \dderiv{\psi_n}{x}(x) + \sigma_t(x) \psi_n(x) =
\frac{\sigma_s(x)}{2} \phi(x) + \frac{Q(x)}{2} \,, \quad 1 \leq n \leq N \,,
\end{equation}
where $\psi_n(x) = \psi(x, \mu_n)$ is the angular flux due to neutrons with directions in the cone defined by $\mu_n$. The $\mu_n$ are given by an $N$-point Gauss quadrature rule such that the scalar flux is numerically integrated as follows:
\begin{equation} \label{eq:phiquad}
\phi(x) = \sum_{n=1}^N w_n \psi_n(x) \,,
\end{equation}
where $w_n$ is the quadrature weight corresponding to $\mu_n$.
The VEF method begins by solving Eq.~\ref{eq:sn} while lagging the scattering source. This is called a Source Iteration (SI),
and is represented as follows:
\begin{equation} \label{eq:si}
\mu_n \dderiv{}{x}\psi_n\rellh(x) + \sigma_t(x) \psi_n\rellh(x) =
\frac{\sigma_s(x)}{2} \phi^\ell(x) + \frac{Q(x)}{2} \,, \quad 1 \leq n \leq N \,,
\end{equation}
where $\ell$ is the iteration index. The scalar flux used in the scattering source, $\phi\rell$, is assumed to be known either from the previous iteration or from the initial guess if $\ell=0$. The use of a half-integral index indicates that SI is the first of a two-step iteration scheme. If one is only doing SI without acceleration, the second step would simply be to set the final scalar flux iterate to the iterate after the source iteration:
\begin{equation} \label{eq:siupdate}
\phi\relll(x) = \phi\rellh(x) \,.
\end{equation}
However, SI is slow to converge in optically thick and highly scattering systems. This is the motivation for accelerating
SI using the VEF method.
The second iterative step of the VEF method is to obtain a final ``accelerated'' iterate for
the scalar flux by solving the VEF drift-diffusion equation using angular flux shape information from the source iteration step:
\begin{equation} \label{eq:drift}
-\dderiv{}{x} \frac{1}{\sigma_t(x)} \dderiv{}{x} \bracket{\edd\rellh(x)\phi\relll(x)} + \sigma_a(x) \phi\relll(x) = Q(x) \,,
\end{equation}
where the Eddington factor is given by
\begin{equation} \label{eq:eddington}
\edd\rellh(x) = \frac{\int_{-1}^1 \mu^2 \psi\rellh(x, \mu) \ud \mu}{\int_{-1}^1 \psi\rellh(x, \mu) \ud \mu} \, .
\end{equation}
\change{While transport-consistent boundary conditions must be defined for Eq. \ref{eq:drift}, there is not a unique way to do this. Thus, we postpone discussion of the boundary conditions to Sec. 2.3.}
Note that the Eddington factor depends only upon the angular shape of the angular flux, and not its magnitude. This drift-diffusion equation is derived by first taking the first two angular moments of Eq.~\ref{eq:sn}:
\begin{subequations}
\begin{equation} \label{eq:zero}
\dderiv{}{x} J (x) + \sigma_a(x) \phi(x) = Q(x) \,,
\end{equation}
\begin{equation} \label{eq:first}
\dderiv{}{x} \bracket{\edd(x) \phi (x)} + \sigma_t(x) J(x) = 0 \,,
\end{equation}
\end{subequations}
% added isotropic part since the MMS solution uses a non-isotropic source (removed JEM 11-18-17)
% added current definition since it was never defined.
\change{where $J(x)$ is the current:
\begin{equation}
J(x) = \int_{-1}^1 \mu' \psi(x,\mu') \ud \mu' \,.
\end{equation}}
Then Eq.~\ref{eq:first} is solved for $J(x)$, and this expression is then substituted into
Eq.~\ref{eq:zero}.
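This substitution step can be written out explicitly: solving Eq.~\ref{eq:first} for the current gives
\begin{equation}
J(x) = -\frac{1}{\sigma_t(x)} \dderiv{}{x} \bracket{\edd(x) \phi(x)} \,,
\end{equation}
and inserting this expression for $J(x)$ into Eq.~\ref{eq:zero} reproduces the drift-diffusion equation, Eq.~\ref{eq:drift}.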
Performing a SI, computing the Eddington factor from the SI angular flux iterate, and then solving
the drift-diffusion equation to obtain a new scalar flux iterate completes one accelerated iteration. These iterations
are repeated until convergence of the scalar flux at step $\ell+1$ is achieved. The fluxes at steps $\ell+1/2$ and $\ell+1$ will
individually converge, but not necessarily to each other unless the \SN and drift-diffusion equations are consistently
differenced or the spatial truncation error is negligible.
Acceleration occurs because the angular shape of the angular flux, and thus the Eddington factor, converges much faster than the scalar flux. In addition, the solution of the drift-diffusion equation includes scattering. This inclusion compensates for lagging
the scattering source in the SI step.
The VEF method allows the \SN equations and drift-diffusion equations to be solved with arbitrarily different spatial discretization methods. The following sections present the application of the Lumped Linear Discontinuous Galerkin (LLDG) spatial discretization to the \SN equations and the constant-linear Mixed Finite-Element Method (MFEM) to the VEF drift-diffusion equation.
\subsection{Lumped Linear Discontinuous Galerkin \SN}
\begin{figure}
\centering
\input{figs/lldg.pdf_tex}
\caption{The distribution of unknowns within an LLDG cell. The superscript $+$ and $-$ indicate the angular fluxes for $\mu_n>0$ and $\mu_n<0$, respectively. }
\label{fig:lldg_grid}
\end{figure}
The spatial grid and distribution of unknowns for an LLDG cell are shown in Fig.~\ref{fig:lldg_grid}. We assume a computational domain of length $x_b$ discretized into $I$ cells. The indices for cell centers are integral and the indices for cell edges are half integral.
The two unknowns for discrete angle $\mu_n$ within cell $i$ are the left and right angular fluxes, $\psi_{n,i,L}\rellh$ and $\psi_{n,i,R}\rellh$. The angular flux dependence within cells is linear and is given in cell $i$ by
\begin{equation} \label{eq:afdef}
\psi_{n,i}\rellh(x) = \psi_{n,i,L}\rellh B_{i,L}(x) + \psi_{n,i,R}\rellh B_{i,R}(x) \,, \quad x \in (x_{i-1/2},x_{i+1/2}),
\end{equation}
where
\begin{subequations}
\begin{equation}\label{eq:bfunL}
B_{i,L}(x) = \begin{cases}
\frac{x_{i+1/2} - x}{x_{i+1/2} - x_{i-1/2}} \,, & x \in [x_{i-1/2}, x_{i+1/2}] \\
0 \,, & \text{otherwise}
\end{cases} \,,
\end{equation}
\begin{equation}\label{eq:bfunR}
B_{i,R}(x) = \begin{cases}
\frac{x - x_{i-1/2}}{x_{i+1/2} - x_{i-1/2}} \,, & x \in [x_{i-1/2}, x_{i+1/2}] \\
0 \,, & \text{otherwise}
\end{cases} \,,
\end{equation}
\end{subequations}
are the LLDG basis functions.
The cell centered angular flux is the average of the left and right discontinuous edge fluxes:
\begin{equation} \label{eq:lldg_i}
\psi_{n,i}\rellh = \half\left(\psi_{n,i,L}\rellh + \psi_{n,i,R}\rellh\right) \,.
\end{equation}
The interface or cell-edge fluxes are uniquely defined by upwinding:
\begin{subequations}
\begin{equation} \label{eq:downwind}
\psi_{n,i-1/2}\rellh = \begin{cases}
\psi_{n,i-1,R}\rellh \,, & \mu_n > 0 \\
\psi_{n,i,L}\rellh \,, & \mu_n < 0
\end{cases} \,,
\end{equation}
\begin{equation} \label{eq:upwind}
\psi_{n,i+1/2}\rellh = \begin{cases}
\psi_{n,i,R}\rellh \,, & \mu_n > 0 \\
\psi_{n,i+1,L}\rellh \,, & \mu_n < 0
\end{cases} \,.
\end{equation}
\end{subequations}
The fixed source is also assumed to be linear within each cell:
\begin{equation} \label{eq:Qdef}
Q_{i}(x) = Q_{i,L} B_{i,L}(x) + Q_{i,R} B_{i,R}(x) \,, \quad x \in [x_{i-1/2},x_{i+1/2}] \,.
\end{equation}
Because there is no spatial derivative of the fixed source, there is no need to uniquely
define the fixed sources on the cell edges.
The unlumped Linear Discontinuous Galerkin discretization for Eq.~\ref{eq:si} is obtained by
substituting $\psi_{n,i}\rellh(x)$ from Eq.~\ref{eq:afdef} and $Q_{i}(x)$ from Eq.~\ref{eq:Qdef} into Eq.~\ref{eq:si},
sequentially multiplying the resultant equation by each basis function, and integrating over
each cell with integration by parts of the spatial derivative term.
The lumped discretization equations are obtained simply by performing all volumetric integrals (after formal integration by parts
of the spatial derivative term) using trapezoidal-rule quadrature.
The LLDG discretization of Eq.~\ref{eq:si} is given by:
\begin{subequations}
\begin{equation} \label{eq:lldg_l}
\mu_n \left(\psi_{n,i}\rellh - \psi_{n, i-1/2}\rellh\right)
+ \frac{\sigma_{t,i} h_i}{2} \psi_{n,i,L}\rellh
= \frac{\sigma_{s,i} h_i}{4} \phi_{i,L}\rell + \frac{h_i}{4} Q_{i,L} \,,
% 1 \leq n \leq N \,,
% 1 \leq i \leq I\,,
\end{equation}
\begin{equation} \label{eq:lldg_r}
\mu_n \left(\psi_{n,i+1/2}\rellh - \psi_{n,i}\rellh\right)
+ \frac{\sigma_{t,i} h_i}{2} \psi_{n,i,R}\rellh
= \frac{\sigma_{s,i} h_i}{4} \phi_{i,R}\rell + \frac{h_i}{4} Q_{i,R} \,,
% 1 \leq n \leq N \,,
% 1 \leq i \leq I\,,
\end{equation}
\end{subequations}
where $h_i$, $\sigma_{t,i}$, $\sigma_{s,i}$, and $Q_{i,L/R}$ are the cell width, total cross section, scattering cross section,
and fixed sources in cell $i$. The discontinuous scalar fluxes, $\phi_{i,L/R}\rell$, are assumed to be known from
the drift-diffusion step of the previous iteration or the initial guess when $\ell=0$. Equations \ref{eq:lldg_i}, \ref{eq:downwind}, \ref{eq:upwind}, \ref{eq:lldg_l}, and \ref{eq:lldg_r} can be combined and rewritten as
follows
\begin{equation} \label{eq:sweepLR}
\left[\begin{matrix}
\mu_n + \sigma_{t,i} h_i & \mu_n \\
-\mu_n & \mu_n + \sigma_{t,i} h_i \\
\end{matrix}\right]
\left[\begin{matrix}
\psi_{n,i,L}\rellh \\ \psi_{n,i,R}\rellh
\end{matrix}\right]
= \left[\begin{matrix}
\frac{\sigma_{s,i}h_i}{2} \phi_{i,L}\rell + \frac{h_i}{2} Q_{i,L} + 2\mu_n \psi_{n,i-1,R}\rellh \\
\frac{\sigma_{s,i}h_i}{2} \phi_{i,R}\rell + \frac{h_i}{2} Q_{i,R}
\end{matrix}\right] \,,
\end{equation}
for sweeping from left to right ($\mu_n > 0$) and
\begin{equation} \label{eq:sweepRL}
\left[\begin{matrix}
-\mu_n + \sigma_{t,i}h_i & \mu_n \\
-\mu_n & -\mu_n + \sigma_{t,i}h_i \\
\end{matrix} \right]
\left[\begin{matrix}
\psi_{n,i,L}\rellh \\ \psi_{n,i,R}\rellh
\end{matrix} \right]
= \left[\begin{matrix}
\frac{\sigma_{s,i}h_i}{2} \phi_{i,L}\rell + \frac{h_i}{2} Q_{i,L} \\
\frac{\sigma_{s,i}h_i}{2} \phi_{i,R}\rell + \frac{h_i}{2} Q_{i,R} - 2\mu_n \psi_{n,i+1,L}\rellh
\end{matrix} \right]
\,,
\end{equation}
for sweeping from right to left ($\mu_n < 0$), respectively. The right-hand sides of Eqs.~\ref{eq:sweepLR} and \ref{eq:sweepRL} are known because the scalar flux from the previous iteration, the fixed source, and the angular flux entering from the upwind cell are all known. By supplying the flux entering the left side of the first cell, the solution for $\mu_n > 0$ can be propagated from left to right by solving Eq.~\ref{eq:sweepLR}. Similarly, supplying the incident flux on the right boundary allows the solution for $\mu_n < 0$ to be propagated from right to left with Eq.~\ref{eq:sweepRL}. The Variable Eddington Factors needed in the drift-diffusion acceleration step are computed at the cell edges as follows:
\begin{equation} \label{lldg:edde}
\edd\rellh_{i\pm 1/2} = \frac{
\sum_{n=1}^N \mu_n^2 \psi_{n,i\pm 1/2}\rellh w_n
}{
\sum_{n=1}^N \psi_{n,i\pm 1/2}\rellh w_n
} \,,
\end{equation}
where the $\psi_{n,i\pm1/2}\rellh$ are defined by Eqs.~\ref{eq:downwind} and \ref{eq:upwind}. The Eddington factors are
computed within cell $i$ as follows:
\begin{equation} \label{lldg:eddi}
\edd\rellh(x) = \frac{
\sum_{n=1}^N \mu_n^2 \psi_{n}\rellh(x) w_n
}{
\sum_{n=1}^N \psi_{n}\rellh(x) w_n
} \,, \quad x\in(x_{i-1/2},x_{i+1/2}),
\end{equation}
where $\psi_{n}\rellh(x)$ is defined by Eq.~\ref{eq:afdef}.
|
|
\documentclass[10pt,colorlinks=true,urlcolor=blue]{moderncv}
\usepackage{utopia}
\moderncvtheme[blue]{classic}
\usepackage[utf8]{inputenc}
\usepackage{multibbl}
\newbibliography{pub}
\newbibliography{techreports}
\newbibliography{talks}
\newbibliography{misc}
\newbibliography{conf}
\pagenumbering{arabic}
\usepackage{lastpage}
\rfoot{\addressfont\itshape\textcolor{gray}{Page \thepage\ of \pageref{LastPage}}}
%\newbibliography{talks}
\usepackage[scale=0.8]{geometry}
\newcommand{\cvdoublecolumn}[2]{%
\cvline{}{}{%
\begin{minipage}[t]{\listdoubleitemmaincolumnwidth}#1\end{minipage}%
\hfill%
\begin{minipage}[t]{\listdoubleitemmaincolumnwidth}#2\end{minipage}%
}%
}
% usage: \cvreference{name}{address line 1}{address line 2}{address line 3}{address line 4}{e-mail address}{phone number}
% Everything but the name is optional
% If \addresssymbol, \emailsymbol or \phonesymbol are specified, they will be used.
% (Per default, \addresssymbol isn't specified, the other two are specified.)
% If you don't like the symbols, remove them from the following code, including the tilde ~ (space).
\newcommand{\cvreference}[7]{%
\textbf{#1}\newline% Name
\ifthenelse{\equal{#2}{}}{}{\addresssymbol~#2\newline}%
\ifthenelse{\equal{#3}{}}{}{#3\newline}%
\ifthenelse{\equal{#4}{}}{}{#4\newline}%
\ifthenelse{\equal{#5}{}}{}{#5\newline}%
\ifthenelse{\equal{#6}{}}{}{\emailsymbol~\texttt{#6}\newline}%
\ifthenelse{\equal{#7}{}}{}{\phonesymbol~#7}}
\AtBeginDocument{\recomputelengths} \firstname{Minh}
\familyname{Tang} \address{Applied Mathematics and Statistics \\
Johns Hopkins University \\ 3400 N. Charles St}{Baltimore, MD
21218} \email{mtang10@jhu.edu}
\homepage{www.cis.jhu.edu/\textasciitilde minh}
\begin{document}
\maketitle
\section{Education}
\cventry{2010}{Ph.D. in Computer Science}{\newline Indiana University Bloomington}{}{}{}
\cventry{2004}{M.S. in Computer Science}{\newline University of Wisconsin Milwaukee}{}{}{}
\cventry{2001}{B.S. in Computer Science}{\newline Assumption University, Thailand}{}{}{}
\section{Work Experience}
\cventry{01/17 -- now}{Associate Research Professor}
{\newline Department of Applied Mathematics and Statistics, Johns Hopkins University}{}{}{}
\cventry{07/14 -- 12/16}{Assistant Research Professor}
{\newline Department of Applied Mathematics and Statistics, Johns Hopkins University}{}{}{}
\cventry{10/10 -- 06/14}{Postdoctoral Fellow}{\newline Department of
Applied Mathematics and Statistics, Johns Hopkins
University}{}{}{}
\section{Research Interests}
\cvline{}{statistical pattern recognition, dimensionality reduction,
statistical inference on graphs}{}
\section{Grants and Research Award}
\cvline{\small 03/17 -- 08/21}{co-PI on DARPA Data-Driven Discovery of Models (PI: Carey Priebe)}
\cvline{\small 08/18 -- 08/19}{PI on Microsoft Research Award: Efficiency and Optimality in Graph Inference}
\section{Journal Publications}
\cvline{2018+}{J.~Cape and \textbf{M.~Tang} and C.~E.~Priebe. Signal-plus-noise matrix models: eigenvector deviations and fluctuations. {\em Biometrika}, accepted for publication. arXiv preprint at \url{http://arxiv.org/abs/1802.00381}}
\cvline{2018+}{A.~Athreya and \textbf{M.~Tang} and Y.~Park and C.~E.~Priebe. On estimation and inference in latent structure random graphs. {\em Statistical Science}, accepted for publication. arXiv preprint at \url{http://arxiv.org/abs/1806.01401}}
\cvline{2018+}{J.~Cape and \textbf{M.~Tang} and C.~E.~Priebe. The
two-to-infinity norm and singular subspace geometry with applications
to high-dimensional statistics. {\em Annals of Statistics}, accepted for publication. arXiv
preprint at \url{http://arxiv.org/abs/1705.08917}.}
\cvline{2018}{\textbf{M.~Tang} and C.~E.~Priebe. Limit theorems for
eigenvectors of the normalized Laplacian for random graphs. {\em
Annals of Statistics}, Vol. 46, pp. 2360--2415}
\cvline{2018}{A.~Athreya and D.~E.~Fishkind and K.~Levin and
V.~Lyzinski and Y.~Park and Y.~Qin and D.~L.~Sussman and
\textbf{M.~Tang} and J.~T.~Vogelstein and C.~E.~Priebe, Statistical
inference on random dot product graphs: a survey, {\em Journal of
Machine Learning Research}, Vol. 18.}
\cvline{2017}{J.~Cape, \textbf{M.~Tang} and C.~E.~Priebe. The
Kato-Temple inequality and eigenvalue concentration. {\em Electronic
Journal of Statistics}, Vol. 11, pp. 3954--3978.}
\cvline{2017}{V.~Lyzinski, \textbf{M.~Tang}, A.~Athreya, Y.~Park and
C.~E.~Priebe. Community detection and classification in hierarchical
stochastic blockmodels. {\em IEEE Transactions on Network Science and
Engineering}, Vol. 4, pp. 13--26.}
\cvline{2017}{\textbf{M.~Tang}, A.~Athreya, D.~L.~Sussman,
V.~Lyzinski, Y.~Park and C.~E.~Priebe. A semiparametric two-sample
hypothesis testing problem for random graphs. {\em Journal of
Computational and Graphical Statistics}, Vol. 26, pp. 344--354.}
\cvline{2017}{\textbf{M.~Tang}, A.~Athreya, D.~L.~Sussman,
V.~Lyzinski, and C.~E.~Priebe. A nonparametric two-sample hypothesis
testing problem for random dot product graphs. {\em Bernoulli},
Vol. 23, pp. 1599--1630.}
\cvline{2016}{S.~Suwan, D.~S.~Lee, R.~Tang, D.~L.~Sussman,
\textbf{M.~Tang} and C.~E.~Priebe. Empirical Bayes estimation for the
stochastic blockmodel. {\em Electronic Journal of Statistics},
Vol. 10, pp. 761--782.}
\cvline{2016}{ A.~Athreya, V.~Lyzinski, D.~J.~Marchette, C.~E.~Priebe,
D.~L.~Sussman and \textbf{M.~Tang}. A central limit theorem for
scaled eigenvectors of random dot product graphs. {\em Sankhya Series
A}, Vol. 78, pp. 1--18.}
\cvline{2015}{ C.~E.~Priebe, D.~L.~Sussman, \textbf{M.~Tang} and
J.~T.~Vogelstein. Statistical inference on errorfully observed
graphs. {\em Journal of Computational and Graphical Statistics},
Vol. 24, pp. 930--953.}
\cvline{2014}{ V.~Lyzinski, D.~L.~Sussman, \textbf{M.~Tang},
A.~Athreya and C.~E.~Priebe. Perfect clustering for stochastic
blockmodel graphs via adjacency spectral embedding. {\em Electronic
Journal of Statistics}, Vol. 8, pp. 2905--2922.}
\cvline{2014}{ C.~Shen, M.~Sun, \textbf{M.~Tang} and
C.~E.~Priebe. Generalized canonical correlation analysis for
classification in high dimensions. {\em Journal of Multivariate
Analysis}, Vol. 130, pp. 310--322.}
\cvline{2014}{D.~L.~Sussman, \textbf{M.~Tang} and
C.~E.~Priebe. Consistent latent position estimation and vertex
classification for random dot product graphs. {\em IEEE Transactions
on Pattern Analysis and Machine Intelligence}, Vol. 36, pp. 48--57.}
\cvline{2014}{H.~Wang, \textbf{M.~Tang}, Y.~Park, and
C.~E.~Priebe. Locality statistics for anomaly detection in time-series
of graphs. {\em IEEE Transactions on Signal Processing.}, Vol. 62,
pp. 703--717.}
\cvline{2013}{D.~E.~Fishkind, D.~L.~Sussman, \textbf{M.~Tang},
J.~T.~Vogelstein, and C.~E.~Priebe. Consistent adjacency-spectral
partitioning for the stochastic block model when the model parameters
are unknown. {\em SIAM Journal on Matrix Analysis and Applications},
Vol. 34, pp. 23--39.}
\cvline{2013}{N.~H.~Lee, J.~Yoder, \textbf{M.~Tang} and C.~E.~Priebe.
On latent position inference from doubly stochastic messaging
activities. {\em Multiscale Modeling and Simulation}, Vol. 11,
pp. 683--718.}
\cvline{2013}{M.~Sun, C.~E.~Priebe and \textbf{M.~Tang}. Generalized
canonical correlation analysis for disparate data fusion. {\em Pattern
Recognition Letters}, Vol. 34, pp. 194--200.}
\cvline{2013}{\textbf{M.~Tang}, Y.~Park, N.~H.~Lee and C.~E.~Priebe.
Attribute fusion in a latent process model for time series of
graphs. {\em IEEE Transactions on Signal Processing}, Vol. 61,
pp. 1721--1732.}
\cvline{2013}{\textbf{M.~Tang} and D.~L.~Sussman and
C.~E.~Priebe. Universally consistent vertex classification for latent
positions graphs. {\em Annals of Statistics}, Vol. 41,
pp. 1406--1430.}
\cvline{2012}{D.~L.~Sussman, \textbf{M.~Tang}, D.~E.~Fishkind and
C.~E.~Priebe. A consistent adjacency spectral embedding for stochastic
blockmodel graphs. {\em Journal of the American Statistical
Association}, Vol. 107, pp. 1119--1128.}
\section{Preprints}
\cvline{2018}{C.~E.~Priebe and Y.~Park and J.~T.~Vogelstein and J.~M.~Conroy and V.~Lyzinski and \textbf{M.~Tang} and A.~Athreya and J.~Cape and E.~Bridgeford. On a `two truths' phenomenon in spectral graph clustering. arXiv preprint at \url{http://arxiv.org/abs/1808.07801}.}
\cvline{2018}{G.~Li and \textbf{M.~Tang} and N.~Charon and C.~E.~Priebe. A central limit theorem for classical multidimensional scaling. arXiv preprint at \url{http://arxiv.org/abs/1804.00631}.}
\cvline{2018}{\textbf{M.~Tang}. The eigenvalues of stochastic blockmodel graphs. arXiv preprint at \url{http://arxiv.org/abs/1803.11551}.}
\cvline{2018}{P.~Rubin-Delanchy and C.~E.~Priebe and \textbf{M.~Tang} and J.~Cape. A statistical interpretation of spectral embedding: the generalised random dot product graph.
arXiv preprint at \url{http://arxiv.org/abs/1709.05506}.}
\cvline{2017}{\textbf{M.~Tang} and J.~Cape and
C.~E.~Priebe. Asymptotically efficient estimators for stochastic
blockmodels: the naive MLE, the rank-constrained MLE, and the
spectral. arXiv preprint at
\url{http://arxiv.org/abs/1710.10936}.}
\cvline{2017}{J.~T.~Vogelstein and \textbf{M.~Tang} and E.~Bridgeford
and D.~Zheng and R.~Burns and M.~Maggioni. Linear optimal low rank
projection for high-dimensional multi-class data. arXiv preprint at \url{http://arxiv.org/abs/1709.01233}.}
\cvline{2017}{R.~Tang and \textbf{M.~Tang} and J.~T.~Vogelstein and
C.~E.~Priebe. Robust estimation from multiple graphs under gross error
contamination. arXiv preprint at
\url{http://arxiv.org/abs/1707.03487}.}
\cvline{2017}{K.~Levin and A.~Athreya and \textbf{M.~Tang} and
C.~E.~Priebe and V.~Lyzinski. A central limit theorem for an omnibus
embedding of random dot product graphs. arXiv preprint at \url{http://arxiv.org/abs/1705.08832}.}
\cvline{2017}{P.~Rubin-Delanchy and C.~E.~Priebe and \textbf{M.~Tang}.
Consistency of adjacency spectral embedding for the mixed membership
stochastic blockmodel. arXiv preprint at
\url{http://arxiv.org/abs/1705.04518}.}
\cvline{2017}{C.~E.~Priebe and Y.~Park and \textbf{M.~Tang} and
A.~Athreya and V.~Lyzinski and J.~T.~Vogelstein and Y.~Qin and
B.~Cocanougher and K.~Eichler and M.~Zlatic and
A.~Cardona. arXiv preprint at \url{http://arxiv.org/abs/1704.03297}.}
\cvline{2016}{A.~Athreya, \textbf{M.~Tang}, V.~Lyzinski, Y.~Park,
B.~Lewis, M.~Kane, and C.~E.~Priebe. Numerical tolerance for spectral
decompositions of random dot product graphs. arXiv preprint at
\url{http://arxiv.org/abs/1608.00451}.}
\cvline{2013}{\textbf{M.~Tang}, Y.~Park and
C.~E.~Priebe. Out-of-sample extension for latent position
graphs. arXiv preprint at \url{http://arxiv.org/abs/1305.4893}.}
% \nocite{pub}{*}
% \nocite{conf}{*}
% \nocite{misc}{*}
% \nocite{talks}{*}
% \bibliographystyle{pub}{cv}
% \bibliography{pub}{cv1}{Journal articles}
% \bibliographystyle{misc}{cv}
% \bibliography{misc}{cv2}{Preprints}
\section{Invited Talks}
\cvline{\small 09/2017}{Department of Mathematics and Statistics, Boston University.}
\cvline{\small 08/2017}{Joint Statistical Meetings, Baltimore, MD, USA.}
\cvline{\small 11/2015}{Department of Statistics, Indiana University Bloomington.}
\cvline{\small 02/2015}{School of Industrial and Systems Engineering,
Georgia Institute of Technology.}
\cvline{\small 02/2015}{Department of Statistics, Virginia Tech.}
\cvline{\small 08/2014}{Joint Statistical Meetings, Boston, MA, USA.}
\cvline{\small 05/2012}{Interface Symposia, Houston, TX, USA.}
% \bibliographystyle{conf}{cv}
% \bibliography{conf}{cv4}{Conference articles}
%\cventry{\small Pending}{co-PI}{}{JHU Science of Learning Institute}{The ABC's of fusion and inference from multiple
% connectome modalities (PI: Youngser Park)}{}
%\cventry{\small Pending}{PI}{}{National Science Foundation}{Algorithmic and theoretical advances for community detection and classification in hierarchical networks (co-PI: Vince Lyzinski)}{}
%\cventry{\small 2012--2017}{Key Personnel}{}{Defense Advanced Research
% Projects Agency}{DARPA XDATA: Fusion and Inference from Multiple
% and Massive Disparate Distributed Dynamic Data Sets (PI: Carey Priebe)}{}
%\cventry{\small 2014--2016}{Key Personnel}{}{National Science
% Foundation}{NSF Brain Eager: Discovery and characterization of
% neural circuitry from behavior, connectivity patterns and activity
% patterns (PI: Carey Priebe)}{}
%
\section{Teaching}
\cvline{JHU}{Generalized linear mixed models \& longitudinal data analysis (Spring 2017 \& 2018)}
\cvline{JHU}{Professor Joel Dean Award for Excellence in
Teaching (Spring 2016)}
\cvline{JHU}{Topics in statistical pattern
recognition (Spring 2016)}
\cvline{JHU}{Applied statistics and data analysis (Fall 2013 -- Fall 2017)}
\cvline{JHU}{Statistical learning and high-dimensional data
analysis (Spring 2011)}
\section{Mentoring}
\cvline{JHU}{Dissertation committee: Cencheng Shen (2015; second
reader), Heng Wang (2015), Jordan Yoder (2016), Runze Tang (2017;
second reader), Shangsi Wang (2018).}
\cvline{JHU}{Research Advising: Joshua Cape (2015 -- present)}
\section{Professional Services}
\cvline{}{Refereed papers for {\em Annals of Statistics}, {\em Annals of Applied Statistics}, {\em Statistical Science}, {\em Journal
of Computational and Graphical Statistics}, {\em IEEE Transactions
on Signal Processing}, {\em IEEE Transactions on Network Science},
{\em Electronic Journal of Statistics}, {\em Journal of Machine
Learning Research}, {\em IEEE Transactions on Knowledge and Data
Engineering}. }
\section{References}
\cvdoublecolumn{\cvreference{Michael W. Trosset}
{Department of Statistics}
{Indiana University Bloomington}
{919 E. 10th St}
{Bloomington, IN 47408, USA.}
{mtrosset@indiana.edu}
{}%
}
{\cvreference{Carey E. Priebe}
{Applied Mathematics and Statistics}
{Johns Hopkins University}
{3400 N. Charles St}
{Baltimore, MD 21218, USA.}
{cep@jhu.edu}
{}%
}
\cvdoublecolumn{\cvreference{Dirk Van Gucht}
{School of Informatics, Computing and Engineering}
{Indiana University Bloomington}
{700 N. Woodlawn Ave}
{Bloomington, IN 47405, USA.}
{vgucht@indiana.edu}
{}%
}
{\cvreference{Laurent Younes}
{Applied Mathematics and Statistics}
{Johns Hopkins University}
{3400 N. Charles St}
{Baltimore, MD 21218, USA.}
{laurent.younes@jhu.edu}
{}%
}
% \cvdoublecolumn{\cvreference{Thitipong Tanprasert}
% {Faculty of Science and Technology}
% {Assumption University}
% {Ramkhamhaeng Soi 24}
% {Hua Mak, Bangkok 10240, Thailand.}
% {thitipong@sci-tech.au.edu}
% {}%
% {}
% }
% {\cvreference{Donniell E. Fishkind}
% {Applied Mathematics and Statistics}
% {Johns Hopkins University}
% {3400 N. Charles St}
% {Baltimore, MD 21218, USA.}
% {def@jhu.edu}
% {}%
%}
\end{document}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End:
%%% template.tex
%%%
%%% This LaTeX source document can be used as the basis for your technical
%%% paper or abstract. Intentionally stripped of annotation, the parameters
%%% and commands should be adjusted for your particular paper - title,
%%% author, article DOI, etc.
%%% The accompanying ``template.annotated.tex'' provides copious annotation
%%% for the commands and parameters found in the source document. (The code
%%% is identical in ``template.tex'' and ``template.annotated.tex.'')
\documentclass[conference, 12pt]{acmsiggraph}
\title{Rotation Averaging for Global SfM}
\author{Bryce Evans \and Rafael Farias Marinheiro}
% \contactemail{}
\affiliation{Cornell University\thanks{\{bae43, rf356\}@cornell.edu}}
\pdfauthor{Rafael F. Marinheiro}
\keywords{structure from motion, rotation averaging, 3D reconstruction}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{subcaption}
\usepackage{algorithm}
\usepackage{algpseudocode}
% \usepackage{algorithm}
\begin{document}
%% \teaser{
%% \includegraphics[height=1.5in]{images/sampleteaser}
%% \caption{Spring Training 2009, Peoria, AZ.}
%% }
\maketitle
% \begin{abstract}
% We intend to create a interactive Crowd Simulation application using the approach described in \cite{kim2012interactive}. The user will be able to interact with the application with a commodity depth sensor \cite{Zhang:2012:MKS:2225053.2225203}, modifying the environment in real time.
% \end{abstract}
% \begin{figure}[Ht!]
% \centering
% \includegraphics[width=0.5\textwidth]{images/mpc3.jpg}
% \includegraphics[width=0.5\textwidth]{images/mpc2.jpg}
% \caption{World War Z (2013). Zombie agents were computationally simulated.}
% \label{fig:wwz}
% \end{figure}
% %% Use this only if you're preparing a technical paper to be published in the
% %% ACM 'Transactions on Graphics' journal.
% \begin{CRcatlist}
% \CRcat{I.2.11}{Artificial Intelligence}{Distributed Artificial Intelligence}{Multiagent systems};
% \CRcat{I.3.7}{Computer Graphics}{Three-Dimensional Graphics and Realism}{Animation};
% \CRcat{I.6.8}{Simulation and Modeling}{Types of Simulation}{Animation}
% \end{CRcatlist}
% %%% The ``\keywordlist'' prints out the user-defined keywords.
% \keywordlist
% \TOGlinkslist
% %% Required for all content.
% \copyrightspace
\section{Introduction}
Structure from motion (SfM) is a powerful tool for reconstructing environments without manual modeling by an artist. By taking a series of images and finding common points between them, a network of relative camera poses is created to represent the scene; this network typically contains errors that leave the reconstruction with no exactly consistent solution. Robust and efficient rotation averaging \cite{rotation} proposes several methods for solving this problem, but does not provide readily usable code. Moreover, despite the power of SfM applications, the long pipeline of steps involved is fractured and inconsistent across implementations. We present a publicly available Python module that solves the rotation averaging problem and integrates well with the rest of the pipeline, including other available code such as the translation averaging solver of \cite{translation}.
\section{Background}
Current SfM pipelines can be divided into two different classes. An iterative SfM pipeline (see figure \ref{fig:iterative}) tries to iteratively add cameras to the working set, running a bundle adjustment after each iteration. The quality of the results generated by such pipelines depends on the ordering of the cameras, and they are highly sensitive to outliers. One example of such a pipeline is the well-known Photo Tourism system \cite{snavely2006photo}.
A global SfM pipeline (see figure \ref{fig:global}) instead divides the problem into two different global subproblems: the Rotation Averaging problem and the Translation Averaging problem. These problems can be described as follows. Consider a connected graph $G = (V,E)$. Associated with each edge $e = (i, j)$ are a relative rotation $R_{ij} \in SO(3)$ and a relative translation $t_{ij} \in R^3$. We want to assign to each vertex of the graph a global rotation and translation $R_i, t_i$ such that we minimize:
\begin{eqnarray}
\sum_{(i,j) \in E} d_1(R_{j}R_{i}^{-1}, R_{ij}) \\
\sum_{(i,j) \in E} d_2(t_{j}-t_{i}, R_{i}t_{ij})
\end{eqnarray}
where $d_1$ and $d_2$ are metrics on $SO(3)$ and $R^3$ respectively. The first equation describes the Rotation Averaging problem and the second equation describes the Translation Averaging problem. Both problems can be solved globally, so the final solution does not depend on the ordering of the cameras and is more robust to outliers.
In this project, we have implemented the approach described in \cite{rotation} to solve the Rotation Averaging problem and we have used the approach described in \cite{translation} to solve the Translation Averaging problem.
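For concreteness, a common choice for $d_1$ is the geodesic (angular) distance on $SO(3)$. A minimal Python sketch of the rotation-averaging objective under that choice (the function names here are illustrative, not taken from either paper's code) might look like:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Geodesic (angular) distance on SO(3): the rotation angle of Ra * Rb^T.
# One common choice for the metric d1; names here are illustrative.
def d_geodesic(Ra, Rb):
    return np.linalg.norm(Rotation.from_matrix(Ra @ Rb.T).as_rotvec())

# Rotation-averaging objective: sum of d1(Rj Ri^{-1}, Rij) over all edges.
def rotation_cost(edges, R_rel, R_global):
    return sum(d_geodesic(R_global[j] @ R_global[i].T, R_rel[(i, j)])
               for (i, j) in edges)
```

A perfectly consistent set of global rotations drives this cost to zero, which makes it a convenient diagnostic while iterating.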
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1.0\textwidth]{images/iterative_sfm.png}
\caption{Iterative SfM Pipeline}
\label{fig:iterative}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1.0\textwidth]{images/global_sfm.png}
\caption{Global SfM Pipeline}
\label{fig:global}
\end{subfigure}
\caption{(a) On top, the Iterative SfM Pipeline. (b) At bottom, the Global SfM Pipeline.}
\label{fig:pipelines}
\end{figure}
\section{Rotation Averaging}
A rotation matrix $R$ is a $3 \times 3$ orthonormal matrix (i.e., $RR^\intercal = I = R^\intercal R$) such that $|R| = +1$. The rotation matrices form a group known as the Special Orthogonal Group $SO(3)$. $SO(3)$ is also a Lie group, so it has the following properties: (1) the neighborhood of a point in the group is topologically equivalent to a vector space known as the Lie algebra; (2) there exists a direct mapping between an element of the Lie group and an element of the Lie algebra. In the case of $SO(3)$, the associated Lie algebra is known as $\mathfrak{so}(3)$.
The first property can be used to locally linearize the Rotation Averaging problem. Let $\omega = \theta n \in \mathfrak{so}(3)$ be the vector in the Lie algebra that represents the rotation by $\theta$ radians around the unit-norm axis $n$, and let $R \in SO(3)$ be the equivalent rotation matrix (i.e., $\omega = \log(R)$). For small rotations, the properties of the Lie group yield the following first-order approximation:
\begin{equation}
\log(R_{j}R_{i}^{-1}) = \log(R_{ij}) = \omega_{ij} \approx \omega_{j} - \omega_{i}
\label{eqn:approx_lie}
\end{equation}
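This approximation can be checked numerically with SciPy's rotation utilities (the helper names below are our own); the discrepancy between $\log(R_j R_i^{-1})$ and $\omega_j - \omega_i$ is second order in the rotation angles:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Maps between SO(3) and so(3); helper names are illustrative, not the paper's.
def log_map(R):
    """Rotation matrix -> axis-angle vector omega = theta * n."""
    return Rotation.from_matrix(R).as_rotvec()

def exp_map(omega):
    """Axis-angle vector -> rotation matrix."""
    return Rotation.from_rotvec(omega).as_matrix()

# For small angles, log(R_j R_i^{-1}) is approximately omega_j - omega_i.
w_i = np.array([0.010, 0.000, 0.020])
w_j = np.array([0.012, -0.003, 0.021])
R_i, R_j = exp_map(w_i), exp_map(w_j)
# The residual is on the order of |omega_i x omega_j|, i.e. second order.
error = np.linalg.norm(log_map(R_j @ R_i.T) - (w_j - w_i))
```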
Given the set of relative rotations, one can write a matrix $A$ that encodes a linear system with all the equations involving $\omega_{ij}$. We then have the following linear system:
\begin{equation}
A\omega_{global} = \omega_{rel}
\end{equation}
However, equation (\ref{eqn:approx_lie}) is only an approximation. So instead of solving directly for $\omega_{global}$, we iteratively compute the error of the current solution ($\Delta\omega_{rel}$) and then update the solution (by $\Delta\omega_{global}$) using the following equation:
\begin{equation}
A\Delta\omega_{global} = \Delta\omega_{rel}
\end{equation}
The algorithm is described in Algorithm (\ref{algo:rot}). The linear system in line (\ref{algo:rot}:\ref{algo:roteq}) can be solved using different methods. One could use the linear least squares method; however, it is not robust enough to deal with outliers. Instead, we use two different methods: the first one ($\ell1$ Rotation Averaging -- $\ell1$RA) minimizes the $\ell1$-norm, i.e., it minimizes $|Ax - b|_1$, and the second one (Iteratively Reweighted Least Squares -- IRLS) robustly minimizes the $\ell2$-norm (see \cite{rotation} for more details).
\begin{algorithm}[h]
\caption{Rotation Averaging Algorithm}\label{algo:rot}
\begin{algorithmic}[1]
\State \textbf{Input:} The Relative Rotations $R_{rel} = \{R_{ij}\}$ and an initial guess of the solution
\State \textbf{Output:} The Global Rotations $R_{global} = \{R_{i}\}$
\Procedure{RotationAveraging}{$R_{rel}$, $R_{guess}$}
\State $A \gets ComputeMatrix(R_{rel})$
\ForAll{$(i, j) \in R_{rel}$} \Comment{Compute $\Delta\omega_{rel}$}
\State $\Delta R_{ij} \gets R_{j}^{-1}R_{ij}R_{i}$
\State $\Delta\omega_{ij} \gets \log(\Delta R_{ij})$
\EndFor
\While{$|\Delta\omega_{rel}| \ge \epsilon$}
\State $\Delta\omega_{global} \gets A\backslash \Delta\omega_{rel}$ \Comment{Solve for $\Delta\omega_{global}$} \label{algo:roteq}
\ForAll{$i\in R_{global}$} \Comment{Update the solution}
\State $R_{i} \gets R_{i}\exp(\Delta\omega_{i})$
\EndFor
\EndWhile
\State \textbf{return} $R_{global}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
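A compact, dense Python sketch of the algorithm follows, with an ordinary least-squares solve standing in for the robust $\ell1$RA/IRLS solvers; all names are illustrative, and the released module differs in its details:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Sketch of Algorithm 1. A plain least-squares solve replaces the robust
# l1RA/IRLS solvers of the paper; names are illustrative.
def rotation_averaging(n, edges, R_rel, R_guess, eps=1e-8, max_iter=50):
    # Build A: one 3-row block per edge, encoding omega_j - omega_i.
    A = np.zeros((3 * len(edges), 3 * n))
    for row, (i, j) in enumerate(edges):
        for k in range(3):
            A[3 * row + k, 3 * j + k] = 1.0
            A[3 * row + k, 3 * i + k] = -1.0
    R = [r.copy() for r in R_guess]
    for _ in range(max_iter):
        # Residuals: Delta omega_ij = log(R_j^{-1} R_ij R_i)
        dw_rel = np.concatenate([
            Rotation.from_matrix(R[j].T @ R_rel[(i, j)] @ R[i]).as_rotvec()
            for (i, j) in edges])
        if np.linalg.norm(dw_rel) < eps:
            break
        # Solve A * dw_global = dw_rel; the minimum-norm solution of
        # lstsq also fixes the global rotation ambiguity (gauge freedom).
        dw_global, *_ = np.linalg.lstsq(A, dw_rel, rcond=None)
        for i in range(n):
            # Update step: R_i <- R_i exp(Delta omega_i)
            R[i] = R[i] @ Rotation.from_rotvec(dw_global[3 * i:3 * i + 3]).as_matrix()
    return R
```

On a small consistent problem this converges in a handful of iterations; the robust solvers matter once the relative rotations contain outliers.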
\section{Implementation and Results}
We have implemented the Rotation Averaging algorithms in Python. The source code depends only on a few Python packages (SciPy \cite{scipy} and NetworkX \cite{networkx}) and is publicly available\footnote{\url{https://github.com/RafaelMarinheiro/RotationAveraging}}. We have also integrated our implementation with the SfM Init package\footnote{\url{https://github.com/RafaelMarinheiro/SfMInit}} \cite{translation}.
An initial guess for the global rotations is required to use these algorithms. To find it, we proceed as follows: given the graph $G = (V, E)$, we assign to each edge $(i, j)$ the weight $w(i, j) = d(R_{ij}, I)$ and compute the minimum spanning tree over these weights. We then select a random camera as the root and assign the global rotation of each vertex by propagating the relative rotations along the tree in depth-first order.
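A sketch of this initialization using NetworkX (names are illustrative; the released code may differ):

```python
import networkx as nx
import numpy as np
from scipy.spatial.transform import Rotation

# Spanning-tree initialization: weight each edge by the angle of R_ij,
# take the minimum spanning tree, and propagate rotations depth-first.
def initial_guess(n, R_rel, root=0):
    G = nx.Graph()
    for (i, j), R_ij in R_rel.items():
        angle = np.linalg.norm(Rotation.from_matrix(R_ij).as_rotvec())
        G.add_edge(i, j, weight=angle)   # w(i, j) = d(R_ij, I)
    T = nx.minimum_spanning_tree(G)
    R = {root: np.eye(3)}                # fix the gauge at the root camera
    for u, v in nx.dfs_edges(T, source=root):
        R_uv = R_rel[(u, v)] if (u, v) in R_rel else R_rel[(v, u)].T
        R[v] = R_uv @ R[u]               # R_uv = R_v R_u^{-1}  =>  R_v = R_uv R_u
    return [R[i] for i in range(n)]
```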
In our implementation, we discovered that the algorithm has some numerical problems. For instance, if the input matrices are not exactly rotation matrices, the conversions between $SO(3)$ and $\mathfrak{so}(3)$ and vice versa fail drastically. To solve this problem, we replace each input matrix $M$ with its closest orthonormal matrix, given by the formula $O = M(M^\intercal M)^{-\frac{1}{2}}$.
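In practice this projection can be computed via an SVD, which is equivalent to the formula $O = M(M^\intercal M)^{-\frac{1}{2}}$ for well-conditioned inputs; the sketch below (our own naming) additionally guards against reflections so the result stays in $SO(3)$:

```python
import numpy as np

# Closest orthonormal matrix to M: O = M (M^T M)^{-1/2}. With M = U S V^T,
# this simplifies to O = U V^T, computed here via SVD for numerical safety.
def closest_rotation(M):
    U, _, Vt = np.linalg.svd(M)
    O = U @ Vt
    if np.linalg.det(O) < 0:        # flip one axis to avoid a reflection
        O = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return O
```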
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.81\textwidth]{images/final_span.png}
\caption{Initial guess after Spanning Tree}
\label{fig:post_span}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.81\textwidth]{images/final_post_l1.png}
\caption{After $\ell1$RA}
\label{fig:post_l1ra}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.81\textwidth]{images/final_post_irls.png}
\caption{After IRLS}
\label{fig:post_irls}
\end{subfigure}
\caption{Histogram of error after each part of the algorithm. The abscissa value represents the error in degrees. The closer to zero, the better.}
\label{fig:results}
\end{figure}
We have tested our implementation on the Notre Dame dataset published by \cite{translation}. Figure (\ref{fig:results}) shows the error after each step of the algorithm. First, we run the spanning tree initialization described earlier (see figure \ref{fig:post_span}); then we run only 5 iterations of the $\ell1$RA algorithm (see figure \ref{fig:post_l1ra}). After that, we use the output of the $\ell1$RA algorithm as input to the IRLS algorithm and run 20 iterations of it (see figure \ref{fig:post_irls}). Notice that the error is indeed reduced after each step. The reported error metrics match the results published by both \cite{rotation} and \cite{translation}.
\section{Conclusion}
In summary, we have successfully implemented a Rotation Averaging algorithm and made the code publicly available. Our implementation is easily extensible and does not depend on non-free tools, so we hope that it will be a valuable tool for the research community.
\bibliographystyle{acmsiggraph}
\bibliography{proposal}
\end{document}
%% declare document class and geometry
\documentclass[12pt]{article} % use larger type; default would be 10pt
\usepackage[english]{babel} % for hyphenation dictionary
%\setdefaultlanguage{english} % polyglossia command for use with XeTeX / LuaTeX
\usepackage{geometry} % setup page geometry
\geometry{letterpaper, portrait, margin=1in}
%% import packages and commands
\input{header.tex}
%% title information
\title{Subj 2XXA -- Somesortof Theory -- LecXX}
\author{Joe Bruin}
\date{\today} % Activate to display a given date or no date (if empty),
% otherwise the current date is printed
% format: formatdate{dd}{mm}{yyyy}
%% begin document
\begin{document}
\maketitle
\section{Lecture on theory of things}
We begin with a study of blah blah.
\subsection{Blah blah}
In the study of blah blah we do blah blah.
\subsection{Beep boop}
Sometimes we want to beep boop.
\section{Meepers}
Meepers are an important topic in Somesortof Theory. We will cover them later.
%% end document
\end{document}
\section{Process' perspective}
\subsection{Interactions as developers}
%How do you interact as developers?
Due to the Covid situation at the time, the team was forced to meet and work online.
The team typically met on Discord in the DevOps exercise sessions, during the week, and on weekends.
Collaboration was often done through Visual Studio Code's Live Share (which enabled group/pair programming)
or by working through the tasks/exercises individually, helping each other along the way.
\newline
\subsection{Team organization}
%How is the team organized?
The team strove to work in an agile manner, proceeding forward step by step and adjusting work flows to meet ongoing challenges.
For the most part, the team worked together as one group.
Occasionally, however, the group divided into two subgroups to increase efficiency, with each subgroup focusing on a selected topic.
Both subgroups then reconvened to demonstrate and discuss their findings.
Remaining work was distributed to the individual group members, who assisted each other in the process.
\subsection{A description of stages and tools included in the CI/CD chains.}%\newline
The tools we used for our CI/CD chains were Travis CI and GitHub Actions.
We started out with Travis, which worked very well for us.
Regrettably, we experienced some problems with the costs of Travis,
so we decided to migrate to GitHub Actions, which works just as well for us.
Since we use GitHub to store our repository, GitHub Actions is a good choice: it is easy to
integrate, workflows are easy to duplicate, it is fast, and it has a good ecosystem of actions in a centralised marketplace.
It also makes it easy to observe the status of the pipelines, since everything is on GitHub.
Our stages include build, deploy and test. When we deploy, we have the following stages: \textit{building and testing}; \textit{CodeQL}, whose autobuild attempts to build any compiled languages; \textit{Sonarscanner}, which runs project analysis; \textit{DockerBuild}, which builds the project; then \textit{deploying}; and finally \textit{releasing} it all.\newline
\begin{figure}[h!]
\centering
\includegraphics[scale=0.7]{images/CIpipelinediagram.png}
\caption{An illustration providing an overview of the different steps in our main CI pipeline, used on the master branch.}
\end{figure}
\subsection{Organization of repository} %\newline
For our project we have chosen to use the structure of mono-repository.
The reason for choosing this structure, was that during this project, we were only building one system.
Therefore we thought it would be best to keep everything in the same repository, which goes by the name of PythonKindergarten. \newline
\subsection{Applied branching strategy}
For development purposes, the team used Git for distributed version control and adopted a `long-running branches' approach to branching. \cite{lecture02}
In this context, there are two main branches, master and development, as well as many short-lived topic branches,
which are used for implementing features.
Essentially, the master branch acts as the stable branch used for production code and releases, whereas the other branches act as the stage for code development.
Upon completion of the new features developed on the topic branches, these branches were merged into development, and development was later merged into master. \cite{GitBranching}
This approach to branching works well in a centralized workflow such as ours, where the project is collaborated on in a shared repository (i.e. GitHub).
The actual process of working with long-running branches in a collaborative setting can present some challenges, especially with regard to merge conflicts whenever branches are merged.
This can clearly be seen when, for example, two or more developers have made alterations to the same code segment(s) on separate branches.
Generally, this can be resolved by creating pull requests and having someone review the code.
\subsection{Applied development process and tools supporting it}
The Projects board on GitHub was used to categorize open tasks and organize our development work. Specifically, the team would create an issue with a description added to the comments section, along with labels, such as `need to have', `group discussion', etc., attached to it. Each issue was identified by a tag number and then assigned to different group members. An overview of the current tasks needing attention (with their current status of `To do', `In Progress' or `Done') was then visualized on either of our two Projects boards.
One of the main advantages of working agile is the concept of dividing tasks into sprints.
Technically, every week in this project acted as a mini-sprint, as the timespan between each lecture and the assignments that came with it stretched over a week.
The flexibility of being able to shape and adjust our work flows over such short timespans in order to meet unforeseen contingencies has proven quite essential in fulfilling the DevOps `three ways' of working (as discussed in section 2.1.3 above).
%\newline
\subsection{System monitoring}
Monitoring of our system is done using Prometheus and Grafana, and is deployed using docker-compose. \newline
Prometheus acts as a data collector, periodically scraping metrics from configured targets and storing the collected data in the built-in, local, time series database.\newline
Our system consists of two docker nodes, a master and a worker, which are both configured as targets in our Prometheus deployment.\newline
By configuring our Prometheus server as a datasource within Grafana, the stored time series data can be visualized in preconfigured dashboards. Our metrics are visualized in two separate dashboards, accessible through the Grafana web interface.
To expose metrics in our system we use a NuGet package called prometheus-net \cite{prometheusnet}.
This package allows us to expose metrics on the /metrics endpoint, which can then be scraped and stored by Prometheus.
\newline
To extend the default metrics provided by prometheus-net, we use two additional packages:
prometheus-net.DotNetMetrics \cite{prometheusdotnetmetrics} and prometheus-net.AspNet \cite{prometheusaspnet}.
\newline
The DotNetMetrics package provides us with general dotnet metrics, such as GC, byte allocation, lock contention and exceptions.
\newline
The AspNet package provides us with ASP.NET specific metrics, such as requests/sec, request duration and error rates.
\newline
\newline
Snapshots of our two dashboards are publicly available on the following links:
\newline
Dotnet metrics: https://tinyurl.com/pythonkindergarten-dotnet
\newline
ASP.NET (api-specific) metrics: https://tinyurl.com/pythonkindergarten-aspnet
\subsection{Logging}
In our system, our API logged to Elasticsearch, using Serilog to send the logs.
We also logged our simulation errors, which we divided into errors regarding follow, tweet, unfollow, connectionError, readTimeout and Register.
To aggregate these logs, we used Elasticsearch and Kibana: Elasticsearch stored the logs in dedicated log indexes, while Kibana was used as a visualization tool for the same logs. This made it easy to keep track of our system, and we quickly discovered if anything was wrong. \newline
\subsection{Security assessment}
The identified assets are our web services for logging and monitoring and our MiniTwit application.
The servers we used to host our services are also listed, as well as Docker and Nginx.
The threat sources are XSS, our firewall (UFW), Docker ignoring UFW, and DDoS attacks.
The risk analysis identified an exposed database connection string, which was stored in clear text in our code.
Our private keys were stored locally on developer machines.
This could have catastrophic consequences if someone were, by mistake, to upload these keys to GitHub, thereby giving away admin rights to our servers.
Dependency vulnerabilities were also a possibility, since we did not check the versions of our dependencies.
Had we been able to check versions in, e.g., our pipeline, it would have been easier to maintain our dependencies.
As for malicious users gaining access to user data, the impact would have been minimized had we kept backups of our database; however, we never got around to doing that.
After the security report was finalized, we started fixing some of our problems.
The biggest issue was an exposed database connection string. We fixed this issue, by storing the connection string as a Docker secret.
Another problem was the UFW firewall policies being ignored by Docker, which was solved by setting the following option on the servers:
\begin{verbatim}
DOCKER_OPTS="--iptables=false"
\end{verbatim}
\subsection{System Scaling and load-balancing}
\subsubsection{Scaling}
We applied both scaling (using Docker Swarm) and load-balancing (using Digital Ocean's Load Balancer) to our system.
Our Docker swarm consisted of two servers, a master node and a dedicated worker node, each running multiple service replicas.
These two servers were internally load-balanced using the Docker routing mesh, so that only one node needed to be known in order to communicate with the swarm.
\subsubsection{Load-Balancing}
Situations can occur where every known swarm node is down, in which case the system as a whole might be unreachable, even though there might still be running nodes in the swarm.
\newline
To mitigate this, we used a Digital Ocean Load Balancer. This acted as a gateway (single entry-point) to our Docker swarm, balancing the load between each of our servers and executing health checks, which helped to ensure that the client was always connected to a reachable swarm node.
\newline
We later learned that we could have gained the same availability benefits by using heartbeat to coordinate the routing of a shared floating IP.
This way, we could have ensured that the floating IP always pointed to an available node in our swarm, while leaving load balancing to the Docker routing mesh, thus reducing costs and avoiding a redundant load-balancing layer.
\newline
As an added bonus, the load balancer masks the IPs of our servers from clients, making our system less exposed to attackers.
Using these strategies we were able to scale far beyond our current setup, simply by joining more servers to our swarm and configuring these in the DO load balancer.
The only caveat was that our database was still running on a single server; however, it could be migrated to a database cluster on Digital Ocean, AWS, Google Cloud, or a similar cloud provider.
\subsection{From idea to production}
Ideas were formed during group meetings or when we were faced with issues, where a solution was required.
Whenever an idea was accepted by the team, the process to make the idea a reality began.
A developer or a team of developers was assigned to develop a feature for the idea.
Work started off by creating a topic branch, checked out from the development branch.
This ensured that all previously merged features were present in the new feature branch.
When the feature was finished, the feature branch was merged into the development branch.
The new feature was then tested together with any other new features, to ensure that everything worked as expected, since other features could have been merged into development in the meantime.
After the manual testing was finished, the development branch was merged into the master branch.
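The branching flow above can be replayed with a handful of git commands. The sketch below runs in a throwaway repository so it is self-contained; the branch and commit names are illustrative, not taken from our actual history.

```shell
# Hedged, self-contained sketch of the branching flow (names are illustrative),
# replayed in a scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
git checkout -q -B master
git commit -q --allow-empty -m "initial"
git checkout -q -b development
git checkout -q -b feature/demo                       # topic branch from development
git commit -q --allow-empty -m "feature work"
git checkout -q development
git merge -q --no-ff -m "merge feature" feature/demo  # feature back into development
# ... manual testing of development happens here ...
git checkout -q master
git merge -q development                              # this merge triggered our CI pipeline
```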
At this point, the CI pipeline began to run, and a script to update the Docker containers on our master server was executed.
This script would pull the new image from Docker Hub and update the server's own Docker containers.
Simultaneously, the master server would notify the other servers in the Docker swarm to pull the new image and update their containers.
When all the servers were finished updating, the new features would be present in the production environment.
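In swarm mode, the rollout step of such a script can be expressed with a single service update, which replaces the replicas on all nodes with the new image. The commands below are a hedged sketch: the image and service names are assumptions, and our actual script notified the other servers itself rather than relying solely on \texttt{docker service update}.

```shell
# Hedged sketch of the rollout step (image and service names are assumptions).
docker pull ourteam/minitwit:latest

# 'docker service update' rolls the new image out across the swarm,
# replacing the replicas on every node, replica by replica.
docker service update --image ourteam/minitwit:latest minitwit
```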