\name{lcy.NMF.cluster}
\alias{lcy.NMF.cluster}
\title{
Cluster samples
}
\description{
Identifies the signature that best separates samples into the given number of clusters. NMF is first used to assign cluster membership, and then SAM and PAM are used to identify the optimal signature.
}
\usage{
lcy.NMF.cluster(d, clinic, rank = 3, method = "brunet", marker.call = "kim", marker.unique = FALSE, log2 = FALSE, qvalue = 5, aucth = 0.7, err.cutoff = 0.2, thresh, silhouette = FALSE, seed = 123456, min.foldchange = 0)
}
\arguments{
\item{d}{
non-negative matrix. Data with negative values can normally be transformed via 2^x to meet this requirement.
}
\item{clinic}{
data frame where rows are samples and columns are features. NA values are allowed.
}
\item{rank}{
an integer giving the number of clusters.
}
\item{method}{
a character specifying the method used to cluster samples. Possible values are 'brunet' (default), 'lee', 'ns', 'nsNMF'.
}
\item{marker.call}{
method used to identify the signature. Possible values are 'kim' (default), 'pam', 'max', 'simple'.
}
\item{marker.unique}{
logical value specifying whether unique markers for each subtype should be identified. Default is FALSE. Unless there are strong candidate markers for each subtype, it is not recommended.
}
\item{log2}{
if you applied 2^x to transform the data to positive values, set this to FALSE (default); the data will then be log2-transformed for the downstream signature identification method.
}
\item{qvalue}{
FDR threshold; default is 5.
}
\item{aucth}{
threshold for AUC (default 0.7).
}
\item{err.cutoff}{
threshold used to determine the optimal signature; the default error rate is 0.2. The first signature that performs better than this threshold is used as the signature.
}
\item{thresh}{
an index value between 1 and 30, used to decide the optimal threshold above.
}
\item{silhouette}{
apply silhouette analysis to select core samples for clustering; samples with negative silhouette values are removed. Default is FALSE. It is recommended if the data set is big enough (i.e. > 100 samples).
}
\item{seed}{
an integer (default 123456), or one of 'random', 'none', 'ica', 'nndsvd'.
}
\item{min.foldchange}{
a numeric value giving the minimum fold-change difference between two groups, passed to the sam function. Default is 0, which means fold change is not taken into account.
}
}
\details{
}
\value{
a gene list. The return value may change in the future.
}
\author{
Chengyu Liu <chengyu.liu.cs@gmail.com>
}
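% Hypothetical usage sketch, not from the original documentation: expr.log2
% and clinic stand for the user's log2 expression matrix and clinical table.
\examples{
\dontrun{
d <- 2^expr.log2   # make the expression matrix non-negative, as required
res <- lcy.NMF.cluster(d, clinic, rank = 3, method = "brunet",
                       marker.call = "kim", silhouette = TRUE)
}
}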
\keyword{ nmf }
\keyword{ clustering }
% Source: /man/lcy.nmf.cluster.Rd in repo chengyu-liu-cs/lcyCF (no license).
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/FunctionsForGenes.R
\name{kasa.duplicationRemovalBySD}
\alias{kasa.duplicationRemovalBySD}
\title{Duplicated value removal by SD}
\usage{
kasa.duplicationRemovalBySD(x)
}
\arguments{
\item{x}{input data frame of the gene expression matrix}
}
\value{
the data frame with duplicated entries removed
}
\description{
Duplicated value removal by SD
}
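% Illustrative, hypothetical example (not from the original documentation):
% the behaviour assumed here - keeping, for each duplicated gene, the row with
% the larger standard deviation - is inferred from the function name.
\examples{
\dontrun{
x <- data.frame(gene = c("A", "A", "B"),
                s1 = c(1, 5, 2),
                s2 = c(1, 9, 2))
kasa.duplicationRemovalBySD(x)
}
}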
% Source: /man/kasa.duplicationRemovalBySD.Rd in repo kasaha1/kasaBasicFunctions (no license).
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# mllib_classification.R: Provides methods for MLlib classification algorithms
# (except for tree-based algorithms) integration
#' S4 class that represents a LinearSVCModel
#'
#' @param jobj a Java object reference to the backing Scala LinearSVCModel
#' @note LinearSVCModel since 2.2.0
setClass("LinearSVCModel", representation(jobj = "jobj"))
#' S4 class that represents a LogisticRegressionModel
#'
#' @param jobj a Java object reference to the backing Scala LogisticRegressionModel
#' @note LogisticRegressionModel since 2.1.0
setClass("LogisticRegressionModel", representation(jobj = "jobj"))
#' S4 class that represents a MultilayerPerceptronClassificationModel
#'
#' @param jobj a Java object reference to the backing Scala MultilayerPerceptronClassifierWrapper
#' @note MultilayerPerceptronClassificationModel since 2.1.0
setClass("MultilayerPerceptronClassificationModel", representation(jobj = "jobj"))
#' S4 class that represents a NaiveBayesModel
#'
#' @param jobj a Java object reference to the backing Scala NaiveBayesWrapper
#' @note NaiveBayesModel since 2.0.0
setClass("NaiveBayesModel", representation(jobj = "jobj"))
#' S4 class that represents a FMClassificationModel
#'
#' @param jobj a Java object reference to the backing Scala FMClassifierWrapper
#' @note FMClassificationModel since 3.1.0
setClass("FMClassificationModel", representation(jobj = "jobj"))
#' Linear SVM Model
#'
#' Fits a linear SVM model against a SparkDataFrame, similar to svm in e1071 package.
#' Currently only supports binary classification model with linear kernel.
#' Users can print, make predictions on the produced model and save the model to the input path.
#'
#' @param data SparkDataFrame for training.
#' @param formula A symbolic description of the model to be fitted. Currently only a few formula
#' operators are supported, including '~', '.', ':', '+', '-', '*', and '^'.
#' @param regParam The regularization parameter. Only supports L2 regularization currently.
#' @param maxIter Maximum iteration number.
#' @param tol Convergence tolerance of iterations.
#' @param standardization Whether to standardize the training features before fitting the model.
#' The coefficients of models will be always returned on the original scale,
#' so it will be transparent for users. Note that with/without
#' standardization, the models should be always converged to the same
#' solution when no regularization is applied.
#' @param threshold The threshold in binary classification applied to the linear model prediction.
#' This threshold can be any real number, where Inf will make all predictions 0.0
#' and -Inf will make all predictions 1.0.
#' @param weightCol The weight column name.
#' @param aggregationDepth The depth for treeAggregate (greater than or equal to 2). If the
#' dimensions of features or the number of partitions are large, this param
#' could be adjusted to a larger size.
#' This is an expert parameter. Default value should be good for most cases.
#' @param handleInvalid How to handle invalid data (unseen labels or NULL values) in features and
#' label column of string type.
#' Supported options: "skip" (filter out rows with invalid data),
#' "error" (throw an error), "keep" (put invalid data in
#' a special additional bucket, at index numLabels). Default
#' is "error".
#' @param ... additional arguments passed to the method.
#' @return \code{spark.svmLinear} returns a fitted linear SVM model.
#' @rdname spark.svmLinear
#' @aliases spark.svmLinear,SparkDataFrame,formula-method
#' @name spark.svmLinear
#' @examples
#' \dontrun{
#' sparkR.session()
#' t <- as.data.frame(Titanic)
#' training <- createDataFrame(t)
#' model <- spark.svmLinear(training, Survived ~ ., regParam = 0.5)
#' summary <- summary(model)
#'
#' # fitted values on training data
#' fitted <- predict(model, training)
#'
#' # save fitted model to input path
#' path <- "path/to/model"
#' write.ml(model, path)
#'
#' # can also read back the saved model and predict
#' # Note that summary does not work on loaded model
#' savedModel <- read.ml(path)
#' summary(savedModel)
#' }
#' @note spark.svmLinear since 2.2.0
setMethod("spark.svmLinear", signature(data = "SparkDataFrame", formula = "formula"),
function(data, formula, regParam = 0.0, maxIter = 100, tol = 1E-6, standardization = TRUE,
threshold = 0.0, weightCol = NULL, aggregationDepth = 2,
handleInvalid = c("error", "keep", "skip")) {
formula <- paste(deparse(formula), collapse = "")
if (!is.null(weightCol) && weightCol == "") {
weightCol <- NULL
} else if (!is.null(weightCol)) {
weightCol <- as.character(weightCol)
}
handleInvalid <- match.arg(handleInvalid)
jobj <- callJStatic("org.apache.spark.ml.r.LinearSVCWrapper", "fit",
data@sdf, formula, as.numeric(regParam), as.integer(maxIter),
as.numeric(tol), as.logical(standardization), as.numeric(threshold),
weightCol, as.integer(aggregationDepth), handleInvalid)
new("LinearSVCModel", jobj = jobj)
})
# Predicted values based on a LinearSVCModel
#' @param newData a SparkDataFrame for testing.
#' @return \code{predict} returns the predicted values based on a LinearSVCModel.
#' @rdname spark.svmLinear
#' @aliases predict,LinearSVCModel,SparkDataFrame-method
#' @note predict(LinearSVCModel) since 2.2.0
setMethod("predict", signature(object = "LinearSVCModel"),
function(object, newData) {
predict_internal(object, newData)
})
# Get the summary of a LinearSVCModel
#' @param object a LinearSVCModel fitted by \code{spark.svmLinear}.
#' @return \code{summary} returns summary information of the fitted model, which is a list.
#' The list includes \code{coefficients} (coefficients of the fitted model),
#' \code{numClasses} (number of classes), \code{numFeatures} (number of features).
#' @rdname spark.svmLinear
#' @aliases summary,LinearSVCModel-method
#' @note summary(LinearSVCModel) since 2.2.0
setMethod("summary", signature(object = "LinearSVCModel"),
function(object) {
jobj <- object@jobj
features <- callJMethod(jobj, "rFeatures")
coefficients <- callJMethod(jobj, "rCoefficients")
coefficients <- as.matrix(unlist(coefficients))
colnames(coefficients) <- c("Estimate")
rownames(coefficients) <- unlist(features)
numClasses <- callJMethod(jobj, "numClasses")
numFeatures <- callJMethod(jobj, "numFeatures")
list(coefficients = coefficients, numClasses = numClasses, numFeatures = numFeatures)
})
# Save fitted LinearSVCModel to the input path
#' @param path The directory where the model is saved.
#' @param overwrite whether to overwrite the output path if it already exists. Default is FALSE,
#'                  which means an exception is thrown if the output path exists.
#'
#' @rdname spark.svmLinear
#' @aliases write.ml,LinearSVCModel,character-method
#' @note write.ml(LinearSVCModel, character) since 2.2.0
setMethod("write.ml", signature(object = "LinearSVCModel", path = "character"),
function(object, path, overwrite = FALSE) {
write_internal(object, path, overwrite)
})
#' Logistic Regression Model
#'
#' Fits a logistic regression model against a SparkDataFrame. It supports "binomial": Binary
#' logistic regression with pivoting; "multinomial": Multinomial logistic (softmax) regression
#' without pivoting, similar to glmnet. Users can print, make predictions on the produced model
#' and save the model to the input path.
#'
#' @param data SparkDataFrame for training.
#' @param formula A symbolic description of the model to be fitted. Currently only a few formula
#' operators are supported, including '~', '.', ':', '+', and '-'.
#' @param regParam the regularization parameter.
#' @param elasticNetParam the ElasticNet mixing parameter. For alpha = 0.0, the penalty is an L2
#' penalty. For alpha = 1.0, it is an L1 penalty. For 0.0 < alpha < 1.0,
#' the penalty is a combination of L1 and L2. Default is 0.0 which is an
#' L2 penalty.
#' @param maxIter maximum iteration number.
#' @param tol convergence tolerance of iterations.
#' @param family the name of family which is a description of the label distribution to be used
#' in the model.
#' Supported options:
#' \itemize{
#' \item{"auto": Automatically select the family based on the number of classes:
#' If number of classes == 1 || number of classes == 2, set to "binomial".
#' Else, set to "multinomial".}
#' \item{"binomial": Binary logistic regression with pivoting.}
#' \item{"multinomial": Multinomial logistic (softmax) regression without
#' pivoting.}
#' }
#' @param standardization whether to standardize the training features before fitting the model.
#' The coefficients of models will be always returned on the original scale,
#' so it will be transparent for users. Note that with/without
#' standardization, the models should be always converged to the same
#' solution when no regularization is applied. Default is TRUE, same as
#' glmnet.
#' @param thresholds in binary classification, in range [0, 1]. If the estimated probability of
#' class label 1 is > threshold, then predict 1, else 0. A high threshold
#' encourages the model to predict 0 more often; a low threshold encourages the
#' model to predict 1 more often. Note: Setting this with threshold p is
#' equivalent to setting thresholds c(1-p, p). In multiclass (or binary)
#' classification to adjust the probability of predicting each class. Array must
#' have length equal to the number of classes, with values > 0, excepting that
#' at most one value may be 0. The class with largest value p/t is predicted,
#' where p is the original probability of that class and t is the class's
#' threshold.
#' @param weightCol The weight column name.
#' @param aggregationDepth The depth for treeAggregate (greater than or equal to 2). If the
#' dimensions of features or the number of partitions are large, this param
#' could be adjusted to a larger size. This is an expert parameter. Default
#' value should be good for most cases.
#' @param lowerBoundsOnCoefficients The lower bounds on coefficients if fitting under bound
#' constrained optimization.
#' The bound matrix must be compatible with the shape (1, number
#' of features) for binomial regression, or (number of classes,
#' number of features) for multinomial regression.
#'                                  It is an R matrix.
#' @param upperBoundsOnCoefficients The upper bounds on coefficients if fitting under bound
#' constrained optimization.
#' The bound matrix must be compatible with the shape (1, number
#' of features) for binomial regression, or (number of classes,
#' number of features) for multinomial regression.
#'                                  It is an R matrix.
#' @param lowerBoundsOnIntercepts The lower bounds on intercepts if fitting under bound constrained
#' optimization.
#' The bounds vector size must be equal to 1 for binomial regression,
#' or the number
#' of classes for multinomial regression.
#' @param upperBoundsOnIntercepts The upper bounds on intercepts if fitting under bound constrained
#' optimization.
#' The bound vector size must be equal to 1 for binomial regression,
#' or the number of classes for multinomial regression.
#' @param handleInvalid How to handle invalid data (unseen labels or NULL values) in features and
#' label column of string type.
#' Supported options: "skip" (filter out rows with invalid data),
#' "error" (throw an error), "keep" (put invalid data in
#' a special additional bucket, at index numLabels). Default
#' is "error".
#' @param ... additional arguments passed to the method.
#' @return \code{spark.logit} returns a fitted logistic regression model.
#' @rdname spark.logit
#' @aliases spark.logit,SparkDataFrame,formula-method
#' @name spark.logit
#' @examples
#' \dontrun{
#' sparkR.session()
#' # binary logistic regression
#' t <- as.data.frame(Titanic)
#' training <- createDataFrame(t)
#' model <- spark.logit(training, Survived ~ ., regParam = 0.5)
#' summary <- summary(model)
#'
#' # fitted values on training data
#' fitted <- predict(model, training)
#'
#' # save fitted model to input path
#' path <- "path/to/model"
#' write.ml(model, path)
#'
#' # can also read back the saved model and predict
#' # Note that summary does not work on loaded model
#' savedModel <- read.ml(path)
#' summary(savedModel)
#'
#' # binary logistic regression against two classes with
#' # upperBoundsOnCoefficients and upperBoundsOnIntercepts
#' ubc <- matrix(c(1.0, 0.0, 1.0, 0.0), nrow = 1, ncol = 4)
#' model <- spark.logit(training, Species ~ .,
#' upperBoundsOnCoefficients = ubc,
#' upperBoundsOnIntercepts = 1.0)
#'
#' # multinomial logistic regression
#' model <- spark.logit(training, Class ~ ., regParam = 0.5)
#' summary <- summary(model)
#'
#' # multinomial logistic regression with
#' # lowerBoundsOnCoefficients and lowerBoundsOnIntercepts
#' lbc <- matrix(c(0.0, -1.0, 0.0, -1.0, 0.0, -1.0, 0.0, -1.0), nrow = 2, ncol = 4)
#' lbi <- as.array(c(0.0, 0.0))
#' model <- spark.logit(training, Species ~ ., family = "multinomial",
#' lowerBoundsOnCoefficients = lbc,
#' lowerBoundsOnIntercepts = lbi)
#' }
#' @note spark.logit since 2.1.0
setMethod("spark.logit", signature(data = "SparkDataFrame", formula = "formula"),
function(data, formula, regParam = 0.0, elasticNetParam = 0.0, maxIter = 100,
tol = 1E-6, family = "auto", standardization = TRUE,
thresholds = 0.5, weightCol = NULL, aggregationDepth = 2,
lowerBoundsOnCoefficients = NULL, upperBoundsOnCoefficients = NULL,
lowerBoundsOnIntercepts = NULL, upperBoundsOnIntercepts = NULL,
handleInvalid = c("error", "keep", "skip")) {
formula <- paste(deparse(formula), collapse = "")
row <- 0
col <- 0
if (!is.null(weightCol) && weightCol == "") {
weightCol <- NULL
} else if (!is.null(weightCol)) {
weightCol <- as.character(weightCol)
}
if (!is.null(lowerBoundsOnIntercepts)) {
lowerBoundsOnIntercepts <- as.array(lowerBoundsOnIntercepts)
}
if (!is.null(upperBoundsOnIntercepts)) {
upperBoundsOnIntercepts <- as.array(upperBoundsOnIntercepts)
}
if (!is.null(lowerBoundsOnCoefficients)) {
            if (!is.matrix(lowerBoundsOnCoefficients)) {
stop("lowerBoundsOnCoefficients must be a matrix.")
}
row <- nrow(lowerBoundsOnCoefficients)
col <- ncol(lowerBoundsOnCoefficients)
lowerBoundsOnCoefficients <- as.array(as.vector(lowerBoundsOnCoefficients))
}
if (!is.null(upperBoundsOnCoefficients)) {
            if (!is.matrix(upperBoundsOnCoefficients)) {
stop("upperBoundsOnCoefficients must be a matrix.")
}
if (!is.null(lowerBoundsOnCoefficients) && (row != nrow(upperBoundsOnCoefficients)
|| col != ncol(upperBoundsOnCoefficients))) {
stop("dimension of upperBoundsOnCoefficients ",
"is not the same as lowerBoundsOnCoefficients")
}
if (is.null(lowerBoundsOnCoefficients)) {
row <- nrow(upperBoundsOnCoefficients)
col <- ncol(upperBoundsOnCoefficients)
}
upperBoundsOnCoefficients <- as.array(as.vector(upperBoundsOnCoefficients))
}
handleInvalid <- match.arg(handleInvalid)
jobj <- callJStatic("org.apache.spark.ml.r.LogisticRegressionWrapper", "fit",
data@sdf, formula, as.numeric(regParam),
as.numeric(elasticNetParam), as.integer(maxIter),
as.numeric(tol), as.character(family),
as.logical(standardization), as.array(thresholds),
weightCol, as.integer(aggregationDepth),
as.integer(row), as.integer(col),
lowerBoundsOnCoefficients, upperBoundsOnCoefficients,
lowerBoundsOnIntercepts, upperBoundsOnIntercepts,
handleInvalid)
new("LogisticRegressionModel", jobj = jobj)
})
# Get the summary of a LogisticRegressionModel
#' @param object a LogisticRegressionModel fitted by \code{spark.logit}.
#' @return \code{summary} returns summary information of the fitted model, which is a list.
#' The list includes \code{coefficients} (coefficients matrix of the fitted model).
#' @rdname spark.logit
#' @aliases summary,LogisticRegressionModel-method
#' @note summary(LogisticRegressionModel) since 2.1.0
setMethod("summary", signature(object = "LogisticRegressionModel"),
function(object) {
jobj <- object@jobj
features <- callJMethod(jobj, "rFeatures")
labels <- callJMethod(jobj, "labels")
coefficients <- callJMethod(jobj, "rCoefficients")
nCol <- length(coefficients) / length(features)
coefficients <- matrix(unlist(coefficients), ncol = nCol)
# If nCol == 1, means this is a binomial logistic regression model with pivoting.
# Otherwise, it's a multinomial logistic regression model without pivoting.
if (nCol == 1) {
colnames(coefficients) <- c("Estimate")
} else {
colnames(coefficients) <- unlist(labels)
}
rownames(coefficients) <- unlist(features)
list(coefficients = coefficients)
})
# Predicted values based on a LogisticRegressionModel
#' @param newData a SparkDataFrame for testing.
#' @return \code{predict} returns the predicted values based on a LogisticRegressionModel.
#' @rdname spark.logit
#' @aliases predict,LogisticRegressionModel,SparkDataFrame-method
#' @note predict(LogisticRegressionModel) since 2.1.0
setMethod("predict", signature(object = "LogisticRegressionModel"),
function(object, newData) {
predict_internal(object, newData)
})
# Save fitted LogisticRegressionModel to the input path
#' @param path The directory where the model is saved.
#' @param overwrite whether to overwrite the output path if it already exists. Default is FALSE,
#'                  which means an exception is thrown if the output path exists.
#'
#' @rdname spark.logit
#' @aliases write.ml,LogisticRegressionModel,character-method
#' @note write.ml(LogisticRegressionModel, character) since 2.1.0
setMethod("write.ml", signature(object = "LogisticRegressionModel", path = "character"),
function(object, path, overwrite = FALSE) {
write_internal(object, path, overwrite)
})
#' Multilayer Perceptron Classification Model
#'
#' \code{spark.mlp} fits a multi-layer perceptron neural network model against a SparkDataFrame.
#' Users can call \code{summary} to print a summary of the fitted model, \code{predict} to make
#' predictions on new data, and \code{write.ml}/\code{read.ml} to save/load fitted models.
#' Only categorical data is supported.
#' For more details, see
#' \href{https://spark.apache.org/docs/latest/ml-classification-regression.html}{
#' Multilayer Perceptron}
#'
#' @param data a \code{SparkDataFrame} of observations and labels for model fitting.
#' @param formula a symbolic description of the model to be fitted. Currently only a few formula
#' operators are supported, including '~', '.', ':', '+', and '-'.
#' @param blockSize blockSize parameter.
#' @param layers integer vector containing the number of nodes for each layer.
#' @param solver solver parameter, supported options: "gd" (minibatch gradient descent) or "l-bfgs".
#' @param maxIter maximum iteration number.
#' @param tol convergence tolerance of iterations.
#' @param stepSize stepSize parameter.
#' @param seed seed parameter for weights initialization.
#' @param initialWeights initialWeights parameter for weights initialization, it should be a
#' numeric vector.
#' @param handleInvalid How to handle invalid data (unseen labels or NULL values) in features and
#' label column of string type.
#' Supported options: "skip" (filter out rows with invalid data),
#' "error" (throw an error), "keep" (put invalid data in
#' a special additional bucket, at index numLabels). Default
#' is "error".
#' @param ... additional arguments passed to the method.
#' @return \code{spark.mlp} returns a fitted Multilayer Perceptron Classification Model.
#' @rdname spark.mlp
#' @aliases spark.mlp,SparkDataFrame,formula-method
#' @name spark.mlp
#' @seealso \link{read.ml}
#' @examples
#' \dontrun{
#' df <- read.df("data/mllib/sample_multiclass_classification_data.txt", source = "libsvm")
#'
#' # fit a Multilayer Perceptron Classification Model
#' model <- spark.mlp(df, label ~ features, blockSize = 128, layers = c(4, 3), solver = "l-bfgs",
#' maxIter = 100, tol = 0.5, stepSize = 1, seed = 1,
#' initialWeights = c(0, 0, 0, 0, 0, 5, 5, 5, 5, 5, 9, 9, 9, 9, 9))
#'
#' # get the summary of the model
#' summary(model)
#'
#' # make predictions
#' predictions <- predict(model, df)
#'
#' # save and load the model
#' path <- "path/to/model"
#' write.ml(model, path)
#' savedModel <- read.ml(path)
#' summary(savedModel)
#' }
#' @note spark.mlp since 2.1.0
setMethod("spark.mlp", signature(data = "SparkDataFrame", formula = "formula"),
function(data, formula, layers, blockSize = 128, solver = "l-bfgs", maxIter = 100,
tol = 1E-6, stepSize = 0.03, seed = NULL, initialWeights = NULL,
handleInvalid = c("error", "keep", "skip")) {
formula <- paste(deparse(formula), collapse = "")
if (is.null(layers)) {
              stop("layers must be an integer vector with length > 1.")
}
layers <- as.integer(na.omit(layers))
if (length(layers) <= 1) {
              stop("layers must be an integer vector with length > 1.")
}
if (!is.null(seed)) {
seed <- as.character(as.integer(seed))
}
if (!is.null(initialWeights)) {
initialWeights <- as.array(as.numeric(na.omit(initialWeights)))
}
handleInvalid <- match.arg(handleInvalid)
jobj <- callJStatic("org.apache.spark.ml.r.MultilayerPerceptronClassifierWrapper",
"fit", data@sdf, formula, as.integer(blockSize), as.array(layers),
as.character(solver), as.integer(maxIter), as.numeric(tol),
as.numeric(stepSize), seed, initialWeights, handleInvalid)
new("MultilayerPerceptronClassificationModel", jobj = jobj)
})
# Returns the summary of a Multilayer Perceptron Classification Model produced by \code{spark.mlp}
#' @param object a Multilayer Perceptron Classification Model fitted by \code{spark.mlp}
#' @return \code{summary} returns summary information of the fitted model, which is a list.
#' The list includes \code{numOfInputs} (number of inputs), \code{numOfOutputs}
#' (number of outputs), \code{layers} (array of layer sizes including input
#' and output layers), and \code{weights} (the weights of layers).
#' For \code{weights}, it is a numeric vector with length equal to the expected
#' given the architecture (i.e., for 8-10-2 network, 112 connection weights).
#' @rdname spark.mlp
#' @aliases summary,MultilayerPerceptronClassificationModel-method
#' @note summary(MultilayerPerceptronClassificationModel) since 2.1.0
setMethod("summary", signature(object = "MultilayerPerceptronClassificationModel"),
function(object) {
jobj <- object@jobj
layers <- unlist(callJMethod(jobj, "layers"))
numOfInputs <- head(layers, n = 1)
numOfOutputs <- tail(layers, n = 1)
weights <- callJMethod(jobj, "weights")
list(numOfInputs = numOfInputs, numOfOutputs = numOfOutputs,
layers = layers, weights = weights)
})
# Makes predictions from a model produced by spark.mlp().
#' @param newData a SparkDataFrame for testing.
#' @return \code{predict} returns a SparkDataFrame containing predicted labels in a column named
#' "prediction".
#' @rdname spark.mlp
#' @aliases predict,MultilayerPerceptronClassificationModel-method
#' @note predict(MultilayerPerceptronClassificationModel) since 2.1.0
setMethod("predict", signature(object = "MultilayerPerceptronClassificationModel"),
function(object, newData) {
predict_internal(object, newData)
})
# Saves the Multilayer Perceptron Classification Model to the input path.
#' @param path the directory where the model is saved.
#' @param overwrite whether to overwrite the output path if it already exists. Default is FALSE,
#'                  which means an exception is thrown if the output path exists.
#'
#' @rdname spark.mlp
#' @aliases write.ml,MultilayerPerceptronClassificationModel,character-method
#' @seealso \link{write.ml}
#' @note write.ml(MultilayerPerceptronClassificationModel, character) since 2.1.0
setMethod("write.ml", signature(object = "MultilayerPerceptronClassificationModel",
path = "character"),
function(object, path, overwrite = FALSE) {
write_internal(object, path, overwrite)
})
#' Naive Bayes Models
#'
#' \code{spark.naiveBayes} fits a Bernoulli naive Bayes model against a SparkDataFrame.
#' Users can call \code{summary} to print a summary of the fitted model, \code{predict} to make
#' predictions on new data, and \code{write.ml}/\code{read.ml} to save/load fitted models.
#' Only categorical data is supported.
#'
#' @param data a \code{SparkDataFrame} of observations and labels for model fitting.
#' @param formula a symbolic description of the model to be fitted. Currently only a few formula
#' operators are supported, including '~', '.', ':', '+', and '-'.
#' @param smoothing smoothing parameter.
#' @param handleInvalid How to handle invalid data (unseen labels or NULL values) in features and
#' label column of string type.
#' Supported options: "skip" (filter out rows with invalid data),
#' "error" (throw an error), "keep" (put invalid data in
#' a special additional bucket, at index numLabels). Default
#' is "error".
#' @param ... additional argument(s) passed to the method. Currently only \code{smoothing}.
#' @return \code{spark.naiveBayes} returns a fitted naive Bayes model.
#' @rdname spark.naiveBayes
#' @aliases spark.naiveBayes,SparkDataFrame,formula-method
#' @name spark.naiveBayes
#' @seealso e1071: \url{https://cran.r-project.org/web/packages/e1071/index.html}
#' @examples
#' \dontrun{
#' data <- as.data.frame(UCBAdmissions)
#' df <- createDataFrame(data)
#'
#' # fit a Bernoulli naive Bayes model
#' model <- spark.naiveBayes(df, Admit ~ Gender + Dept, smoothing = 0)
#'
#' # get the summary of the model
#' summary(model)
#'
#' # make predictions
#' predictions <- predict(model, df)
#'
#' # save and load the model
#' path <- "path/to/model"
#' write.ml(model, path)
#' savedModel <- read.ml(path)
#' summary(savedModel)
#' }
#' @note spark.naiveBayes since 2.0.0
setMethod("spark.naiveBayes", signature(data = "SparkDataFrame", formula = "formula"),
function(data, formula, smoothing = 1.0,
handleInvalid = c("error", "keep", "skip")) {
formula <- paste(deparse(formula), collapse = "")
handleInvalid <- match.arg(handleInvalid)
jobj <- callJStatic("org.apache.spark.ml.r.NaiveBayesWrapper", "fit",
formula, data@sdf, smoothing, handleInvalid)
new("NaiveBayesModel", jobj = jobj)
})
# Returns the summary of a naive Bayes model produced by \code{spark.naiveBayes}
#' @param object a naive Bayes model fitted by \code{spark.naiveBayes}.
#' @return \code{summary} returns summary information of the fitted model, which is a list.
#' The list includes \code{apriori} (the label distribution) and
#' \code{tables} (conditional probabilities given the target label).
#' @rdname spark.naiveBayes
#' @note summary(NaiveBayesModel) since 2.0.0
setMethod("summary", signature(object = "NaiveBayesModel"),
function(object) {
jobj <- object@jobj
features <- callJMethod(jobj, "features")
labels <- callJMethod(jobj, "labels")
apriori <- callJMethod(jobj, "apriori")
apriori <- t(as.matrix(unlist(apriori)))
colnames(apriori) <- unlist(labels)
tables <- callJMethod(jobj, "tables")
tables <- matrix(tables, nrow = length(labels))
rownames(tables) <- unlist(labels)
colnames(tables) <- unlist(features)
list(apriori = apriori, tables = tables)
})
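# The reshaping done in the summary method above (flat values from the JVM
# bridge rebuilt into labeled R matrices) can be traced in plain R, with no
# Spark session. The values below are invented stand-ins for what
# callJMethod would return:

```r
# Hypothetical stand-ins for the flat values returned over the JVM bridge
labels   <- list("No", "Yes")
features <- list("GenderFemale", "DeptB")
apriori  <- list(0.6, 0.4)            # label distribution
tables   <- c(0.2, 0.7, 0.5, 0.1)     # conditional probabilities, column-major

apriori <- t(as.matrix(unlist(apriori)))   # 1 x numLabels row vector
colnames(apriori) <- unlist(labels)

tables <- matrix(tables, nrow = length(labels))   # numLabels x numFeatures
rownames(tables) <- unlist(labels)
colnames(tables) <- unlist(features)

tables["Yes", "GenderFemale"]   # 0.7
```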
# Makes predictions from a naive Bayes model or a model produced by spark.naiveBayes(),
# similarly to R package e1071's predict.
#' @param newData a SparkDataFrame for testing.
#' @return \code{predict} returns a SparkDataFrame containing predicted labels in a column named
#' "prediction".
#' @rdname spark.naiveBayes
#' @note predict(NaiveBayesModel) since 2.0.0
setMethod("predict", signature(object = "NaiveBayesModel"),
function(object, newData) {
predict_internal(object, newData)
})
# Saves the Bernoulli naive Bayes model to the input path.
#' @param path the directory where the model is saved.
#' @param overwrite overwrites or not if the output path already exists. Default is FALSE
#' which means throw exception if the output path exists.
#'
#' @rdname spark.naiveBayes
#' @seealso \link{write.ml}
#' @note write.ml(NaiveBayesModel, character) since 2.0.0
setMethod("write.ml", signature(object = "NaiveBayesModel", path = "character"),
function(object, path, overwrite = FALSE) {
write_internal(object, path, overwrite)
})
#' Factorization Machines Classification Model
#'
#' \code{spark.fmClassifier} fits a factorization machines classification model against a SparkDataFrame.
#' Users can call \code{summary} to print a summary of the fitted model, \code{predict} to make
#' predictions on new data, and \code{write.ml}/\code{read.ml} to save/load fitted models.
#' Only categorical data is supported.
#'
#' @param data a \code{SparkDataFrame} of observations and labels for model fitting.
#' @param formula a symbolic description of the model to be fitted. Currently only a few formula
#' operators are supported, including '~', '.', ':', '+', and '-'.
#' @param factorSize dimensionality of the factors.
#' @param fitLinear whether to fit the linear term. # TODO Can we express this with formula?
#' @param regParam the regularization parameter.
#' @param miniBatchFraction the mini-batch fraction parameter.
#' @param initStd the standard deviation of initial coefficients.
#' @param maxIter maximum iteration number.
#' @param stepSize stepSize parameter.
#' @param tol convergence tolerance of iterations.
#' @param solver solver parameter, supported options: "gd" (minibatch gradient descent) or "adamW".
#' @param thresholds in binary classification, in range [0, 1]. If the estimated probability of
#' class label 1 is > threshold, then predict 1, else 0. A high threshold
#' encourages the model to predict 0 more often; a low threshold encourages the
#' model to predict 1 more often. Note: Setting this with threshold p is
#' equivalent to setting thresholds c(1-p, p).
#' @param seed seed parameter for weights initialization.
#' @param handleInvalid How to handle invalid data (unseen labels or NULL values) in features and
#' label column of string type.
#' Supported options: "skip" (filter out rows with invalid data),
#' "error" (throw an error), "keep" (put invalid data in
#' a special additional bucket, at index numLabels). Default
#' is "error".
#' @param ... additional arguments passed to the method.
#' @return \code{spark.fmClassifier} returns a fitted Factorization Machines Classification Model.
#' @rdname spark.fmClassifier
#' @aliases spark.fmClassifier,SparkDataFrame,formula-method
#' @name spark.fmClassifier
#' @seealso \link{read.ml}
#' @examples
#' \dontrun{
#' df <- read.df("data/mllib/sample_binary_classification_data.txt", source = "libsvm")
#'
#' # fit Factorization Machines Classification Model
#' model <- spark.fmClassifier(
#' df, label ~ features,
#' regParam = 0.01, maxIter = 10, fitLinear = TRUE
#' )
#'
#' # get the summary of the model
#' summary(model)
#'
#' # make predictions
#' predictions <- predict(model, df)
#'
#' # save and load the model
#' path <- "path/to/model"
#' write.ml(model, path)
#' savedModel <- read.ml(path)
#' summary(savedModel)
#' }
#' @note spark.fmClassifier since 3.1.0
setMethod("spark.fmClassifier", signature(data = "SparkDataFrame", formula = "formula"),
function(data, formula, factorSize = 8, fitLinear = TRUE, regParam = 0.0,
miniBatchFraction = 1.0, initStd = 0.01, maxIter = 100, stepSize = 1.0,
tol = 1e-6, solver = c("adamW", "gd"), thresholds = NULL, seed = NULL,
handleInvalid = c("error", "keep", "skip")) {
formula <- paste(deparse(formula), collapse = "")
if (!is.null(seed)) {
seed <- as.character(as.integer(seed))
}
if (!is.null(thresholds)) {
thresholds <- as.list(thresholds)
}
solver <- match.arg(solver)
handleInvalid <- match.arg(handleInvalid)
jobj <- callJStatic("org.apache.spark.ml.r.FMClassifierWrapper",
"fit",
data@sdf,
formula,
as.integer(factorSize),
as.logical(fitLinear),
as.numeric(regParam),
as.numeric(miniBatchFraction),
as.numeric(initStd),
as.integer(maxIter),
as.numeric(stepSize),
as.numeric(tol),
solver,
seed,
thresholds,
handleInvalid)
new("FMClassificationModel", jobj = jobj)
})
# Returns the summary of a FM Classification model produced by \code{spark.fmClassifier}
#' @param object an FM Classification model fitted by \code{spark.fmClassifier}.
#' @return \code{summary} returns summary information of the fitted model, which is a list.
#' @rdname spark.fmClassifier
#' @note summary(FMClassificationModel) since 3.1.0
setMethod("summary", signature(object = "FMClassificationModel"),
function(object) {
jobj <- object@jobj
features <- callJMethod(jobj, "rFeatures")
coefficients <- callJMethod(jobj, "rCoefficients")
coefficients <- as.matrix(unlist(coefficients))
colnames(coefficients) <- c("Estimate")
rownames(coefficients) <- unlist(features)
numClasses <- callJMethod(jobj, "numClasses")
numFeatures <- callJMethod(jobj, "numFeatures")
raw_factors <- unlist(callJMethod(jobj, "rFactors"))
factor_size <- callJMethod(jobj, "factorSize")
list(
coefficients = coefficients,
factors = matrix(raw_factors, ncol = factor_size),
numClasses = numClasses, numFeatures = numFeatures,
factorSize = factor_size
)
})
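# The factors field above is rebuilt from a flat vector with
# matrix(raw_factors, ncol = factor_size), which fills column-major. A
# self-contained sketch with invented numbers (no Spark needed):

```r
# Hypothetical flat factor values for numFeatures = 3, factorSize = 2
raw_factors <- c(0.1, 0.2, 0.3, 0.4, 0.5, 0.6)
factor_size <- 2

factors <- matrix(raw_factors, ncol = factor_size)
# matrix() fills column-major, so row i holds the latent vector of feature i
factors[1, ]   # c(0.1, 0.4)
```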
# Predicted values based on an FMClassificationModel model
#' @param newData a SparkDataFrame for testing.
#' @return \code{predict} returns the predicted values based on an FM Classification model.
#' @rdname spark.fmClassifier
#' @aliases predict,FMClassificationModel,SparkDataFrame-method
#' @note predict(FMClassificationModel) since 3.1.0
setMethod("predict", signature(object = "FMClassificationModel"),
function(object, newData) {
predict_internal(object, newData)
})
# Save fitted FMClassificationModel to the input path
#' @param path The directory where the model is saved.
#' @param overwrite Overwrites or not if the output path already exists. Default is FALSE
#' which means throw exception if the output path exists.
#'
#' @rdname spark.fmClassifier
#' @aliases write.ml,FMClassificationModel,character-method
#' @note write.ml(FMClassificationModel, character) since 3.1.0
setMethod("write.ml", signature(object = "FMClassificationModel", path = "character"),
function(object, path, overwrite = FALSE) {
write_internal(object, path, overwrite)
})
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# mllib_classification.R: Provides methods for MLlib classification algorithms
# (except for tree-based algorithms) integration
#' S4 class that represents a LinearSVCModel
#'
#' @param jobj a Java object reference to the backing Scala LinearSVCModel
#' @note LinearSVCModel since 2.2.0
setClass("LinearSVCModel", representation(jobj = "jobj"))
#' S4 class that represents a LogisticRegressionModel
#'
#' @param jobj a Java object reference to the backing Scala LogisticRegressionModel
#' @note LogisticRegressionModel since 2.1.0
setClass("LogisticRegressionModel", representation(jobj = "jobj"))
#' S4 class that represents a MultilayerPerceptronClassificationModel
#'
#' @param jobj a Java object reference to the backing Scala MultilayerPerceptronClassifierWrapper
#' @note MultilayerPerceptronClassificationModel since 2.1.0
setClass("MultilayerPerceptronClassificationModel", representation(jobj = "jobj"))
#' S4 class that represents a NaiveBayesModel
#'
#' @param jobj a Java object reference to the backing Scala NaiveBayesWrapper
#' @note NaiveBayesModel since 2.0.0
setClass("NaiveBayesModel", representation(jobj = "jobj"))
#' S4 class that represents a FMClassificationModel
#'
#' @param jobj a Java object reference to the backing Scala FMClassifierWrapper
#' @note FMClassificationModel since 3.1.0
setClass("FMClassificationModel", representation(jobj = "jobj"))
#' Linear SVM Model
#'
#' Fits a linear SVM model against a SparkDataFrame, similar to svm in e1071 package.
#' Currently only supports binary classification models with a linear kernel.
#' Users can print, make predictions on the produced model and save the model to the input path.
#'
#' @param data SparkDataFrame for training.
#' @param formula A symbolic description of the model to be fitted. Currently only a few formula
#' operators are supported, including '~', '.', ':', '+', '-', '*', and '^'.
#' @param regParam The regularization parameter. Only supports L2 regularization currently.
#' @param maxIter Maximum iteration number.
#' @param tol Convergence tolerance of iterations.
#' @param standardization Whether to standardize the training features before fitting the model.
#' The coefficients of models will be always returned on the original scale,
#' so it will be transparent for users. Note that with/without
#' standardization, the models should be always converged to the same
#' solution when no regularization is applied.
#' @param threshold The threshold in binary classification applied to the linear model prediction.
#' This threshold can be any real number, where Inf will make all predictions 0.0
#' and -Inf will make all predictions 1.0.
#' @param weightCol The weight column name.
#' @param aggregationDepth The depth for treeAggregate (greater than or equal to 2). If the
#' dimensions of features or the number of partitions are large, this param
#' could be adjusted to a larger size.
#' This is an expert parameter. Default value should be good for most cases.
#' @param handleInvalid How to handle invalid data (unseen labels or NULL values) in features and
#' label column of string type.
#' Supported options: "skip" (filter out rows with invalid data),
#' "error" (throw an error), "keep" (put invalid data in
#' a special additional bucket, at index numLabels). Default
#' is "error".
#' @param ... additional arguments passed to the method.
#' @return \code{spark.svmLinear} returns a fitted linear SVM model.
#' @rdname spark.svmLinear
#' @aliases spark.svmLinear,SparkDataFrame,formula-method
#' @name spark.svmLinear
#' @examples
#' \dontrun{
#' sparkR.session()
#' t <- as.data.frame(Titanic)
#' training <- createDataFrame(t)
#' model <- spark.svmLinear(training, Survived ~ ., regParam = 0.5)
#' summary <- summary(model)
#'
#' # fitted values on training data
#' fitted <- predict(model, training)
#'
#' # save fitted model to input path
#' path <- "path/to/model"
#' write.ml(model, path)
#'
#' # can also read back the saved model and predict
#' # Note that summary does not work on loaded model
#' savedModel <- read.ml(path)
#' summary(savedModel)
#' }
#' @note spark.svmLinear since 2.2.0
setMethod("spark.svmLinear", signature(data = "SparkDataFrame", formula = "formula"),
function(data, formula, regParam = 0.0, maxIter = 100, tol = 1E-6, standardization = TRUE,
threshold = 0.0, weightCol = NULL, aggregationDepth = 2,
handleInvalid = c("error", "keep", "skip")) {
formula <- paste(deparse(formula), collapse = "")
if (!is.null(weightCol) && weightCol == "") {
weightCol <- NULL
} else if (!is.null(weightCol)) {
weightCol <- as.character(weightCol)
}
handleInvalid <- match.arg(handleInvalid)
jobj <- callJStatic("org.apache.spark.ml.r.LinearSVCWrapper", "fit",
data@sdf, formula, as.numeric(regParam), as.integer(maxIter),
as.numeric(tol), as.logical(standardization), as.numeric(threshold),
weightCol, as.integer(aggregationDepth), handleInvalid)
new("LinearSVCModel", jobj = jobj)
})
# Predicted values based on a LinearSVCModel model
#' @param newData a SparkDataFrame for testing.
#' @return \code{predict} returns the predicted values based on a LinearSVCModel.
#' @rdname spark.svmLinear
#' @aliases predict,LinearSVCModel,SparkDataFrame-method
#' @note predict(LinearSVCModel) since 2.2.0
setMethod("predict", signature(object = "LinearSVCModel"),
function(object, newData) {
predict_internal(object, newData)
})
# Get the summary of a LinearSVCModel
#' @param object a LinearSVCModel fitted by \code{spark.svmLinear}.
#' @return \code{summary} returns summary information of the fitted model, which is a list.
#' The list includes \code{coefficients} (coefficients of the fitted model),
#' \code{numClasses} (number of classes), \code{numFeatures} (number of features).
#' @rdname spark.svmLinear
#' @aliases summary,LinearSVCModel-method
#' @note summary(LinearSVCModel) since 2.2.0
setMethod("summary", signature(object = "LinearSVCModel"),
function(object) {
jobj <- object@jobj
features <- callJMethod(jobj, "rFeatures")
coefficients <- callJMethod(jobj, "rCoefficients")
coefficients <- as.matrix(unlist(coefficients))
colnames(coefficients) <- c("Estimate")
rownames(coefficients) <- unlist(features)
numClasses <- callJMethod(jobj, "numClasses")
numFeatures <- callJMethod(jobj, "numFeatures")
list(coefficients = coefficients, numClasses = numClasses, numFeatures = numFeatures)
})
# Save fitted LinearSVCModel to the input path
#' @param path The directory where the model is saved.
#' @param overwrite Overwrites or not if the output path already exists. Default is FALSE
#' which means throw exception if the output path exists.
#'
#' @rdname spark.svmLinear
#' @aliases write.ml,LinearSVCModel,character-method
#' @note write.ml(LinearSVCModel, character) since 2.2.0
setMethod("write.ml", signature(object = "LinearSVCModel", path = "character"),
function(object, path, overwrite = FALSE) {
write_internal(object, path, overwrite)
})
#' Logistic Regression Model
#'
#' Fits a logistic regression model against a SparkDataFrame. It supports "binomial": Binary
#' logistic regression with pivoting; "multinomial": Multinomial logistic (softmax) regression
#' without pivoting, similar to glmnet. Users can print, make predictions on the produced model
#' and save the model to the input path.
#'
#' @param data SparkDataFrame for training.
#' @param formula A symbolic description of the model to be fitted. Currently only a few formula
#' operators are supported, including '~', '.', ':', '+', and '-'.
#' @param regParam the regularization parameter.
#' @param elasticNetParam the ElasticNet mixing parameter. For alpha = 0.0, the penalty is an L2
#' penalty. For alpha = 1.0, it is an L1 penalty. For 0.0 < alpha < 1.0,
#' the penalty is a combination of L1 and L2. Default is 0.0 which is an
#' L2 penalty.
#' @param maxIter maximum iteration number.
#' @param tol convergence tolerance of iterations.
#' @param family the name of family which is a description of the label distribution to be used
#' in the model.
#' Supported options:
#' \itemize{
#' \item{"auto": Automatically select the family based on the number of classes:
#' If number of classes == 1 || number of classes == 2, set to "binomial".
#' Else, set to "multinomial".}
#' \item{"binomial": Binary logistic regression with pivoting.}
#' \item{"multinomial": Multinomial logistic (softmax) regression without
#' pivoting.}
#' }
#' @param standardization whether to standardize the training features before fitting the model.
#' The coefficients of models will be always returned on the original scale,
#' so it will be transparent for users. Note that with/without
#' standardization, the models should be always converged to the same
#' solution when no regularization is applied. Default is TRUE, same as
#' glmnet.
#' @param thresholds in binary classification, in range [0, 1]. If the estimated probability of
#' class label 1 is > threshold, then predict 1, else 0. A high threshold
#' encourages the model to predict 0 more often; a low threshold encourages the
#' model to predict 1 more often. Note: Setting this with threshold p is
#' equivalent to setting thresholds c(1-p, p). In multiclass (or binary)
#' classification to adjust the probability of predicting each class. Array must
#' have length equal to the number of classes, with values > 0, excepting that
#' at most one value may be 0. The class with largest value p/t is predicted,
#' where p is the original probability of that class and t is the class's
#' threshold.
#' @param weightCol The weight column name.
#' @param aggregationDepth The depth for treeAggregate (greater than or equal to 2). If the
#' dimensions of features or the number of partitions are large, this param
#' could be adjusted to a larger size. This is an expert parameter. Default
#' value should be good for most cases.
#' @param lowerBoundsOnCoefficients The lower bounds on coefficients if fitting under bound
#' constrained optimization.
#' The bound matrix must be compatible with the shape (1, number
#' of features) for binomial regression, or (number of classes,
#' number of features) for multinomial regression.
#' It is an R matrix.
#' @param upperBoundsOnCoefficients The upper bounds on coefficients if fitting under bound
#' constrained optimization.
#' The bound matrix must be compatible with the shape (1, number
#' of features) for binomial regression, or (number of classes,
#' number of features) for multinomial regression.
#' It is an R matrix.
#' @param lowerBoundsOnIntercepts The lower bounds on intercepts if fitting under bound constrained
#' optimization.
#' The bounds vector size must be equal to 1 for binomial regression,
#' or the number
#' of classes for multinomial regression.
#' @param upperBoundsOnIntercepts The upper bounds on intercepts if fitting under bound constrained
#' optimization.
#' The bound vector size must be equal to 1 for binomial regression,
#' or the number of classes for multinomial regression.
#' @param handleInvalid How to handle invalid data (unseen labels or NULL values) in features and
#' label column of string type.
#' Supported options: "skip" (filter out rows with invalid data),
#' "error" (throw an error), "keep" (put invalid data in
#' a special additional bucket, at index numLabels). Default
#' is "error".
#' @param ... additional arguments passed to the method.
#' @return \code{spark.logit} returns a fitted logistic regression model.
#' @rdname spark.logit
#' @aliases spark.logit,SparkDataFrame,formula-method
#' @name spark.logit
#' @examples
#' \dontrun{
#' sparkR.session()
#' # binary logistic regression
#' t <- as.data.frame(Titanic)
#' training <- createDataFrame(t)
#' model <- spark.logit(training, Survived ~ ., regParam = 0.5)
#' summary <- summary(model)
#'
#' # fitted values on training data
#' fitted <- predict(model, training)
#'
#' # save fitted model to input path
#' path <- "path/to/model"
#' write.ml(model, path)
#'
#' # can also read back the saved model and predict
#' # Note that summary does not work on loaded model
#' savedModel <- read.ml(path)
#' summary(savedModel)
#'
#' # binary logistic regression against two classes with
#' # upperBoundsOnCoefficients and upperBoundsOnIntercepts
#' ubc <- matrix(c(1.0, 0.0, 1.0, 0.0), nrow = 1, ncol = 4)
#' model <- spark.logit(training, Species ~ .,
#' upperBoundsOnCoefficients = ubc,
#' upperBoundsOnIntercepts = 1.0)
#'
#' # multinomial logistic regression
#' model <- spark.logit(training, Class ~ ., regParam = 0.5)
#' summary <- summary(model)
#'
#' # multinomial logistic regression with
#' # lowerBoundsOnCoefficients and lowerBoundsOnIntercepts
#' lbc <- matrix(c(0.0, -1.0, 0.0, -1.0, 0.0, -1.0, 0.0, -1.0), nrow = 2, ncol = 4)
#' lbi <- as.array(c(0.0, 0.0))
#' model <- spark.logit(training, Species ~ ., family = "multinomial",
#' lowerBoundsOnCoefficients = lbc,
#' lowerBoundsOnIntercepts = lbi)
#' }
#' @note spark.logit since 2.1.0
setMethod("spark.logit", signature(data = "SparkDataFrame", formula = "formula"),
function(data, formula, regParam = 0.0, elasticNetParam = 0.0, maxIter = 100,
tol = 1E-6, family = "auto", standardization = TRUE,
thresholds = 0.5, weightCol = NULL, aggregationDepth = 2,
lowerBoundsOnCoefficients = NULL, upperBoundsOnCoefficients = NULL,
lowerBoundsOnIntercepts = NULL, upperBoundsOnIntercepts = NULL,
handleInvalid = c("error", "keep", "skip")) {
formula <- paste(deparse(formula), collapse = "")
row <- 0
col <- 0
if (!is.null(weightCol) && weightCol == "") {
weightCol <- NULL
} else if (!is.null(weightCol)) {
weightCol <- as.character(weightCol)
}
if (!is.null(lowerBoundsOnIntercepts)) {
lowerBoundsOnIntercepts <- as.array(lowerBoundsOnIntercepts)
}
if (!is.null(upperBoundsOnIntercepts)) {
upperBoundsOnIntercepts <- as.array(upperBoundsOnIntercepts)
}
if (!is.null(lowerBoundsOnCoefficients)) {
if (!is.matrix(lowerBoundsOnCoefficients)) {
stop("lowerBoundsOnCoefficients must be a matrix.")
}
row <- nrow(lowerBoundsOnCoefficients)
col <- ncol(lowerBoundsOnCoefficients)
lowerBoundsOnCoefficients <- as.array(as.vector(lowerBoundsOnCoefficients))
}
if (!is.null(upperBoundsOnCoefficients)) {
if (!is.matrix(upperBoundsOnCoefficients)) {
stop("upperBoundsOnCoefficients must be a matrix.")
}
if (!is.null(lowerBoundsOnCoefficients) && (row != nrow(upperBoundsOnCoefficients)
|| col != ncol(upperBoundsOnCoefficients))) {
stop("dimension of upperBoundsOnCoefficients ",
"is not the same as lowerBoundsOnCoefficients")
}
if (is.null(lowerBoundsOnCoefficients)) {
row <- nrow(upperBoundsOnCoefficients)
col <- ncol(upperBoundsOnCoefficients)
}
upperBoundsOnCoefficients <- as.array(as.vector(upperBoundsOnCoefficients))
}
handleInvalid <- match.arg(handleInvalid)
jobj <- callJStatic("org.apache.spark.ml.r.LogisticRegressionWrapper", "fit",
data@sdf, formula, as.numeric(regParam),
as.numeric(elasticNetParam), as.integer(maxIter),
as.numeric(tol), as.character(family),
as.logical(standardization), as.array(thresholds),
weightCol, as.integer(aggregationDepth),
as.integer(row), as.integer(col),
lowerBoundsOnCoefficients, upperBoundsOnCoefficients,
lowerBoundsOnIntercepts, upperBoundsOnIntercepts,
handleInvalid)
new("LogisticRegressionModel", jobj = jobj)
})
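# The thresholds rule documented above (predict the class with the largest
# p/t ratio, so a scalar threshold p is equivalent to the vector c(1 - p, p))
# can be checked in plain R, independent of Spark. predict_class below is a
# hypothetical helper for illustration, not part of the SparkR API:

```r
# Hypothetical helper mirroring the documented rule: pick the class
# whose probability/threshold ratio is largest (0-based labels).
predict_class <- function(probs, thresholds) {
  which.max(probs / thresholds) - 1L
}

probs <- c(0.35, 0.65)   # P(class 0), P(class 1)

# Scalar threshold p = 0.7 is equivalent to thresholds c(1 - 0.7, 0.7):
predict_class(probs, c(0.3, 0.7))   # high threshold discourages predicting 1
predict_class(probs, c(0.5, 0.5))   # default: plain argmax
```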
# Get the summary of a LogisticRegressionModel
#' @param object a LogisticRegressionModel fitted by \code{spark.logit}.
#' @return \code{summary} returns summary information of the fitted model, which is a list.
#' The list includes \code{coefficients} (coefficients matrix of the fitted model).
#' @rdname spark.logit
#' @aliases summary,LogisticRegressionModel-method
#' @note summary(LogisticRegressionModel) since 2.1.0
setMethod("summary", signature(object = "LogisticRegressionModel"),
function(object) {
jobj <- object@jobj
features <- callJMethod(jobj, "rFeatures")
labels <- callJMethod(jobj, "labels")
coefficients <- callJMethod(jobj, "rCoefficients")
nCol <- length(coefficients) / length(features)
coefficients <- matrix(unlist(coefficients), ncol = nCol)
# If nCol == 1, means this is a binomial logistic regression model with pivoting.
# Otherwise, it's a multinomial logistic regression model without pivoting.
if (nCol == 1) {
colnames(coefficients) <- c("Estimate")
} else {
colnames(coefficients) <- unlist(labels)
}
rownames(coefficients) <- unlist(features)
list(coefficients = coefficients)
})
# Predicted values based on an LogisticRegressionModel model
#' @param newData a SparkDataFrame for testing.
#' @return \code{predict} returns the predicted values based on a LogisticRegressionModel.
#' @rdname spark.logit
#' @aliases predict,LogisticRegressionModel,SparkDataFrame-method
#' @note predict(LogisticRegressionModel) since 2.1.0
setMethod("predict", signature(object = "LogisticRegressionModel"),
function(object, newData) {
predict_internal(object, newData)
})
# Save fitted LogisticRegressionModel to the input path
#' @param path The directory where the model is saved.
#' @param overwrite Overwrites or not if the output path already exists. Default is FALSE
#' which means throw exception if the output path exists.
#'
#' @rdname spark.logit
#' @aliases write.ml,LogisticRegressionModel,character-method
#' @note write.ml(LogisticRegressionModel, character) since 2.1.0
setMethod("write.ml", signature(object = "LogisticRegressionModel", path = "character"),
function(object, path, overwrite = FALSE) {
write_internal(object, path, overwrite)
})
#' Multilayer Perceptron Classification Model
#'
#' \code{spark.mlp} fits a multi-layer perceptron neural network model against a SparkDataFrame.
#' Users can call \code{summary} to print a summary of the fitted model, \code{predict} to make
#' predictions on new data, and \code{write.ml}/\code{read.ml} to save/load fitted models.
#' Only categorical data is supported.
#' For more details, see
#' \href{https://spark.apache.org/docs/latest/ml-classification-regression.html}{
#' Multilayer Perceptron}
#'
#' @param data a \code{SparkDataFrame} of observations and labels for model fitting.
#' @param formula a symbolic description of the model to be fitted. Currently only a few formula
#' operators are supported, including '~', '.', ':', '+', and '-'.
#' @param blockSize blockSize parameter.
#' @param layers integer vector containing the number of nodes for each layer.
#' @param solver solver parameter, supported options: "gd" (minibatch gradient descent) or "l-bfgs".
#' @param maxIter maximum iteration number.
#' @param tol convergence tolerance of iterations.
#' @param stepSize stepSize parameter.
#' @param seed seed parameter for weights initialization.
#' @param initialWeights initialWeights parameter for weights initialization, it should be a
#' numeric vector.
#' @param handleInvalid How to handle invalid data (unseen labels or NULL values) in features and
#' label column of string type.
#' Supported options: "skip" (filter out rows with invalid data),
#' "error" (throw an error), "keep" (put invalid data in
#' a special additional bucket, at index numLabels). Default
#' is "error".
#' @param ... additional arguments passed to the method.
#' @return \code{spark.mlp} returns a fitted Multilayer Perceptron Classification Model.
#' @rdname spark.mlp
#' @aliases spark.mlp,SparkDataFrame,formula-method
#' @name spark.mlp
#' @seealso \link{read.ml}
#' @examples
#' \dontrun{
#' df <- read.df("data/mllib/sample_multiclass_classification_data.txt", source = "libsvm")
#'
#' # fit a Multilayer Perceptron Classification Model
#' model <- spark.mlp(df, label ~ features, blockSize = 128, layers = c(4, 3), solver = "l-bfgs",
#' maxIter = 100, tol = 0.5, stepSize = 1, seed = 1,
#' initialWeights = c(0, 0, 0, 0, 0, 5, 5, 5, 5, 5, 9, 9, 9, 9, 9))
#'
#' # get the summary of the model
#' summary(model)
#'
#' # make predictions
#' predictions <- predict(model, df)
#'
#' # save and load the model
#' path <- "path/to/model"
#' write.ml(model, path)
#' savedModel <- read.ml(path)
#' summary(savedModel)
#' }
#' @note spark.mlp since 2.1.0
setMethod("spark.mlp", signature(data = "SparkDataFrame", formula = "formula"),
function(data, formula, layers, blockSize = 128, solver = "l-bfgs", maxIter = 100,
tol = 1E-6, stepSize = 0.03, seed = NULL, initialWeights = NULL,
handleInvalid = c("error", "keep", "skip")) {
formula <- paste(deparse(formula), collapse = "")
if (is.null(layers)) {
stop("layers must be an integer vector with length > 1.")
}
layers <- as.integer(na.omit(layers))
if (length(layers) <= 1) {
stop("layers must be an integer vector with length > 1.")
}
if (!is.null(seed)) {
seed <- as.character(as.integer(seed))
}
if (!is.null(initialWeights)) {
initialWeights <- as.array(as.numeric(na.omit(initialWeights)))
}
handleInvalid <- match.arg(handleInvalid)
jobj <- callJStatic("org.apache.spark.ml.r.MultilayerPerceptronClassifierWrapper",
"fit", data@sdf, formula, as.integer(blockSize), as.array(layers),
as.character(solver), as.integer(maxIter), as.numeric(tol),
as.numeric(stepSize), seed, initialWeights, handleInvalid)
new("MultilayerPerceptronClassificationModel", jobj = jobj)
})
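# The layers sanitization in the method above (drop NAs, coerce to integer,
# then require at least two entries so the network has an input and an
# output layer) behaves like this in plain R:

```r
layers <- c(4, NA, 5, 3)               # user input with a stray NA
layers <- as.integer(na.omit(layers))  # NA dropped, attributes stripped

# A single surviving entry would be rejected by the length check
length(layers) > 1                     # TRUE
```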
# Returns the summary of a Multilayer Perceptron Classification Model produced by \code{spark.mlp}
#' @param object a Multilayer Perceptron Classification Model fitted by \code{spark.mlp}
#' @return \code{summary} returns summary information of the fitted model, which is a list.
#' The list includes \code{numOfInputs} (number of inputs), \code{numOfOutputs}
#' (number of outputs), \code{layers} (array of layer sizes including input
#' and output layers), and \code{weights} (the weights of layers).
#' For \code{weights}, it is a numeric vector with length equal to the expected
#' given the architecture (i.e., for 8-10-2 network, 112 connection weights).
#' @rdname spark.mlp
#' @aliases summary,MultilayerPerceptronClassificationModel-method
#' @note summary(MultilayerPerceptronClassificationModel) since 2.1.0
setMethod("summary", signature(object = "MultilayerPerceptronClassificationModel"),
function(object) {
jobj <- object@jobj
layers <- unlist(callJMethod(jobj, "layers"))
numOfInputs <- head(layers, n = 1)
numOfOutputs <- tail(layers, n = 1)
weights <- callJMethod(jobj, "weights")
list(numOfInputs = numOfInputs, numOfOutputs = numOfOutputs,
layers = layers, weights = weights)
})
# Makes predictions from a model produced by spark.mlp().
#' @param newData a SparkDataFrame for testing.
#' @return \code{predict} returns a SparkDataFrame containing predicted labels in a column named
#' "prediction".
#' @rdname spark.mlp
#' @aliases predict,MultilayerPerceptronClassificationModel-method
#' @note predict(MultilayerPerceptronClassificationModel) since 2.1.0
setMethod("predict", signature(object = "MultilayerPerceptronClassificationModel"),
function(object, newData) {
predict_internal(object, newData)
})
# Saves the Multilayer Perceptron Classification Model to the input path.
#' @param path the directory where the model is saved.
#' @param overwrite whether to overwrite the output path if it already exists. Default is FALSE,
#' which means an exception is thrown if the output path exists.
#'
#' @rdname spark.mlp
#' @aliases write.ml,MultilayerPerceptronClassificationModel,character-method
#' @seealso \link{write.ml}
#' @note write.ml(MultilayerPerceptronClassificationModel, character) since 2.1.0
setMethod("write.ml", signature(object = "MultilayerPerceptronClassificationModel",
path = "character"),
function(object, path, overwrite = FALSE) {
write_internal(object, path, overwrite)
})
#' Naive Bayes Models
#'
#' \code{spark.naiveBayes} fits a Bernoulli naive Bayes model against a SparkDataFrame.
#' Users can call \code{summary} to print a summary of the fitted model, \code{predict} to make
#' predictions on new data, and \code{write.ml}/\code{read.ml} to save/load fitted models.
#' Only categorical data is supported.
#'
#' @param data a \code{SparkDataFrame} of observations and labels for model fitting.
#' @param formula a symbolic description of the model to be fitted. Currently only a few formula
#' operators are supported, including '~', '.', ':', '+', and '-'.
#' @param smoothing smoothing parameter.
#' @param handleInvalid How to handle invalid data (unseen labels or NULL values) in features and
#' label column of string type.
#' Supported options: "skip" (filter out rows with invalid data),
#' "error" (throw an error), "keep" (put invalid data in
#' a special additional bucket, at index numLabels). Default
#' is "error".
#' @param ... additional argument(s) passed to the method. Currently only \code{smoothing}.
#' @return \code{spark.naiveBayes} returns a fitted naive Bayes model.
#' @rdname spark.naiveBayes
#' @aliases spark.naiveBayes,SparkDataFrame,formula-method
#' @name spark.naiveBayes
#' @seealso e1071: \url{https://cran.r-project.org/web/packages/e1071/index.html}
#' @examples
#' \dontrun{
#' data <- as.data.frame(UCBAdmissions)
#' df <- createDataFrame(data)
#'
#' # fit a Bernoulli naive Bayes model
#' model <- spark.naiveBayes(df, Admit ~ Gender + Dept, smoothing = 0)
#'
#' # get the summary of the model
#' summary(model)
#'
#' # make predictions
#' predictions <- predict(model, df)
#'
#' # save and load the model
#' path <- "path/to/model"
#' write.ml(model, path)
#' savedModel <- read.ml(path)
#' summary(savedModel)
#' }
#' @note spark.naiveBayes since 2.0.0
setMethod("spark.naiveBayes", signature(data = "SparkDataFrame", formula = "formula"),
function(data, formula, smoothing = 1.0,
handleInvalid = c("error", "keep", "skip")) {
formula <- paste(deparse(formula), collapse = "")
handleInvalid <- match.arg(handleInvalid)
jobj <- callJStatic("org.apache.spark.ml.r.NaiveBayesWrapper", "fit",
formula, data@sdf, smoothing, handleInvalid)
new("NaiveBayesModel", jobj = jobj)
})
# Returns the summary of a naive Bayes model produced by \code{spark.naiveBayes}
#' @param object a naive Bayes model fitted by \code{spark.naiveBayes}.
#' @return \code{summary} returns summary information of the fitted model, which is a list.
#' The list includes \code{apriori} (the label distribution) and
#' \code{tables} (conditional probabilities given the target label).
#' @rdname spark.naiveBayes
#' @note summary(NaiveBayesModel) since 2.0.0
setMethod("summary", signature(object = "NaiveBayesModel"),
function(object) {
jobj <- object@jobj
features <- callJMethod(jobj, "features")
labels <- callJMethod(jobj, "labels")
apriori <- callJMethod(jobj, "apriori")
apriori <- t(as.matrix(unlist(apriori)))
colnames(apriori) <- unlist(labels)
tables <- callJMethod(jobj, "tables")
tables <- matrix(tables, nrow = length(labels))
rownames(tables) <- unlist(labels)
colnames(tables) <- unlist(features)
list(apriori = apriori, tables = tables)
})
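The reshaping this summary performs can be seen standalone: the wrapper hands back the conditional probabilities as a flat vector, and `matrix()` refolds it column-wise into a labels-by-features table. The label, feature, and probability values below are made up purely for illustration:

```r
labels <- c("No", "Yes")        # hypothetical class labels
features <- c("f1", "f2")       # hypothetical feature names
flat <- c(0.6, 0.4, 0.7, 0.3)   # flat vector, filled column by column
tables <- matrix(flat, nrow = length(labels))
rownames(tables) <- labels
colnames(tables) <- features
tables["No", "f1"]  # 0.6 -- the first entry lands in row 1, column 1
```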
# Makes predictions from a naive Bayes model produced by spark.naiveBayes(),
# similarly to R package e1071's predict.
#' @param newData a SparkDataFrame for testing.
#' @return \code{predict} returns a SparkDataFrame containing predicted labels in a column named
#' "prediction".
#' @rdname spark.naiveBayes
#' @note predict(NaiveBayesModel) since 2.0.0
setMethod("predict", signature(object = "NaiveBayesModel"),
function(object, newData) {
predict_internal(object, newData)
})
# Saves the Bernoulli naive Bayes model to the input path.
#' @param path the directory where the model is saved.
#' @param overwrite whether to overwrite the output path if it already exists. Default is FALSE,
#' which means an exception is thrown if the output path exists.
#'
#' @rdname spark.naiveBayes
#' @seealso \link{write.ml}
#' @note write.ml(NaiveBayesModel, character) since 2.0.0
setMethod("write.ml", signature(object = "NaiveBayesModel", path = "character"),
function(object, path, overwrite = FALSE) {
write_internal(object, path, overwrite)
})
#' Factorization Machines Classification Model
#'
#' \code{spark.fmClassifier} fits a factorization machines classification model against a SparkDataFrame.
#' Users can call \code{summary} to print a summary of the fitted model, \code{predict} to make
#' predictions on new data, and \code{write.ml}/\code{read.ml} to save/load fitted models.
#' Only categorical data is supported.
#'
#' @param data a \code{SparkDataFrame} of observations and labels for model fitting.
#' @param formula a symbolic description of the model to be fitted. Currently only a few formula
#' operators are supported, including '~', '.', ':', '+', and '-'.
#' @param factorSize dimensionality of the factors.
#' @param fitLinear whether to fit linear term. # TODO Can we express this with formula?
#' @param regParam the regularization parameter.
#' @param miniBatchFraction the mini-batch fraction parameter.
#' @param initStd the standard deviation of initial coefficients.
#' @param maxIter maximum iteration number.
#' @param stepSize stepSize parameter.
#' @param tol convergence tolerance of iterations.
#' @param solver solver parameter, supported options: "gd" (minibatch gradient descent) or "adamW".
#' @param thresholds in binary classification, in range [0, 1]. If the estimated probability of
#' class label 1 is > threshold, then predict 1, else 0. A high threshold
#' encourages the model to predict 0 more often; a low threshold encourages the
#' model to predict 1 more often. Note: Setting this with threshold p is
#' equivalent to setting thresholds c(1-p, p).
#' @param seed seed parameter for weights initialization.
#' @param handleInvalid How to handle invalid data (unseen labels or NULL values) in features and
#' label column of string type.
#' Supported options: "skip" (filter out rows with invalid data),
#' "error" (throw an error), "keep" (put invalid data in
#' a special additional bucket, at index numLabels). Default
#' is "error".
#' @param ... additional arguments passed to the method.
#' @return \code{spark.fmClassifier} returns a fitted Factorization Machines Classification Model.
#' @rdname spark.fmClassifier
#' @aliases spark.fmClassifier,SparkDataFrame,formula-method
#' @name spark.fmClassifier
#' @seealso \link{read.ml}
#' @examples
#' \dontrun{
#' df <- read.df("data/mllib/sample_binary_classification_data.txt", source = "libsvm")
#'
#' # fit Factorization Machines Classification Model
#' model <- spark.fmClassifier(
#' df, label ~ features,
#' regParam = 0.01, maxIter = 10, fitLinear = TRUE
#' )
#'
#' # get the summary of the model
#' summary(model)
#'
#' # make predictions
#' predictions <- predict(model, df)
#'
#' # save and load the model
#' path <- "path/to/model"
#' write.ml(model, path)
#' savedModel <- read.ml(path)
#' summary(savedModel)
#' }
#' @note spark.fmClassifier since 3.1.0
setMethod("spark.fmClassifier", signature(data = "SparkDataFrame", formula = "formula"),
function(data, formula, factorSize = 8, fitLinear = TRUE, regParam = 0.0,
miniBatchFraction = 1.0, initStd = 0.01, maxIter = 100, stepSize=1.0,
tol = 1e-6, solver = c("adamW", "gd"), thresholds = NULL, seed = NULL,
handleInvalid = c("error", "keep", "skip")) {
formula <- paste(deparse(formula), collapse = "")
if (!is.null(seed)) {
seed <- as.character(as.integer(seed))
}
if (!is.null(thresholds)) {
thresholds <- as.list(thresholds)
}
solver <- match.arg(solver)
handleInvalid <- match.arg(handleInvalid)
jobj <- callJStatic("org.apache.spark.ml.r.FMClassifierWrapper",
"fit",
data@sdf,
formula,
as.integer(factorSize),
as.logical(fitLinear),
as.numeric(regParam),
as.numeric(miniBatchFraction),
as.numeric(initStd),
as.integer(maxIter),
as.numeric(stepSize),
as.numeric(tol),
solver,
seed,
thresholds,
handleInvalid)
new("FMClassificationModel", jobj = jobj)
})
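The `thresholds` semantics documented above can be sketched outside Spark: probabilistic classifiers predict the class maximizing `probability[i] / thresholds[i]`, so `c(1 - p, p)` reduces to "predict 1 iff P(class 1) > p". A base-R illustration (the function name is made up and is not part of the SparkR API):

```r
# Illustrative only: mimics the documented thresholding rule for the binary case.
predict_with_threshold <- function(prob1, p) {
  thresholds <- c(1 - p, p)
  probs <- c(1 - prob1, prob1)
  which.max(probs / thresholds) - 1  # 0-based predicted label
}
predict_with_threshold(0.30, 0.25)  # 1: 0.30 > 0.25
predict_with_threshold(0.30, 0.50)  # 0: 0.30 <= 0.50
```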
# Returns the summary of an FM Classification model produced by \code{spark.fmClassifier}
#' @param object an FM Classification model fitted by \code{spark.fmClassifier}.
#' @return \code{summary} returns summary information of the fitted model, which is a list.
#' @rdname spark.fmClassifier
#' @note summary(FMClassificationModel) since 3.1.0
setMethod("summary", signature(object = "FMClassificationModel"),
function(object) {
jobj <- object@jobj
features <- callJMethod(jobj, "rFeatures")
coefficients <- callJMethod(jobj, "rCoefficients")
coefficients <- as.matrix(unlist(coefficients))
colnames(coefficients) <- c("Estimate")
rownames(coefficients) <- unlist(features)
numClasses <- callJMethod(jobj, "numClasses")
numFeatures <- callJMethod(jobj, "numFeatures")
raw_factors <- unlist(callJMethod(jobj, "rFactors"))
factor_size <- callJMethod(jobj, "factorSize")
list(
coefficients = coefficients,
factors = matrix(raw_factors, ncol = factor_size),
numClasses = numClasses, numFeatures = numFeatures,
factorSize = factor_size
)
})
# Predicted values based on an FMClassificationModel model
#' @param newData a SparkDataFrame for testing.
#' @return \code{predict} returns the predicted values based on an FM Classification model.
#' @rdname spark.fmClassifier
#' @aliases predict,FMClassificationModel,SparkDataFrame-method
#' @note predict(FMClassificationModel) since 3.1.0
setMethod("predict", signature(object = "FMClassificationModel"),
function(object, newData) {
predict_internal(object, newData)
})
# Save fitted FMClassificationModel to the input path
#' @param path The directory where the model is saved.
#' @param overwrite Whether to overwrite the output path if it already exists. Default is FALSE,
#' which means an exception is thrown if the output path exists.
#'
#' @rdname spark.fmClassifier
#' @aliases write.ml,FMClassificationModel,character-method
#' @note write.ml(FMClassificationModel, character) since 3.1.0
setMethod("write.ml", signature(object = "FMClassificationModel", path = "character"),
function(object, path, overwrite = FALSE) {
write_internal(object, path, overwrite)
})
|
library(tidyverse)
pouet<-tibble(x=c("philipe","Jean","maurice"),y=c(12,14,15))
pouet[1,]
library(readr)
etudiants<-read_csv2(file = "etudiants.csv",
locale = locale(
date_names ="fr",
decimal_mark=",",
encoding = "UTF-8",
tz="Europe/Paris"
)
)
etudiants
don<-read_csv2(file = "resultatsL1S1BBTE",
locale = locale(
date_names ="fr",
decimal_mark=",",
encoding = "UTF-8",
tz="Europe/Paris"
)
)
| /tp1.R | no_license | Amadoutall/tp_in_rstudio | R | false | false | 688 | r |
|
library(dplyr)
library(ggplot2)
library(pals)
library(mapdata)
theme_set(theme_bw())
reg = map_data("world2Hires")
reg = subset(reg, region %in% c('Canada', 'USA'))
reg$long = (360 - reg$long)*-1
#######
datavar = "MODISA" # "MODISA" or "OI"
#######
datasource = ""
# Region limits:
latlim = c(30,61.5)
lonlim = c(-160,-120)
# Aesthetics:
# "Boiling cauldron of death" palette
# Re: Charles - Similar to pals::brewer.rdylbu
death_cauldron = rev(c("#A50026", "#EA5839", "#F67B49", "#FB9F5A", "#FDBE70","#FDDA8A", "#FFFFBF","#EDF8DE", #reds
#"#ededed",
"#DAF0F6", "#BCE1EE", "#9ECFE3", "#80B6D6", "#649AC7", "#4A7BB7", "#3C59A6", "#313695"))
# 7-day mean, sd, N
curr7days <- readRDS(paste0("data/",datavar,"_SST7day_rollingavgbackup_current.rds"))
# Climatological mean, sd, N
clim7days <- readRDS(paste0("data/",datavar,"_SST7day_rollingavgbackup_climatology.rds"))
curr_clim <- full_join(curr7days, clim7days, by = c("lon","lat"))
rm(curr7days, clim7days)
curr_clim$diff_7day_deg <- curr_clim$sst_7day-curr_clim$sst_7day_clim
# Flag data outside the ~90th percentile, i.e. more than 1.29 times the climatological SD
curr_clim$sd_1.3_pos <- (curr_clim$sst_7day_clim+(curr_clim$sst_7day_climsd*1.29))
curr_clim$sd_above <- curr_clim$sd_1.3_pos-curr_clim$sst_7day
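The 1.29-SD cutoff behind `sd_1.3_pos` approximates the one-sided 90th percentile of a normal distribution, so flagged pixels sit in roughly the warmest 10% of the climatology (quick base-R check):

```r
# Under a normality assumption, mean + 1.29*sd sits near the 90th percentile:
round(qnorm(0.90), 4)  # 1.2816
```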
start = unique(curr_clim$start_date)
end = unique(curr_clim$end_date)
start = start[!is.na(start)]
end = end[!is.na(end)]
# Most recent rolling 7-day average ####
curr_clim %>%
filter(!is.na(sst_7day)) %>%
ggplot() +
geom_tile(aes(x = lon, y = lat, fill = sst_7day)) +
scale_fill_gradientn(colours = jet(50), limits=c(0,30), breaks = c(0,5,10, 15, 20,25,30)) +
geom_contour(aes(x = lon, y = lat, z = sst_7day), size = 0.5,
breaks = c(0,5,10, 15, 20,25,30), colour = "black") +
guides(fill = guide_colorbar(barheight = 12,
ticks.colour = "black", ticks.linewidth = 1.5,
frame.colour = "black", frame.linewidth = 1.5)) +
theme(legend.position = "right",panel.background = element_rect(fill = "grey90")) +
coord_quickmap(xlim = lonlim, ylim = latlim, expand = F) +
labs(fill = expression("SST " ( degree*C)),
title = paste(start, "to", end,"Mean Day SST"),
subtitle = paste(datavar,"NRT Sea Surface Temperature"),
caption = datasource) + xlab(NULL) + ylab(NULL) +
scale_y_continuous(breaks = seq(min(latlim), max(latlim), 5)) +
scale_x_continuous(breaks = seq(min(lonlim), max(lonlim),5)) +
geom_polygon(data = reg, aes(x = long, y = lat, group = group), fill = "grey70", colour = "grey40", size = 0.5)
ggsave(filename = paste0("figures/SST_",datavar,"_7-day_rollingavg_",end,".png"),
device = "png", scale = 1.9, height = 3.5, width = 3.5, units = "in")
ggsave(filename = paste0("SST_",datavar,"_7-day_rollingavg.png"),
device = "png", scale = 1.9, height = 3.5, width = 3.5, units = "in", dpi=250)
# 7-day climatology anomaly ####
curr_clim$diff_7day_deg[curr_clim$diff_7day_deg > 3] = 3
curr_clim$diff_7day_deg[curr_clim$diff_7day_deg < -3] = -3
curr_clim %>%
filter(!is.na(diff_7day_deg)) %>%
ggplot() +
geom_tile(aes(x = lon, y = lat, fill = diff_7day_deg)) +
scale_fill_gradientn(colours = death_cauldron,
limits = c(-3,3), breaks = seq(-3,3,1)) +
geom_contour(aes(x = lon, y = lat, z = sd_above, colour = "1.29 SD"),
size = 0.3, breaks = 0) +
scale_colour_manual(name = NULL, guide = "legend", values = c("1.29 SD" = "black")) +
guides(fill = guide_colorbar(barheight = 12,
ticks.colour = "black", ticks.linewidth = 1.5,
frame.colour = "black", frame.linewidth = 1.5),
colour = guide_legend(override.aes = list(linetype = 1, shape = NA))) +
theme(legend.position = "right",
panel.background = element_rect(fill = "grey90")) +
coord_quickmap(xlim = lonlim, ylim = latlim, expand = F) +
labs(fill = expression("Anomaly " ( degree*C)),
title = paste(start, "to", end,"SST Anomaly"),
subtitle = paste(datavar,"NRT Sea Surface Temperature Anomaly"),
caption = datasource) +
xlab(NULL) + ylab(NULL) +
scale_y_continuous(breaks = seq(min(latlim), max(latlim), 5)) +
scale_x_continuous(breaks = seq(min(lonlim),max(lonlim),5)) +
geom_polygon(data = reg, aes(x = long, y = lat, group = group), fill = "grey70", colour = "grey40", size = 0.5)
ggsave(filename = paste0("figures/SST_",datavar,"_7-day_rollingavg_anom_",end,".png"),
device = "png", scale = 1.9, height = 3.5, width = 3.5, units = "in")
ggsave(filename = paste0("SST_",datavar,"_7-day_rollingavg_anom.png"),
device = "png", scale = 1.9, height = 3.5, width = 3.5, units = "in", dpi=250)
# Number of pixels: ####
# (Only do for MODISA, as OI is same everywhere)
curr_clim$sst_7dayn[is.na(curr_clim$sst_7day)] <- 0
curr_clim %>%
filter(!is.na(sst_7dayn)) %>%
ggplot() +
geom_tile(aes(x = lon, y = lat, fill = sst_7dayn)) +
scale_fill_gradientn(colours = c("grey90", pals::jet(7)), breaks = seq(0,7,1)) +
# geom_contour(aes(x = lon, y = lat, z = sst_7dayn), size = 0.5,
# breaks = c(5,10, 15, 20), colour = "black") +
guides(fill = guide_colorbar(barheight = 12, ticks.linewidth = 1.5,
frame.colour = "black", frame.linewidth = 1.5,
nbin = 8, raster=F, ticks.colour = NA)) +
theme(legend.position = "right",panel.background = element_rect(fill = "grey90")) +
coord_quickmap(xlim = lonlim, ylim = latlim, expand = F) +
labs(fill = 'Weekly \nObservations',
title = paste(start, "to", end,"Day SST, Number of Observations"),
subtitle = paste(datavar,"NRT Sea Surface Temperature"),
caption = datasource) +
xlab(NULL) + ylab(NULL) +
scale_y_continuous(breaks = seq(min(latlim), max(latlim), 5)) +
scale_x_continuous(breaks = seq(min(lonlim), max(lonlim),5)) +
geom_polygon(data = reg, aes(x = long, y = lat, group = group), fill = "grey70", colour = "grey40", size = 0.5)
ggsave(filename = paste0("figures/SST_",datavar,"_7-day_rollingavg_n_",end,".png"),
device = "png", scale = 1.9, height = 3.5, width = 3.5, units = "in")
ggsave(filename = paste0("SST_",datavar,"_7-day_rollingavg_n.png"),
device = "png", scale = 1.9, height = 3.5, width = 3.5, units = "in")
| /scripts/NRT_SST_Plots_MODISA.R | no_license | cilori/Pacific_SST_NRT_Monitoring | R | false | false | 6,426 | r |
|
#Read data from file in working directory:
data <- read.table("household_power_consumption.txt",sep=";",colClasses=c("character","character","numeric",
"numeric","numeric","numeric","numeric","numeric","numeric"),header=TRUE, na.strings="?")
#Create logical vector for subsetting:
DatesOfInterest <- data$Date %in% c("1/2/2007","2/2/2007")
#Subset data:
data <- data[DatesOfInterest,]
#Convert Date and Time columns:
data$dateTime <- paste(data$Date,data$Time)
data$dateTime = strptime(data$dateTime,"%d/%m/%Y %H:%M:%S")
hist(data$Global_active_power, col="darkorange2",
xlab="Global Active Power (kilowatts)",main="Global Active Power")
| /plot1.R | no_license | dkrooshof/ExData_Plotting1 | R | false | false | 669 | r |
|
library(lmomco)
### Name: rlmomco
### Title: Random Variates of a Distribution
### Aliases: rlmomco
### Keywords: random variate quantile function The lmomco functions The
### lmomco function mimics of R nomenclature
### ** Examples
lmr <- lmoms(rnorm(20)) # generate 20 standard normal variates
para <- parnor(lmr) # estimate parameters of the normal
simulate <- rlmomco(20,para) # simulate 20 samples using lmomco package
lmr <- vec2lmom(c(1000,500,.3)) # first three lmoments are known
para <- lmom2par(lmr,type="gev") # est. parameters of GEV distribution
Q <- rlmomco(45,para) # simulate 45 samples
PP <- pp(Q) # compute the plotting positions
plot(PP,sort(Q)) # plot the data up
| /data/genthat_extracted_code/lmomco/examples/rlmomco.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 729 | r |
|
# sort --- external sort of text lines
subroutine sort (ifd, ofd)
filedes ifd, ofd
define(MERGEORDER,7)
define(MERGETEXT,900)
define(NAMESIZE,30)
define(MAXTEXT,32768)
define(MAXPTR,4096)
define(LOGPTR,12)
character linbuf (MAXTEXT), name (NAMESIZE)
integer infil (MERGEORDER), linptr (MAXPTR),
nlines, high, lim, low, t
integer gtext
filedes outfd
filedes makfil, open
high = 0
repeat { # initial formation of runs
t = gtext (linptr, nlines, linbuf, ifd)
call quick (linptr, nlines, linbuf)
if (t ~= EOF || high > 0) { # do only if more than 1 run required
high += 1
outfd = makfil (high)
call ptext (linptr, nlines, linbuf, outfd)
call close (outfd)
}
} until (t == EOF)
if (high == 0) { # everything fit in the first run
call ptext (linptr, nlines, linbuf, ofd)
call rewind (ofd)
return
}
for (low = 1; low < high; low += MERGEORDER) { # merge
lim = min0 (low + MERGEORDER - 1, high)
call gopen (infil, low, lim)
if (lim >= high ) # final merge phase
call merge (infil, lim - low + 1, ofd)
else {
high += 1
outfd = makfil (high)
call merge (infil, lim - low + 1, outfd)
call close (outfd)
}
call gremov (infil, low, lim)
}
call rewind (ofd)
return
end
# gname --- make unique name for file id n
subroutine gname (n, name)
integer n
character name (NAMESIZE)
call encode (name, NAMESIZE, "=temp=/xt=pid=*i"s, n)
return
end
# makfil --- make new file for number n
filedes function makfil (n)
integer n
character name (NAMESIZE)
filedes create
call gname (n, name)
makfil = create (name, READWRITE)
if (makfil == ERR)
call cant (name)
return
end
# gopen --- open group of files low ... lim
subroutine gopen (infil, low, lim)
filedes infil (MERGEORDER)
integer low, lim
character name (NAMESIZE)
integer i
filedes open
for (i = 1; i <= lim - low + 1; i += 1) {
call gname (low + i - 1, name)
infil (i) = open (name, READ)
if (infil (i) == ERR)
call cant (name)
}
return
end
# gremov --- remove group of files low ... lim
subroutine gremov (infil, low, lim)
filedes infil (MERGEORDER)
integer low, lim
character name (NAMESIZE)
integer i
for (i = 1; i <= lim - low + 1; i += 1) {
call close (infil (i))
call gname (low + i - 1, name)
call remove (name)
}
return
end
# merge --- merge infil (1) ... infil (nfiles) onto outfil
subroutine merge (infil, nfiles, outfil)
filedes infil (MERGEORDER), outfil
integer nfiles
character linbuf (MERGETEXT)
integer getlin
integer i, inf, lbp, lp1, nf, linptr (MERGEORDER)
lbp = 1
nf = 0
for (i = 1; i <= nfiles; i += 1) # get one line from each file
if (getlin (linbuf (lbp), infil (i)) ~= EOF) {
nf += 1
linptr (nf) = lbp
lbp += MAXLINE # room for largest line
}
call quick (linptr, nf, linbuf) # make initial heap
while (nf > 0) {
lp1 = linptr (1)
call putlin (linbuf (lp1), outfil)
inf = lp1 / MAXLINE + 1 # compute file index
if (getlin (linbuf (lp1), infil (inf)) == EOF) {
linptr (1) = linptr (nf)
nf -= 1
}
call reheap (linptr, nf, linbuf)
}
return
end
# reheap --- propagate linbuf (linptr (1)) to proper place in heap
subroutine reheap (linptr, nf, linbuf)
integer linptr (ARB), nf
character linbuf (MAXTEXT)
integer i, j
integer compare
for (i = 1; 2 * i <= nf; i = j) {
j = 2 * i
if (j < nf) # find smaller child
if (compare (linptr (j), linptr (j + 1), linbuf) > 0)
j += 1
if (compare (linptr (i), linptr (j), linbuf) <= 0)
break # proper position found
call exchan (linptr (i), linptr (j), linbuf) # percolate
}
return
end
# gtext --- get text lines into linbuf
integer function gtext (linptr, nlines, linbuf, infile)
integer linptr (MAXPTR), nlines
character linbuf (MAXTEXT)
filedes infile
integer lbp, len
integer getlin
nlines = 0
lbp = 1
repeat {
len = getlin (linbuf (lbp), infile)
if (len == EOF)
break
nlines += 1
linptr (nlines) = lbp
lbp += len + 1 # "1" = room for EOS
} until (lbp >= MAXTEXT - MAXLINE || nlines >= MAXPTR)
gtext = len
return
end
# ptext --- output text lines from linbuf
subroutine ptext (linptr, nlines, linbuf, outfil)
integer linptr (MAXPTR), nlines
character linbuf (MAXTEXT)
filedes outfil
integer i, j
for (i = 1; i <= nlines; i += 1) {
j = linptr (i)
call putlin (linbuf (j), outfil)
}
return
end
# compare --- compare linbuf (lp1) with linbuf (lp2)
integer function compare (lp1, lp2, linbuf)
integer lp1, lp2
character linbuf (ARB)
character c1, c2, uc1, uc2
character mapup
integer i, j
i = lp1
j = lp2
repeat {
c1 = linbuf (i)
c2 = linbuf (j)
if (c1 ~= c2)
break
if (c1 == EOS)
return (0)
i += 1
j += 1
}
uc1 = mapup (c1)
uc2 = mapup (c2)
select
when (uc1 < uc2)
compare = -1
when (uc1 > uc2)
compare = +1
when (IS_LOWER (c1))
compare = -1
else
compare = +1
return
end
# exchan --- exchange linbuf (lp1) with linbuf (lp2)
subroutine exchan (lp1, lp2, linbuf)
integer lp1, lp2
character linbuf (ARB)
integer k
k = lp1
lp1 = lp2
lp2 = k
return
end
# quick --- quicksort for character lines
subroutine quick (linptr, nlines, linbuf)
integer linptr (ARB), nlines
character linbuf (ARB)
integer i, j, lv (LOGPTR), p, pivlin, uv (LOGPTR)
integer compare
lv (1) = 1
uv (1) = nlines
p = 1
while (p > 0)
if (lv (p) >= uv (p)) # only one element in this subset
p -= 1 # pop stack
else {
i = lv (p) - 1
j = uv (p)
pivlin = linptr (j) # pivot line
while (i < j) {
for (i+=1; compare (linptr (i), pivlin, linbuf) < 0; i+=1)
;
for (j -= 1; j > i; j -= 1)
if (compare (linptr (j), pivlin, linbuf) <= 0)
break
if (i < j) # out of order pair
call exchan (linptr (i), linptr (j), linbuf)
}
j = uv (p) # move pivot to position i
call exchan (linptr (i), linptr (j), linbuf)
if (i - lv (p) < uv (p) - i) { # stack so shorter done first
lv (p + 1) = lv (p)
uv (p + 1) = i - 1
lv (p) = i + 1
}
else {
lv (p + 1) = i + 1
uv (p + 1) = uv (p)
uv (p) = i - 1
}
p += 1 # push onto stack
}
return
end
| /swt/src/spc/xref.u/xref_sort.r | no_license | arnoldrobbins/gt-swt | R | false | false | 7,193 | r |
call close (infil (i))
call gname (low + i - 1, name)
call remove (name)
}
return
end
# merge --- merge infil (1) ... infil (nfiles) onto outfil
subroutine merge (infil, nfiles, outfil)
filedes infil (MERGEORDER), outfil
integer nfiles
character linbuf (MERGETEXT)
integer getlin
integer i, inf, lbp, lp1, nf, linptr (MERGEORDER)
lbp = 1
nf = 0
for (i = 1; i <= nfiles; i += 1) # get one line from each file
if (getlin (linbuf (lbp), infil (i)) ~= EOF) {
nf += 1
linptr (nf) = lbp
lbp += MAXLINE # room for largest line
}
call quick (linptr, nf, linbuf) # make initial heap
while (nf > 0) {
lp1 = linptr (1)
call putlin (linbuf (lp1), outfil)
inf = lp1 / MAXLINE + 1 # compute file index
if (getlin (linbuf (lp1), infil (inf)) == EOF) {
linptr (1) = linptr (nf)
nf -= 1
}
call reheap (linptr, nf, linbuf)
}
return
end
# reheap --- propagate linbuf (linptr (1)) to proper place in heap
subroutine reheap (linptr, nf, linbuf)
integer linptr (ARB), nf
character linbuf (MAXTEXT)
integer i, j
integer compare
for (i = 1; 2 * i <= nf; i = j) {
j = 2 * i
if (j < nf) # find smaller child
if (compare (linptr (j), linptr (j + 1), linbuf) > 0)
j += 1
if (compare (linptr (i), linptr (j), linbuf) <= 0)
break # proper position found
call exchan (linptr (i), linptr (j), linbuf) # percolate
}
return
end
# gtext --- get text lines into linbuf
integer function gtext (linptr, nlines, linbuf, infile)
integer linptr (MAXPTR), nlines
character linbuf (MAXTEXT)
filedes infile
integer lbp, len
integer getlin
nlines = 0
lbp = 1
repeat {
len = getlin (linbuf (lbp), infile)
if (len == EOF)
break
nlines += 1
linptr (nlines) = lbp
lbp += len + 1 # "1" = room for EOS
} until (lbp >= MAXTEXT - MAXLINE || nlines >= MAXPTR)
gtext = len
return
end
# ptext --- output text lines from linbuf
subroutine ptext (linptr, nlines, linbuf, outfil)
integer linptr (MAXPTR), nlines
character linbuf (MAXTEXT)
filedes outfil
integer i, j
for (i = 1; i <= nlines; i += 1) {
j = linptr (i)
call putlin (linbuf (j), outfil)
}
return
end
# compare --- compare linbuf (lp1) with linbuf (lp2)
integer function compare (lp1, lp2, linbuf)
integer lp1, lp2
character linbuf (ARB)
character c1, c2, uc1, uc2
character mapup
integer i, j
i = lp1
j = lp2
repeat {
c1 = linbuf (i)
c2 = linbuf (j)
if (c1 ~= c2)
break
if (c1 == EOS)
return (0)
i += 1
j += 1
}
uc1 = mapup (c1)
uc2 = mapup (c2)
select
when (uc1 < uc2)
compare = -1
when (uc1 > uc2)
compare = +1
when (IS_LOWER (c1))
compare = -1
else
compare = +1
return
end
# exchan --- exchange linbuf (lp1) with linbuf (lp2)
subroutine exchan (lp1, lp2, linbuf)
integer lp1, lp2
character linbuf (ARB)
integer k
k = lp1
lp1 = lp2
lp2 = k
return
end
# quick --- quicksort for character lines
subroutine quick (linptr, nlines, linbuf)
integer linptr (ARB), nlines
character linbuf (ARB)
integer i, j, lv (LOGPTR), p, pivlin, uv (LOGPTR)
integer compare
lv (1) = 1
uv (1) = nlines
p = 1
while (p > 0)
if (lv (p) >= uv (p)) # only one element in this subset
p -= 1 # pop stack
else {
i = lv (p) - 1
j = uv (p)
pivlin = linptr (j) # pivot line
while (i < j) {
for (i+=1; compare (linptr (i), pivlin, linbuf) < 0; i+=1)
;
for (j -= 1; j > i; j -= 1)
if (compare (linptr (j), pivlin, linbuf) <= 0)
break
if (i < j) # out of order pair
call exchan (linptr (i), linptr (j), linbuf)
}
j = uv (p) # move pivot to position i
call exchan (linptr (i), linptr (j), linbuf)
if (i - lv (p) < uv (p) - i) { # stack so shorter done first
lv (p + 1) = lv (p)
uv (p + 1) = i - 1
lv (p) = i + 1
}
else {
lv (p + 1) = i + 1
uv (p + 1) = uv (p)
uv (p) = i - 1
}
p += 1 # push onto stack
}
return
end
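The `quick` subroutine above avoids recursion by keeping pending subfile bounds in the `lv`/`uv` arrays and always leaving the shorter subfile on top of the stack, which bounds the stack depth at roughly log2(n) (the `LOGPTR` constant). A minimal R transcription of that control flow — using a simpler Lomuto partition in place of the original two-pointer scan — might look like:

```r
# Iterative quicksort mirroring the ratfor `quick` routine: lv/uv hold
# lower/upper bounds of pending subfiles; the longer side is pushed down
# and the shorter side handled first, keeping the stack O(log n) deep.
quick_sort <- function(x) {
  n <- length(x)
  if (n < 2) return(x)
  lv <- 1; uv <- n; p <- 1
  while (p > 0) {
    if (lv[p] >= uv[p]) {              # zero or one element: pop stack
      p <- p - 1
    } else {
      lo <- lv[p]; hi <- uv[p]
      pivot <- x[hi]                   # pivot is the last element
      i <- lo - 1
      for (j in lo:(hi - 1)) {         # Lomuto-style partition
        if (x[j] < pivot) {
          i <- i + 1
          tmp <- x[i]; x[i] <- x[j]; x[j] <- tmp
        }
      }
      i <- i + 1
      tmp <- x[i]; x[i] <- x[hi]; x[hi] <- tmp   # move pivot to position i
      if (i - lv[p] < uv[p] - i) {     # stack so shorter subfile done first
        lv[p + 1] <- lv[p]; uv[p + 1] <- i - 1
        lv[p] <- i + 1
      } else {
        lv[p + 1] <- i + 1; uv[p + 1] <- uv[p]
        uv[p] <- i - 1
      }
      p <- p + 1                       # push onto stack
    }
  }
  x
}
```

`quick_sort(c(3, 1, 2))` returns `c(1, 2, 3)`; the sort is in place over a copy of the vector, with no recursion.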
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/includeTaxa.R
\name{includeTaxa}
\alias{includeTaxa}
\title{Includes the list of taxa into the reconstructions.}
\usage{
includeTaxa(x, taxa, climate)
}
\arguments{
\item{x}{A \code{\link{crestObj}} produced by one of the \code{\link{crest}},
\code{\link{crest.get_modern_data}}, \code{\link{crest.calibrate}},
\code{\link{crest.reconstruct}} or \code{\link{loo}} functions.}
\item{taxa}{A vector of taxa to include.}
\item{climate}{A vector of climate variables to link the taxa with.}
}
\value{
Returns the updated \code{\link{crestObj}}.
}
\description{
Includes the list of taxa into the reconstructions.
}
\examples{
data(reconstr)
print(reconstr$inputs$selectedTaxa)
reconstr <- includeTaxa(reconstr, reconstr$inputs$taxa.name, 'bio12')
## All the taxa are now selected for 'bio12', except for 'Taxon7', for which
## data are unavailable.
print(reconstr$inputs$selectedTaxa)
}
| /man/includeTaxa.Rd | permissive | mchevalier2/crestr | R | false | true | 962 | rd |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/caching.R
\name{save.object}
\alias{save.object}
\title{Save an object (cache)}
\usage{
save.object(o, name = NULL, cache.dir)
}
\arguments{
\item{o}{object to save}
}
\value{
nothing
}
\description{
Save an object (cache)
}
\examples{
save.object(iris)
}
| /man/save.object.Rd | permissive | pydupont/cacheR | R | false | true | 334 | rd |
#' @export
emphasis.app <- function() {
appDir <- system.file("Shiny-examples", "emphasis_postprocessing.R", package = "emphasis")
if (appDir == "") {
stop("Could not find example directory. Try re-installing `emphasis`.", call. = FALSE)
}
shiny::runApp(appDir, display.mode = "normal")
}
#' @export
analytics <- function() {
appDir <- system.file("Shiny-examples", "MCEM_vizualization.R", package = "emphasis")
if (appDir == "") {
stop("Could not find example directory. Try re-installing `emphasis`.", call. = FALSE)
}
shiny::runApp(appDir, display.mode = "normal")
}
| /R/Emphasis_dd.R | no_license | thijsjanzen/emphasis | R | false | false | 601 | r |
#' South Dakota Sales Tax
#'
#' Takes any input value and adds the South Dakota sales tax (4.5%) to it
#' @param x is the value before sales tax
#' @return total with sales tax
#' @export
SD_sales <- function(x)
{
total <- (x * 0.045) + x
return(total)
}
| /SalesTax/R/Sotuh Dakota.R | no_license | ddang4370/Final-Project | R | false | false | 268 | r |
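A quick sanity check of the `SD_sales()` helper above (South Dakota's 4.5% rate); the function is redefined here so the snippet stands alone:

```r
# SD_sales() adds 4.5% South Dakota sales tax to a pre-tax amount.
SD_sales <- function(x)
{
  total <- (x * 0.045) + x
  return(total)
}

SD_sales(100)        # 104.5
SD_sales(c(10, 20))  # 10.45 20.90 -- vectorized for free in R
</imports>

```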
## ----eval=FALSE, echo=TRUE, tidy=FALSE, size='footnotesize'--------------
# N.pla <- 1900
# N.vax <- 1700
# aveVElist <- list(-2, -1.5, -1, -0.5, 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, c(0,0), c(0.4,0),
# c(0.4,0.4), c(0.4,0.2), c(0.4,0.3), c(0.5,0.3), c(0.5,0.4), c(0.6,0.3), c(0.6,0.4))
# infRate <- 0.04
# estimand <- "cuminc"
# laggedMonitoring <- TRUE
# lagTime <- 26
# outDir <- "./"
## ----eval=FALSE, echo=TRUE, tidy=FALSE, size='footnotesize'--------------
# for (i in 1:length(aveVElist)){
# simTrial(N=c(N.pla, rep(N.vax, length(aveVElist[[i]]))), aveVE=c(0, aveVElist[[i]]),
# VEmodel="half", vePeriods=c(1,27,79), enrollPeriod=78, enrollPartial=13,
# enrollPartialRelRate=0.5, dropoutRate=0.05, infecRate=infRate, fuTime=156,
# visitSchedule=c(0, (13/3)*(1:4), seq(13*6/3, 156, by=13*2/3)),
# missVaccProb=c(0,0.05,0.1,0.15), VEcutoffWeek=26, nTrials=1000,
# blockSize=NULL, stage1=78, saveDir=outDir, randomSeed=9)
#
# monitorTrial(dataFile=
# paste0("simTrial_nPlac=",N.pla,"_nVacc=",
# paste(rep(N.vax, length(aveVElist[[i]])), collapse="_"),
# "_aveVE=",paste(aveVElist[[i]], collapse="_"),"_infRate=",infRate,".RData"),
# stage1=78, stage2=156, harmMonitorRange=c(10,100), alphaPerTest=NULL,
# nonEffStartMethod="FKG", nonEffInterval=20, lowerVEnoneff=0, upperVEnoneff=0.4,
# stage1VE=0, lowerVEuncPower=0, highVE=0.6, alphaNoneff=0.05, alphaStage1=0.05,
# alphaUncPower=0.05, alphaHigh=0.05, estimand=estimand,
# laggedMonitoring=laggedMonitoring, lagTime=lagTime, saveDir=outDir)
#
# censTrial(dataFile=
# paste0("simTrial_nPlac=",N.pla, "_nVacc=",
# paste(rep(N.vax, length(aveVElist[[i]])), collapse="_"),"_aveVE=",
# paste(aveVElist[[i]], collapse="_"),"_infRate=",infRate,".RData"),
# monitorFile=
# paste0("monitorTrial_nPlac=", N.pla, "_nVacc=",
# paste(rep(N.vax, length(aveVElist[[i]])), collapse="_"),"_aveVE=",
# paste(aveVElist[[i]], collapse="_"),"_infRate=",infRate,"_",estimand,".RData"),
# stage1=78, stage2=156, saveDir=outDir)
#
# if (i %in% 17:22){
# rankTrial(censFile=
# paste0("trialDataCens_nPlac=",N.pla,"_nVacc=",
# paste(rep(N.vax, length(aveVElist[[i]])), collapse="_"),"_aveVE=",
# paste(aveVElist[[i]], collapse="_"),"_infRate=",infRate,"_",estimand,".RData"),
# idxHighestVE=1, headHead=matrix(1:2, nrow=1, ncol=2), lowerVE=0, stage1=78, stage2=156,
# alpha=0.05, saveDir=outDir)
# }
# }
#
# VEpowerPP(dataList=
# as.list(paste0("trialDataCens_nPlac=", N.pla, "_nVacc=", N.vax, "_aveVE=",
# do.call("c", aveVElist[5:13]), "_infRate=",infRate,"_",estimand,".RData")),
# lowerVEuncPower=0, alphaUncPower=0.05, VEcutoffWeek=26, stage1=78,
# outName=paste0("VEpwPP_nPlac=", N.pla, "_nVacc=", N.vax, "_infRate=",infRate,".RData"),
# saveDir=outDir)
## ----eval=FALSE, echo=TRUE, tidy=FALSE-----------------------------------
# browseVignettes(package="seqDesign")
| /inst/doc/seqDesignInstructions.R | no_license | cran/seqDesign | R | false | false | 3,457 | r |
# Create the documentation for the package
devtools::document()
# Install the package
devtools::install(force = TRUE)
# Build the pkgdown site
# pkgdown::build_site()
# Check package
devtools::check()
# Load the package and view the summary
library(conjointTools)
help(package = 'conjointTools')
# Install from github
# devtools::install_github('jhelvy/conjointTools')
| /build.R | permissive | jrwhelan/conjointTools | R | false | false | 374 | r |
library(tree)
library(randomForest)
library(gbm)
#Carseat Sales data set.
#Created test and train sets that are 50% of the data each.
#Also created a variable that is assigned to the true Sales values, true.vals, from the test set.
seed.val<-12345
set.seed(seed.val)
carseat.df <- read.csv("/Users/ryancoulter/Desktop/Data for R/carseat_sales_reg.csv")
carseat.df <- carseat.df[,-12]
train.rows<-sample(1:nrow(carseat.df), 200)
train.data <- carseat.df[train.rows,]
test.data<-carseat.df[-train.rows,]
true.labels<-carseat.df$Sales[-train.rows]
# Single tree models
carseat.tree <- tree(Sales ~. , data = train.data )
plot(carseat.tree)
text(carseat.tree)
#Shelf Location is best predictor, left: price, shelf location, income, age, comp price, education, advertising right: price, education, comp price
#Looked for a smaller tree size by plotting the deviance vs tree size.
set.seed(seed.val)
cv.carseat<-cv.tree(carseat.tree)
plot(cv.carseat$size, cv.carseat$dev, type="b")
#Generated a pruned tree by picking the size in the range of [5,10] with the lowest deviance from the plot above.
carseat.prunetree <- prune.tree(carseat.tree, best = 6)
plot(carseat.prunetree)
text(carseat.prunetree)
#Most important predictors: Shelf location and price
#Made predictions and calculated mean squared error MSE to evaluate the unpruned and pruned tree models.
pred.unpruned <- predict(carseat.tree, test.data)
pred.pruned <- predict(carseat.prunetree, test.data)
mse.unpruned <- mean((pred.unpruned-true.labels)^2)
mse.pruned <- mean((pred.pruned-true.labels)^2)
mse.unpruned
mse.pruned
#The unpruned tree had an MSE of 6.1 and the pruned tree an MSE of 4.9; this is expected because a pruned tree is less overfitted to the training data and therefore gives more accurate predictions on new values.
#Bagging, Boosting and Random Forest.
#Called the "randomForest" function to create a bagged ensemble on the training data.
#Generated predictions on the test data and displayed the mean squared error MSE.
set.seed(seed.val)
bagged <- randomForest(Sales~., data=train.data, mtry = 6, importance = TRUE)
pred.bagged <- predict(bagged, test.data)
mse.bagged <- mean((pred.bagged-true.labels)^2)
mse.bagged
#Used the "randomForest" function to create a random forest ensemble on the training data.
#Generated predictions on the test data and display the mean squared error MSE.
set.seed(seed.val)
bagged <- randomForest(Sales~., data=train.data, mtry = 6, importance = TRUE)
pred.bagged <- predict(bagged, test.data)
mse.bagged <- mean((pred.bagged-true.labels)^2)
mse.bagged
#Used the "importance" function to plot the predictors vs their MSE for the random forest model.
plot(importance(bagged))
#Five most important predictors: ShelveLoc, price, compprice, age, advertising
#Used the gbm function to create a boosted model on the training data.
#Generated predictions on the test data and displayed the mean squared error MSE.
set.seed(seed.val)
gbm <- gbm(Sales~., data=train.data, n.trees = 5000, interaction.depth = 4)
pred.gbm <- predict(gbm, test.data, n.trees = 5000)
mse.gbm <- mean((pred.gbm-true.labels)^2)
mse.gbm
# Summary of models and MSE:
# single tree: 6.05
# pruned tree: 4.92
# bagged model: 2.48
# rand forest: 2.48
# boosted model: 2.09
#
# These results make sense because as you move through the methods, the number of nodes decreases which eliminates overfitting while also not underfitting the data, which allows for the strongest predictive power and lowest error.
| /SalesProject.R | no_license | RyanHCoulter/PersonalDataAnalysisProjects | R | false | false | 3,527 | r |
library(tree)
library(randomForest)
library(gbm)
#Carseat Sales data set.
#Created test and train sets that are 50% of the data each.
#Also created a variable that is assigned to the true Sales values, true.vals, from the test set.
seed.val<-12345
set.seed(seed.val)
carseat.df <- read.csv("/Users/ryancoulter/Desktop/Data for R/carseat_sales_reg.csv")
carseat.df <- carseat.df[,-12]
train.rows<-sample(1:nrow(carseat.df), 200)
train.data <- carseat.df[train.rows,]
test.data<-carseat.df[-train.rows,]
true.labels<-carseat.df$Sales[-train.rows]
# Single tree models
carseat.tree <- tree(Sales ~. , data = train.data )
plot(carseat.tree)
text(carseat.tree)
#Shelf Location is best predictor, left: price, shelf location, income, age, comp price, eduaction, advertising right: price, education, comp price
#Looked for a smaller tree size by plotting the deviation vs tree size.
set.seed(seed.val)
cv.carseat<-cv.tree(carseat.tree)
plot(cv.carseat$size, cv.carseat$dev, type="b")
#Generated a pruned tree by picking the size in the range of [5,10] with the lowest deviance from the plot above.
carseat.prunetree <- prune.tree(carseat.tree, best = 6)
plot(carseat.prunetree)
text(carseat.prunetree)
#Most important predictors: Shelf location and price
#Made predictions and calculated mean squared error MSE to evaluate the unpruned and pruned tree models.
pred.unpruned <- predict(carseat.tree, test.data)
pred.pruned <- predict(carseat.prunetree, test.data)
mse.unpruned <- mean((pred.unpruned-true.labels)^2)
mse.pruned <- mean((pred.pruned-true.labels)^2)
mse.unpruned
mse.pruned
#The unpruned had an mse of 6.1 and the pruned had an mse of 4.9, these are expected because a more pruned tree would be less overfitted to the data and would provide more accurate predictions to new values.
#Bagging, Boosting and Random Forest.
#Called the "randomForest" function to create a bagged ensemble on the training data.
#Generated predictions on the test data and displayed the mean squared error MSE.
set.seed(seed.val)
bagged <- randomForest(Sales~., data=train.data, mtry = 6, importance = TRUE)
pred.bagged <- predict(bagged, test.data)
mse.bagged <- mean((pred.bagged-true.labels)^2)
mse.bagged
#Used the "randomForest" function to create a random forest ensemble on the training data.
#Generated predictions on the test data and display the mean squared error MSE.
set.seed(seed.val)
bagged <- randomForest(Sales~., data=train.data, mtry = 6, importance = TRUE)
pred.bagged <- predict(bagged, test.data)
mse.bagged <- mean((pred.bagged-true.labels)^2)
mse.bagged
#Used the "importance" function to plot the predictors vs their MSE for the random forest model.
plot(importance(bagged))
#Five most important predictors: ShelveLoc, price, compprice, age, advertising
#Used the gbm function to create a boosted model on the training data.
#Generated predictions on the test data and displayed the mean squared error MSE.
set.seed(seed.val)
gbm <- gbm(Sales~., data=train.data, n.trees = 5000, interaction.depth = 4)
pred.gbm <- predict(gbm, test.data, n.trees = 5000)
mse.gbm <- mean((pred.gbm-true.labels)^2)
mse.gbm
# Summary of models and MSE:
# single tree: 6.05
# pruned tree: 4.92
# bagged model:2.48
# rand forest: 2.48
# boosted model: 2.09
#
# These results make sense because as you move through the methods, the number of nodes decreases which elimates overfitting while also not underfitting the data which allows for the strongest predicitve power and lowest error. |
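Every model in the script above is scored the same way — mean squared error on the held-out half — so the repeated `mean((pred - truth)^2)` lines could be factored into one helper:

```r
# Test-set mean squared error, as computed repeatedly in the script above.
mse <- function(pred, truth) {
  mean((pred - truth)^2)
}

mse(c(1, 2, 3), c(1, 2, 5))  # (0 + 0 + 4) / 3 = 1.333...
```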
#' Butcher methods for a workflow
#'
#' These methods allow you to use the butcher package to reduce the size of
#' a workflow. After calling `butcher::butcher()` on a workflow, the only
#' guarantee is that you will still be able to `predict()` from that workflow.
#' Other functions may not work as expected.
#'
#' @param x A workflow.
#' @param verbose Should information be printed about how much memory is freed
#' from butchering?
#' @param ... Extra arguments possibly used by underlying methods.
#'
#' @name workflow-butcher
# @export - onLoad
#' @rdname workflow-butcher
axe_call.workflow <- function(x, verbose = FALSE, ...) {
fit <- extract_fit_parsnip(x)
fit <- butcher::axe_call(fit, verbose = verbose, ...)
x <- replace_workflow_fit(x, fit)
add_butcher_class(x)
}
# @export - onLoad
#' @rdname workflow-butcher
axe_ctrl.workflow <- function(x, verbose = FALSE, ...) {
fit <- extract_fit_parsnip(x)
fit <- butcher::axe_ctrl(fit, verbose = verbose, ...)
x <- replace_workflow_fit(x, fit)
add_butcher_class(x)
}
# @export - onLoad
#' @rdname workflow-butcher
axe_data.workflow <- function(x, verbose = FALSE, ...) {
fit <- extract_fit_parsnip(x)
fit <- butcher::axe_data(fit, verbose = verbose, ...)
x <- replace_workflow_fit(x, fit)
x <- replace_workflow_outcomes(x, NULL)
x <- replace_workflow_predictors(x, NULL)
add_butcher_class(x)
}
# @export - onLoad
#' @rdname workflow-butcher
axe_env.workflow <- function(x, verbose = FALSE, ...) {
fit <- extract_fit_parsnip(x)
fit <- butcher::axe_env(fit, verbose = verbose, ...)
x <- replace_workflow_fit(x, fit)
# Axe env of preprocessor
preprocessor <- extract_preprocessor(x)
if (has_preprocessor_recipe(x)) {
preprocessor <- butcher::axe_env(preprocessor, verbose = verbose, ...)
} else if (has_preprocessor_formula(x)) {
preprocessor <- butcher::axe_env(preprocessor, verbose = verbose, ...)
} else if (has_preprocessor_variables(x)) {
preprocessor$outcomes <- butcher::axe_env(preprocessor$outcomes, verbose = verbose, ...)
preprocessor$predictors <- butcher::axe_env(preprocessor$predictors, verbose = verbose, ...)
}
x <- replace_workflow_preprocessor(x, preprocessor)
# Axe env of prepped recipe (separate from fresh recipe preprocessor)
if (has_preprocessor_recipe(x)) {
prepped <- extract_recipe(x)
prepped <- butcher::axe_env(prepped, verbose = verbose, ...)
x <- replace_workflow_prepped_recipe(x, prepped)
}
add_butcher_class(x)
}
# @export - onLoad
#' @rdname workflow-butcher
axe_fitted.workflow <- function(x, verbose = FALSE, ...) {
fit <- extract_fit_parsnip(x)
fit <- butcher::axe_fitted(fit, verbose = verbose, ...)
x <- replace_workflow_fit(x, fit)
add_butcher_class(x)
}
# ------------------------------------------------------------------------------
# butcher:::add_butcher_class
add_butcher_class <- function(x) {
if(!any(grepl("butcher", class(x)))) {
class(x) <- append(paste0("butchered_", class(x)[1]), class(x))
}
x
}
# ------------------------------------------------------------------------------
# For internal usage only, no checks on `value`. `value` can even be `NULL` to
# remove the element from the list. This is useful for removing
# predictors/outcomes when butchering. This does a direct replacement, with
# no resetting of `trained` or any stages.
replace_workflow_preprocessor <- function(x, value) {
validate_is_workflow(x)
if (has_preprocessor_formula(x)) {
x$pre$actions$formula$formula <- value
} else if (has_preprocessor_recipe(x)) {
x$pre$actions$recipe$recipe <- value
} else if (has_preprocessor_variables(x)) {
x$pre$actions$variables$variables <- value
} else {
abort("The workflow does not have a preprocessor.")
}
x
}
replace_workflow_fit <- function(x, value) {
validate_is_workflow(x)
if (!has_fit(x)) {
abort("The workflow does not have a model fit. Have you called `fit()` yet?")
}
x$fit$fit <- value
x
}
replace_workflow_predictors <- function(x, value) {
validate_is_workflow(x)
mold <- extract_mold(x)
mold$predictors <- value
replace_workflow_mold(x, mold)
}
replace_workflow_outcomes <- function(x, value) {
validate_is_workflow(x)
mold <- extract_mold(x)
mold$outcomes <- value
replace_workflow_mold(x, mold)
}
replace_workflow_prepped_recipe <- function(x, value) {
validate_is_workflow(x)
if (!has_preprocessor_recipe(x)) {
abort("The workflow must have a recipe preprocessor.")
}
mold <- extract_mold(x)
mold$blueprint$recipe <- value
replace_workflow_mold(x, mold)
}
replace_workflow_mold <- function(x, value) {
validate_is_workflow(x)
if (!has_mold(x)) {
abort("The workflow does not have a mold. Have you called `fit()` yet?")
}
x$pre$mold <- value
x
}
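`add_butcher_class()` above tags the object with a `butchered_*` class exactly once, so axing a workflow twice does not stack prefixes. The pattern in isolation (plain base R, no workflows dependency, with a hypothetical `"workflow"`-classed stand-in object):

```r
# Prepend "butchered_<class>" to the class vector, but only if no
# "butcher" tag is present yet -- repeated calls are no-ops.
add_butcher_class <- function(x) {
  if (!any(grepl("butcher", class(x)))) {
    class(x) <- append(paste0("butchered_", class(x)[1]), class(x))
  }
  x
}

m <- structure(list(), class = "workflow")
class(add_butcher_class(m))                     # "butchered_workflow" "workflow"
class(add_butcher_class(add_butcher_class(m)))  # unchanged by the second call
```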
| /R/butcher.R | permissive | dkgaraujo/workflows | R | false | false | 4,816 | r | #' Butcher methods for a workflow
#'
#' These methods allow you to use the butcher package to reduce the size of
#' a workflow. After calling `butcher::butcher()` on a workflow, the only
#' guarantee is that you will still be able to `predict()` from that workflow.
#' Other functions may not work as expected.
#'
#' @param x A workflow.
#' @param verbose Should information be printed about how much memory is freed
#' from butchering?
#' @param ... Extra arguments possibly used by underlying methods.
#'
#' @name workflow-butcher
# @export - onLoad
#' @rdname workflow-butcher
axe_call.workflow <- function(x, verbose = FALSE, ...) {
fit <- extract_fit_parsnip(x)
fit <- butcher::axe_call(fit, verbose = verbose, ...)
x <- replace_workflow_fit(x, fit)
add_butcher_class(x)
}
# @export - onLoad
#' @rdname workflow-butcher
axe_ctrl.workflow <- function(x, verbose = FALSE, ...) {
fit <- extract_fit_parsnip(x)
fit <- butcher::axe_ctrl(fit, verbose = verbose, ...)
x <- replace_workflow_fit(x, fit)
add_butcher_class(x)
}
# @export - onLoad
#' @rdname workflow-butcher
axe_data.workflow <- function(x, verbose = FALSE, ...) {
fit <- extract_fit_parsnip(x)
fit <- butcher::axe_data(fit, verbose = verbose, ...)
x <- replace_workflow_fit(x, fit)
x <- replace_workflow_outcomes(x, NULL)
x <- replace_workflow_predictors(x, NULL)
add_butcher_class(x)
}
# @export - onLoad
#' @rdname workflow-butcher
axe_env.workflow <- function(x, verbose = FALSE, ...) {
fit <- extract_fit_parsnip(x)
fit <- butcher::axe_env(fit, verbose = verbose, ...)
x <- replace_workflow_fit(x, fit)
# Axe env of preprocessor
preprocessor <- extract_preprocessor(x)
if (has_preprocessor_recipe(x)) {
preprocessor <- butcher::axe_env(preprocessor, verbose = verbose, ...)
} else if (has_preprocessor_formula(x)) {
preprocessor <- butcher::axe_env(preprocessor, verbose = verbose, ...)
} else if (has_preprocessor_variables(x)) {
preprocessor$outcomes <- butcher::axe_env(preprocessor$outcomes, verbose = verbose, ...)
preprocessor$predictors <- butcher::axe_env(preprocessor$predictors, verbose = verbose, ...)
}
x <- replace_workflow_preprocessor(x, preprocessor)
# Axe env of prepped recipe (separate from fresh recipe preprocessor)
if (has_preprocessor_recipe(x)) {
prepped <- extract_recipe(x)
prepped <- butcher::axe_env(prepped, verbose = verbose, ...)
x <- replace_workflow_prepped_recipe(x, prepped)
}
add_butcher_class(x)
}
# @export - onLoad
#' @rdname workflow-butcher
axe_fitted.workflow <- function(x, verbose = FALSE, ...) {
fit <- extract_fit_parsnip(x)
fit <- butcher::axe_fitted(fit, verbose = verbose, ...)
x <- replace_workflow_fit(x, fit)
add_butcher_class(x)
}
# ------------------------------------------------------------------------------
# butcher:::add_butcher_class
add_butcher_class <- function(x) {
if(!any(grepl("butcher", class(x)))) {
class(x) <- append(paste0("butchered_", class(x)[1]), class(x))
}
x
}
# ------------------------------------------------------------------------------
# For internal usage only, no checks on `value`. `value` can even be `NULL` to
# remove the element from the list. This is useful for removing
# predictors/outcomes when butchering. This does a direct replacement, with
# no resetting of `trained` or any stages.
replace_workflow_preprocessor <- function(x, value) {
validate_is_workflow(x)
if (has_preprocessor_formula(x)) {
x$pre$actions$formula$formula <- value
} else if (has_preprocessor_recipe(x)) {
x$pre$actions$recipe$recipe <- value
} else if (has_preprocessor_variables(x)) {
x$pre$actions$variables$variables <- value
} else {
abort("The workflow does not have a preprocessor.")
}
x
}
replace_workflow_fit <- function(x, value) {
validate_is_workflow(x)
if (!has_fit(x)) {
abort("The workflow does not have a model fit. Have you called `fit()` yet?")
}
x$fit$fit <- value
x
}
replace_workflow_predictors <- function(x, value) {
validate_is_workflow(x)
mold <- extract_mold(x)
mold$predictors <- value
replace_workflow_mold(x, mold)
}
replace_workflow_outcomes <- function(x, value) {
validate_is_workflow(x)
mold <- extract_mold(x)
mold$outcomes <- value
replace_workflow_mold(x, mold)
}
replace_workflow_prepped_recipe <- function(x, value) {
validate_is_workflow(x)
if (!has_preprocessor_recipe(x)) {
abort("The workflow must have a recipe preprocessor.")
}
mold <- extract_mold(x)
mold$blueprint$recipe <- value
replace_workflow_mold(x, mold)
}
replace_workflow_mold <- function(x, value) {
validate_is_workflow(x)
if (!has_mold(x)) {
abort("The workflow does not have a mold. Have you called `fit()` yet?")
}
x$pre$mold <- value
x
}
library(glmnet)
mydata = read.table("../../../../TrainingSet/FullSet/AvgRank/NSCLC.csv",header=TRUE,sep=",")
x = as.matrix(mydata[,4:ncol(mydata)])
y = as.matrix(mydata[,1])
set.seed(123)
glm = cv.glmnet(x,y,nfolds=10,type.measure="mse",alpha=0.8,family="gaussian",standardize=FALSE)
sink('./NSCLC_082.txt',append=TRUE)
print(glm$glmnet.fit)
sink()
#!/hpf/tools/centos6/R/3.1.1/bin/Rscript
if (length(commandArgs(TRUE))<5){stop("Error: incorrect number of arguments");}
path=commandArgs(TRUE)[1];
datapath=commandArgs(TRUE)[2];
resultdir=commandArgs(TRUE)[3];
if (length(commandArgs(TRUE))>3 && !(commandArgs(TRUE)[4] %in% c("0","1"))){exprfile=commandArgs(TRUE)[4];}else {
if (length(commandArgs(TRUE))>3 && commandArgs(TRUE)[4]!="0"){exprfile="gene_expression.txt";}else exprfile=NULL;}
if (length(commandArgs(TRUE))>4 && !(commandArgs(TRUE)[5] %in% c("0","1"))){genefile=commandArgs(TRUE)[5];}else {
if (length(commandArgs(TRUE))>4 && commandArgs(TRUE)[5]!="0"){genefile="genes.txt";}else genefile=NULL;}
if (length(commandArgs(TRUE))>5){patfile=commandArgs(TRUE)[6];}else patfile="patients.txt";
if (length(commandArgs(TRUE))>6){varfile=commandArgs(TRUE)[7];}else varfile="variants.txt";
if (length(commandArgs(TRUE))>7){exomefile=commandArgs(TRUE)[8];}else exomefile="genotype.txt";
if (length(commandArgs(TRUE))>8){compareMethods=as.numeric(commandArgs(TRUE)[9]);}else compareMethods=0;
if (length(commandArgs(TRUE))>9){complexity=as.numeric(commandArgs(TRUE)[10]);}else complexity=0;
if (length(commandArgs(TRUE))>10){ratioSignal=as.numeric(commandArgs(TRUE)[11]);}else ratioSignal=0.9;if (ratioSignal>1)ratioSignal=ratioSignal/100;
if (length(commandArgs(TRUE))>11){indpermut=as.numeric(commandArgs(TRUE)[12]);}else indpermut=0;
if (length(commandArgs(TRUE))>12){npermut=as.numeric(commandArgs(TRUE)[13]);}else npermut=0;
if (length(commandArgs(TRUE))>13){patfile_validation=commandArgs(TRUE)[14];}else patfile_validation=NULL;
local=FALSE;
if (local){
path="/home/aziz/Desktop/aziz/diseaseMechanism/";
indpermut=0;npermut=0;
datapath=paste(path,"data/",sep="");
resultdir=paste(path,"Results/",sep="");
ratioSignal=0.9;complexity=0;
genefile="genes.txt";patfile="patients.txt";varfile="variants.txt";exomefile="genotype.txt";exprfile="gene_expression.txt";
}
#Model parameters
meancgenes=20;
complexityDistr=c(0,0,0,0); if (complexity==0){complexityDistr=c(1,1,1,1)/4}else {complexityDistr[complexity]=1;}
decay=c(0.05,0.1,0.2,0.4);
alpha=0.5;
usenet2=TRUE;
maxConnections=50;
netparams=c(0.9,0.01,0.01,1);
removeExpressionOnly=FALSE;
propagate=FALSE;
e=NULL;cores=1;
auto_balance=TRUE;
testdmgwas=0
#networkName=paste(path,"networks/BIOGRID3.2.98.tab2/interactions.txt",sep="") ## Biogrid network
#networkName=paste(path,"networks/HPRD_Release9_062910/interactions.txt",sep="") ## HPRD network
#networkName=paste(path,"networks/humannet/interactions150.txt",sep="");## HumanNet
networkName=paste(path,"networks/BIOGRID3.4-132/Biogrid-Human-34-132.txt",sep="")## Biogrid updated
#networkName=paste(path,"networks/BIOGRID3.4-132/Biogrid-Physical-Conserved-Human-34-132.txt",sep="")## Biogrid conservative updated
#TODO change to whatever gene network you want to use. Can use other annotations.
codeDir=paste(path,"Rproject/",sep="");
dir.create(resultdir);
library(preprocessCore);
library(Matrix)
source(paste(codeDir,"functions/load_network_functions.r",sep=""));
source(paste(codeDir,"functions/misc_functions.r",sep=""));
source(paste(codeDir,"functions/process_expr.r",sep=""));
#load files
phenomat=read.table(paste(datapath,patfile,sep=""),sep="\t",stringsAsFactors =FALSE);
genotype=read.table(paste(datapath,exomefile,sep=""),sep="\t",stringsAsFactors =FALSE,colClasses=c("character","character","numeric"));
rawannot=read.table(paste(datapath,varfile,sep=""),sep="\t",stringsAsFactors =FALSE,colClasses=c("character","character","factor","numeric"));
annot=rawannot[order(rawannot[,2]),];print(table(annot[,3]));
if (length(exprfile)){exprmatraw=read.table(paste(datapath,exprfile,sep=""),sep="\t");}else exprmatraw=NULL;
if (length(genefile)){genemat=read.table(paste(datapath,genefile,sep=""),sep="\t",stringsAsFactors =FALSE);} else genemat=data.frame(sort(unique(annot[,2])),1);
dataok=verifyDependencies(phenomat,genotype,annot,genemat,exprmatraw);
if(!dataok)print("Problem with the input data. Execution stopped.")
if (dataok){
varids=1:nrow(annot);names(varids) <- annot[,1];
geneids=1:nrow(genemat);names(geneids)<- genemat[,1];
patientids=1:nrow(phenomat);names(patientids)<- phenomat[,1];
pheno=phenomat[,2];
#Balance classes by reducing the bigger one (by sampling or by taking the first n)
ph1=which(pheno==1);ph0=which(pheno==0);
if (auto_balance){
if(length(ph1)>length(ph0)){ph1=sample(ph1,length(ph0));
}else if (length(ph0)>length(ph1))ph0=sample(ph0,length(ph1));
pheno=c(rep(1,length(ph1)),rep(0,length(ph0)));
includedpatientids=patientids[c(ph1,ph0)];
ph1=1:length(ph1); ph0=(length(ph1)+1):(length(ph1)+length(ph0));
mappat=match(patientids,includedpatientids);
} else {mappat=1:length(patientids);includedpatientids=patientids;}
nbgenes=length(geneids); nbpatients=length(pheno); genenames= names(geneids);
indsnp=rep(1,nrow(annot)); nb=1;
for (i in 2:nrow(annot)){
if(annot[i,2]!=annot[i-1,2]){nb=1;}else nb=nb+1;
indsnp[i]=nb;
}
harm=list(); length(harm) <- nbgenes;
transform_harm=function(x)(x*0.9+0.05)
for(i in 1:nbgenes)harm[[i]]<- transform_harm(annot[which(annot[,2]==genenames[i]),4]);
nbsnps=rep(0,nbgenes);for (i in 1:nbgenes)nbsnps[i]=length(harm[[i]]);
harmgene=rep(meancgenes/nbgenes,nbgenes);
#load expression
if (length(exprfile)){
quantnormalize=TRUE;logfirst=TRUE;ksi=1;premethod="None";nfactor=50;
mapped=mappat[patientids[colnames(exprmatraw)]];#patients in expression/methylation data mapped to includedpatientids (or patientids if mappat is the identity);
if (premethod %in% c("RUV2","RUV4","RUVinv","RUVrinv")){neg_controls=read.table(paste(datapath,"negative_controls.txt",sep=""),sep="\t",stringsAsFactors =FALSE)[,1];}else neg_controls=NULL;
e=preprocess_expr_all(exprmatraw,mapped,ph1,ph0,genenames,nbpatients,pheno_expr,quantnormalize,logfirst,neg_controls,premethod,ksi,nfactor);
}
if(length(e))print(paste("Number of mild aberrations: ",length(which(abs(e)>2)), ", strong :",length(which(abs(e)>3))))
gc();
#load genotypes (list of variants in each individual and zygosity)
het=list();length(het)<- nbgenes;for (i in 1:nbgenes){het[[i]]<- list(); length(het[[i]])<- nbpatients;}
hom=list();length(hom)<- nbgenes;for (i in 1:nbgenes){hom[[i]]<- list(); length(hom[[i]])<- nbpatients;}
#p=mappat[patientids[genotype[,2]]];ind=varids[genotype[,1]];g=geneids[annot[ind,2]];
p=mappat[match(genotype[,2],names(patientids))]; ind=match(genotype[,1],names(varids)); g=match(annot[ind,2],names(geneids));
if (compareMethods){
pres=which(!is.na(p));varuniq=unique(ind[pres]); geneuniq=geneids[annot[varuniq,2]];
transraw=list(gene=genenames[geneuniq],snps=varuniq ,mat=matrix(0,nbpatients,length(varuniq)));
}
for (i in 1:nrow(genotype)){
if(!is.na(g[i]) && !is.na(p[i])){
if (genotype[i,3]==1)het[[g[i]]][[p[i]]] <- c(het[[g[i]]][[p[i]]], indsnp[ind[i]])
if (genotype[i,3]==2)hom[[g[i]]][[p[i]]] <- c(hom[[g[i]]][[p[i]]], indsnp[ind[i]])
if (compareMethods)transraw$mat[p[i],which(varuniq==ind[i])]=genotype[i,3];
}
}
if (compareMethods){prescolumns=which(colSums(transraw$mat)>0);trans=list(gene=transraw$gene[prescolumns],snps=transraw$snps[prescolumns] ,mat=transraw$mat[,prescolumns])}
#load network and optionally include second order neighbours as direct link
net1=load_network_genes(networkName,as.character(names(geneids)),maxConnections)$network;
net2=mapply(surround2,net1,1:length(net1) , MoreArgs=list(net1=net1));
net=net1;if (usenet2)net=net2;
#Load and apply the method
source(paste(codeDir,"functions/analyse_results.r",sep=""));
source(paste(codeDir,"newmethod/sumproductmem.r",sep=""))
acc=NULL;lik=NULL;acc0=NULL;lik0=NULL;
if(indpermut==0){
ptm <- proc.time();#Rprof(filename = "Rprof.out")
x<- grid_search(codeDir,nbgenes,nbpatients,nbsnps,harm,harmgene,meancgenes,complexityDistr,pheno,hom,het,net,e, cores,ratioSignal,decay,alpha,netparams,removeExpressionOnly,propagate);
print(proc.time()-ptm);#summaryRprof(filename = "Rprof.out")
#Analyse results
bestgenes=order(x$h,decreasing=TRUE)[1:(2*meancgenes)];
print(genenames[bestgenes[1:meancgenes]]);print(x$h[bestgenes[1:meancgenes]])
write.table(t(x$h[bestgenes]),paste(resultdir,"topgenes.txt",sep=""),row.names=FALSE,col.names=as.character(genenames[bestgenes]))
if (x$status){
print(x$margC)
lik0=sum(x$likelihood);print(lik0);
acc0=t(x$predict-pheno)%*%(x$predict-pheno);print(acc0);
genestodraw=bestgenes;
plot_graphs(x,pheno,resultdir,genestodraw);
print(exp(x$munet[bestgenes,2]));
#pie_plot_patients(x$causes[[7]],bestgenes,genenames,resultdir,TRUE)
}
if (npermut>0)indpermut=indpermut+1
}
if (length(e)){
for(i in 1:10){
g=bestgenes[i];
plot_expr_gene(paste(resultdir,genenames[g],".png",sep=""),g,genenames,exprmatraw,e,ph1,ph0,includedpatientids)
}
}
save.image(paste(resultdir,"data.RData",sep=""))
#other methods
if(compareMethods){
library(AssotesteR)
library(dmGWAS)
library(SKAT)
source(paste(codeDir,"functions/analyse_results.r",sep=""));
methods=c("CAST","CALPHA","VT","SKAT-O")#c("CAST","CALPHA","VT","SKAT-O");
if(length(methods)){
methodsfile=paste(resultdir,"methods.txt",sep="")
file.create(methodsfile)
genesToTest=unique(trans$gene);
for (w in 1: length(methods)){
pvals=other_methods(trans,pheno,genesToTest,methods[w],1000000);
genesToTest1=genesToTest[which(apply(pvals,1,min)<0.05/length(genesToTest))];#8.10e-6 is the significance threshold after multiple-hypothesis correction
write(methods[w],methodsfile, append=TRUE,sep="\t")
if(length(genesToTest1))write(genesToTest1,methodsfile, append=TRUE,sep="\t",ncolumns=length(genesToTest1));
write(genesToTest[order(apply(pvals,1,min))[1:20]],methodsfile, append=TRUE,sep="\t",ncolumns=20)
write(pvals[order(apply(pvals,1,min))[1:20]],methodsfile, append=TRUE,sep="\t",ncolumns=20)
}
print("Gene based tests performance assessment: done");
}
}
if (testdmgwas & compareMethods){
library(dmGWAS)
skatpvals=rep(0.5,nbgenes);skatpvals[genesToTest]=pvals;
skatpvals[which(skatpvals<=10^(-16))]=10^(-16);skatpvals[which(skatpvals>=1)]=0.5#+runif(length(which(skatpvals>=1)))/2;
d1=data.frame(gene=genenames,weight=skatpvals,stringsAsFactors =FALSE);
netmat=data.frame(interactorA=rep("gene",100000),interactorB=rep("gene",100000),stringsAsFactors =FALSE)
k=1;for (i in 1:nbgenes)if(length(net1[[i]])){for(j in net1[[i]]){if (j >i){netmat[k,1]=genenames[i];netmat[k,2]=genenames[j];k=k+1;}}}
netmat=netmat[1:(k-1),]
resdmgwas=dms(netmat,d1,expr1=NULL,expr2=NULL,d=1,r=0.1)
sel=resdmgwas$genesets.clear[[as.numeric(rownames(resdmgwas$zi.ordered)[1])]]
write.table(sel,paste(resultdir,"dmgwas_results.txt",sep=""),row.names=FALSE,col.names=FALSE,sep="\t")
}
#Validation set
if (length(patfile_validation)){
phenomat_v=read.table(paste(datapath,patfile_validation,sep=""),sep="\t",stringsAsFactors =FALSE);
patientids_v=1:nrow(phenomat_v); names(patientids_v)<- phenomat_v[,1];
het_v=list();length(het_v)<- nbgenes;for (i in 1:nbgenes){het_v[[i]]<- list(); length(het_v[[i]])<- nrow(phenomat_v);}
hom_v=list();length(hom_v)<- nbgenes;for (i in 1:nbgenes){hom_v[[i]]<- list(); length(hom_v[[i]])<- nrow(phenomat_v);}
mappat_v=1:length(patientids_v);
p_v=mappat_v[match(genotype[,2],names(patientids_v))];
for (i in 1:nrow(genotype)){
if(!is.na(g[i]) && !is.na(p_v[i])){
if (genotype[i,3]==1)het_v[[g[i]]][[p_v[i]]] <- c(het_v[[g[i]]][[p_v[i]]], indsnp[ind[i]])
if (genotype[i,3]==2)hom_v[[g[i]]][[p_v[i]]] <- c(hom_v[[g[i]]][[p_v[i]]], indsnp[ind[i]])
}
}
pheno_v=sumproduct_predict(x,het_v,hom_v,thresh=0.0);
pheno_v50=sumproduct_predict(x,het_v,hom_v,thresh=0.5);
pheno_vfinal=cbind(phenomat_v[,c(1,2)],pheno_v,pheno_v50);
pheno_tfinal=cbind(phenomat[,c(1,2)], x$predict);
write.table(pheno_vfinal,paste(resultdir,"predict_validation.txt",sep=""),row.names=FALSE,col.names=FALSE,sep="\t")
if (ncol(phenomat_v)>1){
library(pROC)
y1=pheno_vfinal[which(pheno_v>0.5),2];y2=pheno_vfinal[which(pheno_v>0.75),2];
ord=order(pheno_v,decreasing=TRUE);y3=pheno_vfinal[ord[1:10],2];y4=pheno_vfinal[ord[1:20],2];
stats=c(length(which(y1==1)),length(which(y1==0)),length(which(y2==1)),length(which(y2==0)), length(which(y3==1)),length(which(y3==0)),length(which(y4==1)),length(which(y4==0)));
signmult=rep(1,length(pheno_v));signmult2=signmult;signmult[which(pheno_vfinal[ord,2]==0)]=-1;signmult2[which(pheno_vfinal[ord,2]==0)]=-2; #multiplicative
map=mean(cumsum(as.numeric(pheno_vfinal[ord,2]==1))/(1:length(ord)))#mean average precision
es1=which.max(cumsum(signmult*pheno_vfinal[ord,3]));es2=which.max(cumsum(signmult));es3=max(cumsum(signmult*pheno_vfinal[ord,3]));
resauc=c(auc(pheno_tfinal[,2],pheno_tfinal[,3]),auc(pheno_vfinal[,2],pheno_vfinal[,3]),auc(pheno_vfinal[,2],pheno_vfinal[,4]),kruskal.test(pheno_vfinal[, 3],as.factor(pheno_vfinal[,2]))$p.value, length(which(x$h>0.5)),as.numeric(x$status), stats,es1,es2,es3,map,paste(genenames[which(x$h>0.5)],collapse='+'));
write.table(t(resauc),paste(resultdir,"cross_validation_auc.txt",sep=""),row.names=FALSE,col.names=FALSE,sep="\t")
#table(y[order(y[,3],decreasing=TRUE)[1:10],2])
table(pheno_vfinal[which(pheno_v>0.5),2])
}
}
#Permutations
if (indpermut & npermut){
xp=list();length(xp)=npermut;
lik=rep(0,npermut);acc=rep(0,npermut);
if(npermut){ for (i in indpermut:(indpermut+npermut-1)){
phenop=sample(pheno);
xp[[i]]=grid_search(codeDir,nbgenes,nbpatients,nbsnps,harm,harmgene,meancgenes,complexityDistr,phenop,hom,het,net,e, cores,ratioSignal,decay,alpha,netparams,removeExpressionOnly,propagate);
plot_graphs(xp[[i]],phenop,resultdir,NULL,i,reorder=TRUE);
lik[i]=sum(xp[[i]]$likelihood);
acc[i]=(t(xp[[i]]$predict-pheno)%*%(xp[[i]]$predict-pheno));
}}
}
write.table(c(acc0,acc),paste(resultdir,"pred.txt",sep=""),row.names=FALSE,col.names=FALSE)
write.table(c(lik0,lik),paste(resultdir,"lik.txt",sep=""),row.names=FALSE,col.names=FALSE)
}
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/binomialfunction.R
\name{bin_cumulative}
\alias{bin_cumulative}
\title{Binomial cumulative}
\usage{
bin_cumulative(n, p)
}
\arguments{
\item{n}{the number of trials}
\item{p}{the probability of success on each trial}
}
\value{
A data frame of the cumulative probabilities of a binomial probability distribution.
Plotting the returned object produces a line graph of the cumulative distribution.
}
\description{
Lists the cumulative probability of each number of successes, with success probability p in n trials.
}
\examples{
# list a binomial cumulative probability distribution
dis2 <- bin_cumulative(n = 5, p = 0.5)
# show a line graph of a binomial cumulative probability distribution
plot(dis2)
}
#
# Plots dendrogram timeseries
#
source("plot/plotTimeseries.R")
source("plot/plotDendrogram.R")
plotDendrogramTimeseries = function(hclustSeries, labels=NULL, category=NULL,
timeLabels=NULL, timeLabelOffset=1, maxHeight=1,
...) {
# TODO: implement showing scales for only one plot
plotTimeseries(plotDendrogram, hclustSeries, labels, category,
timeLabels, timeLabelOffset, maxHeight=maxHeight,
...)
}
# Script to extract monthly output from ED and put it into a netcdf
path.base <- '/projectnb/dietzelab/paleon/ED_runs/MIP2_Region'
source(file.path(path.base, "0_setup", "model2netcdf.ED2.paleon.R"), chdir = TRUE)
site <- 'lat45.25lon-68.75'
sitelat <- as.numeric(substr(site, 4, 8))   # site latitude, parsed from the site name
sitelon <- as.numeric(substr(site, 12, 17)) # site longitude, parsed from the site name
block.yr <- 100 # number of years to write into each file
raw.dir <- '/projectnb/dietzelab/paleon/ED_runs/MIP2_Region/3_spin_finish/phase2_spinfinish.v1/lat45.25lon-68.75'
new.dir <- '/projectnb/dietzelab/paleon/ED_runs/MIP2_Region/3_spin_finish/spinfinish_qaqc.v1/lat45.25lon-68.75'
if(!dir.exists(new.dir)) dir.create(new.dir)
flist <- dir(file.path(raw.dir, "analy/"),"-E-") # Getting a list of what has been done
# Getting a list of years that have been completed
yr <- rep(NA,length(flist)) # create empty vector the same length as the file list
for(i in 1:length(flist)){
index <- gregexpr("-E-",flist[i])[[1]] # Searching for the monthly data marker (-E-); returns 3 bits of information: 1) capture.start (4); 2) capture.length (3); 3) capture.names (TRUE)
index <- index[1] # indexing off of just where the monthly flag starts
yr[i] <- as.numeric(substr(flist[i],index+3,index+6)) # putting in the Years, indexed off of where the year starts & ends
}
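# Note: an equivalent vectorized sketch of the loop above (assuming every file
# name matches the same "-E-YYYY" pattern) would be:
#   yr <- as.numeric(sub(".*-E-([0-9]{4}).*", "\\1", flist))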
start.run <- as.Date(paste0(min(yr), "-01-01"), "%Y-%m-%d")
end.run <- as.Date(paste0(max(yr), "-01-01"), "%Y-%m-%d")
bins <- c(as.numeric(strftime(start.run, '%Y')), seq(from=as.numeric(paste0(substr(as.numeric(strftime(start.run, "%Y"))+block.yr, 1, 2), "00")), to=as.numeric(strftime(end.run, '%Y')), by=block.yr)) # Creating a vector with X year bins for the time period of interest
print(paste0("---------- Processing Site: ", site, " ----------"))
model2netcdf.ED2.paleon(site, raw.dir, new.dir, sitelat, sitelon, start.run, end.run, bins)
| /ED_Workflow/3_spin_finish/spinfinish_qaqc.v1/lat45.25lon-68.75/extract_output_general.R | no_license | MortonArb-ForestEcology/URF2018-Butkiewicz | R | false | false | 1,920 | r |
#### author: Jinlong Zhang <jinlongzhang01@gmail.com>
#### institution: Kadoorie Farm and Botanic Garden, Hong Kong
#### package: phylotools
#### URL: http://github.com/helixcn/phylotools
#### date: 26 MAY 2015
get.phylip.name <- function(infile, clean_name = FALSE){
dat <- read.phylip(infile, clean_name = clean_name)
return(dat[,1])
}
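# Illustrative usage sketch (hypothetical file name, not run):
#   nms <- get.phylip.name("alignment.phy")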
| /R/get.phylip.name.R | no_license | lzhangss/phylotools | R | false | false | 350 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lvl0_UCD.R
\name{lvl0_loadUCD}
\alias{lvl0_loadUCD}
\title{Load the UCD transportation database}
\usage{
lvl0_loadUCD(GCAM_data, EDGE_scenario, REMIND_scenario, input_folder,
years, UCD_dir = "UCD")
}
\arguments{
\item{UCD_dir}{}
}
\description{
Loads and prepares the non_fuel prices.
It also loads PSI-based purchase prices for the EU.
Final values:
non fuel price in 1990$/pkt (1990USD/tkm),
annual mileage in vkt/veh/yr (vehicle km traveled per year),
non_fuel_split in 1990$/pkt (1990USD/tkm)
}
| /man/lvl0_loadUCD.Rd | no_license | SebastianFra/edgeTransport | R | false | true | 581 | rd |
\name{NISTcalthPerCmSecDegCtOwattPerMeterK}
\alias{NISTcalthPerCmSecDegCtOwattPerMeterK}
\title{Convert calorieth per centimeter second degree Celsius to watt per meter kelvin }
\usage{NISTcalthPerCmSecDegCtOwattPerMeterK(calthPerCmSecDegC)}
\description{\code{NISTcalthPerCmSecDegCtOwattPerMeterK} converts from calorieth per centimeter second degree Celsius [calth/(cm * s * C)] to watt per meter kelvin [W/(m * K)] }
\arguments{
\item{calthPerCmSecDegC}{calorieth per centimeter second degree Celsius [calth/(cm * s * C)] }
}
\value{watt per meter kelvin [W/(m * K)] }
\source{
National Institute of Standards and Technology (NIST), 2014
NIST Guide to SI Units
B.8 Factors for Units Listed Alphabetically
\url{http://physics.nist.gov/Pubs/SP811/appenB8.html}
}
\references{
National Institute of Standards and Technology (NIST), 2014
NIST Guide to SI Units
B.8 Factors for Units Listed Alphabetically
\url{http://physics.nist.gov/Pubs/SP811/appenB8.html}
}
\author{Jose Gama}
\examples{
NISTcalthPerCmSecDegCtOwattPerMeterK(10)
}
\keyword{programming} | /man/NISTcalthPerCmSecDegCtOwattPerMeterK.Rd | no_license | cran/NISTunits | R | false | false | 1,057 | rd |
\name{bal.tab.Match}
\alias{bal.tab.Match}
\alias{bal.tab.optmatch}
\alias{bal.tab.ebalance}
\alias{bal.tab.designmatch}
\title{
Balance Statistics for \code{Matching}, \code{optmatch}, \code{ebal}, and \code{designmatch} Objects
}
\description{
Generates balance statistics for output objects from \pkg{Matching}, \pkg{optmatch}, \pkg{ebal}, and \pkg{designmatch}. Note that several arguments that used to be documented here are now documented in \link[=options-display]{display options}. They are still available.
}
\usage{
\method{bal.tab}{Match}(M,
formula = NULL,
data = NULL,
treat = NULL,
covs = NULL,
int = FALSE,
poly = 1,
distance = NULL,
addl = NULL,
continuous,
binary,
s.d.denom,
m.threshold = NULL,
v.threshold = NULL,
ks.threshold = NULL,
cluster = NULL,
abs = FALSE,
subset = NULL,
quick = TRUE,
...)
\method{bal.tab}{optmatch}(optmatch, ...)
\method{bal.tab}{ebalance}(ebal, ...)
\method{bal.tab}{designmatch}(dm, ...)
}
\arguments{
\item{M}{
a \code{Match} object; the output of a call to \code{Match()} from the \pkg{Matching} package.
}
\item{optmatch}{
an \code{optmatch} object; the output of a call to \code{pairmatch()} or \code{fullmatch()} from the \pkg{optmatch} package. This should be a factor vector containing the matching stratum membership for each unit.
}
\item{ebal}{
an \code{ebalance} object; the output of a call to \code{ebalance()} or \code{ebalance.trim()} from the \pkg{ebal} package.
}
\item{dm}{
the output of a call to \code{bmatch()}, \code{nmatch()}, or related wrapper functions from the \pkg{designmatch} package. This should be a list containing the IDs of the matched treated and control units.
}
\item{formula}{
a \code{formula} with the treatment variable as the response and the covariates for which balance is to be assessed as the predictors. All named variables must be in \code{data}. See Details.
}
\item{data}{
a data frame containing all the variables named in \code{formula}. See Details.
}
\item{treat}{
a vector of treatment statuses. See Details.
}
\item{covs}{
a data frame of covariate values for which to check balance. See Details.
}
\item{int}{
\code{logical} or \code{numeric}; whether or not to include 2-way interactions of covariates included in \code{covs} and in \code{addl}. If \code{numeric}, will be passed to \code{poly} as well. In older versions of \pkg{cobalt}, setting \code{int = TRUE} displayed squares of covariates; to replicate this behavior, set \code{int = 2}.
}
\item{poly}{
\code{numeric}; the highest polynomial of each continuous covariate to display. For example, if 2, squares of each continuous covariate will be displayed (in addition to the covariate itself); if 3, squares and cubes of each continuous covariate will be displayed, etc. If 1, the default, only the base covariate will be displayed. If \code{int} is numeric, \code{poly} will take on the value of \code{int}.
}
\item{distance}{
optional; either a vector or data.frame containing distance values (e.g., propensity scores) for each unit or a string containing the name of the distance variable in \code{data}.
}
\item{addl}{
a data frame of additional covariates for which to present balance. These may be covariates included in the original dataset but not included in \code{formula} or \code{covs}. In general, it makes more sense to include all desired variables in \code{formula} or \code{covs} than in \code{addl}. See note in Details for using \code{addl}.
}
\item{continuous}{
whether mean differences for continuous variables should be standardized ("std") or raw ("raw"). Default "std". Abbreviations allowed. This option can be set globally using \code{\link{set.cobalt.options}}.
}
\item{binary}{
whether mean differences for binary variables (i.e., difference in proportion) should be standardized ("std") or raw ("raw"). Default "raw". Abbreviations allowed. This option can be set globally using \code{\link{set.cobalt.options}}.
}
\item{s.d.denom}{
whether the denominator for standardized differences (if any are calculated) should be the standard deviation of the treated group ("treated"), the standard deviation of the control group ("control"), the pooled standard deviation ("pooled", computed as the square root of the mean of the group variances) or the standard deviation in the full sample ("all"). Abbreviations allowed. If not specified, for \code{Match} objects, \code{bal.tab()} will use "treated" if the estimand of the call to \code{Match()} is the ATT, "pooled" if the estimand is the ATE, and "control" if the estimand is the ATC; for \code{optmatch}, \code{ebal}, and \code{designmatch} objects, \code{bal.tab()} will use "treated".
}
\item{m.threshold}{
a numeric value for the threshold for mean differences. .1 is recommended.
}
\item{v.threshold}{
a numeric value for the threshold for variance ratios. Will automatically convert to the inverse if less than 1.
}
\item{ks.threshold}{
a numeric value for the threshold for Kolmogorov-Smirnov statistics. Must be between 0 and 1.
}
\item{cluster}{
either a vector containing cluster membership for each unit or a string containing the name of the cluster membership variable in \code{data} or the CBPS object. See \code{\link{bal.tab.cluster}} for details.
}
\item{abs}{
\code{logical}; whether displayed balance statistics should be in absolute value or not.
}
\item{subset}{
a \code{logical} vector denoting whether each observation should be included. It should be the same length as the variables in the original call to the conditioning function. \code{NA}s will be treated as \code{FALSE}. This can be used as an alternative to \code{cluster} to examine balance on subsets of the data.
}
\item{quick}{
\code{logical}; if \code{TRUE}, will not compute any values that will not be displayed. Set to \code{FALSE} if computed values not displayed will be used later.
}
\item{...}{
for \code{bal.tab.optmatch()}, \code{bal.tab.ebalance()}, and \code{bal.tab.designmatch()}, the same arguments as those passed to \code{bal.tab.Match()}. Otherwise, further arguments to control display of output. See \link[=options-display]{display options} for details.
}
}
\details{
\code{bal.tab()} generates a list of balance summaries for the given object, and functions similarly to \code{MatchBalance()} in \pkg{Matching} and \code{meantab()} in \pkg{designmatch}. Note that output objects from \pkg{designmatch} do not have their own class; \code{bal.tab()} first checks whether the object meets the criteria to be treated as a \code{designmatch} object before dispatching the correct method. In particular, renaming or removing items from the output object can create unintended consequences.
The input to \code{bal.tab.Match()}, \code{bal.tab.optmatch()}, \code{bal.tab.ebalance()}, and \code{bal.tab.designmatch()} must include either both \code{formula} and \code{data} or both \code{covs} and \code{treat}. Using the \code{formula} + \code{data} inputs mirrors how \code{MatchBalance()} is used in \pkg{Matching}, and using the \code{covs} + \code{treat} input mirrors how \code{meantab()} is used in \pkg{designmatch}. (Note that to see identical results to \code{meantab()}, \code{s.d.denom} must be set to \code{"pooled"}, though this is not recommended.)
All balance statistics are calculated whether they are displayed by print or not, unless \code{quick = TRUE}. The threshold values (\code{m.threshold}, \code{v.threshold}, and \code{ks.threshold}) control whether extra columns should be inserted into the Balance table describing whether the balance statistics in question exceeded or were within the threshold. Including these thresholds also creates summary tables tallying the number of variables that exceeded and were within the threshold and displaying the variables with the greatest imbalance on that balance measure.
The inputs (if any) to \code{covs} must be a data frame; if more than one variable is included, this is straightforward (i.e., because \code{data[,c("v1", "v2")]} is already a data frame), but if only one variable is used (e.g., \code{data[,"v1"]}), R will coerce it to a vector, thus making it unfit for input. To avoid this, simply wrap the input to \code{covs} in \code{data.frame()} or use \code{subset()} if only one variable is to be added. Again, when more than one variable is included, the input is generally already a data frame and nothing needs to be done.
}
\value{
For point treatments, if clusters and imputations are not specified, an object of class \code{"bal.tab"} containing balance summaries for the given object. See \code{\link{bal.tab}} for details.
If clusters are specified, an object of class \code{"bal.tab.cluster"} containing balance summaries within each cluster and a summary of balance across clusters. See \code{\link{bal.tab.cluster}} for details.
}
\author{
Noah Greifer
}
\seealso{
\code{\link{bal.tab}} for details of calculations.
}
\examples{
########## Matching ##########
library(Matching); data("lalonde", package = "cobalt")
p.score <- glm(treat ~ age + educ + race +
married + nodegree + re74 + re75,
data = lalonde, family = "binomial")$fitted.values
Match.out <- Match(Tr = lalonde$treat, X = p.score)
## Using formula and data
bal.tab(Match.out, treat ~ age + educ + race +
married + nodegree + re74 + re75, data = lalonde)
########## optmatch ##########
library("optmatch"); data("lalonde", package = "cobalt")
lalonde$prop.score <- glm(treat ~ age + educ + race +
married + nodegree + re74 + re75,
data = lalonde, family = binomial)$fitted.values
pm <- pairmatch(treat ~ prop.score, data = lalonde)
## Using formula and data
bal.tab(pm, treat ~ age + educ + race +
married + nodegree + re74 + re75, data = lalonde,
distance = "prop.score")
########## ebal ##########
library("ebal"); data("lalonde", package = "cobalt")
covariates <- subset(lalonde, select = -c(re78, treat, race))
e.out <- ebalance(lalonde$treat, covariates)
## Using treat and covs
bal.tab(e.out, treat = lalonde$treat, covs = covariates)
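## Note (illustrative sketch): subsetting a single column with `[` drops a
## data frame to a vector, so wrap a one-variable covs in data.frame(), e.g.:
## bal.tab(e.out, treat = lalonde$treat, covs = data.frame(age = lalonde$age))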
\dontrun{
########## designmatch ##########
library("designmatch"); data("lalonde", package = "cobalt")
covariates <- as.matrix(lalonde[c("age", "educ", "re74", "re75")])
dmout <- bmatch(lalonde$treat,
total_groups = sum(lalonde$treat == 1),
               mom = list(covs = covariates,
                          tols = absstddif(covariates, lalonde$treat, .05))
)
## Using treat and covs
bal.tab(dmout, treat = lalonde$treat, covs = covariates)
}
}
\keyword{tables} | /man/bal.tab.Match.Rd | no_license | guhjy/cobalt | R | false | false | 10,669 | rd |
#' Plot Method for the train Class
#'
#' This function takes the output of a \code{\link{train}} object and creates a
#' line or level plot using the \pkg{lattice} or \pkg{ggplot2} libraries.
#'
#' If there are no tuning parameters, or none were varied, an error is
#' produced.
#'
#' If the model has one tuning parameter with multiple candidate values, a plot
#' is produced showing the profile of the results over the parameter. Also, a
#' plot can be produced if there are multiple tuning parameters but only one is
#' varied.
#'
#' If there are two tuning parameters with different values, a plot can be
#' produced where a different line is shown for each value of the other
#' parameter. For three parameters, the same line plot is created within
#' conditioning panels/facets of the other parameter.
#'
#' Also, with two tuning parameters (with different values), a levelplot (i.e.
#' un-clustered heatmap) can be created. For more than two parameters, this
#' plot is created inside conditioning panels/facets.
#'
#' @aliases plot.train ggplot.train
#' @param x an object of class \code{\link{train}}.
#' @param metric What measure of performance to plot. Examples of possible
#' values are "RMSE", "Rsquared", "Accuracy" or "Kappa". Other values can be
#' used depending on what metrics have been calculated.
#' @param plotType a string describing the type of plot (\code{"scatter"},
#' \code{"level"} or \code{"line"} (\code{plot} only))
#' @param digits an integer specifying the number of significant digits used to
#' label the parameter value.
#' @param xTrans a function that will be used to scale the x-axis in scatter
#' plots.
#' @param data an object of class \code{\link{train}}.
#' @param output either "data", "ggplot" or "layered". The first returns a data
#' frame while the second returns a simple \code{ggplot} object with no layers.
#' The third value returns a plot with a set of layers.
#' @param nameInStrip a logical: if there are more than 2 tuning parameters,
#' should the name and value be included in the panel title?
#' @param highlight a logical: if \code{TRUE}, a diamond is placed around the
#' optimal parameter setting for models using grid search.
#' @param mapping,environment unused arguments to make consistent with
#' \pkg{ggplot2} generic method
#' @param \dots \code{plot} only: specifications to be passed to
#' \code{\link[lattice]{levelplot}}, \code{\link[lattice]{xyplot}},
#' \code{\link[lattice:xyplot]{stripplot}} (for line plots). The function
#' automatically sets some arguments (e.g. axis labels) but passing in values
#' here will over-ride the defaults
#' @author Max Kuhn
#' @seealso \code{\link{train}}, \code{\link[lattice]{levelplot}},
#' \code{\link[lattice]{xyplot}}, \code{\link[lattice:xyplot]{stripplot}},
#' \code{\link[ggplot2]{ggplot}}
#' @references Kuhn (2008), ``Building Predictive Models in R Using the caret Package''
#' (\url{http://www.jstatsoft.org/article/view/v028i05/v28i05.pdf})
#' @keywords hplot
#' @examples
#'
#'
#' \dontrun{
#' library(klaR)
#' rdaFit <- train(Species ~ .,
#' data = iris,
#' method = "rda",
#'                 trControl = trainControl(method = "cv"))
#' plot(rdaFit)
#' plot(rdaFit, plotType = "level")
#'
#' ggplot(rdaFit) + theme_bw()
#'
#' }
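#'
#' # Illustrative sketch (hypothetical fit, not run): with a single tuned
#' # parameter, plot() draws a one-line profile of performance vs. that
#' # parameter:
#' # knnFit <- train(Species ~ ., data = iris, method = "knn")
#' # plot(knnFit)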
#'
#' @export plot.train
"plot.train" <- function(x,
plotType = "scatter",
metric = x$metric[1],
digits = getOption("digits") - 3,
xTrans = NULL,
nameInStrip = FALSE,
...)
{
## Error trap
if(!(plotType %in% c("level", "scatter", "line"))) stop("plotType must be either level, scatter or line")
cutpoints <- c(0, 1.9, 2.9, 3.9, Inf)
## define some functions
prettyVal <- function(u, dig, Name = NULL)
{
if(is.numeric(u))
{
if(!is.null(Name)) u <- paste(gsub(".", " ", Name, fixed = TRUE),
": ",
format(u, digits = dig), sep = "")
return(factor(u))
} else return(if(!is.factor(u)) as.factor(u) else u)
}
## Get tuning parameter info
params <- x$modelInfo$parameters$parameter
if(grepl("adapt", x$control$method))
warning("When using adaptive resampling, this plot may not accurately capture the relationship between the tuning parameters and model performance.")
plotIt <- "yes"
if(all(params == "parameter"))
{
plotIt <- "There are no tuning parameters for this model."
} else {
dat <- x$results
## Some exceptions for pretty printing
if(x$method == "nb") dat$usekernel <- factor(ifelse(dat$usekernel, "Nonparametric", "Gaussian"))
if(x$method == "gam") dat$select <- factor(ifelse(dat$select, "Feature Selection", "No Feature Selection"))
if(x$method == "qrnn") dat$bag <- factor(ifelse(dat$bag, "Bagging", "No Bagging"))
if(x$method == "C5.0") dat$winnow <- factor(ifelse(dat$winnow, "Winnowing", "No Winnowing"))
## if(x$method %in% c("M5Rules", "M5", "PART")) dat$pruned <- factor(ifelse(dat$pruned == "Yes", "Pruned", "Unpruned"))
## if(x$method %in% c("M5Rules", "M5")) dat$smoothed <- factor(ifelse(dat$smoothed == "Yes", "Smoothed", "Unsmoothed"))
if(x$method == "M5") dat$rules <- factor(ifelse(dat$rules == "Yes", "Rules", "Trees"))
## if(x$method == "vbmpRadial") dat$estimateTheta <- factor(ifelse(dat$estimateTheta == "Yes", "Estimate Theta", "Do Not Estimate Theta"))
## Check to see which tuning parameters were varied
paramValues <- apply(dat[,params,drop = FALSE],
2,
function(x) length(unique(x)))
##paramValues <- paramValues[order(paramValues)]
if(any(paramValues > 1))
{
params <- names(paramValues)[paramValues > 1]
} else plotIt <- "There are no tuning parameters with more than 1 value."
}
if(plotIt == "yes")
{
p <- length(params)
dat <- dat[, c(metric, params)]
if(p > 1) {
numUnique <- unlist(lapply(dat[, -1], function(x) length(unique(x))))
numUnique <- sort(numUnique, decreasing = TRUE)
dat <- dat[, c(metric, names(numUnique))]
params <- names(numUnique)
}
## The convention is that the first parameter (in
## position #2 of dat) is plotted on the x-axis,
## the second parameter is the grouping variable
## and the rest are conditioning variables
if(!is.null(xTrans) & plotType == "scatter") dat[,2] <- xTrans(dat[,2])
## We need to pretty-up some of the values of grouping
## or conditioning variables
resampText <- resampName(x, FALSE)
if(plotType %in% c("line", "scatter"))
{
if(plotType == "scatter")
{
if(p >= 2) for(i in 3:ncol(dat))
dat[,i] <- prettyVal(dat[,i], dig = digits, Name = if(i > 3) params[i-1] else NULL)
} else {
for(i in 2:ncol(dat))
dat[,i] <- prettyVal(dat[,i], dig = digits, Name = if(i > 3) params[i-1] else NULL)
}
for(i in 2:ncol(dat)) if(is.logical(dat[,i])) dat[,i] <- factor(dat[,i])
if(p > 2 & nameInStrip) {
strip_vars <- params[-(1:2)]
strip_lab <- subset(x$modelInfo$parameters, parameter %in% strip_vars)$label
for(i in seq_along(strip_vars))
dat[, strip_vars[i]] <- factor(paste(strip_lab[i], dat[, strip_vars[i]], sep = ": "))
}
## make formula
form <- if(p <= 2)
{
as.formula(
paste(metric, "~", params[1], sep = ""))
} else as.formula(paste(metric, "~", params[1], "|",
paste(params[-(1:2)], collapse = "*"),
sep = ""))
defaultArgs <- list(x = form,
data = dat,
groups = if(p > 1) dat[,params[2]] else NULL)
if(length(list(...)) > 0) defaultArgs <- c(defaultArgs, list(...))
lNames <- names(defaultArgs)
if(!("ylab" %in% lNames)) defaultArgs$ylab <- paste(metric, resampText)
if(!("type" %in% lNames) & plotType == "scatter") defaultArgs$type <- c("g", "o")
if(!("type" %in% lNames) & plotType == "line") defaultArgs$type <- c("g", "o")
if(p > 1)
{
## map the number of grouping levels to a legend column count
groupCols <- as.numeric(
cut(length(unique(dat[,3])),
cutpoints,
include.lowest = TRUE))
if(!(any(c("key", "auto.key") %in% lNames)))
defaultArgs$auto.key <- list(columns = groupCols,
lines = TRUE,
title = as.character(x$modelInfo$parameters$label)[x$modelInfo$parameters$parameter == params[2]],
cex.title = 1)
}
if(!("xlab" %in% lNames)) defaultArgs$xlab <- as.character(x$modelInfo$parameters$label)[x$modelInfo$parameters$parameter == params[1]]
if(plotType == "scatter")
{
out <- do.call("xyplot", defaultArgs)
} else {
## line plot #########################
out <- do.call("stripplot", defaultArgs)
}
}
if(plotType == "level")
{
if(p == 1) stop("There must be at least 2 tuning parameters with multiple values")
for(i in 2:ncol(dat))
dat[,i] <- prettyVal(dat[,i], dig = digits, Name = if(i > 3) params[i-1] else NULL)
if(p > 2 & nameInStrip) {
strip_vars <- params[-(1:2)]
strip_lab <- subset(x$modelInfo$parameters, parameter %in% strip_vars)$label
for(i in seq_along(strip_vars))
dat[, strip_vars[i]] <- factor(paste(strip_lab[i], dat[, strip_vars[i]], sep = ": "))
}
## make formula
form <- if(p <= 2)
{
as.formula(paste(metric, "~", params[1], "*", params[2], sep = ""))
} else as.formula(paste(metric, "~", params[1], "*", params[2], "|",
paste(params[-(1:2)], collapse = "*"),
sep = ""))
defaultArgs <- list(x = form, data = dat)
if(length(list(...)) > 0) defaultArgs <- c(defaultArgs, list(...))
lNames <- names(defaultArgs)
if(!("sub" %in% lNames)) defaultArgs$sub <- paste(metric, resampText)
if(!("xlab" %in% lNames)) defaultArgs$xlab <- as.character(x$modelInfo$parameters$label)[x$modelInfo$parameters$parameter == params[1]]
if(!("ylab" %in% lNames)) defaultArgs$ylab <- as.character(x$modelInfo$parameters$label)[x$modelInfo$parameters$parameter == params[2]]
out <- do.call("levelplot", defaultArgs)
}
} else stop(plotIt)
out
}
| /pkg/caret/R/plot.train.R | no_license | mutual-ai/caret | R | false | false | 11,852 | r |
|
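`plot.train` builds its lattice formula by pasting parameter names together; a standalone sketch of just that step, with hypothetical metric and tuning-parameter names:

```r
# Sketch of the formula construction in plot.train: the first tuning
# parameter goes on the x-axis; parameters beyond the second become
# lattice conditioning variables after "|".
metric <- "RMSE"
params <- c("sigma", "C", "degree")   # hypothetical tuning parameters

form <- if (length(params) <= 2) {
  as.formula(paste(metric, "~", params[1], sep = ""))
} else {
  as.formula(paste(metric, "~", params[1], "|",
                   paste(params[-(1:2)], collapse = "*"), sep = ""))
}
form   # RMSE ~ sigma | degree
```

With two or fewer varied parameters no conditioning term is emitted, which matches the single-panel plots the function produces in that case.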
# MethylMix analysis of pancreatic cancer dataset
# Since there are no normal samples, we just want the transcriptionally predictive genes. These can be obtained using
# MethylMix_ModelGeneExpression
library(MethylMix)
library(doParallel)
library(tictoc)
rm(list=ls())
# load data
load("/home/anita/Benchmarking/two_omics/PancreaticCancerCompleteDataAnalysis/PancreaticCancerRawDataset.Rdata")
source("/home/anita/Benchmarking/two_omics/MesotheliomaCompleteDataAnalysis/MethylMix/MethylMix_FDRCalculation_Function.R")
rm(meso.cnv)
rm(paad.cnv)
METPaad <- as.matrix(paad.me)
GEPaad <- as.matrix(paad.ge)
# running methylmix in parallel
cl <- makeCluster(5)
registerDoParallel(cl)
tic("MethylMix")
MethylMixFunctionalGenes <- MethylMix_ModelGeneExpression(METPaad, GEPaad, CovariateData = NULL)
toc()
MethylMixFunctionalGenes_FDR <- MethylMix_ModelGeneExpression_FDR(METcancer = METPaad, GEcancer = GEPaad, CovariateData = NULL)
stopCluster(cl)
# > MethylMixFunctionalGenes <- MethylMix_ModelGeneExpression(METPaad, GEPaad, CovariateData = NULL)
# Found 177 samples with both methylation and expression data.
# Correlating methylation data with gene expression...
#
# Found 3025 transcriptionally predictive genes.
# > toc()
# MethylMix: 16.389 sec elapsed
fdrs <- data.frame(MethylMixFunctionalGenes_FDR)
fdrs[,2] <- as.numeric(as.character(fdrs[,2]))
str(fdrs)
fdrs2 <- fdrs[fdrs[,2] < 0.05,] # All the genes satisfy fdr cut-off of 0.05
# sorting based on fdr (smallest to largest)
head(fdrs2)
fdrs2 <- fdrs2[order(fdrs2$V2, decreasing = F),]
head(fdrs2)
# selecting top 3000
resdf <- fdrs2[1:3000,]
genes <- as.character(resdf$FunctionalGenes)
# saving results
setwd("/home/anita/Benchmarking/two_omics/PancreaticCancerCompleteDataAnalysis/MethylMix/")
write.table(genes, "MethylMix_PAAD_3000Genes.txt", quote = F, row.names = F)
| /Multi-staged tools/PancreaticCancer/MethylMix/MethylMix_PAAD_Analysis.R | no_license | AtinaSat/Evaluation-of-integration-tools | R | false | false | 1,841 | r |
|
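The FDR-filter-then-top-N selection used in the script above can be sketched on toy data (column names here are hypothetical, not the MethylMix output):

```r
# Toy version of the gene-selection logic: keep rows passing the FDR
# cut-off, sort by FDR ascending, then take the top N.
set.seed(1)
fdrs <- data.frame(gene = paste0("g", 1:10),
                   fdr  = runif(10, 0, 0.1))
passed <- fdrs[fdrs$fdr < 0.05, ]
passed <- passed[order(passed$fdr, decreasing = FALSE), ]
topN   <- head(passed, 3)
topN$gene   # the three genes with the smallest FDR
```

Sorting before truncation matters: taking the first N rows of an unsorted frame would select by row order rather than by significance.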
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/eif_moderated.R
\name{modtest_ic}
\alias{modtest_ic}
\title{Moderated Statistical Tests for Influence Functions}
\usage{
modtest_ic(biotmle, adjust = "BH", pval_type = c("normal", "logistic"), ...)
}
\arguments{
\item{biotmle}{\code{biotmle} object as generated by \code{biomarkertmle}}
\item{adjust}{the multiple testing correction to be applied to p-values that
are generated from the moderated tests. The recommended (default) method
is that of Benjamini and Hochberg. See \link[limma]{topTable} for a list of
appropriate methods.}
\item{pval_type}{The reference distribution to be used for computing the
p-value. Those based on the normal approximation tend to provide misleading
inference when working with moderately sized (finite) samples. Use of the
logistic distribution has been found to empirically improve performance in
settings where multiple hypothesis testing is a concern.}
\item{...}{Other arguments passed to \code{\link[limma]{topTable}}.}
}
\value{
\code{biotmle} object containing the results of applying both
\code{\link[limma]{lmFit}} and \code{\link[limma]{topTable}}.
}
\description{
Performs variance shrinkage via application of an empirical Bayes procedure
(of LIMMA) on the observed data after a transformation moving the data to
influence function space, based on the average treatment effect parameter.
}
\examples{
library(dplyr)
library(biotmleData)
library(SuperLearner)
library(SummarizedExperiment)
data(illuminaData)
colData(illuminaData) <- colData(illuminaData) \%>\%
data.frame() \%>\%
dplyr::mutate(age = as.numeric(age > median(age))) \%>\%
DataFrame()
benz_idx <- which(names(colData(illuminaData)) \%in\% "benzene")
biomarkerTMLEout <- biomarkertmle(
se = illuminaData[1:2, ],
varInt = benz_idx,
bppar_type = BiocParallel::SerialParam(),
g_lib = c("SL.mean", "SL.glm"),
Q_lib = c("SL.mean", "SL.glm")
)
limmaTMLEout <- modtest_ic(biotmle = biomarkerTMLEout)
}
| /man/modtest_ic.Rd | permissive | nhejazi/biotmle | R | false | true | 2,006 | rd |
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/rl_generate_policy.R
\name{rl_generate_policy}
\alias{rl_generate_policy}
\title{Function performs Reinforcement Learning using the past data to generate model policy}
\usage{
rl_generate_policy(x, states, actions, control)
}
\arguments{
\item{x}{- Dataframe containing trading data}
\item{states}{- Character vector, Selected states of the System}
\item{actions}{- Character vector, Selected actions executed under environment}
\item{control}{- List, control parameters as defined in the Reinforcement Learning Package}
}
\value{
Function returns data frame with reinforcement learning model policy
}
\description{
This function will perform Reinforcement Learning using Trading Data.
It will suggest whether or not it is better to keep using trading systems.
Function is just using results of the past performance to generate the recommendation (not a holy grail).
}
\details{
Initial policy is generated using a dummy zero values.
This way function starts working directly from the first observation.
However policy 'ON' value will only be generated once the Q value is greater than zero
}
\examples{
library(dplyr)
library(ReinforcementLearning)
library(magrittr)
library(lazytrade)
data(data_trades)
states <- c("tradewin", "tradeloss")
actions <- c("ON", "OFF")
control <- list(alpha = 0.7, gamma = 0.3, epsilon = 0.1)
rl_generate_policy(x = data_trades,
states, actions, control)
}
\author{
(C) 2019,2020 Vladimir Zhbanko
}
| /man/rl_generate_policy.Rd | permissive | Soyouzzz1986/lazytrade | R | false | true | 1,562 | rd |
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/fitRSF.R
\name{fitRSF}
\alias{fitRSF}
\title{Fit RSF}
\usage{
fitRSF(xy, method = c("poisson", "logit"), covlist, nbin = NULL,
nzeros = NULL, buffsize = NULL)
}
\arguments{
\item{xy}{Matrix of observed locations}
\item{method}{Method to fit the RSF, either "poisson" for a
Poisson GLM, or "logit" for a logistic regression}
\item{covlist}{List of covariate rasters}
\item{nbin}{Number of bins in each dimension for Poisson GLM}
\item{nzeros}{Number of zeros to sample for logistic regression}
\item{buffsize}{Size of buffer around the observed locations from which
control locations should be sampled, if method='logit'.}
}
\value{
List of \code{mod} (output of glm) and \code{xyzeros} (matrix of
unused locations)
}
\description{
Fit RSF
}
| /man/fitRSF.Rd | no_license | TheoMichelot/localGibbs | R | false | true | 826 | rd |
|
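As a toy illustration of the "poisson" approach described in `fitRSF` (simulated data only, not the package's implementation): per-bin counts of locations are regressed on covariates with a log link, and the slope plays the role of the RSF coefficient.

```r
# Simulate bin counts whose log-intensity increases with a covariate,
# then recover the coefficient with a Poisson GLM.
set.seed(42)
cov_grid <- seq(0, 1, length.out = 200)               # covariate value per bin
counts   <- rpois(200, lambda = exp(-1 + 2 * cov_grid))
mod <- glm(counts ~ cov_grid, family = poisson)
coef(mod)["cov_grid"]   # estimate should be near the true slope of 2
```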
library(shiny)
# Define UI for dataset viewer application
shinyUI(fluidPage(
# Application title
titlePanel("Data Science Specialization Rating"),
br(),
sidebarLayout(
sidebarPanel(
textInput("fname", "First Name:", "Enter your first name"),
textInput("lname", "Last Name:", "Enter your Last name"),
selectInput("rating", "Enter Course Rating (1-5, 5 being the best):",
choices = 5:1),
textInput("feedback", "Provide your feedback:", "Please provide your feedback"),
actionButton("go", "Submit")
),
# Show the submitted rating and feedback, along with summary
# statistics and a histogram of all ratings
mainPanel(
h4(textOutput("heading", container = span)),
br(),
verbatimTextOutput("response"),
br(),
tags$p("Your Rating for Course:"),
verbatimTextOutput("rating"),
tags$p("Your feedback about Course:"),
verbatimTextOutput("feedback"),
tags$p("Average Rating of the Course:"),
verbatimTextOutput("avgrate"),
tags$p("Based on # of ratings"),
verbatimTextOutput("numrate"),
tags$p("Histogram for Rating:"),
plotOutput('plot')
)
)
)) | /ui.r | no_license | ritesh256/ShinyCourseProjectApp | R | false | false | 1,234 | r |
|
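The UI above declares outputs such as `output$rating` and `output$plot`; a minimal hypothetical server counterpart (not part of this file — all names are inferred from the UI, and the in-memory ratings store is illustrative only) could look like:

```r
library(shiny)

# Hypothetical server: records each submitted rating on "Submit" and
# fills the outputs that ui.R declares.
server <- function(input, output) {
  ratings <- reactiveVal(numeric(0))
  observeEvent(input$go, ratings(c(ratings(), as.numeric(input$rating))))
  output$heading  <- renderText(paste("Thank you,", input$fname, input$lname))
  output$rating   <- renderText(input$rating)
  output$feedback <- renderText(input$feedback)
  output$avgrate  <- renderText(round(mean(ratings()), 2))
  output$numrate  <- renderText(length(ratings()))
  output$plot     <- renderPlot(hist(ratings(), main = "Ratings", xlab = "Rating"))
}
# shinyApp(ui, server) would combine this with the UI definition.
```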
# -------------------#
# NO2 atlas workflow #
# -------------------#
# The workflow consists of the following steps
# 1) Looking up which OTM shape files are needed to construct a shape file of the domain
# and producue the shape file. By default the city centre (defined in city.df) is taken
# as the centre of a 20x20 km square. If the lon.min, lon.max, lat.min, and lat.max variables
# are avaliable in city.df these are taken as domain boundaries. All results of a city are in
# a subfolder <cityname>/
# 2) Add a column with the zone to the network shp. A zone shape file has to be availalble
# in a sub-folder <cityname>/zones_<city_name>. The format of this file is strictly defined:
# - projection WGS84, 3 fields, id (int), name (string), descr (string)
# - non overlapping polygons named bigLEZ and smallLEZ (for the Atlas)
# 3) Grid the road network (currently in Python)
# 4) Calculate fleet emission factors for all scenarios and zones
# 5) Calculate gridded emissions for each scenario. Scenarios are under <cityname>/results/<scenario_name>
# 6) Apply the dispersion kernels on the emissions
# Clean up
rm(list=ls())
# set the directory of this script as working directory
wd <- dirname(sys.frame(1)$ofile)
setwd(wd)
# load libraries and auxiliary functions
library(raster)
library(geosphere)
library(parallel)
source("NO2_atlas_config.R")
source("create_default_extent.R") # step 1
source("create_network_shp_utm.R") # step 1
source("add_zones.R") # step 2
source("check_zones_overlap.R") # step 2
source("create_fleet_emission_factors.R") # step 4
source("create_gridded_emissions.R") # step 5
source("sherpacity_par.R") # step 6
# loop over all the cities
for (cityname in as.vector(city.df$cityname)) { # as.vector(city.df$cityname)
print("########################################")
print(cityname)
# city coordinates
city.coords <- matrix(c(city.df$lon[city.df$cityname == cityname],
city.df$lat[city.df$cityname == cityname]), ncol = 2,
dimnames = list(c(cityname), c("lon", "lat")))
# create a folder for a specific city in the cities.output.folder if it doesn't exist yet.
# A leading '/' has to be removed when cities.output.folder = ""
city.output.folder <- sub("^/", "", file.path(cities.output.folder, cityname))
if (!dir.exists(city.output.folder)) {
dir.create(city.output.folder, recursive = T)
}
# folder and filename of the UTM shape file with the network (maybe it exists, maybe not)
dsn.city.utm.shp <- city.output.folder
layer.city.utm.shp <- paste0("traffic_roadlinks_", cityname)
city.utm.shp <- paste0(dsn.city.utm.shp, "/", layer.city.utm.shp, ".shp")
# to avoid wrong behaviour with previous loop, remove the SpatialLinesDataFrame of the network
if ("city.utm.sldf" %in% ls()) { rm(city.utm.sldf) }
# folder and filename of the UTM shape file with the network and zones (maybe it exists, maybe not)
dsn.zoned.city.utm.shp <- city.output.folder
layer.city.zoned.utm.shp <- paste0("traffic_roadlinks_zones_", cityname)
zoned.city.utm.shp <- paste0(dsn.zoned.city.utm.shp, "/", layer.city.zoned.utm.shp, ".shp")
# to avoid wrong behaviour with previous loop, remove the SpatialLinesDataFrame of the network
if ("zoned.city.utm.sldf" %in% ls()) { rm(zoned.city.utm.sldf) }
# Step 1: Create the network shape file for the domain
# ----------------------------------------------------
# create network shape file inside domain in utm
# 1) look up which shape files to use
# 2) paste them together
# 3) convert to UTM
# 4) write a shape file
if (!file.exists(city.utm.shp) | rerun.domain) {
print(paste0("Creating a shape file in UTM for ", cityname, " ..."))
# check if domain is defined, if not create one of 20x20km
lon.lat.limits <- city.df[city.df$cityname == cityname, c('lon.min', 'lon.max', 'lat.min', 'lat.max')]
# workaround: extent() does not accept a data frame with named columns, so pass plain numbers
lon.lat.limits <- c(lon.lat.limits[1,1], lon.lat.limits[1,2], lon.lat.limits[1,3], lon.lat.limits[1,4])
if (is.na(sum(lon.lat.limits))) {
city.extent <- create.default.extent(city.coords)
} else {
# define city extent in latitude longitude (WGS84, epsg: 4326)
city.extent <- extent(lon.lat.limits)
}
# create a shape file for the domain, combining the necessary OTM shape files
city.utm.sldf <- create.network.shp.utm(city.extent, cityname, nuts3.bbox.df, otm.path)
# write the UTM shape file of the network
writeOGR(city.utm.sldf, dsn = dsn.city.utm.shp, layer = layer.city.utm.shp,
driver="ESRI Shapefile")
} else {
print(paste0("A network shape file in UTM without zones was already created for ", cityname))
}
# Step 2: Add a column with the zone to the network shp
# -----------------------------------------------------
# Add a column to the network shape in utm with the zones.
# The zones have to be in a shape file named <cityname>/zones_<cityname>.shp;
# the attribute table has 2 fields: 'id' and 'zone_name'.
# If no zones are available, just add a column 'Complement'.
# check if there is a network shape file with zones added
if (!file.exists(zoned.city.utm.shp) | rerun.zoning) {
# Check if the network without zones is already in memory from step 1 (city.utm.sldf), if not read it.
# This avoids reading the shp file twice (which is slow), but it is important that before
# starting a new city the variables 'city.utm.sldf' and 'zoned.city.utm.sldf' are removed.
# Otherwise this is dangerous when looping over cities.
if (!("city.utm.sldf" %in% ls())) {
# if the network shape file already exists, read it
print(paste0("Reading ", dsn.city.utm.shp, "/", layer.city.utm.shp, ".shp"))
city.utm.sldf <- readOGR(dsn = dsn.city.utm.shp, layer = layer.city.utm.shp)
}
# Check if there is a shape file with zones. If so, add the zones to the network shape file.
dsn.zones.shp <- paste0(city.output.folder, "/zones_", cityname)
layer.zones.shp <- paste0("zones_", cityname)
zones.shp <- paste0(dsn.zones.shp, "/", layer.zones.shp, ".shp")
if (file.exists(zones.shp)) {
# read the zones shape file as a spatialPolygonsDataFrame
zones.spdf <- readOGR(dsn = dsn.zones.shp, layer = layer.zones.shp)
# check if the zones are not overlapping
overlap <- check.zones.overlap(zones.spdf)
if (overlap == TRUE) {
print("Overlapping zones. Modify the zones file.")
} else {
# Add a column with the zone to each road
print(paste0("Adding zones to network to create ", zoned.city.utm.shp))
zoned.city.utm.sldf <- add.zones(city.utm.sldf, zones.spdf, cityname)
}
# if there is no zones_<cityname>.shp just add a column for the complementary zone
} else {
print(paste0("No zones for ", cityname, ". Put a zones_<cityname>.shp in the zones_<cityname> folder."))
# print(paste0("No zones for ", cityname, ". A default complementary zone was added."))
# zoned.network.shp.utm <- network.shp.utm
# zoned.network.shp.utm@data$zone_name <- "Complement"
}
# write the UTM shape file with zone(s) if it was produced. There are 2 possibilities:
# 1) A valid zones file was provided (without overlapping zones)
# 2) No zones file, just one complementary zone everywhere
if ("zoned.city.utm.sldf" %in% ls()) {
writeOGR(zoned.city.utm.sldf,
dsn = dsn.zoned.city.utm.shp,
layer = layer.city.zoned.utm.shp,
driver = "ESRI Shapefile",
overwrite = TRUE)
# !!!!!!!!!!!
# to save some space the network shape file without zones could be deleted
# !!!!!!!!!!!
}
} else {
print(paste0("A zoned network shape file already exists for ", cityname))
} # close if for zoned network shape
# remove zoned.network.shp.utm etc. to avoid wrong behaviour in the next loop. Just to be sure.
for (myVar in c('zoned.network.shp.utm', 'network.shp.utm', 'zones.shp')) {
if (myVar %in% ls()) { rm(list = myVar) }
}
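# Note on rm(): rm(myVar) would remove the loop variable 'myVar' itself,
# because rm() treats its arguments as names; rm(list = myVar) removes the
# variable whose name is stored in myVar. A minimal illustration:
#
# x <- 1; nm <- "x"
# rm(nm)          # removes 'nm'; x still exists
# x <- 1; nm <- "x"
# rm(list = nm)   # removes 'x'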
# Step 3: Grid the road network (currently in Python)
# ---------------------------------------------------
# see fast_gridding.py
gridded.network.file <- paste0(city.output.folder, "/", cityname, "_grid_tbl.txt")
# Step 4: Calculate fleet emission factors
# ----------------------------------------
# scenario definition file
# 5 columns: scenario_name,zone_name,default_fleet_country,default_fleet_year,fleet_configuration
scenario.definition.file <- paste0(city.output.folder, "/", cityname, "_scenario_definition.csv")
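# A hypothetical two-scenario example of such a definition file (all values
# are illustrative, not taken from an actual configuration):
#
# scenario_name,zone_name,default_fleet_country,default_fleet_year,fleet_configuration
# basecase,Complement,NL,2020,default
# LEZ_2025,bigLEZ,NL,2025,diesel_ban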
# emission factor file
emission.factors.file <- paste0(city.output.folder, "/", cityname, "_emission_factors.csv")
if (file.exists(scenario.definition.file)) {
if (!file.exists(emission.factors.file)) {
# create a data frame with the emission factors to be used per
# scenario, zone and road type
scenario.efs.df <- create_fleet_emission_factors(scenario.definition.file)
write.table(scenario.efs.df, file = emission.factors.file, sep = ",", row.names = FALSE, quote = FALSE)
} else {
print(paste0("Emission factors are already calculated for ", cityname))
}
} else {
print(paste0("No scenarios available for ", cityname))
}
# Step 5: Create emission rasters for scenarios
# ---------------------------------------------
# check if a gridded network table and emission factors are available
if (file.exists(gridded.network.file) & file.exists(emission.factors.file)) {
print(paste0("Creating emission rasters for ", cityname))
# read the emission factors
scenario.efs.df <- read.table(emission.factors.file, sep = ",", header = TRUE)
scenario.list <- as.vector(unique(scenario.efs.df$scenario_name))
# read the gridded network
gridded.network.df <- read.table(gridded.network.file, sep = ";", header = TRUE, quote = "")
# create results folder if it doesn't exist
results.folder <- file.path(city.output.folder, "results")
if (!dir.exists(results.folder)) { dir.create(results.folder) }
# Calculate the number of cores
no_cores <- detectCores()
# Initiate cluster
cl <- makeCluster(no_cores-1) # no_cores
# add common variables for all scenarios to the cluster environment
clusterExport(cl, c("gridded.network.df", "scenario.efs.df", "results.folder", "AADT.field"))
# throw the runs on the cluster
parLapply(cl, scenario.list, create.gridded.emissions)
# stop the cluster
stopCluster(cl)
# # loop over all scenarios (sequential code)
# for (scenario_name in scenario.list) {
# # create results folder if it doesn't exist
# results.folder <- paste0(cityname, "/results/")
# if (!dir.exists(results.folder)) { dir.create(results.folder) }
#
# # create a scenario folder if it doesn't exist yet
# scenario.folder <- paste0(results.folder, scenario_name)
# if (!dir.exists(scenario.folder)) { dir.create(scenario.folder) }
#
# # create gridded emissions for each scenario
# # output will be written to the scenario folder (emis_NOx.asc, emis_PM25.asc)
# # add if exist
# if (!file.exists(paste0(scenario.folder, "/emis_NOx.asc"))) {
# print(paste("Running scenario", scenario_name, "in", cityname))
# create.gridded.emissions(scenario_name, gridded.network.df, scenario.efs.df, scenario.folder)
# } else {
# print(paste("Scenario", scenario_name, "in", cityname, "already exists."))
# }
# }
} else {
if(!(file.exists(gridded.network.file))) {
print(paste0("No gridded network available for ", cityname, ". Run the python script."))
}
if(!(file.exists(scenario.definition.file))) {
print(paste0("No scenario definitions for ", cityname))
}
}
# Step 6: Calculate concentrations for the scenarios
# --------------------------------------------------
# the variables 'pollutant' and 'sc.config.file' are defined in the
# config file 'NO2_atlas_config.R'.
if (file.exists(gridded.network.file) & file.exists(emission.factors.file)) {
print(paste0("Creating concentration rasters for ", cityname))
input.list <- list()
n.scenarios <- length(scenario.list)
# loop over all scenarios
for (i in 1:n.scenarios) {
input.list[[i]] <- list("sc.config.file" = sc.config.file,
"city.coords" = city.coords,
"emis.raster.file" = file.path(results.folder, scenario.list[i], paste0("emis_", pollutant, ".asc")),
"pollutant" = if(pollutant=="NOx") {"NO2"} else {pollutant},
"output.path" = file.path(results.folder, scenario.list[i]))
}
# for testing
# sherpacity_par(input.list[[1]])
# Calculate the number of cores
no_cores <- detectCores()
# Initiate cluster with as many cores as scenarios if possible, but never more than the
# total number of cores minus 1
cl <- makeCluster(min(no_cores-1, n.scenarios))
# throw the runs on the cluster
parLapply(cl, input.list, sherpacity_par)
# stop the cluster
stopCluster(cl)
# # loop over all scenarios (sequential)
# for (scenario_name in scenario.list) {
# # check if the emission raster exists
# pollutant <- "NOx"
# for (pollutant in c("NOx")) { # , "PM25"
# output.path <- paste0(cityname, "/results/", scenario_name, "/")
# emis.raster.file <- paste0(output.path, "emis_", pollutant, ".asc")
# if (file.exists(emis.raster.file)) {
# conc.file <- paste0(output.path, pollutant, "_total_conc.asc")
# if (!file.exists(conc.file)) {
# # apply kernel approach to emissions
# print(paste0("Calculation of ", pollutant, " concentrations in ", cityname, " for the ", scenario_name, " scenario."))
# sherpacity(sc.config.file, city.coords, emis.raster.file,
# if (pollutant == "NOx") {"NO2"} else {pollutant}, output.path)
# } else {
# print(paste0(pollutant, " concentrations are already calculated for ", scenario_name, " in ", cityname))
# }
# } else {
# print(paste0("No emissions raster for ", pollutant, " in the ", scenario_name, " scenario in ", cityname))
# }
# }
# }
}
} # close loop over cities
/NO2_atlas_workflow.R | permissive | bd77/SHERPA-city | R | false | false | 14,960 | r |
library(readxl) # for read_excel
library(dplyr)  # for inner_join/left_join/%>%
# 1. Directories and names -----------------------------------------------
# file path of the linking table (koppeltabel). NB: make sure the sheet name is 'AGV' and the columns are 'CODE', 'balansnaam', 'agrarisch'
dir_kop <- 'pbelasting/input/190324_koppeltabel.xlsx'
# folder with the water balances. NB: make sure no faulty balances are in this directory
dir_bal <- "../../balansen/"
# 2. input-----------------------
kopTab <- read_excel(dir_kop)
# EAG dropped in earlier versions, 2500-eag-5 dropped; one balance can be linked to multiple EAGs
files <- list.files(dir_bal)
init <- readRDS("./pbelasting/input/init.rds") # data from G. Ros based on balances by M. Ouboter, 2018-08
meanSoil <- readRDS("./pbelasting/input/meanSoil.rds")
#write.table(files0, file = paste(getwd(),"/output/namenBalansen",format(Sys.time(),"%Y%m%d%H%M"),".csv", sep= ""), quote = FALSE, na = "", sep =';', row.names = FALSE)
# 3. functions to read parameters for waterbalances--------------------
# - loadAlgemeen -> general assumptions (paved area etc.)
# - loadBalance2 -> monthly data
# function to load the general data from the Excel sheet ----------------------
loadAlgemeen = function(x){
alg = read_excel(x, sheet = 'uitgangspunten', col_names = F, skip = 2)[1:34,1:9]
a_tot = as.numeric(alg[5,4])/10000
a_drain = as.numeric(alg[6,4])/10000
a_verhard = as.numeric(alg[7,4])/10000
a_gemengd = as.numeric(alg[8,4])/10000
a_water = as.numeric(alg[9,4])/10000
a_bodemhoogte = as.numeric(alg[10,4])
a_slootdiepte = as.numeric(alg[10,7])
a_inlaat1 = as.character(alg[22,1])
a_inlaat2 = as.character(alg[23,1])
a_inlaat3 = as.character(alg[24,1])
a_inlaat4 = as.character(alg[25,1])
a_inlaat5 = as.character(alg[26,1])
a_uitlaat1 = as.character(alg[27,1])
a_uitlaat2 = as.character(alg[28,1])
a_uitlaat3 = as.character(alg[29,1])
a_uitlaat4 = as.character(alg[30,1])
pol = as.character(files[match(x, fileLocs)])
names(pol) = 'pol'
# possible additions for geohydrology: summer and winter seepage values
out = data.frame(pol, a_tot, a_drain, a_verhard, a_gemengd, a_water,
a_bodemhoogte, a_slootdiepte, a_inlaat1,a_inlaat2,a_inlaat3,a_inlaat4,a_inlaat5,a_uitlaat1,a_uitlaat2,a_uitlaat3,a_uitlaat4)
out$pol = as.character(out$pol)
out$a_inlaat1 = as.character(out$a_inlaat1)
out$a_inlaat2 = as.character(out$a_inlaat2)
out$a_inlaat3 = as.character(out$a_inlaat3)
out$a_inlaat4 = as.character(out$a_inlaat4)
out$a_inlaat5 = as.character(out$a_inlaat5)
out$a_uitlaat1 = as.character(out$a_uitlaat1)
out$a_uitlaat2 = as.character(out$a_uitlaat2)
out$a_uitlaat3 = as.character(out$a_uitlaat3)
out$a_uitlaat4 = as.character(out$a_uitlaat4)
return(out)
}
loadBalance2 = function(file, file2){
print(file)
print(file2)
pol = as.character(files[match(file, fileLocs)])
names(pol) = 'pol'
num <- function(x){as.numeric(x)}
if(grepl(pattern = '.xls$', file)){
balans = read_excel(file, sheet = 'Q+P_mnd', col_names = F, na = '#N/A', skip = 13 )
} else{
# for xlsx, load the whole sheet and drop the header rows, otherwise some columns get skipped
rows = -(1:13)
balans = read_excel(file, sheet = 'Q+P_mnd',col_names = F, na = '#N/A')[rows,] # do not use skip = 13 here: it fails because of a difference between xls and xlsx in the current version of read_excel (2017-02-20)
}
balans = as.data.frame(balans) # make a data frame
# months
maand = num(format(as.Date(num(balans[,1]), origin = "1900-01-01"), "%m"))
jaar = num(format(as.Date(num(balans[,1]), origin = "1900-01-01"), "%Y"))
seiz = ifelse(maand<4 | maand>9, 'w', 'z') # season: winter or summer
# water level, volume and discharge
w_peil = num(balans[,2]) # water level [m NAP]
w_volume = num(balans[,3]) # volume [m3]
w_debiet = num(balans[,4]) # discharge [mm/day]
w_berging = num(balans[,28]) # storage [m3/day]
w_sluitfout = num(balans[,29]) # closure error [m3/day]
w_maalstaat = num(balans[,30])
# IN water terms
w_i_neerslag = num(balans[,6]) # precipitation [m3/day]
w_i_kwel = num(balans[,7]) # seepage [m3/day] (simulated)
w_i_verhard = num(balans[,8]) # inflow via paved area [m3/day]
w_i_riol = num(balans[,9]) # inflow via sewerage [m3/day]
w_i_drain = num(balans[,10]) # inflow via drainage [m3/day]
w_i_uitspoel = num(balans[,11]) # inflow via leaching [m3/day]
w_i_afstroom = num(balans[,12]) # inflow via runoff [m3/day]
w_i_inlaat1 = num(balans[,13])
w_i_inlaat2 = num(balans[,14])
w_i_inlaat3 = num(balans[,15])
w_i_inlaat4 = num(balans[,16])
w_i_inlaat5 = num(balans[,17])
# OUT water terms
w_o_verdamping= -num(balans[,19]) # evaporation [m3/day]
w_o_wegzijg = -num(balans[,20]) # downward seepage [m3/day]
w_o_intrek = -num(balans[,21]) # infiltration [m3/day]
w_o_uitlaat1 = -num(balans[,22])
w_o_uitlaat2 = -num(balans[,23])
w_o_uitlaat3 = -num(balans[,24])
w_o_uitlaat4 = -num(balans[,25])
w_o_uitlaat5 = -num(balans[,26])
# IN P terms (based on the minimum!)
wp_min_neerslag = num(balans[,35]) # [mg/m2/day]
wp_min_kwel = num(balans[,36]) # [mg/m2/day]
wp_min_verhard = num(balans[,37]) # [mg/m2/day]
wp_min_riol = num(balans[,38]) # [mg/m2/day]
wp_min_drain = num(balans[,39]) # [mg/m2/day]
wp_min_uitspoel = num(balans[,40]) # [mg/m2/day]
wp_min_afstroom = num(balans[,41]) # [mg/m2/day]
wp_min_inlaat1 = num(balans[,42])
wp_min_inlaat2 = num(balans[,43])
wp_min_inlaat3 = num(balans[,44])
wp_min_inlaat4 = num(balans[,45])
wp_min_inlaat5 = num(balans[,46])
# IN P terms (based on the increment!)
wp_inc_neerslag = num(balans[,48]) # [mg/m2/day]
wp_inc_kwel = num(balans[,49]) # [mg/m2/day]
wp_inc_verhard = num(balans[,50]) # [mg/m2/day]
wp_inc_riol = num(balans[,51]) # [mg/m2/day]
wp_inc_drain = num(balans[,52]) # [mg/m2/day]
wp_inc_uitspoel = num(balans[,53]) # [mg/m2/day]
wp_inc_afstroom = num(balans[,54]) # [mg/m2/day]
wp_inc_inlaat1 = num(balans[,55])
wp_inc_inlaat2 = num(balans[,56])
wp_inc_inlaat3 = num(balans[,57])
wp_inc_inlaat4 = num(balans[,58])
wp_inc_inlaat5 = num(balans[,59])
# OUT P terms (at the pumping station)
wp_meting_gm3 = ifelse(num(balans[,5]) == 0, NA, num(balans[,5]))
wp_meting_mgm2d = -num(balans[,63]) # [mg/m2/day]
# Build a data.frame of all terms
DF = data.frame(wp_meting_gm3, wp_meting_mgm2d,
jaar, maand, seiz,w_peil, w_volume, w_debiet, w_berging, w_sluitfout,
w_maalstaat, w_i_neerslag,w_i_kwel, w_i_verhard,w_i_riol,
w_i_drain,w_i_uitspoel,w_i_afstroom,
w_i_inlaat1,w_i_inlaat2,w_i_inlaat3,w_i_inlaat4,w_i_inlaat5
,w_o_verdamping,
w_o_wegzijg,w_o_intrek,
w_o_uitlaat1,w_o_uitlaat2,w_o_uitlaat3,w_o_uitlaat4,w_o_uitlaat5,
wp_min_neerslag,wp_min_kwel,
wp_min_verhard, wp_min_riol,wp_min_drain,wp_min_uitspoel,wp_min_afstroom,
wp_min_inlaat1,wp_min_inlaat2,wp_min_inlaat3,wp_min_inlaat4,wp_min_inlaat5,
wp_inc_neerslag,wp_inc_kwel,
wp_inc_verhard, wp_inc_riol,wp_inc_drain,wp_inc_uitspoel,wp_inc_afstroom,
wp_inc_inlaat1,wp_inc_inlaat2,wp_inc_inlaat3,wp_inc_inlaat4,wp_inc_inlaat5, stringsAsFactors = F)
DF <- cbind(DF , pol)
DF <- DF[!DF$w_i_neerslag <= 0,] # drop empty series; zero precipitation on a monthly basis does not occur
# return output
return(DF)
}
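# A hypothetical usage sketch of the two readers on the first balance file
# in dir_bal (both functions rely on the globals 'files' and 'fileLocs'):
#
# files    <- list.files(dir_bal)
# fileLocs <- paste0(dir_bal, files)
# alg1 <- loadAlgemeen(fileLocs[1])
# bal1 <- loadBalance2(fileLocs[1], files[1])
# head(bal1[, c("jaar", "maand", "seiz", "w_peil", "w_volume")])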
# Wrapper function -------------------------------------------------------------
loadBalances_lm = function(...){
files <- list.files(dir_bal)
fileLocs = paste0(dir_bal, files)
# 1. General data ---------------
# read excel data from sheet 'uitgangspunten'
alg = suppressWarnings(do.call(rbind, lapply(fileLocs, loadAlgemeen)))
# save in Rdata file (per date)
# saveRDS(alg, file = paste0('pbelasting/data/balans/alg_',gsub('^20','',gsub('-', '', Sys.Date())), '.rds'))
# 2. Month and season data -----------------
# read excel data from sheet 'jaargemiddelden'
balM <- suppressWarnings(do.call(rbind, lapply(1:length(fileLocs), function(x) loadBalance2(fileLocs[x], files[x]))))
balM$pol <- as.character(balM$pol)
# Link EAG, GAF and KRW water bodies
dat <-
inner_join(alg, balM, by='pol') %>%
left_join(kopTab, by = c('pol' = 'balans')) %>% # to join the other data
mutate(GAF = as.numeric(GAF))%>%
mutate(GAF = as.character(GAF)) %>%
left_join(init, by = c('GAF' = 'i_pol')) #%>% # add initiator data
#left_join(meanSoil, by= c('EAG' = 'CODE')) %>% # add soil data
#left_join(meanSoil, by= c('GAF' = 'CODE')) # add soil data
dat <- dat[!is.na(dat$w_maalstaat),]
dat$p_i_redDAW <- 0
#sel <- dat$agrarisch == '1'& !is.na(dat$wp_min_uitspoel) & !dat$wp_min_uitspoel == 0
dat$p_i_redDAW <- (0.1 * dat$wp_min_uitspoel)
dat$wp_min_sum <- dat$wp_min_neerslag + dat$wp_min_kwel + dat$wp_min_verhard+ dat$wp_min_riol+ dat$wp_min_drain + dat$wp_min_uitspoel +
dat$wp_min_afstroom + dat$wp_min_inlaat1 + dat$wp_min_inlaat2 + dat$wp_min_inlaat3 + dat$wp_min_inlaat4 + dat$wp_min_inlaat5
dat$wp_inc_sum <- dat$wp_inc_neerslag+dat$wp_inc_kwel+ dat$wp_inc_verhard+ dat$wp_inc_riol+dat$wp_inc_drain+dat$wp_inc_uitspoel+dat$wp_inc_afstroom+
dat$wp_inc_inlaat1+dat$wp_inc_inlaat2+dat$wp_inc_inlaat3+dat$wp_inc_inlaat4+dat$wp_inc_inlaat5
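# NB (sketch, not in the original script): the two hand-typed sums above can
# equivalently be built with rowSums() over the matching columns. This assumes
# the wp_min_*/wp_inc_* column names created in loadBalance2() above, and the
# setdiff() guard excludes the sum columns themselves if rerun:
# cols_min <- setdiff(grep("^wp_min_", names(dat), value = TRUE), "wp_min_sum")
# cols_inc <- setdiff(grep("^wp_inc_", names(dat), value = TRUE), "wp_inc_sum")
# dat$wp_min_sum <- rowSums(dat[, cols_min])
# dat$wp_inc_sum <- rowSums(dat[, cols_inc])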
write.table(dat[!is.na(dat$KRW),], file = paste(getwd(),"/pbelasting/output/gemMaandBalansenWL",format(Sys.time(),"%Y%m%d%H%M"),".csv", sep= ""), quote = FALSE, na = "", sep =';', row.names = FALSE)
write.table(dat, file = paste(getwd(),"/pbelasting/output/gemMaandBalansen",format(Sys.time(),"%Y%m%d%H%M"),".csv", sep= ""), quote = FALSE, na = "", sep =';', row.names = FALSE)
saveRDS(dat, file = paste0('pbelasting/dat','.rds'))
return(dat)
}
| /scripts/loadBalances.R | no_license | gerardhros/WaternetAnalyse | R | false | false | 10,268 | r |
|
## ============================================================================
## Exploratory Data Analysis
## ============================================================================
## Course Project 1 - Electric Power Consumption
## Plot 2
## ----------------------------------------------------------------------------
## 2015-01-11, Philipp
## ============================================================================
# set working directory (source file is in a subdirectory called data)
setwd("/home/philipp/projects/coursera/expl_data_analysis/ExData_Plotting1/")
# we just require two days
filteredData <- grep('^[1-2]/2/2007|^Date',
readLines("./data/household_power_consumption.txt"),
value=TRUE)
# source data classes
customClasses <- c("character", "character", "numeric", "numeric", "numeric",
"numeric", "numeric", "numeric", "numeric")
# get data from filtered lines and with custom classes
data <- read.table(textConnection(filteredData),
colClasses=customClasses,
header=TRUE, sep=";")
# convert the date and time values to date/time classes
data$Time <- strptime(paste(data$Date, data$Time), format="%d/%m/%Y %H:%M:%S")
data$Date <- as.Date(data$Date, format="%d/%m/%Y")
# figure 2
# set local to get english weekday names
Sys.setlocale(category="LC_TIME", locale="C")
png(file="plot2.png", width=480, height=480, bg="transparent")
with(data,
plot(Time, Global_active_power,
type="l",
xlab="",
ylab="Global Active Power (kilowatts)")
)
dev.off()
######################################################################################
## Figures for Green's function model paper
######################################################################################
# cols<-c('#d7191c','#fdae61','#ffffbf','#abd9e9','#2c7bb6')
cols<-c('#ca0020','#f4a582','#f7f7f7','#92c5de','#0571b0')
#-------------------------------------------------------------------------------------
## Figure 1: Map
#-------------------------------------------------------------------------------------
setwd("~/Google Drive/Mapping/Broughton Map/")
require(PBSmapping)
load("Data/nepacLLhigh.rda")
data<-read.delim("Data/SeaLice_SummaryDatafile.txt")
names(data)
focal.year<-2006
farms<-read.delim("Data/FarmLocations.txt")
farm.loc<-data.frame(EID=farms$EID,X=farms$lon,Y=farms$lat, name=farms$name, co=farms$company)
farm.loc<-as.EventData(farm.loc, projection="LL")
farm.loc<-farm.loc[c(23,29,30),]
data1<-subset(data, year==focal.year)
X<--data1$lon
Y<-data1$lat
eid<-c(1:length(X))
sampling.loc<-data.frame(EID=eid,X,Y,date=data1$Date, site.name=data1$site.name, temp=data1$temp, sal=data1$sal)
sampling.loc<-sampling.loc[is.na(X)==FALSE,]
sampling.loc<-as.EventData(sampling.loc, projection="LL")
B.box<-data.frame(PID=rep(1,4),POS=c(1:4), X=c(-126.8, -125.5, -125.5, -126.8), Y=c(50.5, 50.5, 51, 51))
B.box<-as.PolySet(B.box, projection="LL")
rivers<-importShapefile("~/Google Drive/Mapping/Broughton Map/Data/bcdrcarto/bcdrcarto_l_1m_v6-0_shp/bcdrcarto_l")
#gpclibPermit()
# Larger Map of BC coast
#quartz(height=4, width=3.5)
pdf("~/Google Drive/Greens/FINALLY/v6/MapBig.pdf", height=4, width=3.5)
plotMap(thinPolys(nepacLLhigh, tol=3), xlim=c(-128.8, -122), ylim=c(48, 53), col=grey(0.8), bg="white", las=1, xaxt="n", yaxt="n", border=grey(0.4))
#Add box for the Broughton Archipelago
addPolys(B.box, lwd=2, density=0, border=1)
plotMap(nepacLLhigh, xlim=c(-126.8, -125.5), ylim=c(50.5, 51), col=grey(0.8), bg="white", border=grey(0.4), xlab=expression(paste(degree, "Longitude")), ylab=expression(paste(degree, "Latitude")), xaxt="n", yaxt="n")
axis(side=1, at=c(-126.6,-126.4, -126.2, -126.0, -125.8, -125.6), labels=c("-126.6", "-126.4","-126.2", "-126.0", "-125.8","-125.6"), tck=0.03, cex.axis=0.8)
axis(side=1, at=seq(-126.8, -125.5, 0.05), labels=FALSE, tck=0.015)
axis(side=2, at=seq(50.6,50.9,0.1), tck=0.03, las=1, cex.axis=0.8)
axis(side=2, at=seq(50.5, 51, 0.05), labels=FALSE, tck=0.015)
axis(side=3, at=c(-126.6,-126.4, -126.2, -126.0, -125.8, -125.6), labels=FALSE, tck=0.03)
axis(side=3, at=seq(-126.8, -125.5, 0.05), labels=FALSE, tck=0.015)
axis(side=4, at=seq(50.6,50.9,0.1), labels=FALSE, tck=0.03, las=1, cex.axis=0.8)
axis(side=4, at=seq(50.5, 51, 0.05), labels=FALSE, tck=0.015)
dev.off() # close the map pdf device opened above
#-------------------------------------------------------------------------------------
## Figure 2: Model schematic
#-------------------------------------------------------------------------------------
# In LaTeX via tikz
#-------------------------------------------------------------------------------------
## Figure 3: The average number of motile L. salmonis per farmed salmon
## on three salmon farms under four different treatment scenarios:
#-------------------------------------------------------------------------------------
load("Workspace/Scenarios_20181130.RData")
pdf(file="Figures/Fig3.pdf", width=5.3, height=4, pointsize=10)
par(mfrow=c(2,2), mar=c(3,3,2,1), oma=c(0,1,0,0))
topText<-c("a) Scenario A", "b) Scenario B", "c) Scenario C", "d) Scenario D")
for(i in 1:4){
plot(as.Date(times, origin="2005-09-01"), predicted.f[[i]][,1], "l",col=1, ylim=c(0, 8), bty="l", las=1, xlab="", ylab="")#max(predicted.f[[i]])
polygon(x=c(as.Date("2006-04-01"), as.Date("2006-04-01"), as.Date("2006-06-30"), as.Date("2006-06-30")), y=c(-1, 10, 10, -1), border=NA, col="#00000020")
lines(as.Date(times, origin="2005-09-01"), predicted.f[[i]][,2], lty=2)
lines(as.Date(times, origin="2005-09-01"), predicted.f[[i]][,3], lty=3)
#abline(v=as.Date(tT[[i]], origin="2005-09-01"), lty=c(1:3))
abline(h=3, col=cols[1])#, col=grey(0.8), lwd=2)
mtext(side=3, adj=0, line=0.5, topText[i])
if(i==2) legend("topleft", lty=c(1,2,3), c("Farm 1", "Farm 2", "Farm 3"), bg="white", bty="n")#legend(as.Date("2005-09-01"), 13, lwd=c(1,1,1,1,15), col=c(rep(1,3), 2,grey(0.8)), lty=c(1,2,3,1,1), c("Sargeaunts", "Humphrey", "Burdwood", "Treatment threshold", "Juvenile salmon migration"), bg="white", ncol=3, xpd=NA, bty="n")
}
mtext(side=2, outer=TRUE, expression(paste("Average number of motile ", italic(L), ". ", italic(salmonis), " per farmed salmon")), line=-0.5)
dev.off()
#-------------------------------------------------------------------------------------
## Figure 4: Farm predictions
#-------------------------------------------------------------------------------------
require(gplots)
load("Workspaces/farmModels_13June2012.RData")
load("Workspaces/farm_fits_20160209.RData")
pdf(file="Figures/Fig4.pdf", width=4, height=7)
par(mfrow=c(3,1), mar=c(4,3,1,1), oma=c(2,2,1,1))
plot(as.Date(T.all, origin="1970-01-01"), fset(p.fitted[[1]][,1], T.all, T.slice.all[1]), "l", xlab="", ylab="", bty="n", ylim=c(0,8), las=1, cex.axis=1.2)
polygon(x=c(T.all, rev(T.all)), y=c(CI.pred[[1]][,1], rev(CI.pred[[1]][,2])), border=NA, col="#00000030")
plotCI(as.Date(unique(Z$Date[Z$Farm=="SP"]), origin="1970-01-01"), Lice.summary[[1]][,1], ui=Lice.summary[[1]][,3], li=Lice.summary[[1]][,2], add=TRUE)
abline(v=T.slice.all[1], lty=2)
mtext(side=3, adj=0, "a) Farm 1", line=0.5)
plot(as.Date(T.all, origin="1970-01-01"), fset(p.fitted[[2]][,1], T.all, T.slice.all[2]), "l", xlab="", ylab="", bty="n", ylim=c(0,8), las=1, cex.axis=1.2)
polygon(x=c(T.all, rev(T.all)), y=c(CI.pred[[2]][,1], rev(CI.pred[[2]][,2])), border=NA, col="#00000030")
plotCI(as.Date(unique(Z$Date[Z$Farm=="HR"]), origin="1970-01-01"), Lice.summary[[2]][,1], ui=Lice.summary[[2]][,3], li=Lice.summary[[2]][,2], add=TRUE)
abline(v=T.slice.all[2], lty=2)
mtext(side=3, adj=0, "b) Farm 2", line=0.5)
plot(as.Date(T.all, origin="1970-01-01"), fset(p.fitted[[3]][,1], T.all, T.slice.all[3]), "l", xlab="", ylab="", bty="n", ylim=c(0,8), las=1, cex.axis=1.2)
polygon(x=c(T.all, rev(T.all)), y=c(CI.pred[[3]][,1], rev(CI.pred[[3]][,2])), border=NA, col="#00000030")
points(as.Date(Z$Date[Z$Farm=="BG"], origin="1970-01-01"), Z$Lice[Z$Farm=="BG"]/20)
abline(v=T.slice.all[3], lty=2)
mtext(side=3, adj=0, "c) Farm 3", line=0.5)
mtext(side=2, outer=TRUE, "Average lice per farm salmon")
mtext(side=1, outer=TRUE, "Date (2005/2006)")
dev.off()
#-------------------------------------------------------------------------------------
## Figure 5: The simulated densities of infectious copepodites
#-------------------------------------------------------------------------------------
load("Workspaces/Scenarios_20181130.RData")
source("filledContour.R")
pal<-colorRampPalette(c("white", 1))
source("4 Simulations/sim-grid.R")
pdf(file="Figures/SimDens.pdf", width=6.3, height=6, pointsize=10)
par(mfrow=c(2,2), mar=c(3,3,2,1), oma=c(2,2,1,0))
for(i in 1:4){
filledContour(farmL.all[[i]][seq(1, 280, 2), seq(1, 606, 4)], x=x[seq(1, 280, 2)], y=as.Date(T, origin="2005-09-01")[seq(1, 606, 4)], zlim=c(0,0.05), color.palette=pal, xlab="", ylab="")
contour(farmL.all[[i]], x=x, y=as.Date(T,origin="2005-09-01"), add=TRUE)
mtext(side=3, line=0.05, paste("Scenario ", LETTERS[i]))
arrows(x0=-40, y0=as.Date(paste("2006-", e1, sep=""), format="%Y-%j"), x1=60, y1=migDate(60, Xstart=-40, Tstart=e1), col=cols[5], length=0.08, xpd=NA, lwd=2)
points(-40, as.Date(paste("2006-", e1, sep=""), format="%Y-%j"), pch=19, col=cols[5])
arrows(x0=-40, y0=as.Date(paste("2006-", e2, sep=""), format="%Y-%j"), x1=60, y1=migDate(60, Xstart=-40, Tstart=e2), col=cols[1], length=0.08, xpd=NA)
points(-40, as.Date(paste("2006-", e2, sep=""), format="%Y-%j"), pch=21, col=cols[1], bg="white")
if(i>=3) abline(h=as.Date(tT[[i]][1], origin="2005-09-01"), lty=2)
abline(v=c(-3.7, 4.0, 53), lty=c(1:3), lwd=1.2)
}
mtext(side=1, outer=TRUE, "Distance along migration (km)")
mtext(side=2, outer=TRUE, "Date", las=0)
dev.off()
#-------------------------------------------------------------------------------------
## Figure 6: Simulation metrics
#-------------------------------------------------------------------------------------
load("Workspaces/Scenarios_20181130.RData")
pdf(file="Figures/Fig6.pdf", width=6.3, height=2.5)
par(mfrow=c(1,3), mar=c(3,5,1,0), oma=c(1,0,1,1), mgp=c(3,1,0))
for(i in c(1,2,5)){
bp<-barplot2(Metrics.summary[[i]][[1]], plot.ci=TRUE, ci.l=Metrics.summary[[i]][[2]], ci.u=Metrics.summary[[i]][[3]], las=1, names.arg=LETTERS[1:4], xlab="", ylab=c("Total infection pressure\nalong migration route", "Max. sea lice per fish", "Motile-days", "Number of lice", "Estimated mortality of wild salmon")[i], beside=TRUE, col=cols[c(4,1)], ci.width=0.4)
abline(h=0)
mtext(side=3, line=0.5, c("a)", "b)", "c) Number of motile-days", " d) Average louse load", "c) ")[i], cex=par('cex'), adj=0)
# if(i==5){
# abline(h=0.159, col=2)
# abline(h=c(0.083, 0.223), col=2, lty=2)
# }
if(i==2)mtext(side=1, "Treatment scenario", cex=par('cex'), line=3)
if(i==1)legend(5.5, 4.1, fill=cols[c(4,1)], c("Normal", "Early"), bty="n", xpd=NA, title="Migration timing")
}
dev.off()
######################################################################################
# SUPPORTING INFORMATION #
######################################################################################
#-------------------------------------------------------------------------------------
## SI Figure 1: Animated copepodite distributions
#-------------------------------------------------------------------------------------
load("Code/Cope distribution/CopedistWorkspace_June14.RData")
p<-read.csv("Code/JAGS/2006/neg binom/dcloneFittedParameters.csv")
t.sim0<-seq(50,150,2)
t.ind<-match(t.sim0, T);t.ind[length(t.ind)]<-2800
require(animation) # ani.options() and saveLatex() come from the animation package
ani.options(outdir = "~/Documents/Greens/FINALLY/Figures/FarmVambient/3SeparateFarms")
saveLatex({
## put any code here to produce several plots
for(i in 1:length(t.sim0)){
plot(x, p$k[1]*p$phi[1]*copedist[,t.ind[i]], "l", bty="l", las=1, ylab="Planktonic copepodites", xlab="Distance", ylim=c(0, max(p$k[1]*p$phi[1]*copedist)), lwd=2, col=grey(0.8))
lines(x, p$k[1]*p$phi[1]*SP[,t.ind[i]]/scale.par)
lines(x, p$k[1]*p$phi[1]*HR[,t.ind[i]]/scale.par)
lines(x, p$k[1]*p$phi[1]*BW[,t.ind[i]]/scale.par)
lines(x, rep(p$k[1], length(x)), lty=2)
abline(v=c(-3.71, 4, 53), lty=3)
mtext(side=3, as.Date("2006-01-01")+t.sim0[i])
#if(t.sim0[i]>=67) lines(x, p$k[1]*p$phi[1]*copedist[,which(T==67)], col="#00000040", lwd=1.2)
}
},
interval = 0.4, ani.dev = 'pdf', ani.type = 'pdf', ani.height = 4, ani.width = 5, ani.opts='controls,width=4in', pdflatex = '/usr/texbin/pdflatex', overwrite=TRUE, documentclass = paste("\\documentclass{article}", "\\usepackage[margin=0.3in]{geometry}", sep="\n"))
#-------------------------------------------------------------------------------------
## Observed and predicted surfaces
#-------------------------------------------------------------------------------------
library(here)
library(dclone)
library(parallel)
library(boot) # for inv.logit function
# Load in results from model fitting by "fit-model-JAGS.R"
load('Workspaces/LiceFits2Data_20160308.RData')
# Summarize MCMC output
S<-summary(mod)
source('3 Fitting/liceBoot.R')
source("3 Fitting/sim-model.R")
d<-4 # Scale of grid for plotting
n.dist<-length(seq(-44,68,d))
n.day<-length(seq(100,144,d))
dist2<-rep(seq(-44,68,d), n.day)
day2<-rep(seq(100,144,d), each=n.dist)
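# NB (sketch, not in the original script): the two rep() calls above enumerate
# the full distance-by-day grid with distances varying fastest; expand.grid()
# builds the same pairing in one call (its first column varies fastest):
# grid2 <- expand.grid(dist = seq(-44, 68, d), day = seq(100, 144, d))
# all(grid2$dist == dist2) & all(grid2$day == day2)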
L<-farmL
sim.out<-simulate.lice(p=S[[1]][,1], dist=dist2, day=day2)
j<-1 #species = pink
# colour scheme
colPred <- c(surf = "#FFFFFF80", grid = "#d7191c80")
# colDat <- c(above = "#2c7bb6", below = "#abd9e9")
colDat <- c(above = 1, below = "#00000030")
# pdf(file = "Figures/Supplement-PinkFits2Data.pdf", width=4, height=9)
colPred <- c(surf = "#FFFFFF80", grid = grey(0.6))
colDat <- c(above = "#2c7bb6", below = "#d7191c60")
# quartz(width = 5, height = 5, pointsize=10)
# par(mfrow=c(1,1), mar=c(1,0,1,0), oma=c(0,0,1,0))
quartz(width = 4, height = 9, pointsize=10)
par(mfrow=c(3,1), mar=c(1,0,1,0), oma=c(0,0,1,0))
for(i in 1:3){
Z<-matrix(sim.out[[i]][,j], nrow=n.dist, ncol=n.day, byrow=FALSE)
pmat1<-persp(z=Z, x=seq(-44,68,d), y=seq(100,144,d), xlab="Distance", ylab="Time", zlab=c("C(x,t)", "H(x,t)", "M(x,t)")[i], theta=-140, phi=15, col=colPred['surf'], zlim=c(0, max(Lice.boot[i,,j,1])), border = colPred['grid'], ticktype = "detailed")
Lmean<-trans3d(x=dist, y=day, z=Lice.boot[i,,j,1], pmat=pmat1)
Lmin<-trans3d(x=dist, y=day, z=Lice.boot[i,,j,2], pmat=pmat1)
Lmax<-trans3d(x=dist, y=day, z=Lice.boot[i,,j,3], pmat=pmat1)
for(k in 1:length(day)){
ind<-c(findInterval(dist[k],seq(-44,68,d)), findInterval(day[k],seq(100,144,d)))
# text(Lmean$x[k], Lmean$y[k], k, xpd=NA)
if(Lice.boot[i,k,j,1]>=Z[ind[1], ind[2]]){ #if obs>pred
points(Lmean$x[k], Lmean$y[k], pch=19, xpd=NA, col=colDat['above'])
segments(Lmean$x[k], Lmean$y[k], Lmax$x[k], Lmax$y[k], col=colDat['above'])
if(Lice.boot[i,k,j,2]>Z[ind[1], ind[2]]){
segments(Lmean$x[k], Lmean$y[k], Lmin$x[k], Lmin$y[k], col=colDat['above'])
}else{
modMin<-trans3d(x=dist[k], y=day[k], z=Z[ind[1], ind[2]], pmat=pmat1)
segments(Lmean$x[k], Lmean$y[k], modMin$x, modMin$y, col=colDat['above'])
segments(modMin$x, modMin$y, Lmin$x[k], Lmin$y[k], col= colDat['below'])
}
} # end if obs > pred
if(Lice.boot[i,k,j,1]<Z[ind[1], ind[2]]){ #if obs < pred
points(Lmean$x[k], Lmean$y[k], pch=19, col=colDat['below'])
segments(Lmean$x[k], Lmean$y[k], Lmin$x[k], Lmin$y[k], col=colDat['below'])
if(Lice.boot[i,k,j,3]<Z[ind[1], ind[2]]){
segments(Lmean$x[k], Lmean$y[k], Lmax$x[k], Lmax$y[k], col=colDat['below'])
}else{
modMin<-trans3d(x=dist[k], y=day[k], z=Z[ind[1], ind[2]], pmat=pmat1)
segments(Lmean$x[k], Lmean$y[k], modMin$x, modMin$y, col=colDat['below'])
segments(modMin$x, modMin$y, Lmax$x[k], Lmax$y[k], col=colDat['above'])
}
} #end if obs < pred
} # end k
# par(new=TRUE)
# persp(z=Z, x=seq(-44,68,d), y=seq(100,144,d), xlab="", ylab="", zlab="", theta=-140, phi=15, col=colPred['surf'], zlim=c(0, max(Lice.boot[i,,j,1])), border = colPred['grid'])
mtext(side=3, adj=0, c("a) Copepodid", "b) Chalimus", "c) Motile")[i])
} #end stage i
#-------------------------------------------------------------------------------------
## Observed versus predicted over time
#-------------------------------------------------------------------------------------
colR <- colorRampPalette(c('#d7191c','#fdae61','#ffffbf','#abd9e9','#2c7bb6'))
sim.out2<-simulate.lice(p=S[[1]][,1], dist=dist, day=day)
for(i in 1:3){
plotCI(sim.out2[[i]][,j], Lice.boot[i,,j,1], li = Lice.boot[i,,j,2], ui = Lice.boot[i,,j,3], gap=0, pch=21, pt.bg = colR(n=length(unique(day)))[match(1:length(unique(day)), order(day))], xlim=c(0.12, 0.15), ylim=c(0, 0.3))
abline(a =0, b=1, lty=2)
} # end stage i
#-------------------------------------------------------------------------------------
## Temperature and salinity over course of sampling
#-------------------------------------------------------------------------------------
siteDat <- read.csv("Figures/site_data2006.csv")
index <- read.csv("3 Fitting/index.csv")
head(siteDat)
range(siteDat$temp, na.rm=TRUE)
range(siteDat$sal, na.rm=TRUE)
mean(siteDat$temp, na.rm=TRUE)
# How many sites sampled in a day?
range(tapply(siteDat$temp, siteDat$date, length))
siteDat$date <- as.Date(siteDat$FullDate, format = "%Y-%m-%d")
plot(siteDat$date, siteDat$temp)
meanTemp <- tapply(siteDat$temp, siteDat$date, mean)
sdTemp <- tapply(siteDat$temp, siteDat$date, sd)
meanSal <- tapply(siteDat$sal, siteDat$date, mean)
sdSal <- tapply(siteDat$sal, siteDat$date, sd)
tempRange <- seq(7, 15, 0.1)
devTimeC <- tau(tempRange, beta[1,1], beta[1,2]) # NB: tau() and beta are defined in the parameter section further below; run those definitions first
tPoints <- rev(tempRange)[findInterval(c(2:5), rev(devTimeC))]
plot(tempRange, devTimeC, "l")
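# NB (sketch, not in the original script): tau() decreases monotonically with
# temperature over this range, so the findInterval() call above inverts it by
# reversing both vectors (findInterval() requires a non-decreasing lookup
# vector). A minimal illustration of the idiom with a made-up curve:
# xs <- seq(1, 10, 0.1)
# ys <- 100/xs # decreasing in xs
# rev(xs)[findInterval(c(50, 20, 10), rev(ys))] # x-values where ys reaches the targets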
quartz(width = 4.5, height = 6, pointsize =10)
par(mfrow=c(2, 1), mar=c(3,4,2,4.5), oma=c(2,0,0,0))
plotCI(as.Date(names(meanTemp), format = "%Y-%m-%d"), meanTemp, type = "n", xlab="", ylab=expression(paste("Surface temperature (", degree, "C)")), las=1, liw=sdTemp, uiw=sdTemp, lwd = NA, bty="u")
axis(side = 4, at = tPoints, labels = c(2:5), las=1)
mtext(side = 4, "Development time\nof copepodites (days)", line=3)
abline(h = 10, lty=2, col = "#d7191c")
abline(h = mean(siteDat$temp, na.rm=TRUE), lwd = 2, col = "#2c7bb6")
plotCI(as.Date(names(meanTemp), format = "%Y-%m-%d"), meanTemp, liw=sdTemp, uiw=sdTemp, gap=0, pch=21, pt.bg="white", sfrac = 0.008, add=TRUE)
mtext(side = 3, adj=0, line = 0.5, "a)")
legend("topleft", lwd=c(2, 1), lty=c(1,2), col=c("#2c7bb6","#d7191c"), c("mean", "assumed value"), bty="n")
plotCI(as.Date(names(meanTemp), format = "%Y-%m-%d"), meanSal, type ="n", xlab="", ylab="Surface salinity (ppm)", las=1, liw=sdSal, uiw=sdSal, lwd = NA, bty="u")
# axis(side = 4, at = sPoints, labels = c(0.01, 0.05, 0.1, 0.2, 0.5, 0.8), las=1, xpd=NA)
# mtext(side = 4, "Mortality rate of copepodites", line=3)
abline(h = 30, lty=2, col = "#d7191c")
abline(h = mean(siteDat$sal, na.rm=TRUE), lwd = 2, col = "#2c7bb6")
plotCI(as.Date(names(meanTemp), format = "%Y-%m-%d"), meanSal, liw=sdSal, uiw=sdSal, gap=0, pch=21, pt.bg="white", sfrac = 0.008, add=TRUE)
mtext(side = 3, adj=0, line = 0.5, "b)")
# What impact on parameters
#Fixed parameters from Stien et al. (2005)
tauC <- 0.525^-2 # Development time of copepodids
tauH <- 0.250^-2 # Development time of chalimus
tauM <- 0.187^-2 # Development time of motiles
beta <- matrix(c(24.79, 74.70, 67.47, 0.525, 0.250, 0.187), nrow = 3, ncol = 2, dimnames = list(c("C", "H", "M")))
tau <- function(temp, beta1, beta2){
(beta1/(temp - 10 + beta1*beta2))^2
}
tau(c(7.7, 12.5), beta[1,1], beta[1,2])
tau(c(7.7, 12.5), beta[2,1], beta[2,2])
tau(c(7.7, 12.5), beta[3,1], beta[3,2])
tau(10, beta[1,1]+c(-1.96, 1.96)*1.43, beta[1,2]+c(-1.96, 1.96)*0.017)
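The Stien et al. (2005) development-time curve tau(T) = (beta1/(T - 10 + beta1*beta2))^2 collapses to beta2^-2 at the reference temperature of 10 °C, which is where the fixed `tauC`, `tauH`, and `tauM` values above come from. A quick Python cross-check of the same formula (coefficients copied from the R code; an illustrative sketch, not project code):

```python
# Development time (days) vs temperature (deg C), after Stien et al. (2005)
def tau(temp, b1, b2):
    return (b1 / (temp - 10 + b1 * b2)) ** 2

b1, b2 = 24.79, 0.525  # copepodid coefficients used in the script

# At 10 deg C the curve reduces to b2^-2, matching tauC <- 0.525^-2 above.
print(round(tau(10, b1, b2), 3))   # 3.628
# Development takes longer in colder water.
print(tau(7.7, b1, b2) > tau(12.5, b1, b2))   # True
```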
# Salinity relationship from Groner et al. (2016)
# time (hrs) = exp(beta0 + beta1*salinity)
M <- function(sal, denom, exponent){
1/(1+(sal/denom)^exponent)
}
M(mean(siteDat$sal, na.rm=TRUE), 19.09, 7.11)
salRange <- seq(10, 40, 0.1)
Ms <- M(salRange, 19.09, 7.11)
sPoints <- rev(salRange)[findInterval(c(0.01, 0.05, 0.1, 0.2, 0.5, 0.8), rev(Ms))]
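The salinity response M(s) = 1/(1 + (s/denom)^exponent) used above is a declining logistic-type curve; at s equal to the `denom` parameter (19.09 ppm here) it is exactly 0.5. A small Python sketch of the same formula (coefficients copied from the `M(...)` call above; illustrative only):

```python
# Salinity response after Groner et al. (2016): M(s) = 1 / (1 + (s/denom)^exponent)
def M(sal, denom=19.09, exponent=7.11):
    return 1.0 / (1.0 + (sal / denom) ** exponent)

# At sal == denom the ratio is 1, so M == 0.5 exactly.
print(M(19.09))          # 0.5
# M declines as salinity rises.
print(M(30) < M(20))     # True
```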
###################
quartz(width = 5, height = 6, pointsize = 12)
layout(matrix(c(1,1,2,3,3,4), nrow=2, byrow=TRUE))
par(oma=c(2,0,0,0))
par(mar=c(4,4,2,0))
plotCI(as.Date(names(meanTemp), format = "%Y-%m-%d"), meanTemp, type = "n", xlab="", ylab=expression(paste("Surface temperature (", degree, "C)")), las=1, liw=sdTemp, uiw=sdTemp, lwd = NA)
abline(h = 10, lty=2, col = "#d7191c")
abline(h = mean(siteDat$temp, na.rm=TRUE), lwd = 2, col = "#2c7bb6")
abline(h = range(meanTemp, na.rm=TRUE), lty = 2, col = "#2c7bb6")
plotCI(as.Date(names(meanTemp), format = "%Y-%m-%d"), meanTemp, liw=sdTemp, uiw=sdTemp, gap=0, pch=21, pt.bg="white", sfrac = 0.008, add=TRUE)
mtext(side = 3, adj=0, line = 0.5, "a)")
legend("topleft", lwd=c(2, 1,1), lty=c(1,2,2), col=c("#2c7bb6","#2c7bb6","#d7191c"), c("overall mean", "range in daily avg.", "assumed constant value"), bty="n")
par(mar=c(4,0,2,4))
u <- par('usr')
plot(tau(seq(u[3], u[4], 0.01), beta[1,1], beta[1,2]), seq(u[3], u[4], 0.01), "l", lwd=2, yaxs="i", yaxt="n", xlab="Development time\n of copepodids (days)")
u1 <- par('usr')
segments(x0 = u1[1], x1=tauC, y0 = 10, y1 = 10, lty=2, col = "#d7191c")
segments(x0 = tauC, x1=tauC, y0 = 10, y1 = u1[3], lty=2, col = "#d7191c")
segments(x0 = rep(u1[1],2), x1=tau(range(meanTemp, na.rm=TRUE), beta[1,1], beta[1,2]), y0 = range(meanTemp, na.rm=TRUE), y1 = range(meanTemp, na.rm=TRUE), lty=2, col = "#2c7bb6")
segments(x0 = tau(range(meanTemp, na.rm=TRUE), beta[1,1], beta[1,2]), x1=tau(range(meanTemp, na.rm=TRUE), beta[1,1], beta[1,2]), y0 = range(meanTemp, na.rm=TRUE), y1 = u1[3], lty=2, col = "#2c7bb6")
text(4.4, 15, "Stien et al. (2005)")
axis(side=4, las=1)
par(mar=c(4,4,2,0))
plotCI(as.Date(names(meanSal), format = "%Y-%m-%d"), meanSal, type ="n", xlab="", ylab="Surface salinity (ppm)", las=1, liw=sdSal, uiw=sdSal, lwd = NA)
# axis(side = 4, at = sPoints, labels = c(0.01, 0.05, 0.1, 0.2, 0.5, 0.8), las=1, xpd=NA)
# mtext(side = 4, "Mortality rate of copepodites", line=3)
abline(h = 30, lty=2, col = "#d7191c")
abline(h = mean(siteDat$sal, na.rm=TRUE), lwd = 2, col = "#2c7bb6")
abline(h = range(meanSal[meanSal>15], na.rm=TRUE), lty = 2, col = "#2c7bb6")
plotCI(as.Date(names(meanTemp), format = "%Y-%m-%d"), meanSal, liw=sdSal, uiw=sdSal, gap=0, pch=21, pt.bg="white", sfrac = 0.008, add=TRUE)
mtext(side = 3, adj=0, line = 0.5, "b)")
points(as.Date(names(meanSal[meanSal < 15]), format = "%Y-%m-%d"), meanSal[meanSal < 15], pch=19)
par(mar=c(4,0,2,4))
u <- par('usr')
S <- seq(u[3], u[4], 0.1)
plot(1 - 1/(1+(S/21.22)^5.82), S, "l", lwd=2, yaxs="i", yaxt="n", xlab="Proportion of\ncopepodids attaching")
u1 <- par('usr')
Srange <- range(meanSal[meanSal>15], na.rm=TRUE)
segments(x0 = rep(u1[1],2), x1=1 - 1/(1+(Srange/21.22)^5.82), y0 = Srange, y1 = Srange, lty=2, col = "#2c7bb6")
segments(x0 = 1 - 1/(1+(Srange/21.22)^5.82), x1=1 - 1/(1+(Srange/21.22)^5.82), y0 = Srange, y1 = u1[3], lty=2, col = "#2c7bb6")
text(0.42, 33.5, "Groner et al. (2016)")
| /figures.R | no_license | sjpeacock/Spatiotemporal-infection-model | R | false | false | 21,396 | r | ######################################################################################
## Figures for Green's function model paper
######################################################################################
# cols<-c('#d7191c','#fdae61','#ffffbf','#abd9e9','#2c7bb6')
cols<-c('#ca0020','#f4a582','#f7f7f7','#92c5de','#0571b0')
#-------------------------------------------------------------------------------------
## Figure 1: Map
#-------------------------------------------------------------------------------------
setwd("~/Google Drive/Mapping/Broughton Map/")
require(PBSmapping)
load("Data/nepacLLhigh.rda")
data<-read.delim("Data/SeaLice_SummaryDatafile.txt")
names(data)
focal.year<-2006
farms<-read.delim("Data/FarmLocations.txt")
farm.loc<-data.frame(EID=farms$EID,X=farms$lon,Y=farms$lat, name=farms$name, co=farms$company)
farm.loc<-as.EventData(farm.loc, projection="LL")
farm.loc<-farm.loc[c(23,29,30),]
data1<-subset(data, year==focal.year)
X<--data1$lon
Y<-data1$lat
eid<-c(1:length(X))
sampling.loc<-data.frame(EID=eid,X,Y,date=data1$Date, site.name=data1$site.name, temp=data1$temp, sal=data1$sal)
sampling.loc<-sampling.loc[is.na(X)==FALSE,]
sampling.loc<-as.EventData(sampling.loc, projection="LL")
B.box<-data.frame(PID=rep(1,4),POS=c(1:4), X=c(-126.8, -125.5, -125.5, -126.8), Y=c(50.5, 50.5, 51, 51))
B.box<-as.PolySet(B.box, projection="LL")
rivers<-importShapefile("~/Google Drive/Mapping/Broughton Map/Data/bcdrcarto/bcdrcarto_l_1m_v6-0_shp/bcdrcarto_l")
#gpclibPermit()
# Larger Map of BC coast
#quartz(height=4, width=3.5)
pdf("~/Google Drive/Greens/FINALLY/v6/MapBig.pdf", height=4, width=3.5)
plotMap(thinPolys(nepacLLhigh, tol=3), xlim=c(-128.8, -122), ylim=c(48, 53), col=grey(0.8), bg="white", las=1, xaxt="n", yaxt="n", border=grey(0.4))
#Add box for the Broughton Archipelago
addPolys(B.box, lwd=2, density=0, border=1)
plotMap(nepacLLhigh, xlim=c(-126.8, -125.5), ylim=c(50.5, 51), col=grey(0.8), bg="white", border=grey(0.4), xlab=expression(paste(degree, "Longitude")), ylab=expression(paste(degree, "Latitude")), xaxt="n", yaxt="n")
axis(side=1, at=c(-126.6,-126.4, -126.2, -126.0, -125.8, -125.6), labels=c("-126.6", "-126.4","-126.2", "-126.0", "-125.8","-125.6"), tck=0.03, cex.axis=0.8)
axis(side=1, at=seq(-126.8, -125.5, 0.05), labels=FALSE, tck=0.015)
axis(side=2, at=seq(50.6,50.9,0.1), tck=0.03, las=1, cex.axis=0.8)
axis(side=2, at=seq(50.5, 51, 0.05), labels=FALSE, tck=0.015)
axis(side=3, at=c(-126.6,-126.4, -126.2, -126.0, -125.8, -125.6), labels=FALSE, tck=0.03)
axis(side=3, at=seq(-126.8, -125.5, 0.05), labels=FALSE, tck=0.015)
axis(side=4, at=seq(50.6,50.9,0.1), labels=FALSE, tck=0.03, las=1, cex.axis=0.8)
axis(side=4, at=seq(50.5, 51, 0.05), labels=FALSE, tck=0.015)
#-------------------------------------------------------------------------------------
## Figure 1: Model schematic
#-------------------------------------------------------------------------------------
# In LaTeX via tikz
#-------------------------------------------------------------------------------------
## Figure 3: The average number of motile L. salmonis per farmed salmon
## on three salmon farms under four different treatment scenarios:
#-------------------------------------------------------------------------------------
load("Workspace/Scenarios_20181130.RData")
pdf(file="Figures/Fig3.pdf", width=5.3, height=4, pointsize=10)
par(mfrow=c(2,2), mar=c(3,3,2,1), oma=c(0,1,0,0))
topText<-c("a) Scenario A", "b) Scenario B", "c) Scenario C", "d) Scenario D")
for(i in 1:4){
plot(as.Date(times, origin="2005-09-01"), predicted.f[[i]][,1], "l",col=1, ylim=c(0, 8), bty="l", las=1, xlab="", ylab="")#max(predicted.f[[i]])
polygon(x=c(as.Date("2006-04-01"), as.Date("2006-04-01"), as.Date("2006-06-30"), as.Date("2006-06-30")), y=c(-1, 10, 10, -1), border=NA, col="#00000020")
lines(as.Date(times, origin="2005-09-01"), predicted.f[[i]][,2], lty=2)
lines(as.Date(times, origin="2005-09-01"), predicted.f[[i]][,3], lty=3)
#abline(v=as.Date(tT[[i]], origin="2005-09-01"), lty=c(1:3))
abline(h=3, col=cols[1])#, col=grey(0.8), lwd=2)
mtext(side=3, adj=0, line=0.5, topText[i])
if(i==2) legend("topleft", lty=c(1,2,3), c("Farm 1", "Farm 2", "Farm 3"), bg="white", bty="n")#legend(as.Date("2005-09-01"), 13, lwd=c(1,1,1,1,15), col=c(rep(1,3), 2,grey(0.8)), lty=c(1,2,3,1,1), c("Sargeaunts", "Humphrey", "Burdwood", "Treatment threshold", "Juvenile salmon migration"), bg="white", ncol=3, xpd=NA, bty="n")
}
mtext(side=2, outer=TRUE, expression(paste("Average number of motile ", italic(L), ". ", italic(salmonis), " per farmed salmon")), line=-0.5)
dev.off()
#-------------------------------------------------------------------------------------
## Figure 4: Farm predictions
#-------------------------------------------------------------------------------------
require(gplots)
load("Workspaces/farmModels_13June2012.RData")
load("Workspaces/farm_fits_20160209.RData")
pdf(file="Figures/Fig4.pdf", width=4, height=7)
par(mfrow=c(3,1), mar=c(4,3,1,1), oma=c(2,2,1,1))
plot(as.Date(T.all, origin="1970-01-01"), fset(p.fitted[[1]][,1], T.all, T.slice.all[1]), "l", xlab="", ylab="", bty="n", ylim=c(0,8), las=1, cex.axis=1.2)
polygon(x=c(T.all, rev(T.all)), y=c(CI.pred[[1]][,1], rev(CI.pred[[1]][,2])), border=NA, col="#00000030")
plotCI(as.Date(unique(Z$Date[Z$Farm=="SP"]), origin="1970-01-01"), Lice.summary[[1]][,1], ui=Lice.summary[[1]][,3], li=Lice.summary[[1]][,2], add=TRUE)
abline(v=T.slice.all[1], lty=2)
mtext(side=3, adj=0, "a) Farm 1", line=0.5)
plot(as.Date(T.all, origin="1970-01-01"), fset(p.fitted[[2]][,1], T.all, T.slice.all[2]), "l", xlab="", ylab="", bty="n", ylim=c(0,8), las=1, cex.axis=1.2)
polygon(x=c(T.all, rev(T.all)), y=c(CI.pred[[2]][,1], rev(CI.pred[[2]][,2])), border=NA, col="#00000030")
plotCI(as.Date(unique(Z$Date[Z$Farm=="HR"]), origin="1970-01-01"), Lice.summary[[2]][,1], ui=Lice.summary[[2]][,3], li=Lice.summary[[2]][,2], add=TRUE)
abline(v=T.slice.all[2], lty=2)
mtext(side=3, adj=0, "b) Farm 2", line=0.5)
plot(as.Date(T.all, origin="1970-01-01"), fset(p.fitted[[3]][,1], T.all, T.slice.all[3]), "l", xlab="", ylab="", bty="n", ylim=c(0,8), las=1, cex.axis=1.2)
polygon(x=c(T.all, rev(T.all)), y=c(CI.pred[[3]][,1], rev(CI.pred[[3]][,2])), border=NA, col="#00000030")
points(as.Date(Z$Date[Z$Farm=="BG"], origin="1970-01-01"), Z$Lice[Z$Farm=="BG"]/20)
abline(v=T.slice.all[3], lty=2)
mtext(side=3, adj=0, "c) Farm 3", line=0.5)
mtext(side=2, outer=TRUE, "Average lice per farm salmon")
mtext(side=1, outer=TRUE, "Date (2005/2006)")
dev.off()
#-------------------------------------------------------------------------------------
## Figure 5: The simulated densities of infectious copepodites
#-------------------------------------------------------------------------------------
load("Workspaces/Scenarios_20181130.RData")
source("filledContour.R")
pal<-colorRampPalette(c("white", 1))
source("4 Simulations/sim-grid.R")
pdf(file="Figures/SimDens.pdf", width=6.3, height=6, pointsize=10)
par(mfrow=c(2,2), mar=c(3,3,2,1), oma=c(2,2,1,0))
for(i in 1:4){
filledContour(farmL.all[[i]][seq(1, 280, 2), seq(1, 606, 4)], x=x[seq(1, 280, 2)], y=as.Date(T, origin="2005-09-01")[seq(1, 606, 4)], zlim=c(0,0.05), color.palette=pal, xlab="", ylab="")
contour(farmL.all[[i]], x=x, y=as.Date(T,origin="2005-09-01"), add=TRUE)
mtext(side=3, line=0.05, paste("Scenario ", LETTERS[i]))
arrows(x0=-40, y0=as.Date(paste("2006-", e1, sep=""), format="%Y-%j"), x1=60, y1=migDate(60, Xstart=-40, Tstart=e1), col=cols[5], length=0.08, xpd=NA, lwd=2)
points(-40, as.Date(paste("2006-", e1, sep=""), format="%Y-%j"), pch=19, col=cols[5])
arrows(x0=-40, y0=as.Date(paste("2006-", e2, sep=""), format="%Y-%j"), x1=60, y1=migDate(60, Xstart=-40, Tstart=e2), col=cols[1], length=0.08, xpd=NA)
points(-40, as.Date(paste("2006-", e2, sep=""), format="%Y-%j"), pch=21, col=cols[1], bg="white")
if(i>=3) abline(h=as.Date(tT[[i]][1], origin="2005-09-01"), lty=2)
abline(v=c(-3.7, 4.0, 53), lty=c(1:3), lwd=1.2)
}
mtext(side=1, outer=TRUE, "Distance along migration (km)")
mtext(side=2, outer=TRUE, "Date", las=0)
dev.off()
#-------------------------------------------------------------------------------------
## Figure 6: Simulation metrics
#-------------------------------------------------------------------------------------
load("Workspaces/Scenarios_20181130.RData")
pdf(file="Figures/Fig6.pdf", width=6.3, height=2.5)
par(mfrow=c(1,3), mar=c(3,5,1,0), oma=c(1,0,1,1), mgp=c(2.5, 1, 0), mgp=c(3,1,0))
for(i in c(c(1,2,5))){
bp<-barplot2(Metrics.summary[[i]][[1]], plot.ci=TRUE, ci.l=Metrics.summary[[i]][[2]], ci.u=Metrics.summary[[i]][[3]], las=1, names.arg=LETTERS[1:4], xlab="", ylab=c("Total infection pressure\nalong migration route", "Max. sea lice per fish", "Motile-days", "Number of lice", "Estimated mortality of wild salmon")[i], beside=TRUE, col=cols[c(4,1)], ci.width=0.4)
abline(h=0)
mtext(side=3, line=0.5, c("a)", "b)", "c) Number of motile-days", " d) Average louse load", "c) ")[i], cex=par('cex'), adj=0)
# if(i==5){
# abline(h=0.159, col=2)
# abline(h=c(0.083, 0.223), col=2, lty=2)
# }
if(i==2)mtext(side=1, "Treatment scenario", cex=par('cex'), line=3)
if(i==1)legend(5.5, 4.1, fill=cols[c(4,1)], c("Normal", "Early"), bty="n", xpd=NA, title="Migration timing")
}
dev.off()
######################################################################################
# SUPPORTING INFORMATION #
######################################################################################
#-------------------------------------------------------------------------------------
## SI Figure 1: Farm predictions
#-------------------------------------------------------------------------------------
load("Code/Cope distribution/CopedistWorkspace_June14.RData")
p<-read.csv("Code/JAGS/2006/neg binom/dcloneFittedParameters.csv")
t.sim0<-seq(50,150,2)
t.ind<-match(t.sim0, T);t.ind[length(t.ind)]<-2800
ani.options(outdir = "~/Documents/Greens/FINALLY/Figures/FarmVambient/3SeparateFarms")
saveLatex({
## put any code here to produce several plots
for(i in 1:length(t.sim0)){
plot(x, p$k[1]*p$phi[1]*copedist[,t.ind[i]], "l", bty="l", las=1, ylab="Planktonic copepodites", xlab="Distance", ylim=c(0, max(p$k[1]*p$phi[1]*copedist)), lwd=2, col=grey(0.8))
lines(x, p$k[1]*p$phi[1]*SP[,t.ind[i]]/scale.par)
lines(x, p$k[1]*p$phi[1]*HR[,t.ind[i]]/scale.par)
lines(x, p$k[1]*p$phi[1]*BW[,t.ind[i]]/scale.par)
lines(x, rep(p$k[1], length(x)), lty=2)
abline(v=c(-3.71, 4, 53), lty=3)
mtext(side=3, as.Date("2006-01-01")+t.sim0[i])
#if(t.sim0[i]>=67) lines(x, p$k[1]*p$phi[1]*copedist[,which(T==67)], col="#00000040", lwd=1.2)
}
},
interval = 0.4, ani.dev = 'pdf', ani.type = 'pdf', ani.height = 4, ani.width = 5, ani.opts='controls,width=4in', pdflatex = '/usr/texbin/pdflatex', overwrite=TRUE, documentclass = paste("\\documentclass{article}", "\\usepackage[margin=0.3in]{geometry}", sep="\n"))
#-------------------------------------------------------------------------------------
## Observed and predicted surfaces
#-------------------------------------------------------------------------------------
library(here)
library(dclone)
library(parallel)
library(boot) # for inv.logit function
# Load in results from model fitting by "fit-model-JAGS.R"
load('Workspaces/LiceFits2Data_20160308.RData')
# Summarize MCMC output
S<-summary(mod)
source('3 Fitting/liceBoot.R')
source("3 Fitting/sim-model.R")
d<-4 # Scale of grid for plotting
n.dist<-length(seq(-44,68,d))
n.day<-length(seq(100,144,d))
dist2<-rep(seq(-44,68,d), n.day)
day2<-rep(seq(100,144,d), each=n.dist)
L<-farmL
sim.out<-simulate.lice(p=S[[1]][,1], dist=dist2, day=day2)
j<-1 #species = pink
# colour scheme
colPred <- c(surf = "#FFFFFF80", grid = "#d7191c80")
# colDat <- c(above = "#2c7bb6", below = "#abd9e9")
colDat <- c(above = 1, below = "#00000030")
# pdf(file = "Figures/Supplement-PinkFits2Data.pdf", width=4, height=9)
colPred <- c(surf = "#FFFFFF80", grid = grey(0.6))
colDat <- c(above = "#2c7bb6", below = "#d7191c60")
# quartz(width = 5, height = 5, pointsize=10)
# par(mfrow=c(1,1), mar=c(1,0,1,0), oma=c(0,0,1,0))
quartz(width = 4, height = 9, pointsize=10)
par(mfrow=c(3,1), mar=c(1,0,1,0), oma=c(0,0,1,0))
for(i in 1:3){
Z<-matrix(sim.out[[i]][,j], nrow=n.dist, ncol=n.day, byrow=FALSE)
pmat1<-persp(z=Z, x=seq(-44,68,d), y=seq(100,144,d), xlab="Distance", ylab="Time", zlab=c("C(x,t)", "H(x,t)", "M(x,t)")[i], theta=-140, phi=15, col=colPred['surf'], zlim=c(0, max(Lice.boot[i,,j,1])), border = colPred['grid'], ticktype = "detailed")
Lmean<-trans3d(x=dist, y=day, z=Lice.boot[i,,j,1], pmat=pmat1)
Lmin<-trans3d(x=dist, y=day, z=Lice.boot[i,,j,2], pmat=pmat1)
Lmax<-trans3d(x=dist, y=day, z=Lice.boot[i,,j,3], pmat=pmat1)
for(k in 1:length(day)){
ind<-c(findInterval(dist[k],seq(-44,68,d)), findInterval(day[k],seq(100,144,d)))
# text(Lmean$x[k], Lmean$y[k], k, xpd=NA)
if(Lice.boot[i,k,j,1]>=Z[ind[1], ind[2]]){ #if obs>pred
points(Lmean$x[k], Lmean$y[k], pch=19, xpd=NA, col=colDat['above'])
segments(Lmean$x[k], Lmean$y[k], Lmax$x[k], Lmax$y[k], col=colDat['above'])
if(Lice.boot[i,k,j,2]>Z[ind[1], ind[2]]){
segments(Lmean$x[k], Lmean$y[k], Lmin$x[k], Lmin$y[k], col=colDat['above'])
}else{
modMin<-trans3d(x=dist[k], y=day[k], z=Z[ind[1], ind[2]], pmat=pmat1)
segments(Lmean$x[k], Lmean$y[k], modMin$x, modMin$y, col=colDat['above'])
segments(modMin$x, modMin$y, Lmin$x[k], Lmin$y[k], col= colDat['below'])
}
} # end if obs > pred
if(Lice.boot[i,k,j,1]<Z[ind[1], ind[2]]){ #if obs < pred
points(Lmean$x[k], Lmean$y[k], pch=19, col=colDat['below'])
segments(Lmean$x[k], Lmean$y[k], Lmin$x[k], Lmin$y[k], col=colDat['below'])
if(Lice.boot[i,k,j,3]<Z[ind[1], ind[2]]){
segments(Lmean$x[k], Lmean$y[k], Lmax$x[k], Lmax$y[k], col=colDat['below'])
}else{
modMin<-trans3d(x=dist[k], y=day[k], z=Z[ind[1], ind[2]], pmat=pmat1)
segments(Lmean$x[k], Lmean$y[k], modMin$x, modMin$y, col=colDat['below'])
segments(modMin$x, modMin$y, Lmax$x[k], Lmax$y[k], col=colDat['above'])
}
} #end if obs < pred
} # end k
# par(new=TRUE)
# persp(z=Z, x=seq(-44,68,d), y=seq(100,144,d), xlab="", ylab="", zlab="", theta=-140, phi=15, col=colPred['surf'], zlim=c(0, max(Lice.boot[i,,j,1])), border = colPred['grid'])
mtext(side=3, adj=0, c("a) Copepodid", "b) Chalimus", "c) Motile")[i])
} #end stage i
#-------------------------------------------------------------------------------------
## Observed versus predicted over time
#-------------------------------------------------------------------------------------
colR <- colorRampPalette(c('#d7191c','#fdae61','#ffffbf','#abd9e9','#2c7bb6'))
sim.out2<-simulate.lice(p=S[[1]][,1], dist=dist, day=day)
for(i in 1:3){
plotCI(sim.out2[[i]][,j], Lice.boot[i,,j,1], li = Lice.boot[i,,j,2], ui = Lice.boot[i,,j,3], gap=0, pch=21, pt.bg = colR(n=length(unique(day)))[match(1:length(unique(day)), order(day))], xlim=c(0.12, 0.15), ylim=c(0, 0.3))
abline(a =0, b=1, lty=2)
#-------------------------------------------------------------------------------------
## Temperature and salinity over course of sampling
#-------------------------------------------------------------------------------------
siteDat <- read.csv("Figures/site_data2006.csv")
index <- read.csv("3 Fitting/index.csv")
head(siteDat)
range(siteDat$temp, na.rm=TRUE)
range(siteDat$sal, na.rm=TRUE)
mean(siteDat$temp, na.rm=TRUE)
# How many sites sampled in a day?
range(tapply(siteDat$temp, siteDat$date, length))
siteDat$date <- as.Date(siteDat$FullDate, format = "%Y-%m-%d")
plot(siteDat$date, siteDat$temp)
meanTemp <- tapply(siteDat$temp, siteDat$date, mean)
sdTemp <- tapply(siteDat$temp, siteDat$date, sd)
meanSal <- tapply(siteDat$sal, siteDat$date, mean)
sdSal <- tapply(siteDat$sal, siteDat$date, sd)
tempRange <- seq(7, 15, 0.1)
|
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Do not modify this file since it was automatically generated from:
%
% plot-methods.R
%
% by the Rdoc compiler part of the R.oo package.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\name{noisePlot}
\alias{noisePlot}
\alias{noisePlot,QDNAseqReadCounts,missing-method}
\title{Plot noise as a function of sequence depth}
\usage{
noisePlot(x, y, ...)
}
\description{
Plot noise as a function of sequence depth.
}
\arguments{
\item{x}{A \code{\link{QDNAseqReadCounts}} object.}
\item{y}{missing}
\item{...}{Further arguments to \code{\link[graphics]{plot}}() and
\code{\link[graphics]{text}}.}
}
\examples{
data(LGG150)
readCounts <- LGG150
readCountsFiltered <- applyFilters(readCounts)
readCountsFiltered <- estimateCorrection(readCountsFiltered)
noisePlot(readCountsFiltered)
}
\author{Ilari Scheinin}
\keyword{hplot}
| /man/noisePlot.Rd | no_license | ccagc/QDNAseq | R | false | false | 962 | rd | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Do not modify this file since it was automatically generated from:
%
% plot-methods.R
%
% by the Rdoc compiler part of the R.oo package.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\name{noisePlot}
\alias{noisePlot}
\alias{noisePlot,QDNAseqReadCounts,missing-method}
\title{Plot noise as a function of sequence depth}
\usage{
noisePlot(x, y, ...)
}
\description{
Plot noise as a function of sequence depth.
}
\arguments{
\item{x}{A \code{\link{QDNAseqReadCounts}} object.}
\item{y}{missing}
\item{...}{Further arguments to \code{\link[graphics]{plot}}() and
\code{\link[graphics]{text}}.}
}
\examples{
data(LGG150)
readCounts <- LGG150
readCountsFiltered <- applyFilters(readCounts)
readCountsFiltered <- estimateCorrection(readCountsFiltered)
noisePlot(readCountsFiltered)
}
\author{Ilari Scheinin}
\keyword{hplot}
|
# 09. Data analysis project
# 09-1. Preparing to analyze the Korea Welfare Panel data
install.packages("foreign") # install the foreign package
install.packages("dplyr")
install.packages("ggplot2")
install.packages("readxl")
library(foreign) # load SPSS files
library(dplyr) # preprocessing
library(ggplot2) # visualization
library(readxl) # read Excel files
raw_welfare <- read.spss(file = "Koweps_hpc10_2015_beta3.sav", to.data.frame = T)
welfare <- raw_welfare
# Examine the data
head(welfare)
tail(welfare)
View(welfare)
dim(welfare)
str(welfare)
summary(welfare)
# Rename variables
welfare <- rename(welfare, sex = h10_g3, # sex
                  birth = h10_g4, # birth year
                  marriage = h10_g10, # marital status
                  religion = h10_g11, # religion
                  income = p1002_8aq1, # monthly income
                  code_job = h10_eco9, # job code
                  code_region = h10_reg7) # region code
str(welfare)
View(welfare$sex)
# 09-2. Income differences by sex
# - "Does monthly income differ by sex?"
class(welfare$sex)
table(welfare$sex)
# Recode outliers as missing
welfare$sex <- ifelse(welfare$sex == 9, NA, welfare$sex)
# Check missing values
table(is.na(welfare$sex))
welfare$sex <- ifelse(welfare$sex == 1, "male", "female") # "male" for 1, "female" for 2
table(welfare$sex)
qplot(welfare$sex)
# Examine and preprocess the income variable
# 1. Examine the variable
class(welfare$income)
summary(welfare$income)
qplot(welfare$income)
qplot(welfare$income) + xlim(0, 1000) # zoom in for a closer look
# 2. Preprocessing
# Check outliers: many respondents did not disclose their income.
summary(welfare$income)
# Recode outliers as missing
welfare$income <- ifelse(welfare$income %in% c(0, 9999), NA, welfare$income)
# Check missing values
table(is.na(welfare$income)) # 12,044 NAs
# Analyze income differences by sex
# 1. Build a table of mean income by sex
sex_income <- welfare %>%
filter(!is.na(income)) %>%
group_by(sex) %>%
summarise(mean_income = mean(income))
sex_income
ggplot(data = sex_income, aes(x = sex, y = mean_income)) + geom_col()
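The dplyr pipeline above (drop missing income, group by sex, take the mean) is a plain grouped average; the same computation sketched in stdlib Python on toy data (values made up, for illustration only):

```python
from statistics import mean

# Toy records mirroring welfare$sex / welfare$income; None marks a
# non-response (NA in R).
rows = [
    {"sex": "male", "income": 300.0},
    {"sex": "male", "income": None},      # dropped by the NA filter
    {"sex": "female", "income": 180.0},
    {"sex": "female", "income": 220.0},
]

# filter(!is.na(income)) %>% group_by(sex) %>% summarise(mean_income = mean(income))
by_sex = {}
for r in rows:
    if r["income"] is not None:
        by_sex.setdefault(r["sex"], []).append(r["income"])
sex_income = {sex: mean(v) for sex, v in by_sex.items()}
print(sex_income)   # {'male': 300.0, 'female': 200.0}
```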
# 09-3. The relationship between age and income
# - "At what age is monthly income highest?" (p. 17 of the PDF)
class(welfare$birth)
summary(welfare$birth)
qplot(welfare$birth)
# 2. Preprocessing
# Check outliers
summary(welfare$birth)
# Check missing values
table(is.na(welfare$birth))
# Recode outliers as missing
welfare$birth <- ifelse(welfare$birth == 9999, NA, welfare$birth)
table(is.na(welfare$birth))
# 3. Create a derived variable: age
welfare$age <- 2015 - welfare$birth + 1
summary(welfare$age)
qplot(welfare$age)
# 1. Build a table of mean income by age
age_income <- welfare %>% filter(!is.na(income)) %>% group_by(age) %>% summarise(mean_income = mean(income))
head(age_income)
ggplot(data = age_income, aes(x = age, y = mean_income)) + geom_line()
# Examine and preprocess the age-group variable
# Create a derived variable: age group
welfare <- welfare %>% mutate(ageg = ifelse(age < 30, "young", ifelse(age <= 59, "middle", "old")))
table(welfare$ageg)
qplot(welfare$ageg)
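The nested ifelse above bins age into three groups: under 30 is "young", 30–59 is "middle", and 60 and over is "old". The same cut, sketched in Python for clarity (the function name is mine):

```python
def age_group(age):
    # Mirrors: ifelse(age < 30, "young", ifelse(age <= 59, "middle", "old"))
    if age < 30:
        return "young"
    elif age <= 59:
        return "middle"
    else:
        return "old"

# Boundary cases around the two cut points
print([age_group(a) for a in (29, 30, 59, 60)])
# ['young', 'middle', 'middle', 'old']
```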
# Analyze income differences by age group
# 1. Build a table of mean income by age group
ageg_income <- welfare %>% filter(!is.na(income)) %>% group_by(ageg) %>% summarise(mean_income = mean(income))
ageg_income
ggplot(data = ageg_income, aes(x = ageg, y = mean_income)) + geom_col()
# Order the bars by age: young, middle, old
ggplot(data = ageg_income, aes(x = ageg, y = mean_income)) + geom_col() + scale_x_discrete(limits = c("young", "middle", "old"))
# 9-5
# Analyze income differences by age group and sex
# 1. Build a table of mean income by age group and sex
sex_income <- welfare %>% filter(!is.na(income)) %>% group_by(ageg, sex) %>% summarise(mean_income = mean(income))
sex_income
ggplot(data = sex_income, aes(x = ageg, y = mean_income, fill = sex)) +
geom_col() + scale_x_discrete(limits = c("young", "middle", "old"))
# Separate the bars by sex
ggplot(data = sex_income, aes(x = ageg, y = mean_income, fill = sex)) + geom_col(position = "dodge") + scale_x_discrete(limits = c("young", "middle", "old"))
# Build a table of mean income by sex and age
sex_age <- welfare %>%
filter(!is.na(income)) %>%
group_by(age, sex) %>%
summarise(mean_income = mean(income))
head(sex_age)
ggplot(data = sex_age, aes(x = age, y = mean_income, col = sex)) + geom_line()
# 9-6. Income differences by occupation
# - "Which occupation has the highest monthly income?"
class(welfare$code_job)
table(welfare$code_job)
library(readxl)
list_job <- read_excel("Koweps_Codebook.xlsx", col_names = T, sheet = 2)
head(list_job)
dim(list_job)
welfare <- left_join(welfare, list_job, id = "code_job")
welfare %>%
filter(!is.na(code_job)) %>%
select(code_job, job) %>%
head(10)
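The `left_join` above keeps every welfare row and attaches a job name wherever the job code is known, leaving NA for unmatched codes. A dict-lookup sketch of that join in Python (the codes and names here are made-up toy values):

```python
# Toy lookup table, standing in for list_job (code_job -> job name)
list_job = {941: "Cleaner", 873: "Driver"}

# Toy welfare rows; 999 is a code absent from the lookup table
welfare = [{"code_job": 941}, {"code_job": 999}, {"code_job": 873}]

# left join: every left row kept, None (~NA) where no match exists
for row in welfare:
    row["job"] = list_job.get(row["code_job"])

print([r["job"] for r in welfare])   # ['Cleaner', None, 'Driver']
```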
# Analyze income differences by occupation
# 1. Build a table of mean income by occupation
job_income <- welfare %>%
filter(!is.na(job) & !is.na(income)) %>%
group_by(job) %>%
summarise(mean_income = mean(income))
head(job_income)
top10 <- job_income %>%
arrange(desc(mean_income)) %>%
head(10)
top10
ggplot(data = top10, aes(x = reorder(job, mean_income), y = mean_income)) +
geom_col() +
coord_flip()
bottom10 <- job_income %>%
arrange(mean_income) %>%
head(10)
bottom10
ggplot(data = bottom10, aes(x = reorder(job, -mean_income),
y = mean_income)) +
geom_col() +
coord_flip() +
ylim(0, 850)
# 9-7. Occupation frequency by sex
# - "Which occupations are most common for each sex?"
# Extract the 10 most frequent occupations among men
job_male <- welfare %>%
filter(!is.na(job) & sex == "male") %>%
group_by(job) %>%
summarise(n = n()) %>%
arrange(desc(n)) %>%
head(10)
job_male
# Extract the 10 most frequent occupations among women
job_female <- welfare %>%
filter(!is.na(job) & sex == "female") %>%
group_by(job) %>%
summarise(n = n()) %>%
arrange(desc(n)) %>%
head(10)
job_female
# Top 10 occupations among men
ggplot(data = job_male, aes(x = reorder(job, n), y = n)) +
geom_col() +
coord_flip()
# Top 10 occupations among women
ggplot(data = job_female, aes(x = reorder(job, n), y = n)) +
geom_col() +
coord_flip()
# 09-8. Divorce rate by religious affiliation
# - "Are religious people less likely to divorce?"
class(welfare$religion)
table(welfare$religion)
# Label religious affiliation ("yes"/"no")
welfare$religion <- ifelse(welfare$religion == 1, "yes", "no")
table(welfare$religion)
qplot(welfare$religion)
class(welfare$marriage)
table(welfare$marriage)
# Create a divorce-status variable
welfare$group_marriage <- ifelse(welfare$marriage == 1, "marriage",
ifelse(welfare$marriage == 3, "divorce", NA))
table(welfare$group_marriage)
table(is.na(welfare$group_marriage))
qplot(welfare$group_marriage)
religion_marriage <- welfare %>%
filter(!is.na(group_marriage)) %>%
group_by(religion, group_marriage) %>%
summarise(n = n()) %>%
mutate(tot_group = sum(n)) %>%
mutate(pct = round(n/tot_group*100, 1))
religion_marriage
religion_marriage <- welfare %>%
filter(!is.na(group_marriage)) %>%
count(religion, group_marriage) %>%
group_by(religion) %>%
mutate(pct = round(n/sum(n)*100, 1))
# 이혼 추출
divorce <- religion_marriage %>%
filter(group_marriage == "divorce") %>%
select(religion, pct)
divorce
ggplot(data = divorce, aes(x = religion, y = pct)) + geom_col()
ageg_marriage <- welfare %>%
filter(!is.na(group_marriage)) %>%
group_by(ageg, group_marriage) %>%
summarise(n = n()) %>%
mutate(tot_group = sum(n)) %>%
mutate(pct = round(n/tot_group*100, 1))
ageg_marriage
ageg_marriage <- welfare %>%
filter(!is.na(group_marriage)) %>%
count(ageg, group_marriage) %>%
group_by(ageg) %>%
mutate(pct = round(n/sum(n)*100, 1))
# 초년 제외, 이혼 추출
ageg_divorce <- ageg_marriage %>%
filter(ageg != "young" & group_marriage == "divorce") %>%
select(ageg, pct)
ageg_divorce
ggplot(data = ageg_divorce, aes(x = ageg, y = pct)) + geom_col()
# 연령대, 종교유무, 결혼상태별 비율표 만들기
ageg_religion_marriage <- welfare %>%
filter(!is.na(group_marriage) & ageg != "young") %>%
group_by(ageg, religion, group_marriage) %>%
summarise(n = n()) %>%
mutate(tot_group = sum(n)) %>%
mutate(pct = round(n/tot_group*100, 1))
ageg_religion_marriage
ageg_religion_marriage <- welfare %>%
filter(!is.na(group_marriage) & ageg != "young") %>%
count(ageg, religion, group_marriage) %>%
group_by(ageg, religion) %>%
mutate(pct = round(n/sum(n)*100, 1))
df_divorce <- ageg_religion_marriage %>%
filter(group_marriage == "divorce") %>%
select(ageg, religion, pct)
df_divorce
ggplot(data = df_divorce, aes(x = ageg, y = pct, fill = religion )) +
geom_col(position = "dodge")
#09-9. 지역별 연령대 비율
# "노년층이 많은 지역은 어디일까?"
class(welfare$code_region)
table(welfare$code_region)
# 지역 코드 목록 만들기
list_region <- data.frame(code_region = c(1:7),
region = c("서울",
"수도권(인천/경기)",
"부산/경남/울산",
"대구/경북",
"대전/충남",
"강원/충북",
"광주/전남/전북/제주도"))
list_region
welfare <- left_join(welfare, list_region, id = "code_region")
welfare %>%
select(code_region, region) %>%
head
region_ageg <- welfare %>%
group_by(region, ageg) %>%
summarise(n = n()) %>%
mutate(tot_group = sum(n)) %>%
mutate(pct = round(n/tot_group*100, 2))
head(region_ageg)
region_ageg <- welfare %>%
count(region, ageg) %>%
group_by(region) %>%
mutate(pct = round(n/sum(n)*100, 2))
ggplot(data = region_ageg, aes(x = region, y = pct, fill = ageg)) +
geom_col() +
coord_flip()
# 노년층 비율 내림차순 정렬
list_order_old <- region_ageg %>%
filter(ageg == "old") %>%
arrange(pct)
list_order_old
order <- list_order_old$region
order
ggplot(data = region_ageg, aes(x = region, y = pct, fill = ageg)) +
geom_col() +
coord_flip() +
scale_x_discrete(limits = order)
class(region_ageg$ageg)
levels(region_ageg$ageg)
region_ageg$ageg <- factor(region_ageg$ageg,
level = c("old", "middle", "young"))
class(region_ageg$ageg)
levels(region_ageg$ageg)
ggplot(data = region_ageg, aes(x = region, y = pct, fill = ageg)) +
geom_col() +
coord_flip() +
scale_x_discrete(limits = order)
| /코드 - 2015년도 한국인의 삶 비교분석.R | no_license | wnsqja1324/R_report | R | false | false | 11,069 | r | # 09. 데이터 분석 프로젝트
# 09-1. Preparing the 'Korea Welfare Panel' data for analysis
install.packages("foreign") # install the foreign package
install.packages("dplyr")
install.packages("ggplot2")
install.packages("readxl")
library(foreign) # load SPSS files
library(dplyr) # preprocessing
library(ggplot2) # visualization
library(readxl) # read Excel files
raw_welfare <- read.spss(file = "Koweps_hpc10_2015_beta3.sav", to.data.frame = T)
welfare <- raw_welfare
# Inspect the data
head(welfare)
tail(welfare)
View(welfare)
dim(welfare)
str(welfare)
summary(welfare)
# Rename variables
welfare <- rename(welfare, sex = h10_g3, # sex
                  birth = h10_g4, # birth year
                  marriage = h10_g10, # marital status
                  religion = h10_g11, # religion
                  income = p1002_8aq1, # monthly income
                  code_job = h10_eco9, # job code
                  code_region = h10_reg7) # region code
str(welfare)
View(welfare$sex)
# 09-2. Income differences by sex
# - "Does monthly income differ by sex?"
class(welfare$sex)
table(welfare$sex)
# Recode outliers as missing
welfare$sex <- ifelse(welfare$sex == 9, NA, welfare$sex)
# Check missing values
table(is.na(welfare$sex))
welfare$sex <- ifelse(welfare$sex == 1, "male", "female") # "male" for 1, "female" for 2
table(welfare$sex)
qplot(welfare$sex)
# Examine and preprocess the income variable
# 1. Examine the variable
class(welfare$income)
summary(welfare$income)
qplot(welfare$income)
qplot(welfare$income) + xlim(0, 1000) # zoom in for a closer look
# 2. Preprocessing
# Check outliers: many respondents did not disclose their income
summary(welfare$income)
# Recode outliers as missing
welfare$income <- ifelse(welfare$income %in% c(0, 9999), NA, welfare$income)
# Check missing values
table(is.na(welfare$income)) # 12,044 NAs
# Analyze income differences by sex
# 1. Build a table of mean income by sex
sex_income <- welfare %>%
filter(!is.na(income)) %>%
group_by(sex) %>%
summarise(mean_income = mean(income))
sex_income
ggplot(data = sex_income, aes(x = sex, y = mean_income)) + geom_col()
# 09-3. Relationship between age and income
# - "At what age is income highest?" (PDF p. 17)
class(welfare$birth)
summary(welfare$birth)
qplot(welfare$birth)
# 2. Preprocessing
# Check outliers
summary(welfare$birth)
# Check missing values
table(is.na(welfare$birth))
# Recode outliers as missing
welfare$birth <- ifelse(welfare$birth == 9999, NA, welfare$birth)
table(is.na(welfare$birth))
# 3. Create a derived variable - age
welfare$age <- 2015 - welfare$birth + 1
summary(welfare$age)
qplot(welfare$age)
# 1. Build a table of mean income by age
age_income <- welfare %>% filter(!is.na(income)) %>% group_by(age) %>% summarise(mean_income = mean(income))
head(age_income)
ggplot(data = age_income, aes(x = age, y = mean_income)) + geom_line()
# Examine and preprocess the age-group variable
# Create a derived variable - age group
welfare <- welfare %>% mutate(ageg = ifelse(age < 30, "young", ifelse(age <= 59, "middle", "old")))
table(welfare$ageg)
qplot(welfare$ageg)
# Analyze income differences by age group
# 1. Build a table of mean income by age group
ageg_income <- welfare %>% filter(!is.na(income)) %>% group_by(ageg) %>% summarise(mean_income = mean(income))
ageg_income
ggplot(data = ageg_income, aes(x = ageg, y = mean_income)) + geom_col()
# Order the bars by age: young, middle, old
ggplot(data = ageg_income, aes(x = ageg, y = mean_income)) + geom_col() + scale_x_discrete(limits = c("young", "middle", "old"))
# 9-5
# Analyze income differences by age group and sex
# 1. Build a table of mean income by age group and sex
sex_income <- welfare %>% filter(!is.na(income)) %>% group_by(ageg, sex) %>% summarise(mean_income = mean(income))
sex_income
ggplot(data = sex_income, aes(x = ageg, y = mean_income, fill = sex)) +
geom_col() + scale_x_discrete(limits = c("young", "middle", "old"))
# Separate the bars by sex
ggplot(data = sex_income, aes(x = ageg, y = mean_income, fill = sex)) + geom_col(position = "dodge") + scale_x_discrete(limits = c("young", "middle", "old"))
# Build a table of mean income by sex and age
sex_age <- welfare %>%
filter(!is.na(income)) %>%
group_by(age, sex) %>%
summarise(mean_income = mean(income))
head(sex_age)
ggplot(data = sex_age, aes(x = age, y = mean_income, col = sex)) + geom_line()
# 9-6. Income differences by job
# "Which job earns the highest monthly income?"
class(welfare$code_job)
table(welfare$code_job)
library(readxl)
list_job <- read_excel("Koweps_Codebook.xlsx", col_names = T, sheet = 2)
head(list_job)
dim(list_job)
welfare <- left_join(welfare, list_job, by = "code_job")
welfare %>%
filter(!is.na(code_job)) %>%
select(code_job, job) %>%
head(10)
# Analyze income differences by job
# 1. Build a table of mean income by job
job_income <- welfare %>%
filter(!is.na(job) & !is.na(income)) %>%
group_by(job) %>%
summarise(mean_income = mean(income))
head(job_income)
top10 <- job_income %>%
arrange(desc(mean_income)) %>%
head(10)
top10
ggplot(data = top10, aes(x = reorder(job, mean_income), y = mean_income)) +
geom_col() +
coord_flip()
bottom10 <- job_income %>%
arrange(mean_income) %>%
head(10)
bottom10
ggplot(data = bottom10, aes(x = reorder(job, -mean_income),
y = mean_income)) +
geom_col() +
coord_flip() +
ylim(0, 850)
# 9-7. Job frequency by sex
# "Which jobs are most common for each sex?"
# Extract the top 10 jobs among men
job_male <- welfare %>%
filter(!is.na(job) & sex == "male") %>%
group_by(job) %>%
summarise(n = n()) %>%
arrange(desc(n)) %>%
head(10)
job_male
# Extract the top 10 jobs among women
job_female <- welfare %>%
filter(!is.na(job) & sex == "female") %>%
group_by(job) %>%
summarise(n = n()) %>%
arrange(desc(n)) %>%
head(10)
job_female
# Top 10 jobs among men
ggplot(data = job_male, aes(x = reorder(job, n), y = n)) +
geom_col() +
coord_flip()
# Top 10 jobs among women
ggplot(data = job_female, aes(x = reorder(job, n), y = n)) +
geom_col() +
coord_flip()
# 09-8. Divorce rate by religion
# "Do religious people divorce less?"
class(welfare$religion)
table(welfare$religion)
# Label religious status
welfare$religion <- ifelse(welfare$religion == 1, "yes", "no")
table(welfare$religion)
qplot(welfare$religion)
class(welfare$marriage)
table(welfare$marriage)
# Create a divorce-status variable
welfare$group_marriage <- ifelse(welfare$marriage == 1, "marriage",
ifelse(welfare$marriage == 3, "divorce", NA))
table(welfare$group_marriage)
table(is.na(welfare$group_marriage))
qplot(welfare$group_marriage)
religion_marriage <- welfare %>%
filter(!is.na(group_marriage)) %>%
group_by(religion, group_marriage) %>%
summarise(n = n()) %>%
mutate(tot_group = sum(n)) %>%
mutate(pct = round(n/tot_group*100, 1))
religion_marriage
religion_marriage <- welfare %>%
filter(!is.na(group_marriage)) %>%
count(religion, group_marriage) %>%
group_by(religion) %>%
mutate(pct = round(n/sum(n)*100, 1))
# Extract divorces
divorce <- religion_marriage %>%
filter(group_marriage == "divorce") %>%
select(religion, pct)
divorce
ggplot(data = divorce, aes(x = religion, y = pct)) + geom_col()
ageg_marriage <- welfare %>%
filter(!is.na(group_marriage)) %>%
group_by(ageg, group_marriage) %>%
summarise(n = n()) %>%
mutate(tot_group = sum(n)) %>%
mutate(pct = round(n/tot_group*100, 1))
ageg_marriage
ageg_marriage <- welfare %>%
filter(!is.na(group_marriage)) %>%
count(ageg, group_marriage) %>%
group_by(ageg) %>%
mutate(pct = round(n/sum(n)*100, 1))
# Exclude the young group, extract divorces
ageg_divorce <- ageg_marriage %>%
filter(ageg != "young" & group_marriage == "divorce") %>%
select(ageg, pct)
ageg_divorce
ggplot(data = ageg_divorce, aes(x = ageg, y = pct)) + geom_col()
# Build a table of rates by age group, religion, and marital status
ageg_religion_marriage <- welfare %>%
filter(!is.na(group_marriage) & ageg != "young") %>%
group_by(ageg, religion, group_marriage) %>%
summarise(n = n()) %>%
mutate(tot_group = sum(n)) %>%
mutate(pct = round(n/tot_group*100, 1))
ageg_religion_marriage
ageg_religion_marriage <- welfare %>%
filter(!is.na(group_marriage) & ageg != "young") %>%
count(ageg, religion, group_marriage) %>%
group_by(ageg, religion) %>%
mutate(pct = round(n/sum(n)*100, 1))
df_divorce <- ageg_religion_marriage %>%
filter(group_marriage == "divorce") %>%
select(ageg, religion, pct)
df_divorce
ggplot(data = df_divorce, aes(x = ageg, y = pct, fill = religion )) +
geom_col(position = "dodge")
# 09-9. Age-group composition by region
# "Which regions have the largest elderly population?"
class(welfare$code_region)
table(welfare$code_region)
# Build a region-code lookup table
list_region <- data.frame(code_region = c(1:7),
region = c("서울",
"수도권(인천/경기)",
"부산/경남/울산",
"대구/경북",
"대전/충남",
"강원/충북",
"광주/전남/전북/제주도"))
list_region
welfare <- left_join(welfare, list_region, by = "code_region")
welfare %>%
select(code_region, region) %>%
head
region_ageg <- welfare %>%
group_by(region, ageg) %>%
summarise(n = n()) %>%
mutate(tot_group = sum(n)) %>%
mutate(pct = round(n/tot_group*100, 2))
head(region_ageg)
region_ageg <- welfare %>%
count(region, ageg) %>%
group_by(region) %>%
mutate(pct = round(n/sum(n)*100, 2))
ggplot(data = region_ageg, aes(x = region, y = pct, fill = ageg)) +
geom_col() +
coord_flip()
# Order regions by elderly share (ascending, to order the flipped axis)
list_order_old <- region_ageg %>%
filter(ageg == "old") %>%
arrange(pct)
list_order_old
order <- list_order_old$region
order
ggplot(data = region_ageg, aes(x = region, y = pct, fill = ageg)) +
geom_col() +
coord_flip() +
scale_x_discrete(limits = order)
class(region_ageg$ageg)
levels(region_ageg$ageg)
region_ageg$ageg <- factor(region_ageg$ageg,
                           levels = c("old", "middle", "young"))
class(region_ageg$ageg)
levels(region_ageg$ageg)
ggplot(data = region_ageg, aes(x = region, y = pct, fill = ageg)) +
geom_col() +
coord_flip() +
scale_x_discrete(limits = order)
|
# Copyright (C) 2010-$year$ - Sebastien Bihorel
#
# This file must be used under the terms of the CeCILL.
# This source file is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at
# http://www.cecill.info/licences/Licence_CeCILL_V2-en.txt
#
as.optimbase.functionargs <- function(x=NULL){
x <- unclass(x)
structure(as.list(x),class='optimbase.functionargs')
}
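## Usage sketch (hypothetical input list, for illustration only): the coercer
## strips any existing class and returns a classed plain list.
# args <- as.optimbase.functionargs(list(fun = function(x) sum(x^2)))
# class(args)  # "optimbase.functionargs"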
as.optimbase.outputargs <- function(x=NULL){
x <- unclass(x)
structure(as.list(x),class='optimbase.outputargs')
} | /R/as.R | no_license | sbihorel/optimbase | R | false | false | 582 | r | # Copyright (C) 2010-$year$ - Sebastien Bihorel
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/fingerprint_tools.R
\name{setdiff_fingerprints}
\alias{setdiff_fingerprints}
\title{Calculate the set-difference for two sets of fingerprints elementwise}
\usage{
setdiff_fingerprints(fps_A, fps_B)
}
\arguments{
\item{fps_A}{list of \code{\link[fingerprint]{fingerprint}} objects
(1 x n_samples)}
\item{fps_B}{list of \code{\link[fingerprint]{fingerprint}} objects
(1 x n_samples)}
}
\value{
list of \code{\link[fingerprint]{fingerprint-class}}.
}
\description{
Function to calculate the set-difference between two sets of hashed
fingerprints for each example. This can be useful when circular fingerprints,
i.e. ECFP or FCFP, are considered. For example, to find which features are
unique to the ECFP6 fingerprints, we need to remove those of ECFP4.
}
\examples{
inchi <- "InChI=1S/C9H10O4/c10-7-3-1-6(2-4-7)5-8(11)9(12)13/h1-4,8,10-11H,5H2,(H,12,13)"
fps_ecfp4 <- calculate_fingerprints_from_inchi(
inchi, fp_type = "circular", fp_mode = "count", circular.type="ECFP4")
fps_ecfp6 <- calculate_fingerprints_from_inchi(
inchi, fp_type = "circular", fp_mode = "count", circular.type="ECFP6")
fps_diff <- setdiff_fingerprints(fps_ecfp6, fps_ecfp4)
}
| /man/setdiff_fingerprints.Rd | permissive | bachi55/rcdkTools | R | false | true | 1,256 | rd | % Generated by roxygen2: do not edit by hand
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/model.R
\name{predict_proba}
\alias{predict_proba}
\alias{predict_classes}
\title{Generates probability or class probability predictions for the input samples.}
\usage{
predict_proba(object, x, batch_size = NULL, verbose = 0, steps = NULL)
predict_classes(object, x, batch_size = NULL, verbose = 0, steps = NULL)
}
\arguments{
\item{object}{Keras model object}
\item{x}{Input data (vector, matrix, or array)}
\item{batch_size}{Integer. If unspecified, it will default to 32.}
\item{verbose}{Verbosity mode, 0 or 1.}
\item{steps}{Total number of steps (batches of samples) before declaring the
evaluation round finished. The default \code{NULL} is equal to the number of
samples in your dataset divided by the batch size.}
}
\description{
Generates probability or class probability predictions for the input samples.
}
\details{
The input samples are processed batch by batch.
}
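% Usage sketch added for illustration; `model` and `x` are hypothetical
% placeholders (a compiled Keras classifier and a numeric matrix). Not run.
\examples{
\dontrun{
probs <- predict_proba(model, x, batch_size = 32)
classes <- predict_classes(model, x)
}
}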
\seealso{
Other model functions: \code{\link{compile}},
\code{\link{evaluate.keras.engine.training.Model}},
\code{\link{evaluate_generator}},
\code{\link{fit_generator}}, \code{\link{fit}},
\code{\link{get_config}}, \code{\link{get_layer}},
\code{\link{keras_model_sequential}},
\code{\link{keras_model}}, \code{\link{multi_gpu_model}},
\code{\link{pop_layer}},
\code{\link{predict.keras.engine.training.Model}},
\code{\link{predict_generator}},
\code{\link{predict_on_batch}},
\code{\link{summary.keras.engine.training.Model}},
\code{\link{train_on_batch}}
}
| /man/predict_proba.Rd | no_license | lurd4862/keras | R | false | true | 1,545 | rd | % Generated by roxygen2: do not edit by hand
|
shinyServer(function(input, output, session) {
  shiny::observeEvent(c(input$run), {
    output$plot <- shiny::renderPlot({
      ext_status_plot(data()(), stat = input$panel)
    })
  })
  data <- shiny::eventReactive(input$run, {
    shiny::reactivePoll(
      1000, session,
      # This function returns the time that log_file was last modified
      checkFunc = function() {
        f <- file.path(input$path, input$run, sprintf('%s.ext', input$run))
        if (file.exists(f))
          file.info(f)$mtime[1]
        else
          ""
      },
      # This function returns the content of log_file
      valueFunc = function() {
        ext_status_read(file.path(input$path, input$run), est_ind = input$est)
      }
    )
  })
  shiny::observeEvent(input$path, {
    runs <- basename(list.dirs(input$path, recursive = FALSE))
    output$run <- renderUI({
      shiny::selectInput(
        inputId = 'run',
        label = 'Select Run',
        choices = runs,
        selected = tail(runs, 1)
      )
    })
  })
  shiny::observeEvent(input$qt, {
    shiny::stopApp()
  })
})
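## A companion ui.R is assumed but not shown (inputs `path`, `est`, `panel`,
## `qt`; outputs `run`, `plot`). A hypothetical minimal sketch -- the widget
## labels and default values below are guesses, not part of the original app:
# shinyUI(fluidPage(
#   textInput("path", "Run directory", value = "."),
#   uiOutput("run"),
#   numericInput("est", "Estimation index", value = 1),
#   textInput("panel", "Statistic to plot"),
#   actionButton("qt", "Quit"),
#   plotOutput("plot")
# ))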
| /server.R | no_license | yonicd/NMTracker | R | false | false | 1,143 | r | shinyServer(function(input, output, session) {
|
# Copyright (c) 2018 by Einzbern
# Goal 1: compute the market performance, over the year after the earnings
#   forecast, of the "不需发发了" (not required but issued) and "应该发发了"
#   (required and issued) groups
# Goal 2: compute CAR from t = -100 to t = 50 days, with the periodic report at t = 0
#####----Guidance----#####
load("../Output/02-D-Trading-Data.RData")
options(scipen = 99, digits = 3)
library(tidyverse)
library(lubridate)
library(moments)
library(xlsx)
library(stargazer)
#####----Keep only samples whose reporting period is an annual report----#####
Report_Data <- filter(Report_Data, month(`业绩报告期`) == 12)
Report_Data <- Report_Data %>%
mutate(`业绩预告发布情况_4类` = if_else(`业绩预告发布情况_4类` == "不需发没发", "A不需发没发", `业绩预告发布情况_4类`)) %>%
unite(`业绩预告情况_10类`, `业绩预告发布情况_4类`, `业绩预告与业绩报告比较_离散型`, remove = FALSE)
table(Report_Data$`业绩预告情况_10类`)
table(Report_Data$`业绩预告发布情况_4类`)
#####----CAR for Goal 1----#####
estimation_window <- c(-240, -60) # calendar days (likewise below unless noted)
announcement_window <- c(0, 500)
least_valid_days <- 63 # trading days
event <- "业绩预告日期"
group.by <- "业绩预告发布情况_4类"
calculate_ERiRf <- function(Trading) {
  # Fit a Fama-French three-factor model over the estimation window, then
  # predict the expected excess return (ERiRf) over the event window.
  fit <- lm(RiRf ~ RmRf + SMB + HML, data = Trading, subset = Used_for_Estimation)
  event_window <- filter(Trading, !Used_for_Estimation)
  mutate(select(event_window, `交易日期`, RiRf), ERiRf = predict(fit, newdata = event_window))
}
calculate_t <- function(event_date, date_vector) {
  # Index trading days relative to the event date (t = 0 on the event day).
  # If the event date is not a trading day, insert it virtually, number the
  # surrounding days relative to it, and drop the t = 0 slot.
  if (event_date %in% date_vector) {
    return(seq_along(date_vector) - match(event_date, date_vector))
  } else {
    date_vector_aug <- sort(union(date_vector, event_date))
    date_vector_aug <- seq_along(date_vector_aug) - match(event_date, date_vector_aug)
    return(date_vector_aug[date_vector_aug != 0])
  }
}
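## Quick illustration (hypothetical dates, for reference only): an event that
## falls on a trading day gets t = 0; a non-trading-day event is inserted
## virtually and the t = 0 slot is dropped, leaving only the surrounding days.
# calculate_t(as.Date("2015-01-05"),
#             as.Date(c("2015-01-02", "2015-01-05", "2015-01-06")))  # -> -1 0 1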
AR <- Report_Data %>%
select(`证券代码`, `业绩报告期`, `事件日期` = !!event) %>%
na.omit() %>%
group_by(`证券代码`, `业绩报告期`, `事件日期`) %>%
do(tibble(`交易日期` = c(seq(from = .$`事件日期` + days(!!estimation_window[1]), to = .$`事件日期` + days(!!estimation_window[2]), by = "day"),
seq(from = .$`事件日期` + days(!!announcement_window[1]), to = .$`事件日期` + days(!!announcement_window[2]), by = "day")),
Used_for_Estimation = c(rep(TRUE, diff(!!estimation_window) + 1), rep(FALSE, diff(!!announcement_window) + 1)))) %>%
inner_join(select(Trading_Data, -`成交额`, -Ri, -Rm, -Rf, -`涨跌停状态`)) %>%
filter(sum(Used_for_Estimation) >= !!least_valid_days) %>%
do(calculate_ERiRf(.)) %>%
arrange(`证券代码`, `业绩报告期`, `事件日期`, `交易日期`) %>%
group_by(`证券代码`, `业绩报告期`) %>%
mutate(t = calculate_t(`事件日期`[1], `交易日期`), AR = RiRf - ERiRf)
## The two ways of computing group-mean CARs below should in principle give
## identical results, but they do not. I have not yet found the cause, so the
## safest method is used; I suspect the first approach fails because of the
## ordering of t.
# AR_grouped <- AR %>%
# left_join(select(Report_Data, `证券代码`, `业绩报告期`, `分组` = !!group.by)) %>%
# group_by(`证券代码`, `业绩报告期`) %>%
# mutate(group.num = n()) %>%
# filter(group.num > 260) %>%
# mutate(CAR = cumsum(AR)) %>%
# group_by(`分组`, t) %>%
# summarise(CAR.mean = mean(CAR), CAR.mean.trim = mean(CAR, trim = 0.01))
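## A toy sketch (hypothetical numbers) of why the two orderings can disagree
## on an unbalanced panel: mean-of-CAR at each t averages over a changing set
## of firms, while CAR-of-mean-AR cumulates means taken over different sets.
# toy <- tibble(firm = c("A", "A", "B"), t = c(1, 2, 1), AR = c(0.01, 0.03, 0.05))
# toy %>% group_by(firm) %>% mutate(CAR = cumsum(AR)) %>%
#   group_by(t) %>% summarise(mean_CAR = mean(CAR))   # t = 2: mean CAR = 0.04
# toy %>% group_by(t) %>% summarise(AR.mean = mean(AR)) %>%
#   mutate(CAR = cumsum(AR.mean))                     # t = 2: CAR = 0.06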
##
AR_grouped <- AR %>%
left_join(select(Report_Data, `证券代码`, `业绩报告期`, `分组` = !!group.by))
n1.1 <- nrow(AR_grouped)
AR_grouped <- AR_grouped %>%
group_by(`证券代码`, `业绩报告期`) %>%
mutate(group.num = n()) %>%
filter(group.num > 260)
n1.2 <- nrow(AR_grouped)
AR_grouped <- AR_grouped %>%
group_by(`分组`, t) %>%
summarise(AR.mean = mean(AR), AR.mean.trim = mean(AR, trim = 0.01)) %>%
mutate(CAR.mean = cumsum(AR.mean), CAR.mean.trim = cumsum(AR.mean.trim))
## Plot the CARs
## png(filename = "../Output/事件点是业绩预告时间.Mean.png")
ggplot(AR_grouped, aes(x = t, y = CAR.mean, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩预告时间.Mean") +
xlim(0, 260)
## dev.off()
##
## png(filename = "../Output/事件点是业绩预告时间.Mean.Trim.png")
ggplot(AR_grouped, aes(x = t, y = CAR.mean.trim, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩预告时间.Mean.Trim")+
xlim(0, 260)
## dev.off()
#####----Goal 2: do the days on which "不需发发了" declines line up with the periodic-report dates?----#####
## First, plot the distribution of the gap between the periodic-report date and the forecast date
Reason <- Report_Data %>%
filter(`业绩预告发布情况_4类` == "不需发发了" | `业绩预告发布情况_4类` == "应该发发了") %>%
mutate(`时间差` = `业绩报告_第一时间` - `业绩预告日期`,
`时间差1` = `业绩报告_第一时间` - `业绩预告日期_最后一次`)
## png(filename = "../Output/定期报告与业绩预告时间差.png")
ggplot(Reason, aes(x = `时间差`, color = `业绩预告发布情况_4类`)) +
geom_freqpoly(size = 1)
## dev.off()
#####----CAR around the periodic report (t = 0), from t = -300 to t = 100 days----#####
estimation_window.regular <- c(-440, -260) # calendar days (likewise below unless noted)
announcement_window.regular <- c(-200, 100)
least_valid_days.regular <- 63 # trading days
event.regular <- "业绩报告_第一时间"
AR.regular <- Report_Data %>%
select(`证券代码`, `业绩报告期`, `事件日期` = !!event.regular) %>%
na.omit() %>%
group_by(`证券代码`, `业绩报告期`, `事件日期`) %>%
do(tibble(`交易日期` = c(seq(from = .$`事件日期` + days(!!estimation_window.regular[1]), to = .$`事件日期` + days(!!estimation_window.regular[2]), by = "day"),
seq(from = .$`事件日期` + days(!!announcement_window.regular[1]), to = .$`事件日期` + days(!!announcement_window.regular[2]), by = "day")),
Used_for_Estimation = c(rep(TRUE, diff(!!estimation_window.regular) + 1), rep(FALSE, diff(!!announcement_window.regular) + 1)))) %>%
inner_join(select(Trading_Data, -`成交额`, -Ri, -Rm, -Rf, -`涨跌停状态`)) %>%
filter(sum(Used_for_Estimation) >= !!least_valid_days.regular) %>%
do(calculate_ERiRf(.)) %>%
arrange(`证券代码`, `业绩报告期`, `事件日期`, `交易日期`) %>%
group_by(`证券代码`, `业绩报告期`) %>%
mutate(t = calculate_t(`事件日期`[1], `交易日期`), AR = RiRf - ERiRf)
##
AR_grouped.regular <- AR.regular %>%
left_join(select(Report_Data, `证券代码`, `业绩报告期`, `分组` = !!group.by))
n2.1 <- nrow(AR_grouped.regular)
AR_grouped.regular <- AR_grouped.regular %>%
group_by(`证券代码`, `业绩报告期`) %>%
mutate(group.num = n(), group.num1 = sum(t < 0), group.num2 = sum(t > 0)) %>%
filter((group.num1 > 99) & (group.num2 > 50))
n2.2 <- nrow(AR_grouped.regular)
AR_grouped.regular <- AR_grouped.regular %>%
group_by(`分组`, t) %>%
summarise(AR.mean = mean(AR), AR.mean.trim = mean(AR, trim = 0.01)) %>%
mutate(CAR.mean = cumsum(AR.mean), CAR.mean.trim = cumsum(AR.mean.trim))
## Plot the CARs
## png(filename = "../Output/事件点是业绩报告_第一时间.Mean.png")
ggplot(AR_grouped.regular, aes(x = t, y = CAR.mean, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩报告_第一时间.Mean") +
xlim(-100, 50)
## dev.off()
##
## png(filename = "../Output/事件点是业绩报告_第一时间.Mean.Trim.png")
ggplot(AR_grouped.regular, aes(x = t, y = CAR.mean.trim, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩报告_第一时间.Mean.Trim")+
xlim(-100, 50)
## dev.off()
#####----Goal 3: performance of the subcategories within "不需发发了" and "应该发发了"----#####
group.by.sub <- "业绩预告情况_10类"
##
AR_grouped.sub <- AR %>%
left_join(select(Report_Data, `证券代码`, `业绩报告期`, `分组` = !!group.by.sub))
n3.1 <- nrow(AR_grouped.sub)
AR_grouped.sub <- AR_grouped.sub %>%
group_by(`证券代码`, `业绩报告期`) %>%
mutate(group.num = n()) %>%
filter(group.num > 260)
n3.2 <- nrow(AR_grouped.sub)
AR_grouped.sub <- AR_grouped.sub %>%
group_by(`分组`, t) %>%
summarise(AR.mean = mean(AR), AR.mean.trim = mean(AR, trim = 0.01)) %>%
mutate(CAR.mean = cumsum(AR.mean), CAR.mean.trim = cumsum(AR.mean.trim))
AR_grouped.sub <- rbind(AR_grouped.sub, AR_grouped)
## Plot the CARs
## png(filename = "../Output/事件点是业绩预告时间.Mean.png")
AR_grouped.sub1 <- AR_grouped.sub %>%
filter(`分组` %in% c("不需发发了",
"应该发发了",
"不需发发了_NA",
"不需发发了_业绩预告低于业绩报告",
"不需发发了_业绩预告符合业绩报告",
"不需发发了_业绩预告高于业绩报告"))
ggplot(AR_grouped.sub1, aes(x = t, y = CAR.mean, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩预告时间.Mean") +
xlim(0, 260)
## dev.off()
##
## png(filename = "../Output/事件点是业绩预告时间.Mean.Trim.png")
AR_grouped.sub2 <- AR_grouped.sub %>%
filter(`分组` %in% c("不需发发了",
"应该发发了",
"应该发发了_NA",
"应该发发了_业绩预告低于业绩报告",
"应该发发了_业绩预告符合业绩报告",
"应该发发了_业绩预告高于业绩报告"))
ggplot(AR_grouped.sub2, aes(x = t, y = CAR.mean, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩预告时间.Mean") +
xlim(0, 260)
## dev.off()
AR_grouped.sub3 <- AR_grouped.sub %>%
filter(`分组` %in% c("不需发发了_NA",
"不需发发了_业绩预告低于业绩报告",
"不需发发了_业绩预告符合业绩报告",
"不需发发了_业绩预告高于业绩报告"))
ggplot(AR_grouped.sub3, aes(x = t, y = CAR.mean, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩预告时间.Mean") +
xlim(0, 260)
##
AR_grouped.sub4 <- AR_grouped.sub %>%
filter(`分组` %in% c("应该发发了_NA",
"应该发发了_业绩预告低于业绩报告",
"应该发发了_业绩预告符合业绩报告",
"应该发发了_业绩预告高于业绩报告"))
ggplot(AR_grouped.sub4, aes(x = t, y = CAR.mean, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩预告时间.Mean") +
xlim(0, 260)
#####----Ending----#####
save.image("../Output/04-D-CAR-Data.RData")
| /04-C-CAR.R | no_license | ssh352/Earnings-Guidance | R | false | false | 11,444 | r | # Copyright (c) 2018 by Einzbern
# 目的1:计算 "不需要发发了” 与 "需要发发了" 两类从发业绩预告开始之后一年的市场表现
# 目的2:计算以定期报告为t=0,从t=-100天开始算到t=50天的CAR
#####----Guidance----#####
load("../Output/02-D-Trading-Data.RData")
options(scipen = 99, digits = 3)
library(tidyverse)
library(lubridate)
library(moments)
library(xlsx)
library(stargazer)
#####----筛选业绩报告期为年报的样本----#####
Report_Data <- filter(Report_Data, month(`业绩报告期`) == 12)
Report_Data <- Report_Data %>%
mutate(`业绩预告发布情况_4类` = if_else(`业绩预告发布情况_4类` == "不需发没发", "A不需发没发", `业绩预告发布情况_4类`)) %>%
unite(`业绩预告情况_10类`, `业绩预告发布情况_4类`, `业绩预告与业绩报告比较_离散型`, remove = FALSE)
table(Report_Data$`业绩预告情况_10类`)
table(Report_Data$`业绩预告发布情况_4类`)
#####----目的1的CAR----#####
estimation_window <- c(-240, -60) # 日历日,未注明同
announcement_window <- c(0, 500)
least_valid_days <- 63 # 交易日
event <- "业绩预告日期"
group.by <- "业绩预告发布情况_4类"
calculate_ERiRf <- function(Trading) {
lm(RiRf ~ RmRf + SMB + HML, data = Trading, subset = Used_for_Estimation) %>%
predict(newdata = filter(Trading, !Used_for_Estimation)) %>%
mutate(Trading %>% filter(!Used_for_Estimation) %>% select(`交易日期`, RiRf), ERiRf = .)
}
calculate_t <- function(event_date, date_vector) {
if (event_date %in% date_vector) {
return(seq_along(date_vector) - match(event_date, date_vector))
} else {
date_vector_aug <- sort(union(date_vector, event_date))
date_vector_aug <- seq_along(date_vector_aug) - match(event_date, date_vector_aug)
return(date_vector_aug[date_vector_aug != 0])
}
}
AR <- Report_Data %>%
select(`证券代码`, `业绩报告期`, `事件日期` = !!event) %>%
na.omit() %>%
group_by(`证券代码`, `业绩报告期`, `事件日期`) %>%
do(tibble(`交易日期` = c(seq(from = .$`事件日期` + days(!!estimation_window[1]), to = .$`事件日期` + days(!!estimation_window[2]), by = "day"),
seq(from = .$`事件日期` + days(!!announcement_window[1]), to = .$`事件日期` + days(!!announcement_window[2]), by = "day")),
Used_for_Estimation = c(rep(TRUE, diff(!!estimation_window) + 1), rep(FALSE, diff(!!announcement_window) + 1)))) %>%
inner_join(select(Trading_Data, -`成交额`, -Ri, -Rm, -Rf, -`涨跌停状态`)) %>%
filter(sum(Used_for_Estimation) >= !!least_valid_days) %>%
do(calculate_ERiRf(.)) %>%
arrange(`证券代码`, `业绩报告期`, `事件日期`, `交易日期`) %>%
group_by(`证券代码`, `业绩报告期`) %>%
mutate(t = calculate_t(`事件日期`[1], `交易日期`), AR = RiRf - ERiRf)
## 下面两种对各组平均CAR的计算方法按照道理讲应该得到一样的结果,但是不一样,
## 我还没查出原因在哪里,所以选取最保险的那种算法。因为我总觉得第一种算法是
## 因为t的排序导致的。
# AR_grouped <- AR %>%
# left_join(select(Report_Data, `证券代码`, `业绩报告期`, `分组` = !!group.by)) %>%
# group_by(`证券代码`, `业绩报告期`) %>%
# mutate(group.num = n()) %>%
# filter(group.num > 260) %>%
# mutate(CAR = cumsum(AR)) %>%
# group_by(`分组`, t) %>%
# summarise(CAR.mean = mean(CAR), CAR.mean.trim = mean(CAR, trim = 0.01))
##
AR_grouped <- AR %>%
left_join(select(Report_Data, `证券代码`, `业绩报告期`, `分组` = !!group.by))
n1.1 <- nrow(AR_grouped)
AR_grouped <- AR_grouped %>%
group_by(`证券代码`, `业绩报告期`) %>%
mutate(group.num = n()) %>%
filter(group.num > 260)
n1.2 <- nrow(AR_grouped)
AR_grouped <- AR_grouped %>%
group_by(`分组`, t) %>%
summarise(AR.mean = mean(AR), AR.mean.trim = mean(AR, trim = 0.01)) %>%
mutate(CAR.mean = cumsum(AR.mean), CAR.mean.trim = cumsum(AR.mean.trim))
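## The discrepancy noted in the comment above is consistent with cumsum() being
## order-sensitive: per-row CAR values depend on the row order within each group,
## so averaging them before versus after establishing the order of t can disagree.
## A minimal sketch with invented numbers:

```r
## Hypothetical illustration: same AR values, different row order.
ar <- c(0.01, -0.02, 0.03)
cumsum(ar)              #  0.01 -0.01  0.02
cumsum(ar[c(2, 1, 3)])  # -0.02 -0.01  0.02
```

## The endpoints agree but the intermediate CAR path differs, which is why the
## code above averages AR per t first and only then applies cumsum().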
## Plot the CAR curves
## png(filename = "../Output/事件点是业绩预告时间.Mean.png")
ggplot(AR_grouped, aes(x = t, y = CAR.mean, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩预告时间.Mean") +
xlim(0, 260)
## dev.off()
##
## png(filename = "../Output/事件点是业绩预告时间.Mean.Trim.png")
ggplot(AR_grouped, aes(x = t, y = CAR.mean.trim, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩预告时间.Mean.Trim")+
xlim(0, 260)
## dev.off()
#####---- Goal 2: do the days on which "不需发发了" declines correspond to the regular-report dates? ----#####
## First, plot the distribution of the time gap between the regular report date and the preannouncement date
Reason <- Report_Data %>%
filter(`业绩预告发布情况_4类` == "不需发发了" | `业绩预告发布情况_4类` == "应该发发了") %>%
mutate(`时间差` = `业绩报告_第一时间` - `业绩预告日期`,
`时间差1` = `业绩报告_第一时间` - `业绩预告日期_最后一次`)
## png(filename = "../Output/定期报告与业绩预告时间差.png")
ggplot(Reason, aes(x = `时间差`, color = `业绩预告发布情况_4类`)) +
geom_freqpoly(size = 1)
## dev.off()
#####---- Compute CAR with the regular report as t = 0, from t = -300 days through t = 100 days ----#####
estimation_window.regular <- c(-440, -260) # calendar days; same below unless noted
announcement_window.regular <- c(-200, 100)
least_valid_days.regular <- 63 # trading days
event.regular <- "业绩报告_第一时间"
AR.regular <- Report_Data %>%
select(`证券代码`, `业绩报告期`, `事件日期` = !!event.regular) %>%
na.omit() %>%
group_by(`证券代码`, `业绩报告期`, `事件日期`) %>%
do(tibble(`交易日期` = c(seq(from = .$`事件日期` + days(!!estimation_window.regular[1]), to = .$`事件日期` + days(!!estimation_window.regular[2]), by = "day"),
seq(from = .$`事件日期` + days(!!announcement_window.regular[1]), to = .$`事件日期` + days(!!announcement_window.regular[2]), by = "day")),
Used_for_Estimation = c(rep(TRUE, diff(!!estimation_window.regular) + 1), rep(FALSE, diff(!!announcement_window.regular) + 1)))) %>%
inner_join(select(Trading_Data, -`成交额`, -Ri, -Rm, -Rf, -`涨跌停状态`)) %>%
filter(sum(Used_for_Estimation) >= !!least_valid_days.regular) %>%
do(calculate_ERiRf(.)) %>%
arrange(`证券代码`, `业绩报告期`, `事件日期`, `交易日期`) %>%
group_by(`证券代码`, `业绩报告期`) %>%
mutate(t = calculate_t(`事件日期`[1], `交易日期`), AR = RiRf - ERiRf)
##
AR_grouped.regular <- AR.regular %>%
left_join(select(Report_Data, `证券代码`, `业绩报告期`, `分组` = !!group.by))
n2.1 <- nrow(AR_grouped.regular)
AR_grouped.regular <- AR_grouped.regular %>%
group_by(`证券代码`, `业绩报告期`) %>%
mutate(group.num = n(), group.num1 = sum(t < 0), group.num2 = sum(t > 0)) %>%
filter((group.num1 > 99) & (group.num2 > 50))
n2.2 <- nrow(AR_grouped.regular)
AR_grouped.regular <- AR_grouped.regular %>%
group_by(`分组`, t) %>%
summarise(AR.mean = mean(AR), AR.mean.trim = mean(AR, trim = 0.01)) %>%
mutate(CAR.mean = cumsum(AR.mean), CAR.mean.trim = cumsum(AR.mean.trim))
## Plot the CAR curves
## png(filename = "../Output/事件点是业绩报告_第一时间.Mean.png")
ggplot(AR_grouped.regular, aes(x = t, y = CAR.mean, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩报告_第一时间.Mean") +
xlim(-100, 50)
## dev.off()
##
## png(filename = "../Output/事件点是业绩报告_第一时间.Mean.Trim.png")
ggplot(AR_grouped.regular, aes(x = t, y = CAR.mean.trim, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩报告_第一时间.Mean.Trim")+
xlim(-100, 50)
## dev.off()
#####---- Goal 3: performance of the subcategories within "不需发发了" and "应该发发了" ----#####
group.by.sub <- "业绩预告情况_10类"
##
AR_grouped.sub <- AR %>%
left_join(select(Report_Data, `证券代码`, `业绩报告期`, `分组` = !!group.by.sub))
n3.1 <- nrow(AR_grouped.sub)
AR_grouped.sub <- AR_grouped.sub %>%
group_by(`证券代码`, `业绩报告期`) %>%
mutate(group.num = n()) %>%
filter(group.num > 260)
n3.2 <- nrow(AR_grouped.sub)
AR_grouped.sub <- AR_grouped.sub %>%
group_by(`分组`, t) %>%
summarise(AR.mean = mean(AR), AR.mean.trim = mean(AR, trim = 0.01)) %>%
mutate(CAR.mean = cumsum(AR.mean), CAR.mean.trim = cumsum(AR.mean.trim))
AR_grouped.sub <- rbind(AR_grouped.sub, AR_grouped)
## Plot the CAR curves
## png(filename = "../Output/事件点是业绩预告时间.Mean.png")
AR_grouped.sub1 <- AR_grouped.sub %>%
filter(`分组` %in% c("不需发发了",
"应该发发了",
"不需发发了_NA",
"不需发发了_业绩预告低于业绩报告",
"不需发发了_业绩预告符合业绩报告",
"不需发发了_业绩预告高于业绩报告"))
ggplot(AR_grouped.sub1, aes(x = t, y = CAR.mean, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩预告时间.Mean") +
xlim(0, 260)
## dev.off()
##
## png(filename = "../Output/事件点是业绩预告时间.Mean.Trim.png")
AR_grouped.sub2 <- AR_grouped.sub %>%
filter(`分组` %in% c("不需发发了",
"应该发发了",
"应该发发了_NA",
"应该发发了_业绩预告低于业绩报告",
"应该发发了_业绩预告符合业绩报告",
"应该发发了_业绩预告高于业绩报告"))
ggplot(AR_grouped.sub2, aes(x = t, y = CAR.mean, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩预告时间.Mean") +
xlim(0, 260)
## dev.off()
AR_grouped.sub3 <- AR_grouped.sub %>%
filter(`分组` %in% c("不需发发了_NA",
"不需发发了_业绩预告低于业绩报告",
"不需发发了_业绩预告符合业绩报告",
"不需发发了_业绩预告高于业绩报告"))
ggplot(AR_grouped.sub3, aes(x = t, y = CAR.mean, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩预告时间.Mean") +
xlim(0, 260)
##
AR_grouped.sub4 <- AR_grouped.sub %>%
filter(`分组` %in% c("应该发发了_NA",
"应该发发了_业绩预告低于业绩报告",
"应该发发了_业绩预告符合业绩报告",
"应该发发了_业绩预告高于业绩报告"))
ggplot(AR_grouped.sub4, aes(x = t, y = CAR.mean, group = `分组`, color = `分组`)) +
geom_line(size = 1) +
theme_grey(base_family = "STKaiti") +
labs(title = "事件点是业绩预告时间.Mean") +
xlim(0, 260)
#####----Ending----#####
save.image("../Output/04-D-CAR-Data.RData")
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xgb.model.dt.tree.R
\name{xgb.model.dt.tree}
\alias{xgb.model.dt.tree}
\title{Parse a boosted tree model text dump}
\usage{
xgb.model.dt.tree(feature_names = NULL, model = NULL, text = NULL,
n_first_tree = NULL)
}
\arguments{
\item{feature_names}{character vector of feature names. If the model already
contains feature names, this argument should be \code{NULL} (default value)}
\item{model}{object of class \code{xgb.Booster}}
\item{text}{\code{character} vector previously generated by the \code{xgb.dump}
function (where parameter \code{with_stats = TRUE} should have been set).}
\item{n_first_tree}{limit the parsing to the \code{n} first trees.
If set to \code{NULL}, all trees of the model are parsed.}
}
\value{
A \code{data.table} with detailed information about model trees' nodes.
The columns of the \code{data.table} are:
\itemize{
\item \code{Tree}: ID of a tree in a model
\item \code{Node}: ID of a node in a tree
\item \code{ID}: unique identifier of a node in a model
\item \code{Feature}: for a branch node, it's a feature id or name (when available);
for a leaf node, it simply labels it as \code{'Leaf'}
\item \code{Split}: location of the split for a branch node (split condition is always "less than")
\item \code{Yes}: ID of the next node when the split condition is met
\item \code{No}: ID of the next node when the split condition is not met
\item \code{Missing}: ID of the next node when branch value is missing
\item \code{Quality}: either the split gain (change in loss) or the leaf value
\item \code{Cover}: metric related to the number of observations either seen by a split
or collected by a leaf during training.
}
}
\description{
Parse a boosted tree model text dump into a \code{data.table} structure.
}
\examples{
# Basic use:
data(agaricus.train, package='xgboost')
bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 2,
eta = 1, nthread = 2, nrounds = 2,objective = "binary:logistic")
(dt <- xgb.model.dt.tree(colnames(agaricus.train$data), bst))
# How to match feature names of splits that are following a current 'Yes' branch:
merge(dt, dt[, .(ID, Y.Feature=Feature)], by.x='Yes', by.y='ID', all.x=TRUE)[order(Tree,Node)]
}
| /R-package/man/xgb.model.dt.tree.Rd | permissive | zoucuomiao/xgboost | R | false | true | 2,352 | rd |
|
if(!exists('girix.input')) {
source("/home/ubuntu/i2b2/GIRIXScripts/lib/girix.r")
setwd("ReportGenerator")
girix.input['params'] <- '{}'
girix.input['Patient set'] <- '-1'
girix.input["requestDiagram"] <- 'all'
}
source("ReportGen/main.r", local=T)
input <- girix.input
girix.output[["Report"]] <- generateOutput()
rm(girix.input, girix.concept.names, girix.events, girix.modifiers, girix.observations, girix.observers, girix.patients); gc()
| /GIRIXScripts/ReportGenerator/mainscript.r | no_license | bp2014n2/i2b2 | R | false | false | 456 | r |
|
library("audio")
piano_frequency <- function(n){
return((2^(1/12))^(n-49)*440)
}
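## Quick sanity check of the twelfth-root-of-two formula (assuming the standard
## 88-key numbering, in which key 49 is A4 at concert pitch):

```r
piano_frequency(49)                        # 440 (A4)
round(piano_frequency(40), 2)              # 261.63 (middle C)
piano_frequency(61) / piano_frequency(49)  # 2: twelve semitones = one octave
```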
piano_keys = c(1:88)
frequencies <- piano_frequency(piano_keys)
durations <- rep(1/12,88)
piano_frame <- data.frame(freq = frequencies,duration = durations)
tempo <- 120
sample_rate <- 44100
make_sine <- function(freq, duration) {
wave <- sin(seq(0, duration / tempo * 60, 1 / sample_rate) *
freq * 2 * pi)
fade <- seq(0, 1, 50 / sample_rate)
wave * c(fade, rep(1, length(wave) - 2 * length(fade)), rev(fade))
}
all_keys <- c()
for(i in 1:nrow(piano_frame)){
curr_vec <- make_sine(piano_frame$freq[i],piano_frame$duration[i])
all_keys <- c(all_keys,curr_vec)
}
play(all_keys)
first_fourier_series <- function(x){
  # Partial Fourier-series terms for a square wave; only the first term is
  # returned, the remaining addends are computed but currently unused.
  first_addend = 1.0000001*sin(x/25)
  second_addend = 4*sin(3*x)/(3*pi)
  third_addend = 4*sin(5*x)/(5*pi)
  fourth_addend = 4*sin(7*x)/(7*pi)
  return(first_addend)
}
play(first_fourier_series(1:100000))
| /Data Science Internship/music.R | no_license | AaronKruchten/PSC-EDA | R | false | false | 937 | r |
|
#' hm
#'
#' This function draws a heatmap of HiC data so that we can visually compare the imputation results.
#'
#' @param datvec A vector of upper triangular mamtrix.
#' @param n Number of bins.
#' @param title The title of the heatmap.
#'
#' @return Heatmap of the matrix.
#' @export
#'
#' @examples
#' data("K562_1_true")
#' hm(K562_1_true[,1], 61, title="Expected")
hm <- function(datvec, n, title="Heatmap") {
library(plsgenomics)
normmatrix <- function(matr) {
maxvalue <- max(matr[upper.tri(matr)])
minvalue <- min(matr[upper.tri(matr)])
normmatr <- (matr-minvalue)/(maxvalue-minvalue)
return(normmatr)
}
mat <- matrix(0, n, n)
mat[upper.tri(mat, diag=FALSE)] <- datvec
return(matrix.heatmap(normmatrix(mat), main=title))
}
| /R/hm.R | no_license | Queen0044/scHiCBayes | R | false | false | 765 | r |
|
# read data from txt file
data <- read.table("household_power_consumption.txt"
, header = TRUE, sep=";"
, stringsAsFactors = FALSE
, na.strings = "?")
# select only required subdata
sub_data <- data[data$Date == "1/2/2007"
                 | data$Date == "2/2/2007", ]
# add DateTime column and convert to POSIXct
sub_data <- within(sub_data, DateTime <- paste(Date, Time, sep=" "))
sub_data$DateTime = as.POSIXct(strptime(sub_data$DateTime
, format = "%d/%m/%Y %H:%M:%S"))
# open png, create graph and close
png("plot3.png")
with(sub_data, plot(DateTime, Sub_metering_1, type="n", ylab="Energy sub metering", xlab=""))
with(sub_data, lines(DateTime, Sub_metering_1))
with(sub_data, lines(DateTime, Sub_metering_2, col = "red"))
with(sub_data, lines(DateTime, Sub_metering_3, col = "blue"))
legend(x="topright"
, legend = c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3")
, pch = NA, lty = 1 , col = c("black", "red", "blue"))
dev.off() | /plot3.R | no_license | DaanHoevers/ExData_Plotting1 | R | false | false | 1,055 | r |
|
library(concordance)
### Name: concordance
### Title: concord
### Aliases: concord
### Keywords: concordance
### ** Examples
## data(concord_data, package="concordance")
codes.of.origin <- concord_data$isic2 # Vector of values to be converted
concord(codes.of.origin, "isic2", "sitc4")
concord("00121", "sitc3", "hs")
## % Returns vector 10410
concord(c("00121","00151"), "sitc3", "hs")
## % Returns vector 10410 10110 10111 10119
## % This list currently incomplete: WITS does not seem to have complete n-to-n data
concord("0012", "sitc3", "sitc3")
## % Returns vector 121 122 124
## % i.e. 00121, 00122, 00124 are all the SITC3 codes in the database starting with 0012
concord("0012", "sitc3", "hs")
## % Returns vector 10410 10420 20500
## % These are all the HS codes corresponding to any of the SITC3 codes starting with 0012
concord(c("030310", "030322", "030329"), "hs0", "isic2")
## % Returns vector 1310
## % These HS1988/92 codes correspond to Frozen Pacific salmon, Frozen Atlantic and Danube salmon,
## % and Frozen salmonidae (excl. Pacific, Atlantic, Danube salmon) resp.
## % The returned ISIC2 code corresponds to Ocean and coastal fishing
concord(c("30310", "30322", "30329"), "hs0", "isic3")
## % Returns vector 1512
## % The returned ISIC3 code corresponds to Processing and preserving of fish and fish products
concord("1512", "isic3", "isic2")
## % Returns vector 1301 3114
## % ISIC code 3114 corresponds to Canning, preserving and processing of fish, crustaces
## % and similar foods
concord('0111','ISIC3','BEC')
## % Returns [1] "111" "021" "112"
| /data/genthat_extracted_code/concordance/examples/concord.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 1,591 | r |
|
library(gdata)
### Name: elem
### Title: Display Information about Elements in a Given Object
### Aliases: elem
### Keywords: attribute classes list print utilities
### ** Examples
## Not run:
##D data(infert)
##D elem(infert)
##D model <- glm(case~spontaneous+induced, family=binomial, data=infert)
##D elem(model, dim=TRUE)
##D elem(model$family)
## End(Not run)
| /data/genthat_extracted_code/gdata/examples/elem.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 373 | r |
|
###
# training skeleton
###
library(caret)
# add any model specific package library commands
# set working directory
WORK.DIR <- "./src/skeleton_model" # modify to specify directory to contain model artifacts
# Common Functions and Global variables
source("./src/CommonFunctions.R")
source(paste0(WORK.DIR,"/ModelCommonFunctions.R"))
# set caret training parameters
CARET.TRAIN.PARMS <- list(method="MODEL.METHOD") # Replace MODEL.METHOD with appropriate caret model
CARET.TUNE.GRID <- NULL # NULL provides model specific default tuning parameters
# user specified tuning parameters
#CARET.TUNE.GRID <- expand.grid(nIter=c(100))
# model specific training parameter
CARET.TRAIN.CTRL <- trainControl(method="repeatedcv",
number=5,
repeats=1,
verboseIter=TRUE,
classProbs=TRUE,
summaryFunction=caretLogLossSummary)
CARET.TRAIN.OTHER.PARMS <- list(trControl=CARET.TRAIN.CTRL,
maximize=FALSE,
tuneGrid=CARET.TUNE.GRID,
metric="LogLoss")
MODEL.SPECIFIC.PARMS <- NULL # Other model specific parameters
MODEL.COMMENT <- ""
# amount of data to train
FRACTION.TRAIN.DATA <- 0.1
# load model performance data
load(paste0(WORK.DIR,"/modelPerf.RData"))
# get training data
load(paste0(DATA.DIR,"/train_calib_test.RData"))
# extract subset for initial training
set.seed(29)
idx <- sample(nrow(train.raw),FRACTION.TRAIN.DATA*nrow(train.raw))
train.df <- train.raw[idx,]
# prepare data for training
train.data <- prepModelData(train.df)
library(doMC)
registerDoMC(cores = 5)
# library(doSNOW)
# cl <- makeCluster(5,type="SOCK")
# registerDoSNOW(cl)
# clusterExport(cl,list("logLossEval"))
# train the model
Sys.time()
set.seed(825)
time.data <- system.time(mdl.fit <- do.call(train,c(list(train.data$predictors,
train.data$response),
CARET.TRAIN.PARMS,
MODEL.SPECIFIC.PARMS,
CARET.TRAIN.OTHER.PARMS)))
time.data
mdl.fit
# stopCluster(cl)
# prepare data for training
test.data <- prepModelData(test.raw)
pred.probs <- predict(mdl.fit,newdata = test.data$predictors,type = "prob")
score <- logLossEval(pred.probs,test.data$response)
score
# determine if score improved
improved <- ifelse(score < min(modelPerf.df$score),"Yes","No")
# record Model performance
modelPerf.df <- recordModelPerf(modelPerf.df,
mdl.fit$method,
time.data,
train.data$predictors,
score,
improved=improved,
bestTune=flattenDF(mdl.fit$bestTune),
tune.grid=flattenDF(CARET.TUNE.GRID),
model.parms=paste(names(MODEL.SPECIFIC.PARMS),
as.character(MODEL.SPECIFIC.PARMS),
sep="=",collapse=","),
comment=MODEL.COMMENT)
save(modelPerf.df,file=paste0(WORK.DIR,"/modelPerf.RData"))
#display model performance record for this run
tail(modelPerf.df[,1:10],1)
# if last score recorded is better than previous ones save model object
last.idx <- length(modelPerf.df$score)
if (last.idx == 1 || improved == "Yes") {
cat("found improved model, saving...\n")
flush.console()
#yes we have improvement or first score, save generated model
file.name <- paste0("/model_",mdl.fit$method,"_",modelPerf.df$date.time[last.idx],".RData")
file.name <- gsub(" ","_",file.name)
file.name <- gsub(":","_",file.name)
save(mdl.fit,file=paste0(WORK.DIR,file.name))
} else {
cat("no improvement!!!\n")
flush.console()
}
| /src/xgb_model/train_model.R | permissive | jimthompson5802/kaggle-OttoGroup | R | false | false | 4,052 | r |
|
######################### Tidy data output ##################################
# Groups the meanstd_data file by subject ID and Activity and takes the average
tidy_output <-
meanstd_data %>%
select(-meas_source) %>% # Remove measure
group_by(subjectID, Activity) %>%
summarize_all(mean)
| /03_Getting_And_Cleaning_Data/06_tidy_output.R | no_license | kartik-avula/Assignments | R | false | false | 335 | r |
|
path="D:/Pro ML book/Code files/Random forest/Files"
setwd(path)
t=read.csv("train_sample.csv")
train=t[1:8000,]
test=t[8001:9999,]
library(rpart)
test$prediction=0
for(i in 1:1000){ # we are running 1000 times - i.e., 1000 decision trees
y=0.5
x=sample(1:nrow(t),round(nrow(t)*y)) # Sample 50% of total rows, as y is 0.5
t2=t[x, c(1,sample(139,5)+1)] # Sample 5 columns randomly, leaving the first column which is the dependent variable
dt=rpart(response~.,data=t2) # Build a decision tree on the above subset of data (sample rows & columns)
pred1=(predict(dt,test)) # Predict based on the tree just built
test$prediction=(test$prediction+pred1) # Add predictions of all the iterations of previously built decision trees
}
test$prediction = (test$prediction)/1000
| /Chapter 5 - Random Forest/rf_code.R | no_license | marcusdipaula/Pro-Machine-Learning | R | false | false | 806 | r |
|
# ADOBE CONFIDENTIAL
# ___________________
# Copyright 2018 Adobe Systems Incorporated
# All Rights Reserved.
# NOTICE: All information contained herein is, and remains
# the property of Adobe Systems Incorporated and its suppliers,
# if any. The intellectual and technical concepts contained
# herein are proprietary to Adobe Systems Incorporated and its
# suppliers and are protected by all applicable intellectual property
# laws, including trade secret and copyright laws.
# Dissemination of this information or reproduction of this material
# is strictly forbidden unless prior written permission is obtained
# from Adobe Systems Incorporated.
# Set up abstractScorer
abstractScorer <- ml.runtime.r::abstractScorer()
| /recipes/R/Retail - GradientBoosting/R/applicationScorer.R | permissive | paulhsiung/experience-platform-dsw-reference | R | false | false | 4,624 | r |
# ADOBE CONFIDENTIAL
# ___________________
# Copyright 2018 Adobe Systems Incorporated
# All Rights Reserved.
# NOTICE: All information contained herein is, and remains
# the property of Adobe Systems Incorporated and its suppliers,
# if any. The intellectual and technical concepts contained
# herein are proprietary to Adobe Systems Incorporated and its
# suppliers and are protected by all applicable intellectual property
# laws, including trade secret and copyright laws.
# Dissemination of this information or reproduction of this material
# is strictly forbidden unless prior written permission is obtained
# from Adobe Systems Incorporated.
# Set up abstractScorer
abstractScorer <- ml.runtime.r::abstractScorer()
#' applicationScorer
#'
#' @keywords applicationScorer
#' @export applicationScorer
#' @exportClass applicationScorer
applicationScorer <- setRefClass("applicationScorer",
contains = "abstractScorer",
methods = list(
score = function(configurationJSON) {
print("Running Scorer Function.")
# Set working directory to AZ_BATCHAI_INPUT_MODEL
setwd(configurationJSON$modelPATH)
#########################################
# Load Libraries
#########################################
library(gbm)
library(lubridate)
library(tidyverse)
set.seed(1234)
#########################################
# Load Data
#########################################
reticulate::use_python("/usr/bin/python3.6")
data_access_sdk_python <- reticulate::import("data_access_sdk_python")
reader <- data_access_sdk_python$reader$DataSetReader(client_id = configurationJSON$ML_FRAMEWORK_IMS_USER_CLIENT_ID,
user_token = configurationJSON$ML_FRAMEWORK_IMS_TOKEN,
service_token = configurationJSON$ML_FRAMEWORK_IMS_ML_TOKEN)
df <- reader$load(configurationJSON$scoringDataSetId, configurationJSON$ML_FRAMEWORK_IMS_ORG_ID)
df <- as_tibble(df)
#########################################
# Data Preparation/Feature Engineering
#########################################
df <- df %>%
mutate(store = as.numeric(store)) %>%
mutate(date = mdy(date), week = week(date), year = year(date)) %>%
mutate(new = 1) %>%
spread(storeType, new, fill = 0) %>%
mutate(isHoliday = as.integer(isHoliday)) %>%
mutate(weeklySalesAhead = lead(weeklySales, 45),
weeklySalesLag = lag(weeklySales, 45),
weeklySalesDiff = (weeklySales - weeklySalesLag) / weeklySalesLag) %>%
drop_na()
test_df <- df %>%
select(-date)
#########################################
# Retrieve saved model from trainer
#########################################
retrieved_model <- readRDS("model.rds")
#########################################
# Generate Predictions
#########################################
pred <- predict(retrieved_model, test_df, n.trees = retrieved_model$n.trees)
output_df <- df %>%
select(date, store) %>%
mutate(prediction = pred,
store = as.integer(store),
date = as.character(date))
#########################################
# Write Results
#########################################
reticulate::use_python("/usr/bin/python3.6")
data_access_sdk_python <- reticulate::import("data_access_sdk_python")
catalog_url <- "https://platform.adobe.io/data/foundation/catalog"
ingestion_url <- "https://platform.adobe.io/data/foundation/import"
print("Set up writer")
writer <- data_access_sdk_python$writer$DataSetWriter(catalog_url = catalog_url,
ingestion_url = ingestion_url,
client_id = configurationJSON$ML_FRAMEWORK_IMS_USER_CLIENT_ID,
user_token = configurationJSON$ML_FRAMEWORK_IMS_TOKEN,
service_token = configurationJSON$ML_FRAMEWORK_IMS_ML_TOKEN)
print("Writer configured")
writer$write(data_set_id = configurationJSON$output_dataset_id,
dataframe = output_df,
ims_org = configurationJSON$ML_FRAMEWORK_IMS_ORG_ID)
print("Write done")
print("Exiting Scorer Function.")
}
)
)
############################################
#Variance inflation factor function for LME#
############################################
vif.lme <- function (fit) {
## adapted from rms::vif
v <- vcov(fit)
if (inherits(fit, "lme")) {
nam <- names(fixef(fit))
}else{
nam <- names(coef(fit))
}
## exclude intercepts
ns <- sum(1 * (nam == "Intercept" | nam == "(Intercept)"))
if (ns > 0) {
v <- v[-(1:ns), -(1:ns), drop = FALSE]
nam <- nam[-(1:ns)] }
d <- diag(v)^0.5
v <- diag(solve(v/(d %o% d)))
names(v) <- nam
v }
| /Functions for Multilevel RCM/vif_lme.R | permissive | Thvb/R-function-lib | R | false | false | 547 | r |
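# Illustrative usage sketch (not from the original repo; the dataset and model
# below are assumptions): vif.lme() accepts an nlme::lme fit (coefficients taken
# via fixef) or a plain lm/gls fit (via coef) and returns named variance
# inflation factors for the non-intercept fixed effects.
# library(nlme)
# fit <- lme(distance ~ age + Sex, random = ~ 1 | Subject, data = Orthodont)
# vif.lme(fit) # values near 1 indicate little collinearity among predictors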
#Variance inflation factor function for LME#
############################################
vif.lme <- function (fit) {
## adapted from rms::vif
v <- vcov(fit)
if(class(fit)=="lme"){
nam <- names(fixef(fit))
}else{
nam <- names(coef(fit))
}
## exclude intercepts
ns <- sum(1 * (nam == "Intercept" | nam == "(Intercept)"))
if (ns > 0) {
v <- v[-(1:ns), -(1:ns), drop = FALSE]
nam <- nam[-(1:ns)] }
d <- diag(v)^0.5
v <- diag(solve(v/(d %o% d)))
names(v) <- nam
v } |
## Sandwich estimator for variance of RR
var.rr.dr = function(y, x, va, vb, vc, alpha.dr, alpha.ml, beta.ml, gamma,
optimal, weights) {
########################################
pscore = as.vector(expit(vc %*% gamma))
n = length(pscore)
### 1. - E[dS/d(alpha.ml,beta.ml)] ############################## Computing
### the Hessian:
Hrr = Hessian2RR(y, x, va, vb, alpha.ml, beta.ml, weights)
hessian = Hrr$hessian
p0 = Hrr$p0; p1 = Hrr$p1; pA = Hrr$pA
dpsi0.by.dtheta = Hrr$dpsi0.by.dtheta
dpsi0.by.dphi = Hrr$dpsi0.by.dphi
dtheta.by.dalpha = Hrr$dtheta.by.dalpha
dphi.by.dbeta = Hrr$dphi.by.dbeta
dl.by.dpsi0 = Hrr$dl.by.dpsi0
############# extra building blocks ##########################
H.alpha = y * exp(-x * (as.vector(va %*% alpha.dr)))
############# Calculation of optimal vector (used in several places below) ##
if (optimal == TRUE) {
theta = as.vector(va %*% alpha.ml) # avoid n by 1 matrix
dtheta.by.dalpha.beta = cbind(va, matrix(0, n, length(beta.ml)))
wt = 1/(1 - p0 + (1 - pscore) * (exp(-theta) - 1))
} else {
wt = rep(1, n)
}
### 2. -E[dU.by.dalphaml.betaml] ####################################
dU.by.dp0 = -va * wt * (x - pscore) # n by 2
dp0.by.dpsi0 = p0
dpsi0.by.dalpha.beta = cbind(dpsi0.by.dtheta * dtheta.by.dalpha, dpsi0.by.dphi *
dphi.by.dbeta) # n by 4
# 4 = 2 (alpha) + 2 (beta)
dp0.by.dalpha.beta = dpsi0.by.dalpha.beta * dp0.by.dpsi0 # n by 4
dU.by.dwt = va * (x - pscore) * (H.alpha - p0) # n by 2
dwt.by.dwti = -wt^2 # n
# wti is short for wt_inv
dU.by.dwti = dU.by.dwt * dwt.by.dwti # n by 2
if (optimal == TRUE) {
dwti.by.dalpha.beta = -dp0.by.dalpha.beta - (1 - pscore) * exp(-theta) *
dtheta.by.dalpha.beta # n by 4
} else {
dwti.by.dalpha.beta = matrix(0, n, ncol(va) + ncol(vb))
}
dU.by.dalpha.ml.beta.ml = t(dU.by.dp0 * weights) %*% dp0.by.dalpha.beta +
t(dU.by.dwti * weights) %*% dwti.by.dalpha.beta
### 3. tau = -E[dU/dalpha.dr] ######################################## (This
### is the bread of the sandwich estimate)
dU.by.dH = va * wt * (x - pscore) # n by 2
dH.by.dalpha.dr = -va * x * H.alpha # n by 2
tau = -t(dU.by.dH * weights) %*% dH.by.dalpha.dr/sum(weights) # 2 by 2
### 4. E[d(prop score score equation)/dgamma]
dpscore.by.dgamma = vc * pscore * (1 - pscore) # n by 2
part4 = -t(vc * weights) %*% dpscore.by.dgamma # 2 by 2
### 5. E[dU/dgamma]
dU.by.dpscore = -va * wt * (H.alpha - p0) # n by 2
if (optimal == TRUE) {
dwti.by.dpscore = 1 - exp(-theta) # n
dwti.by.dgamma = dpscore.by.dgamma * dwti.by.dpscore # n by 2
} else {
dwti.by.dgamma = matrix(0, n, ncol(vc))
}
dU.by.dgamma = t(dU.by.dpscore * weights) %*% dpscore.by.dgamma + t(dU.by.dwti *
weights) %*% dwti.by.dgamma # 2 by 2
############################################################################# Assembling semi-parametric variance matrix
U = va * wt * (x - pscore) * (H.alpha - p0) # n by 2
S = cbind(dl.by.dpsi0 * (dpsi0.by.dtheta + x) * dtheta.by.dalpha, dl.by.dpsi0 *
dpsi0.by.dphi * dphi.by.dbeta)
pscore.score = vc * (x - pscore)
Utilde = U - t(dU.by.dalpha.ml.beta.ml %*% (-solve(hessian)) %*% t(S)) -
t(dU.by.dgamma %*% (solve(part4)) %*% t(pscore.score)) # n by 2
USigma = t(Utilde * weights) %*% Utilde/sum(weights)
################################### Asymptotic var matrix for alpha.dr
alpha.dr.variance = solve(tau) %*% USigma %*% solve(tau)/sum(weights)
return(alpha.dr.variance)
}
| /R/2.2_DR_Var_RR.R | no_license | mclements/brm | R | false | false | 4,013 | r |
## Sandwich estimator for variance of RR
var.rr.dr = function(y, x, va, vb, vc, alpha.dr, alpha.ml, beta.ml, gamma,
optimal, weights) {
########################################
pscore = as.vector(expit(vc %*% gamma))
n = length(pscore)
### 1. - E[dS/d(alpha.ml,beta.ml)] ############################## Computing
### the Hessian:
Hrr = Hessian2RR(y, x, va, vb, alpha.ml, beta.ml, weights)
hessian = Hrr$hessian
p0 = Hrr$p0; p1 = Hrr$p1; pA = Hrr$pA
dpsi0.by.dtheta = Hrr$dpsi0.by.dtheta
dpsi0.by.dphi = Hrr$dpsi0.by.dphi
dtheta.by.dalpha = Hrr$dtheta.by.dalpha
dphi.by.dbeta = Hrr$dphi.by.dbeta
dl.by.dpsi0 = Hrr$dl.by.dpsi0
############# extra building blocks ##########################
H.alpha = y * exp(-x * (as.vector(va %*% alpha.dr)))
############# Calculation of optimal vector (used in several places below) ##
if (optimal == TRUE) {
theta = as.vector(va %*% alpha.ml) # avoid n by 1 matrix
dtheta.by.dalpha.beta = cbind(va, matrix(0, n, length(beta.ml)))
wt = 1/(1 - p0 + (1 - pscore) * (exp(-theta) - 1))
} else {
wt = rep(1, n)
}
### 2. -E[dU.by.dalphaml.betaml] ####################################
dU.by.dp0 = -va * wt * (x - pscore) # n by 2
dp0.by.dpsi0 = p0
dpsi0.by.dalpha.beta = cbind(dpsi0.by.dtheta * dtheta.by.dalpha, dpsi0.by.dphi *
dphi.by.dbeta) # n by 4
# 4 = 2 (alpha) + 2 (beta)
dp0.by.dalpha.beta = dpsi0.by.dalpha.beta * dp0.by.dpsi0 # n by 4
dU.by.dwt = va * (x - pscore) * (H.alpha - p0) # n by 2
dwt.by.dwti = -wt^2 # n
# wti is short for wt_inv
dU.by.dwti = dU.by.dwt * dwt.by.dwti # n by 2
if (optimal == TRUE) {
dwti.by.dalpha.beta = -dp0.by.dalpha.beta - (1 - pscore) * exp(-theta) *
dtheta.by.dalpha.beta # n by 4
} else {
dwti.by.dalpha.beta = matrix(0, n, ncol(va) + ncol(vb))
}
dU.by.dalpha.ml.beta.ml = t(dU.by.dp0 * weights) %*% dp0.by.dalpha.beta +
t(dU.by.dwti * weights) %*% dwti.by.dalpha.beta
### 3. tau = -E[dU/dalpha.dr] ######################################## (This
### is the bread of the sandwich estimate)
dU.by.dH = va * wt * (x - pscore) # n by 2
dH.by.dalpha.dr = -va * x * H.alpha # n by 2
tau = -t(dU.by.dH * weights) %*% dH.by.dalpha.dr/sum(weights) # 2 by 2
### 4. E[d(prop score score equation)/dgamma]
dpscore.by.dgamma = vc * pscore * (1 - pscore) # n by 2
part4 = -t(vc * weights) %*% dpscore.by.dgamma # 2 by 2
### 5. E[dU/dgamma]
dU.by.dpscore = -va * wt * (H.alpha - p0) # n by 2
if (optimal == TRUE) {
dwti.by.dpscore = 1 - exp(-theta) # n
dwti.by.dgamma = dpscore.by.dgamma * dwti.by.dpscore # n by 2
} else {
dwti.by.dgamma = matrix(0, n, ncol(vc))
}
dU.by.dgamma = t(dU.by.dpscore * weights) %*% dpscore.by.dgamma + t(dU.by.dwti *
weights) %*% dwti.by.dgamma # 2 by 2
############################################################################# Assembling semi-parametric variance matrix
U = va * wt * (x - pscore) * (H.alpha - p0) # n by 2
S = cbind(dl.by.dpsi0 * (dpsi0.by.dtheta + x) * dtheta.by.dalpha, dl.by.dpsi0 *
dpsi0.by.dphi * dphi.by.dbeta)
pscore.score = vc * (x - pscore)
Utilde = U - t(dU.by.dalpha.ml.beta.ml %*% (-solve(hessian)) %*% t(S)) -
t(dU.by.dgamma %*% (solve(part4)) %*% t(pscore.score)) # n by 2
USigma = t(Utilde * weights) %*% Utilde/sum(weights)
################################### Asymptotic var matrix for alpha.dr
alpha.dr.variance = solve(tau) %*% USigma %*% solve(tau)/sum(weights)
return(alpha.dr.variance)
}
|
library(methods)
library(RSelenium)
library(testthat)
library(XML)
library(digest)
ISR_login <- Sys.getenv("ISR_login")
ISR_pwd <- Sys.getenv("ISR_pwd")
SAUCE_USERNAME <- Sys.getenv("SAUCE_USERNAME")
SAUCE_ACCESS_KEY <- Sys.getenv("SAUCE_ACCESS_KEY")
machine <- ifelse(Sys.getenv("TRAVIS") == "true", "TRAVIS", "LOCAL")
server <- ifelse(Sys.getenv("TRAVIS_BRANCH") == "master", "www", "test")
build <- Sys.getenv("TRAVIS_BUILD_NUMBER")
job <- Sys.getenv("TRAVIS_JOB_NUMBER")
jobURL <- paste0("https://travis-ci.org/RGLab/UITesting/jobs/", Sys.getenv("TRAVIS_JOB_ID"))
name <- ifelse(machine == "TRAVIS",
paste0("UI testing `", server, "` by TRAVIS #", job, " ", jobURL),
paste0("UI testing `", server, "` by ", Sys.info()["nodename"]))
url <- ifelse(machine == "TRAVIS", "localhost", "ondemand.saucelabs.com")
ip <- paste0(SAUCE_USERNAME, ":", SAUCE_ACCESS_KEY, "@", url)
port <- ifelse(machine == "TRAVIS", 4445, 80)
browserName <- ifelse(Sys.getenv("SAUCE_BROWSER") == "",
"chrome",
Sys.getenv("SAUCE_BROWSER"))
extraCapabilities <- list(name = name,
build = build,
username = SAUCE_USERNAME,
accessKey = SAUCE_ACCESS_KEY,
tags = list(machine, server))
remDr <- remoteDriver$new(remoteServerAddr = ip,
port = port,
browserName = browserName,
version = "latest",
platform = "Windows 10",
extraCapabilities = extraCapabilities)
remDr$open(silent = TRUE)
ptm <- proc.time()
cat("\nhttps://saucelabs.com/beta/tests/", remDr@.xData$sessionid, "\n", sep = "")
if (machine == "TRAVIS") write(paste0("export SAUCE_JOB=", remDr@.xData$sessionid), "SAUCE")
remDr$maxWindowSize()
remDr$setImplicitWaitTimeout(milliseconds = 20000)
siteURL <- paste0("https://", server, ".immunespace.org")
# helper functions ----
context_of <- function(file, what, url, level = NULL) {
if (exists("ptm")) {
elapsed <- proc.time() - ptm
timeStamp <- paste0("At ", floor(elapsed[3] / 60), " minutes ",
round(elapsed[3] %% 60), " seconds")
} else {
timeStamp <- ""
}
level <- ifelse(is.null(level), "", paste0(" (", level, " level) "))
msg <- paste0("\n", file, ": testing '", what, "' page", level,
"\n", url,
"\n", timeStamp,
"\n")
context(msg)
}
# test functions ----
test_connection <- function(remDr, pageURL, expectedTitle) {
test_that("can connect to the page", {
remDr$navigate(pageURL)
if (remDr$getTitle()[[1]] == "Sign In") {
id <- remDr$findElement(using = "id", value = "email")
id$sendKeysToElement(list(ISR_login))
pw <- remDr$findElement(using = "id", value = "password")
pw$sendKeysToElement(list(ISR_pwd))
loginButton <- remDr$findElement(using = "class", value = "labkey-button")
loginButton$clickElement()
while(remDr$getTitle()[[1]] == "Sign In") Sys.sleep(1)
}
pageTitle <- remDr$getTitle()[[1]]
expect_equal(pageTitle, expectedTitle)
})
}
test_module <- function(module) {
test_that(paste(module, "module is present"), {
modules <- remDr$findElements(using = "css selector", value = "div.ISCore")
expect_equal(length(modules), 1)
tab_panel <- remDr$findElements(using = "class", value = "x-tab-panel")
expect_equal(length(tab_panel), 1)
})
}
test_tabs <- function(x) {
test_that("tabs are present", {
tab_header <- remDr$findElements(using = "class", value = "x-tab-panel-header")
expect_equal(length(tab_header), 1)
tabs <- tab_header[[1]]$findChildElements(using = "css selector", value = "li[id^=ext-comp]")
expect_equal(length(tabs), 4)
expect_equal(tabs[[1]]$getElementText()[[1]], x[1])
expect_equal(tabs[[2]]$getElementText()[[1]], x[2])
expect_equal(tabs[[3]]$getElementText()[[1]], x[3])
expect_equal(tabs[[4]]$getElementText()[[1]], x[4])
})
}
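# Illustrative sketch (not part of the original suite): the helper functions
# above are typically chained per page; the URL, page title, and tab labels
# below are hypothetical placeholders.
# pageURL <- paste0(siteURL, "/project/Studies/begin.view") # hypothetical path
# context_of("initialize.R", "Studies overview", pageURL)
# test_connection(remDr, pageURL, expectedTitle = "Studies Overview")
# test_module("DataExplorer")
# test_tabs(c("Tab1", "Tab2", "Tab3", "Tab4")) # hypothetical tab labels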
| /tests/initialize.R | no_license | TheGilt/UITesting | R | false | false | 4,143 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/appmesh_operations.R
\name{appmesh_delete_route}
\alias{appmesh_delete_route}
\title{Deletes an existing route}
\usage{
appmesh_delete_route(meshName, meshOwner, routeName, virtualRouterName)
}
\arguments{
\item{meshName}{[required] The name of the service mesh to delete the route in.}
\item{meshOwner}{The AWS IAM account ID of the service mesh owner. If the account ID is
not your own, then it's the ID of the account that shared the mesh with
your account. For more information about mesh sharing, see \href{https://docs.aws.amazon.com/app-mesh/latest/userguide/sharing.html}{Working with shared meshes}.}
\item{routeName}{[required] The name of the route to delete.}
\item{virtualRouterName}{[required] The name of the virtual router to delete the route in.}
}
\description{
Deletes an existing route.
}
\section{Request syntax}{
\preformatted{svc$delete_route(
meshName = "string",
meshOwner = "string",
routeName = "string",
virtualRouterName = "string"
)
}
}
\keyword{internal}
| /paws/man/appmesh_delete_route.Rd | permissive | sanchezvivi/paws | R | false | true | 1,076 | rd | % Generated by roxygen2: do not edit by hand
\name{estimatePatternsOneColumn}
\alias{estimatePatternsOneColumn}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
Estimate distribution of methylation patterns with one column of counts
}
\description{
This function is part of the inner working of the package.
Please refer to \code{\link{estimatePatterns}}
}
\usage{
estimatePatternsOneColumn(patternCounts,
epsilon,
eta,
column,
fast,
steps)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
}
\details{
%% ~~ If necessary, more details than the description above ~~
}
\value{
}
\references{
%% ~put references to the literature/web site here ~
}
\author{
Peijie Lin, Sylvain Foret, Conrad Burden
}
\note{
%% ~~further notes~~
}
%% ~Make other sections like Warning with \section{Warning }{....} ~
\seealso{
%% ~~objects to See Also as \code{\link{help}}, ~~~
}
\examples{
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{ internal }
\keyword{ ~kwd2 }% __ONLY ONE__ keyword per line
% vim:ts=2:sw=2:sts=2:expandtab:
| /man/estimatePatternsOneColumn.Rd | no_license | sylvainforet/Bpest | R | false | false | 1,175 | rd |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utilities.R
\name{summary.tvarGIRF}
\alias{summary.tvarGIRF}
\title{Print a summary of some estimated GIRFs}
\usage{
\method{summary}{tvarGIRF}(object, ...)
}
\arguments{
\item{object}{The result of a call to `GIRF`}
\item{...}{Additional arguments; not used}
}
\description{
Print a summary of some estimated GIRFs
}
| /man/summary.tvarGIRF.Rd | permissive | angusmoore/tvarGIRF | R | false | true | 397 | rd |
data <- read.csv("noOfPages.txt", stringsAsFactors = F, header = FALSE, sep = "|")
incdata = data[, 1]
incdata <- as.numeric(incdata)
brk <- seq(0, 300, 10)
png("q1-histogram1.png")
hist(incdata, main = "Blogs vs. Number of Pages", breaks=brk, freq = T, xlab="Pages", ylab="Blogs")
dev.off() # close the png device so the file is actually written
| /A9/documentation/questions/q1/R/histogram.R | no_license | mkoga001/cs595-f14 | R | false | false | 296 | r |
#
# # The purpose of this script is to create a data object (dto) which will hold all data and metadata.
# # Run the lines below to stitch a basic html output.
knitr::stitch_rmd(
script="./sandbox/map/mmse-map-LH.R",
output="./sandbox/map/stitched-output/mmse-map-LH.md"
)
## The above lines are executed only when the file is run in RStudio, !! NOT when an Rmd/Rnw file calls it !!
#how do I make stitched output work?
# knitr::stitch_rmd(script="./___/___.R", output="./___/___/___.md")
#These first few lines run only when the file is run in RStudio, !!NOT when an Rmd/Rnw file calls it!!
rm(list=ls(all=TRUE)) #Clear the variables from previous runs.
cat("\f") # clear console
# ---- load-packages -----------------------------------------------------------
# Attach these packages so their functions don't need to be qualified: http://r-pkgs.had.co.nz/namespace.html#search-path
library(magrittr) # enables piping : %>%
library(lmerTest)
library(outliers)
# ---- load-sources ------------------------------------------------------------
# Call `base::source()` on any repo file that defines functions needed below. Ideally, no real operations are performed.
source("./scripts/common-functions.R") # used in multiple reports
source("./scripts/graph-presets.R")
source("./scripts/general-graphs.R") #in scripts folder
source("./scripts/specific-graphs.R")
source("./scripts/specific-graphs-pred.R")
source("./scripts/graphs-pred.R")
source("./scripts/graphs-predVID.R")
# source("./scripts/graph-presets.R") # fonts, colors, themes
# Verify these packages are available on the machine, but their functions need to be qualified: http://r-pkgs.had.co.nz/namespace.html#search-path
requireNamespace("ggplot2") # graphing
# requireNamespace("readr") # data input
requireNamespace("tidyr") # data manipulation
requireNamespace("dplyr") # Avoid attaching dplyr, b/c its function names conflict with a lot of packages (esp base, stats, and plyr).
requireNamespace("testit")# For asserting conditions meet expected patterns.
requireNamespace("nlme") # estimate mixed models | esp. gls()
requireNamespace("lme4") # estimate mixed models | esp. lmer()
requireNamespace("arm") # process model objects
getwd()
# ---- declare-globals ---------------------------------------------------------
# path_input <- "./data/unshared/derived/dto.rds"
# path_input <- "./data/unshared/derived/dAL.rds"
path_input <- "./data/unshared/derived/map/data.rds"
# figure_path <- 'manipulation/stitched-output/te/'
# ---- load-data ---------------------------------------------------------------
ds <- readRDS(path_input)
dim(ds)
str(ds)
# head(ds)
# ---- specify the models-------------------------
source("./scripts/functions-for-glm-models.R")
# ---- subset---------------------------------------
#phys_bl_bp = each person's baseline score minus the baseline grand mean (between-person baseline comparison)
#phys_bp = the person's mean score minus the grand mean (between-person deviation)
#phys_wp = that person's score at time j minus their own mean (deviation from own mean)
#mmse_wp = the person's mmse score at time j minus their mean, so that a positive value indicates a score higher than their mean
#biomarker_high = indicates if the person's mean on that particular biomarker is above the 3rd quartile
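# Illustrative sketch of how the between-/within-person variables above can be
# built (assumption: the raw input holds one row per person-occasion, with the
# person identifier in `id`):
# ds <- ds %>%
# dplyr::group_by(id) %>%
# dplyr::mutate(phys_pm = mean(physical_activity, na.rm = TRUE), # person mean
# phys_wp = physical_activity - phys_pm) %>% # within-person deviation
# dplyr::ungroup() %>%
# dplyr::mutate(phys_bp = phys_pm - mean(phys_pm, na.rm = TRUE)) # between-person deviation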
#subset
dwn <- ds %>%
dplyr::select(id, full_year, age_bl, al_count, phys_bl_bp, phys_bp, phys_wp, mmse, mmse_wp, age_bl_centered, female, edu) %>%
na.omit()
#19809 observations
#subset for biomarker models
dbio<- ds %>%
dplyr::select(id, full_year, age_bl, phys_bl_bp, phys_bp, phys_wp, mmse, mmse_wp, age_bl_centered, female, edu, al_count,
apoe, cholesterol, hemoglobin, hdlratio, hdl,ldl, glucose, creatine, #raw scores
cholesterol_HIGH, hemoglobin_HIGH, hdlratio_HIGH, hdl_LOW,ldl_HIGH, glucose_HIGH,creatine_HIGH) %>%
na.omit()
#4082 observations
# physical activity question: hours per week that the ppt. engages in 5 different categories
# ---- quality check variables ----------------------------------------------
#mmse
boxplot(dwn$mmse)
# bpmmse<-boxplot(dwn$mmse)
hist(dwn$mmse)
# t(table(dwn$mmse, useNA = "always"))
#not normal dist
# mmse-at-baseline
#
# d2<- subset(dwn, full_year==0, select=c( ))
#age at bsline
boxplot(dwn$age_bl_centered)
# bp<-boxplot(dwn$age_bl_centered)
# bp$out
hist(dwn$age_bl_centered)
#looks ok
#physical activity
#phys_bp
boxplot(dwn$phys_bp)
bp<-boxplot(dwn$phys_bp)
# bp$out
hist(dwn$phys_bp)
#skewed right
#phys_wp
boxplot(dwn$phys_wp)
hist(dwn$phys_wp)
#looks okay, people are more likely to be lower than their mean
#biomarkers
hist(dbio$apoe) #?
hist(dbio$cholesterol) #normal
hist(dbio$hemoglobin)#heavy at mean
# is a form of hemoglobin (a blood pigment that carries oxygen) that is bound to glucose.
#High HbA1c levels indicate poorer control of diabetes
#not affected by short-term fluctuations in blood glucose concentrations
#6.5% signals that diabetes is present
hist(dbio$hdlratio)#normal (high ratio is bad)
hist(dbio$hdl)#normal
hist(dbio$ldl)#normal
hist(dbio$glucose)#skewed right
hist(dbio$creatine) #skewed right, heavy at low end
#index of kidney function, high is bad (influenced by muscle tissue, therefore gender may confound this)
plyr::count(dbio, 'cholesterol_HIGH')
plyr::count(dbio, 'hemoglobin_HIGH')
plyr::count(dbio, 'hdl_LOW')
plyr::count(dbio, 'ldl_HIGH')
plyr::count(dbio, 'glucose_HIGH')
plyr::count(dbio, 'creatine_HIGH')
plyr::count(dbio, 'hdlratio_HIGH')
#(about the same ratio for all biomarkers)
# ---- obtain-multivariate-counts ---------------------------------------
computed_variables <- c("glucose_HIGH", "cholesterol_HIGH", "hemoglobin_HIGH", "hdlratio_HIGH", "hdl_LOW", "ldl_HIGH","creatine_HIGH" )
bio<- dbio %>%
# dplyr::group_by(glucoseHIGH,hgba1cHIGH) %>%
dplyr::group_by_(.dots = computed_variables) %>%
dplyr::summarize(count =n())
#no NA's
##### Andrey's edits begin #################
# ## Outcome
# cognition, operationalized by mmse score
#
# ## Temporal
# - full_year - wave
#
# ## Confounder
# - age_bl_centered - age at baseline centered at mean (M=79)
# - female
# - edu
#
# ## Exposure
# - physical_activity - hours per week engaged in physical activity
# - phys_bp - person's score on physical activity relative to the grand mean (between persons)
# - phys_wp - person's score on physical activity (on particular occasion) relative to the personal mean across occasions.
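# a minimal sketch of the between/within decomposition described above, on toy
# data ('toy' and 'person_mean' are illustrative names, not from the MAP data):
toy <- data.frame(id = rep(1:2, each = 3),
                  physical_activity = c(2, 3, 4, 5, 6, 7))
toy <- toy %>%
  dplyr::group_by(id) %>%
  dplyr::mutate(person_mean = mean(physical_activity),
                phys_wp = physical_activity - person_mean) %>%  # within-person deviation
  dplyr::ungroup() %>%
  dplyr::mutate(phys_bp = person_mean - mean(person_mean))      # between-person deviation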
# ---- model-specification -------------------------
local_stem <- "dv ~ 1 + full_year "
# pooled_stem <- paste0(local_stem, "study_name_f + ")
predictors_A <- "
age_bl_centered +
female +
edu
"
predictors_AA <- "
age_bl_centered +
female +
edu +
age_bl_centered:female +
age_bl_centered:edu +
female:edu
"
predictors_B <- "
age_bl_centered +
female +
edu +
physical_activity +
phys_bp +
phys_wp
"
predictors_BB <- "
age_bl_centered +
female +
edu +
physical_activity +
phys_bp +
phys_wp +
age_bl_centered:female +
age_bl_centered:edu +
age_bl_centered:physical_activity +
age_bl_centered:phys_bp +
age_bl_centered:phys_wp +
female:edu +
female:physical_activity +
female:phys_bp +
female:phys_wp +
edu:physical_activity +
edu:phys_bp +
edu:phys_wp +
physical_activity:phys_bp +
physical_activity:phys_wp +
phys_bp:phys_wp
"
# ---- define-modeling-functions -------------------------
source("./scripts/modeling-functions.R")
predictors_A <- "age_bl_centered + female + edu"
model_A <- estimate_local_model(ds, "mmse", predictors_A)
summary(model_A)
# ---- equations ------------------------------------------------------
eq_0 <- as.formula("mmse ~ 1 + full_year +
(1 + full_year |id)")
#does the outcome change over time? i.e. if so, it makes sense to add predictors.
eq_1 <- as.formula("mmse ~ 1 + full_year + age_bl_centered +
(1 + full_year |id)")
eq_2 <- as.formula("mmse ~ 1 + full_year + age_bl_centered + full_year:age_bl_centered +
(1 + full_year |id)")
eq_3 <- as.formula("mmse ~ 1 + full_year + age_bl_centered + full_year:age_bl_centered + phys_bp +
(1 + full_year |id)")
#bp effects: are people with higher values than other people (on average, over time)
#also higher in mmse (on average over time) (hoffman pdf page 72)
eq_4 <- as.formula("mmse ~ 1 + full_year + age_bl_centered + phys_bp + phys_wp +
(1 + full_year |id)")
#wp effects: if you have higher values than usual (your own mean) at this occasion, do you also have a higher outcome
#value than usual at this occasion ? (hoffman pdf page 72)
# main effect of time: slope term- yearly rate of change in mmse, controlling for exercise
#(hoffman page 75)
#- if only bp sig : individual slopes don't differ sig
#- if only wp sig : individual slopes are sig increasing or decreasing
#(note)
#when you add a time varying covariate (singer and willett page 170)
#i-the intercept parameter or FE is now the value of the outcome when all Level 1 predictors (including time) are zero
#ii-the slope is now a conditional rate of change, controlling for the effects of the TVC
#this changes the meaning of the Level-2 variance components, so comparing them across successive models makes no sense
#we can now only use the F.E and goodness of fit stats to compare these models
eq_4es <- as.formula("mmse ~ 1 + full_year + age_bl_centered + full_year:age_bl_centered + phys_bp + phys_wp + edu + female +
(1 + full_year |id)")
#gender effects:
#main effect of time in study: when baseline age is at the mean (i.e. 0)
#main effects of age_bl: when year in study is 0 (i.e. at baseline)
#interaction: i don't think this interaction is meaningful, this part of the model probably is not necessary
#
# eq_5 <- as.formula("mmse ~ 1 + full_year + age_bl_centered + full_year:age_bl_centered + phys_bp + phys_wp + edu + female + female:phys_bp + female:phys_wp +
# (1 + full_year |id)")
# no gender differences in the effects of phys activity
# ---- models ------------------------------------------------------
model_0<- lmerTest::lmer(eq_0, data=dwn, REML=TRUE)
model_1<- lmerTest::lmer(eq_1, data=dwn, REML=TRUE)
model_2<- lmerTest::lmer(eq_2, data=dwn, REML=TRUE)
model_3<- lmerTest::lmer(eq_3, data=dwn, REML=TRUE)
model_4<- lmerTest::lmer(eq_4, data=dwn, REML=TRUE)
model_4es<- lmerTest::lmer(eq_4es, data=dwn, REML=TRUE)
# model_5<- lmerTest::lmer(eq_5, data=dwn, REML=TRUE)
options(scipen = 11)
lmerTest::summary((model_0))
broom::tidy(model_0, data=dwn)
lmerTest::summary((model_1))
lmerTest::summary((model_2))
lmerTest::summary((model_3))
lmerTest::summary((model_4))
lmerTest::summary((model_4es))
# lmerTest::summary((model_5))
anova(model_3, model_4, model_4es)
# because we can now only use goodness-of-fit measures to compare (not variance components)
#it seems like model 4 (without gender/educ) is the best fit
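# sketch: to inspect information criteria directly, refit with ML first, since
# REML deviances are not comparable across different fixed-effects
# specifications (anova() above already does this refit internally):
m3_ml   <- update(model_3,   REML = FALSE)
m4_ml   <- update(model_4,   REML = FALSE)
m4es_ml <- update(model_4es, REML = FALSE)
AIC(m3_ml, m4_ml, m4es_ml)
BIC(m3_ml, m4_ml, m4es_ml)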
#allowing the effect of the TVC to vary over time (interaction term) -page 171 singer and willett
eq_tv_intxn <- as.formula("mmse ~ 1 + full_year + age_bl_centered + full_year:age_bl_centered + phys_bp + phys_wp + edu + female + phys_wp:full_year +
(1 + full_year |id)")
model_tv_intx<- lmerTest::lmer(eq_tv_intxn, data=dwn, REML=TRUE)
lmerTest::summary((model_tv_intx))
#should i exclude gender/edu? -lose bp effects??
eq_tv_intxn <- as.formula("mmse ~ 1 + full_year + age_bl_centered + full_year:age_bl_centered + phys_bp + phys_wp + phys_wp:full_year +
(1 + full_year |id)")
model_tv_intx<- lmerTest::lmer(eq_tv_intxn, data=dwn, REML=TRUE)
lmerTest::summary((model_tv_intx))
#singer and willett page 171:
#subscript j-occasion (phys_wp_ij) is what differentiates a TVC from a TIVC
##can interpret the interaction in 2 ways
#i- the effect of within person changes in physical activity on mmse varies over time - (it fluctuates?)
#ii- the rate of change in mmse differs as a function of within person changes in PA **
#in this model it is sig- those who walk more than their average decline less over time
# hypothetical dichotomizing graph to understand the interaction
dtemp<-dwn
summary(dtemp$phys_wp)
hist(dtemp$phys_wp)
dtemp$pred<- predict(model_tv_intx)
# dichotomize phys_wp at the observed cutoffs; rows between them stay NA
dtemp$phys_wp_dichot <- NA_character_
dtemp$phys_wp_dichot[dtemp$phys_wp > 0.77]   <- "HIGH"
dtemp$phys_wp_dichot[dtemp$phys_wp < (-1.1)] <- "LOW"
ids <- sample(unique(dtemp$id),1)
dtemp %>%
dplyr::filter(id %in% ids ) %>%
dplyr::group_by(id) %>%
dplyr::select(id,phys_wp_dichot,phys_wp)
set.seed(1)
ids <- sample(dtemp$id,100)
d <- dtemp %>% dplyr::filter( id %in% ids)
g<- ggplot2::ggplot(d, aes_string(x= "full_year", y="pred", colour="phys_wp_dichot")) +
geom_point(shape=21, size=5)+
stat_smooth(method=lm, se=FALSE)+
main_theme
# stat_smooth(method=lm, level=0.95, fullrange=FALSE)
g <- g + labs(title = "interaction",
              x = "years_in_study", y = "mmse_predictions")
g
#those who walk higher than their average also have higher mmse scores at that time, and decline less
#those who walk lower than average have lower mmse scores over time, and decline at a faster rate
#---------------------------------------------------------------------------------------
#using mmse deviation from mean as outcome ### thus, those who score higher than their average on physical activity,
#also score higher than their average mmse (but not higher mmse than the average population, those wp effects were not sig)
#but no between person effects, i.e. those who generally walk higher than the average person don't have higher mmse
#scores than their average (which makes sense because it's relative to themselves)
#############################################################################################
eq_test <- as.formula("mmse_wp ~ 1 + full_year + phys_wp + phys_bp +
(1 + full_year |id)")
model_test<- lmerTest::lmer(eq_test, data=dwn, REML=TRUE)
lmerTest::summary((model_test))
eq_test2 <- as.formula("mmse_wp ~ 1 + full_year + age_bl_centered + phys_bp + phys_wp +
# + edu + female +
(1 + full_year |id)")
model_test2<- lmerTest::lmer(eq_test2, data=dwn, REML=TRUE)
lmerTest::summary((model_test2))
#-graph
dtemp$pred2 <- predict(model_test2)
set.seed(1)
ids <- sample(dtemp$id,100)
d <- dtemp %>% dplyr::filter( id %in% ids)
g<- ggplot2::ggplot(d, aes_string(x= "full_year", y="pred2", colour="phys_wp_dichot")) +
geom_point(shape=21, size=5)+
stat_smooth(method=lm, se=FALSE)+
main_theme
# stat_smooth(method=lm, level=0.95, fullrange=FALSE)
g <- g + labs(title = "interaction",
              x = "years_in_study", y = "mmse_predictions")
g
#people who walk more than average score higher than their average mmse, and decline at a slower rate
#-----------------------------------------------------------------------------------------------
eq_re<- as.formula("mmse ~ 1 + full_year + age_bl_centered + phys_bp + phys_wp + edu + female +
(1 + full_year + phys_wp |id)")
model_re<- lmerTest::lmer(eq_re, data=dwn, REML=TRUE)
lmerTest::summary((model_re))
#only include this parameter if we expect the effects of physical activity to vary systematically across people! (-singer and willett page 169)
##### when i start to include biomarkers, I would say yes, I do expect this.
#i.e. people who walk higher than normal, who are low in stress, will benefit more than those who walk higher than normal, who are high in stress
#this is my hypothesis - stress negates the effects of PA
#including the within person random effects makes phys_bp and phys_wp significant, indicating there are moderators in the relationship
#hoffman page 76
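# sketch: a likelihood-ratio check of the phys_wp random slope; eq_re0 is a
# hypothetical reduced model (same fixed part as eq_re, but random intercept +
# time only). REML comparison is valid here since the fixed parts match,
# though the variance boundary makes the p-value conservative.
eq_re0 <- as.formula("mmse ~ 1 + full_year + age_bl_centered + phys_bp + phys_wp + edu + female +
                       (1 + full_year |id)")
model_re0 <- lmerTest::lmer(eq_re0, data = dwn, REML = TRUE)
anova(model_re0, model_re, refit = FALSE)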
###composite biomarker variable--------------------------------------------------------------------
eq_5 <- as.formula("mmse_wp ~ 1 + full_year + age_bl_centered + full_year:age_bl_centered + phys_bp + phys_wp + edu + female + al_count +
(1 + full_year + phys_wp |id)")
model_bio<- lmerTest::lmer(eq_5, data=dbio, REML=TRUE)
lmerTest::summary((model_bio))
eq_5 <- as.formula("mmse_wp ~ 1 + full_year + age_bl_centered + full_year:age_bl_centered + phys_bp + phys_wp + al_count + al_count:phys_wp +
# edu + female
(1 + full_year + phys_wp |id)")
model_bio<- lmerTest::lmer(eq_5, data=dbio, REML=TRUE)
lmerTest::summary((model_bio))
#######--------------- old graphing stuff that would need editing below ---------------------
#
# #--- predictions vs actual mmse
# dwn$pred7re<- predict(model_7re)
#
# #--- graphics
#
# set.seed(1)
# ids <- sample(ds$id,50)
# d <- dwn %>% dplyr::filter( id %in% ids)
# dim(d)
# names(dwn)
#
#
# #MMSE and model predictions
# lines_predVID(
# d,
# variable_name= "mmse",
# predicted = "predre",
# line_size=.5,
# line_alpha=.5,
# # top_y = 35,
# # bottom_y = 0,
# # by_y = 5,
# bottom_age = 0,
# top_age = 25,
# by_age = 5
# )
#
# #####--- intercept versus slope of model 7
#
# cf <- coefficients(model_7re)$id
# head(cf,20)
#
# cf<-data.frame(cf)
# str(cf)
# names(cf)
#
# ##intercept versus random slopes of interaction
# g<- ggplot2::ggplot(cf, aes_string(x="full_year.phys", y="X.Intercept.")) +
# geom_point(size=5, alpha=.5, colour="blue")+
# main_theme+
# stat_smooth(method=lm, level=0.95, fullrange=FALSE)
# g <- g + labs(list(
# # title="Intercept versus the Estimates Slope of the Interaction Between Time Since Baseline and phys Score",
# x="predcited slope", y="intercept"))
#
# g
#
#
| /sandbox/compile-models-mmse/compile-models-mmse.R | no_license | beccav8/psy564_longitudinal_models | R | false | false | 17,891 | r | #
# # The purpose of this script is to create a data object (dto) which will hold all data and metadata.
# # Run the lines below to stitch a basic html output.
knitr::stitch_rmd(
script="./sandbox/map/mmse-map-LH.R",
output="./sandbox/map/stitched-output/mmse-map-LH.md"
)
## The above lines are executed only when the file is run in RStudio, !! NOT when an Rmd/Rnw file calls it !!
#how do i make stitched output work?
# knitr::stitch_rmd(script="./___/___.R", output="./___/___/___.md")
#These first few lines run only when the file is run in RStudio, !!NOT when an Rmd/Rnw file calls it!!
rm(list=ls(all=TRUE)) #Clear the variables from previous runs.
cat("\f") # clear console
# ---- load-packages -----------------------------------------------------------
# Attach these packages so their functions don't need to be qualified: http://r-pkgs.had.co.nz/namespace.html#search-path
library(magrittr) # enables piping : %>%
library(lmerTest)
library(outliers)
# ---- load-sources ------------------------------------------------------------
# Call `base::source()` on any repo file that defines functions needed below. Ideally, no real operations are performed.
source("./scripts/common-functions.R") # used in multiple reports
source("./scripts/graph-presets.R")
source("./scripts/general-graphs.R") #in scripts folder
source("./scripts/specific-graphs.R")
source("./scripts/specific-graphs-pred.R")
source("./scripts/graphs-pred.R")
source("./scripts/graphs-predVID.R")
# source("./scripts/graph-presets.R") # fonts, colors, themes
# Verify these packages are available on the machine, but their functions need to be qualified: http://r-pkgs.had.co.nz/namespace.html#search-path
requireNamespace("ggplot2") # graphing
# requireNamespace("readr") # data input
requireNamespace("tidyr") # data manipulation
requireNamespace("dplyr") # Avoid attaching dplyr, b/c its function names conflict with a lot of packages (esp base, stats, and plyr).
requireNamespace("testit")# For asserting conditions meet expected patterns.
requireNamespace("nlme") # estimate mixed models | esp. gls()
requireNamespace("lme4") # estimate mixed models | esp. lmer()
requireNamespace("arm") # process model objects
getwd()
# ---- declare-globals ---------------------------------------------------------
# path_input <- "./data/unshared/derived/dto.rds"
# path_input <- "./data/unshared/derived/dAL.rds"
path_input <- "./data/unshared/derived/map/data.rds"
# figure_path <- 'manipulation/stitched-output/te/'
# ---- load-data ---------------------------------------------------------------
ds <- readRDS(path_input)
dim(ds)
str(ds)
# head(ds)
# ---- specify the models-------------------------
source("./scripts/functions-for-glm-models.R")
# ---- subset---------------------------------------
#phys_bl_bp = each person's baseline score - the baseline grand mean (between-person baseline comparison)
#phys_bp = each person's mean score - the grand mean (between-person comparison)
#phys_wp = that person's score at time j, minus their own mean (deviation-from-own-mean)
#mmse_wp = the person's mmse score at time j, minus their mean, so that a positive value indicates scoring higher than their own mean
#biomarker_HIGH = indicates if the person's mean on that particular biomarker is above the 3rd quartile
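#a quick sketch (hypothetical; the real *_HIGH flags arrive pre-computed in ds)
#of how such a flag could be derived from person means:
# chol_means <- tapply(raw$cholesterol, raw$id, mean, na.rm = TRUE)  # 'raw' is hypothetical
# cholesterol_HIGH <- chol_means > quantile(chol_means, 0.75)        # above 3rd quartile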
#subset
dwn <- ds %>%
dplyr::select(id, full_year, age_bl, al_count, phys_bl_bp, phys_bp, phys_wp, mmse, mmse_wp, age_bl_centered, female, edu) %>%
na.omit()
#19809 observations
#subset for biomarker models
dbio<- ds %>%
dplyr::select(id, full_year, age_bl, phys_bl_bp, phys_bp, phys_wp, mmse, mmse_wp, age_bl_centered, female, edu, al_count,
apoe, cholesterol, hemoglobin, hdlratio, hdl,ldl, glucose, creatine, #raw scores
cholesterol_HIGH, hemoglobin_HIGH, hdlratio_HIGH, hdl_LOW,ldl_HIGH, glucose_HIGH,creatine_HIGH) %>%
na.omit()
#4082 observations
# physical activity question: hours per week that the ppt. engages in 5 different categories
# ---- quality check variables ----------------------------------------------
#mmse
boxplot(dwn$mmse)
# bpmmse<-boxplot(dwn$mmse)
hist(dwn$mmse)
# t(table(dwn$mmse, useNA = "always"))
#not normal dist
|
# Course Assignment, Getting and Cleaning Data
# P. Pulley
library(dplyr)
# Download and extract zip files; Load tables. Stitch together "test" and "train" tables.
if(!file.exists("./Assign")){dir.create("./Assign")}
fileUrl <- "https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip"
download.file(fileUrl, "./Assign/zipdata.zip", method = "curl", mode = "wb")
unzip("./Assign/zipdata.zip", exdir = "./Assign")
subjtest <- read.table("./Assign/UCI HAR Dataset/test/subject_test.txt")
ytest <- read.table("./Assign/UCI HAR Dataset/test/y_test.txt")
xtest <- read.table("./Assign/UCI HAR Dataset/test/X_test.txt")
test <- cbind(subjtest, ytest, xtest)
subjtrain <- read.table("./Assign/UCI HAR Dataset/train/subject_train.txt")
ytrain <- read.table("./Assign/UCI HAR Dataset/train/y_train.txt")
xtrain <- read.table("./Assign/UCI HAR Dataset/train/X_train.txt")
train <- cbind(subjtrain, ytrain, xtrain)
# Stitch "train" and "test" tables together to form large raw data frame
datastitch <- rbind(train, test)
# Add features to raw data as a header. Add column names for subject (person) and activity
features <- read.table("./Assign/UCI HAR Dataset/features.txt")
header <- c("person","activity", as.character(features$V2))
colnames(datastitch) <- header
# Transform person and activity columns to categorical "factor" variables
datastitch$person <- factor(datastitch$person)
actlabs <- read.table("./Assign/UCI HAR Dataset/activity_labels.txt")
datastitch$activity <- factor(datastitch$activity, labels = tolower(as.character(actlabs$V2)))
# Extract variables that have only "mean()" or "std()" in the header; Subset the data frame
names <- names(datastitch)
namesextract <- names[grep("mean\\(\\)|std\\(\\)", names)]
datatrim <- datastitch[,c("person", "activity", namesextract)]
# Calculate the mean of each mean/std column, grouped by person and activity
summ <- datatrim %>% group_by(person, activity) %>% summarize_each(funs(mean))
summ
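# Note: summarize_each()/funs() are deprecated in newer dplyr; an across()
# equivalent (sketch, requires dplyr >= 1.0):
# summ <- datatrim %>% group_by(person, activity) %>%
#   summarize(across(everything(), mean), .groups = "drop")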
| /run_analysis.R | no_license | pulleyps/Course3_Cleaning_Data_Assignment | R | false | false | 1,975 | r | # Course Assignment, Getting and Cleaning Data
# P. Pulley
library(dplyr)
# Download and extract zip files; Load tables. Stitch together "test" and "train" tables.
if(!file.exists("./Assign")){dir.create("./Assign")}
fileUrl <- "https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip"
download.file(fileUrl, "./Assign/zipdata.zip", method = "curl", mode = "wb")
unzip("./Assign/zipdata.zip", exdir = "./Assign")
subjtest <- read.table("./UCI HAR Dataset/test/subject_test.txt")
ytest <- read.table("./UCI HAR Dataset/test/y_test.txt")
xtest <- read.table("./UCI HAR Dataset/test/X_test.txt")
test <- cbind(subjtest, ytest, xtest)
subjtrain <- read.table("./UCI HAR Dataset/train/subject_train.txt")
ytrain <- read.table("./UCI HAR Dataset/train/y_train.txt")
xtrain <- read.table("./UCI HAR Dataset/train/X_train.txt")
train <- cbind(subjtrain, ytrain, xtrain)
# Stitch "train" and "test" tables together to form large raw data frame
datastitch <- rbind(train, test)
# Add features to raw data as a header. Add column names for subject (person) and activity
features <- read.table("./UCI HAR Dataset/features.txt")
header <- c("person","activity", as.character(features$V2))
colnames(datastitch) <- header
# Transform person and activity columns to categorical "factor" variables
datastitch$person <- factor(datastitch$person)
actlabs <- read.table("./UCI HAR Dataset/activity_labels.txt")
datastitch$activity <- factor(datastitch$activity, labels = tolower(as.character(actlabs$V2)))
# Extract variables that have only "mean()" or "std()" in the header; Subset the data frame
names <- names(datastitch)
namesextract <- names[grep("mean\\(\\)|std\\(\\)", names)]
datatrim <- datastitch[,c("person", "activity", namesextract)]
# Calculate the mean of each mean/std column, grouped by person and activity
summ <- datatrim %>% group_by(person, activity) %>% summarize_each(funs(mean))
summ
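The assignment script stops after printing the summary; a possible final step is to export the tidy data set as well (the output filename below is an assumption, not part of the original script):

```r
# Write the tidy summary table to file (filename is illustrative)
write.table(summ, "./Assign/tidydata.txt", row.names = FALSE)
```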
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/mcmcIterator.R
\name{nextStep}
\alias{nextStep}
\title{next step of an MCMC chain}
\usage{
nextStep(object)
}
\arguments{
\item{object}{an mcmc loop object}
}
\description{
just a wrapper for nextElem really.
}
### Terrence D. Jorgensen
### Last updated: 21 February 2019
##' Parcel-Allocation Variability in Model Ranking
##'
##' This function quantifies and assesses the consequences of parcel-allocation
##' variability for model ranking of structural equation models (SEMs) that
##' differ in their structural specification but share the same parcel-level
##' measurement specification (see Sterba & Rights, 2017). This function calls
##' \code{\link{parcelAllocation}}---which can be used with only one SEM in
##' isolation---to fit two (assumed) nested models to each of a specified number
##' of random item-to-parcel allocations. Output includes summary information
##' about the distribution of model selection results (including plots) and the
##' distribution of results for each model individually, across allocations
##' within-sample. Note that this function can be used when selecting among more
##' than two competing structural models as well (see instructions below
##' involving the \code{iseed} argument).
##'
##' This is based on a SAS macro \code{ParcelAlloc} (Sterba & MacCallum, 2010).
##' The \code{PAVranking} function produces results discussed in Sterba and
##' Rights (2017) relevant to the assessment of parcel-allocation variability in
##' model selection and model ranking. Specifically, the \code{PAVranking}
##' function first calls \code{\link{parcelAllocation}} to generate a given
##' number (\code{nAlloc}) of item-to-parcel allocations, fitting both specified
##' models to each allocation, and providing summaries of PAV for each model.
##' Additionally, \code{PAVranking} provides the following new summaries:
##'
##' \itemize{
##' \item{PAV in model selection index values and model ranking between
##' Models \code{model0} and \code{model1}.}
##' \item{The proportion of allocations that converged and the proportion of
##' proper solutions (results are summarized only for allocations that
##' both converged and yielded proper solutions).}
##' }
##'
##' For further details on the benefits of the random allocation of items to
##' parcels, see Sterba (2011) and Sterba and MacCallum (2010).
##'
##' To test whether nested models have equivalent fit, results can be pooled
##' across allocations using the same methods available for pooling results
##' across multiple imputations of missing data (see \bold{Examples}).
##'
##' \emph{Note}: This function requires the \code{lavaan} package. Missing data
##' must be coded as \code{NA}. If the function returns \code{"Error in
##' plot.new() : figure margins too large"}, the user may need to increase the
##' size of the plot window (e.g., in RStudio) and rerun the function.
##'
##'
##' @importFrom stats sd
##' @importFrom lavaan parTable lavListInspect lavaanList
##' @importFrom graphics hist
##'
##' @param model0,model1 \code{\link[lavaan]{lavaan}} model syntax specifying
##' nested models (\code{model0} within \code{model1}) to be fitted
##' to the same parceled data. Note that there can be a mixture of
##' items and parcels (even within the same factor), in case certain items
##' should never be parceled. Can be a character string or parameter table.
##' Also see \code{\link[lavaan]{lavaanify}} for more details.
##' @param data A \code{data.frame} containing all observed variables appearing
##' in the \code{model}, as well as those in the \code{item.syntax} used to
##' create parcels. If the data have missing values, multiple imputation
##' before parceling is recommended: submit a stacked data set (with a variable
##' for the imputation number, so they can be separated later) and set
##' \code{do.fit = FALSE} to return the list of \code{data.frame}s (one per
##' allocation), each of which is a stacked, imputed data set with parcels.
##' @param parcel.names \code{character} vector containing names of all parcels
##' appearing as indicators in \code{model}.
##' @param item.syntax \link[lavaan]{lavaan} model syntax specifying the model
##' that would be fit to all of the unparceled items, including items that
##' should be randomly allocated to parcels appearing in \code{model}.
##' @param nAlloc The number of random items-to-parcels allocations to generate.
##' @param fun \code{character} string indicating the name of the
##' \code{\link[lavaan]{lavaan}} function used to fit \code{model} to
##' \code{data}. Can only take the values \code{"lavaan"}, \code{"sem"},
##' \code{"cfa"}, or \code{"growth"}.
##' @param alpha Alpha level used as criterion for significance.
##' @param bic.crit Criterion for assessing evidence in favor of one model
##' over another. See Raftery (1995) for guidelines (default is "very
##' strong evidence" in favor of the model with lower BIC).
##' @param fit.measures \code{character} vector containing names of fit measures
##' to request from each fitted \code{\link[lavaan]{lavaan}} model. See the
##' output of \code{\link[lavaan]{fitMeasures}} for a list of available measures.
##' @param \dots Additional arguments to be passed to
##' \code{\link[lavaan]{lavaanList}}. See also \code{\link[lavaan]{lavOptions}}
##' @param show.progress If \code{TRUE}, show a \code{\link[utils]{txtProgressBar}}
##' indicating how fast each model-fitting iterates over allocations.
##' @param iseed (Optional) Random seed used for parceling items. When the same
##' random seed is specified and the program is re-run, the same allocations
##' will be generated. The \code{iseed} argument can be used to assess parcel-allocation
##' variability in model ranking when considering more than two models. For each
##' pair of models under comparison, the program should be rerun using the same
##' random seed. Doing so ensures that multiple model comparisons will employ
##' the same set of parcel datasets. \emph{Note}: When using \pkg{parallel}
##' options, you must first type \code{RNGkind("L'Ecuyer-CMRG")} into the R
##' Console, so that the seed will be controlled across cores.
##' @param warn Whether to print warnings when fitting models to each allocation
##'
##' @return
##' \item{model0.results}{Results returned by \code{\link{parcelAllocation}}
##' for \code{model0} (see the \bold{Value} section).}
##' \item{model1.results}{Results returned by \code{\link{parcelAllocation}}
##' for \code{model1} (see the \bold{Value} section).}
##' \item{model0.v.model1}{A \code{list} of model-comparison results, including
##' the following: \itemize{
##' \item{\code{LRT_Summary:}}{ The average likelihood ratio test across
##' allocations, as well as the \emph{SD}, minimum, maximum, range, and the
##' proportion of allocations for which the test was significant.}
##' \item{\code{Fit_Index_Differences:}}{ Differences in fit indices, organized
##' by what proportion favored each model and among those, what the average
##' difference was.}
##' \item{\code{Favored_by_BIC:}}{ The proportion of allocations in which each
##' model met the criterion (\code{bic.crit}) for a substantial difference
##' in fit.}
##' \item{\code{Convergence_Summary:}}{ The proportion of allocations in which
##' each model (and both models) converged on a solution.}
##' } Histograms are also printed to the current plot-output device.}
##'
##' @author
##' Terrence D. Jorgensen (University of Amsterdam; \email{TJorgensen314@@gmail.com})
##'
##' @seealso \code{\link{parcelAllocation}} for fitting a single model,
##' \code{\link{poolMAlloc}} for choosing the number of allocations
##'
##' @references
##' Raftery, A. E. (1995). Bayesian model selection in social
##' research. \emph{Sociological Methodology, 25}, 111--163. doi:10.2307/271063
##'
##' Sterba, S. K. (2011). Implications of parcel-allocation variability for
##' comparing fit of item-solutions and parcel-solutions. \emph{Structural
##' Equation Modeling: A Multidisciplinary Journal, 18}(4), 554--577.
##' doi:10.1080/10705511.2011.607073
##'
##' Sterba, S. K., & MacCallum, R. C. (2010). Variability in parameter estimates
##' and model fit across repeated allocations of items to parcels.
##' \emph{Multivariate Behavioral Research, 45}(2), 322--358.
##' doi:10.1080/00273171003680302
##'
##' Sterba, S. K., & Rights, J. D. (2017). Effects of parceling on model
##' selection: Parcel-allocation variability in model ranking.
##' \emph{Psychological Methods, 22}(1), 47--68. doi:10.1037/met0000067
##'
##' @examples
##'
##' ## Specify the item-level model (if NO parcels were created)
##' ## This must apply to BOTH competing models
##'
##' item.syntax <- c(paste0("f1 =~ f1item", 1:9),
##' paste0("f2 =~ f2item", 1:9))
##' cat(item.syntax, sep = "\n")
##' ## Below, we reduce the size of this same model by
##' ## applying different parceling schemes
##'
##' ## Specify a 2-factor CFA with correlated factors, using 3-indicator parcels
##' mod1 <- '
##' f1 =~ par1 + par2 + par3
##' f2 =~ par4 + par5 + par6
##' '
##' ## Specify a more restricted model with orthogonal factors
##' mod0 <- '
##' f1 =~ par1 + par2 + par3
##' f2 =~ par4 + par5 + par6
##' f1 ~~ 0*f2
##' '
##' ## names of parcels (must apply to BOTH models)
##' (parcel.names <- paste0("par", 1:6))
##'
##' \dontrun{
##' ## override default random-number generator to use parallel options
##' RNGkind("L'Ecuyer-CMRG")
##'
##' PAVranking(model0 = mod0, model1 = mod1, data = simParcel, nAlloc = 100,
##' parcel.names = parcel.names, item.syntax = item.syntax,
##'            std.lv = TRUE,  # any additional lavaan arguments
##' parallel = "snow") # parallel options
##'
##'
##'
##' ## POOL RESULTS by treating parcel allocations as multiple imputations
##'
##' ## save list of data sets instead of fitting model yet
##' dataList <- parcelAllocation(mod1, data = simParcel, nAlloc = 100,
##' parcel.names = parcel.names,
##' item.syntax = item.syntax,
##' do.fit = FALSE)
##' ## now fit each model to each data set
##' fit0 <- cfa.mi(mod0, data = dataList, std.lv = TRUE)
##' fit1 <- cfa.mi(mod1, data = dataList, std.lv = TRUE)
##' anova(fit0, fit1) # pooled test statistic comparing models
##' class?lavaan.mi # find more methods for pooling results
##' }
##'
##' @export
PAVranking <- function(model0, model1, data, parcel.names, item.syntax,
nAlloc = 100, fun = "sem", alpha = .05, bic.crit = 10,
fit.measures = c("chisq","df","cfi","tli","rmsea",
"srmr","logl","aic","bic","bic2"), ...,
show.progress = FALSE, iseed = 12345, warn = FALSE) {
if (alpha >= 1 || alpha <= 0) stop('alpha level must be between 0 and 1')
bic.crit <- abs(bic.crit)
## fit each model
out0 <- parcelAllocation(model = model0, data = data, nAlloc = nAlloc,
parcel.names = parcel.names, item.syntax = item.syntax,
fun = fun, alpha = alpha, fit.measures = fit.measures,
..., show.progress = show.progress, iseed = iseed,
return.fit = TRUE, warn = warn)
out1 <- parcelAllocation(model = model1, data = data, nAlloc = nAlloc,
parcel.names = parcel.names, item.syntax = item.syntax,
fun = fun, alpha = alpha, fit.measures = fit.measures,
..., show.progress = show.progress, iseed = iseed,
return.fit = TRUE, warn = warn)
## convergence summary
conv0 <- out0$Model@meta$ok
conv1 <- out1$Model@meta$ok
conv01 <- conv0 & conv1
conv <- data.frame(Proportion_Converged = sapply(list(conv0, conv1, conv01), mean),
row.names = c("model0","model1","Both_Models"))
## add proper solutions? I would advise against
## check df matches assumed nesting
DF0 <- out0$Fit["df", "Avg"]
DF1 <- out1$Fit["df", "Avg"]
if (DF0 == DF1) stop('Models have identical df, so they cannot be compared.')
if (DF0 < DF1) {
  warning('model0 should be nested within model1, but df_0 < df_1. ',
          'Swapping models so the more restricted model is treated as model0.')
  temp <- out0
  out0 <- out1
  out1 <- temp
}
## Re-run lavaanList to collect model-comparison results
if (show.progress) message('Re-fitting model0 to collect model-comparison ',
'statistics\n')
oldCall <- out0$Model@call
oldCall$model <- parTable(out0$Model)
oldCall$dataList <- out0$Model@external$dataList[conv01]
if (!is.null(oldCall$parallel)) {
if (oldCall$parallel == "snow") {
oldCall$parallel <- "no"
oldCall$ncpus <- 1L
if (warn) warning("Unable to pass lavaan::lavTestLRT() arguments when ",
"parallel = 'snow'. Switching to parallel = 'no'. ",
"Unless using Windows, parallel = 'multicore' works.")
}
}
PT1 <- parTable(out1$Model)
op1 <- lavListInspect(out1$Model, "options")
oldCall$FUN <- function(obj) {
fit1 <- try(lavaan::lavaan(model = PT1, slotOptions = op1,
slotData = obj@Data), silent = TRUE)
if (inherits(fit1, "try-error")) return("fit failed")
out <- try(lavaan::lavTestLRT(obj, fit1), silent = TRUE)
if (inherits(out, "try-error")) return("lavTestLRT() failed")
out
}
fit01 <- eval(as.call(oldCall))
## check if there are any results
noLRT <- sapply(fit01@funList, is.character)
if (all(noLRT)) stop("No success using lavTestLRT() on any allocations.")
## anova() results
CHIs <- sapply(fit01@funList[!noLRT], "[", i = 2, j = "Chisq diff")
pVals <- sapply(fit01@funList[!noLRT], "[", i = 2, j = "Pr(>Chisq)")
LRT <- c(`Avg LRT` = mean(CHIs), df = abs(DF0 - DF1), SD = sd(CHIs),
Min = min(CHIs), Max = max(CHIs), Range = max(CHIs) - min(CHIs),
`% Sig` = mean(pVals < alpha))
class(LRT) <- c("lavaan.vector","numeric")
## differences in fit indices
indices <- fit.measures[!grepl(pattern = "chisq|df|pvalue", fit.measures)]
Fit0 <- do.call(cbind, out0$Model@funList[conv01])[indices, ]
Fit1 <- do.call(cbind, out1$Model@funList[conv01])[indices, ]
## higher values for model0
Fit01 <- Fit0 - Fit1
higher0 <- Fit0 > Fit1
perc0 <- rowMeans(higher0)
avg0 <- rowMeans(Fit01 * higher0)
## higher values for model1
Fit10 <- Fit1 - Fit0
higher1 <- Fit1 > Fit0
perc1 <- rowMeans(higher1)
avg1 <- rowMeans(Fit10 * higher1)
fitDiff <- data.frame(Prop0 = perc0, Avg0 = avg0, Prop1 = perc1, Avg1 = avg1)
class(fitDiff) <- c("lavaan.data.frame","data.frame")
attr(fitDiff, "header") <- paste("Note: Higher values of goodness-of-fit",
"indices (e.g., CFI) favor that model, but",
"higher values of badness-of-fit indices",
"(e.g., RMSEA) indicate the competing model",
"is favored.\n\n'Prop0' indicates the",
"proportion of allocations for which each",
"index was higher for model0 (likewise,",
"'Prop1' indicates this for model1).\n",
"\nAmong those allocations, 'Avg0' or 'Avg1'",
"indicates the average amount by which the",
"index was higher for that model.")
## favored by BIC
favorBIC <- NULL
if (any(grepl(pattern = "bic", fit.measures))) {
if ("bic" %in% fit.measures) {
highBIC <- abs(Fit01["bic",]) >= bic.crit
favor0 <- mean(higher1["bic",] & highBIC)
favor1 <- mean(higher0["bic",] & highBIC)
favorBIC <- data.frame("bic" = c(favor0, favor1),
row.names = paste("Evidence Favoring Model", 0:1))
}
if ("bic2" %in% fit.measures) {
highBIC <- abs(Fit01["bic2",]) >= bic.crit
favor0 <- mean(higher1["bic2",] & highBIC)
favor1 <- mean(higher0["bic2",] & highBIC)
favorBIC2 <- data.frame("bic2" = c(favor0, favor1),
row.names = paste("Evidence Favoring Model", 0:1))
if (is.null(favorBIC)) {
favorBIC <- favorBIC2
} else favorBIC <- cbind(favorBIC, favorBIC2)
}
#TODO: add bic.priorN from moreFitIndices()
class(favorBIC) <- c("lavaan.data.frame","data.frame")
attr(favorBIC, "header") <- paste("Percent of Allocations with |BIC Diff| >",
bic.crit)
}
## return results
list(Model0_Results = out0[c("Estimates","SE","Fit")],
Model1_Results = out1[c("Estimates","SE","Fit")],
Model0.v.Model1 = list(LRT_Summary = LRT,
Fit_Index_Differences = fitDiff,
Favored_by_BIC = favorBIC,
Convergence_Summary = conv))
}
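A minimal usage sketch of the returned structure (a sketch only: `mod0`, `mod1`, `item.syntax`, and `simParcel` are assumed to be defined as in the roxygen examples above; the accessed components follow the list returned at the end of the function):

```r
## Sketch, not a definitive workflow -- see the @examples section above
pav <- PAVranking(model0 = mod0, model1 = mod1, data = simParcel,
                  parcel.names = paste0("par", 1:6),
                  item.syntax = item.syntax, nAlloc = 100)
pav$Model0.v.Model1$LRT_Summary          # average LRT across allocations, % significant
pav$Model0.v.Model1$Favored_by_BIC       # proportion of decisive BIC differences
pav$Model0.v.Model1$Convergence_Summary  # proportion of converged allocations
```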
## ------------
## old function
## ------------
## @param nPerPar A list in which each element is a vector, corresponding to
## each factor, indicating sizes of parcels. If variables are left out of
## parceling, they should not be accounted for here (i.e., there should not be
## parcels of size "1").
## @param facPlc A list of vectors, each corresponding to a factor, specifying
## the item indicators of that factor (whether included in parceling or not).
## Either variable names or column numbers. Variables not listed will not be
## modeled or included in output datasets.
## @param nAlloc The number of random allocations of items to parcels to
## generate.
## @param syntaxA lavaan syntax for Model A. Note that, for likelihood ratio
## test (LRT) results to be appropriate, Model A should be nested within Model
## B (though the function will still provide results when Models A and B are
## nonnested).
## @param syntaxB lavaan syntax for Model B. Note that, for likelihood ratio
## test (LRT) results to be appropriate, Model A should be nested within Model
## B (though the function will still provide results when Models A and B are
## nonnested).
## @param dataset Item-level dataset
## @param parceloutput folder where parceled data sets will be outputted (note
## for Windows users: file path must be specified using forward slashes).
## @param names (Optional) A character vector containing the names of parceled
## variables.
## @param leaveout (Optional) A vector of variables to be left out of
## randomized parceling. Either variable names or column numbers are allowed.
## @examples
##
## \dontrun{
## ## lavaan syntax for Model A: a 2-factor CFA model with
## ## uncorrelated factors, to be fit to parceled data
##
## parmodelA <- '
## f1 =~ NA*p1f1 + p2f1 + p3f1
## f2 =~ NA*p1f2 + p2f2 + p3f2
## p1f1 ~ 1
## p2f1 ~ 1
## p3f1 ~ 1
## p1f2 ~ 1
## p2f2 ~ 1
## p3f2 ~ 1
## p1f1 ~~ p1f1
## p2f1 ~~ p2f1
## p3f1 ~~ p3f1
## p1f2 ~~ p1f2
## p2f2 ~~ p2f2
## p3f2 ~~ p3f2
## f1 ~~ 1*f1
## f2 ~~ 1*f2
## f1 ~~ 0*f2
## '
##
## ## lavaan syntax for Model B: a 2-factor CFA model with
## ## correlated factors, to be fit to parceled data
##
## parmodelB <- '
## f1 =~ NA*p1f1 + p2f1 + p3f1
## f2 =~ NA*p1f2 + p2f2 + p3f2
## p1f1 ~ 1
## p2f1 ~ 1
## p3f1 ~ 1
## p1f2 ~ 1
## p2f2 ~ 1
## p3f2 ~ 1
## p1f1 ~~ p1f1
## p2f1 ~~ p2f1
## p3f1 ~~ p3f1
## p1f2 ~~ p1f2
## p2f2 ~~ p2f2
## p3f2 ~~ p3f2
## f1 ~~ 1*f1
## f2 ~~ 1*f2
## f1 ~~ f2
## '
##
## ## specify items for each factor
## f1name <- colnames(simParcel)[1:9]
## f2name <- colnames(simParcel)[10:18]
##
## ## run function
## PAVranking(nPerPar = list(c(3,3,3), c(3,3,3)), facPlc = list(f1name,f2name),
## nAlloc = 100, parceloutput = 0, leaveout = 0,
## syntaxA = parmodelA, syntaxB = parmodelB, dataset = simParcel,
## names = list("p1f1","p2f1","p3f1","p1f2","p2f2","p3f2"))
## }
##
# PAVranking <- function(nPerPar, facPlc, nAlloc = 100, parceloutput = 0, syntaxA, syntaxB,
# dataset, names = NULL, leaveout = 0, seed = NA, ...) {
# if (is.null(names))
# names <- matrix(NA, length(nPerPar), 1)
# ## set random seed if specified
# if (is.na(seed) == FALSE)
# set.seed(seed)
# ## allow many tables to be outputted
# options(max.print = 1e+06)
#
# ## Create parceled datasets
# if (is.character(dataset)) dataset <- utils::read.csv(dataset)
# dataset <- as.matrix(dataset)
#
# if (nAlloc < 2)
# stop("Minimum of two allocations required.")
#
# if (is.list(facPlc)) {
# if (is.numeric(facPlc[[1]][1]) == FALSE) {
# facPlcb <- facPlc
# Namesv <- colnames(dataset)
#
# for (i in 1:length(facPlc)) {
# for (j in 1:length(facPlc[[i]])) {
# facPlcb[[i]][j] <- match(facPlc[[i]][j], Namesv)
# }
# facPlcb[[i]] <- as.numeric(facPlcb[[i]])
# }
# facPlc <- facPlcb
# }
#
# # facPlc2 <- rep(0, sum(sapply(facPlc, length)))
# facPlc2 <- rep(0, ncol(dataset))
#
# for (i in 1:length(facPlc)) {
# for (j in 1:length(facPlc[[i]])) {
# facPlc2[facPlc[[i]][j]] <- i
# }
# }
# facPlc <- facPlc2
# }
#
# if (leaveout != 0) {
# if (is.numeric(leaveout) == FALSE) {
# leaveoutb <- rep(0, length(leaveout))
# Namesv <- colnames(dataset)
#
# for (i in 1:length(leaveout)) {
# leaveoutb[i] <- match(leaveout[i], Namesv)
# }
# leaveout <- as.numeric(leaveoutb)
# }
# k1 <- 0.001
# for (i in 1:length(leaveout)) {
# facPlc[leaveout[i]] <- facPlc[leaveout[i]] + k1
# k1 <- k1 + 0.001
# }
# }
#
# if (0 %in% facPlc == TRUE) {
# Zfreq <- sum(facPlc == 0)
# for (i in 1:Zfreq) {
# Zplc <- match(0, facPlc)
# dataset <- dataset[, -Zplc]
# facPlc <- facPlc[-Zplc]
# }
# ## this allows for unused variables in dataset, which are specified by zeros, and
# ## deleted
# }
#
# if (is.list(nPerPar)) {
# nPerPar2 <- c()
# for (i in 1:length(nPerPar)) {
# Onesp <- sum(facPlc > i & facPlc < i + 1)
# nPerPar2 <- c(nPerPar2, nPerPar[i], rep(1, Onesp), recursive = TRUE)
# }
# nPerPar <- nPerPar2
# }
#
# Npp <- c()
# for (i in 1:length(nPerPar)) {
# Npp <- c(Npp, rep(i, nPerPar[i]))
# }
#
# Locate <- sort(round(facPlc))
# Maxv <- max(Locate) - 1
#
# if (length(Locate) != length(Npp))
# stop("Parcels incorrectly specified. Check input!")
#
# if (Maxv > 0) {
# ## Bug was here. With 1 factor Maxv=0. Skip this with a single factor
# for (i in 1:Maxv) {
# Mat <- match(i + 1, Locate)
# if (Npp[Mat] == Npp[Mat - 1])
# stop("Parcels incorrectly specified. Check input!")
# }
# }
# ## warning message if parcel crosses into multiple factors vector, parcel to which
# ## each variable belongs vector, factor to which each variables belongs if
# ## variables are in the same parcel, but different factors error message given in
# ## output
#
# Onevec <- facPlc - round(facPlc)
# NleaveA <- length(Onevec) - sum(Onevec == 0)
# NleaveP <- sum(nPerPar == 1)
#
# if (NleaveA < NleaveP)
# warning("Single-variable parcels have been requested. Check input!")
#
# if (NleaveA > NleaveP)
# warning("More non-parceled variables have been", " requested than provided for in parcel",
# " vector. Check input!")
#
# if (length(names) > 1) {
# if (length(names) != length(nPerPar))
# warning("Number of parcel names provided not equal to number", " of parcels requested")
# }
#
# Data <- c(1:ncol(dataset))
# ## creates a vector of the number of indicators e.g. for three indicators, c(1, 2,
# ## 3)
# Nfactors <- max(facPlc)
# ## scalar, number of factors
# Nindicators <- length(Data)
# ## scalar, number of indicators
# Npar <- length(nPerPar)
# ## scalar, number of parcels
# Rmize <- runif(Nindicators, 1, Nindicators)
# ## create vector of randomly ordered numbers, length of number of indicators
#
# Data <- rbind(facPlc, Rmize, Data)
# ## 'Data' becomes object of three rows, consisting of 1) factor to which each
# ## indicator belongs (in order to preserve indicator/factor assignment during
# ## randomization) 2) randomly order numbers 3) indicator number
#
# Results <- matrix(numeric(0), nAlloc, Nindicators)
# ## create empty matrix for parcel allocation matrix
#
# Pin <- nPerPar[1]
# for (i in 2:length(nPerPar)) {
# Pin <- c(Pin, nPerPar[i] + Pin[i - 1])
# ## creates vector which indicates the range of columns (endpoints) in each parcel
# }
#
# for (i in 1:nAlloc) {
# Data[2, ] <- runif(Nindicators, 1, Nindicators)
# ## Replace second row with newly randomly ordered numbers
#
# Data <- Data[, order(Data[2, ])]
# ## Order the columns according to the values of the second row
#
# Data <- Data[, order(Data[1, ])]
# ## Order the columns according to the values of the first row in order to preserve
# ## factor assignment
#
# Results[i, ] <- Data[3, ]
# ## assign result to allocation matrix
# }
#
# Alpha <- rbind(Results[1, ], dataset)
# ## bind first random allocation to dataset 'Alpha'
#
# Allocations <- list()
# ## create empty list for allocation data matrices
#
# for (i in 1:nAlloc) {
# Ineff <- rep(NA, ncol(Results))
# Ineff2 <- c(1:ncol(Results))
# for (inefficient in 1:ncol(Results)) {
# Ineff[Results[i, inefficient]] <- Ineff2[inefficient]
# }
#
# Alpha[1, ] <- Ineff
# ## replace first row of dataset matrix with row 'i' from allocation matrix
#
# Beta <- Alpha[, order(Alpha[1, ])]
# ## arrange dataset columns by values of first row; assign to temporary matrix
# ## 'Beta'
#
# Temp <- matrix(NA, nrow(dataset), Npar)
# ## create empty matrix for averaged parcel variables
#
# TempAA <- if (length(1:Pin[1]) > 1)
# Beta[2:nrow(Beta), 1:Pin[1]] else cbind(Beta[2:nrow(Beta), 1:Pin[1]], Beta[2:nrow(Beta), 1:Pin[1]])
# Temp[, 1] <- rowMeans(TempAA, na.rm = TRUE)
# ## fill first column with averages from assigned indicators
# for (al in 2:Npar) {
# Plc <- Pin[al - 1] + 1
# ## placeholder variable for determining parcel width
# TempBB <- if (length(Plc:Pin[al]) > 1)
# Beta[2:nrow(Beta), Plc:Pin[al]] else cbind(Beta[2:nrow(Beta), Plc:Pin[al]], Beta[2:nrow(Beta), Plc:Pin[al]])
# Temp[, al] <- rowMeans(TempBB, na.rm = TRUE)
# ## fill remaining columns with averages from assigned indicators
# }
# if (length(names) > 1)
# colnames(Temp) <- names
# Allocations[[i]] <- Temp
# ## assign result to list of parcel datasets
# }
#
# ## Write parceled datasets
# if (as.vector(regexpr("/", parceloutput)) != -1) {
# replist <- matrix(NA, nAlloc, 1)
# for (i in 1:nAlloc) {
# ## if (is.na(names)==TRUE) names <- matrix(NA,nrow(
# colnames(Allocations[[i]]) <- names
# utils::write.table(Allocations[[i]], paste(parceloutput, "/parcelruns", i,
# ".dat", sep = ""),
# row.names = FALSE, col.names = TRUE)
# replist[i, 1] <- paste("parcelruns", i, ".dat", sep = "")
# }
# utils::write.table(replist, paste(parceloutput, "/parcelrunsreplist.dat",
# sep = ""),
# quote = FALSE, row.names = FALSE, col.names = FALSE)
# }
#
#
# ## Model A estimation
#
# {
# Param_A <- list()
# ## list for parameter estimated for each imputation
# Fitind_A <- list()
# ## list for fit indices estimated for each imputation
# Converged_A <- list()
# ## list for whether or not each allocation converged
# ProperSolution_A <- list()
# ## list for whether or not each allocation has proper solutions
# ConvergedProper_A <- list()
# ## list for whether or not each allocation converged and has proper solutions
#
# for (i in 1:nAlloc) {
# data_A <- as.data.frame(Allocations[[i]], row.names = NULL, optional = FALSE)
# ## convert allocation matrix to dataframe for model estimation
# fit_A <- lavaan::sem(syntaxA, data = data_A, ...)
# ## estimate model in lavaan
# if (lavInspect(fit_A, "converged") == TRUE) {
# Converged_A[[i]] <- 1
# } else Converged_A[[i]] <- 0
# ## determine whether or not each allocation converged
# Param_A[[i]] <- lavaan::parameterEstimates(fit_A)[, c("lhs", "op", "rhs",
# "est", "se", "z", "pvalue", "ci.lower", "ci.upper")]
# ## assign allocation parameter estimates to list
# if (lavInspect(fit_A, "post.check") == TRUE & Converged_A[[i]] == 1) {
# ProperSolution_A[[i]] <- 1
# } else ProperSolution_A[[i]] <- 0
# ## determine whether or not each allocation has proper solutions
# if (any(is.na(Param_A[[i]][, 5])))
# ProperSolution_A[[i]] <- 0
# ## make sure each allocation has existing SE
# if (Converged_A[[i]] == 1 & ProperSolution_A[[i]] == 1) {
# ConvergedProper_A[[i]] <- 1
# } else ConvergedProper_A[[i]] <- 0
# ## determine whether or not each allocation converged and has proper solutions
#
# if (ConvergedProper_A[[i]] == 0)
# Param_A[[i]][, 4:9] <- matrix(data = NA, nrow(Param_A[[i]]), 6)
# ## make parameter estimates null for nonconverged, improper solutions
#
# if (ConvergedProper_A[[i]] == 1) {
# Fitind_A[[i]] <- lavaan::fitMeasures(fit_A, c("chisq", "df", "cfi",
# "tli", "rmsea", "srmr", "logl", "bic", "aic"))
# } else Fitind_A[[i]] <- c(NA, NA, NA, NA, NA, NA, NA, NA, NA)
# ### assign allocation parameter estimates to list
#
# }
#
#
# nConverged_A <- Reduce("+", Converged_A)
# ## count number of converged allocations
#
# nProperSolution_A <- Reduce("+", ProperSolution_A)
# ## count number of allocations with proper solutions
#
# nConvergedProper_A <- Reduce("+", ConvergedProper_A)
# ## count number of allocations that both converged and yielded proper solutions
#
# if (nConvergedProper_A == 0)
# stop("All allocations failed to converge and/or yielded improper solutions for Model A and/or B.")
# ## stop program if no allocations converge
#
# Parmn_A <- Param_A[[1]]
# ## assign first parameter estimates to mean dataframe
#
# ParSE_A <- matrix(NA, nrow(Parmn_A), nAlloc)
# ParSEmn_A <- Parmn_A[, 5]
#
# Parsd_A <- matrix(NA, nrow(Parmn_A), nAlloc)
# ## assign parameter estimates for S.D. calculation
#
# Fitmn_A <- Fitind_A[[1]]
# ## assign first fit indices to mean dataframe
#
# Fitsd_A <- matrix(NA, length(Fitmn_A), nAlloc)
# ## assign fit indices for S.D. calculation
#
# Sigp_A <- matrix(NA, nrow(Parmn_A), nAlloc)
# ## assign p-values to calculate percentage significant
#
# Fitind_A <- data.frame(Fitind_A)
# ### convert fit index table to data frame
#
# for (i in 1:nAlloc) {
#
# Parsd_A[, i] <- Param_A[[i]][, 4]
# ## assign parameter estimates for S.D. estimation
#
# ParSE_A[, i] <- Param_A[[i]][, 5]
#
# if (i > 1) {
# ParSEmn_A <- rowSums(cbind(ParSEmn_A, Param_A[[i]][, 5]), na.rm = TRUE)
# }
#
# Sigp_A[, ncol(Sigp_A) - i + 1] <- Param_A[[i]][, 7]
# ## assign p-values to calculate percentage significant
#
# Fitsd_A[, i] <- Fitind_A[[i]]
# ## assign fit indices for S.D. estimation
#
# if (i > 1) {
# Parmn_A[, 4:ncol(Parmn_A)] <- rowSums(cbind(Parmn_A[, 4:ncol(Parmn_A)],
# Param_A[[i]][, 4:ncol(Parmn_A)]), na.rm = TRUE)
# }
# ## add together all parameter estimates
#
# if (i > 1)
# Fitmn_A <- rowSums(cbind(Fitmn_A, Fitind_A[[i]]), na.rm = TRUE)
# ## add together all fit indices
#
# }
#
#
# Sigp_A <- Sigp_A + 0.45
# Sigp_A <- apply(Sigp_A, c(1, 2), round)
# Sigp_A <- 1 - as.vector(rowMeans(Sigp_A, na.rm = TRUE))
# ## calculate percentage significant parameters
#
# Parsum_A <- cbind(apply(Parsd_A, 1, mean, na.rm = TRUE),
# apply(Parsd_A, 1, sd, na.rm = TRUE),
# apply(Parsd_A, 1, max, na.rm = TRUE),
# apply(Parsd_A, 1, min, na.rm = TRUE),
# apply(Parsd_A, 1, max, na.rm = TRUE) - apply(Parsd_A, 1, min, na.rm = TRUE), Sigp_A * 100)
# colnames(Parsum_A) <- c("Avg Est.", "S.D.", "MAX", "MIN", "Range", "% Sig")
# ## calculate parameter S.D., minimum, maximum, range, bind to percentage
# ## significant
#
# ParSEmn_A <- Parmn_A[, 1:3]
# ParSEfn_A <- cbind(ParSEmn_A, apply(ParSE_A, 1, mean, na.rm = TRUE),
# apply(ParSE_A, 1, sd, na.rm = TRUE),
# apply(ParSE_A, 1, max, na.rm = TRUE),
# apply(ParSE_A, 1, min, na.rm = TRUE),
# apply(ParSE_A, 1, max, na.rm = TRUE) - apply(ParSE_A, 1, min, na.rm = TRUE))
# colnames(ParSEfn_A) <- c("lhs", "op", "rhs", "Avg SE", "S.D.",
# "MAX", "MIN", "Range")
#
# Fitsum_A <- cbind(apply(Fitsd_A, 1, mean, na.rm = TRUE),
# apply(Fitsd_A, 1, sd, na.rm = TRUE),
# apply(Fitsd_A, 1, max, na.rm = TRUE),
# apply(Fitsd_A, 1, min, na.rm = TRUE),
# apply(Fitsd_A, 1, max, na.rm = TRUE) - apply(Fitsd_A, 1, min, na.rm = TRUE))
# rownames(Fitsum_A) <- c("chisq", "df", "cfi", "tli", "rmsea", "srmr", "logl",
# "bic", "aic")
# ## calculate fit S.D., minimum, maximum, range
#
# Parmn_A[, 4:ncol(Parmn_A)] <- Parmn_A[, 4:ncol(Parmn_A)]/nConvergedProper_A
# ## divide totalled parameter estimates by number converged allocations
# Parmn_A <- Parmn_A[, 1:3]
# ## remove confidence intervals from output
# Parmn_A <- cbind(Parmn_A, Parsum_A)
# ## bind parameter average estimates to cross-allocation information
# Fitmn_A <- Fitmn_A/nConvergedProper_A
# ## divide totalled fit indices by number converged allocations
#
# pChisq_A <- list()
# ## create empty list for Chi-square p-values
# sigChisq_A <- list()
# ## create empty list for Chi-square significance
#
# for (i in 1:nAlloc) {
# pChisq_A[[i]] <- (1 - pchisq(Fitsd_A[1, i], Fitsd_A[2, i]))
# ## calculate p-value for each Chi-square
# if (is.na(pChisq_A[[i]]) == FALSE & pChisq_A[[i]] < 0.05) {
# sigChisq_A[[i]] <- 1
# } else sigChisq_A[[i]] <- 0
# }
# ## count number of allocations with significant chi-square
#
# PerSigChisq_A <- (Reduce("+", sigChisq_A))/nConvergedProper_A * 100
# PerSigChisq_A <- round(PerSigChisq_A, 3)
# ## calculate percent of allocations with significant chi-square
#
# PerSigChisqCol_A <- c(PerSigChisq_A, "n/a", "n/a", "n/a", "n/a", "n/a", "n/a",
# "n/a", "n/a")
# ## create list of Chi-square Percent Significant and 'n/a' (used for fit summary
# ## table)
#
# options(stringsAsFactors = FALSE)
# ## set default option to allow strings into dataframe without converting to factors
#
# Fitsum_A <- data.frame(Fitsum_A, PerSigChisqCol_A)
# colnames(Fitsum_A) <- c("Avg Ind", "S.D.", "MAX", "MIN", "Range", "% Sig")
# ### bind to fit averages
#
# options(stringsAsFactors = TRUE)
# ## unset option to allow strings into dataframe without converting to factors
#
# ParSEfn_A[, 4:8] <- apply(ParSEfn_A[, 4:8], 2, round, digits = 3)
# Parmn_A[, 4:9] <- apply(Parmn_A[, 4:9], 2, round, digits = 3)
# Fitsum_A[, 1:5] <- apply(Fitsum_A[, 1:5], 2, round, digits = 3)
# ## round output to three digits
#
# Fitsum_A[2, 2:5] <- c("n/a", "n/a", "n/a", "n/a")
# ## Change df row to 'n/a' for sd, max, min, and range
#
# Output_A <- list(Parmn_A, ParSEfn_A, Fitsum_A)
# names(Output_A) <- c("Estimates_A", "SE_A", "Fit_A")
# ## output summary for model A
#
# }
#
# ## Model B estimation
#
# {
# Param <- list()
# ## list for parameter estimated for each imputation
# Fitind <- list()
# ## list for fit indices estimated for each imputation
# Converged <- list()
# ## list for whether or not each allocation converged
# ProperSolution <- list()
# ## list for whether or not each allocation has proper solutions
# ConvergedProper <- list()
# ## list for whether or not each allocation is converged and proper
#
# for (i in 1:nAlloc) {
# data <- as.data.frame(Allocations[[i]], row.names = NULL, optional = FALSE)
# ## convert allocation matrix to dataframe for model estimation
# fit <- lavaan::sem(syntaxB, data = data, ...)
# ## estimate model in lavaan
# if (lavInspect(fit, "converged") == TRUE) {
# Converged[[i]] <- 1
# } else Converged[[i]] <- 0
# ## determine whether or not each allocation converged
# Param[[i]] <- lavaan::parameterEstimates(fit)[, c("lhs", "op", "rhs",
# "est", "se", "z", "pvalue", "ci.lower", "ci.upper")]
# ## assign allocation parameter estimates to list
# if (lavInspect(fit, "post.check") == TRUE & Converged[[i]] == 1) {
# ProperSolution[[i]] <- 1
# } else ProperSolution[[i]] <- 0
# ## determine whether or not each allocation has proper solutions
# if (any(is.na(Param[[i]][, 5])))
# ProperSolution[[i]] <- 0
# ## make sure each allocation has existing SEs
# if (Converged[[i]] == 1 & ProperSolution[[i]] == 1) {
# ConvergedProper[[i]] <- 1
# } else ConvergedProper[[i]] <- 0
# ## determine whether or not each allocation converged and has proper solutions
#
# if (ConvergedProper[[i]] == 0)
# Param[[i]] <- matrix(data = NA, nrow(Param[[i]]), ncol(Param[[i]]))
# ## make parameter estimates null for nonconverged, improper solutions
#
# if (ConvergedProper[[i]] == 1) {
# Fitind[[i]] <- lavaan::fitMeasures(fit, c("chisq", "df", "cfi", "tli",
# "rmsea", "srmr", "logl", "bic", "aic"))
# } else Fitind[[i]] <- c(NA, NA, NA, NA, NA, NA, NA, NA, NA)
# ### assign allocation fit indices to list
#
#
# }
#
# nConverged <- Reduce("+", Converged)
# ## count number of converged allocations
#
# nProperSolution <- Reduce("+", ProperSolution)
# ## count number of allocations with proper solutions
#
# nConvergedProper <- Reduce("+", ConvergedProper)
# ## count number of allocations that both converged and have proper solutions
#
# if (nConvergedProper == 0)
# stop("All allocations failed to converge", " and/or yielded improper solutions for",
# " Model A and/or B.")
# ## stop program if no allocations converge
#
# Parmn <- Param[[1]]
# ## assign first parameter estimates to mean dataframe
#
# ParSE <- matrix(NA, nrow(Parmn), nAlloc)
# ParSEmn <- Parmn[, 5]
#
# Parsd <- matrix(NA, nrow(Parmn), nAlloc)
# ## assign parameter estimates for S.D. calculation
#
# Fitmn <- Fitind[[1]]
# ## assign first fit indices to mean dataframe
#
# Fitsd <- matrix(NA, length(Fitmn), nAlloc)
# ## assign fit indices for S.D. calculation
#
# Sigp <- matrix(NA, nrow(Parmn), nAlloc)
# ## assign p-values to calculate percentage significant
#
# Fitind <- data.frame(Fitind)
# ### convert fit index table to dataframe
#
#
# for (i in 1:nAlloc) {
#
# Parsd[, i] <- Param[[i]][, 4]
# ## assign parameter estimates for S.D. estimation
#
# ParSE[, i] <- Param[[i]][, 5]
#
# if (i > 1)
# ParSEmn <- rowSums(cbind(ParSEmn, Param[[i]][, 5]), na.rm = TRUE)
#
# Sigp[, ncol(Sigp) - i + 1] <- Param[[i]][, 7]
# ## assign p-values to calculate percentage significant
#
#
# Fitsd[, i] <- Fitind[[i]]
# ## assign fit indices for S.D. estimation
#
# if (i > 1) {
# Parmn[, 4:ncol(Parmn)] <- rowSums(cbind(Parmn[, 4:ncol(Parmn)], Param[[i]][,
# 4:ncol(Parmn)]), na.rm = TRUE)
# }
# ## add together all parameter estimates
#
# if (i > 1)
# Fitmn <- rowSums(cbind(Fitmn, Fitind[[i]]), na.rm = TRUE)
# ## add together all fit indices
#
# }
#
#
# Sigp <- Sigp + 0.45
# Sigp <- apply(Sigp, c(1, 2), round)
# Sigp <- 1 - as.vector(rowMeans(Sigp, na.rm = TRUE))
# ## calculate percentage significant parameters
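# ## toy check of the rounding trick above (hypothetical p-values):
# ## round(c(0.03, 0.20, 0.049, 0.051) + 0.45) yields c(0, 1, 0, 1), so
# ## 1 - mean() of that row = 0.5, the proportion of p-values below .05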
#
# Parsum <- cbind(apply(Parsd, 1, mean, na.rm = TRUE), apply(Parsd, 1, sd, na.rm = TRUE),
# apply(Parsd, 1, max, na.rm = TRUE), apply(Parsd, 1, min, na.rm = TRUE),
# apply(Parsd, 1, max, na.rm = TRUE) - apply(Parsd, 1, min, na.rm = TRUE),
# Sigp * 100)
# colnames(Parsum) <- c("Avg Est", "S.D.", "MAX", "MIN", "Range", "% Sig")
# ## calculate parameter S.D., minimum, maximum, range, bind to percentage
# ## significant
#
# ParSEmn <- Parmn[, 1:3]
# ParSEfn <- cbind(ParSEmn, apply(ParSE, 1, mean, na.rm = TRUE), apply(ParSE,
# 1, sd, na.rm = TRUE), apply(ParSE, 1, max, na.rm = TRUE), apply(ParSE,
# 1, min, na.rm = TRUE), apply(ParSE, 1, max, na.rm = TRUE) - apply(ParSE,
# 1, min, na.rm = TRUE))
# colnames(ParSEfn) <- c("lhs", "op", "rhs", "Avg SE", "S.D.", "MAX", "MIN",
# "Range")
#
# Fitsum <- cbind(apply(Fitsd, 1, mean, na.rm = TRUE), apply(Fitsd, 1, sd, na.rm = TRUE),
# apply(Fitsd, 1, max, na.rm = TRUE), apply(Fitsd, 1, min, na.rm = TRUE),
# apply(Fitsd, 1, max, na.rm = TRUE) - apply(Fitsd, 1, min, na.rm = TRUE))
# rownames(Fitsum) <- c("chisq", "df", "cfi", "tli", "rmsea", "srmr", "logl",
# "bic", "aic")
# ## calculate fit S.D., minimum, maximum, range
#
# Parmn[, 4:ncol(Parmn)] <- Parmn[, 4:ncol(Parmn)]/nConvergedProper
# ## divide totalled parameter estimates by number converged allocations
# Parmn <- Parmn[, 1:3]
# ## remove confidence intervals from output
# Parmn <- cbind(Parmn, Parsum)
# ## bind parameter average estimates to cross-allocation information
# Fitmn <- as.numeric(Fitmn)
# ## make fit index values numeric
# Fitmn <- Fitmn/nConvergedProper
# ## divide totalled fit indices by number converged allocations
#
# pChisq <- list()
# ## create empty list for Chi-square p-values
# sigChisq <- list()
# ## create empty list for Chi-square significance
#
# for (i in 1:nAlloc) {
#
# pChisq[[i]] <- (1 - pchisq(Fitsd[1, i], Fitsd[2, i]))
# ## calculate p-value for each Chi-square
#
# if (is.na(pChisq[[i]]) == FALSE & pChisq[[i]] < 0.05) {
# sigChisq[[i]] <- 1
# } else sigChisq[[i]] <- 0
# }
# ## count number of allocations with significant chi-square
#
# PerSigChisq <- (Reduce("+", sigChisq))/nConvergedProper * 100
# PerSigChisq <- round(PerSigChisq, 3)
# ## calculate percent of allocations with significant chi-square
#
# PerSigChisqCol <- c(PerSigChisq, "n/a", "n/a", "n/a", "n/a", "n/a", "n/a",
# "n/a", "n/a")
# ## create list of Chi-square Percent Significant and 'n/a' (used for fit summary
# ## table)
#
# options(stringsAsFactors = FALSE)
# ## set default option to allow strings into dataframe without converting to factors
#
# Fitsum <- data.frame(Fitsum, PerSigChisqCol)
# colnames(Fitsum) <- c("Avg Ind", "S.D.", "MAX", "MIN", "Range", "% Sig")
# ### bind to fit averages
#
# options(stringsAsFactors = TRUE)
# ## unset option to allow strings into dataframe without converting to factors
#
# ParSEfn[, 4:8] <- apply(ParSEfn[, 4:8], 2, round, digits = 3)
# Parmn[, 4:9] <- apply(Parmn[, 4:9], 2, round, digits = 3)
# Fitsum[, 1:5] <- apply(Fitsum[, 1:5], 2, round, digits = 3)
# ## round output to three digits
#
# Fitsum[2, 2:5] <- c("n/a", "n/a", "n/a", "n/a")
# ## Change df row to 'n/a' for sd, max, min, and range
#
# Output_B <- list(Parmn, ParSEfn, Fitsum)
# names(Output_B) <- c("Estimates_B", "SE_B", "Fit_B")
# ## output summary for model B
#
# }
#
# ## Model Comparison (everything in this section is new)
#
# {
# Converged_AB <- list()
# ## create list of convergence comparison for each allocation
# ProperSolution_AB <- list()
# ## create list of proper solution comparison for each allocation
# ConvergedProper_AB <- list()
# ## create list of convergence and proper solution comparison for each allocation
# lrtest_AB <- list()
# ## create list for likelihood ratio test for each allocation
# lrchisq_AB <- list()
# ## create list for likelihood ratio chi square value
# lrchisqp_AB <- list()
# ## create list for likelihood ratio test p-value
# lrsig_AB <- list()
# ## create list for likelihood ratio test significance
#
# for (i in 1:nAlloc) {
# if (Converged_A[[i]] == 1 & Converged[[i]] == 1) {
# Converged_AB[[i]] <- 1
# } else Converged_AB[[i]] <- 0
# ## compare convergence
#
# if (ProperSolution_A[[i]] == 1 & ProperSolution[[i]] == 1) {
# ProperSolution_AB[[i]] <- 1
# } else ProperSolution_AB[[i]] <- 0
# ## compare existence of proper solutions
#
# if (ConvergedProper_A[[i]] == 1 & ConvergedProper[[i]] == 1) {
# ConvergedProper_AB[[i]] <- 1
# } else ConvergedProper_AB[[i]] <- 0
# ## compare existence of proper solutions and convergence
#
#
#
# if (ConvergedProper_AB[[i]] == 1) {
#
# data <- as.data.frame(Allocations[[i]], row.names = NULL, optional = FALSE)
# ## convert allocation matrix to dataframe for model estimation
# fit_A <- lavaan::sem(syntaxA, data = data, ...)
# ## estimate model A in lavaan
# fit <- lavaan::sem(syntaxB, data = data, ...)
# ## estimate model B in lavaan
# lrtest_AB[[i]] <- lavaan::lavTestLRT(fit_A, fit)
# ## likelihood ratio test comparing A and B
# lrtestd_AB <- as.data.frame(lrtest_AB[[i]], row.names = NULL, optional = FALSE)
# ## convert lrtest results to dataframe
# lrchisq_AB[[i]] <- lrtestd_AB[2, 5]
# ## write lrtest chisq as single numeric variable
# lrchisqp_AB[[i]] <- lrtestd_AB[2, 7]
# ## write lrtest p-value as single numeric variable
# if (lrchisqp_AB[[i]] < 0.05) {
# lrsig_AB[[i]] <- 1
# } else {
# lrsig_AB[[i]] <- 0
# }
# ## determine statistical significance of lrtest
#
# }
# }
#
# lrchisqp_AB <- unlist(lrchisqp_AB, recursive = TRUE, use.names = TRUE)
# ## convert lrchisqp_AB from list to vector
# lrchisqp_AB <- as.numeric(lrchisqp_AB)
# ## make lrchisqp_AB numeric
# lrsig_AB <- unlist(lrsig_AB, recursive = TRUE, use.names = TRUE)
# ## convert lrsig_AB from list to vector
# lrsig_AB <- as.numeric(lrsig_AB)
# ### make lrsig_AB numeric
#
#
# nConverged_AB <- Reduce("+", Converged_AB)
# ## count number of allocations that converged for both A and B
# nProperSolution_AB <- Reduce("+", ProperSolution_AB)
# ## count number of allocations with proper solutions for both A and B
# nConvergedProper_AB <- Reduce("+", ConvergedProper_AB)
# ## count number of allocations that converged and have proper solutions for both A
# ## and B
# ProConverged_AB <- (nConverged_AB/nAlloc) * 100
# ## calc proportion of allocations that converged for both A and B
# nlrsig_AB <- Reduce("+", lrsig_AB)
# ## count number of allocations with significant lrtest between A and B
# Prolrsig_AB <- (nlrsig_AB/nConvergedProper_AB) * 100
# ## calc proportion of allocations with significant lrtest between A and B
# lrchisq_AB <- unlist(lrchisq_AB, recursive = TRUE, use.names = TRUE)
# ### convert lrchisq_AB from list to vector
# lrchisq_AB <- as.numeric(lrchisq_AB)
# ### make lrchisq_AB numeric
# AvgLRT_AB <- (Reduce("+", lrchisq_AB))/nConvergedProper_AB
# ## calc average LRT
#
# LRTsum <- cbind(AvgLRT_AB, lrtestd_AB[2, 3], sd(lrchisq_AB, na.rm = TRUE),
# max(lrchisq_AB), min(lrchisq_AB),
# max(lrchisq_AB) - min(lrchisq_AB), Prolrsig_AB)
# colnames(LRTsum) <- c("Avg LRT", "df", "S.D.", "MAX", "MIN", "Range", "% Sig")
# ## calculate LRT distribution statistics
#
# FitDiff_AB <- Fitsd_A - Fitsd
# ## compute fit index difference matrix
#
# for (i in 1:nAlloc) {
# if (ConvergedProper_AB[[i]] != 1)
# FitDiff_AB[1:9, i] <- 0
# }
# ### make fit differences zero for each non-converged allocation
#
# BICDiff_AB <- list()
# AICDiff_AB <- list()
# RMSEADiff_AB <- list()
# CFIDiff_AB <- list()
# TLIDiff_AB <- list()
# SRMRDiff_AB <- list()
# BICDiffGT10_AB <- list()
# ## create list noting each allocation in which A is preferred over B
#
# BICDiff_BA <- list()
# AICDiff_BA <- list()
# RMSEADiff_BA <- list()
# CFIDiff_BA <- list()
# TLIDiff_BA <- list()
# SRMRDiff_BA <- list()
# BICDiffGT10_BA <- list()
# ## create list noting each allocation in which B is preferred over A
#
# for (i in 1:nAlloc) {
# if (FitDiff_AB[8, i] < 0) {
# BICDiff_AB[[i]] <- 1
# } else BICDiff_AB[[i]] <- 0
# if (FitDiff_AB[9, i] < 0) {
# AICDiff_AB[[i]] <- 1
# } else AICDiff_AB[[i]] <- 0
# if (FitDiff_AB[5, i] < 0) {
# RMSEADiff_AB[[i]] <- 1
# } else RMSEADiff_AB[[i]] <- 0
# if (FitDiff_AB[3, i] > 0) {
# CFIDiff_AB[[i]] <- 1
# } else CFIDiff_AB[[i]] <- 0
# if (FitDiff_AB[4, i] > 0) {
# TLIDiff_AB[[i]] <- 1
# } else TLIDiff_AB[[i]] <- 0
# if (FitDiff_AB[6, i] < 0) {
# SRMRDiff_AB[[i]] <- 1
# } else SRMRDiff_AB[[i]] <- 0
# if (FitDiff_AB[8, i] < (-10)) {
# BICDiffGT10_AB[[i]] <- 1
# } else BICDiffGT10_AB[[i]] <- 0
# }
# nBIC_AoverB <- Reduce("+", BICDiff_AB)
# nAIC_AoverB <- Reduce("+", AICDiff_AB)
# nRMSEA_AoverB <- Reduce("+", RMSEADiff_AB)
# nCFI_AoverB <- Reduce("+", CFIDiff_AB)
# nTLI_AoverB <- Reduce("+", TLIDiff_AB)
# nSRMR_AoverB <- Reduce("+", SRMRDiff_AB)
# nBICDiffGT10_AoverB <- Reduce("+", BICDiffGT10_AB)
# ## compute number of 'A preferred over B' for each fit index
#
# for (i in 1:nAlloc) {
# if (FitDiff_AB[8, i] > 0) {
# BICDiff_BA[[i]] <- 1
# } else BICDiff_BA[[i]] <- 0
# if (FitDiff_AB[9, i] > 0) {
# AICDiff_BA[[i]] <- 1
# } else AICDiff_BA[[i]] <- 0
# if (FitDiff_AB[5, i] > 0) {
# RMSEADiff_BA[[i]] <- 1
# } else RMSEADiff_BA[[i]] <- 0
# if (FitDiff_AB[3, i] < 0) {
# CFIDiff_BA[[i]] <- 1
# } else CFIDiff_BA[[i]] <- 0
# if (FitDiff_AB[4, i] < 0) {
# TLIDiff_BA[[i]] <- 1
# } else TLIDiff_BA[[i]] <- 0
# if (FitDiff_AB[6, i] > 0) {
# SRMRDiff_BA[[i]] <- 1
# } else SRMRDiff_BA[[i]] <- 0
# if (FitDiff_AB[8, i] > (10)) {
# BICDiffGT10_BA[[i]] <- 1
# } else BICDiffGT10_BA[[i]] <- 0
# }
# nBIC_BoverA <- Reduce("+", BICDiff_BA)
# nAIC_BoverA <- Reduce("+", AICDiff_BA)
# nRMSEA_BoverA <- Reduce("+", RMSEADiff_BA)
# nCFI_BoverA <- Reduce("+", CFIDiff_BA)
# nTLI_BoverA <- Reduce("+", TLIDiff_BA)
# nSRMR_BoverA <- Reduce("+", SRMRDiff_BA)
# nBICDiffGT10_BoverA <- Reduce("+", BICDiffGT10_BA)
# ## compute number of 'B preferred over A' for each fit index
#
# BICDiffAvgtemp <- list()
# AICDiffAvgtemp <- list()
# RMSEADiffAvgtemp <- list()
# CFIDiffAvgtemp <- list()
# TLIDiffAvgtemp <- list()
# SRMRDiffAvgtemp <- list()
# BICgt10DiffAvgtemp <- list()
# ## create empty list for average fit index differences
#
# for (i in 1:nAlloc) {
# if (BICDiff_AB[[i]] != 1) {
# BICDiffAvgtemp[[i]] <- 0
# } else BICDiffAvgtemp[[i]] <- FitDiff_AB[8, i]
# if (AICDiff_AB[[i]] != 1) {
# AICDiffAvgtemp[[i]] <- 0
# } else AICDiffAvgtemp[[i]] <- FitDiff_AB[9, i]
# if (RMSEADiff_AB[[i]] != 1) {
# RMSEADiffAvgtemp[[i]] <- 0
# } else RMSEADiffAvgtemp[[i]] <- FitDiff_AB[5, i]
# if (CFIDiff_AB[[i]] != 1) {
# CFIDiffAvgtemp[[i]] <- 0
# } else CFIDiffAvgtemp[[i]] <- FitDiff_AB[3, i]
# if (TLIDiff_AB[[i]] != 1) {
# TLIDiffAvgtemp[[i]] <- 0
# } else TLIDiffAvgtemp[[i]] <- FitDiff_AB[4, i]
# if (SRMRDiff_AB[[i]] != 1) {
# SRMRDiffAvgtemp[[i]] <- 0
# } else SRMRDiffAvgtemp[[i]] <- FitDiff_AB[6, i]
# if (BICDiffGT10_AB[[i]] != 1) {
# BICgt10DiffAvgtemp[[i]] <- 0
# } else BICgt10DiffAvgtemp[[i]] <- FitDiff_AB[8, i]
# }
# ## make average fit index difference list composed solely of values where A is
# ## preferred over B
#
# BICDiffAvg_AB <- Reduce("+", BICDiffAvgtemp)/nBIC_AoverB * (-1)
# AICDiffAvg_AB <- Reduce("+", AICDiffAvgtemp)/nAIC_AoverB * (-1)
# RMSEADiffAvg_AB <- Reduce("+", RMSEADiffAvgtemp)/nRMSEA_AoverB * (-1)
# CFIDiffAvg_AB <- Reduce("+", CFIDiffAvgtemp)/nCFI_AoverB
# TLIDiffAvg_AB <- Reduce("+", TLIDiffAvgtemp)/nTLI_AoverB
# SRMRDiffAvg_AB <- Reduce("+", SRMRDiffAvgtemp)/nSRMR_AoverB * (-1)
# BICgt10DiffAvg_AB <- Reduce("+", BICgt10DiffAvgtemp)/nBICDiffGT10_AoverB *
# (-1)
# ## calc average fit index difference when A is preferred over B
#
# FitDiffAvg_AoverB <- list(BICDiffAvg_AB, AICDiffAvg_AB, RMSEADiffAvg_AB, CFIDiffAvg_AB,
# TLIDiffAvg_AB, SRMRDiffAvg_AB)
# ## create list of all fit index differences when A is preferred over B
#
# FitDiffAvg_AoverB <- unlist(FitDiffAvg_AoverB, recursive = TRUE, use.names = TRUE)
# ### convert from list to vector
#
# for (i in 1:nAlloc) {
# if (BICDiff_BA[[i]] != 1) {
# BICDiffAvgtemp[[i]] <- 0
# } else BICDiffAvgtemp[[i]] <- FitDiff_AB[8, i]
# if (AICDiff_BA[[i]] != 1) {
# AICDiffAvgtemp[[i]] <- 0
# } else AICDiffAvgtemp[[i]] <- FitDiff_AB[9, i]
# if (RMSEADiff_BA[[i]] != 1) {
# RMSEADiffAvgtemp[[i]] <- 0
# } else RMSEADiffAvgtemp[[i]] <- FitDiff_AB[5, i]
# if (CFIDiff_BA[[i]] != 1) {
# CFIDiffAvgtemp[[i]] <- 0
# } else CFIDiffAvgtemp[[i]] <- FitDiff_AB[3, i]
# if (TLIDiff_BA[[i]] != 1) {
# TLIDiffAvgtemp[[i]] <- 0
# } else TLIDiffAvgtemp[[i]] <- FitDiff_AB[4, i]
# if (SRMRDiff_BA[[i]] != 1) {
# SRMRDiffAvgtemp[[i]] <- 0
# } else SRMRDiffAvgtemp[[i]] <- FitDiff_AB[6, i]
# if (BICDiffGT10_BA[[i]] != 1) {
# BICgt10DiffAvgtemp[[i]] <- 0
# } else BICgt10DiffAvgtemp[[i]] <- FitDiff_AB[8, i]
# }
# ## make average fit index difference list composed solely of values where B is
# ## preferred over A
#
# BICDiffAvg_BA <- Reduce("+", BICDiffAvgtemp)/nBIC_BoverA
# AICDiffAvg_BA <- Reduce("+", AICDiffAvgtemp)/nAIC_BoverA
# RMSEADiffAvg_BA <- Reduce("+", RMSEADiffAvgtemp)/nRMSEA_BoverA
# CFIDiffAvg_BA <- Reduce("+", CFIDiffAvgtemp)/nCFI_BoverA * (-1)
# TLIDiffAvg_BA <- Reduce("+", TLIDiffAvgtemp)/nTLI_BoverA * (-1)
# SRMRDiffAvg_BA <- Reduce("+", SRMRDiffAvgtemp)/nSRMR_BoverA
# BICgt10DiffAvg_BA <- Reduce("+", BICgt10DiffAvgtemp)/nBICDiffGT10_BoverA
# ## calc average fit index difference when B is preferred over A
#
# FitDiffAvg_BoverA <- list(BICDiffAvg_BA, AICDiffAvg_BA, RMSEADiffAvg_BA, CFIDiffAvg_BA,
# TLIDiffAvg_BA, SRMRDiffAvg_BA)
# ## create list of all fit index differences when B is preferred over A
#
# FitDiffAvg_BoverA <- unlist(FitDiffAvg_BoverA, recursive = TRUE, use.names = TRUE)
# ### convert from list to vector
#
# FitDiffBICgt10_AoverB <- nBICDiffGT10_AoverB/nConvergedProper_AB * 100
# ### calculate proportion of allocations where A strongly preferred over B
#
# FitDiffBICgt10_BoverA <- nBICDiffGT10_BoverA/nConvergedProper_AB * 100
# ### calculate proportion of allocations where B strongly preferred over A
#
# FitDiffBICgt10 <- rbind(FitDiffBICgt10_AoverB, FitDiffBICgt10_BoverA)
# rownames(FitDiffBICgt10) <- c("Very Strong evidence for A>B", "Very Strong evidence for B>A")
# colnames(FitDiffBICgt10) <- "% Allocations"
# ### create table of proportions of 'A strongly preferred over B' and 'B strongly
# ### preferred over A'
#
# FitDiff_AoverB <- list(nBIC_AoverB/nConvergedProper_AB * 100, nAIC_AoverB/nConvergedProper_AB *
# 100, nRMSEA_AoverB/nConvergedProper_AB * 100, nCFI_AoverB/nConvergedProper_AB *
# 100, nTLI_AoverB/nConvergedProper_AB * 100, nSRMR_AoverB/nConvergedProper_AB *
# 100)
# ### create list of all proportions of 'A preferred over B'
# FitDiff_BoverA <- list(nBIC_BoverA/nConvergedProper_AB * 100, nAIC_BoverA/nConvergedProper_AB *
# 100, nRMSEA_BoverA/nConvergedProper_AB * 100, nCFI_BoverA/nConvergedProper_AB *
# 100, nTLI_BoverA/nConvergedProper_AB * 100, nSRMR_BoverA/nConvergedProper_AB *
# 100)
# ### create list of all proportions of 'B preferred over A'
#
# FitDiff_AoverB <- unlist(FitDiff_AoverB, recursive = TRUE, use.names = TRUE)
# ### convert from list to vector
#
# FitDiff_BoverA <- unlist(FitDiff_BoverA, recursive = TRUE, use.names = TRUE)
# ### convert from list to vector
#
# FitDiffSum_AB <- cbind(FitDiff_AoverB, FitDiffAvg_AoverB, FitDiff_BoverA,
# FitDiffAvg_BoverA)
# colnames(FitDiffSum_AB) <- c("% A>B", "Avg Amount A>B", "% B>A", "Avg Amount B>A")
# rownames(FitDiffSum_AB) <- c("bic", "aic", "rmsea", "cfi", "tli", "srmr")
# ## create table showing number of allocations in which A>B and B>A as well as
# ## average difference values
#
# for (i in 1:nAlloc) {
# is.na(FitDiff_AB[1:9, i]) <- ConvergedProper_AB[[i]] != 1
# }
# ### make fit differences missing for each non-converged allocation
#
# LRThistMax <- max(hist(lrchisqp_AB, plot = FALSE)$counts)
# BIChistMax <- max(hist(FitDiff_AB[8, 1:nAlloc], plot = FALSE)$counts)
# AIChistMax <- max(hist(FitDiff_AB[9, 1:nAlloc], plot = FALSE)$counts)
# RMSEAhistMax <- max(hist(FitDiff_AB[5, 1:nAlloc], plot = FALSE)$counts)
# CFIhistMax <- max(hist(FitDiff_AB[3, 1:nAlloc], plot = FALSE)$counts)
# TLIhistMax <- max(hist(FitDiff_AB[4, 1:nAlloc], plot = FALSE)$counts)
# ### calculate y-axis height for each histogram
#
# LRThist <- hist(lrchisqp_AB, ylim = c(0, LRThistMax), xlab = "p-value", main = "LRT p-values")
# ## plot histogram of LRT p-values
#
# BIChist <- hist(FitDiff_AB[8, 1:nAlloc], ylim = c(0, BIChistMax), xlab = "BIC_modA - BIC_modB",
# main = "BIC Diff")
# AIChist <- hist(FitDiff_AB[9, 1:nAlloc], ylim = c(0, AIChistMax), xlab = "AIC_modA - AIC_modB",
# main = "AIC Diff")
# RMSEAhist <- hist(FitDiff_AB[5, 1:nAlloc], ylim = c(0, RMSEAhistMax), xlab = "RMSEA_modA - RMSEA_modB",
# main = "RMSEA Diff")
# CFIhist <- hist(FitDiff_AB[3, 1:nAlloc], ylim = c(0, CFIhistMax), xlab = "CFI_modA - CFI_modB",
# main = "CFI Diff")
# TLIhist <- hist(FitDiff_AB[4, 1:nAlloc], ylim = c(0, TLIhistMax), xlab = "TLI_modA - TLI_modB",
# main = "TLI Diff")
# ### plot histograms for each index_modA - index_modB
# BIChist
# AIChist
# RMSEAhist
# CFIhist
# TLIhist
#
# ConvergedProperSum <- rbind(nConverged_A/nAlloc, nConverged/nAlloc, nConverged_AB/nAlloc,
# nConvergedProper_A/nAlloc, nConvergedProper/nAlloc, nConvergedProper_AB/nAlloc)
# rownames(ConvergedProperSum) <- c("Converged_A", "Converged_B", "Converged_AB",
# "ConvergedProper_A", "ConvergedProper_B", "ConvergedProper_AB")
# colnames(ConvergedProperSum) <- "Proportion of Allocations"
# ### create table summarizing proportions of converged allocations and allocations
# ### with proper solutions
#
# Output_AB <- list(round(LRTsum, 3), "LRT results are interpretable specifically for nested models",
# round(FitDiffSum_AB, 3), round(FitDiffBICgt10, 3), ConvergedProperSum)
# names(Output_AB) <- c("LRT Summary, Model A vs. Model B", "Note:", "Fit Index Differences",
# "Percent of Allocations with |BIC Diff| > 10", "Converged and Proper Solutions Summary")
# ### output for model comparison
#
# }
#
# return(list(Output_A, Output_B, Output_AB))
# ## returns output for model A, model B, and the comparison of these
# }
### Terrence D. Jorgensen
### Last updated: 21 February 2019
##' Parcel-Allocation Variability in Model Ranking
##'
##' This function quantifies and assesses the consequences of parcel-allocation
##' variability for model ranking of structural equation models (SEMs) that
##' differ in their structural specification but share the same parcel-level
##' measurement specification (see Sterba & Rights, 2016). This function calls
##' \code{\link{parcelAllocation}}---which can be used with only one SEM in
##' isolation---to fit two (assumed) nested models to each of a specified number
##' of random item-to-parcel allocations. Output includes summary information
##' about the distribution of model selection results (including plots) and the
##' distribution of results for each model individually, across allocations
##' within-sample. Note that this function can be used when selecting among more
##' than two competing structural models as well (see instructions below
##' involving the \code{seed} argument).
##'
##' This is based on a SAS macro \code{ParcelAlloc} (Sterba & MacCallum, 2010).
##' The \code{PAVranking} function produces results discussed in Sterba and
##' Rights (2016) relevant to the assessment of parcel-allocation variability in
##' model selection and model ranking. Specifically, the \code{PAVranking}
##' function first calls \code{\link{parcelAllocation}} to generate a given
##' number (\code{nAlloc}) of item-to-parcel allocations, fitting both specified
##' models to each allocation, and providing summaries of PAV for each model.
##' Additionally, \code{PAVranking} provides the following new summaries:
##'
##' \itemize{
##' \item{PAV in model selection index values and model ranking between
##' Models \code{model0} and \code{model1}.}
##' \item{The proportion of allocations that converged and the proportion of
##' proper solutions (results are summarized for allocations with both
##' converged and proper allocations only).}
##' }
##'
##' For further details on the benefits of the random allocation of items to
##' parcels, see Sterba (2011) and Sterba and MacCallum (2010).
##'
##' To test whether nested models have equivalent fit, results can be pooled
##' across allocations using the same methods available for pooling results
##' across multiple imputations of missing data (see \bold{Examples}).
##'
##' \emph{Note}: This function requires the \code{lavaan} package. Missing data
##' must be coded as \code{NA}. If the function returns \code{"Error in
##' plot.new() : figure margins too large"}, the user may need to increase
##' size of the plot window (e.g., in RStudio) and rerun the function.
##'
##'
##' @importFrom stats sd
##' @importFrom lavaan parTable lavListInspect lavaanList
##' @importFrom graphics hist
##'
##' @param model0,model1 \code{\link[lavaan]{lavaan}} model syntax specifying
##' nested models (\code{model0} within \code{model1}) to be fitted
##' to the same parceled data. Note that there can be a mixture of
##' items and parcels (even within the same factor), in case certain items
##' should never be parceled. Can be a character string or parameter table.
##' Also see \code{\link[lavaan]{lavaanify}} for more details.
##' @param data A \code{data.frame} containing all observed variables appearing
##' in the \code{model}, as well as those in the \code{item.syntax} used to
##' create parcels. If the data have missing values, multiple imputation
##' before parceling is recommended: submit a stacked data set (with a variable
##' for the imputation number, so they can be separated later) and set
##' \code{do.fit = FALSE} to return the list of \code{data.frame}s (one per
##' allocation), each of which is a stacked, imputed data set with parcels.
##' @param parcel.names \code{character} vector containing names of all parcels
##' appearing as indicators in \code{model}.
##' @param item.syntax \link[lavaan]{lavaan} model syntax specifying the model
##' that would be fit to all of the unparceled items, including items that
##' should be randomly allocated to parcels appearing in \code{model}.
##' @param nAlloc The number of random items-to-parcels allocations to generate.
##' @param fun \code{character} string indicating the name of the
##' \code{\link[lavaan]{lavaan}} function used to fit \code{model} to
##' \code{data}. Can only take the values \code{"lavaan"}, \code{"sem"},
##' \code{"cfa"}, or \code{"growth"}.
##' @param alpha Alpha level used as criterion for significance.
##' @param bic.crit Criterion for assessing evidence in favor of one model
##' over another. See Raftery (1995) for guidelines (default is "very
##' strong evidence" in favor of the model with lower BIC).
##' @param fit.measures \code{character} vector containing names of fit measures
##' to request from each fitted \code{\link[lavaan]{lavaan}} model. See the
##' output of \code{\link[lavaan]{fitMeasures}} for a list of available measures.
##' @param \dots Additional arguments to be passed to
##' \code{\link[lavaan]{lavaanList}}. See also \code{\link[lavaan]{lavOptions}}
##' @param show.progress If \code{TRUE}, show a \code{\link[utils]{txtProgressBar}}
##' tracking progress as each model is fitted across allocations.
##' @param iseed (Optional) Random seed used for parceling items. When the same
##' random seed is specified and the program is re-run, the same allocations
##' will be generated. The seed argument can be used to assess parcel-allocation
##' variability in model ranking when considering more than two models. For each
##' pair of models under comparison, the program should be rerun using the same
##' random seed. Doing so ensures that multiple model comparisons will employ
##' the same set of parcel datasets. \emph{Note}: When using \pkg{parallel}
##' options, you must first type \code{RNGkind("L'Ecuyer-CMRG")} into the R
##' Console, so that the seed will be controlled across cores.
##' @param warn Whether to print warnings when fitting models to each allocation
##'
##' @return
##' \item{model0.results}{Results returned by \code{\link{parcelAllocation}}
##' for \code{model0} (see the \bold{Value} section).}
##' \item{model1.results}{Results returned by \code{\link{parcelAllocation}}
##' for \code{model1} (see the \bold{Value} section).}
##' \item{model0.v.model1}{A \code{list} of model-comparison results, including
##' the following: \itemize{
##' \item{\code{LRT_Summary:}}{ The average likelihood ratio test across
##' allocations, as well as the \emph{SD}, minimum, maximum, range, and the
##' proportion of allocations for which the test was significant.}
##' \item{\code{Fit_Index_Differences:}}{ Differences in fit indices, organized
##' by what proportion favored each model and among those, what the average
##' difference was.}
##' \item{\code{Favored_by_BIC:}}{ The proportion of allocations in which each
##' model met the criterion (\code{bic.crit}) for a substantial difference
##' in fit.}
##' \item{\code{Convergence_Summary:}}{ The proportion of allocations in which
##' each model (and both models) converged on a solution.}
##' } Histograms are also printed to the current plot-output device.}
##'
##' @author
##' Terrence D. Jorgensen (University of Amsterdam; \email{TJorgensen314@@gmail.com})
##'
##' @seealso \code{\link{parcelAllocation}} for fitting a single model,
##' \code{\link{poolMAlloc}} for choosing the number of allocations
##'
##' @references
##' Raftery, A. E. (1995). Bayesian model selection in social
##' research. \emph{Sociological Methodology, 25}, 111--163. doi:10.2307/271063
##'
##' Sterba, S. K. (2011). Implications of parcel-allocation variability for
##' comparing fit of item-solutions and parcel-solutions. \emph{Structural
##' Equation Modeling: A Multidisciplinary Journal, 18}(4), 554--577.
##' doi:10.1080/10705511.2011.607073
##'
##' Sterba, S. K., & MacCallum, R. C. (2010). Variability in parameter estimates
##' and model fit across repeated allocations of items to parcels.
##' \emph{Multivariate Behavioral Research, 45}(2), 322--358.
##' doi:10.1080/00273171003680302
##'
##' Sterba, S. K., & Rights, J. D. (2017). Effects of parceling on model
##' selection: Parcel-allocation variability in model ranking.
##' \emph{Psychological Methods, 22}(1), 47--68. doi:10.1037/met0000067
##'
##' @examples
##'
##' ## Specify the item-level model (if NO parcels were created)
##' ## This must apply to BOTH competing models
##'
##' item.syntax <- c(paste0("f1 =~ f1item", 1:9),
##' paste0("f2 =~ f2item", 1:9))
##' cat(item.syntax, sep = "\n")
##' ## Below, we reduce the size of this same model by
##' ## applying different parceling schemes
##'
##' ## Specify a 2-factor CFA with correlated factors, using 3-indicator parcels
##' mod1 <- '
##' f1 =~ par1 + par2 + par3
##' f2 =~ par4 + par5 + par6
##' '
##' ## Specify a more restricted model with orthogonal factors
##' mod0 <- '
##' f1 =~ par1 + par2 + par3
##' f2 =~ par4 + par5 + par6
##' f1 ~~ 0*f2
##' '
##' ## names of parcels (must apply to BOTH models)
##' (parcel.names <- paste0("par", 1:6))
##'
##' \dontrun{
##' ## override default random-number generator to use parallel options
##' RNGkind("L'Ecuyer-CMRG")
##'
##' PAVranking(model0 = mod0, model1 = mod1, data = simParcel, nAlloc = 100,
##' parcel.names = parcel.names, item.syntax = item.syntax,
##' std.lv = TRUE, # any additional lavaan arguments
##' parallel = "snow") # parallel options
##'
##'
##'
##' ## POOL RESULTS by treating parcel allocations as multiple imputations
##'
##' ## save list of data sets instead of fitting model yet
##' dataList <- parcelAllocation(mod1, data = simParcel, nAlloc = 100,
##' parcel.names = parcel.names,
##' item.syntax = item.syntax,
##' do.fit = FALSE)
##' ## now fit each model to each data set
##' fit0 <- cfa.mi(mod0, data = dataList, std.lv = TRUE)
##' fit1 <- cfa.mi(mod1, data = dataList, std.lv = TRUE)
##' anova(fit0, fit1) # pooled test statistic comparing models
##' class?lavaan.mi # find more methods for pooling results
##' }
##'
##' @export
PAVranking <- function(model0, model1, data, parcel.names, item.syntax,
nAlloc = 100, fun = "sem", alpha = .05, bic.crit = 10,
fit.measures = c("chisq","df","cfi","tli","rmsea",
"srmr","logl","aic","bic","bic2"), ...,
show.progress = FALSE, iseed = 12345, warn = FALSE) {
if (alpha >= 1 || alpha <= 0) stop('alpha level must be between 0 and 1')
bic.crit <- abs(bic.crit)
## fit each model
out0 <- parcelAllocation(model = model0, data = data, nAlloc = nAlloc,
parcel.names = parcel.names, item.syntax = item.syntax,
fun = fun, alpha = alpha, fit.measures = fit.measures,
..., show.progress = show.progress, iseed = iseed,
return.fit = TRUE, warn = warn)
out1 <- parcelAllocation(model = model1, data = data, nAlloc = nAlloc,
parcel.names = parcel.names, item.syntax = item.syntax,
fun = fun, alpha = alpha, fit.measures = fit.measures,
..., show.progress = show.progress, iseed = iseed,
return.fit = TRUE, warn = warn)
## convergence summary
conv0 <- out0$Model@meta$ok
conv1 <- out1$Model@meta$ok
conv01 <- conv0 & conv1
conv <- data.frame(Proportion_Converged = sapply(list(conv0, conv1, conv01), mean),
row.names = c("model0","model1","Both_Models"))
## add proper solutions? I would advise against
## check df matches assumed nesting
DF0 <- out0$Fit["df", "Avg"]
DF1 <- out1$Fit["df", "Avg"]
if (DF0 == DF1) stop('Models have identical df, so they cannot be compared.')
if (DF0 < DF1) {
warning('model0 should be nested within model1, but df_0 < df_1. ',
'Models have been swapped so that model0 is the more restricted model.')
temp <- out0
out0 <- out1
out1 <- temp
}
## Re-run lavaanList to collect model-comparison results
if (show.progress) message('Re-fitting model0 to collect model-comparison ',
'statistics\n')
oldCall <- out0$Model@call
oldCall$model <- parTable(out0$Model)
oldCall$dataList <- out0$Model@external$dataList[conv01]
if (!is.null(oldCall$parallel)) {
if (oldCall$parallel == "snow") {
oldCall$parallel <- "no"
oldCall$ncpus <- 1L
if (warn) warning("Unable to pass lavaan::lavTestLRT() arguments when ",
"parallel = 'snow'. Switching to parallel = 'no'. ",
"Unless using Windows, parallel = 'multicore' works.")
}
}
PT1 <- parTable(out1$Model)
op1 <- lavListInspect(out1$Model, "options")
oldCall$FUN <- function(obj) {
fit1 <- try(lavaan::lavaan(model = PT1, slotOptions = op1,
slotData = obj@Data), silent = TRUE)
if (inherits(fit1, "try-error")) return("fit failed")
out <- try(lavaan::lavTestLRT(obj, fit1), silent = TRUE)
if (inherits(out, "try-error")) return("lavTestLRT() failed")
out
}
fit01 <- eval(as.call(oldCall))
## check if there are any results
noLRT <- sapply(fit01@funList, is.character)
if (all(noLRT)) stop("No success using lavTestLRT() on any allocations.")
## anova() results
CHIs <- sapply(fit01@funList[!noLRT], "[", i = 2, j = "Chisq diff")
pVals <- sapply(fit01@funList[!noLRT], "[", i = 2, j = "Pr(>Chisq)")
LRT <- c(`Avg LRT` = mean(CHIs), df = abs(DF0 - DF1), SD = sd(CHIs),
Min = min(CHIs), Max = max(CHIs), Range = max(CHIs) - min(CHIs),
`% Sig` = mean(pVals < alpha))
class(LRT) <- c("lavaan.vector","numeric")
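## A minimal base-R sketch (hypothetical chi-squared values) of how the
## LRT_Summary vector above is assembled across allocations:
##   CHIs  <- c(3.10, 5.74, 4.22)                       # per-allocation LRT stats
##   pVals <- pchisq(CHIs, df = 1, lower.tail = FALSE)  # p-values at df = 1
##   c(`Avg LRT` = mean(CHIs), df = 1, SD = sd(CHIs),
##     Min = min(CHIs), Max = max(CHIs),
##     Range = max(CHIs) - min(CHIs), `% Sig` = mean(pVals < .05))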
## differences in fit indices
indices <- fit.measures[!grepl(pattern = "chisq|df|pvalue", fit.measures)]
Fit0 <- do.call(cbind, out0$Model@funList[conv01])[indices, ]
Fit1 <- do.call(cbind, out1$Model@funList[conv01])[indices, ]
## higher values for model0
Fit01 <- Fit0 - Fit1
higher0 <- Fit0 > Fit1
perc0 <- rowMeans(higher0)
avg0 <- rowMeans(Fit01 * higher0)
## higher values for model1
Fit10 <- Fit1 - Fit0
higher1 <- Fit1 > Fit0
perc1 <- rowMeans(higher1)
avg1 <- rowMeans(Fit10 * higher1)
fitDiff <- data.frame(Prop0 = perc0, Avg0 = avg0, Prop1 = perc1, Avg1 = avg1)
class(fitDiff) <- c("lavaan.data.frame","data.frame")
attr(fitDiff, "header") <- paste("Note: Higher values of goodness-of-fit",
"indices (e.g., CFI) favor that model, but",
"higher values of badness-of-fit indices",
"(e.g., RMSEA) indicate the competing model",
"is favored.\n\n'Prop0' indicates the",
"proportion of allocations for which each",
"index was higher for model0 (likewise,",
"'Prop1' indicates this for model1).\n",
"\nAmong those allocations, 'Avg0' or 'Avg1'",
"indicates the average amount by which the",
"index was higher for that model.")
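## A toy illustration (hypothetical fit values, 2 allocations) of the
## Prop0/Avg0 logic above: the proportion of allocations in which model0's
## index exceeds model1's, and the average margin among those allocations:
##   Fit0 <- rbind(cfi = c(.95, .97), rmsea = c(.06, .04))  # model0
##   Fit1 <- rbind(cfi = c(.96, .94), rmsea = c(.05, .05))  # model1
##   higher0 <- Fit0 > Fit1
##   rowMeans(higher0)                  # Prop0, per index
##   rowMeans((Fit0 - Fit1) * higher0)  # Avg0, per index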
## favored by BIC
favorBIC <- NULL
if (any(grepl(pattern = "bic", fit.measures))) {
if ("bic" %in% fit.measures) {
highBIC <- abs(Fit01["bic",]) >= bic.crit
favor0 <- mean(higher1["bic",] & highBIC)
favor1 <- mean(higher0["bic",] & highBIC)
favorBIC <- data.frame("bic" = c(favor0, favor1),
row.names = paste("Evidence Favoring Model", 0:1))
}
if ("bic2" %in% fit.measures) {
highBIC <- abs(Fit01["bic2",]) >= bic.crit
favor0 <- mean(higher1["bic2",] & highBIC)
favor1 <- mean(higher0["bic2",] & highBIC)
favorBIC2 <- data.frame("bic2" = c(favor0, favor1),
row.names = paste("Evidence Favoring Model", 0:1))
if (is.null(favorBIC)) {
favorBIC <- favorBIC2
} else favorBIC <- cbind(favorBIC, favorBIC2)
}
#TODO: add bic.priorN from moreFitIndices()
if (!is.null(favorBIC)) {
class(favorBIC) <- c("lavaan.data.frame","data.frame")
attr(favorBIC, "header") <- paste("Percent of Allocations with |BIC Diff| >",
bic.crit)
}
}
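## A small sketch (hypothetical BIC differences) of the evidence summary
## above: the share of allocations whose |BIC difference| meets bic.crit,
## split by which model the difference favors:
##   bicDiff <- c(-12, 3, 15)          # bic(model0) - bic(model1), per allocation
##   highBIC <- abs(bicDiff) >= 10     # bic.crit = 10 (the default)
##   mean(bicDiff < 0 & highBIC)       # proportion favoring model0
##   mean(bicDiff > 0 & highBIC)       # proportion favoring model1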
## return results
list(Model0_Results = out0[c("Estimates","SE","Fit")],
Model1_Results = out1[c("Estimates","SE","Fit")],
Model0.v.Model1 = list(LRT_Summary = LRT,
Fit_Index_Differences = fitDiff,
Favored_by_BIC = favorBIC,
Convergence_Summary = conv))
}
## ------------
## old function
## ------------
## @param nPerPar A list in which each element is a vector, corresponding to
## each factor, indicating sizes of parcels. If variables are left out of
## parceling, they should not be accounted for here (i.e., there should not be
## parcels of size "1").
## @param facPlc A list of vectors, each corresponding to a factor, specifying
## the item indicators of that factor (whether included in parceling or not).
## Either variable names or column numbers. Variables not listed will not be
## modeled or included in output datasets.
## @param nAlloc The number of random allocations of items to parcels to
## generate.
## @param syntaxA lavaan syntax for Model A. Note that, for likelihood ratio
## test (LRT) results to be interpretable, Model A should be nested within Model
## B (though the function will still provide results when Models A and B are
## nonnested).
## @param syntaxB lavaan syntax for Model B. Note that, for likelihood ratio
## test (LRT) results to be appropriate, Model A should be nested within Model
## B (though the function will still provide results when Models A and B are
## nonnested).
## @param dataset Item-level dataset
## @param parceloutput folder where parceled data sets will be written (note
## for Windows users: the file path must be specified using forward slashes).
## @param names (Optional) A character vector containing the names of parceled
## variables.
## @param leaveout (Optional) A vector of variables to be left out of
## randomized parceling. Either variable names or column numbers are allowed.
## @examples
##
## \dontrun{
## ## lavaan syntax for Model A: a 2-factor CFA model with
## ## uncorrelated factors, to be fit to parceled data
##
## parmodelA <- '
## f1 =~ NA*p1f1 + p2f1 + p3f1
## f2 =~ NA*p1f2 + p2f2 + p3f2
## p1f1 ~ 1
## p2f1 ~ 1
## p3f1 ~ 1
## p1f2 ~ 1
## p2f2 ~ 1
## p3f2 ~ 1
## p1f1 ~~ p1f1
## p2f1 ~~ p2f1
## p3f1 ~~ p3f1
## p1f2 ~~ p1f2
## p2f2 ~~ p2f2
## p3f2 ~~ p3f2
## f1 ~~ 1*f1
## f2 ~~ 1*f2
## f1 ~~ 0*f2
## '
##
## ## lavaan syntax for Model B: a 2-factor CFA model with
## ## correlated factors, to be fit to parceled data
##
## parmodelB <- '
## f1 =~ NA*p1f1 + p2f1 + p3f1
## f2 =~ NA*p1f2 + p2f2 + p3f2
## p1f1 ~ 1
## p2f1 ~ 1
## p3f1 ~ 1
## p1f2 ~ 1
## p2f2 ~ 1
## p3f2 ~ 1
## p1f1 ~~ p1f1
## p2f1 ~~ p2f1
## p3f1 ~~ p3f1
## p1f2 ~~ p1f2
## p2f2 ~~ p2f2
## p3f2 ~~ p3f2
## f1 ~~ 1*f1
## f2 ~~ 1*f2
## f1 ~~ f2
## '
##
## ## specify items for each factor
## f1name <- colnames(simParcel)[1:9]
## f2name <- colnames(simParcel)[10:18]
##
## ## run function
## PAVranking(nPerPar = list(c(3,3,3), c(3,3,3)), facPlc = list(f1name,f2name),
## nAlloc = 100, parceloutput = 0, leaveout = 0,
## syntaxA = parmodelA, syntaxB = parmodelB, dataset = simParcel,
## names = list("p1f1","p2f1","p3f1","p1f2","p2f2","p3f2"))
## }
##
# PAVranking <- function(nPerPar, facPlc, nAlloc = 100, parceloutput = 0, syntaxA, syntaxB,
# dataset, names = NULL, leaveout = 0, seed = NA, ...) {
# if (is.null(names))
# names <- matrix(NA, length(nPerPar), 1)
# ## set random seed if specified
# if (is.na(seed) == FALSE)
# set.seed(seed)
# ## allow many tables to be outputted
# options(max.print = 1e+06)
#
# ## Create parceled datasets
# if (is.character(dataset)) dataset <- utils::read.csv(dataset)
# dataset <- as.matrix(dataset)
#
# if (nAlloc < 2)
# stop("Minimum of two allocations required.")
#
# if (is.list(facPlc)) {
# if (is.numeric(facPlc[[1]][1]) == FALSE) {
# facPlcb <- facPlc
# Namesv <- colnames(dataset)
#
# for (i in 1:length(facPlc)) {
# for (j in 1:length(facPlc[[i]])) {
# facPlcb[[i]][j] <- match(facPlc[[i]][j], Namesv)
# }
# facPlcb[[i]] <- as.numeric(facPlcb[[i]])
# }
# facPlc <- facPlcb
# }
#
# # facPlc2 <- rep(0, sum(sapply(facPlc, length)))
# facPlc2 <- rep(0, ncol(dataset))
#
# for (i in 1:length(facPlc)) {
# for (j in 1:length(facPlc[[i]])) {
# facPlc2[facPlc[[i]][j]] <- i
# }
# }
# facPlc <- facPlc2
# }
#
# if (leaveout != 0) {
# if (is.numeric(leaveout) == FALSE) {
# leaveoutb <- rep(0, length(leaveout))
# Namesv <- colnames(dataset)
#
# for (i in 1:length(leaveout)) {
# leaveoutb[i] <- match(leaveout[i], Namesv)
# }
# leaveout <- as.numeric(leaveoutb)
# }
# k1 <- 0.001
# for (i in 1:length(leaveout)) {
# facPlc[leaveout[i]] <- facPlc[leaveout[i]] + k1
# k1 <- k1 + 0.001
# }
# }
#
# if (0 %in% facPlc == TRUE) {
# Zfreq <- sum(facPlc == 0)
# for (i in 1:Zfreq) {
# Zplc <- match(0, facPlc)
# dataset <- dataset[, -Zplc]
# facPlc <- facPlc[-Zplc]
# }
# ## this allows for unused variables in dataset, which are specified by zeros, and
# ## deleted
# }
#
# if (is.list(nPerPar)) {
# nPerPar2 <- c()
# for (i in 1:length(nPerPar)) {
# Onesp <- sum(facPlc > i & facPlc < i + 1)
# nPerPar2 <- c(nPerPar2, nPerPar[i], rep(1, Onesp), recursive = TRUE)
# }
# nPerPar <- nPerPar2
# }
#
# Npp <- c()
# for (i in 1:length(nPerPar)) {
# Npp <- c(Npp, rep(i, nPerPar[i]))
# }
#
# Locate <- sort(round(facPlc))
# Maxv <- max(Locate) - 1
#
# if (length(Locate) != length(Npp))
# stop("Parcels incorrectly specified. Check input!")
#
# if (Maxv > 0) {
# ## Bug was here. With 1 factor Maxv=0. Skip this with a single factor
# for (i in 1:Maxv) {
# Mat <- match(i + 1, Locate)
# if (Npp[Mat] == Npp[Mat - 1])
# stop("Parcels incorrectly specified. Check input!")
# }
# }
# ## warning message if parcel crosses into multiple factors vector, parcel to which
# ## each variable belongs vector, factor to which each variables belongs if
# ## variables are in the same parcel, but different factors error message given in
# ## output
#
# Onevec <- facPlc - round(facPlc)
# NleaveA <- length(Onevec) - sum(Onevec == 0)
# NleaveP <- sum(nPerPar == 1)
#
# if (NleaveA < NleaveP)
# warning("Single-variable parcels have been requested. Check input!")
#
# if (NleaveA > NleaveP)
# warning("More non-parceled variables have been", " requested than provided for in parcel",
# " vector. Check input!")
#
# if (length(names) > 1) {
# if (length(names) != length(nPerPar))
# warning("Number of parcel names provided not equal to number", " of parcels requested")
# }
#
# Data <- c(1:ncol(dataset))
# ## creates a vector of the number of indicators e.g. for three indicators, c(1, 2,
# ## 3)
# Nfactors <- max(facPlc)
# ## scalar, number of factors
# Nindicators <- length(Data)
# ## scalar, number of indicators
# Npar <- length(nPerPar)
# ## scalar, number of parcels
# Rmize <- runif(Nindicators, 1, Nindicators)
# ## create vector of randomly ordered numbers, length of number of indicators
#
# Data <- rbind(facPlc, Rmize, Data)
# ## 'Data' becomes object of three rows, consisting of 1) factor to which each
# ## indicator belongs (in order to preserve indicator/factor assignment during
# ## randomization) 2) randomly order numbers 3) indicator number
#
# Results <- matrix(numeric(0), nAlloc, Nindicators)
# ## create empty matrix for parcel allocation matrix
#
# Pin <- nPerPar[1]
# for (i in 2:length(nPerPar)) {
# Pin <- c(Pin, nPerPar[i] + Pin[i - 1])
# ## creates vector which indicates the range of columns (endpoints) in each parcel
# }
#
# for (i in 1:nAlloc) {
# Data[2, ] <- runif(Nindicators, 1, Nindicators)
# ## Replace second row with newly randomly ordered numbers
#
# Data <- Data[, order(Data[2, ])]
# ## Order the columns according to the values of the second row
#
# Data <- Data[, order(Data[1, ])]
# ## Order the columns according to the values of the first row in order to preserve
# ## factor assignment
#
# Results[i, ] <- Data[3, ]
# ## assign result to allocation matrix
# }
#
# Alpha <- rbind(Results[1, ], dataset)
# ## bind first random allocation to dataset 'Alpha'
#
# Allocations <- list()
# ## create empty list for allocation data matrices
#
# for (i in 1:nAlloc) {
# Ineff <- rep(NA, ncol(Results))
# Ineff2 <- c(1:ncol(Results))
# for (inefficient in 1:ncol(Results)) {
# Ineff[Results[i, inefficient]] <- Ineff2[inefficient]
# }
#
# Alpha[1, ] <- Ineff
# ## replace first row of dataset matrix with row 'i' from allocation matrix
#
# Beta <- Alpha[, order(Alpha[1, ])]
# ## arrange dataset columns by values of first row; assign to temporary matrix
# ## 'Beta'
#
# Temp <- matrix(NA, nrow(dataset), Npar)
# ## create empty matrix for averaged parcel variables
#
# TempAA <- if (length(1:Pin[1]) > 1)
# Beta[2:nrow(Beta), 1:Pin[1]] else cbind(Beta[2:nrow(Beta), 1:Pin[1]], Beta[2:nrow(Beta), 1:Pin[1]])
# Temp[, 1] <- rowMeans(TempAA, na.rm = TRUE)
# ## fill first column with averages from assigned indicators
# for (al in 2:Npar) {
# Plc <- Pin[al - 1] + 1
# ## placeholder variable for determining parcel width
# TempBB <- if (length(Plc:Pin[al]) > 1)
# Beta[2:nrow(Beta), Plc:Pin[al]] else cbind(Beta[2:nrow(Beta), Plc:Pin[al]], Beta[2:nrow(Beta), Plc:Pin[al]])
# Temp[, al] <- rowMeans(TempBB, na.rm = TRUE)
# ## fill remaining columns with averages from assigned indicators
# }
# if (length(names) > 1)
# colnames(Temp) <- names
# Allocations[[i]] <- Temp
# ## assign result to list of parcel datasets
# }
#
# ## Write parceled datasets
# if (as.vector(regexpr("/", parceloutput)) != -1) {
# replist <- matrix(NA, nAlloc, 1)
# for (i in 1:nAlloc) {
# ## if (is.na(names)==TRUE) names <- matrix(NA,nrow(
# colnames(Allocations[[i]]) <- names
# utils::write.table(Allocations[[i]], paste(parceloutput, "/parcelruns", i,
# ".dat", sep = ""),
# row.names = FALSE, col.names = TRUE)
# replist[i, 1] <- paste("parcelruns", i, ".dat", sep = "")
# }
# utils::write.table(replist, paste(parceloutput, "/parcelrunsreplist.dat",
# sep = ""),
# quote = FALSE, row.names = FALSE, col.names = FALSE)
# }
#
#
# ## Model A estimation
#
# {
# Param_A <- list()
# ## list for parameter estimated for each imputation
# Fitind_A <- list()
# ## list for fit indices estimated for each imputation
# Converged_A <- list()
# ## list for whether or not each allocation converged
# ProperSolution_A <- list()
# ## list for whether or not each allocation has proper solutions
# ConvergedProper_A <- list()
# ## list for whether or not each allocation converged and has proper solutions
#
# for (i in 1:nAlloc) {
# data_A <- as.data.frame(Allocations[[i]], row.names = NULL, optional = FALSE)
# ## convert allocation matrix to dataframe for model estimation
# fit_A <- lavaan::sem(syntaxA, data = data_A, ...)
# ## estimate model in lavaan
# if (lavInspect(fit_A, "converged") == TRUE) {
# Converged_A[[i]] <- 1
# } else Converged_A[[i]] <- 0
# ## determine whether or not each allocation converged
# Param_A[[i]] <- lavaan::parameterEstimates(fit_A)[, c("lhs", "op", "rhs",
# "est", "se", "z", "pvalue", "ci.lower", "ci.upper")]
# ## assign allocation parameter estimates to list
# if (lavInspect(fit_A, "post.check") == TRUE & Converged_A[[i]] == 1) {
# ProperSolution_A[[i]] <- 1
# } else ProperSolution_A[[i]] <- 0
# ## determine whether or not each allocation has proper solutions
# if (any(is.na(Param_A[[i]][, 5])))
# ProperSolution_A[[i]] <- 0
# ## make sure each allocation has existing SE
# if (Converged_A[[i]] == 1 & ProperSolution_A[[i]] == 1) {
# ConvergedProper_A[[i]] <- 1
# } else ConvergedProper_A[[i]] <- 0
# ## determine whether or not each allocation converged and has proper solutions
#
# if (ConvergedProper_A[[i]] == 0)
# Param_A[[i]][, 4:9] <- matrix(data = NA, nrow(Param_A[[i]]), 6)
# ## make parameter estimates null for nonconverged, improper solutions
#
# if (ConvergedProper_A[[i]] == 1) {
# Fitind_A[[i]] <- lavaan::fitMeasures(fit_A, c("chisq", "df", "cfi",
# "tli", "rmsea", "srmr", "logl", "bic", "aic"))
# } else Fitind_A[[i]] <- c(NA, NA, NA, NA, NA, NA, NA, NA, NA)
# ### assign allocation parameter estimates to list
#
# }
#
#
# nConverged_A <- Reduce("+", Converged_A)
# ## count number of converged allocations
#
# nProperSolution_A <- Reduce("+", ProperSolution_A)
# ## count number of allocations with proper solutions
#
# nConvergedProper_A <- Reduce("+", ConvergedProper_A)
# ## count number of allocations that both converged and have proper solutions
#
# if (nConvergedProper_A == 0)
# stop("All allocations failed to converge and/or yielded improper solutions for Model A and/or B.")
# ## stop program if no allocations converge
#
# Parmn_A <- Param_A[[1]]
# ## assign first parameter estimates to mean dataframe
#
# ParSE_A <- matrix(NA, nrow(Parmn_A), nAlloc)
# ParSEmn_A <- Parmn_A[, 5]
#
# Parsd_A <- matrix(NA, nrow(Parmn_A), nAlloc)
# ## assign parameter estimates for S.D. calculation
#
# Fitmn_A <- Fitind_A[[1]]
# ## assign first fit indices to mean dataframe
#
# Fitsd_A <- matrix(NA, length(Fitmn_A), nAlloc)
# ## assign fit indices for S.D. calculation
#
# Sigp_A <- matrix(NA, nrow(Parmn_A), nAlloc)
# ## assign p-values to calculate percentage significant
#
# Fitind_A <- data.frame(Fitind_A)
# ### convert fit index table to data frame
#
# for (i in 1:nAlloc) {
#
# Parsd_A[, i] <- Param_A[[i]][, 4]
# ## assign parameter estimates for S.D. estimation
#
# ParSE_A[, i] <- Param_A[[i]][, 5]
#
# if (i > 1) {
# ParSEmn_A <- rowSums(cbind(ParSEmn_A, Param_A[[i]][, 5]), na.rm = TRUE)
# }
#
# Sigp_A[, ncol(Sigp_A) - i + 1] <- Param_A[[i]][, 7]
# ## assign p-values to calculate percentage significant
#
# Fitsd_A[, i] <- Fitind_A[[i]]
# ## assign fit indices for S.D. estimation
#
# if (i > 1) {
# Parmn_A[, 4:ncol(Parmn_A)] <- rowSums(cbind(Parmn_A[, 4:ncol(Parmn_A)],
# Param_A[[i]][, 4:ncol(Parmn_A)]), na.rm = TRUE)
# }
# ## add together all parameter estimates
#
# if (i > 1)
# Fitmn_A <- rowSums(cbind(Fitmn_A, Fitind_A[[i]]), na.rm = TRUE)
# ## add together all fit indices
#
# }
#
#
# Sigp_A <- Sigp_A + 0.45
# Sigp_A <- apply(Sigp_A, c(1, 2), round)
# Sigp_A <- 1 - as.vector(rowMeans(Sigp_A, na.rm = TRUE))
# ## calculate percentage significant parameters
#
# Parsum_A <- cbind(apply(Parsd_A, 1, mean, na.rm = TRUE),
# apply(Parsd_A, 1, sd, na.rm = TRUE),
# apply(Parsd_A, 1, max, na.rm = TRUE),
# apply(Parsd_A, 1, min, na.rm = TRUE),
# apply(Parsd_A, 1, max, na.rm = TRUE) - apply(Parsd_A, 1, min, na.rm = TRUE), Sigp_A * 100)
# colnames(Parsum_A) <- c("Avg Est.", "S.D.", "MAX", "MIN", "Range", "% Sig")
# ## calculate parameter S.D., minimum, maximum, range, bind to percentage
# ## significant
#
# ParSEmn_A <- Parmn_A[, 1:3]
# ParSEfn_A <- cbind(ParSEmn_A, apply(ParSE_A, 1, mean, na.rm = TRUE),
# apply(ParSE_A, 1, sd, na.rm = TRUE),
# apply(ParSE_A, 1, max, na.rm = TRUE),
# apply(ParSE_A, 1, min, na.rm = TRUE),
# apply(ParSE_A, 1, max, na.rm = TRUE) - apply(ParSE_A, 1, min, na.rm = TRUE))
# colnames(ParSEfn_A) <- c("lhs", "op", "rhs", "Avg SE", "S.D.",
# "MAX", "MIN", "Range")
#
# Fitsum_A <- cbind(apply(Fitsd_A, 1, mean, na.rm = TRUE),
# apply(Fitsd_A, 1, sd, na.rm = TRUE),
# apply(Fitsd_A, 1, max, na.rm = TRUE),
# apply(Fitsd_A, 1, min, na.rm = TRUE),
# apply(Fitsd_A, 1, max, na.rm = TRUE) - apply(Fitsd_A, 1, min, na.rm = TRUE))
# rownames(Fitsum_A) <- c("chisq", "df", "cfi", "tli", "rmsea", "srmr", "logl",
# "bic", "aic")
# ## calculate fit S.D., minimum, maximum, range
#
# Parmn_A[, 4:ncol(Parmn_A)] <- Parmn_A[, 4:ncol(Parmn_A)]/nConvergedProper_A
# ## divide totalled parameter estimates by number converged allocations
# Parmn_A <- Parmn_A[, 1:3]
# ## remove confidence intervals from output
# Parmn_A <- cbind(Parmn_A, Parsum_A)
# ## bind parameter average estimates to cross-allocation information
# Fitmn_A <- Fitmn_A/nConvergedProper_A
# ## divide totalled fit indices by number converged allocations
#
# pChisq_A <- list()
# ## create empty list for Chi-square p-values
# sigChisq_A <- list()
# ## create empty list for Chi-square significance
#
# for (i in 1:nAlloc) {
# pChisq_A[[i]] <- (1 - pchisq(Fitsd_A[1, i], Fitsd_A[2, i]))
# ## calculate p-value for each Chi-square
# if (is.na(pChisq_A[[i]]) == FALSE & pChisq_A[[i]] < 0.05) {
# sigChisq_A[[i]] <- 1
# } else sigChisq_A[[i]] <- 0
# }
# ## count number of allocations with significant chi-square
#
# PerSigChisq_A <- (Reduce("+", sigChisq_A))/nConvergedProper_A * 100
# PerSigChisq_A <- round(PerSigChisq_A, 3)
# ## calculate percent of allocations with significant chi-square
#
# PerSigChisqCol_A <- c(PerSigChisq_A, "n/a", "n/a", "n/a", "n/a", "n/a", "n/a",
# "n/a", "n/a")
# ## create list of Chi-square Percent Significant and 'n/a' (used for fit summary
# ## table)
#
# options(stringsAsFactors = FALSE)
# ## set default option to allow strings into dataframe without converting to factors
#
# Fitsum_A <- data.frame(Fitsum_A, PerSigChisqCol_A)
# colnames(Fitsum_A) <- c("Avg Ind", "S.D.", "MAX", "MIN", "Range", "% Sig")
# ### bind to fit averages
#
# options(stringsAsFactors = TRUE)
# ## unset option to allow strings into dataframe without converting to factors
#
# ParSEfn_A[, 4:8] <- apply(ParSEfn_A[, 4:8], 2, round, digits = 3)
# Parmn_A[, 4:9] <- apply(Parmn_A[, 4:9], 2, round, digits = 3)
# Fitsum_A[, 1:5] <- apply(Fitsum_A[, 1:5], 2, round, digits = 3)
# ## round output to three digits
#
# Fitsum_A[2, 2:5] <- c("n/a", "n/a", "n/a", "n/a")
# ## Change df row to 'n/a' for sd, max, min, and range
#
# Output_A <- list(Parmn_A, ParSEfn_A, Fitsum_A)
# names(Output_A) <- c("Estimates_A", "SE_A", "Fit_A")
# ## output summary for model A
#
# }
#
# ## Model B estimation
#
# {
# Param <- list()
# ## list for parameter estimated for each imputation
# Fitind <- list()
# ## list for fit indices estimated for each imputation
# Converged <- list()
# ## list for whether or not each allocation converged
# ProperSolution <- list()
# ## list for whether or not each allocation has proper solutions
# ConvergedProper <- list()
# ## list for whether or not each allocation is converged and proper
#
# for (i in 1:nAlloc) {
# data <- as.data.frame(Allocations[[i]], row.names = NULL, optional = FALSE)
# ## convert allocation matrix to dataframe for model estimation
# fit <- lavaan::sem(syntaxB, data = data, ...)
# ## estimate model in lavaan
# if (lavInspect(fit, "converged") == TRUE) {
# Converged[[i]] <- 1
# } else Converged[[i]] <- 0
# ## determine whether or not each allocation converged
# Param[[i]] <- lavaan::parameterEstimates(fit)[, c("lhs", "op", "rhs",
# "est", "se", "z", "pvalue", "ci.lower", "ci.upper")]
# ## assign allocation parameter estimates to list
# if (lavInspect(fit, "post.check") == TRUE & Converged[[i]] == 1) {
# ProperSolution[[i]] <- 1
# } else ProperSolution[[i]] <- 0
# ## determine whether or not each allocation has proper solutions
# if (any(is.na(Param[[i]][, 5])))
# ProperSolution[[i]] <- 0
# ## make sure each allocation has existing SE
# if (Converged[[i]] == 1 & ProperSolution[[i]] == 1) {
# ConvergedProper[[i]] <- 1
# } else ConvergedProper[[i]] <- 0
# ## determine whether or not each allocation converged and has proper solutions
#
# if (ConvergedProper[[i]] == 0)
# Param[[i]] <- matrix(data = NA, nrow(Param[[i]]), ncol(Param[[i]]))
# ## make parameter estimates null for nonconverged, improper solutions
#
# if (ConvergedProper[[i]] == 1) {
# Fitind[[i]] <- lavaan::fitMeasures(fit, c("chisq", "df", "cfi", "tli",
# "rmsea", "srmr", "logl", "bic", "aic"))
# } else Fitind[[i]] <- c(NA, NA, NA, NA, NA, NA, NA, NA, NA)
# ### assign allocation parameter estimates to list
#
#
# }
#
#
#
#
# nConverged <- Reduce("+", Converged)
# ## count number of converged allocations
#
# nProperSolution <- Reduce("+", ProperSolution)
# ## count number of allocations with proper solutions
#
# nConvergedProper <- Reduce("+", ConvergedProper)
# ## count number of allocations that both converged and have proper solutions
#
# if (nConvergedProper == 0)
# stop("All allocations failed to converge", " and/or yielded improper solutions for",
# " Model A and/or B.")
# ## stop program if no allocations converge
#
# Parmn <- Param[[1]]
# ## assign first parameter estimates to mean dataframe
#
# ParSE <- matrix(NA, nrow(Parmn), nAlloc)
# ParSEmn <- Parmn[, 5]
#
# Parsd <- matrix(NA, nrow(Parmn), nAlloc)
# ## assign parameter estimates for S.D. calculation
#
# Fitmn <- Fitind[[1]]
# ## assign first fit indices to mean dataframe
#
# Fitsd <- matrix(NA, length(Fitmn), nAlloc)
# ## assign fit indices for S.D. calculation
#
# Sigp <- matrix(NA, nrow(Parmn), nAlloc)
# ## assign p-values to calculate percentage significant
#
# Fitind <- data.frame(Fitind)
# ### convert fit index table to dataframe
#
#
# for (i in 1:nAlloc) {
#
# Parsd[, i] <- Param[[i]][, 4]
# ## assign parameter estimates for S.D. estimation
#
# ParSE[, i] <- Param[[i]][, 5]
#
# if (i > 1)
# ParSEmn <- rowSums(cbind(ParSEmn, Param[[i]][, 5]), na.rm = TRUE)
#
# Sigp[, ncol(Sigp) - i + 1] <- Param[[i]][, 7]
# ## assign p-values to calculate percentage significant
#
#
# Fitsd[, i] <- Fitind[[i]]
# ## assign fit indices for S.D. estimation
#
# if (i > 1) {
# Parmn[, 4:ncol(Parmn)] <- rowSums(cbind(Parmn[, 4:ncol(Parmn)], Param[[i]][,
# 4:ncol(Parmn)]), na.rm = TRUE)
# }
# ## add together all parameter estimates
#
# if (i > 1)
# Fitmn <- rowSums(cbind(Fitmn, Fitind[[i]]), na.rm = TRUE)
# ## add together all fit indices
#
# }
#
#
# Sigp <- Sigp + 0.45
# Sigp <- apply(Sigp, c(1, 2), round)
# Sigp <- 1 - as.vector(rowMeans(Sigp, na.rm = TRUE))
# ## calculate percentage significant parameters
#
# Parsum <- cbind(apply(Parsd, 1, mean, na.rm = TRUE), apply(Parsd, 1, sd, na.rm = TRUE),
# apply(Parsd, 1, max, na.rm = TRUE), apply(Parsd, 1, min, na.rm = TRUE),
# apply(Parsd, 1, max, na.rm = TRUE) - apply(Parsd, 1, min, na.rm = TRUE),
# Sigp * 100)
# colnames(Parsum) <- c("Avg Est", "S.D.", "MAX", "MIN", "Range", "% Sig")
# ## calculate parameter S.D., minimum, maximum, range, bind to percentage
# ## significant
#
# ParSEmn <- Parmn[, 1:3]
# ParSEfn <- cbind(ParSEmn, apply(ParSE, 1, mean, na.rm = TRUE), apply(ParSE,
# 1, sd, na.rm = TRUE), apply(ParSE, 1, max, na.rm = TRUE), apply(ParSE,
# 1, min, na.rm = TRUE), apply(ParSE, 1, max, na.rm = TRUE) - apply(ParSE,
# 1, min, na.rm = TRUE))
# colnames(ParSEfn) <- c("lhs", "op", "rhs", "Avg SE", "S.D.", "MAX", "MIN",
# "Range")
#
# Fitsum <- cbind(apply(Fitsd, 1, mean, na.rm = TRUE), apply(Fitsd, 1, sd, na.rm = TRUE),
# apply(Fitsd, 1, max, na.rm = TRUE), apply(Fitsd, 1, min, na.rm = TRUE),
# apply(Fitsd, 1, max, na.rm = TRUE) - apply(Fitsd, 1, min, na.rm = TRUE))
# rownames(Fitsum) <- c("chisq", "df", "cfi", "tli", "rmsea", "srmr", "logl",
# "bic", "aic")
# ## calculate fit S.D., minimum, maximum, range
#
# Parmn[, 4:ncol(Parmn)] <- Parmn[, 4:ncol(Parmn)]/nConvergedProper
# ## divide totalled parameter estimates by number converged allocations
# Parmn <- Parmn[, 1:3]
# ## remove confidence intervals from output
# Parmn <- cbind(Parmn, Parsum)
# ## bind parameter average estimates to cross-allocation information
# Fitmn <- as.numeric(Fitmn)
# ## make fit index values numeric
# Fitmn <- Fitmn/nConvergedProper
# ## divide totalled fit indices by number converged allocations
#
# pChisq <- list()
# ## create empty list for Chi-square p-values
# sigChisq <- list()
# ## create empty list for Chi-square significance
#
# for (i in 1:nAlloc) {
#
# pChisq[[i]] <- (1 - pchisq(Fitsd[1, i], Fitsd[2, i]))
# ## calculate p-value for each Chi-square
#
# if (is.na(pChisq[[i]]) == FALSE & pChisq[[i]] < 0.05) {
# sigChisq[[i]] <- 1
# } else sigChisq[[i]] <- 0
# }
# ## count number of allocations with significant chi-square
#
# PerSigChisq <- (Reduce("+", sigChisq))/nConvergedProper * 100
# PerSigChisq <- round(PerSigChisq, 3)
# ## calculate percent of allocations with significant chi-square
#
# PerSigChisqCol <- c(PerSigChisq, "n/a", "n/a", "n/a", "n/a", "n/a", "n/a",
# "n/a", "n/a")
# ## create list of Chi-square Percent Significant and 'n/a' (used for fit summary
# ## table)
#
# options(stringsAsFactors = FALSE)
# ## set default option to allow strings into dataframe without converting to factors
#
# Fitsum <- data.frame(Fitsum, PerSigChisqCol)
# colnames(Fitsum) <- c("Avg Ind", "S.D.", "MAX", "MIN", "Range", "% Sig")
# ### bind to fit averages
#
# options(stringsAsFactors = TRUE)
# ## unset option to allow strings into dataframe without converting to factors
#
# ParSEfn[, 4:8] <- apply(ParSEfn[, 4:8], 2, round, digits = 3)
# Parmn[, 4:9] <- apply(Parmn[, 4:9], 2, round, digits = 3)
# Fitsum[, 1:5] <- apply(Fitsum[, 1:5], 2, round, digits = 3)
# ## round output to three digits
#
# Fitsum[2, 2:5] <- c("n/a", "n/a", "n/a", "n/a")
# ## Change df row to 'n/a' for sd, max, min, and range
#
# Output_B <- list(Parmn, ParSEfn, Fitsum)
# names(Output_B) <- c("Estimates_B", "SE_B", "Fit_B")
# ## output summary for model B
#
# }
#
# ## Model Comparison (everything in this section is new)
#
# {
# Converged_AB <- list()
# ## create list of convergence comparison for each allocation
# ProperSolution_AB <- list()
# ## create list of proper solution comparison for each allocation
# ConvergedProper_AB <- list()
# ## create list of convergence and proper solution comparison for each allocation
# lrtest_AB <- list()
# ## create list for likelihood ratio test for each allocation
# lrchisq_AB <- list()
# ## create list for likelihood ratio chi square value
# lrchisqp_AB <- list()
# ## create list for likelihood ratio test p-value
# lrsig_AB <- list()
# ## create list for likelihood ratio test significance
#
# for (i in 1:nAlloc) {
# if (Converged_A[[i]] == 1 & Converged[[i]] == 1) {
# Converged_AB[[i]] <- 1
# } else Converged_AB[[i]] <- 0
# ## compare convergence
#
# if (ProperSolution_A[[i]] == 1 & ProperSolution[[i]] == 1) {
# ProperSolution_AB[[i]] <- 1
# } else ProperSolution_AB[[i]] <- 0
# ## compare existence of proper solutions
#
# if (ConvergedProper_A[[i]] == 1 & ConvergedProper[[i]] == 1) {
# ConvergedProper_AB[[i]] <- 1
# } else ConvergedProper_AB[[i]] <- 0
# ## compare existence of proper solutions and convergence
#
#
#
# if (ConvergedProper_AB[[i]] == 1) {
#
# data <- as.data.frame(Allocations[[i]], row.names = NULL, optional = FALSE)
# ## convert allocation matrix to dataframe for model estimation
# fit_A <- lavaan::sem(syntaxA, data = data, ...)
# ## estimate model A in lavaan
# fit <- lavaan::sem(syntaxB, data = data, ...)
# ## estimate model B in lavaan
# lrtest_AB[[i]] <- lavaan::lavTestLRT(fit_A, fit)
# ## likelihood ratio test comparing A and B
# lrtestd_AB <- as.data.frame(lrtest_AB[[i]], row.names = NULL, optional = FALSE)
# ## convert lrtest results to dataframe
# lrchisq_AB[[i]] <- lrtestd_AB[2, 5]
# ## write lrtest chisq as single numeric variable
# lrchisqp_AB[[i]] <- lrtestd_AB[2, 7]
# ## write lrtest p-value as single numeric variable
# if (lrchisqp_AB[[i]] < 0.05) {
# lrsig_AB[[i]] <- 1
# } else {
# lrsig_AB[[i]] <- 0
# }
# ## determine statistical significance of lrtest
#
# }
# }
#
# lrchisqp_AB <- unlist(lrchisqp_AB, recursive = TRUE, use.names = TRUE)
# ## convert lrchisqp_AB from list to vector
# lrchisqp_AB <- as.numeric(lrchisqp_AB)
# ## make lrchisqp_AB numeric
# lrsig_AB <- unlist(lrsig_AB, recursive = TRUE, use.names = TRUE)
# ## convert lrsig_AB from list to vector
# lrsig_AB <- as.numeric(lrsig_AB)
# ### make lrsig_AB numeric
#
#
# nConverged_AB <- Reduce("+", Converged_AB)
# ## count number of allocations that converged for both A and B
# nProperSolution_AB <- Reduce("+", ProperSolution_AB)
# ## count number of allocations with proper solutions for both A and B
# nConvergedProper_AB <- Reduce("+", ConvergedProper_AB)
# ## count number of allocations that converged and have proper solutions for both A
# ## and B
# ProConverged_AB <- (nConverged_AB/nAlloc) * 100
# ## calc proportion of allocations that converged for both A and B
# nlrsig_AB <- Reduce("+", lrsig_AB)
# ## count number of allocations with significant lrtest between A and B
# Prolrsig_AB <- (nlrsig_AB/nConvergedProper_AB) * 100
# ## calc proportion of allocations with significant lrtest between A and B
# lrchisq_AB <- unlist(lrchisq_AB, recursive = TRUE, use.names = TRUE)
# ### convert lrchisq_AB from list to vector
# lrchisq_AB <- as.numeric(lrchisq_AB)
# ### make lrchisq_AB numeric
# AvgLRT_AB <- (Reduce("+", lrchisq_AB))/nConvergedProper_AB
# ## calc average LRT
#
# LRTsum <- cbind(AvgLRT_AB, lrtestd_AB[2, 3], sd(lrchisq_AB, na.rm = TRUE),
# max(lrchisq_AB), min(lrchisq_AB),
# max(lrchisq_AB) - min(lrchisq_AB), Prolrsig_AB)
# colnames(LRTsum) <- c("Avg LRT", "df", "S.D.", "MAX", "MIN", "Range", "% Sig")
# ## calculate LRT distribution statistics
#
# FitDiff_AB <- Fitsd_A - Fitsd
# ## compute fit index difference matrix
#
# for (i in 1:nAlloc) {
# if (ConvergedProper_AB[[i]] != 1)
# FitDiff_AB[1:9, i] <- 0
# }
# ### make fit differences zero for each non-converged allocation
#
# BICDiff_AB <- list()
# AICDiff_AB <- list()
# RMSEADiff_AB <- list()
# CFIDiff_AB <- list()
# TLIDiff_AB <- list()
# SRMRDiff_AB <- list()
# BICDiffGT10_AB <- list()
# ## create list noting each allocation in which A is preferred over B
#
# BICDiff_BA <- list()
# AICDiff_BA <- list()
# RMSEADiff_BA <- list()
# CFIDiff_BA <- list()
# TLIDiff_BA <- list()
# SRMRDiff_BA <- list()
# BICDiffGT10_BA <- list()
# ## create list noting each allocation in which B is preferred over A
#
# for (i in 1:nAlloc) {
# if (FitDiff_AB[8, i] < 0) {
# BICDiff_AB[[i]] <- 1
# } else BICDiff_AB[[i]] <- 0
# if (FitDiff_AB[9, i] < 0) {
# AICDiff_AB[[i]] <- 1
# } else AICDiff_AB[[i]] <- 0
# if (FitDiff_AB[5, i] < 0) {
# RMSEADiff_AB[[i]] <- 1
# } else RMSEADiff_AB[[i]] <- 0
# if (FitDiff_AB[3, i] > 0) {
# CFIDiff_AB[[i]] <- 1
# } else CFIDiff_AB[[i]] <- 0
# if (FitDiff_AB[4, i] > 0) {
# TLIDiff_AB[[i]] <- 1
# } else TLIDiff_AB[[i]] <- 0
# if (FitDiff_AB[6, i] < 0) {
# SRMRDiff_AB[[i]] <- 1
# } else SRMRDiff_AB[[i]] <- 0
# if (FitDiff_AB[8, i] < (-10)) {
# BICDiffGT10_AB[[i]] <- 1
# } else BICDiffGT10_AB[[i]] <- 0
# }
# nBIC_AoverB <- Reduce("+", BICDiff_AB)
# nAIC_AoverB <- Reduce("+", AICDiff_AB)
# nRMSEA_AoverB <- Reduce("+", RMSEADiff_AB)
# nCFI_AoverB <- Reduce("+", CFIDiff_AB)
# nTLI_AoverB <- Reduce("+", TLIDiff_AB)
# nSRMR_AoverB <- Reduce("+", SRMRDiff_AB)
# nBICDiffGT10_AoverB <- Reduce("+", BICDiffGT10_AB)
# ## compute number of 'A preferred over B' for each fit index
#
# for (i in 1:nAlloc) {
# if (FitDiff_AB[8, i] > 0) {
# BICDiff_BA[[i]] <- 1
# } else BICDiff_BA[[i]] <- 0
# if (FitDiff_AB[9, i] > 0) {
# AICDiff_BA[[i]] <- 1
# } else AICDiff_BA[[i]] <- 0
# if (FitDiff_AB[5, i] > 0) {
# RMSEADiff_BA[[i]] <- 1
# } else RMSEADiff_BA[[i]] <- 0
# if (FitDiff_AB[3, i] < 0) {
# CFIDiff_BA[[i]] <- 1
# } else CFIDiff_BA[[i]] <- 0
# if (FitDiff_AB[4, i] < 0) {
# TLIDiff_BA[[i]] <- 1
# } else TLIDiff_BA[[i]] <- 0
# if (FitDiff_AB[6, i] > 0) {
# SRMRDiff_BA[[i]] <- 1
# } else SRMRDiff_BA[[i]] <- 0
# if (FitDiff_AB[8, i] > (10)) {
# BICDiffGT10_BA[[i]] <- 1
# } else BICDiffGT10_BA[[i]] <- 0
# }
# nBIC_BoverA <- Reduce("+", BICDiff_BA)
# nAIC_BoverA <- Reduce("+", AICDiff_BA)
# nRMSEA_BoverA <- Reduce("+", RMSEADiff_BA)
# nCFI_BoverA <- Reduce("+", CFIDiff_BA)
# nTLI_BoverA <- Reduce("+", TLIDiff_BA)
# nSRMR_BoverA <- Reduce("+", SRMRDiff_BA)
# nBICDiffGT10_BoverA <- Reduce("+", BICDiffGT10_BA)
# ## compute number of 'B preferred over A' for each fit index
#
# BICDiffAvgtemp <- list()
# AICDiffAvgtemp <- list()
# RMSEADiffAvgtemp <- list()
# CFIDiffAvgtemp <- list()
# TLIDiffAvgtemp <- list()
# SRMRDiffAvgtemp <- list()
# BICgt10DiffAvgtemp <- list()
# ## create empty list for average fit index differences
#
# for (i in 1:nAlloc) {
# if (BICDiff_AB[[i]] != 1) {
# BICDiffAvgtemp[[i]] <- 0
# } else BICDiffAvgtemp[[i]] <- FitDiff_AB[8, i]
# if (AICDiff_AB[[i]] != 1) {
# AICDiffAvgtemp[[i]] <- 0
# } else AICDiffAvgtemp[[i]] <- FitDiff_AB[9, i]
# if (RMSEADiff_AB[[i]] != 1) {
# RMSEADiffAvgtemp[[i]] <- 0
# } else RMSEADiffAvgtemp[[i]] <- FitDiff_AB[5, i]
# if (CFIDiff_AB[[i]] != 1) {
# CFIDiffAvgtemp[[i]] <- 0
# } else CFIDiffAvgtemp[[i]] <- FitDiff_AB[3, i]
# if (TLIDiff_AB[[i]] != 1) {
# TLIDiffAvgtemp[[i]] <- 0
# } else TLIDiffAvgtemp[[i]] <- FitDiff_AB[4, i]
# if (SRMRDiff_AB[[i]] != 1) {
# SRMRDiffAvgtemp[[i]] <- 0
# } else SRMRDiffAvgtemp[[i]] <- FitDiff_AB[6, i]
# if (BICDiffGT10_AB[[i]] != 1) {
# BICgt10DiffAvgtemp[[i]] <- 0
# } else BICgt10DiffAvgtemp[[i]] <- FitDiff_AB[8, i]
# }
# ## make average fit index difference list composed solely of values where A is
# ## preferred over B
#
# BICDiffAvg_AB <- Reduce("+", BICDiffAvgtemp)/nBIC_AoverB * (-1)
# AICDiffAvg_AB <- Reduce("+", AICDiffAvgtemp)/nAIC_AoverB * (-1)
# RMSEADiffAvg_AB <- Reduce("+", RMSEADiffAvgtemp)/nRMSEA_AoverB * (-1)
# CFIDiffAvg_AB <- Reduce("+", CFIDiffAvgtemp)/nCFI_AoverB
# TLIDiffAvg_AB <- Reduce("+", TLIDiffAvgtemp)/nTLI_AoverB
# SRMRDiffAvg_AB <- Reduce("+", SRMRDiffAvgtemp)/nSRMR_AoverB * (-1)
# BICgt10DiffAvg_AB <- Reduce("+", BICgt10DiffAvgtemp)/nBICDiffGT10_AoverB *
# (-1)
# ## calc average fit index difference when A is preferred over B
#
# FitDiffAvg_AoverB <- list(BICDiffAvg_AB, AICDiffAvg_AB, RMSEADiffAvg_AB, CFIDiffAvg_AB,
# TLIDiffAvg_AB, SRMRDiffAvg_AB)
# ## create list of all fit index differences when A is preferred over B
#
# FitDiffAvg_AoverB <- unlist(FitDiffAvg_AoverB, recursive = TRUE, use.names = TRUE)
# ### convert from list to vector
#
# for (i in 1:nAlloc) {
# if (BICDiff_BA[[i]] != 1) {
# BICDiffAvgtemp[[i]] <- 0
# } else BICDiffAvgtemp[[i]] <- FitDiff_AB[8, i]
# if (AICDiff_BA[[i]] != 1) {
# AICDiffAvgtemp[[i]] <- 0
# } else AICDiffAvgtemp[[i]] <- FitDiff_AB[9, i]
# if (RMSEADiff_BA[[i]] != 1) {
# RMSEADiffAvgtemp[[i]] <- 0
# } else RMSEADiffAvgtemp[[i]] <- FitDiff_AB[5, i]
# if (CFIDiff_BA[[i]] != 1) {
# CFIDiffAvgtemp[[i]] <- 0
# } else CFIDiffAvgtemp[[i]] <- FitDiff_AB[3, i]
# if (TLIDiff_BA[[i]] != 1) {
# TLIDiffAvgtemp[[i]] <- 0
# } else TLIDiffAvgtemp[[i]] <- FitDiff_AB[4, i]
# if (SRMRDiff_BA[[i]] != 1) {
# SRMRDiffAvgtemp[[i]] <- 0
# } else SRMRDiffAvgtemp[[i]] <- FitDiff_AB[6, i]
# if (BICDiffGT10_BA[[i]] != 1) {
# BICgt10DiffAvgtemp[[i]] <- 0
# } else BICgt10DiffAvgtemp[[i]] <- FitDiff_AB[8, i]
# }
# ## make average fit index difference list composed solely of values where B is
# ## preferred over A
#
# BICDiffAvg_BA <- Reduce("+", BICDiffAvgtemp)/nBIC_BoverA
# AICDiffAvg_BA <- Reduce("+", AICDiffAvgtemp)/nAIC_BoverA
# RMSEADiffAvg_BA <- Reduce("+", RMSEADiffAvgtemp)/nRMSEA_BoverA
# CFIDiffAvg_BA <- Reduce("+", CFIDiffAvgtemp)/nCFI_BoverA * (-1)
# TLIDiffAvg_BA <- Reduce("+", TLIDiffAvgtemp)/nTLI_BoverA * (-1)
# SRMRDiffAvg_BA <- Reduce("+", SRMRDiffAvgtemp)/nSRMR_BoverA
# BICgt10DiffAvg_BA <- Reduce("+", BICgt10DiffAvgtemp)/nBICDiffGT10_BoverA
# ## calc average fit index difference when B is preferred over A
#
# FitDiffAvg_BoverA <- list(BICDiffAvg_BA, AICDiffAvg_BA, RMSEADiffAvg_BA, CFIDiffAvg_BA,
# TLIDiffAvg_BA, SRMRDiffAvg_BA)
# ## create list of all fit index differences when B is preferred over A
#
# FitDiffAvg_BoverA <- unlist(FitDiffAvg_BoverA, recursive = TRUE, use.names = TRUE)
# ### convert from list to vector
#
# FitDiffBICgt10_AoverB <- nBICDiffGT10_AoverB/nConvergedProper_AB * 100
# ### calculate portion of allocations where A strongly preferred over B
#
# FitDiffBICgt10_BoverA <- nBICDiffGT10_BoverA/nConvergedProper_AB * 100
# ### calculate portion of allocations where B strongly preferred over A
#
# FitDiffBICgt10 <- rbind(FitDiffBICgt10_AoverB, FitDiffBICgt10_BoverA)
# rownames(FitDiffBICgt10) <- c("Very Strong evidence for A>B", "Very Strong evidence for B>A")
# colnames(FitDiffBICgt10) <- "% Allocations"
# ### create table of proportions of 'A strongly preferred over B' and 'B strongly
# ### preferred over A'
#
# FitDiff_AoverB <- list(nBIC_AoverB/nConvergedProper_AB * 100, nAIC_AoverB/nConvergedProper_AB *
# 100, nRMSEA_AoverB/nConvergedProper_AB * 100, nCFI_AoverB/nConvergedProper_AB *
# 100, nTLI_AoverB/nConvergedProper_AB * 100, nSRMR_AoverB/nConvergedProper_AB *
# 100)
# ### create list of all proportions of 'A preferred over B'
# FitDiff_BoverA <- list(nBIC_BoverA/nConvergedProper_AB * 100, nAIC_BoverA/nConvergedProper_AB *
# 100, nRMSEA_BoverA/nConvergedProper_AB * 100, nCFI_BoverA/nConvergedProper_AB *
# 100, nTLI_BoverA/nConvergedProper_AB * 100, nSRMR_BoverA/nConvergedProper_AB *
# 100)
# ### create list of all proportions of 'B preferred over A'
#
# FitDiff_AoverB <- unlist(FitDiff_AoverB, recursive = TRUE, use.names = TRUE)
# ### convert from list to vector
#
# FitDiff_BoverA <- unlist(FitDiff_BoverA, recursive = TRUE, use.names = TRUE)
# ### convert from list to vector
#
# FitDiffSum_AB <- cbind(FitDiff_AoverB, FitDiffAvg_AoverB, FitDiff_BoverA,
# FitDiffAvg_BoverA)
# colnames(FitDiffSum_AB) <- c("% A>B", "Avg Amount A>B", "% B>A", "Avg Amount B>A")
# rownames(FitDiffSum_AB) <- c("bic", "aic", "rmsea", "cfi", "tli", "srmr")
# ## create table showing number of allocations in which A>B and B>A as well as
# ## average difference values
#
# for (i in 1:nAlloc) {
# is.na(FitDiff_AB[1:9, i]) <- ConvergedProper_AB[[i]] != 1
# }
# ### make fit differences missing for each non-converged allocation
#
# LRThistMax <- max(hist(lrchisqp_AB, plot = FALSE)$counts)
# BIChistMax <- max(hist(FitDiff_AB[8, 1:nAlloc], plot = FALSE)$counts)
# AIChistMax <- max(hist(FitDiff_AB[9, 1:nAlloc], plot = FALSE)$counts)
# RMSEAhistMax <- max(hist(FitDiff_AB[5, 1:nAlloc], plot = FALSE)$counts)
# CFIhistMax <- max(hist(FitDiff_AB[3, 1:nAlloc], plot = FALSE)$counts)
# TLIhistMax <- max(hist(FitDiff_AB[4, 1:nAlloc], plot = FALSE)$counts)
# ### calculate y-axis height for each histogram
#
# LRThist <- hist(lrchisqp_AB, ylim = c(0, LRThistMax), xlab = "p-value", main = "LRT p-values")
# ## plot histogram of LRT p-values
#
# BIChist <- hist(FitDiff_AB[8, 1:nAlloc], ylim = c(0, BIChistMax), xlab = "BIC_modA - BIC_modB",
# main = "BIC Diff")
# AIChist <- hist(FitDiff_AB[9, 1:nAlloc], ylim = c(0, AIChistMax), xlab = "AIC_modA - AIC_modB",
# main = "AIC Diff")
# RMSEAhist <- hist(FitDiff_AB[5, 1:nAlloc], ylim = c(0, RMSEAhistMax), xlab = "RMSEA_modA - RMSEA_modB",
# main = "RMSEA Diff")
# CFIhist <- hist(FitDiff_AB[3, 1:nAlloc], ylim = c(0, CFIhistMax), xlab = "CFI_modA - CFI_modB",
# main = "CFI Diff")
# TLIhist <- hist(FitDiff_AB[4, 1:nAlloc], ylim = c(0, TLIhistMax), xlab = "TLI_modA - TLI_modB",
# main = "TLI Diff")
# ### plot histograms for each index_modA - index_modB
# BIChist
# AIChist
# RMSEAhist
# CFIhist
# TLIhist
#
# ConvergedProperSum <- rbind(nConverged_A/nAlloc, nConverged/nAlloc, nConverged_AB/nAlloc,
# nConvergedProper_A/nAlloc, nConvergedProper/nAlloc, nConvergedProper_AB/nAlloc)
# rownames(ConvergedProperSum) <- c("Converged_A", "Converged_B", "Converged_AB",
# "ConvergedProper_A", "ConvergedProper_B", "ConvergedProper_AB")
# colnames(ConvergedProperSum) <- "Proportion of Allocations"
# ### create table summarizing proportions of converged allocations and allocations
# ### with proper solutions
#
# Output_AB <- list(round(LRTsum, 3), "LRT results are interpretable specifically for nested models",
# round(FitDiffSum_AB, 3), round(FitDiffBICgt10, 3), ConvergedProperSum)
# names(Output_AB) <- c("LRT Summary, Model A vs. Model B", "Note:", "Fit Index Differences",
# "Percent of Allocations with |BIC Diff| > 10", "Converged and Proper Solutions Summary")
# ### output for model comparison
#
# }
#
# return(list(Output_A, Output_B, Output_AB))
# ## returns output for model A, model B, and the comparison of these
# }
|
# install.packages("arules")
# install.packages("arulesViz", dependencies = TRUE)
# Import libraries
library(arules)
library(arulesViz)
# Import dataset
# dataset = read.csv('../datasets/Grocery.csv', header = FALSE)
# Read transactions
basket = arules::read.transactions('../datasets/Grocery.csv', sep = ',', rm.duplicates = TRUE)
arules::summary(basket)
png(filename="../img/grocery_item_relative_freq_R.png")
arules::itemFrequencyPlot(basket, topN = 25, type="relative", main='relative freq of items of grocery (R)')
dev.off()
png(filename="../img/grocery_item_absolute_freq_R.png")
arules::itemFrequencyPlot(basket, topN = 25, type="absolute", main="absolute freq of items of grocery (R)")
dev.off()
# Association Learning
rules = arules::apriori(data = basket, parameter = list(support=0.004, confidence=0.25))
rules_sort = arules::sort(rules, by = c("lift", "confidence"))  # sort by lift, break ties by confidence
arules::inspect(rules_sort[1:50])
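# The `support` and `confidence` parameters passed to apriori() above have
# precise meanings. As a hedged, base-R sketch (toy transactions invented for
# illustration -- NOT the Grocery.csv data):

```r
# Hypothetical mini-basket data, invented for illustration only.
baskets <- list(
  c("milk", "bread"),
  c("milk", "butter"),
  c("bread", "butter"),
  c("milk", "bread", "butter"),
  c("bread")
)

# support(X): fraction of transactions containing every item in X
support <- function(items) {
  mean(vapply(baskets, function(b) all(items %in% b), logical(1)))
}

# confidence(A -> B) = support(A and B) / support(A)
confidence <- function(lhs, rhs) support(c(lhs, rhs)) / support(lhs)

# lift(A -> B) = confidence(A -> B) / support(B); values > 1 suggest
# the left-hand side makes the right-hand side more likely
lift <- function(lhs, rhs) confidence(lhs, rhs) / support(rhs)

support("milk")              # 0.6 (3 of 5 baskets contain milk)
confidence("milk", "bread")  # 2/3
lift("milk", "bread")        # (2/3) / (4/5), i.e. about 0.83
```

# Rules that clear both thresholds are then ranked, which is why the sort()
# call above orders by lift first.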
# Visualization plots
png(filename="../img/grocery_graph_R.png")
plot(rules_sort[1:30], method = "graph", control = list(type="items"), main = "graph for top 30 rules")
dev.off()
png(filename="../img/grocery_matrix_R.png")
plot(rules_sort[1:50], method = "matrix")
dev.off()
png(filename="../img/grocery_scatterplot_R.png")
plot(rules_sort[1:50], method = "scatterplot", main = "scatterplot for top 50 rules")
dev.off()
png(filename="../img/grocery_paracoord_R.png")
plot(rules_sort[1:50], method = "paracoord", main = "paracoord for top 50 rules")
dev.off()
png(filename="../img/grocery_grouped_matrix_R.png")
plot(rules_sort[1:25], method = "grouped matrix", main = "grouped matrix for top 25 rules")
dev.off()
| /grocery_analysis.R | no_license | greatsharma/Association-Learning | R | false | false | 1,639 | r |
library("matrixStats")
rowQuantiles_R <- function(x, probs, na.rm=FALSE, drop=TRUE, ...) {
q <- apply(x, MARGIN=1L, FUN=function(x, probs, na.rm) {
if (!na.rm && any(is.na(x))) {
naValue <- NA_real_
storage.mode(naValue) <- storage.mode(x)
rep(naValue, length(probs))
} else {
as.vector(quantile(x, probs=probs, na.rm=na.rm, ...))
}
}, probs=probs, na.rm=na.rm)
if (!is.null(dim(q))) q <- t(q)
else dim(q) <- c(nrow(x), length(probs))
digits <- max(2L, getOption("digits"))
colnames(q) <- sprintf("%.*g%%", digits, 100*probs)
if (drop) q <- drop(q)
q
}
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Test with multiple quantiles
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
for (mode in c("integer", "double")) {
cat("mode: ", mode, "\n", sep="")
x <- matrix(1:40+0.1, nrow=8, ncol=5)
storage.mode(x) <- mode
str(x)
probs <- c(0,0.5,1)
q0 <- rowQuantiles_R(x, probs=probs)
print(q0)
q1 <- rowQuantiles(x, probs=probs)
print(q1)
stopifnot(all.equal(q1, q0))
q2 <- colQuantiles(t(x), probs=probs)
stopifnot(all.equal(q2, q0))
} # for (mode ...)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Test with a single quantile
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
for (mode in c("integer", "double")) {
cat("mode: ", mode, "\n", sep="")
x <- matrix(1:40, nrow=8, ncol=5)
storage.mode(x) <- mode
str(x)
probs <- c(0.5)
q0 <- rowQuantiles_R(x, probs=probs)
print(q0)
q1 <- rowQuantiles(x, probs=probs)
print(q1)
stopifnot(all.equal(q1, q0))
q2 <- colQuantiles(t(x), probs=probs)
stopifnot(all.equal(q2, q0))
} # for (mode ...)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Consistency checks
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
set.seed(1)
probs <- seq(from=0, to=1, by=0.25)
cat("Consistency checks:\n")
K <- if (Sys.getenv("_R_CHECK_FULL_") == "" || Sys.getenv("_R_CHECK_USE_VALGRIND_") != "") 4 else 20
for (kk in seq_len(K)) {
cat("Random test #", kk, "\n", sep="")
# Simulate data in a matrix of any shape
dim <- sample(20:60, size=2L)
n <- prod(dim)
x <- rnorm(n, sd=100)
dim(x) <- dim
# Add NAs?
hasNA <- (kk %% 4) %in% c(3,0);
if (hasNA) {
cat("Adding NAs\n")
nna <- sample(n, size=1)
naValues <- c(NA_real_, NaN)
x[sample(length(x), size=nna)] <- sample(naValues, size=nna, replace=TRUE)
}
# Integer or double?
if ((kk %% 4) %in% c(2,0)) {
cat("Coercing to integers\n")
storage.mode(x) <- "integer"
}
str(x)
# rowQuantiles():
q0 <- rowQuantiles_R(x, probs=probs, na.rm=hasNA)
q1 <- rowQuantiles(x, probs=probs, na.rm=hasNA)
stopifnot(all.equal(q1, q0))
q2 <- colQuantiles(t(x), probs=probs, na.rm=hasNA)
stopifnot(all.equal(q2, q0))
} # for (kk ...)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Empty matrices
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
x <- matrix(NA_real_, nrow=0L, ncol=0L)
probs <- c(0, 0.25, 0.75, 1)
q <- rowQuantiles(x, probs=probs)
stopifnot(identical(dim(q), c(nrow(x), length(probs))))
q <- colQuantiles(x, probs=probs)
stopifnot(identical(dim(q), c(ncol(x), length(probs))))
x <- matrix(NA_real_, nrow=2L, ncol=0L)
q <- rowQuantiles(x, probs=probs)
stopifnot(identical(dim(q), c(nrow(x), length(probs))))
x <- matrix(NA_real_, nrow=0L, ncol=2L)
q <- colQuantiles(x, probs=probs)
stopifnot(identical(dim(q), c(ncol(x), length(probs))))
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Single column matrices
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
x <- matrix(1, nrow=2L, ncol=1L)
q <- rowQuantiles(x, probs=probs)
print(q)
x <- matrix(1, nrow=1L, ncol=2L)
q <- colQuantiles(x, probs=probs)
print(q)
| /tests/rowQuantiles.R | no_license | kaushikg06/matrixStats | R | false | false | 3,770 | r |
# Introduction
## ══════════════
# • Learning objectives:
## • Learn the R formula interface
## • Specify factor contrasts to test specific hypotheses
## • Perform model comparisons
## • Run and interpret variety of regression models in R
## Set working directory
## ─────────────────────────
## It is often helpful to start your R session by setting your working
## directory so you don't have to type the full path names to your data
## and other files
# set the working directory
setwd("/Users/gsista/Desktop/Springboard/DataScience/Excercises/linear_regression/dataSets")
# setwd("C:/Users/dataclass/Desktop/Rstatistics")
## You might also start by listing the files in your working directory
getwd() # where am I?
list.files("dataSets") # files in the dataSets folder
## Load the states data
## ────────────────────────
# read the states data
states.data <- readRDS("/Users/gsista/Desktop/Springboard/DataScience/Excercises/linear_regression/dataSets/states.rds")
#get labels
states.info <- data.frame(attributes(states.data)[c("names", "var.labels")])
#look at last few labels
tail(states.info, 8)
## Linear regression
## ═══════════════════
## Examine the data before fitting models
## ──────────────────────────────────────────
## Start by examining the data to check for problems.
# summary of expense and csat columns, all rows
sts.ex.sat <- subset(states.data, select = c("expense", "csat"))
summary(sts.ex.sat)
# correlation between expense and csat
cor(sts.ex.sat)
## Plot the data before fitting models
## ───────────────────────────────────────
## Plot the data to look for multivariate outliers, non-linear
## relationships etc.
# scatter plot of expense vs csat
plot(sts.ex.sat)
## Linear regression example
## ─────────────────────────────
## • Linear regression models can be fit with the `lm()' function
## • For example, we can use `lm' to predict SAT scores based on
##   per-pupil expenditures:
# Fit our regression model
sat.mod <- lm(csat ~ expense, # regression formula
data=states.data) # data set
# Summarize and print the results
summary(sat.mod) # show regression coefficients table
## Why is the association between expense and SAT scores /negative/?
## ─────────────────────────────────────────────────────────────────────
## Many people find it surprising that the per-capita expenditure on
## students is negatively related to SAT scores. The beauty of multiple
## regression is that we can try to pull these apart. What would the
## association between expense and SAT scores be if there were no
## difference among the states in the percentage of students taking the
## SAT?
summary(lm(csat ~ expense + percent, data = states.data))
## The lm class and methods
## ────────────────────────────
## OK, we fit our model. Now what?
## • Examine the model object:
class(sat.mod)
names(sat.mod)
methods(class = class(sat.mod))[1:9]
## • Use function methods to get more information about the fit
confint(sat.mod)
# hist(residuals(sat.mod))
## Linear Regression Assumptions
## ─────────────────────────────────
## • Ordinary least squares regression relies on several assumptions,
## including that the residuals are normally distributed and
## homoscedastic, the errors are independent and the relationships are
## linear.
## • Investigate these assumptions visually by plotting your model:
par(mar = c(4, 4, 2, 2), mfrow = c(1, 2)) #optional
plot(sat.mod, which = c(1, 2)) # "which" argument optional
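## Beyond the visual diagnostics above, the same assumptions can be screened
## numerically. A minimal, self-contained sketch on simulated data (not the
## states data; these are just two of several possible checks):

```r
set.seed(42)
d <- data.frame(x = rnorm(100))
d$y <- 2 + 3 * d$x + rnorm(100)       # data generated to satisfy the assumptions
fit <- lm(y ~ x, data = d)

shapiro.test(residuals(fit))          # H0: residuals are normally distributed
cor(fitted(fit), abs(residuals(fit))) # rough homoscedasticity screen; near 0 is good
```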
## Comparing models
## ────────────────────
## Do congressional voting patterns predict SAT scores over and above
## expense? Fit two models and compare them:
# fit another model, adding house and senate as predictors
sat.voting.mod <- lm(csat ~ expense + house + senate,
data = na.omit(states.data))
sat.mod <- update(sat.mod, data=na.omit(states.data))
# compare using the anova() function
anova(sat.mod, sat.voting.mod)
coef(summary(sat.voting.mod))
## Exercise: least squares regression
## ────────────────────────────────────────
## Use the /states.rds/ data set. Fit a model predicting energy consumed
## per capita (energy) from the percentage of residents living in
## metropolitan areas (metro). Be sure to
## 1. Examine/plot the data before fitting the model
states.data <- readRDS("/Users/gsista/Desktop/Springboard/DataScience/Excercises/linear_regression/dataSets/states.rds")
#get labels
states.info <- data.frame(attributes(states.data)[c("names", "var.labels")])
states.info
str(states.data)
# summary of energy and metro columns, all rows
sts.energy.metro <- subset(states.data, select = c("energy", "metro"))
summary(sts.energy.metro)
# correlation between energy and metro
#cor(sts.energy.metro)
cor(sts.energy.metro, use = "complete.obs")
plot(sts.energy.metro)
## 2. Print and interpret the model `summary'
energy_mod <- lm(energy ~ metro, #Linear Regression Model
data=states.data)
summary(energy_mod)
## 3. `plot' the model to look for deviations from modeling assumptions
plot(energy_mod)
## Select one or more additional predictors to add to your model and
## repeat steps 1-3. Is this model significantly better than the model
## with /metro/ as the only predictor?
#Adding additional predictors to the model
# summary of energy and metro columns, all rows
sts.energy.metro.college.income <- subset(states.data, select = c("energy", "metro", "college", "income"))
summary(sts.energy.metro.college.income)
# correlation between energy and metro
#cor(sts.energy.metro)
cor(sts.energy.metro.college.income, use = "complete.obs")
plot(sts.energy.metro.college.income)
# Print and interpret the model `summary'
energy_coll_inc_mod <- lm(energy ~ metro + college + income, #Linear Regression Model
data=states.data)
summary(energy_coll_inc_mod)
# plot the model to look for deviations from modeling assumptions
plot(energy_coll_inc_mod)
## Interactions and factors
## ══════════════════════════
## Modeling interactions
## ─────────────────────────
## Interactions allow us to assess the extent to which the association
## between one predictor and the outcome depends on a second predictor.
## For example: does the association between metro and energy consumption
## depend on the percentage of college graduates in the state?
#Add the interaction to the model
energy.metro.by.college <- lm(energy ~ metro*college,
data=states.data)
#Show the results
coef(summary(energy.metro.by.college)) # show regression coefficients table
## Regression with categorical predictors
## ──────────────────────────────────────────
## Let's try to predict energy consumed per capita from region, a
## categorical variable. Note that you must make sure R does not think
## your categorical variable is numeric.
# make sure R knows region is categorical
str(states.data$region)
states.data$region <- factor(states.data$region)
#Add region to the model
energy.region <- lm(energy ~ region,
data=states.data)
#Show the results
coef(summary(energy.region)) # show regression coefficients table
anova(energy.region) # show ANOVA table
## Again, *make sure to tell R which variables are categorical by
## converting them to factors!*
## Setting factor reference groups and contrasts
## ─────────────────────────────────────────────────
## In the previous example we use the default contrasts for region. The
## default in R is treatment contrasts, with the first level as the
## reference. We can change the reference group or use another coding
## scheme using the `C' function.
# print default contrasts
contrasts(states.data$region)
# change the reference group
coef(summary(lm(energy ~ C(region, base=4),
data=states.data)))
# change the coding scheme
coef(summary(lm(energy ~ C(region, contr.helmert),
data=states.data)))
## See also `?contrasts', `?contr.treatment', and `?relevel'.
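## As a self-contained sketch of relevel() on a toy factor (the level names
## here are hypothetical, not necessarily those of states.data$region):

```r
f <- factor(c("South", "West", "West", "Midwest", "South"))
levels(f)                        # alphabetical default: "Midwest" "South" "West"

f2 <- relevel(f, ref = "West")   # make "West" the reference (baseline) group
levels(f2)                       # "West" "Midwest" "South"
contrasts(f2)                    # treatment contrasts now measured against "West"
```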
## Exercise: interactions and factors
## ────────────────────────────────────────
## Use the states data set.
## 1. Add on to the regression equation that you created in exercise 1 by
## generating an interaction term and testing the interaction.
states.data <- readRDS("/Users/gsista/Desktop/Springboard/DataScience/Excercises/linear_regression/dataSets/states.rds")
#get labels
states.info <- data.frame(attributes(states.data)[c("names", "var.labels")])
states.info
str(states.data)
sat.energy.by.metro <- lm(energy ~ metro*college,
data=states.data)
#Show the results
coef(summary(sat.energy.by.metro)) # show regression coefficients table
## 2. Try adding region to the model. Are there significant differences
## across the four regions?
# make sure R knows region is categorical
str(states.data$region)
states.data$region <- factor(states.data$region)
#Add region to the model
energy.region <- lm(energy ~ region,
data=states.data)
#Show the results
coef(summary(energy.region)) # show regression coefficients table
anova(energy.region) # show ANOVA table | /linear_regression.R | no_license | gsista/Springboard-Capstone-Project | R | false | false | 10,305 | r | # Introduction
## ══════════════
# • Learning objectives:
## • Learn the R formula interface
## • Specify factor contrasts to test specific hypotheses
## • Perform model comparisons
## • Run and interpret variety of regression models in R
## Set working directory
## ─────────────────────────
## It is often helpful to start your R session by setting your working
## directory so you don't have to type the full path names to your data
## and other files
# set the working directory
setwd("/Users/gsista/Desktop/Springboard/DataScience/Excercises/linear_regression/dataSets")
# setwd("C:/Users/dataclass/Desktop/Rstatistics")
## You might also start by listing the files in your working directory
getwd() # where am I?
list.files("dataSets") # files in the dataSets folder
## Load the states data
## ────────────────────────
# read the states data
states.data <- readRDS("/Users/gsista/Desktop/Springboard/DataScience/Excercises/linear_regression/dataSets/states.rds")
#get labels
states.info <- data.frame(attributes(states.data)[c("names", "var.labels")])
#look at last few labels
tail(states.info, 8)
## Linear regression
## ═══════════════════
## Examine the data before fitting models
## ──────────────────────────────────────────
## Start by examining the data to check for problems.
# summary of expense and csat columns, all rows
sts.ex.sat <- subset(states.data, select = c("expense", "csat"))
summary(sts.ex.sat)
# correlation between expense and csat
cor(sts.ex.sat)
## Plot the data before fitting models
## ───────────────────────────────────────
## Plot the data to look for multivariate outliers, non-linear
## relationships etc.
# scatter plot of expense vs csat
plot(sts.ex.sat)
## Linear regression example
## ─────────────────────────────
## • Linear regression models can be fit with the `lm()' function
## • For example, we can use `lm' to predict SAT scores based on
##   per-pupil expenditures:
# Fit our regression model
sat.mod <- lm(csat ~ expense, # regression formula
data=states.data) # data set
# Summarize and print the results
summary(sat.mod) # show regression coefficients table
## Why is the association between expense and SAT scores /negative/?
## ─────────────────────────────────────────────────────────────────────
## Many people find it surprising that the per-capita expenditure on
## students is negatively related to SAT scores. The beauty of multiple
## regression is that we can try to pull these apart. What would the
## association between expense and SAT scores be if there were no
## difference among the states in the percentage of students taking the
## SAT?
summary(lm(csat ~ expense + percent, data = states.data))
## The lm class and methods
## ────────────────────────────
## OK, we fit our model. Now what?
## • Examine the model object:
class(sat.mod)
names(sat.mod)
methods(class = class(sat.mod))[1:9]
## • Use function methods to get more information about the fit
confint(sat.mod)
# hist(residuals(sat.mod))
## Linear Regression Assumptions
## ─────────────────────────────────
## • Ordinary least squares regression relies on several assumptions,
## including that the residuals are normally distributed and
## homoscedastic, the errors are independent and the relationships are
## linear.
## • Investigate these assumptions visually by plotting your model:
par(mar = c(4, 4, 2, 2), mfrow = c(1, 2)) #optional
plot(sat.mod, which = c(1, 2)) # "which" argument optional
## Comparing models
## ────────────────────
## Do congressional voting patterns predict SAT scores over and above
## expense? Fit two models and compare them:
# fit another model, adding house and senate as predictors
sat.voting.mod <- lm(csat ~ expense + house + senate,
data = na.omit(states.data))
sat.mod <- update(sat.mod, data=na.omit(states.data))
# compare using the anova() function
anova(sat.mod, sat.voting.mod)
coef(summary(sat.voting.mod))
## Exercise: least squares regression
## ────────────────────────────────────────
## Use the /states.rds/ data set. Fit a model predicting energy consumed
## per capita (energy) from the percentage of residents living in
## metropolitan areas (metro). Be sure to
## 1. Examine/plot the data before fitting the model
states.data <- readRDS("/Users/gsista/Desktop/Springboard/DataScience/Excercises/linear_regression/dataSets/states.rds")
#get labels
states.info <- data.frame(attributes(states.data)[c("names", "var.labels")])
states.info
str(states.data)
# summary of energy and metro columns, all rows
sts.energy.metro <- subset(states.data, select = c("energy", "metro"))
summary(sts.energy.metro)
# correlation between energy and metro
#cor(sts.energy.metro)
cor(sts.energy.metro, use = "complete.obs")
plot(sts.energy.metro)
## 2. Print and interpret the model `summary'
energy_mod <- lm(energy ~ metro, #Linear Regression Model
data=states.data)
summary(energy_mod)
## 3. `plot' the model to look for deviations from modeling assumptions
plot(energy_mod)
## Select one or more additional predictors to add to your model and
## repeat steps 1-3. Is this model significantly better than the model
## with /metro/ as the only predictor?
#Adding additional predictors to the model
# summary of energy, metro, college and income columns, all rows
sts.energy.metro.college.income <- subset(states.data, select = c("energy", "metro", "college", "income"))
summary(sts.energy.metro.college.income)
# correlations among energy, metro, college and income
cor(sts.energy.metro.college.income, use = "complete.obs")
plot(sts.energy.metro.college.income)
# Print and interpret the model `summary'
energy_coll_inc_mod <- lm(energy ~ metro + college + income, #Linear Regression Model
data=states.data)
summary(energy_coll_inc_mod)
# plot the model to look for deviations from modeling assumptions
plot(energy_coll_inc_mod)
## Interactions and factors
## ══════════════════════════
## Modeling interactions
## ─────────────────────────
## Interactions allow us to assess the extent to which the association
## between one predictor and the outcome depends on a second predictor.
## For example: Does the association between expense and SAT scores
## depend on the median income in the state?
#Add the interaction to the model
energy.metro.by.college <- lm(energy ~ metro*college,
data=states.data)
#Show the results
coef(summary(energy.metro.by.college)) # show regression coefficients table
## Regression with categorical predictors
## ──────────────────────────────────────────
## Let's try to predict SAT scores from region, a categorical variable.
## Note that you must make sure R does not think your categorical
## variable is numeric.
# make sure R knows region is categorical
str(states.data$region)
states.data$region <- factor(states.data$region)
#Add region to the model
energy.region <- lm(energy ~ region,
data=states.data)
#Show the results
coef(summary(energy.region)) # show regression coefficients table
anova(energy.region) # show ANOVA table
## Again, *make sure to tell R which variables are categorical by
## converting them to factors!*
## Setting factor reference groups and contrasts
## ─────────────────────────────────────────────────
## In the previous example we used the default contrasts for region. The
## default in R is treatment contrasts, with the first level as the
## reference. We can change the reference group or use another coding
## scheme using the `C' function.
# print default contrasts
contrasts(states.data$region)
# change the reference group
coef(summary(lm(energy ~ C(region, base=4),
data=states.data)))
# change the coding scheme
coef(summary(lm(energy ~ C(region, contr.helmert),
data=states.data)))
## See also `?contrasts', `?contr.treatment', and `?relevel'.
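## An alternative to `C(region, base=4)' is `relevel()', which resets the
## factor's reference level. A sketch (the index 4 below is illustrative --
## check levels(states.data$region) to pick the reference region you want):
coef(summary(lm(energy ~ relevel(region, ref = levels(states.data$region)[4]),
data = states.data)))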
## Exercise: interactions and factors
## ────────────────────────────────────────
## Use the states data set.
## 1. Add on to the regression equation that you created in exercise 1 by
## generating an interaction term and testing the interaction.
states.data <- readRDS("/Users/gsista/Desktop/Springboard/DataScience/Excercises/linear_regression/dataSets/states.rds")
#get labels
states.info <- data.frame(attributes(states.data)[c("names", "var.labels")])
states.info
str(states.data)
energy.by.metro.college <- lm(energy ~ metro*college,
data=states.data)
#Show the results
coef(summary(energy.by.metro.college)) # show regression coefficients table
## 2. Try adding region to the model. Are there significant differences
## across the four regions?
# make sure R knows region is categorical
str(states.data$region)
states.data$region <- factor(states.data$region)
#Add region to the model
energy.region <- lm(energy ~ region,
data=states.data)
#Show the results
coef(summary(energy.region)) # show regression coefficients table
anova(energy.region) # show ANOVA table
# File: /R/themes.R  (repo: ronkeizer/vpc, permissive license)
#' A nicer default theme for ggplot2
#'
#' @examples
#' vpc(simple_data$sim, simple_data$obs) + theme_plain()
#'
#' @export
theme_plain <- function () {
ggplot2::theme(
text = ggplot2::element_text(family="sans"),
plot.title = ggplot2::element_text(family="sans", size = 16, vjust = 1.5),
axis.title.x = ggplot2::element_text(family="sans",vjust=-0.25),
axis.title.y = ggplot2::element_text(family="sans"),
legend.background = ggplot2::element_rect(fill = "white"),
#legend.position = c(0.14, 0.80),
panel.grid.major = ggplot2::element_line(colour = "#e5e5e5"),
panel.grid.minor = ggplot2::element_blank(),
panel.background = ggplot2::element_rect(fill = "#efefef", colour = NA),
strip.background = ggplot2::element_rect(fill = "#444444", colour = NA),
strip.text = ggplot2::element_text(face="bold", colour = "white")
)
}
#' Empty ggplot2 theme
#'
#' @examples
#' vpc(simple_data$sim, simple_data$obs) + theme_empty()
#'
#' @export
theme_empty <- function () {
ggplot2::theme(panel.grid.major = ggplot2::element_blank(),
panel.grid.minor = ggplot2::element_blank(),
panel.background = ggplot2::element_blank(),
axis.line = ggplot2::element_line(colour = "black"))
}
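# Quick usage sketch (kept as comments so nothing runs at package load time;
# assumes ggplot2 is installed -- any ggplot object will do):
# p <- ggplot2::ggplot(mtcars, ggplot2::aes(wt, mpg)) + ggplot2::geom_point()
# p + theme_plain()   # grey panel, bold white facet strips
# p + theme_empty()   # blank panel with black axis lines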
# File: /corr.R  (repo: Sam-Huntington/R-scripts, no license)
corr <- function(directory, threshold = 0){
output <- numeric()
names <- dir(directory, pattern = ".csv")
## loop through directory
for (i in 1:length(names)){
file = paste(directory,"/",names[i], sep = "")
temp_table <- read.csv(file, header = TRUE)
temp_complete <- complete.cases(temp_table)
temp_t <- temp_table[temp_complete,]
## if number of complete cases exceeds threshold, calc correlation and add to output
if(nrow(temp_t)>threshold){
cor <- cor(temp_t[,2],temp_t[,3])
output <- c(output, cor)
}
}
output
}
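# Usage sketch (hedged: assumes a "specdata"-style folder of CSVs whose
# 2nd and 3rd columns hold the two measurements being correlated):
# cr <- corr("specdata", threshold = 150)
# summary(cr); length(cr)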
# File: /R/google_map_layer_parameters.R  (repo: SymbolixAU/googleway, permissive license)
# Data Check
#
# checks the data is the correct type(s)
# @param data the data passed into the map layer funciton
# @param callingFunc the function calling the data check
dataCheck <- function(data, callingFunc){
if(is.null(data)){
message(paste0("no data supplied to ", callingFunc))
return(FALSE)
}
if(!inherits(data, "data.frame")){
warning(paste0(callingFunc, ": currently only data.frames, sf and sfencoded objects are supported"))
return(FALSE)
}
if(nrow(data) == 0){
message(paste0("no data supplied to ", callingFunc))
return(FALSE)
}
return(TRUE)
}
# heatWeightCheck
# Converts the 'weight' argument to 'fill_colour' so the remaining
# legend functions will work
heatWeightCheck <- function(objArgs){
names(objArgs)[which(names(objArgs) == "weight")] <- "fill_colour"
return(objArgs)
}
isUrl <- function(txt) grepl("(^http)|(^www)|(^file)", txt)
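# Illustration of the pattern above (hypothetical file names):
# isUrl(c("http://a.com/icon.png", "www.a.com/icon.png", "icon.png"))
# #> TRUE TRUE FALSE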
# Is Using Polyline
#
# Checks if the polyline argument is null or not
# @param polyline the polyline column (may be NULL)
isUsingPolyline <- function(polyline){
if(!is.null(polyline)) return(TRUE)
return(FALSE)
}
# Lat Lon Check
#
# Attempts to find the lat and lon columns. Also corrects the 'lon'
# column to 'lng'
#
# @param objArgs arguments of calling funciton
# @param lat object specified by user
# @param lon object specified by user
# @param dataNames data names
# @param layer_call the map layer function calling this function
latLonCheck <- function(objArgs, lat, lon, dataNames, layer_call){
## change lon to lng
names(objArgs)[which(names(objArgs) == "lon")] <- "lng"
if(is.null(lat)){
lat <- find_lat_column(dataNames, layer_call, TRUE)
objArgs[['lat']] <- lat
}
if(is.null(lon)){
lon <- find_lon_column(dataNames, layer_call, TRUE)
objArgs[['lng']] <- lon
}
return(objArgs)
}
# Latitude Check
#
# Checks that a value is between -90:90
latitudeCheck <- function(lat, arg){
if(!is.numeric(lat) | lat < -90 | lat > 90)
stop(paste0(arg, " must be a value between -90 and 90 (inclusive)"))
}
# Longitude Check
#
# Checks that a value is between -180:180
longitudeCheck <- function(lat, arg){
if(!is.numeric(lat) | lat < -180 | lat > 180)
stop(paste0(arg, " must be a value between -180 and 180 (inclusive)"))
}
# Lat Lon Poly Check
#
# Check to ensure either the polyline or the lat & lon columns are specified
# @param lat latitude column
# @param lon longitude column
# @param polyline polyline column
latLonPolyCheck <- function(lat, lon, polyline){
if(!is.null(polyline) & !is.null(lat) & !is.null(lon))
    stop('please use either a polyline column, or lat/lon coordinate columns, not both')
if(is.null(polyline) & (is.null(lat) | is.null(lon)))
    stop("please supply either the column containing the polylines, or the lat/lon coordinate columns")
}
# Layer Id
#
# Checks the layer_id parameter, and provides a default one if NULL
# @param layer_id
layerId <- function(layer_id){
if(!is.null(layer_id) & length(layer_id) != 1)
stop("please provide a single value for 'layer_id'")
if(is.null(layer_id)){
return("defaultLayerId")
}else{
return(layer_id)
}
}
# Marker Colour Icon Check
#
# Checks for only one of colour or marker_icon, and fixes the 'marker_icon'
# to be 'icon'
# @param data the data supplied to the map layer
# @param objArgs the arguments to the function
# @param colour the colour argument for a marker
# @param marker_icon the icon argument for a marker
markerColourIconCheck <- function(data, objArgs, colour, marker_icon){
if(!is.null(colour) & !is.null(marker_icon))
stop("only one of colour or icon can be used")
if(!is.null(marker_icon)){
objArgs[['url']] <- marker_icon
objArgs[['marker_icon']] <- NULL
}
if(!is.null(colour)){
if(!all((tolower(data[, colour])) %in% c("red","blue","green","lavender"))){
stop("colours must be either red, blue, green or lavender")
}
}
return(objArgs)
}
optionDissipatingCheck <- function(option_dissipating){
if(!is.null(option_dissipating))
if(!is.logical(option_dissipating))
stop("option_dissipating must be logical")
}
optionOpacityCheck <- function(option_opacity){
if(!is.null(option_opacity))
if(!is.numeric(option_opacity) | (option_opacity < 0 | option_opacity > 1))
stop("option_opacity must be a numeric between 0 and 1")
}
optionRadiusCheck <- function(option_radius){
if(!is.null(option_radius))
if(!is.numeric(option_radius))
stop("option_radius must be numeric")
}
# Palette Check
#
# Checks if the palette is a function, or a list of functions
# @param arg palette to test
paletteCheck <- function(arg){
if(is.null(arg)) return(viridisLite::viridis)
arg <- checkPalettes(arg)
return(arg)
}
checkPalettes <- function(arg) UseMethod("checkPalettes")
#' @export
checkPalettes.function <- function(arg) return(arg)
#' @export
checkPalettes.list <- function(arg){
## names must be either 'fill_colour' or 'stroke_colour'
if(length(setdiff(names(arg), c("fill_colour", "stroke_colour"))) > 0)
stop("unrecognised colour mappings")
lapply(arg, paletteCheck)
}
#' @export
checkPalettes.default <- function(arg) stop("I don't recognise the type of palette you've supplied")
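# Dispatch sketch (hypothetical palettes): a bare function passes straight
# through, a named list is validated element-wise, anything else errors
# paletteCheck(NULL)                                  # viridisLite::viridis
# paletteCheck(viridisLite::magma)                    # returned as-is
# paletteCheck(list(fill_colour = viridisLite::magma,
#                   stroke_colour = viridisLite::inferno))
# paletteCheck(list(border = viridisLite::magma))     # error: unrecognised colour mappings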
# Path Id Check
#
# If a polygon is using coordinates a path Id is also required. If pathId is not
# supplied it is assumed each sequence of coordinates belong to the same path
# @param data
# @param pathId
# @param usePolyline
pathIdCheck <- function(data, pathId, usePolyline, objArgs){
if(!usePolyline){
if(is.null(pathId)){
message("No 'pathId' value defined, assuming one continuous line per polygon")
pathId <- 'pathId'
objArgs[['pathId']] <- pathId
data[, pathId] <- '1'
}else{
data[, pathId] <- as.character(data[, pathId])
}
}
return(list(data = data, objArgs = objArgs, pathId = pathId))
}
# Poly Id Check
#
# Polygons and polylines require an id for specifying the shape being defined.
# @param data the data being passed into the shape function
# @param id the id from the data
# @param usePolyline logical indicating if an encoded polyline is being used
# @param objArgs the arguments to the function will be updated if a default ID value is required
# @details
# If coordinates are being used, and no id is specified, the coordinates are assumed
# to identify a single polyline
#
# If polylines are being used,
polyIdCheck <- function(data, id, usePolyline, objArgs){
if(usePolyline){
if(is.null(id)){
id <- 'id'
objArgs[['id']] <- id
data[, id] <- 1:nrow(data)
}else{
data[, id] <- data[, id]
}
}else{
if(is.null(id)){
message("No 'id' value defined, assuming one continuous line of coordinates")
id <- 'id'
data[, id] <- 1
objArgs[['id']] <- id
}else{
data[, id] <- data[, id]
}
}
return(list(data = data, objArgs = objArgs, id = id))
}
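# Behaviour sketch for the coordinate path with no id supplied
# (hypothetical data; all rows are assumed to form one line):
# df <- data.frame(lat = c(0, 1), lon = c(2, 3))
# res <- polyIdCheck(df, id = NULL, usePolyline = FALSE, objArgs = list())
# res$id        # "id"
# res$data$id   # 1 1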
# some browsers don't support the alpha channel
removeAlpha <- function(cols) substr(cols, 1, 7)
zIndexCheck <- function( objArgs, z_index ) {
if(!is.null(z_index)) {
objArgs[['z_index']] <- z_index
}
return( objArgs )
}
% File: /man/psyperform_conflicts.Rd  (repo: fmhoeger/psyperform, permissive license)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/conflicts.R
\name{psyperform_conflicts}
\alias{psyperform_conflicts}
\title{This function lists all the conflicts between packages in psyperform
and other packages that you have loaded.}
\usage{
psyperform_conflicts()
}
\description{
There are four conflicts that are deliberately ignored: \code{intersect},
\code{union}, \code{setequal}, and \code{setdiff} from dplyr. These functions
make the base equivalents generic, so shouldn't negatively affect any
existing code.
}
\examples{
psyperform_conflicts()
}
% File: /man/gargle_api_key.Rd  (repo: takewiki/gargle, permissive license)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/gargle-api-key.R
\name{gargle_api_key}
\alias{gargle_api_key}
\title{API key for demonstration purposes}
\usage{
gargle_api_key()
}
\value{
A Google API key
}
\description{
Returns an API key for use when test driving gargle. Some API requests for
public resources do not require authorization, in which case the request can
be sent with an API key in lieu of a token. If you want to get your own API
key, set up a new project in \href{https://console.developers.google.com}{Google Developers Console}, enable the APIs of interest,
and follow the instructions in \href{https://support.google.com/googleapi/answer/6158862}{Setting up API keys}.
}
\examples{
## see the key
gargle_api_key()
## use the key with the Places API (explicitly enabled for this key)
## gets restaurants close to a location in Vancouver, BC
url <- httr::parse_url("https://maps.googleapis.com/maps/api/place/nearbysearch/json")
url$query <- list(
location = "49.268682,-123.167117",
radius = 100,
type = "restaurant",
key = gargle_api_key()
)
url <- httr::build_url(url)
res <- httr::content(httr::GET(url))
vapply(res$results, function(x) x$name, character(1))
}
\keyword{internal}
% File: /man/MFKnockoffs.create.gaussian.Rd  (repo: alexrives/MFKnockoffs, no license)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/create_gaussian.R
\name{MFKnockoffs.create.gaussian}
\alias{MFKnockoffs.create.gaussian}
\title{Sample multivariate Gaussian knockoff variables}
\usage{
MFKnockoffs.create.gaussian(X, mu, Sigma, method = c("sdp", "asdp", "equi"),
diag_s = NULL)
}
\arguments{
\item{X}{normalized n-by-p realization of the design matrix}
\item{mu}{mean vector of length p for X}
\item{Sigma}{p-by-p covariance matrix for X}
\item{method}{either 'equi', 'sdp' or 'asdp' (default:'sdp')}
\item{diag_s}{pre-computed vector of covariances between the original variables and the knockoffs.
This will be computed according to 'method', if not supplied}
}
\value{
n-by-p matrix of knockoff variables
}
\description{
Samples multivariate Gaussian fixed-design knockoff variables for the original variables.
}
\references{
Candes et al., Panning for Gold: Model-free Knockoffs for High-dimensional Controlled Variable Selection,
arXiv:1610.02351 (2016).
\href{https://statweb.stanford.edu/~candes/MF_Knockoffs/index.html}{https://statweb.stanford.edu/~candes/MF_Knockoffs/index.html}
}
\seealso{
Other methods for creating knockoffs: \code{\link{MFKnockoffs.create.approximate_gaussian}},
\code{\link{MFKnockoffs.create.fixed}}
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/findMarker.R
\name{findMarkerTopTable}
\alias{findMarkerTopTable}
\title{Fetch the table of top markers that pass the filtering}
\usage{
findMarkerTopTable(
inSCE,
log2fcThreshold = 1,
fdrThreshold = 0.05,
minClustExprPerc = 0.7,
maxCtrlExprPerc = 0.4,
minMeanExpr = 1,
topN = 10
)
}
\arguments{
\item{inSCE}{\linkS4class{SingleCellExperiment} inherited object.}
\item{log2fcThreshold}{Only use DEGs with an absolute log2FC value
larger than this value. Default \code{1}.}
\item{fdrThreshold}{Only use DEGs with an FDR value smaller than this value.
Default \code{0.05}.}
\item{minClustExprPerc}{A numeric scalar. The minimum cutoff of the
percentage of cells in the cluster of interest that express the marker
gene. Default \code{0.7}.}
\item{maxCtrlExprPerc}{A numeric scalar. The maximum cutoff of the
percentage of cells outside the cluster (control group) that express the
marker gene. Default \code{0.4}.}
\item{minMeanExpr}{A numeric scalar. The minimum cutoff of the mean
expression value of the marker in the cluster of interest. Default \code{1}.}
\item{topN}{An integer. Fetch at most this number of top markers for each
cluster, ranked by log2FC value. Use \code{NULL} to disable the top-N
selection. Default \code{10}.}
}
\value{
An organized \code{data.frame} object, with the top marker gene
information.
}
\description{
Fetch the table of top markers that pass the filtering
}
\details{
Users have to run \code{findMarkerDiffExp()} prior to using this
function to extract a top marker table.
}
\examples{
data("mouseBrainSubsetSCE", package = "singleCellTK")
mouseBrainSubsetSCE <- findMarkerDiffExp(mouseBrainSubsetSCE,
useAssay = "logcounts",
cluster = "level1class")
findMarkerTopTable(mouseBrainSubsetSCE)
}
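The selection that `findMarkerTopTable()` performs — threshold filtering followed by a per-cluster top-N by log2FC — can be sketched on a toy table (the column names below are illustrative, not the function's internal ones):

```r
# Toy marker table: keep rows passing the log2FC and FDR cutoffs,
# then retain at most topN markers per cluster, ranked by log2FC.
deg <- data.frame(
  gene    = paste0("g", 1:6),
  cluster = c(1, 1, 1, 2, 2, 2),
  log2FC  = c(2.5, 1.2, 0.4, 3.0, 1.5, 1.1),
  FDR     = c(0.001, 0.010, 0.200, 0.001, 0.040, 0.200)
)
topN <- 2
keep <- subset(deg, abs(log2FC) > 1 & FDR < 0.05)
keep <- keep[order(keep$cluster, -keep$log2FC), ]
top  <- do.call(rbind, lapply(split(keep, keep$cluster), head, n = topN))
top$gene  # "g1" "g2" "g4" "g5"
```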
| /man/findMarkerTopTable.Rd | permissive | rz2333/singleCellTK | R | false | true | 1,928 | rd |
load("data/game-data.RData")
source("functions.R", encoding = "UTF-8")
library(plyr)       # ldply, ddply, rbind.fill
library(data.table) # fread, data.table, rbindlist
library(rvest)      # read_html, html_nodes, html_table
library(lubridate)  # month(), year()
###LOAD TOURNEY/TEAM DATA#####
NCAATourneyDetailedResults<-read.csv("data/MNCAATourneyDetailedResults.csv")
NCAATourneyDetailedResults$Tournament<-1
RegularSeasonDetailedResult<-read.csv("data/MRegularSeasonDetailedResults.csv")
RegularSeasonDetailedResult$Tournament<-0
SecondaryTourneyDetailedResults<-read.csv("data/MSecondaryTourneyCompactResults.csv")
SecondaryTourneyDetailedResults$Tournament<-1
Teams<-read.csv("data/MTeams.csv")
Teams$TeamName<-coordTeam(Teams$TeamName)
colnames(Teams)[1:2]<-c("TeamID", "Team_Full")
Teams[duplicated(Teams$Team_Full), ]
# source("misc-create-tourney-geog.R")
TourneySeeds<-read.csv("data/MNCAATourneySeeds.csv")
TourneySeeds_temp<-read.csv("data/MNCAATourneySeeds_temp.csv")
TourneySeeds_temp$TeamID[TourneySeeds_temp$Season==2022]<-coordTeam(TourneySeeds_temp$TeamID[TourneySeeds_temp$Season==2022])
TourneySeeds_temp$TeamID[TourneySeeds_temp$Season==2022]<-Teams$TeamID[match(TourneySeeds_temp$TeamID[TourneySeeds_temp$Season==2022], Teams$Team_Full)]
TourneySeeds_temp$TeamID[TourneySeeds_temp$Season==2023]<-coordTeam(TourneySeeds_temp$TeamID[TourneySeeds_temp$Season==2023])
TourneySeeds_temp$TeamID[TourneySeeds_temp$Season==2023]<-Teams$TeamID[match(TourneySeeds_temp$TeamID[TourneySeeds_temp$Season==2023], Teams$Team_Full)]
TourneySeeds_temp$TeamID<-as.numeric(TourneySeeds_temp$TeamID)
tail(TourneySeeds_temp, 100)
TourneySeeds<-TourneySeeds_temp
TourneySeeds[duplicated(TourneySeeds[, c("TeamID", "Season")]),]
TourneySlots<-read.csv("data/MNCAATourneySlots.csv")
TourneySlots<-read.csv("data/MNCAATourneySlots_temp.csv") #Update playin-games seeds which may change year to year
TourneySlots[, c("Slot", "StrongSeed", "WeakSeed")]<-sapply(TourneySlots[, c("Slot", "StrongSeed", "WeakSeed")], as.character)
Seasons<-read.csv("data/MSeasons.csv")
Seasons$DayZero<-as.Date(sapply(strsplit(Seasons$DayZero, " "),`[[`,1))
table(Seasons$Season)
head(Seasons)
#all possible slots that each team can be in
getSlots<-function(seed, season){
# slot<-"W11"
slot<-TourneySlots$Slot[TourneySlots$Season==season & (TourneySlots$StrongSeed==seed|TourneySlots$WeakSeed==seed)]
next_slot<-slot
while(slot!="R6CH"){
next_slot<-c(next_slot,
TourneySlots$Slot[TourneySlots$Season==season & (TourneySlots$StrongSeed==slot|TourneySlots$WeakSeed==slot)]
)
slot<- TourneySlots$Slot[TourneySlots$Season==season & (TourneySlots$StrongSeed==slot|TourneySlots$WeakSeed==slot)]
}
data.frame(Seed=seed, Slot=next_slot, Season=season)
}
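The `while` loop in `getSlots()` repeatedly looks up which slot the current winner feeds into, stopping at the championship slot `R6CH`. The same idea on a toy two-round bracket (toy data, not the Kaggle schema):

```r
# Toy slot table: seed W1 plays in R1W1; the R1W1 winner plays in R6CH.
slots <- data.frame(Slot       = c("R1W1", "R6CH"),
                    StrongSeed = c("W1",   "R1W1"),
                    WeakSeed   = c("W16",  "R1X1"),
                    stringsAsFactors = FALSE)
seed <- "W1"
slot <- slots$Slot[slots$StrongSeed == seed | slots$WeakSeed == seed]
path <- slot
while (slot != "R6CH") {
  slot <- slots$Slot[slots$StrongSeed == slot | slots$WeakSeed == slot]
  path <- c(path, slot)
}
path  # "R1W1" "R6CH" -- every slot seed W1 can reach
```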
TourneyRounds<-lapply(1:nrow(TourneySeeds),function(x) getSlots(TourneySeeds$Seed[x], TourneySeeds$Season[x]))
TourneyRounds<-ldply(TourneyRounds,data.frame)
table(TourneyRounds$Season)
#manually add the current season; can delete this once the Kaggle data is released
# TourneySlots2<-TourneySlots[TourneySlots$Season==2021,]
# TourneySlots2$Season<-2022
# TourneySlots<-rbind.fill(TourneySlots, TourneySlots2)
# table(TourneySlots$Season)
# setdiff(TourneySeeds$Seed[TourneySeeds$Season==2022], TourneySlots$StrongSeed[TourneySlots$Season==2022])
#
# TeamGeog<-read.csv("data/TeamGeog.csv")
# colnames(TeamGeog)[2:3]<-c("TeamLat", "TeamLng")
#
# CitiesEnriched<-read.csv("data/CitiesEnriched.csv")
# Cities2<-data.frame(CityID=4444:4449,
# City=c("Florence", "Georgetown", "Seaside", "Kissimmee", "Oakland", "St. Petersburg"),
# State=c("AL", "KY", "CA", "FL", "MI", "FL"),
# Lat=c(34.7998,38.2098,36.6149,28.2920, 37.8044,27.7676),
# Lng=c(-87.6773,-84.5588,-121.8221 ,-81.4076,-122.2711,-82.6403))
# CitiesEnriched<-rbind.fill(CitiesEnriched, Cities2[!Cities2$CityID%in% CitiesEnriched$CityID,])
#
# GameGeog<-read.csv("data/MGameCities.csv")
# GameGeog<-merge(GameGeog, CitiesEnriched[, c("Lat", "Lng", "CityID", "City","State")], by="CityID", all.x = T)
# GameGeog$WTeam<-Teams$Team_Full[match(GameGeog$WTeamID, Teams$TeamID)]
# GameGeog$LTeam<-Teams$Team_Full[match(GameGeog$LTeamID, Teams$TeamID)]
# GameGeog$DayZero<-Seasons$DayZero[match(GameGeog$Season, Seasons$Season)]
# GameGeog$DATE<-GameGeog$DayZero+GameGeog$DayNum
# GameGeog<-melt(GameGeog,id.vars =setdiff(colnames(GameGeog), c("WTeam", "LTeam")), value.name = "Team")
# GameGeog<-GameGeog[, !colnames(GameGeog)%in%c("variable", "WTeamID", "LTeamID")]
# head(GameGeog)
#add more game-distances
# load("data/merged_locs.Rda") #scraped-realgm data
# merged_locs<-merged_locs[, c("Team", "Date", "City", "State.Abb", "Lat", "Lng")]
# colnames(merged_locs)[colnames(merged_locs)%in% c("Date", "State.Abb")]<-c("DATE", "State")
# merged_locs$Season<-ifelse(month(merged_locs$DATE)%in% 10:12, year(merged_locs$DATE)+1, year(merged_locs$DATE))
# GameGeog<-rbind.fill(GameGeog, merged_locs[merged_locs$Season<=2009,])
# table(GameGeog$Season)
#
# write.csv(GameGeog, file="data/GameGeogEnriched.csv") #use these for my other project
#
# TeamGeog$Team_Full<-Teams$Team_Full[match(TeamGeog$team_id, Teams$TeamID)]
# write.csv(TeamGeog, file="data/TeamGeogEnriched.csv") #use these for other project
#
# TourneyGeog<-read.csv("data/TourneyGeog.csv")
# TourneyGeog<-TourneyGeog[, c("season", "slot", "lat", "lng")]
# colnames(TourneyGeog)<-c("Season", "Slot", "Lat", "Lng")
# table(TourneyGeog$Season)
# source("misc-create-tourney-geog.R")
#
# TourneyGeog<-rbind(TourneyGeog, slots2[,c("Season", "Slot", "Lat", "Lng")])
# TourneyGeog<-rbind(TourneyGeog, slots[,c("Season", "Slot", "Lat", "Lng")])
# table(TourneyGeog$Season)
# TeamCoaches<-read.csv("data/MTeamCoaches.csv")
# TeamCoaches$DayZero<-Seasons$DayZero[match(TeamCoaches$Season, Seasons$Season)]
# TeamCoaches$FirstDATE<-TeamCoaches$DayZero+TeamCoaches$FirstDayNum
# TeamCoaches$LastDATE<-TeamCoaches$DayZero+TeamCoaches$LastDayNum
###MASSEY DATA####
#https://www.kaggle.com/masseyratings/rankings/data
Massey_All<-fread("data/MMasseyOrdinals.csv")
colnames(Massey_All)<-c("Season", "DayNum", "Source", "TeamID", "Rank")
Massey_All$DayZero<-Seasons$DayZero[match(Massey_All$Season, Seasons$Season)]
Massey_All$DATE<-Massey_All$DayZero+Massey_All$DayNum
Massey_All$DATE<-as.Date(Massey_All$DATE)
Massey_All<-data.table(Massey_All)
Massey_means<-Massey_All[, list( medRank=as.numeric(median(Rank)),
maxRank=max(Rank), minRank=min(Rank), meanRank=mean(Rank)), by=c("TeamID", "DATE", "Season")]
Massey_means<-Massey_means[order(Massey_means$DATE, decreasing = F), ]
Massey_means<-Massey_means[, `:=`( preRank=meanRank[1]), by=c("TeamID", "Season")]
Massey_All<-data.frame(Massey_All)
Massey_means<-data.frame(Massey_means)
Massey_means<-Massey_means[month(Massey_means$DATE)!=4,]
Massey_All<-Massey_All[month(Massey_All$DATE) !=4,]
Massey_temp<-Massey_All[Massey_All$DATE==as.Date("2019-03-13"),]
Massey_temp$DATE<-as.Date("2019-03-18")
Massey_All<-rbind(Massey_All, Massey_temp)
unique(Massey_All$DATE[Massey_All$Season==2019])
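The `Massey_means` computation above is a data.table grouped aggregation (`dt[, list(...), by = ...]`); the pattern in isolation:

```r
library(data.table)
# Two teams, several daily ranks: summarise per team.
dt  <- data.table(TeamID = c(1, 1, 2, 2), Rank = c(10, 20, 5, 15))
agg <- dt[, list(meanRank = mean(Rank), minRank = min(Rank)), by = "TeamID"]
agg  # TeamID 1: meanRank 15, minRank 10; TeamID 2: meanRank 10, minRank 5
```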
###SCRAPE EXTERNAL DATA#####
getYahoo<-function(year, round){
#year<-2012
if(year==2012){
data<-read_html(paste0("https://web.archive.org/web/20120316095713/http://tournament.fantasysports.yahoo.com:80/t1/group/all/pickdistribution?round=", round))
}
names<-data%>% html_nodes("td")%>% html_text()
if(names[1]!="1"){
names<-names[-c(1:2)]
}
names<-data.frame(matrix(names, ncol=3, byrow=T))
names<-names[, -1]
colnames(names)<-c("Team", "Ownership")
names$Team<-coordTeam(sapply(strsplit(names$Team, " \\("), `[[`, 1))
names$Round<-paste0("R", round)
names$Season<-year
names$Ownership<-as.numeric(names$Ownership)/100
names
}
getYear<-function(year){
print(year)
data.frame(rbindlist(lapply(1:6, function(x) getYahoo(year, x))))
}
whoPicked_yahoo<-getYear(2012)
whoPicked_yahoo$Team[whoPicked_yahoo$Team%in% c("Lamar/vermont")]<-"Vermont"
whoPicked_yahoo$Team[whoPicked_yahoo$Team%in% c("Mvsu/wku")]<-"Western Kentucky"
whoPicked_yahoo$Team[whoPicked_yahoo$Team%in% c("Byu/iona")]<-"Brigham Young"
whoPicked_yahoo$Team[whoPicked_yahoo$Team%in% c("Cal/usf")]<-"Usf"
whoPicked_yahoo<-ddply(whoPicked_yahoo, .(Team, Round, Season),summarize, Ownership=sum(Ownership))
table(whoPicked_yahoo$Round)
setdiff(whoPicked_yahoo$Team, Teams$Team_Full)
readFile<-function(year){
string<-paste0(year, "/WhoPickedWhom.csv")
whoPicked<-read.csv(string)
colnames(whoPicked)<-c("R1", "R2", "R3", "R4", "R5", "R6")
whoPicked[, 7:12]<-sapply(whoPicked, function(x) sapply( strsplit(x, "-"), function(x) x[length(x)]))
whoPicked[, 1:6]<-sapply(whoPicked[, 1:6], function(x) sapply( strsplit(x, "-"), function(x) paste0(x[-length(x)], collapse="-")))
whoPicked[, 1:6]<-sapply(whoPicked[, 1:6], function(x) gsub(paste0(1:16,collapse="|" ), "", x))
whoPicked[,7:12]<-sapply(whoPicked[, 7:12], function(x) as.numeric(gsub("%", "", x))*.01)
#reshape whoPicked data
whoPicked2<-data.frame(Team=rep(NA, 64*6), Round=rep(NA, 64*6), Ownership=rep(NA, 64*6))
row<-1;
for(i in 1:6){
for(team in whoPicked[, i]) {
whoPicked2$Team[row]<-team
whoPicked2$Round[row]<-colnames(whoPicked)[i]
whoPicked2$Ownership[row]<-whoPicked[whoPicked[, i]==team,i+6]
row<-row+1
}
}
whoPicked2$Team<-coordTeam((whoPicked2$Team))
whoPicked2$Season<-year
whoPicked2
}
whoPicked<-ldply(lapply(c(2008:2011, 2013:2019), readFile), data.frame)
whoPicked<-rbind(whoPicked, whoPicked_yahoo)
whoPicked$Team[whoPicked$Team=='Pr / Sc']<-'Providence'
whoPicked$Team[whoPicked$Team=="North Carolina / Ud"]<-'Uc Davis'
whoPicked$Team[whoPicked$Team=="Abil Christian"]<-'Abilene Christian'
whoPicked2<-ldply(lapply(c(2021:2023), readFile), data.frame)
whoPicked2$Team[whoPicked2$Team=="Iu"]<-'Indiana'
whoPicked<-rbind(whoPicked[!whoPicked$Season%in% whoPicked2$Season,], whoPicked2)
setdiff(whoPicked$Team, Teams$Team_Full)
readPage<-function(year){
print(year)
page<-read_html(paste0("https://www.sportsoddshistory.com/cbb-div/?y=",year-1 ,"-", year ,"&sa=cbb&a=tour&o=r1"))
result<-page%>% html_table(fill=T, header=T)
result[[2]]$Region<-"A"
result[[3]]$Region<-"B"
result[[4]]$Region<-"C"
result[[5]]$Region<-"D"
result<-rbindlist(result[2:5])
colnames(result)[2]<-"Final.Four.Odds"
result<-result[, c("Team", "Region", "Final.Four.Odds", "Result")]
result<-result[!result$Team=="Team"&!is.na(result$Team)]
result$Final.Four.Odds<-as.numeric(result$Final.Four.Odds)
result$Final.Four.Odds.dec<-ifelse(result$Final.Four.Odds>0, result$Final.Four.Odds/100+1, -100/result$Final.Four.Odds+1)
result$ImpliedProb<-1/result$Final.Four.Odds.dec
result[, ImpliedProb.sum:=sum(ImpliedProb), by="Region"]
result[, ImpliedProb:=ImpliedProb/ImpliedProb.sum]
result$Year<-year
result
}
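`readPage()` converts American moneyline odds to decimal odds, takes implied probabilities, and renormalizes them within each region so they sum to one (removing the bookmaker's overround). The arithmetic on its own:

```r
# American -> decimal odds: +150 pays 2.5x the stake, -200 pays 1.5x.
american_to_decimal <- function(odds) {
  ifelse(odds > 0, odds / 100 + 1, -100 / odds + 1)
}
odds <- c(150, -200)
dec  <- american_to_decimal(odds)   # 2.5 1.5
prob <- (1 / dec) / sum(1 / dec)    # normalize away the overround
round(prob, 3)  # 0.375 0.625
```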
final.four.odds<-ldply(lapply(c(2013:2019, 2021), readPage), data.frame)
final.four.odds$Team<-coordTeam(final.four.odds$Team)
final.four.odds$Team[final.four.odds$Team=="Stjohns"]<-"St Johns"
setdiff(final.four.odds$Team, Teams$Team_Full)
save(list=ls()[ls()%in% c( "TR_Rank","whoPicked", "final.four.odds",# "SCurve" , "KenPom", "SAG_Rank","march538", #scraped data
"Seasons","Teams", "TourneySlots","TeamCoaches","TourneySeeds_temp",
"TourneySeeds","TourneyRounds","NCAATourneyDetailedResults","RegularSeasonDetailedResult","SecondaryTourneyDetailedResults",
"TourneyGeog", "TeamGeog", "GameGeog" ,"CitiesEnriched","Massey_means", "Massey_All" #team/tourney data
)], file="data/game-data.RData")
# load("data/game-data.RData") | /scrape-data.R | no_license | dlm1223/march-madness | R | false | false | 11,583 | r |
library(shiny)
library(shinythemes)
library(DT)
library(ggplot2)
library(car)
library(nortest)
library(tseries)
library(RcmdrMisc)
library(lmtest)
datos <-read.csv("www/dataset.csv",dec = ",")
shinyServer(function(input, output) {
output$RawData <- DT::renderDataTable(
DT::datatable({
datos
},
options = list(lengthMenu=list(c(5,15,20),c('5','15','20')),pageLength=10,
initComplete = JS(
"function(settings, json) {",
"$(this.api().table().header()).css({'background-color': 'moccasin', 'color': '1c1b1b'});",
"}"),
columnDefs=list(list(className='dt-center',targets="_all"))
),
filter = "top",
selection = 'multiple',
style = 'bootstrap',
class = 'cell-border stripe',
rownames = FALSE,
colnames = c("Subregion","Municipality","Projected population","Thefts","Traffic accidents","Homicides","School deserters","Sports venues","Extortions","Personal injuries")
))
variabletrans <- reactive({
attach(datos)
if(input$Transformacion == 0){Lesiones.t=log(LesionesPer)}else{Lesiones.t=LesionesPer^as.double(input$Transformacion)}
if(input$Transformacion ==1){titulo="Personal Injuries"}else{
if(input$Transformacion ==0){titulo="Personal Injuries Logarithm"}else{titulo="Transformed Personal Injuries"}}
Resultado <- cbind(Lesiones.t,titulo)
})
variabletrans2 <- reactive({
attach(datos)
if(input$Transformacion == 0){Lesiones.t=log(LesionesPer)}else{Lesiones.t=LesionesPer^as.double(input$Transformacion)}
if(input$Transformacion ==1){titulo="Personal Injuries"}else{titulo="Transformed Personal Injuries"}
Lesiones.t
})
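The transformation reactives implement a Box-Cox-style power family, where λ = 0 is mapped to the logarithm (the limiting case of the power transform). The rule isolated in a helper (a sketch, not the app's code):

```r
# Power-transform x with exponent lambda; lambda == 0 means log(x),
# matching the convention used by the reactives above.
power_transform <- function(x, lambda) {
  if (lambda == 0) log(x) else x^lambda
}
power_transform(c(1, 4, 9), 0.5)  # 1 2 3 (square roots)
power_transform(exp(1), 0)        # 1 (natural log)
```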
output$Histograma <- renderPlot({
ggplot(NULL,aes(as.double(variabletrans()[,1])))+geom_histogram(bins=nclass.Sturges(as.double(variabletrans()[,1])),color="white",
fill="seagreen1",aes(y=..density..),lwd=0.8)+geom_density(color="seagreen4",
alpha=0.3,fill="seagreen4",lty=1)+
labs(title = paste(variabletrans()[1,2], "\n histogram"),x=variabletrans()[1,2],y="Density")+
theme(plot.title = element_text(color="navy", size=15, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=13, face="bold"),
axis.title.y = element_text(color="navy", size=13, face="bold"))
})
output$Boxplot <- renderPlot({
ggplot(NULL,aes(x=0,y=as.double(variabletrans()[,1])))+geom_boxplot(color="black",fill="skyblue",alpha=0.5)+ stat_summary(fun.y=mean, colour="darkred", geom="point",shape=18, size=3)+
labs(title = paste(variabletrans()[1,2], "\n boxplot"),x="",y=variabletrans()[1,2])+
theme(plot.title = element_text(color="navy", size=15, face="bold.italic",hjust=0.5),
axis.title.x = element_text(),
axis.text.x = element_blank(),
axis.ticks.x = element_blank(),
axis.title.y = element_text(color="navy", size=13, face="bold"))
})
output$qqPlot <- renderPlot({
par(font.main=4,font.lab=2,col.main="navy",col.lab="navy",cex.lab=1.1)
qqPlot(as.double(variabletrans()[,1]),grid=F,xlab="",ylab="")
u <-par("usr")
rect(u[1], u[3], u[2], u[4], col="#EBE9E9", border=TRUE)
grid(NULL,NULL,col="white",lty=1)
par(new=TRUE)
qqPlot(as.double(variabletrans()[,1]),col="coral",pch=16,id=T,lwd=1.9,col.lines = "black",grid = F, main = paste(variabletrans()[1,2], "\n Q-Q plot"),xlab="Normal quantiles",ylab=variabletrans()[1,2])
box(col="white")
})
testanalitico <- reactive({
prueba <- switch(as.character(input$PruebaAnalitica),
"1" = shapiro.test(as.double(variabletrans()[,1])),
"2" = ad.test(as.double(variabletrans()[,1])),
"3" = cvm.test(as.double(variabletrans()[,1])),
"4" = lillie.test(as.double(variabletrans()[,1])),
jarque.bera.test(as.double(variabletrans()[,1])))
prueba$data.name <- variabletrans()[1,2]
prueba
})
output$Prueba <- renderPrint({
testanalitico()
})
output$Conclusion1 <- renderText({
if(testanalitico()$p.value < 0.05){mensaje="We reject the null hypothesis: the variable is not normally distributed"}else{mensaje="We fail to reject the null hypothesis: the variable is consistent with a normal distribution"}
mensaje
})
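The decision rule behind `Conclusion1` is the standard α = 0.05 cutoff applied to the test's p-value; for example, with a seeded normal sample:

```r
set.seed(1)
x  <- rnorm(200)                  # a genuinely normal sample
pv <- shapiro.test(x)$p.value
if (pv < 0.05) "reject normality" else "fail to reject normality"
```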
output$ReadMore <- renderUI({
url <- switch(as.character(input$PruebaAnalitica),
"1" = "https://en.wikipedia.org/wiki/Shapiro%E2%80%93Wilk_test",
"2" = "https://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test",
"3" = "https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93von_Mises_criterion",
"4" = "https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test",
"https://en.wikipedia.org/wiki/Jarque%E2%80%93Bera_test")
p("Read more about this test here → ", a(href=url, icon("wikipedia-w"), target="_blank"), style="color:black;text-align:center")
})
Trans1 <- reactive({
attach(datos)
if(input$Transformacion1 == 0){ProjectedPopulation.t=log(ProjectedPopulation)}else{ProjectedPopulation.t=ProjectedPopulation^as.double(input$Transformacion1)}
if(input$Transformacion1 ==1){titulo="Projected Population"}else{
if(input$Transformacion1==0){titulo="Projected Population Logarithm"}else{titulo="Transformed Projected Population"}}
Resultado1 <- cbind(ProjectedPopulation.t,titulo)
})
Trans2 <- reactive({
attach(datos)
if(input$Transformacion2 == 0){Thefts.t=log(Thefts)}else{Thefts.t=Thefts^as.double(input$Transformacion2)}
if(input$Transformacion2 ==1){titulo="Thefts"}else{
if(input$Transformacion2==0){titulo="Thefts Logarithm"}else{titulo="Transformed Thefts"}}
Resultado2 <- cbind(Thefts.t,titulo)
})
Trans3 <- reactive({
attach(datos)
if(input$Transformacion3 == 0){TrafAccid.t=log(TrafAccid)}else{TrafAccid.t=TrafAccid^as.double(input$Transformacion3)}
if(input$Transformacion3 ==1){titulo="Traffic accidents"}else{
if(input$Transformacion3==0){titulo="Traffic accidents Logarithm"}else{titulo="Transformed Traffic accidents"}}
Resultado3 <- cbind(TrafAccid.t,titulo)
})
Trans4 <- reactive({
attach(datos)
if(input$Transformacion4 == 0){Homicides.t=log(Homicides)}else{Homicides.t=Homicides^as.double(input$Transformacion4)}
if(input$Transformacion4 ==1){titulo="Homicides"}else{
if(input$Transformacion4 ==0){titulo="Homicides Logarithm"}else{titulo="Transformed Homicides"}}
Resultado4 <- cbind(Homicides.t,titulo)
})
Trans5 <- reactive({
attach(datos)
if(input$Transformacion5 == 0){SchoolDes.t=log(SchoolDes)}else{SchoolDes.t=SchoolDes^as.double(input$Transformacion5)}
if(input$Transformacion5 ==1){titulo="School deserters"}else{
if(input$Transformacion5 ==0){titulo="School deserters Logarithm"}else{titulo="Transformed School deserters"}}
Resultado5 <- cbind(SchoolDes.t,titulo)
})
Trans6 <- reactive({
attach(datos)
if(input$Transformacion6 == 0){SportsScenari.t=log(SportsScenari)}else{SportsScenari.t=SportsScenari^as.double(input$Transformacion6)}
if(input$Transformacion6 ==1){titulo="Sports venues"}else{
if(input$Transformacion6 ==0){titulo="Sports venues Logarithm"}else{titulo="Transformed Sports venues"}}
Resultado6 <- cbind(SportsScenari.t,titulo)
})
Trans7 <- reactive({
attach(datos)
if(input$Transformacion7 == 0){Extortions.t=log(Extortions)}else{Extortions.t=Extortions^as.double(input$Transformacion7)}
if(input$Transformacion7 ==1){titulo="Extortions"}else{
if(input$Transformacion7 ==0){titulo="Extortions Logarithm"}else{titulo="Transformed Extortions"}}
Resultado7 <- cbind(Extortions.t,titulo)
})
output$Dispersion1 <- renderPlot({
ggplot(NULL,aes(x=as.double(Trans1()[,1]),y=as.double(variabletrans()[,1])))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="seagreen3")+
labs(title = paste("\n",variabletrans()[1,2], "vs", Trans1()[1,2], "\n"),x=paste(Trans1()[1,2]),y=paste(variabletrans()[1,2],"\n"))+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
output$correlacion1 <- renderText({
corr <- cor(as.double(Trans1()[,1]),as.double(variabletrans()[,1]))
mensaje <- paste("Cor = ", corr)
mensaje
})
output$Dispersion2 <- renderPlot({
ggplot(NULL,aes(x=as.double(Trans2()[,1]),y=as.double(variabletrans()[,1])))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="peachpuff")+
labs(title = paste("\n",variabletrans()[1,2], "vs", Trans2()[1,2], "\n"),x=paste(Trans2()[1,2]),y=paste(variabletrans()[1,2],"\n"))+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
output$correlacion2 <- renderText({
corr <- cor(as.double(Trans2()[,1]),as.double(variabletrans()[,1]))
mensaje <- paste("Cor = ", corr)
mensaje
})
output$Dispersion3 <- renderPlot({
ggplot(NULL,aes(x=as.double(Trans3()[,1]),y=as.double(variabletrans()[,1])))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="cyan")+
labs(title = paste("\n",variabletrans()[1,2], "vs", Trans3()[1,2], "\n"),x=paste(Trans3()[1,2]),y=paste(variabletrans()[1,2],"\n"))+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
output$correlacion3 <- renderText({
corr <- cor(as.double(Trans3()[,1]),as.double(variabletrans()[,1]))
mensaje <- paste("Cor = ", corr)
mensaje
})
output$Dispersion4 <- renderPlot({
ggplot(NULL,aes(x=as.double(Trans4()[,1]),y=as.double(variabletrans()[,1])))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="violet")+
labs(title = paste("\n",variabletrans()[1,2], "vs", Trans4()[1,2], "\n"),x=paste(Trans4()[1,2]),y=paste(variabletrans()[1,2],"\n"))+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
output$correlacion4 <- renderText({
corr <- cor(as.double(Trans4()[,1]),as.double(variabletrans()[,1]))
mensaje <- paste("Cor = ", corr)
mensaje
})
output$Dispersion5 <- renderPlot({
ggplot(NULL,aes(x=as.double(Trans5()[,1]),y=as.double(variabletrans()[,1])))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="sandybrown")+
labs(title = paste("\n",variabletrans()[1,2], "vs", Trans5()[1,2], "\n"),x=paste(Trans5()[1,2]),y=paste(variabletrans()[1,2],"\n"))+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
output$correlacion5 <- renderText({
corr <- cor(as.double(Trans5()[,1]),as.double(variabletrans()[,1]))
mensaje <- paste("Cor = ", corr)
mensaje
})
output$Dispersion6 <- renderPlot({
ggplot(NULL,aes(x=as.double(Trans6()[,1]),y=as.double(variabletrans()[,1])))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="hotpink")+
labs(title = paste("\n",variabletrans()[1,2], "vs", Trans6()[1,2], "\n"),x=paste(Trans6()[1,2]),y=paste(variabletrans()[1,2],"\n"))+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
output$correlacion6 <- renderText({
corr <- cor(as.double(Trans6()[,1]),as.double(variabletrans()[,1]))
mensaje <- paste("Cor = ", corr)
mensaje
})
output$Dispersion7 <- renderPlot({
ggplot(NULL,aes(x=as.double(Trans7()[,1]),y=as.double(variabletrans()[,1])))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="yellow")+
labs(title = paste("\n",variabletrans()[1,2], "vs", Trans7()[1,2], "\n"),x=paste(Trans7()[1,2]),y=paste(variabletrans()[1,2],"\n"))+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
output$correlacion7 <- renderText({
corr <- cor(as.double(Trans7()[,1]),as.double(variabletrans()[,1]))
mensaje <- paste("Cor = ", corr)
mensaje
})
Modelcuanti <- reactive({
cuantis <- datos[,c(3:10)]
xnam <- paste0(colnames(cuantis[as.double(input$variablescuantis)]))
if(length(xnam)==0)
{
fmla <- as.formula(paste(deparse(substitute(variabletrans2())), "~ 1"))
nombrefmla <- paste(variabletrans()[1,2], "~ 1")
}
else
{
fmla <- as.formula(paste(deparse(substitute(variabletrans2())), "~",paste(xnam,collapse = " + ")))
nombrefmla <- paste(variabletrans()[1,2], "~",paste(xnam,collapse = " + "))
}
Model1 <- lm(fmla)
Model1[["call"]] <- nombrefmla
Model1
})
output$Model1 <- renderPrint({
summary(Modelcuanti())
})
output$VIF <- renderPrint({
if(length(Modelcuanti()$coefficients)<=2){
mensaje="The model must have at least two explanatory variables to execute this function"
mensaje
}
else
{
vif(Modelcuanti())
}
})
output$Alarm <- renderText({
if(length(Modelcuanti()$coefficients)<=2){
alerta = "The model does not have enough independent variables"
alerta
}
else
{
listadevifs <- vif(Modelcuanti())
mensaje="There are no multicollinearity problems"
nombres <- vector(mode = "numeric",length = 7)
for(i in 1:length(listadevifs)){
if(listadevifs[[i]]>5){
mensaje="There are multicollinearity problems in the following variables:"
nombres[i] = i
}
}
variablesconproblemas <- paste(names(listadevifs[nombres]),collapse = ", ")
if(all(nombres == 0)){
mensaje
}
else
{
paste(mensaje,variablesconproblemas,". You should keep only one of these.")
}
}
})
output$Determination <- renderText({
valoresp <- summary(Modelcuanti())$coefficients[,4]
mensaje1 = "All the model parameters are significant at the 95% confidence level"
nombresbetas <- vector(mode = "numeric",length = 6)
for(i in 1:length(valoresp)){
if(valoresp[[i]]>0.05){
mensaje1="Some parameters are not significant at the 95% confidence level, that is, some relationships between variables are not important. These parameters correspond to:"
nombresbetas[i]=i
}
}
betasnosignificativos <- paste(names(valoresp[nombresbetas]),collapse = ", ")
paste(mensaje1, betasnosignificativos)
})
output$AdjustedDetermination <- renderText({
Rajustado <- summary(Modelcuanti())$adj.r.squared
paste("With the current model you got an adjusted R squared of",Rajustado, ". But remember: if there are multicollinearity problems this result is not reliable, because the estimates can be quite imprecise and the variances and confidence intervals will be too wide")
})
Modelofinalfinal <- reactive({
todasvariables <- cbind(datos[,c(3:9)],as.double(Trans1()[,1]),as.double(Trans2()[,1]),as.double(Trans3()[,1]),as.double(Trans4()[,1]),as.double(Trans5()[,1]),as.double(Trans6()[,1]),as.double(Trans7()[,1]))
todasvariables <- todasvariables[, c(matrix(1:ncol(todasvariables), nrow = 2, byrow = T))]
nombrestodas <- c("Projected population","Transformed Projected population","Thefts","Transformed Thefts","Traffic accidents","Transformed Traffic accidents","Homicides","Transformed Homicides","School deserters","Transformed School deserters","Sports venues", "Transformed Sports venues","Extortions", "Transformed Extortions")
variables <- c(input$variablesincluidas,input$incluirtrans)
variables <- sort(as.double(variables), decreasing = FALSE)
xnames <- paste0(colnames(todasvariables[as.double(variables)]))
if(length(xnames)==0)
{
fmla2 <- as.formula(paste(deparse(substitute(variabletrans2())), "~ 1"))
nombrefmla2 <- paste(variabletrans()[1,2], "~ 1")
}
else
{
fmla2 <- as.formula(paste(deparse(substitute(variabletrans2())), "~",paste(xnames,collapse = " + ")))
nombrefmla2 <- paste(variabletrans()[1,2], "~",paste(nombrestodas[as.double(variables)],collapse = " + "))
}
Model2 <- lm(fmla2)
Model2[["call"]] <- nombrefmla2
names(Model2$coefficients) <- c("Intercept",nombrestodas[as.double(variables)])
Model2
})
output$finalmodel <- renderPrint({
summary(Modelofinalfinal())
})
output$Significancy2 <- renderUI({
valoresp2 <- summary(Modelofinalfinal())$coefficients[,4]
mensaje2 = "All the model parameters are significant at the 95% confidence level"
nombresbetas2 <- vector(mode = "numeric",length = 15)
for(i in 1:length(valoresp2)){
if(valoresp2[[i]]>0.05){
mensaje2="Some parameters are not significant at the 95% confidence level, that is, some relationships between variables are not important. These parameters correspond to:"
nombresbetas2[i]=i
}
}
betasnosignificativos2 <- paste(names(valoresp2[nombresbetas2]),collapse = ", ")
p(paste(mensaje2, betasnosignificativos2),style="padding:25px;background-color:lavender;border-left:8px solid blue;border-top: 1px solid black;border-right:1px solid black;border-bottom: 1px solid black;color:black;text-align:center")
})
output$Anothermessage <- renderUI({
p("Variables whose betas are not significant should be removed from the model; try removing them one at a time, prioritizing those whose betas have the highest p-values (Pr(>|t|))",style="padding:25px;background-color:papayawhip;border-left:8px solid coral;border-top: 1px solid black;border-right:1px solid black;border-bottom: 1px solid black;color:black;text-align:center")
})
output$FinalAlarma <- renderUI({
multicoli <- summary(Modelofinalfinal())$coefficients[,4]
contador = 0
for(i in 1:length(names(multicoli))){
if(names(multicoli[i])=="Projected population"| names(multicoli[i])=="School deserters")
{
contador = contador + 1
}
}
if(contador >= 2)
{
mensaje = "There are multicollinearity problems, remember the analysis of the previous section"
}
else
{
mensaje = "It seems that there are no problems of multicollinearity, you are doing great so far"
}
p(mensaje, style="padding:25px;background-color:papayawhip;border-left:8px solid coral;border-top: 1px solid black;border-right:1px solid black;border-bottom: 1px solid black;color:black;text-align:center" )
})
selecciondevariables <- reactive({
todasvariables2 <- cbind(datos[,c(3:9)],as.double(Trans1()[,1]),as.double(Trans2()[,1]),as.double(Trans3()[,1]),as.double(Trans4()[,1]),as.double(Trans5()[,1]),as.double(Trans6()[,1]),as.double(Trans7()[,1]))
todasvariables2 <- todasvariables2[, c(matrix(1:ncol(todasvariables2), nrow = 2, byrow = T))]
nombrestodas2 <- c("Projected population","Transformed Projected population","Thefts","Transformed Thefts","Traffic accidents","Transformed Traffic accidents","Homicides","Transformed Homicides","School deserters","Transformed School deserters","Sports venues", "Transformed Sports venues","Extortions", "Transformed Extortions")
variables <- c(input$variablesincluidas,input$incluirtrans)
variables <- sort(as.double(variables), decreasing = FALSE)
xnames2 <- paste0(colnames(todasvariables2[as.double(variables)]))
if(length(xnames2)==0)
{
fmla3 <- as.formula(paste(deparse(substitute(variabletrans2())), "~ 1"))
}
else
{
fmla3 <- as.formula(paste(deparse(substitute(variabletrans2())), "~",paste(xnames2,collapse = " + ")))
}
Model3 <- lm(fmla3)
Model3
Modelbase <- stepwise(Model3,direction = input$direccion,criterion = input$criterio)
Modelbase
})
Modelazofinal <- reactive({
formulita <- selecciondevariables()$terms
if(any(grepl("Trans1", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("ProjectedPopulation", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + ProjectedPopulation)
}
}
if(any(grepl("Trans2", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("Thefts", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + Thefts)
}
}
if(any(grepl("Trans3", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("TrafAccid", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + TrafAccid)
}
}
if(any(grepl("Trans4", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("Homicides", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + Homicides)
}
}
if(any(grepl("Trans5", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("SchoolDes", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + SchoolDes)
}
}
if(any(grepl("Trans6", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("SportsScenari", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + SportsScenari)
}
}
if(any(grepl("Trans7", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("Extortions", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + Extortions)
}
}
Model4 <- lm(formulita)
Model4[["call"]] <- selecciondevariables()$call
Model4[["call"]] <- gsub("[(]","",Model4[["call"]])
Model4[["call"]] <- gsub("[)]","",Model4[["call"]])
Model4[["call"]] <- gsub("variabletrans2",variabletrans()[1,2],Model4[["call"]])
Model4[["call"]] <- paste0(Model4[["call"]][1],"(formula = ",Model4[["call"]][2],")")
names(Model4$coefficients) <- gsub("[.]","",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("[,]","",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("[(]","",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("[)]","",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("[[]","",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("[]]","",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("asdoubleTrans1 1","Transformed Projected Population",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("asdoubleTrans2 1","Transformed Thefts",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("asdoubleTrans3 1","Transformed TrafAccid",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("asdoubleTrans4 1","Transformed Homicides",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("asdoubleTrans5 1","Transformed SchoolDes",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("asdoubleTrans6 1","Transformed SportsScenari",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("asdoubleTrans7 1","Transformed Extortions",names(Model4$coefficients))
Model4
})
output$ModeloBack <- renderPrint({
invisible(capture.output(modelo <- summary(Modelazofinal())))
modelo
})
output$Determinacionfinal <- renderUI({
coeficiente = summary(Modelazofinal())$adj.r.squared
p(paste("With the final model you built, you get an adjusted R squared of:",coeficiente),style="padding:25px;background-color:papayawhip;border-left:8px solid coral;border-top: 1px solid black;border-right:1px solid black;border-bottom: 1px solid black;color:black;text-align:center" )
})
output$Histograma2 <- renderPlot({
ggplot(NULL,aes(Modelazofinal()$residuals))+geom_histogram(bins=nclass.Sturges(Modelazofinal()$residuals),color="white",
fill="seagreen1",aes(y=..density..),lwd=0.8)+geom_density(color="seagreen4",
alpha=0.3,fill="seagreen4",lty=1)+
labs(title = "Residuals \n histogram",x="Residuals",y="Density")+
theme(plot.title = element_text(color="navy", size=15, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=13, face="bold"),
axis.title.y = element_text(color="navy", size=13, face="bold"))
})
output$Boxplot2 <- renderPlot({
ggplot(NULL,aes(x=0,y=Modelazofinal()$residuals))+geom_boxplot(color="black",fill="skyblue",alpha=0.5)+ stat_summary(fun.y=mean, colour="darkred", geom="point",shape=18, size=3)+
labs(title = " Residuals \n boxplot",x="",y="Residuals")+
theme(plot.title = element_text(color="navy", size=15, face="bold.italic",hjust=0.5),
axis.title.x = element_text(),
axis.text.x = element_blank(),
axis.ticks.x = element_blank(),
axis.title.y = element_text(color="navy", size=13, face="bold"))
})
output$qqPlot2 <- renderPlot({
par(font.main=4,font.lab=2,col.main="navy",col.lab="navy",cex.lab=1.1)
qqPlot(Modelazofinal()$residuals,grid=F,xlab="",ylab="")
u <-par("usr")
rect(u[1], u[3], u[2], u[4], col="#EBE9E9", border=TRUE)
grid(NULL,NULL,col="white",lty=1)
par(new=TRUE)
qqPlot(Modelazofinal()$residuals,col="coral",pch=16,id=T,lwd=1.9,col.lines = "black",grid = F, main = "Residuals \n Q-Q plot",xlab="Normal quantiles",ylab="Residuals")
box(col="white")
})
testanalitico2 <- reactive({
if(input$PruebaAnalitica2 == 1){
prueba <- shapiro.test(Modelazofinal()$residuals)
} else if(input$PruebaAnalitica2 == 2){
prueba <- ad.test(Modelazofinal()$residuals)
} else if(input$PruebaAnalitica2 == 3){
prueba <- cvm.test(Modelazofinal()$residuals)
} else if(input$PruebaAnalitica2 == 4){
prueba <- lillie.test(Modelazofinal()$residuals)
} else {
prueba <- jarque.bera.test(Modelazofinal()$residuals)
}
prueba$data.name <- "Model Residuals"
prueba
})
output$Prueba2 <- renderPrint({
testanalitico2()
})
output$Conclusion12 <- renderText({
if(testanalitico2()$p.value < 0.05){mensaje="We reject the null hypothesis: the residuals are not normally distributed"}else{mensaje="We fail to reject the null hypothesis: the residuals can be considered normally distributed"}
mensaje
})
output$ReadMore2 <- renderUI({
if(input$PruebaAnalitica2 == 1){
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Shapiro%E2%80%93Wilk_test", icon("wikipedia-w"),target="_blank"),style="color:black")
} else if(input$PruebaAnalitica2 == 2){
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test", icon("wikipedia-w"),target="_blank"),style="color:black")
} else if(input$PruebaAnalitica2 == 3){
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93von_Mises_criterion", icon("wikipedia-w"),target="_blank"),style="color:black")
} else if(input$PruebaAnalitica2 == 4){
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test", icon("wikipedia-w"),target="_blank"),style="color:black")
} else {
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Jarque%E2%80%93Bera_test", icon("wikipedia-w"),target="_blank"),style="color:black")
}
More
})
output$FittedVsResiduals <- renderPlot({
ggplot(NULL,aes(x=Modelazofinal()$fitted.values,y=Modelazofinal()$residuals))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="violet")+
labs(title = "Model fitted values vs residuals",x="Model fitted values",y="Model residuals")+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
formulit <- reactive({
formulita <- selecciondevariables()$terms
if(any(grepl("Trans1", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("ProjectedPopulation", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + ProjectedPopulation)
}
}
if(any(grepl("Trans2", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("Thefts", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + Thefts)
}
}
if(any(grepl("Trans3", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("TrafAccid", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + TrafAccid)
}
}
if(any(grepl("Trans4", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("Homicides", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + Homicides)
}
}
if(any(grepl("Trans5", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("SchoolDes", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + SchoolDes)
}
}
if(any(grepl("Trans6", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("SportsScenari", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + SportsScenari)
}
}
if(any(grepl("Trans7", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("Extortions", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + Extortions)
}
}
formulita
})
testanalitico3 <- reactive({
if(input$PruebaAnalitica3 == 1){
modelo <- Modelazofinal()
prueba <- bptest(modelo)
prueba$data.name <- "Model Residuals"
}
else
{
modelo <- lm(formulit())
prueba <- ncvTest(modelo)
}
prueba
})
output$Prueba3 <- renderPrint({
testanalitico3()
})
output$Conclusion13 <- renderText({
valorp <- if(input$PruebaAnalitica3 == 1){testanalitico3()$p.value}else{testanalitico3()$p}
if(valorp < 0.05){mensaje="We reject the null hypothesis: the residuals are not homoscedastic"}else{mensaje="We fail to reject the null hypothesis: the residuals are homoscedastic"}
mensaje
})
output$ReadMore3 <- renderUI({
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Breusch%E2%80%93Pagan_test", icon("wikipedia-w"),target="_blank"),style="color:black")
More
})
output$ACF <- renderPlot({
par(font.main=4,font.lab=2,col.main="navy",col.lab="navy",cex.lab=1.1)
acf(Modelazofinal()$residuals,ylim=c(-1,1),main="")
u <-par("usr")
rect(u[1], u[3], u[2], u[4], col="#EBE9E9", border=TRUE)
grid(NULL,NULL,col="white",lty=1)
par(new=TRUE)
acf(Modelazofinal()$residuals,ylim=c(-1,1),main="Model residuals ACF")
box(col="white")
})
output$PACF <- renderPlot({
par(font.main=4,font.lab=2,col.main="navy",col.lab="navy",cex.lab=1.1)
pacf(Modelazofinal()$residuals,ylim=c(-1,1),main="")
u <-par("usr")
rect(u[1], u[3], u[2], u[4], col="#EBE9E9", border=TRUE)
grid(NULL,NULL,col="white",lty=1)
par(new=TRUE)
pacf(Modelazofinal()$residuals,ylim=c(-1,1),main="Model residuals PACF")
box(col="white")
})
output$ResVsIndex <- renderPlot({
ggplot(NULL,aes(x=seq_along(Modelazofinal()$residuals),y=Modelazofinal()$residuals))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="violet")+
labs(title = "Residuals vs Index",x="Index",y="Model residuals")+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
testanalitico4 <- reactive({
if(input$PruebaAnalitica4 == 1){
prueba <- dwtest(Modelazofinal(),alternative = "two.sided")
}
else
{
prueba <- bgtest(Modelazofinal())
}
prueba$data.name <- "Model Residuals"
prueba
})
output$Prueba4 <- renderPrint({
testanalitico4()
})
output$Conclusion14 <- renderText({
if(testanalitico4()$p.value < 0.05){mensaje="We reject the null hypothesis: the residuals are autocorrelated"}else{mensaje="We fail to reject the null hypothesis: the residuals are not autocorrelated"}
mensaje
})
output$ReadMore4 <- renderUI({
if(input$PruebaAnalitica4 == 1){
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Durbin%E2%80%93Watson_statistic", icon("wikipedia-w"),target="_blank"),style="color:black")
}
else
{
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Breusch%E2%80%93Godfrey_test", icon("wikipedia-w"),target="_blank"),style="color:black")
}
More
})
output$Answer1 <- renderUI({
actionID <- input$Question1
if(!is.null(actionID)){
if(input$Question1 == 1){mensaje = "That is right! This equation represents a straight line: a function of two variables that are related in a perfect, deterministic way."}
if(input$Question1 == 2){mensaje = "It is not correct! Note that this equation contains a term representing random factors (e). These random factors make the relationship between X and Y neither perfect nor deterministic."}
if(input$Question1 == 3){mensaje = "It is not correct! This equation represents a fit; note that it includes the fitted response variable, not the original one."}
if(input$Question1 == 4){mensaje = "It is not correct! This expression involves a fitted variable and a constant factor, so it is not even a relationship between two variables."}
if(input$Question1 == 1){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer2 <- renderUI({
actionID <- input$Question2
if(!is.null(actionID)){
if(input$Question2 == 1){mensaje = "It is not correct! This is an important factor for making inferences with these models, but is it the only one?"}
if(input$Question2 == 2){mensaje = "It is not correct! This is an important factor for making inferences with these models, but is it the only one?"}
if(input$Question2 == 3){mensaje = "It is correct! We need the variability of the response variable to be explained by the independent variables (option a) and also the statistical assumptions to be fulfilled (option b)."}
if(input$Question2 == 4){mensaje = "It is not correct, just try again!"}
if(input$Question2 == 3){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer3 <- renderUI({
actionID <- input$Question3
if(!is.null(actionID)){
if(input$Question3 == 1){mensaje = "That is right! For a null hypothesis stating that the model residuals are not autocorrelated, the Durbin-Watson independence test shows that at least one lag has a significant autocorrelation coefficient."}
if(input$Question3 == 2){mensaje = "It is not correct! We cannot know this from the Durbin-Watson test, whose purpose is to test the independence of the residuals."}
if(input$Question3 == 3){mensaje = "It is not correct! We cannot know whether every autocorrelation coefficient is significant; to check this we should use graphical tools such as the ACF plot."}
if(input$Question3 == 4){mensaje = "It is not correct! In fact, we can draw a conclusion about the null hypothesis that the model residuals are not autocorrelated, and we can do it using the p-value."}
if(input$Question3 == 1){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer4 <- renderUI({
actionID <- input$Question4
if(!is.null(actionID)){
if(input$Question4 == 1){mensaje = "It is not right! Remember that the square root of the R squared is the correlation coefficient, and that the R squared is calculated in the following way: "}
if(input$Question4 == 2){mensaje = "It is correct! You made the right calculations. Was it just luck? Then check the concepts of R squared and correlation coefficient in the Glossary section."}
if(input$Question4 == 3){mensaje = "It is not correct! Remember that the square root of the R squared is the correlation coefficient, and that the R squared is calculated in the following way: "}
if(input$Question4 == 4){mensaje = "It is not correct! Remember that the square root of the R squared is the correlation coefficient, and that the R squared is calculated in the following way: "}
if(input$Question4 == 2){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,withMathJax('\\( R^2 = \\frac{SSR}{SSR+SSE} \\)'),icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer5 <- renderUI({
actionID <- input$Question5
if(!is.null(actionID)){
if(input$Question5 == 1){mensaje = "It is not right! This mathematical expression tells us that the sums of the real values and the fitted values are equal, so at an overall level the fit could be considered perfect, but we must understand the point-to-point fit in another way"}
if(input$Question5 == 2){mensaje = "It is not correct! If the point-to-point fit were perfect, there would be no terms associated with randomness and uncontrollable factors"}
if(input$Question5 == 3){mensaje = "It is not correct! If the point-to-point fit were perfect, there would be no terms associated with randomness and uncontrollable factors"}
if(input$Question5 == 4){mensaje = "It is correct! This mathematical expression tells us that the sums of the real values and the fitted values are equal, so at an overall level the fit could be considered perfect, even though at each point there is some error"}
if(input$Question5 == 4){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer6 <- renderUI({
actionID <- input$Question6
if(!is.null(actionID)){
if(input$Question6 == 1){mensaje = "It is not correct! Actually the R squared increases as more variables are added to the model."}
if(input$Question6 == 2){mensaje = "That is right! That is exactly the answer, but was it just luck? Then review the Glossary section to better understand the concepts of R squared and adjusted R squared."}
if(input$Question6 == 3){mensaje = "It is not correct! This does not make much sense; review the Glossary section to better understand the concepts of R squared and adjusted R squared."}
if(input$Question6 == 4){mensaje = "It is not correct! This does not make much sense; review the Glossary section to better understand the concepts of R squared and adjusted R squared."}
if(input$Question6 == 2){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer7 <- renderUI({
actionID <- input$Question7
if(!is.null(actionID)){
if(input$Question7 == 1){mensaje = "It is not correct! Actually the points in the diagram are not that scattered, and we can see a fairly strong linear relationship."}
if(input$Question7 == 2){mensaje = "That is not right! In the diagram we can see how the points follow the shape of a straight line."}
if(input$Question7 == 3){mensaje = "It is not correct! The relationship seems to be strong so it would be worthwhile to perform a regression model between these variables."}
if(input$Question7 == 4){mensaje = "It is right! The relationship seems to be strong so it would be worthwhile to perform a regression model between these variables."}
if(input$Question7 == 4){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer8 <- renderUI({
actionID <- input$Question8
if(!is.null(actionID)){
if(input$Question8 == 1){mensaje = "It is not correct! Plotting the residuals of the model against its fitted values tells us nothing about the normality assumption."}
if(input$Question8 == 2){mensaje = "That is right! We can see that the points are not evenly spread, so the variance is not constant, and a solution could be to transform the response variable."}
if(input$Question8 == 3){mensaje = "It is not correct! We can see that the points are not evenly spread, so the variance is not constant."}
if(input$Question8 == 4){mensaje = "It is not correct! Plotting the residuals of the model against its fitted values tells us nothing about the normality assumption."}
if(input$Question8 == 2){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer9 <- renderUI({
actionID <- input$Question9
if(!is.null(actionID)){
if(input$Question9 == 1){mensaje = "It is not correct! We can see that there are many points outside the confidence bands, so there is no normality."}
if(input$Question9 == 2){mensaje = "That is not correct! This graph is not useful for drawing conclusions about the constant variance assumption."}
if(input$Question9 == 3){mensaje = "It is not correct! This graph is not useful for drawing conclusions about the constant variance assumption."}
if(input$Question9 == 4){mensaje = "It is correct! We can see that there are many points outside the confidence bands, so there is no normality."}
if(input$Question9 == 4){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer10 <- renderUI({
actionID <- input$Question10
if(!is.null(actionID)){
if(input$Question10 == 1){mensaje = "It is not correct! Indeed, the relationship is positive, but is it linear? It does not look like a straight line, does it?"}
if(input$Question10 == 2){mensaje = "That is not correct! Indeed, the relationship is non-linear, but is it negative? It does not seem to decrease, does it?"}
if(input$Question10 == 3){mensaje = "It is correct! We can see the positive relationship with a functional form that is not linear."}
if(input$Question10 == 4){mensaje = "It is not correct! Can you really not see any functional form?"}
if(input$Question10 == 3){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer11 <- renderUI({
actionID <- input$Question11
if(!is.null(actionID)){
if(input$Question11 == 1){mensaje = "It is not correct! Can you see any functional form?"}
if(input$Question11 == 2){mensaje = "That is not correct! Can you see any functional form?"}
if(input$Question11 == 3){mensaje = "It is not correct! Can you see any functional form?"}
if(input$Question11 == 4){mensaje = "It is correct! There is no association between the variables, neither positive, nor negative, nor with any functional form."}
if(input$Question11 == 4){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer12 <- renderUI({
actionID <- input$Question12
if(!is.null(actionID)){
if(input$Question12 == 1){mensaje = "It is not correct! 0.67668 is the value of the parameter associated with the traffic accidents variable and should not be interpreted as a percentage in this case."}
if(input$Question12 == 2){mensaje = "That is not correct! 0.67668 is the value of the parameter associated with the traffic accidents variable and should not be interpreted as a percentage in this case."}
if(input$Question12 == 3){mensaje = "That is right!"}
if(input$Question12 == 4){mensaje = "It is not correct! Think again"}
if(input$Question12 == 3){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer13 <- renderUI({
actionID <- input$Question13
if(!is.null(actionID)){
if(input$Question13 == 1){mensaje = "It is not correct! Remember that the problem of multicollinearity is greater the larger the value of the VIF."}
if(input$Question13 == 2){mensaje = "That is not correct! It is true that this variable has the greatest multicollinearity problem, but alternatives must be considered before eliminating it."}
if(input$Question13 == 3){mensaje = "It is correct! It is important to try to build composite indicators before eliminating variables, in order to keep all the information that both variables can provide."}
if(input$Question13 == 4){mensaje = "It is not correct! This statement does not make much sense, review step 3 to remember this concept."}
if(input$Question13 == 3){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer14 <- renderUI({
actionID <- input$Question14
if(!is.null(actionID)){
if(input$Question14 == 1){mensaje = "It is correct! With this small p value we can conclude that the parameter that represents the linear relationship (slope of the regression line) between the variables is significant."}
if(input$Question14 == 2){mensaje = "That is not correct! With this small p value we can conclude that the parameter that represents the linear relationship (slope of the regression line) between the variables is significant."}
if(input$Question14 == 3){mensaje = "It is not correct! Are you sure that the parameter is not significant?"}
if(input$Question14 == 4){mensaje = "It is not correct! First, look at the p value, are you sure that the parameter is not significant?"}
if(input$Question14 == 1){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer15 <- renderUI({
actionID <- input$Question15
if(!is.null(actionID)){
if(input$Question15 == 1){mensaje = "It is not correct! We cannot conclude about the goodness of fit from a single test statistic (F in this case)."}
if(input$Question15 == 2){mensaje = "That is not correct! The multiple R squared is not given directly in terms of percentage."}
if(input$Question15 == 3){mensaje = "It is correct! The adjusted R squared converted to percentage tells us the variability of the response variable that is explained by the independent variables."}
if(input$Question15 == 4){mensaje = "It is not correct! The presented p-value does not show the statistical validity of the model."}
if(input$Question15 == 3){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer16 <- renderUI({
actionID <- input$Question16
if(!is.null(actionID)){
if(input$Question16 == 1){mensaje = "It is not correct! There is no deterministic relationship with the response variable; in addition, the adjusted R squared and R squared coefficients do not exceed 80% of explained variability."}
if(input$Question16 == 2){mensaje = "That is not correct! Before affirming this, other types of relationships should be considered, not just linear ones."}
if(input$Question16 == 3){mensaje = "That is right! It seems that there are variables whose linear relationship with the response variable is not significant but that may have other types of relationships."}
if(input$Question16 == 4){mensaje = "It is not correct! Sports venues and extortions are important variables that should not be excluded"}
if(input$Question16 == 3){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
})
library(shiny)
library(shinythemes)
library(DT)
library(ggplot2)
library(car)
library(nortest)
library(tseries)
library(RcmdrMisc)
library(lmtest)
datos <-read.csv("www/dataset.csv",dec = ",")
shinyServer(function(input, output) {
output$RawData <- DT::renderDataTable(
DT::datatable({
datos
},
options = list(lengthMenu=list(c(5,15,20),c('5','15','20')),pageLength=10,
initComplete = JS(
"function(settings, json) {",
"$(this.api().table().header()).css({'background-color': 'moccasin', 'color': '1c1b1b'});",
"}"),
columnDefs=list(list(className='dt-center',targets="_all"))
),
filter = "top",
selection = 'multiple',
style = 'bootstrap',
class = 'cell-border stripe',
rownames = FALSE,
colnames = c("Subregion","Municipality","Projected population","Thefts","Traffic accidents","Homicides","School deserters","Sports venues","Extortions","Personal injuries")
))
variabletrans <- reactive({
attach(datos)
if(input$Transformacion == 0){Lesiones.t=log(LesionesPer)}else{Lesiones.t=LesionesPer^as.double(input$Transformacion)}
if(input$Transformacion ==1){titulo="Personal Injuries"}else{titulo="Transformed Personal Injuries"}
Resultado <- cbind(Lesiones.t,titulo)
})
variabletrans2 <- reactive({
attach(datos)
if(input$Transformacion == 0){Lesiones.t=log(LesionesPer)}else{Lesiones.t=LesionesPer^as.double(input$Transformacion)}
if(input$Transformacion ==1){titulo="Personal Injuries"}else{titulo="Transformed Personal Injuries"}
Lesiones.t
})
output$Histograma <- renderPlot({
ggplot(NULL,aes(as.double(variabletrans()[,1])))+geom_histogram(bins=nclass.Sturges(as.double(variabletrans()[,1])),color="white",
fill="seagreen1",aes(y=..density..),lwd=0.8)+geom_density(color="seagreen4",
alpha=0.3,fill="seagreen4",lty=1)+
labs(title = paste(variabletrans()[1,2], "\n histogram"),x=variabletrans()[1,2],y="Density")+
theme(plot.title = element_text(color="navy", size=15, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=13, face="bold"),
axis.title.y = element_text(color="navy", size=13, face="bold"))
})
output$Boxplot <- renderPlot({
ggplot(NULL,aes(x=0,y=as.double(variabletrans()[,1])))+geom_boxplot(color="black",fill="skyblue",alpha=0.5)+ stat_summary(fun.y=mean, colour="darkred", geom="point",shape=18, size=3)+
labs(title = paste(variabletrans()[1,2], "\n boxplot"),x="",y=variabletrans()[1,2])+
theme(plot.title = element_text(color="navy", size=15, face="bold.italic",hjust=0.5),
axis.title.x = element_text(),
axis.text.x = element_blank(),
axis.ticks.x = element_blank(),
axis.title.y = element_text(color="navy", size=13, face="bold"))
})
output$qqPlot <- renderPlot({
par(font.main=4,font.lab=2,col.main="navy",col.lab="navy",cex.lab=1.1)
qqPlot(as.double(variabletrans()[,1]),grid=F,xlab="",ylab="")
u <-par("usr")
rect(u[1], u[3], u[2], u[4], col="#EBE9E9", border=TRUE)
grid(NULL,NULL,col="white",lty=1)
par(new=TRUE)
qqPlot(as.double(variabletrans()[,1]),col="coral",pch=16,id=T,lwd=1.9,col.lines = "black",grid = F, main = paste(variabletrans()[1,2], "\n Q-Q plot"),xlab="Normal quantiles",ylab=variabletrans()[1,2])
box(col="white")
})
testanalitico <- reactive({
if(input$PruebaAnalitica == 1){
prueba <- shapiro.test(as.double(variabletrans()[,1]))
}
else
{
if(input$PruebaAnalitica ==2){
prueba <- ad.test(as.double(variabletrans()[,1]))
}
else
{
if(input$PruebaAnalitica == 3){
prueba <- cvm.test(as.double(variabletrans()[,1]))
}
else
{
if(input$PruebaAnalitica == 4){
prueba <- lillie.test(as.double(variabletrans()[,1]))
}
else
{
prueba <- jarque.bera.test(as.double(variabletrans()[,1]))
}
}
}
}
prueba$data.name <- variabletrans()[1,2]
prueba
})
output$Prueba <- renderPrint({
testanalitico()
})
output$Conclusion1 <- renderText({
if(testanalitico()$p.value < 0.05){mensaje="We reject the null hypothesis; the variable is not normally distributed"}else{mensaje="We fail to reject the null hypothesis; the variable can be considered normally distributed"}
mensaje
})
output$ReadMore <- renderUI({
if(input$PruebaAnalitica == 1){
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Shapiro%E2%80%93Wilk_test", icon("wikipedia-w"),target="_blank"),style="color:black;text-align:center")
}
else
{
if(input$PruebaAnalitica == 2){
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test", icon("wikipedia-w"),target="_blank"),style="color:black;text-align:center")
}
else
{
if(input$PruebaAnalitica == 3){
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93von_Mises_criterion", icon("wikipedia-w"),target="_blank"),style="color:black;text-align:center")
}
else
{
if(input$PruebaAnalitica ==4){
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test", icon("wikipedia-w"),target="_blank"),style="color:black;text-align:center")
}
else
{
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Jarque%E2%80%93Bera_test", icon("wikipedia-w"),target="_blank"),style="color:black;text-align:center")
}
}
}
}
More
})
Trans1 <- reactive({
attach(datos)
if(input$Transformacion1 == 0){ProjectedPopulation.t=log(ProjectedPopulation)}else{ProjectedPopulation.t=ProjectedPopulation^as.double(input$Transformacion1)}
if(input$Transformacion1 ==1){titulo="Projected Population"}else{
if(input$Transformacion1==0){titulo="Projected Population Logarithm"}else{titulo="Transformed Projected Population"}}
Resultado1 <- cbind(ProjectedPopulation.t,titulo)
})
Trans2 <- reactive({
attach(datos)
if(input$Transformacion2 == 0){Thefts.t=log(Thefts)}else{Thefts.t=Thefts^as.double(input$Transformacion2)}
if(input$Transformacion2 ==1){titulo="Thefts"}else{
if(input$Transformacion2==0){titulo="Thefts Logarithm"}else{titulo="Transformed Thefts"}}
Resultado2 <- cbind(Thefts.t,titulo)
})
Trans3 <- reactive({
attach(datos)
if(input$Transformacion3 == 0){TrafAccid.t=log(TrafAccid)}else{TrafAccid.t=TrafAccid^as.double(input$Transformacion3)}
if(input$Transformacion3 ==1){titulo="Traffic accidents"}else{
if(input$Transformacion3==0){titulo="Traffic accidents Logarithm"}else{titulo="Transformed Traffic accidents"}}
Resultado3 <- cbind(TrafAccid.t,titulo)
})
Trans4 <- reactive({
attach(datos)
if(input$Transformacion4 == 0){Homicides.t=log(Homicides)}else{Homicides.t=Homicides^as.double(input$Transformacion4)}
if(input$Transformacion4 ==1){titulo="Homicides"}else{
if(input$Transformacion4 ==0){titulo="Homicides Logarithm"}else{titulo="Transformed Homicides"}}
Resultado4 <- cbind(Homicides.t,titulo)
})
Trans5 <- reactive({
attach(datos)
if(input$Transformacion5 == 0){SchoolDes.t=log(SchoolDes)}else{SchoolDes.t=SchoolDes^as.double(input$Transformacion5)}
if(input$Transformacion5 ==1){titulo="School deserters"}else{
if(input$Transformacion5 ==0){titulo="School deserters Logarithm"}else{titulo="Transformed School deserters"}}
Resultado5 <- cbind(SchoolDes.t,titulo)
})
Trans6 <- reactive({
attach(datos)
if(input$Transformacion6 == 0){SportsScenari.t=log(SportsScenari)}else{SportsScenari.t=SportsScenari^as.double(input$Transformacion6)}
if(input$Transformacion6 ==1){titulo="Sports venues"}else{
if(input$Transformacion6 ==0){titulo="Sports venues Logarithm"}else{titulo="Transformed Sports venues"}}
Resultado6 <- cbind(SportsScenari.t,titulo)
})
Trans7 <- reactive({
attach(datos)
if(input$Transformacion7 == 0){Extortions.t=log(Extortions)}else{Extortions.t=Extortions^as.double(input$Transformacion7)}
if(input$Transformacion7 ==1){titulo="Extortions"}else{
if(input$Transformacion7 ==0){titulo="Extortions Logarithm"}else{titulo="Transformed Extortions"}}
Resultado7 <- cbind(Extortions.t,titulo)
})
output$Dispersion1 <- renderPlot({
ggplot(NULL,aes(x=as.double(Trans1()[,1]),y=as.double(variabletrans()[,1])))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="seagreen3")+
labs(title = paste("\n",variabletrans()[1,2], "vs", Trans1()[1,2], "\n"),x=paste(Trans1()[1,2]),y=paste(variabletrans()[1,2],"\n"))+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
output$correlacion1 <- renderText({
corr <- cor(as.double(Trans1()[,1]),as.double(variabletrans()[,1]))
mensaje <- paste("Cor = ", corr)
mensaje
})
output$Dispersion2 <- renderPlot({
ggplot(NULL,aes(x=as.double(Trans2()[,1]),y=as.double(variabletrans()[,1])))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="peachpuff")+
labs(title = paste("\n",variabletrans()[1,2], "vs", Trans2()[1,2], "\n"),x=paste(Trans2()[1,2]),y=paste(variabletrans()[1,2],"\n"))+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
output$correlacion2 <- renderText({
corr <- cor(as.double(Trans2()[,1]),as.double(variabletrans()[,1]))
mensaje <- paste("Cor = ", corr)
mensaje
})
output$Dispersion3 <- renderPlot({
ggplot(NULL,aes(x=as.double(Trans3()[,1]),y=as.double(variabletrans()[,1])))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="cyan")+
labs(title = paste("\n",variabletrans()[1,2], "vs", Trans3()[1,2], "\n"),x=paste(Trans3()[1,2]),y=paste(variabletrans()[1,2],"\n"))+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
output$correlacion3 <- renderText({
corr <- cor(as.double(Trans3()[,1]),as.double(variabletrans()[,1]))
mensaje <- paste("Cor = ", corr)
mensaje
})
output$Dispersion4 <- renderPlot({
ggplot(NULL,aes(x=as.double(Trans4()[,1]),y=as.double(variabletrans()[,1])))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="violet")+
labs(title = paste("\n",variabletrans()[1,2], "vs", Trans4()[1,2], "\n"),x=paste(Trans4()[1,2]),y=paste(variabletrans()[1,2],"\n"))+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
output$correlacion4 <- renderText({
corr <- cor(as.double(Trans4()[,1]),as.double(variabletrans()[,1]))
mensaje <- paste("Cor = ", corr)
mensaje
})
output$Dispersion5 <- renderPlot({
ggplot(NULL,aes(x=as.double(Trans5()[,1]),y=as.double(variabletrans()[,1])))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="sandybrown")+
labs(title = paste("\n",variabletrans()[1,2], "vs", Trans5()[1,2], "\n"),x=paste(Trans5()[1,2]),y=paste(variabletrans()[1,2],"\n"))+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
output$correlacion5 <- renderText({
corr <- cor(as.double(Trans5()[,1]),as.double(variabletrans()[,1]))
mensaje <- paste("Cor = ", corr)
mensaje
})
output$Dispersion6 <- renderPlot({
ggplot(NULL,aes(x=as.double(Trans6()[,1]),y=as.double(variabletrans()[,1])))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="hotpink")+
labs(title = paste("\n",variabletrans()[1,2], "vs", Trans6()[1,2], "\n"),x=paste(Trans6()[1,2]),y=paste(variabletrans()[1,2],"\n"))+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
output$correlacion6 <- renderText({
corr <- cor(as.double(Trans6()[,1]),as.double(variabletrans()[,1]))
mensaje <- paste("Cor = ", corr)
mensaje
})
output$Dispersion7 <- renderPlot({
ggplot(NULL,aes(x=as.double(Trans7()[,1]),y=as.double(variabletrans()[,1])))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="yellow")+
labs(title = paste("\n",variabletrans()[1,2], "vs", Trans7()[1,2], "\n"),x=paste(Trans7()[1,2]),y=paste(variabletrans()[1,2],"\n"))+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
output$correlacion7 <- renderText({
corr <- cor(as.double(Trans7()[,1]),as.double(variabletrans()[,1]))
mensaje <- paste("Cor = ", corr)
mensaje
})
Modelcuanti <- reactive({
cuantis <- datos[,c(3:10)]
xnam <- paste0(colnames(cuantis[as.double(input$variablescuantis)]))
if(length(xnam)==0)
{
fmla <- as.formula(paste(deparse(substitute(variabletrans2())), "~ 1"))
nombrefmla <- paste(variabletrans()[1,2], "~ 1")
}
else
{
fmla <- as.formula(paste(deparse(substitute(variabletrans2())), "~",paste(xnam,collapse = " + ")))
nombrefmla <- paste(variabletrans()[1,2], "~",paste(xnam,collapse = " + "))
}
Model1 <- lm(fmla)
Model1[["call"]] <- nombrefmla
Model1
})
output$Model1 <- renderPrint({
summary(Modelcuanti())
})
output$VIF <- renderPrint({
if(length(Modelcuanti()$coefficients)<=2){
mensaje="The model must have at least two explanatory variables to execute this function"
mensaje
}
else
{
vif(Modelcuanti())
}
})
output$Alarm <- renderText({
if(length(Modelcuanti()$coefficients)<=2){
alerta = "The model does not have enough independent variables"
alerta
}
else
{
listadevifs <- vif(Modelcuanti())
mensaje="There are no multicollinearity problems"
nombres <- vector(mode = "numeric",length = 7)
for(i in 1:length(listadevifs)){
if(listadevifs[[i]]>5){
mensaje="There are multicollinearity problems in the following variables:"
nombres[i] = i
}
}
variablesconproblemas <- paste(names(listadevifs[nombres]),collapse = ", ")
if(nombres[1]==0 & nombres[2]==0 & nombres[3]==0 & nombres[4]==0 & nombres[5]==0 & nombres[6]==0 & nombres[7]==0){
mensaje
}
else
{
paste0(mensaje," ",variablesconproblemas,". You should keep only one of these.")
}
}
})
output$Determination <- renderText({
valoresp <- summary(Modelcuanti())$coefficients[,4]
mensaje1 = "All the model parameters are significant for a confidence level of 95%"
nombresbetas <- vector(mode = "numeric",length = 6)
for(i in 1:length(valoresp)){
if(valoresp[[i]]>0.05){
mensaje1="There are parameters which are not significant at a 95% confidence level, that is, there are relationships between variables that are not important; these parameters correspond to:"
nombresbetas[i]=i
}
}
betasnosignificativos <- paste(names(valoresp[nombresbetas]),collapse = ", ")
paste(mensaje1, betasnosignificativos)
})
output$AdjustedDetermination <- renderText({
Rajustado <- summary(Modelcuanti())$adj.r.squared
paste0("With the current model you got an adjusted R squared of ",Rajustado,". But remember, if there are multicollinearity problems this is not a reliable result, because the estimation can be quite imprecise, and the variances and confidence intervals will be too wide")
})
Modelofinalfinal <- reactive({
todasvariables <- cbind(datos[,c(3:9)],as.double(Trans1()[,1]),as.double(Trans2()[,1]),as.double(Trans3()[,1]),as.double(Trans4()[,1]),as.double(Trans5()[,1]),as.double(Trans6()[,1]),as.double(Trans7()[,1]))
todasvariables <- todasvariables[, c(matrix(1:ncol(todasvariables), nrow = 2, byrow = T))]
nombrestodas <- c("Projected population","Transformed Projected population","Thefts","Transformed Thefts","Traffic accidents","Transformed Traffic accidents","Homicides","Transformed Homicides","School deserters","Transformed School deserters","Sports venues", "Transformed Sports venues","Extortions", "Transformed Extortions")
variables <- c(input$variablesincluidas,input$incluirtrans)
sort(variables,decreasing = F)
xnames <- paste0(colnames(todasvariables[as.double(variables)]))
if(length(xnames)==0)
{
fmla2 <- as.formula(paste(deparse(substitute(variabletrans2())), "~ 1"))
nombrefmla2 <- paste(variabletrans()[1,2], "~ 1")
}
else
{
fmla2 <- as.formula(paste(deparse(substitute(variabletrans2())), "~",paste(xnames,collapse = " + ")))
nombrefmla2 <- paste(variabletrans()[1,2], "~",paste(nombrestodas[as.double(variables)],collapse = " + "))
}
Model2 <- lm(fmla2)
Model2[["call"]] <- nombrefmla2
names(Model2$coefficients) <- c("Intercept",nombrestodas[as.double(variables)])
Model2
})
output$finalmodel <- renderPrint({
summary(Modelofinalfinal())
})
output$Significancy2 <- renderUI({
valoresp2 <- summary(Modelofinalfinal())$coefficients[,4]
mensaje2 = "All the model parameters are significant for a confidence level of 95%"
nombresbetas2 <- vector(mode = "numeric",length = 15)
for(i in 1:length(valoresp2)){
if(valoresp2[[i]]>0.05){
mensaje2="There are parameters which are not significant at a 95% confidence level, that is, there are relationships between variables that are not important; these parameters correspond to:"
nombresbetas2[i]=i
}
}
betasnosignificativos2 <- paste(names(valoresp2[nombresbetas2]),collapse = ", ")
p(paste(mensaje2, betasnosignificativos2),style="padding:25px;background-color:lavender;border-left:8px solid blue;border-top: 1px solid black;border-right:1px solid black;border-bottom: 1px solid black;color:black;text-align:center")
})
output$Anothermessage <- renderUI({
p("Those variables whose betas are not significant should be removed from the model; try taking them out one by one, prioritizing those whose betas have the highest p-value (Pr(>|t|))",style="padding:25px;background-color:papayawhip;border-left:8px solid coral;border-top: 1px solid black;border-right:1px solid black;border-bottom: 1px solid black;color:black;text-align:center")
})
output$FinalAlarma <- renderUI({
multicoli <- summary(Modelofinalfinal())$coefficients[,4]
contador = 0
for(i in 1:length(names(multicoli))){
if(names(multicoli[i])=="Projected population"| names(multicoli[i])=="School deserters")
{
contador = contador + 1
}
}
if(contador >= 2)
{
mensaje = "There are multicollinearity problems, remember the analysis of the previous section"
}
else
{
mensaje = "It seems that there are no problems of multicollinearity, you are doing great so far"
}
p(mensaje, style="padding:25px;background-color:papayawhip;border-left:8px solid coral;border-top: 1px solid black;border-right:1px solid black;border-bottom: 1px solid black;color:black;text-align:center" )
})
selecciondevariables <- reactive({
todasvariables2 <- cbind(datos[,c(3:9)],as.double(Trans1()[,1]),as.double(Trans2()[,1]),as.double(Trans3()[,1]),as.double(Trans4()[,1]),as.double(Trans5()[,1]),as.double(Trans6()[,1]),as.double(Trans7()[,1]))
todasvariables2 <- todasvariables2[, c(matrix(1:ncol(todasvariables2), nrow = 2, byrow = T))]
nombrestodas2 <- c("Projected population","Transformed Projected population","Thefts","Transformed Thefts","Traffic accidents","Transformed Traffic accidents","Homicides","Transformed Homicides","School deserters","Transformed School deserters","Sports venues", "Transformed Sports venues","Extortions", "Transformed Extortions")
variables <- c(input$variablesincluidas,input$incluirtrans)
variables <- sort(variables,decreasing = F)
xnames2 <- paste0(colnames(todasvariables2[as.double(variables)]))
if(length(xnames2)==0)
{
fmla3 <- as.formula(paste(deparse(substitute(variabletrans2())), "~ 1"))
}
else
{
fmla3 <- as.formula(paste(deparse(substitute(variabletrans2())), "~",paste(xnames2,collapse = " + ")))
}
Model3 <- lm(fmla3)
Model3
Modelbase <- stepwise(Model3,direction = input$direccion,criterion = input$criterio)
Modelbase
})
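The reactive above assembles an lm formula from the user's ticked variables and runs stepwise selection. A standalone sketch of the same idea, using hypothetical data and base R's step() in place of the stepwise() helper used above:

```r
# Standalone sketch (hypothetical data) of building a formula from selected
# predictor names and running stepwise selection, as selecciondevariables does.
set.seed(1)
df <- data.frame(y = rnorm(30), x1 = rnorm(30), x2 = rnorm(30), x3 = rnorm(30))
xnames <- c("x1", "x3")  # stands in for the names the user ticked
fmla <- if (length(xnames) == 0) {
  as.formula("y ~ 1")    # intercept-only model when nothing is selected
} else {
  as.formula(paste("y ~", paste(xnames, collapse = " + ")))
}
base_model <- lm(fmla, data = df)
# step() is the base-R counterpart of the stepwise() call above
best_model <- step(base_model, direction = "backward", trace = 0)
coef(best_model)
```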
Modelazofinal <- reactive({
# formulit() (defined below) rebuilds the stepwise formula, re-adding each
# base variable whose transformed version survived the selection
formulita <- formulit()
Model4 <- lm(formulita)
Model4[["call"]] <- selecciondevariables()$call
Model4[["call"]] <- gsub("[(]","",Model4[["call"]])
Model4[["call"]] <- gsub("[)]","",Model4[["call"]])
Model4[["call"]] <- gsub("variabletrans2",variabletrans()[1,2],Model4[["call"]])
Model4[["call"]] <- paste0(Model4[["call"]][1],"(formula = ",Model4[["call"]][2],")")
names(Model4$coefficients) <- gsub("[.]","",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("[,]","",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("[(]","",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("[)]","",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("[[]","",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("[]]","",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("asdoubleTrans1 1","Transformed Projected Population",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("asdoubleTrans2 1","Transformed Thefts",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("asdoubleTrans3 1","Transformed TrafAccid",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("asdoubleTrans4 1","Transformed Homicides",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("asdoubleTrans5 1","Transformed SchoolDes",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("asdoubleTrans6 1","Transformed SportsScenari",names(Model4$coefficients))
names(Model4$coefficients) <- gsub("asdoubleTrans7 1","Transformed Extortions",names(Model4$coefficients))
Model4
})
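The repeated pattern above relies on update.formula() to append the base variable whenever only its transform survived selection. A self-contained illustration with hypothetical names:

```r
# Illustration of the update.formula() pattern used above: if only the
# transformed column survived selection, append the base variable back.
f <- y ~ x1 + log_x2                 # 'log_x2' stands in for a transformed column
if (!("x2" %in% all.vars(f))) {
  f <- update.formula(f, ~ . + x2)   # dot keeps the existing right-hand side
}
f
```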
output$ModeloBack <- renderPrint({
invisible(capture.output(modelo <- summary(Modelazofinal())))
modelo
})
output$Determinacionfinal <- renderUI({
coeficiente = summary(Modelazofinal())$adj.r.squared
p(paste("With the final model you built, you get an adjusted square R of:",coeficiente),style="padding:25px;background-color:papayawhip;border-left:8px solid coral;border-top: 1px solid black;border-right:1px solid black;border-bottom: 1px solid black;color:black;text-align:center" )
})
output$Histograma2 <- renderPlot({
ggplot(NULL,aes(Modelazofinal()$residuals))+geom_histogram(bins=nclass.Sturges(Modelazofinal()$residuals),color="white",
fill="seagreen1",aes(y=after_stat(density)),lwd=0.8)+geom_density(color="seagreen4",
alpha=0.3,fill="seagreen4",lty=1)+
labs(title = "Residuals \n histogram",x="Residuals",y="Density")+
theme(plot.title = element_text(color="navy", size=15, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=13, face="bold"),
axis.title.y = element_text(color="navy", size=13, face="bold"))
})
output$Boxplot2 <- renderPlot({
ggplot(NULL,aes(x=0,y=Modelazofinal()$residuals))+geom_boxplot(color="black",fill="skyblue",alpha=0.5)+ stat_summary(fun=mean, colour="darkred", geom="point",shape=18, size=3)+
labs(title = " Residuals \n boxplot",x="",y="Residuals")+
theme(plot.title = element_text(color="navy", size=15, face="bold.italic",hjust=0.5),
axis.title.x = element_text(),
axis.text.x = element_blank(),
axis.ticks.x = element_blank(),
axis.title.y = element_text(color="navy", size=13, face="bold"))
})
output$qqPlot2 <- renderPlot({
par(font.main=4,font.lab=2,col.main="navy",col.lab="navy",cex.lab=1.1)
qqPlot(Modelazofinal()$residuals,grid=F,xlab="",ylab="")
u <-par("usr")
rect(u[1], u[3], u[2], u[4], col="#EBE9E9", border=TRUE)
grid(NULL,NULL,col="white",lty=1)
par(new=TRUE)
qqPlot(Modelazofinal()$residuals,col="coral",pch=16,id=T,lwd=1.9,col.lines = "black",grid = F, main = "Residuals \n Q-Q plot",xlab="Normal quantiles",ylab="Residuals")
box(col="white")
})
testanalitico2 <- reactive({
if(input$PruebaAnalitica2 == 1){
prueba <- shapiro.test(Modelazofinal()$residuals)
}
else
{
if(input$PruebaAnalitica2 ==2){
prueba <- ad.test(Modelazofinal()$residuals)
}
else
{
if(input$PruebaAnalitica2 == 3){
prueba <- cvm.test(Modelazofinal()$residuals)
}
else
{
if(input$PruebaAnalitica2 == 4){
prueba <- lillie.test(Modelazofinal()$residuals)
}
else
{
prueba <- jarque.bera.test(Modelazofinal()$residuals)
}
}
}
}
prueba$data.name <- "Model Residuals"
prueba
})
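testanalitico2 dispatches to one of five normality tests on the model residuals. On a toy model (cars is a built-in dataset, standing in for Modelazofinal()) the five p-values can be compared side by side; ad.test/cvm.test/lillie.test come from nortest and jarque.bera.test from tseries:

```r
library(nortest)   # ad.test(), cvm.test(), lillie.test()
library(tseries)   # jarque.bera.test()
fit <- lm(dist ~ speed, data = cars)   # toy model in place of Modelazofinal()
res <- residuals(fit)
# p-values of the five tests dispatched above; H0: residuals are normal
c(shapiro = shapiro.test(res)$p.value,
  ad      = ad.test(res)$p.value,
  cvm     = cvm.test(res)$p.value,
  lillie  = lillie.test(res)$p.value,
  jb      = jarque.bera.test(res)$p.value)
```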
output$Prueba2 <- renderPrint({
testanalitico2()
})
output$Conclusion12 <- renderText({
if(testanalitico2()$p.value < 0.05){mensaje="We reject the null hypothesis; the residuals are not normally distributed"}else{mensaje="We fail to reject the null hypothesis; the residuals can be considered normally distributed"}
mensaje
})
output$ReadMore2 <- renderUI({
if(input$PruebaAnalitica2 == 1){
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Shapiro%E2%80%93Wilk_test", icon("wikipedia-w"),target="_blank"),style="color:black")
}
else
{
if(input$PruebaAnalitica2 == 2){
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test", icon("wikipedia-w"),target="_blank"),style="color:black")
}
else
{
if(input$PruebaAnalitica2 == 3){
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93von_Mises_criterion", icon("wikipedia-w"),target="_blank"),style="color:black")
}
else
{
if(input$PruebaAnalitica2 ==4){
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test", icon("wikipedia-w"),target="_blank"),style="color:black")
}
else
{
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Jarque%E2%80%93Bera_test", icon("wikipedia-w"),target="_blank"),style="color:black")
}
}
}
}
More
})
output$FittedVsResiduals <- renderPlot({
ggplot(NULL,aes(x=Modelazofinal()$fitted.values,y=Modelazofinal()$residuals))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="violet")+
labs(title = "Model fitted values vs residuals",x="Model fitted values",y="Model residuals")+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
formulit <- reactive({
formulita <- selecciondevariables()$terms
if(any(grepl("Trans1", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("ProjectedPopulation", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + ProjectedPopulation)
}
}
if(any(grepl("Trans2", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("Thefts", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + Thefts)
}
}
if(any(grepl("Trans3", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("TrafAccid", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + TrafAccid)
}
}
if(any(grepl("Trans4", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("Homicides", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + Homicides)
}
}
if(any(grepl("Trans5", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("SchoolDes", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + SchoolDes)
}
}
if(any(grepl("Trans6", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("SportsScenari", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + SportsScenari)
}
}
if(any(grepl("Trans7", names(selecciondevariables()$coefficients))==TRUE)){
if(any(grepl("Extortions", names(selecciondevariables()$coefficients))==TRUE)){
}else{
formulita <- update.formula(formulita,~ . + Extortions)
}
}
formulita
})
testanalitico3 <- reactive({
if(input$PruebaAnalitica3 == 1){
modelo <- Modelazofinal()
prueba <- bptest(modelo)
prueba$data.name <- "Model Residuals"
}
else
{
modelo <- lm(formulit())
prueba <- ncvTest(modelo)
}
prueba
})
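The reactive above returns either bptest() (lmtest) or ncvTest() (car). A toy sketch, again using the built-in cars data instead of the app's model, which also shows why the conclusion code below must read $p for ncvTest rather than $p.value:

```r
library(lmtest)  # bptest()
library(car)     # ncvTest()
fit <- lm(dist ~ speed, data = cars)  # toy model in place of Modelazofinal()
bptest(fit)      # Breusch-Pagan; H0: residuals are homoscedastic
ncvTest(fit)$p   # score test; its p-value field is $p, not $p.value
```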
output$Prueba3 <- renderPrint({
testanalitico3()
})
output$Conclusion13 <- renderText({
if(input$PruebaAnalitica3 == 1){
if(testanalitico3()$p.value < 0.05){mensaje="We reject the null hypothesis; the residuals are not homoscedastic"}else{mensaje="We fail to reject the null hypothesis; the residuals can be considered homoscedastic"}
mensaje}
else
{
if(testanalitico3()$p < 0.05){mensaje="We reject the null hypothesis; the residuals are not homoscedastic"}else{mensaje="We fail to reject the null hypothesis; the residuals can be considered homoscedastic"}
mensaje}
})
output$ReadMore3 <- renderUI({
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Breusch%E2%80%93Pagan_test", icon("wikipedia-w"),target="_blank"),style="color:black")
More
})
output$ACF <- renderPlot({
par(font.main=4,font.lab=2,col.main="navy",col.lab="navy",cex.lab=1.1)
acf(Modelazofinal()$residuals,ylim=c(-1,1),main="")
u <-par("usr")
rect(u[1], u[3], u[2], u[4], col="#EBE9E9", border=TRUE)
grid(NULL,NULL,col="white",lty=1)
par(new=TRUE)
acf(Modelazofinal()$residuals,ylim=c(-1,1),main="ACF of the model residuals")
box(col="white")
})
output$PACF <- renderPlot({
par(font.main=4,font.lab=2,col.main="navy",col.lab="navy",cex.lab=1.1)
pacf(Modelazofinal()$residuals,ylim=c(-1,1),main="")
u <-par("usr")
rect(u[1], u[3], u[2], u[4], col="#EBE9E9", border=TRUE)
grid(NULL,NULL,col="white",lty=1)
par(new=TRUE)
pacf(Modelazofinal()$residuals,ylim=c(-1,1),main="PACF of the model residuals")
box(col="white")
})
output$ResVsIndex <- renderPlot({
ggplot(NULL,aes(x=seq_along(Modelazofinal()$residuals),y=Modelazofinal()$residuals))+
geom_point(shape=18,color="blue",size=3)+
geom_smooth(method = lm,linetype="dashed",color="black",fill="violet")+
labs(title = "Residuals vs Index",x="Index",y="Model residuals")+
theme(plot.title = element_text(color="navy", size=18, face="bold.italic",hjust=0.5),
axis.title.x = element_text(color="navy", size=15, face="bold"),
axis.text.x = element_text(size=13),
axis.title.y = element_text(color="navy", size=15, face="bold"),
axis.text.y = element_text(size = 13))
})
testanalitico4 <- reactive({
if(input$PruebaAnalitica4 == 1){
prueba <- dwtest(Modelazofinal(),alternative = "two.sided")
}
else
{
prueba <- bgtest(Modelazofinal())
}
prueba$data.name <- "Model Residuals"
prueba
})
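The autocorrelation tests used above come from lmtest; a minimal sketch on a toy model (cars data in place of the app's Modelazofinal()):

```r
library(lmtest)  # dwtest(), bgtest()
fit <- lm(dist ~ speed, data = cars)    # toy model in place of Modelazofinal()
dwtest(fit, alternative = "two.sided")  # Durbin-Watson; H0: no autocorrelation
bgtest(fit)                             # Breusch-Godfrey alternative
```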
output$Prueba4 <- renderPrint({
testanalitico4()
})
output$Conclusion14 <- renderText({
if(testanalitico4()$p.value < 0.05){mensaje="We reject the null hypothesis; the residuals are autocorrelated"}else{mensaje="We fail to reject the null hypothesis; the residuals are not autocorrelated"}
mensaje
})
output$ReadMore4 <- renderUI({
if(input$PruebaAnalitica4 == 1){
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Durbin%E2%80%93Watson_statistic", icon("wikipedia-w"),target="_blank"),style="color:black")
}
else
{
More <- p("Read more about this test here → ", a(href="https://en.wikipedia.org/wiki/Breusch%E2%80%93Godfrey_test", icon("wikipedia-w"),target="_blank"),style="color:black")
}
More
})
output$Answer1 <- renderUI({
actionID <- input$Question1
if(!is.null(actionID)){
if(input$Question1 == 1){mensaje = "It is right! This equation represents a straight line, a function of two variables that are related in a perfect or deterministic way."}
if(input$Question1 == 2){mensaje = "It is not correct! Note that in this equation there is a term that represents random factors (e). These random factors make the relationship between X and Y not perfect or deterministic."}
if(input$Question1 == 3){mensaje = "It is not correct! This equation represents an adjustment; note that the equation includes the adjusted response variable and not the original."}
if(input$Question1 == 4){mensaje = "This relationship includes an adjusted variable and a constant factor, therefore it is not even a relationship between two variables."}
if(input$Question1 == 1){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer2 <- renderUI({
actionID <- input$Question2
if(!is.null(actionID)){
if(input$Question2 == 1){mensaje = "It is not correct! This is an important factor to make inferences in these models, but is it the only one?"}
if(input$Question2 == 2){mensaje = "It is not correct! This is an important factor to make inferences in these models, but is it the only one?"}
if(input$Question2 == 3){mensaje = "It is correct! We need the variability of the response variable to be explained by the independent variables (option a) and also that the statistical assumptions be fulfilled (option b)."}
if(input$Question2 == 4){mensaje = "It is not correct, just try again!"}
if(input$Question2 == 3){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer3 <- renderUI({
actionID <- input$Question3
if(!is.null(actionID)){
if(input$Question3 == 1){mensaje = "It is right! For a null hypothesis stating that the residuals of the model are not autocorrelated, the Durbin-Watson independence test shows that there is at least one lag with a significant autocorrelation coefficient."}
if(input$Question3 == 2){mensaje = "It is not correct! We cannot know this using the Durbin-Watson test, whose purpose is to test the independence of the residuals."}
if(input$Question3 == 3){mensaje = "It is not correct! We cannot know whether every autocorrelation coefficient is significant; to check this we should use graphical tools like the ACF."}
if(input$Question3 == 4){mensaje = "It is not correct! In fact we can draw a conclusion about the null hypothesis that the residuals of the model are not autocorrelated, and we can do it using the p-value."}
if(input$Question3 == 1){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer4 <- renderUI({
actionID <- input$Question4
if(!is.null(actionID)){
if(input$Question4 == 1){mensaje = "It is not right! Remember that the square root of the R squared is the correlation coefficient, and remember that the R squared is calculated in the following way: "}
if(input$Question4 == 2){mensaje = "It is correct! You made the right calculations. Was it just luck? Then check the concepts of R squared and the correlation coefficient in the Glossary section."}
if(input$Question4 == 3){mensaje = "It is not correct! Remember that the square root of the R squared is the correlation coefficient, and remember that the R squared is calculated in the following way: "}
if(input$Question4 == 4){mensaje = "It is not correct! Remember that the square root of the R squared is the correlation coefficient, and remember that the R squared is calculated in the following way: "}
if(input$Question4 == 2){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,withMathJax('\\( R^2 = \\frac{SSR}{SSR+SSE} \\)'),icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer5 <- renderUI({
actionID <- input$Question5
if(!is.null(actionID)){
if(input$Question5 == 1){mensaje = "It is not right! This mathematical expression tells us that the sums of the real values and the adjusted values are equal, so at a general level the adjustment could be considered perfect, but we must understand the point-to-point adjustment in another way."}
if(input$Question5 == 2){mensaje = "It is not correct! If the point-to-point adjustment were perfect, there would be no terms associated with randomness and uncontrollable factors."}
if(input$Question5 == 3){mensaje = "It is not correct! If the point-to-point adjustment were perfect, there would be no terms associated with randomness and uncontrollable factors."}
if(input$Question5 == 4){mensaje = "It is correct! This mathematical expression tells us that the sums of the real values and the adjusted values are equal, so at a general level the adjustment could be considered perfect, even though at each point there is some error."}
if(input$Question5 == 4){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer6 <- renderUI({
actionID <- input$Question6
if(!is.null(actionID)){
if(input$Question6 == 1){mensaje = "It is not correct! Actually, the R squared increases as more variables are added to the model."}
if(input$Question6 == 2){mensaje = "That is right! That is exactly the answer, but was it just luck? then review the glossary section to better understand the concepts of R squared and adjusted R squared."}
if(input$Question6 == 3){mensaje = "It is not correct! This does not make much sense, review the glossary section to better understand the concepts of R squared and adjusted R squared."}
if(input$Question6 == 4){mensaje = "It is not correct! This does not make much sense, review the glossary section to better understand the concepts of R squared and adjusted R squared."}
if(input$Question6 == 2){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer7 <- renderUI({
actionID <- input$Question7
if(!is.null(actionID)){
if(input$Question7 == 1){mensaje = "It is not correct! Actually the points of the diagram are not so scattered and we can see a not so weak linear relationship."}
if(input$Question7 == 2){mensaje = "That is not right! In the diagram we can see how the points follow the shape of a straight line."}
if(input$Question7 == 3){mensaje = "It is not correct! The relationship seems to be strong so it would be worthwhile to perform a regression model between these variables."}
if(input$Question7 == 4){mensaje = "It is right! The relationship seems to be strong so it would be worthwhile to perform a regression model between these variables."}
if(input$Question7 == 4){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer8 <- renderUI({
actionID <- input$Question8
if(!is.null(actionID)){
if(input$Question8 == 1){mensaje = "It is not correct! Graphing the residuals of the model and its adjusted values does not tell me anything about the assumption of normality."}
if(input$Question8 == 2){mensaje = "That is right! We can see how the points are not uniformly distributed so the variance is not constant and a solution could be to transform the response variable."}
if(input$Question8 == 3){mensaje = "It is not correct! We can see how the points are not uniformly distributed so the variance is not constant."}
if(input$Question8 == 4){mensaje = "It is not correct! Graphing the residuals of the model and its adjusted values does not tell me anything about the assumption of normality."}
if(input$Question8 == 2){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer9 <- renderUI({
actionID <- input$Question9
if(!is.null(actionID)){
if(input$Question9 == 1){mensaje = "It is not correct! We can see how there are many points outside the confidence bands so there is no normality."}
if(input$Question9 == 2){mensaje = "That is not correct! This graph is not useful to conclude about the assumption of constant variance."}
if(input$Question9 == 3){mensaje = "It is not correct! This graph is not useful to conclude about the assumption of constant variance."}
if(input$Question9 == 4){mensaje = "It is correct! We can see how there are many points outside the confidence bands so there is no normality."}
if(input$Question9 == 4){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer10 <- renderUI({
actionID <- input$Question10
if(!is.null(actionID)){
if(input$Question10 == 1){mensaje = "It is not correct! Indeed, the relationship is positive, but is it linear? It does not look like a straight line, does it?"}
if(input$Question10 == 2){mensaje = "That is not correct! Indeed, the relationship is non-linear, but is it negative? It does not seem to decrease, does it?"}
if(input$Question10 == 3){mensaje = "It is correct! We can see the positive relationship with a functional form that is not linear."}
if(input$Question10 == 4){mensaje = "It is not correct! Can you not see any functional form?"}
if(input$Question10 == 3){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer11 <- renderUI({
actionID <- input$Question11
if(!is.null(actionID)){
if(input$Question11 == 1){mensaje = "It is not correct! Can you see any functional form?"}
if(input$Question11 == 2){mensaje = "That is not correct! Can you see any functional form?"}
if(input$Question11 == 3){mensaje = "It is not correct! Can you see any functional form?"}
if(input$Question11 == 4){mensaje = "It is correct! There is no association between the variables, neither positive, nor negative, nor with any functional form."}
if(input$Question11 == 4){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer12 <- renderUI({
actionID <- input$Question12
if(!is.null(actionID)){
if(input$Question12 == 1){mensaje = "It is not correct! 0.67668 is the value of the parameter associated with the traffic accidents variable and should not be interpreted as a percentage in this case."}
if(input$Question12 == 2){mensaje = "That is not correct! 0.67668 is the value of the parameter associated with the traffic accidents variable and should not be interpreted as a percentage in this case."}
if(input$Question12 == 3){mensaje = "That is right!"}
if(input$Question12 == 4){mensaje = "It is not correct! Think again"}
if(input$Question12 == 3){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer13 <- renderUI({
actionID <- input$Question13
if(!is.null(actionID)){
if(input$Question13 == 1){mensaje = "It is not correct! Remember that the problem of multicollinearity is greater the larger the value of the VIF."}
if(input$Question13 == 2){mensaje = "That is not correct! It is true that this variable has the greatest multicollinearity problem but alternatives must be considered before eliminating it."}
if(input$Question13 == 3){mensaje = "It is correct! It is important to try to make indicators before eliminating variables to group all the information that both variables can provide."}
if(input$Question13 == 4){mensaje = "It is not correct! This statement does not make much sense, review step 3 to remember this concept."}
if(input$Question13 == 3){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer14 <- renderUI({
actionID <- input$Question14
if(!is.null(actionID)){
if(input$Question14 == 1){mensaje = "It is correct! With this small p value we can conclude that the parameter that represents the linear relationship (slope of the regression line) between the variables is significant."}
if(input$Question14 == 2){mensaje = "That is not correct! With this small p value we can conclude that the parameter that represents the linear relationship (slope of the regression line) between the variables is significant."}
if(input$Question14 == 3){mensaje = "It is not correct! Are you sure that the parameter is not significant?"}
if(input$Question14 == 4){mensaje = "It is not correct! First, look at the p value, are you sure that the parameter is not significant?"}
if(input$Question14 == 1){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer15 <- renderUI({
actionID <- input$Question15
if(!is.null(actionID)){
if(input$Question15 == 1){mensaje = "It is not correct! We cannot conclude on the goodness of fit having only one test statistic (F in this case)."}
if(input$Question15 == 2){mensaje = "That is not correct! The multiple R squared is not given directly in terms of percentage."}
if(input$Question15 == 3){mensaje = "It is correct! The adjusted R squared converted to percentage tells us the variability of the response variable that is explained by the independent variables."}
if(input$Question15 == 4){mensaje = "It is not correct! The presented p-value does not show the statistical validity of the model."}
if(input$Question15 == 3){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
output$Answer16 <- renderUI({
actionID <- input$Question16
if(!is.null(actionID)){
if(input$Question16 == 1){mensaje = "It is not correct! There is no deterministic relationship with the response variable, in addition the adjusted R squared and R squared coefficients do not exceed 80% of variability explained."}
if(input$Question16 == 2){mensaje = "That is not correct! Before affirming this, another type of relationship should be considered, not just linear."}
if(input$Question16 == 3){mensaje = "It is right! it seems that there are variables whose linear relationship with the response variable is not significant but may have other types of relationships."}
if(input$Question16 == 4){mensaje = "It is not correct! Sports venues and extortions are important variables that should not be excluded"}
if(input$Question16 == 3){p(mensaje,icon("smile-wink","fa-2x"),style="background-color:#BFF7BB;padding:15px;text-align:justify;border-left: 8px solid green")}
else
{
p(mensaje,icon("sad-cry","fa-2x"),style="background-color:#FFA8A8;padding:15px;text-align:justify;border-left: 8px solid red")
}
}
else
{""}
})
})
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/hvalir.R
\name{hvalir_hvalir}
\alias{hvalir_hvalir}
\title{Hvalir}
\usage{
hvalir_hvalir(con)
}
\arguments{
\item{con}{Tenging við Oracle}
}
\value{
SQL fyrirspurn
}
\description{
Hvalir
}
| /man/hvalir_hvalir.Rd | no_license | vonStadarhraun/mar | R | false | true | 268 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/hvalir.R
\name{hvalir_hvalir}
\alias{hvalir_hvalir}
\title{Hvalir}
\usage{
hvalir_hvalir(con)
}
\arguments{
\item{con}{Tenging við Oracle}
}
\value{
SQL fyrirspurn
}
\description{
Hvalir
}
|
#' @title Confidence intervals for model parameters
#' @description Computes confidence intervals for parameters in a fitted gllvm model.
#'
#' @param object an object of class 'gllvm'.
#' @param level the confidence level. Scalar between 0 and 1.
#' @param parm a specification of which parameters are to be given confidence intervals, a vector of names. If missing, all parameters are considered.
#' @param ... not used.
#'
#' @author Jenni Niku <jenni.m.e.niku@@jyu.fi>
#'
#' @examples
#'## Load a dataset from the mvabund package
#'data(antTraits)
#'y <- as.matrix(antTraits$abund)
#'X <- as.matrix(antTraits$env[,1:2])
#'# Fit gllvm model
#'fit <- gllvm(y = y, X = X, family = poisson())
#'# 95 % confidence intervals for coefficients of X variables
#'confint(fit, level = 0.95, parm = "Xcoef")
#'
#'@export
confint.gllvm <- function(object, parm=NULL, level = 0.95, ...) {
  if(is.logical(object$sd)) stop("Standard errors for parameters haven't been calculated, so confidence intervals cannot be calculated.")
n <- NROW(object$y)
p <- NCOL(object$y)
nX <- 0; if(!is.null(object$X)) nX <- dim(object$X)[2]
nTR <- 0; if(!is.null(object$TR)) nTR <- dim(object$TR)[2]
num.lv <- object$num.lv
alfa <- (1 - level) / 2
if(object$row.eff == "random") object$params$row.params = NULL
if(is.null(parm)){
if (object$family == "negative.binomial") {
object$params$phi <- NULL
object$sd$phi <- NULL
}
parm_all <- c("theta", "beta0", "Xcoef", "B", "row.params", "sigma", "inv.phi", "phi", "p")
parmincl <- parm_all[parm_all %in% names(object$params)]
cilow <- unlist(object$params[parmincl]) + qnorm(alfa) * unlist(object$sd[parmincl])
ciup <- unlist(object$params[parmincl]) + qnorm(1 - alfa) * unlist(object$sd[parmincl])
M <- cbind(cilow, ciup)
colnames(M) <- c(paste(alfa * 100, "%"), paste((1 - alfa) * 100, "%"))
rnames <- names(unlist(object$params))
cal <- 0
if (num.lv > 0) {
nr <- rep(1:num.lv, each = p)
nc <- rep(1:p, num.lv)
rnames[1:(num.lv * p)] <- paste(paste("theta.LV", nr, sep = ""), nc, sep = ".")
cal <- cal + num.lv * p
}
rnames[(cal + 1):(cal + p)] <- paste("Intercept",names(object$params$beta0), sep = ".")
cal <- cal + p
if (!is.null(object$TR)) {
nr <- names(object$params$B)
rnames[(cal + 1):(cal + length(nr))] <- nr
cal <- cal + length(nr)
}
if (is.null(object$TR) && !is.null(object$X)) {
cnx <- rep(colnames(object$X), each = p)
rnc <- rep(rownames(object$params$Xcoef), nX)
newnam <- paste(cnx, rnc, sep = ":")
rnames[(cal + 1):(cal + nX * p)] <- paste("Xcoef", newnam, sep = ".")
cal <- cal + nX * p
}
if (object$row.eff %in% c("fixed",TRUE)) {
rnames[(cal + 1):(cal + n)] <- paste("Row.Intercept", 1:n, sep = ".")
cal <- cal + n
}
if (object$row.eff == "random") {
rnames[(cal + 1)] <- "sigma"
cal <- cal + 1
}
if(object$family == "negative.binomial"){
s <- dim(M)[1]
rnames[(cal + 1):s] <- paste("inv.phi", names(object$params$beta0), sep = ".")
}
if(object$family == "tweedie"){
s <- dim(M)[1]
rnames[(cal + 1):s] <- paste("Dispersion phi", names(object$params$phi), sep = ".")
}
if(object$family == "ZIP"){
s <- dim(M)[1]
rnames[(cal + 1):s] <- paste("p", names(object$params$p), sep = ".")
}
if(object$family == "gaussian"){
s <- dim(M)[1]
rnames[(cal + 1):s] <- paste("Standard deviations phi", names(object$params$phi), sep = ".")
}
rownames(M) <- rnames
} else {
if ("beta0" %in% parm) {
object$params$Intercept = object$params$beta0
object$sd$Intercept = object$sd$beta0
parm[parm=="beta0"] = "Intercept"
}
cilow <- unlist(object$params[parm]) + qnorm(alfa) * unlist(object$sd[parm])
ciup <- unlist(object$params[parm]) + qnorm(1 - alfa) * unlist(object$sd[parm])
M <- cbind(cilow, ciup)
}
return(M)
}
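The intervals built above are plain Wald intervals: point estimate plus or minus a normal quantile times the standard error. A standalone sketch of that computation, using made-up estimates and standard errors rather than a fitted gllvm object:

```r
level <- 0.95
alfa  <- (1 - level) / 2                     # two-sided tail probability
est   <- c(beta1 = 0.80, beta2 = -0.30)      # hypothetical point estimates
se    <- c(0.20, 0.10)                       # hypothetical standard errors
M <- cbind(est + qnorm(alfa) * se,           # lower bound
           est + qnorm(1 - alfa) * se)       # upper bound
colnames(M) <- c(paste(alfa * 100, "%"), paste((1 - alfa) * 100, "%"))
M  # beta1 roughly spans 0.41 to 1.19; beta2 roughly spans -0.50 to -0.10
```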
| /R/confint.gllvm.R | no_license | jebyrnes/gllvm | R | false | false | 3,986 | r |
model_FileName <- "DigitRecognizer-Model.RData"
# Libraries
library(shiny)
library(EBImage)
library(nnet)
load(model_FileName)
ui <- fluidPage(
fileInput(inputId = "image",h3("Choose an Image"),multiple=TRUE,accept=c('image/png','image/jpeg')),
textOutput("digit"),
imageOutput("myImage")
)
server <- function(input, output) {
output$digit <- renderText({
inFile <- input$image
if (is.null(inFile))
return(NULL)
old = inFile$datapath
new = file.path(dirname(inFile$datapath),inFile$name)
file.rename(from = old , to = new)
inFile$datapath <- new
Image <- readImage(inFile$datapath)
nof=numberOfFrames(Image, type = c('total', 'render'))
if(nof==1)
{
image=255*imageData(Image[1:28,1:28])
}else
if(nof==3)
{
r=255*imageData(Image[1:28,1:28,1])
g=255*imageData(Image[1:28,1:28,2])
b=255*imageData(Image[1:28,1:28,3])
image=0.21*r+0.72*g+0.07*b
image=round(image)
}
image=t(image)
image=as.vector(t(image))
write.csv(t(as.matrix(image)),'threepx.csv',row.names = FALSE)
test_FileName <-'threepx.csv'
new_Test_Dataset <- read.csv(test_FileName)
test_reduced <- new_Test_Dataset/255
test_reduced <- as.matrix(test_reduced) %*% prin_comp$rotation[,1:260]
New_Predicted <- predict(model_final,test_reduced,type="class")
paste("Predicted Value is",New_Predicted)
})
output$myImage <- renderImage({
inFile <- input$image
if (is.null(inFile))
return(NULL)
outfile <- tempfile(fileext='.jpg')
    old = inFile$datapath
new = file.path(dirname(inFile$datapath),inFile$name)
file.rename(from = old , to = new)
inFile$datapath <- new
list(src = inFile$datapath,
contentType = 'image/jpeg',
width =200,
height=200,
alt = "Predicted Image")
}, deleteFile = TRUE)
}
shinyApp(ui = ui, server = server)
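The server converts an RGB frame to grayscale with the luminosity weights 0.21/0.72/0.07 before flattening it to a 784-element pixel vector. A minimal sketch of that conversion on a fake 28×28 image (plain matrices stand in for the EBImage channel data):

```r
set.seed(1)
r <- matrix(runif(28 * 28, 0, 255), 28, 28)     # fake red channel
g <- matrix(runif(28 * 28, 0, 255), 28, 28)     # fake green channel
b <- matrix(runif(28 * 28, 0, 255), 28, 28)     # fake blue channel
image <- round(0.21 * r + 0.72 * g + 0.07 * b)  # luminosity-weighted grayscale
image <- as.vector(t(image))                    # flatten to one row of pixels
length(image)                                   # 784 intensities, to be scaled by 1/255
```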
| /ProjectUI.R | no_license | sabhi27/Digit_Recognization | R | false | false | 2,267 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/io.R
\name{fwrite_mm}
\alias{fwrite_mm}
\title{Efficiently write Matrix Market format}
\usage{
fwrite_mm(x, fname, sep = " ", row.names = TRUE, col.names = TRUE)
}
\arguments{
\item{x}{sparse matrix}
\item{fname}{file name}
\item{sep}{separator for output file}
\item{row.names}{save row names of the matrix}
\item{col.names}{save column names of the matrix}
}
\value{
None
}
\description{
Efficiently write Matrix Market format
}
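Based only on the documented signature, a call might look like the following sketch (hedged: the sparse matrix construction uses the Matrix package, and the file name is hypothetical):

```r
library(Matrix)
# A tiny named sparse matrix to write out
m <- sparseMatrix(i = c(1, 3), j = c(2, 1), x = c(0.5, 2),
                  dims = c(3, 3),
                  dimnames = list(paste0("r", 1:3), paste0("c", 1:3)))
fwrite_mm(m, fname = "m.mm", sep = " ", row.names = TRUE, col.names = TRUE)
```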
| /man/fwrite_mm.Rd | no_license | tanaylab/tgutil | R | false | true | 513 | rd |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/preprint_growth.R
\docType{data}
\name{preprint_growth}
\alias{preprint_growth}
\title{Growth in the number of biology preprints posted
Monthly number of biology preprints posted in several widely used archives.}
\format{An object of class \code{tbl_df} (inherits from \code{tbl}, \code{data.frame}) with 1080 rows and 3 columns.}
\source{
Jordan Anaya, \url{http://www.prepubmed.org/}
}
\usage{
preprint_growth
}
\description{
Growth in the number of biology preprints posted
Monthly number of biology preprints posted in several widely used archives.
}
\examples{
preprint_growth \%>\% filter(archive \%in\% c("bioRxiv", "arXiv q-bio")) \%>\%
filter(count > 0) -> df_preprints
df_final <- filter(df_preprints, date == max(date))
ggplot(df_preprints, aes(date, count, color = archive)) + geom_line() +
scale_y_log10(limits = c(30, 1600), breaks = c(30, 100, 300, 1000), expand = c(0, 0),
name = "preprints / month",
sec.axis = dup_axis(breaks = df_final$count, labels = df_final$archive,
name = NULL)) +
scale_x_date(name = NULL, expand = c(0, 0)) +
scale_color_manual(values = c("#D55E00", "#0072B2"), guide = "none") +
labs(title = "Rapid growth of bioRxiv",
subtitle = "After the introduction of bioRxiv, q-bio stopped growing",
caption = "Data source: Jordan Anaya, http://www.prepubmed.org/") +
theme_minimal_grid() +
theme(plot.title = element_text(hjust = 0),
plot.caption = element_text(size = 10))
}
\keyword{datasets}
| /dviz.supp/man/preprint_growth.Rd | permissive | f0nzie/dataviz-wilke-2020 | R | false | true | 1,614 | rd |
#Load the library
library(dplyr)
library(lubridate)
# Mention the directory path
dir<-"/Users/manavsehgal/Library/Mobile Documents/com~apple~CloudDocs/Work/Study/Coursera/Data-Scientist/datasciencecoursera/ExData_Plotting1"
# Mention the name of the file
file_name<-"household_power_consumption.txt"
#Load the variable names
var_names<-names(read.table(file_name,header = TRUE,sep=";",nrow=1))
# Load the file into a dataset
power_consumption<-read.table(file_name,header = FALSE,sep=";",skip = grep("^1/2/2007",readLines(file_name)),nrow=length(grep("^1/2/2007|^2/2/2007",readLines(file_name)))-1)
names(power_consumption)<-var_names
#Convert the dataset into tibble for easy viewing
power_consumption<-tbl_df(power_consumption)
print(power_consumption)
#Create a datetime variable
date_time<-strptime(paste(power_consumption$Date,power_consumption$Time),"%d/%m/%Y %H:%M:%S")
#Set the device
png("plot4.png",width = 480,height = 480)
#Set the par variable
par(mfcol=c(2,2),mar=c(4,4,2,2))
#Create the 1st of 4 plots
plot(date_time,power_consumption$Global_active_power,type = "l",ylab="Global Active Power",xlab = "")
#Create the 2nd of 4 plots
plot(date_time,power_consumption$Sub_metering_1,type = "l",ylab="Energy Sub metering",xlab="")
lines(date_time,power_consumption$Sub_metering_2,type = "l",col="red")
lines(date_time,power_consumption$Sub_metering_3,type = "l",col="blue")
legend("topright",legend = c("Sub_metering_1","Sub_metering_2","Sub_metering_3"),col=c("black","red","blue"),lty=1,bty="n")
#Create the 3rd of 4 plots
plot(date_time,power_consumption$Voltage,type = "l",ylab="Voltage",xlab="datetime")
#Create the 4th of 4 plots
plot(date_time,power_consumption$Global_reactive_power,type = "l",ylab="Global_reactive_power",xlab="datetime")
#Close the device
dev.off()
| /plot4.R | no_license | msehgal13/ExData_Plotting1 | R | false | false | 1,799 | r |
if ("Banco" %in% search()) detach(Banco)
library(readxl)
Banco <- read_excel("arquivos/Banco.xlsx")
#View(Banco)
attach(Banco)
Banco$data_atual = Sys.Date()
idade = round(difftime(data_atual,datanasc)/365.25)
idade <- as.numeric(idade)
Banco$idade = idade
# cut() needs one label per interval: 5 breaks define 4 intervals, so 4 labels
fax_etaria = cut(Banco$idade, breaks = c(46, 50, 60, 70, 80), labels = c(1, 2, 3, 4), right = T)
Banco$fax_etaria = fax_etaria
Banco
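The age computation divides the day difference by 365.25 to absorb leap years, and `cut()` then bins the ages into bands; note that `cut()` requires exactly one label per interval (one fewer than the number of breaks). A self-contained sketch with hypothetical birth dates:

```r
datanasc <- as.Date(c("1950-06-01", "1962-03-15", "1975-12-31"))  # hypothetical birth dates
idade <- as.numeric(round(difftime(Sys.Date(), datanasc) / 365.25))
# breaks c(46,50,60,70,80) define the intervals (46,50], (50,60], (60,70], (70,80]
fax_etaria <- cut(idade, breaks = c(46, 50, 60, 70, 80),
                  labels = c(1, 2, 3, 4), right = TRUE)
data.frame(idade, fax_etaria)
```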
| /calc_risco.R | no_license | flavioti/aula_r | R | false | false | 365 | r |
.set_proj_data <- function(proj_data = ""){
l <- try(.Call("PROJ_proj_set_data_dir", proj_data), silent = TRUE)
if (inherits(l, "try-error")) {
ok <- FALSE
} else {
ok <- TRUE
}
ok
}
.onLoad <- function(libname, pkgname) {
pkg_proj_lib <- FALSE
if (tolower(Sys.info()[["sysname"]]) %in% c("windows", "darwin")) {
path <- system.file("proj", package = "PROJ", mustWork = FALSE)
if (nchar(path) > 0) {
pkg_proj_lib <- .set_proj_data(path)
}
} else {
pkg_proj_lib <- TRUE
}
ok <- ok_proj6()
options(PROJ.HAVE_PROJ6 = ok, PROJ.HAVE_PROJ_LIB_PKG = pkg_proj_lib)
invisible(NULL)
}
| /R/zzz.R | no_license | paleolimbot/PROJ | R | false | false | 651 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/plot_psvcsignal.R
\name{plot.psvcsignal}
\alias{plot.psvcsignal}
\title{Plotting function for \code{psVCSignal}}
\usage{
\method{plot}{psvcsignal}(x, ..., xlab = " ", ylab = " ", Resol = 100)
}
\arguments{
\item{x}{the P-spline object, usually from \code{psVCSignal}.}
\item{...}{other parameters.}
\item{xlab}{label for the x-axis, e.g. "my x" (quotes required).}
\item{ylab}{label for the y-axis, e.g. "my y" (quotes required).}
\item{Resol}{resolution for plotting, default \code{Resol = 100}.}
}
\value{
\item{Plot}{a two panel plot, one of the 2D P-spline signal coefficient surface
and another that displays several slices of the smooth coefficient vectors at fixed levels of the
varying index.}
}
\description{
Plotting function for varying-coefficient signal
regression P-spline smooth coefficients (using \code{psVCSignal} with \code{class psvcsignal}).
Although se surface bands can be computed, they are intentionally left out, as they are not
interpretable, and there is generally little data to steer
such a high-dimensional parameterization.
}
\examples{
library(fds)
data(nirc)
iindex <- nirc$x
X <- nirc$y
sel <- 50:650 # 1200 <= x & x<= 2400
X <- X[sel, ]
iindex <- iindex[sel]
dX <- diff(X)
diindex <- iindex[-1]
y <- as.vector(labc[1, 1:40]) # percent fat
t_var <- as.vector(labc[4, 1:40]) # percent flour
oout <- 23
dX <- t(dX[, -oout])
y <- y[-oout]
t_var = t_var[-oout]
Pars = rbind(c(min(diindex), max(diindex), 25, 3, 1e-7, 2),
c(min(t_var), max(t_var), 20, 3, 0.0001, 2))
fit1 <- psVCSignal(y, dX, diindex, t_var, Pars = Pars,
family = "gaussian", link = "identity", int = TRUE)
plot(fit1, xlab = "Coefficient Index", ylab = "VC: \% Flour")
names(fit1)
}
\references{
Eilers, P. H. C. and Marx, B. D. (2003). Multivariate calibration with temperature
interaction using two-dimensional penalized signal regression. \emph{Chemometrics
and Intelligent Laboratory Systems}, 66, 159–174.
Eilers, P.H.C. and Marx, B.D. (2021). \emph{Practical Smoothing, The Joys of
P-splines.} Cambridge University Press.
}
\author{
Paul Eilers and Brian Marx
}
| /man/plot.psvcsignal.Rd | no_license | cran/JOPS | R | false | true | 2,211 | rd |
#' @param as.df [\code{character(1)}]\cr
#'   Return one data.frame as result, or a list of lists of objects?
#' Default is \code{FALSE}
| /man-roxygen/arg_bmr_asdf.R | no_license | dickoa/mlr | R | false | false | 142 | r |
#' Measurement units in a dataframe
#'
#' Returns a dataframe with measurement units from a parsed OpenClinica
#' odm1.3 .xml export file.
#'
#' @param parsed_xml An object of class \code{XMLInternalDocument}, as returned
#' by \code{XML::xmlParse()}.
#'
#' @return A dataframe.
#' @export
#'
#' @examples
#' # The example odm1.3 xml file address
#' my_file <- system.file("extdata",
#' "odm1.3_full_example.xml",
#' package = "ox",
#' mustWork = TRUE)
#'
#' # Parsing the xml file
#' library(XML)
#' doc <- xmlParse(my_file)
#'
#' # Measurement units in a dataframe
#' measurement_units <- ox_units(doc)
#' head(measurement_units)
ox_units <- function (parsed_xml) {
if (! "XMLInternalDocument" %in% class(parsed_xml)) {
stop("parsed_xml should be an object of class XMLInternalDocument", call. = FALSE)
}
.attrs_node_and_ancestors(parsed_xml, "MeasurementUnit") %>%
data.frame(stringsAsFactors = FALSE) -> res
  # Dropping unneeded vars
  # NOT with dplyr::select, because they are not all always present!
res$FileOID <- NULL
res$Description <- NULL
res$CreationDateTime <- NULL
res$FileType <- NULL
res$ODMVersion <- NULL
res$schemaLocation <- NULL
res$MetaDataVersionOID <- NULL
if ("OID" %in% names(res)) {
names(res)[which(names(res) == "OID")] <- "study_oid"
}
if ("OID.1" %in% names(res)) {
names(res)[which(names(res) == "OID.1")] <- "unit_oid"
}
if ("Name" %in% names(res)) {
names(res)[which(names(res) == "Name")] <- "unit_name"
}
# change CamelCase by snake_case
names(res) <- snakecase::to_snake_case(names(res))
# return
res
}
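The attribute names pulled from the xml arrive in the ODM's CamelCase; the last step normalises them with `snakecase::to_snake_case()`. A small sketch of that renaming (assumes the `snakecase` package is installed; the column names and values are made up for illustration):

```r
library(snakecase)
# Hypothetical attribute names as they might appear in an ODM export
res <- data.frame(OID = "S_1", Name = "mg/dl", CodeListOID = "CL1",
                  stringsAsFactors = FALSE)
names(res)[names(res) == "OID"]  <- "study_oid"   # disambiguate before case conversion
names(res)[names(res) == "Name"] <- "unit_name"
names(res) <- to_snake_case(names(res))
names(res)
```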
| /R/ox_units.R | no_license | ccollins0601/ox | R | false | false | 1,682 | r |
# The data household_power_consumption has been stored in the working directory
# Read the data into the working environment
data1 <- read.csv("household_power_consumption.txt", header =T, sep=";", na.strings ="?")
## use subset to select 2 days data for plotting
ndata <- subset(data1, data1$Date == '1/2/2007'|data1$Date == '2/2/2007', )
ndata$Date<- as.Date(ndata$Date, format = "%d/%m/%Y")
## Combine the Date and Time using paste to create DateTime column
DateTime <- paste(as.Date(ndata$Date), ndata$Time, sep = " ")
ndata$DateTime<- as.POSIXct(DateTime)
## We use par to change the margins to fit our plot and call the png device
png( filename = "plot2.png", width = 480, height = 480, bg ="white")
par( mar = c(6,6,1,2))
plot(ndata$Global_active_power~ndata$DateTime, type ="l",
ylab = "Global Active Power (kilowatts)", xlab="")
dev.off( )
| /plot2.R | no_license | vivianni/ExploratoryDataAnalysis | R | false | false | 866 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Musicnotation.R
\name{Musicnotation}
\alias{Musicnotation}
\title{Function Musicnotation}
\usage{
Musicnotation(data_sheet, ...)
}
\arguments{
\item{data_sheet}{\bold{either} a data.frame f.e imported from a data sheet containing\cr
"Name","item.number"\cr
"action.from","action.to","kind.of.action"\cr
"name.of.action","action.number","classification","weighting"\cr
\cr
\bold{or} only "action.from","action.to","kind.of.action" if actions and items exist\cr
\cr
actions: with "name.of.action","action.number","classification","weighting"\cr
items: with "Name","item.number"\cr
Setting a behaviour to 2 means it is counted double\cr}
\item{\dots}{\bold{Additional parameters:}
\describe{
\item{\bold{colors}}{a factor of colors, as many as actions}
\item{\bold{lwd}}{line width; if lwd_arrows is not given, also used for the line width of the arrows}
# TODO: check this, it is not working -> without show_items all items will be shown
\item{\bold{show_items}}{items to be shown}
\item{\bold{angel_arrows}}{the angle of the arrow head, default 20}
\item{\bold{length_arrows}}{the length of the arrow, default 0.05}
\item{\bold{lwd_arrows}}{the line width of the arrows, default 1}
\item{\bold{actions_colors}}{a vector of colors for actions, e.g. to show one special action}
\item{\bold{starting_time}}{builds the graph with data between starting and ending time}
\item{\bold{ending_time}}{builds the graph with data between starting and ending time}
\item{\bold{user_colors}}{a vector of colors, as many as items, to show different colors for items}
\item{\bold{color_bits}}{a vector of bits, as many as items; 1 shows the horse colored, 0 in black (defined with actions_colors)}
}
}
}
\value{
returns a list with\cr
ADI - the Average Dominance index\cr
}
\description{
A function to visualize interaction with a music notation graph.
}
\examples{
{ #you can either use:
data_sheet=data.frame ("action.from"=c(1,4,2,3,4,3,4,3,4,3,4,3,4,3,4),
"action.to"=c(4,1,1,4,3,4,3,4,3,4,3,4,3,4,3),
"kind.of.action"= c(4,1,1,4,3,4,3,4,3,4,3,4,3,4,3),
"Time"=c('03:15:00','03:17:30','03:20:00','03:20:30','03:21:00',
'03:21:30','03:22:00','03:22:30','03:23:00','03:23:30',
'03:25:00','03:25:30','03:26:00','03:26:30','03:27:00'),
stringsAsFactors=FALSE)
items= data.frame ("Name"=c("item1","item2","item3","item4","item5","item6") ,
"item.number"=c(1:6),stringsAsFactors=FALSE)
actions=data.frame("name.of.action"= c("leading","following","approach","bite","threat to bite",
"kick","threat to kick", "chase","retreat"),
"action.number"=c(1:9),
"classification"=c(1,2,1,1,1,1,1,1,2) ,
"weighting"=c(1,-1,1,1,1,1,1,1,-1),stringsAsFactors=FALSE)
# set colors for special encounters
color= c("green","green","red","red","red","red","red","red")
Musicnotation(data_sheet=data_sheet,actions=actions,items=items,sort_dominance=TRUE)
#or you can use a complete f.e Excel sheet
#you can save this data as basic excel sheet to work with
data(data_Musicnotation)
Musicnotation(data_sheet=data_Musicnotation,sort_dominance=TRUE) }
}
\references{
{
Chase, I. D. (2006). Music notation: a new method for visualizing social interaction in animals and humans. Front Zool, 3, 18.
\doi{10.1186/1742-9994-3-18}
}
}
\author{
Knut Krueger, \email{Knut.Krueger@equine-science.de}
}
| /man/Musicnotation.Rd | no_license | cran/Dominance | R | false | true | 3,604 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Musicnotation.R
\name{Musicnotation}
\alias{Musicnotation}
\title{Function Musicnotation}
\usage{
Musicnotation(data_sheet, ...)
}
\arguments{
\item{data_sheet}{\bold{either} a data.frame, e.g. imported from a data sheet, containing\cr
"Name","item.number"\cr
"action.from.","action.to","kind.of.action"\cr
"name.of.action","action.number","classification","weighting"\cr
\cr
\bold{or} only "action.from.","action.to","kind.of.action" if actions and items exist\cr
\cr
actions: with "name.of.action","action.number","classification","weighting"\cr
items: with "Name","item.number"\cr
Setting a behaviour to 2 means it is counted double\cr}
\item{\dots}{\bold{Additional parameters:}
\describe{
\item{\bold{colors}}{a vector of colors, one per action}
\item{\bold{lwd}}{line width; if lwd_arrows is not given, also used for the arrow line width}
\item{\bold{show_items}}{items to be shown (if omitted, all items are shown)}
\item{\bold{angel_arrows}}{the angle of the arrow head, default 20}
\item{\bold{length_arrows}}{the length of the arrow, default 0.05}
\item{\bold{lwd_arrows}}{the line width of the arrows, default 1}
\item{\bold{actions_colors}}{a vector of colors for actions, e.g. to highlight one special action}
\item{\bold{starting_time}}{builds the graph with data between starting and ending time}
\item{\bold{ending_time}}{builds the graph with data between starting and ending time}
\item{\bold{user_colors}}{a vector of colors, one per item, to show different colors for items}
\item{\bold{color_bits}}{a vector of bits, one per item: 1 shows the item (e.g. the horse) in color, 0 in black (defined with actions_colors)}
}
}
}
\value{
returns a list with\cr
ADI - the Average Dominance index\cr
}
\description{
A function to visualize interactions with a music notation graph.
}
\examples{
{ # you can either use:
data_sheet=data.frame ("action.from"=c(1,4,2,3,4,3,4,3,4,3,4,3,4,3,4),
"action.to"=c(4,1,1,4,3,4,3,4,3,4,3,4,3,4,3),
"kind.of.action"= c(4,1,1,4,3,4,3,4,3,4,3,4,3,4,3),
"Time"=c('03:15:00','03:17:30','03:20:00','03:20:30','03:21:00',
'03:21:30','03:22:00','03:22:30','03:23:00','03:23:30',
'03:25:00','03:25:30','03:26:00','03:26:30','03:27:00'),
stringsAsFactors=FALSE)
items= data.frame ("Name"=c("item1","item2","item3","item4","item5","item6") ,
"item.number"=c(1:6),stringsAsFactors=FALSE)
actions=data.frame("name.of.action"= c("leading","following","approach","bite","threat to bite",
"kick","threat to kick", "chase","retreat"),
"action.number"=c(1:9),
"classification"=c(1,2,1,1,1,1,1,1,2) ,
"weighting"=c(1,-1,1,1,1,1,1,1,-1),stringsAsFactors=FALSE)
# set colors for special encounters
color= c("green","green","red","red","red","red","red","red")
Musicnotation(data_sheet=data_sheet,actions=actions,items=items,sort_dominance=TRUE)
# or you can use a complete data sheet, e.g. imported from Excel
# you can save this data as a basic Excel sheet to work with
data(data_Musicnotation)
Musicnotation(data_sheet=data_Musicnotation,sort_dominance=TRUE) }
}
\references{
{
Chase, I. D. (2006). Music notation: a new method for visualizing social interaction in animals and humans. Front Zool, 3, 18.
\doi{10.1186/1742-9994-3-18}
}
}
\author{
Knut Krueger, \email{Knut.Krueger@equine-science.de}
}
|
#' select or remove tokens from a tokens object
#'
#' This function selects or discards tokens from a \link{tokens} object, with
#' \code{tokens_remove(x, features)} defined as a shortcut for
#' \code{tokens_select(x, features, selection = "remove")}. The most common
#' usage for \code{tokens_remove} will be to eliminate stop words from a
#' text or text-based object, while the most common use of \code{tokens_select} will
#' be to select only positive features from a list of regular
#' expressions, including a dictionary.
#' @param x \link{tokens} object whose token elements will be selected
#' @param features one of: a character vector of features to be selected, or a \link{dictionary} class
#' object whose values (not keys) will provide the features to be selected.
#' @param selection whether to \code{"keep"} or \code{"remove"} the features
#' @inheritParams valuetype
#' @param case_insensitive ignore case when matching, if \code{TRUE}
#' @param verbose if \code{TRUE} print messages about how many features were
#' removed
#' @param padding if \code{TRUE}, leave
#' an empty string where the removed tokens previously existed. This is
#' useful if a positional match is needed between the pre- and post-selected
#' features, for instance if a window of adjacency needs to be computed.
#' @return a tokens object with features removed
#' @export
tokens_select <- function(x, features, selection = c("keep", "remove"),
valuetype = c("glob", "regex", "fixed"),
case_insensitive = TRUE, padding = FALSE, verbose = FALSE) {
UseMethod("tokens_select")
}
#' @rdname tokens_select
#' @noRd
#' @export
#' @examples
#' ## with simple examples
#' toks <- tokens(c("This is a sentence.", "This is a second sentence."),
#' removePunct = TRUE)
#' tokens_select(toks, c("is", "a", "this"), selection = "keep", padding = FALSE)
#' tokens_select(toks, c("is", "a", "this"), selection = "keep", padding = TRUE)
#' tokens_select(toks, c("is", "a", "this"), selection = "remove", padding = FALSE)
#' tokens_select(toks, c("is", "a", "this"), selection = "remove", padding = TRUE)
#'
#' # how case_insensitive works
#' tokens_select(toks, c("is", "a", "this"), selection = "remove", case_insensitive = TRUE)
#' tokens_select(toks, c("is", "a", "this"), selection = "remove", case_insensitive = FALSE)
#'
#' \dontshow{
#' ## with simple examples
#' toks <- tokenize(c("This is a sentence.", "This is a second sentence."),
#' removePunct = TRUE)
#' tokens_select(toks, c("is", "a", "this"), selection = "remove",
#' valuetype = "fixed", padding = TRUE, case_insensitive = TRUE)
#'
#' # how case_insensitive works
#' tokens_select(toks, c("is", "a", "this"), selection = "remove",
#' valuetype = "fixed", padding = TRUE, case_insensitive = FALSE)
#' tokens_select(toks, c("is", "a", "this"), selection = "remove",
#' valuetype = "fixed", padding = TRUE, case_insensitive = TRUE)
#' tokens_select(toks, c("is", "a", "this"), selection = "remove",
#' valuetype = "glob", padding = TRUE, case_insensitive = TRUE)
#' tokens_select(toks, c("is", "a", "this"), selection = "remove",
#' valuetype = "glob", padding = TRUE, case_insensitive = FALSE)
#'
#' # with longer texts
#' txts <- data_char_inaugural[1:2]
#' toks <- tokenize(txts)
#' tokens_select(toks, stopwords("english"), "remove")
#' tokens_select(toks, stopwords("english"), "keep")
#' tokens_select(toks, stopwords("english"), "remove", padding = TRUE)
#' tokens_select(toks, stopwords("english"), "keep", padding = TRUE)
#' tokens_select(tokenize(data_char_inaugural[2]), stopwords("english"), "remove", padding = TRUE)
#' }
tokens_select.tokenizedTexts <- function(x, features, selection = c("keep", "remove"),
valuetype = c("glob", "regex", "fixed"),
case_insensitive = TRUE, padding = FALSE, verbose = FALSE) {
x <- tokens_select(as.tokens(x), features, selection, valuetype,
                   case_insensitive = case_insensitive, padding = padding, verbose = verbose)
x <- as.tokenizedTexts(x)
return(x)
}
#' @rdname tokens_select
#' @noRd
#' @importFrom RcppParallel RcppParallelLibs
#' @export
#' @examples
#' toksh <- tokens(c(doc1 = "This is a SAMPLE text", doc2 = "this sample text is better"))
#' feats <- c("this", "sample", "is")
#' # keeping features
#' tokens_select(toksh, feats, selection = "keep")
#' tokens_select(toksh, feats, selection = "keep", padding = TRUE)
#' tokens_select(toksh, feats, selection = "keep", case_insensitive = FALSE)
#' tokens_select(toksh, feats, selection = "keep", padding = TRUE, case_insensitive = FALSE)
#' # removing features
#' tokens_select(toksh, feats, selection = "remove")
#' tokens_select(toksh, feats, selection = "remove", padding = TRUE)
#' tokens_select(toksh, feats, selection = "remove", case_insensitive = FALSE)
#' tokens_select(toksh, feats, selection = "remove", padding = TRUE, case_insensitive = FALSE)
#'
#' # With longer texts
#' txts <- data_char_inaugural
#' toks <- tokens(txts)
#' tokens_select(toks, stopwords("english"), "remove")
#' tokens_select(toks, stopwords("english"), "keep")
#' tokens_select(toks, stopwords("english"), "remove", padding = TRUE)
#' tokens_select(toks, stopwords("english"), "keep", padding = TRUE)
#'
#' # With multiple words
#' tokens_select(toks, list(c('President', '*')), "keep")
#' tokens_select(toks, list(c('*', 'crisis')), "keep")
tokens_select.tokens <- function(x, features, selection = c("keep", "remove"),
valuetype = c("glob", "regex", "fixed"),
case_insensitive = TRUE, padding = FALSE, ...) {
if (!is.tokens(x))
stop("x must be a tokens object")
features <- vector2list(features)
selection <- match.arg(selection)
valuetype <- match.arg(valuetype)
names_org <- names(x)
attrs_org <- attributes(x)
types <- types(x)
features <- as.list(features)
features_id <- regex2id(features, types, valuetype, case_insensitive)
if ("" %in% features) features_id <- c(features_id, list(0)) # append padding index
if (selection == 'keep') {
x <- qatd_cpp_tokens_select(x, features_id, 1, padding)
} else {
x <- qatd_cpp_tokens_select(x, features_id, 2, padding)
}
names(x) <- names_org
attributes(x) <- attrs_org
tokens_hashed_recompile(x)
}
#' @rdname tokens_select
#' @export
#' @examples
#' ## for tokenized texts
#' txt <- c(wash1 <- "Fellow citizens, I am again called upon by the voice of my country to
#' execute the functions of its Chief Magistrate.",
#' wash2 <- "When the occasion proper for it shall arrive, I shall endeavor to express
#' the high sense I entertain of this distinguished honor.")
#' tokens_remove(tokens(txt, removePunct = TRUE), stopwords("english"))
#'
#' \dontshow{
#' ## for tokenized texts
#' txt <- c(wash1 <- "Fellow citizens, I am again called upon by the voice of my country to
#' execute the functions of its Chief Magistrate.",
#' wash2 <- "When the occasion proper for it shall arrive, I shall endeavor to express
#' the high sense I entertain of this distinguished honor.")
#' tokens_remove(tokenize(txt, removePunct = TRUE), stopwords("english"))
#'
#' ## example for collocations
#' (myCollocs <- collocations(data_char_inaugural[1:3], n=20))
#' removeFeatures(myCollocs, stopwords("english"))
#' removeFeatures(myCollocs, stopwords("english"), pos = 2)
#' }
tokens_remove <- function(x, features, valuetype = c("glob", "regex", "fixed"),
case_insensitive = TRUE, padding = FALSE, verbose = FALSE) {
tokens_select(x, features, selection = "remove", valuetype = valuetype,
case_insensitive = case_insensitive, padding = padding, verbose = verbose)
}
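# Illustrative note (hypothetical toks object; assumes quanteda is attached):
# by construction, tokens_remove() is tokens_select() with selection = "remove", so
# toks <- tokens("if and but the quick brown fox")
# identical(tokens_remove(toks, stopwords("english")),
#           tokens_select(toks, stopwords("english"), selection = "remove"))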
| /R/tokens_select.R | no_license | patperry/quanteda | R | false | false | 8,062 | r |
# Hierarchical Linear Model - Determine Individual Level Effect - Estimate the values in the model for each respondent
# Roller Coaster Conjoint Analysis: levels of maximum speed, height, construction type, and theme.
# when you have multiple observations for an individual or other grouping factor of interest, you should consider a hierarchical model that estimates both sample-level and individual- or group-level effects.
# Simulating Ratings-Based Conjoint Data
set.seed(12345)
resp.id <- 1:200
nques <- 16
speed <- sample(as.factor(c("40", "50", "60", "70")), size = nques,
replace = TRUE)
height <- sample(as.factor(c("200", "300", "400")), size=nques, replace=TRUE)
const <- sample(as.factor(c("Wood", "Steel")), size= nques, replace=TRUE)
theme <- sample(as.factor(c("Dragon", "Eagle")), size=nques, replace=TRUE)
profiles.df <- data.frame(speed, height, const, theme)
profiles.model <- model.matrix(~ speed + height + const + theme, data = profiles.df) # converts the list of design attributes into coded variables;
install.packages("MASS")
library(MASS)
weights <- mvrnorm(length(resp.id),
mu = c(-3, 0.5, 1, 3, 2, 1, 0, -0.5),
Sigma = diag(c(0.2, 0.1, 0.1, 0.1, 0.2, 0.3, 1, 1))) # draw unique preference weights for each respondent.
# Simulate Conjoint Data
conjoint.df <- NULL
for(i in seq_along(resp.id)) {
utility <- profiles.model %*% weights[i, ] + rnorm(nques) # add noise in by rnorm()
rating <- as.numeric(cut(utility, 10)) # put on a 10-point scale
conjoint.resp <- cbind(resp.id = rep(i, nques), rating, profiles.df)
conjoint.df <- rbind(conjoint.df,conjoint.resp)
}
# Regular Linear Modelling
summary(conjoint.df)
by(conjoint.df$rating, conjoint.df$height, mean)
ride.lm <- lm(rating ~ speed + height + const + theme, data = conjoint.df) # Fixed effects that are estimated at the sample level.
summary(ride.lm)
# The highest rated roller coaster on average would have a top speed of 70 mph, a height of 300 ft, steel construction, and the dragon theme
# BUT, The coefficients are estimated on the basis of designs that mostly combine both desirable and undesirable attributes, and are not as reliable at the extremes of preference.
# Additionally, it could happen that few people prefer that exact combination even though the individual features are each best on average.
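# Illustrative follow-up (best.design is a hypothetical name): score the
# best-on-average combination with the sample-level model
best.design <- data.frame(speed = "70", height = "300", const = "Steel", theme = "Dragon")
predict(ride.lm, newdata = best.design)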
# [Intercept]Hierarchical Linear Model - estimate both the overall average preference level and individual preferences within the group.
install.packages("lme4")
install.packages("Matrix")
library(Matrix)
library(lme4)
ride.hlm1 <- lmer(rating ~ speed + height + const + theme + (1 | resp.id), data = conjoint.df) # allow indiciduals to vary only in terms of the constant intercept
# For the intercept, that is signified as simply “1”; the grouping variable, for which a random effect will be estimated for each unique group
# syntax: predictors | group, specify the random effect and grouping variable with syntax using a vertical bar "|"
# In present case, it is interesting to know the randome effetc("1") of intercept on each individaul level.
summary(ride.hlm1)
fixef(ride.hlm1) # Extract fixed effects at the population level.
ranef(ride.hlm1)$resp.id # Extract random effect estimates for intercept
coef(ride.hlm1)$resp.id # The complete effect for each respondent comprises the overall fixed effects that apply to everyone + the individually varying random effects
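# Sanity check (illustrative): for respondent 1, the complete intercept should equal
# the population fixed effect plus that respondent's random intercept
all.equal(unname(fixef(ride.hlm1)["(Intercept)"]) + ranef(ride.hlm1)$resp.id[1, "(Intercept)"],
          coef(ride.hlm1)$resp.id[1, "(Intercept)"])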
# [Complete]Hierarchical Linear Model - estimate a random effect parameter for every coefficient of interest for every respondent.
ride.hlm2 <- lmer(rating ~ speed + height + const + theme + (speed + height + const + theme | resp.id), data = conjoint.df, control = lmerControl(optCtrl = list(maxfun = 100000))) # Allow variance in every coefficient of interest
# the control argument increases the maxfun number of iterations to attempt convergence from 10,000 iterations (the default) to 100,000. This allows the model to converge better
summary(ride.hlm2)
fixef(ride.hlm2)
ranef(ride.hlm2)
coef(ride.hlm2)$resp.id
firstcustomercoef_hlm <- coef(ride.hlm2)$resp.id[1, ] #fitting regression line for each respondent WITHOUT throwing away information
firstcustomer <- subset(conjoint.df, resp.id == 1)
firstcustomer_lm <- lm(rating ~ speed + height + const + theme, data = firstcustomer)
coef(firstcustomer_lm) #fitting regression line for each respondent WITH throwing away information
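# Illustrative comparison: the HLM estimate for respondent 1 borrows strength from the
# whole sample and is shrunk toward the population mean, while the separate lm fit
# relies on this respondent's 16 observations alone
firstcustomercoef_hlm
coef(firstcustomer_lm)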
# Visualize Preference
ride.constWood <- ((coef(ride.hlm2)$resp.id)[, "constWood"])
ride.constWood <- as.data.frame(ride.constWood)
colnames(ride.constWood) <- c("Wood Preference")
install.packages("ggplot2")
library(ggplot2)
ggplot(data = ride.constWood, aes(x = `Wood Preference`)) +
  geom_histogram(aes(fill = ..count..), alpha = 0.3) + labs(title = "Preference for Wood vs. Steel", x = "Rating Points", y = "Counts of Respondents") +
  scale_fill_gradient("Count", low = "pink", high = "red")
# Longitudinal Data/Clustered Data Violates the Assumption of Simple Linear Regression
# 1. Obervations are no longer independent. Errors will be correlated within clusters
# 2. Between-group homogeneity of variance assumption was violated. The error variance may be different within different clusters
# 3. Effects of explanatory variables differ in distanct contexts(clusters): different regression lines fitting different clusters
# Example: Voters nested within countries/Workers nested within firms/Cluster sampling/Time points nested within individuals
# Approaches to cluster/nested data:
# 1. Aggregate everything: take means of the variables and fit the regression line with means, but this can lead to the ecological fallacy: the macro (aggregated) trend might be the opposite of the micro (individual) trends.
# 2. Treat macro variables as micro variables, but it violates the assumption of independence.
# 3. Run the models separately for each group, we are throwing away information and groups with small sample size will be imprecisely estimated.
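# Illustrative sketch of alternative 3 versus the hierarchical approach used above
# (variable names are hypothetical; lme4 is already attached):
separate.fits <- lapply(split(conjoint.df, conjoint.df$resp.id),
                        function(d) lm(rating ~ speed, data = d)) # one noisy model per respondent (16 rows each)
partial.pool <- lmer(rating ~ speed + (1 | resp.id), data = conjoint.df) # partial pooling shares information across respondents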
| /GGplot2.R | no_license | Crystal0108W/R | R | false | false | 6,043 | r |
library(ape)
testtree <- read.tree("3346_1.txt")
unrooted_tr <- unroot(testtree)
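# Illustrative check (not run): unrooting removes the root but keeps every tip, e.g.
# is.rooted(unrooted_tr)                             # expected FALSE
# setdiff(testtree$tip.label, unrooted_tr$tip.label) # expected character(0)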
write.tree(unrooted_tr, file="3346_1_unrooted.txt") | /codeml_files/newick_trees_processed/3346_1/rinput.R | no_license | DaniBoo/cyanobacteria_project | R | false | false | 135 | r |
#' module_aglu_LA103.ag_R_C_Y_GLU
#'
#' Calculate production, harvested area, and yield by region, crop, GLU, and year.
#'
#' @param command API command to execute
#' @param ... other optional parameters, depending on command
#' @return Depends on \code{command}: either a vector of required inputs,
#' a vector of output names, or (if \code{command} is "MAKE") all
#' the generated outputs: \code{L103.ag_Prod_Mt_R_C_Y_GLU}, \code{L103.ag_Prod_Mt_R_C_Y}, \code{L103.ag_HA_bm2_R_C_Y_GLU}, \code{L103.ag_Yield_kgm2_R_C_Y_GLU}. The corresponding file in the
#' original data system was \code{LA103.ag_R_C_Y_GLU.R} (aglu level1).
#' @details We only have production and harvested area by region and GLU for a single
#' representative base year (circa 2000), and are using that to downscale regional
#' production and harvested area in all years. So, if GLU223 accounted for 20% of U.S.
#' corn production in ~2000, then it accounted for 20% of US corn production in all years.
#' @importFrom assertthat assert_that
#' @importFrom dplyr arrange filter group_by left_join mutate right_join select summarise ungroup
#' @importFrom tidyr complete replace_na
#' @author BBL April 2017
module_aglu_LA103.ag_R_C_Y_GLU <- function(command, ...) {
if(command == driver.DECLARE_INPUTS) {
return(c(FILE = "common/iso_GCAM_regID",
"L101.ag_Prod_Mt_R_C_Y",
"L101.ag_HA_bm2_R_C_Y",
"L102.ag_Prod_Mt_R_C_GLU",
"L102.ag_HA_bm2_R_C_GLU"))
} else if(command == driver.DECLARE_OUTPUTS) {
return(c("L103.ag_Prod_Mt_R_C_Y_GLU",
"L103.ag_Prod_Mt_R_C_Y",
"L103.ag_HA_bm2_R_C_Y_GLU",
"L103.ag_Yield_kgm2_R_C_Y_GLU"))
} else if(command == driver.MAKE) {
year <- GCAM_region_ID <- GCAM_commodity <- value <- value.y <- value.x <-
GLU <- valuesum <- NULL # silence package check.
all_data <- list(...)[[1]]
# Load required inputs
iso_GCAM_regID <- get_data(all_data, "common/iso_GCAM_regID")
get_data(all_data, "L101.ag_Prod_Mt_R_C_Y") %>%
mutate(year = as.integer(year)) ->
L101.ag_Prod_Mt_R_C_Y
get_data(all_data, "L101.ag_HA_bm2_R_C_Y") %>%
mutate(year = as.integer(year)) ->
L101.ag_HA_bm2_R_C_Y
L102.ag_Prod_Mt_R_C_GLU <- get_data(all_data, "L102.ag_Prod_Mt_R_C_GLU")
L102.ag_HA_bm2_R_C_GLU <- get_data(all_data, "L102.ag_HA_bm2_R_C_GLU")
# Combine FAO and GTAP: create tables with crop production and harvested area by
# geographic land unit (GLU) and historical year production (Mt)
L102.ag_Prod_Mt_R_C_GLU %>%
group_by(GCAM_region_ID, GCAM_commodity) %>%
summarise(value = sum(value)) %>%
right_join(L102.ag_Prod_Mt_R_C_GLU, by = c("GCAM_region_ID", "GCAM_commodity")) %>%
mutate(value = value.y / value.x) %>%
replace_na(list(value = 0)) %>%
select(-value.x, -value.y) ->
L103.ag_Prod_frac_R_C_GLU
# Disaggregate FAO harvested area of all crops to GLUs using GTAP/LDS data
L102.ag_HA_bm2_R_C_GLU %>%
group_by(GCAM_region_ID, GCAM_commodity) %>%
summarise(value = sum(value)) %>%
right_join(L102.ag_HA_bm2_R_C_GLU, by = c("GCAM_region_ID", "GCAM_commodity")) %>%
mutate(value = value.y / value.x) %>%
replace_na(list(value = 0)) %>%
select(-value.x, -value.y) ->
L103.ag_HA_frac_R_C_GLU
# Multiply historical production by these shares in order to downscale to GLU
# NOTE: There are a few region x crop combinations in the FAO data that aren't in the aggregated gridded dataset,
# presumably because production was zero around 2000, the base year for the gridded dataset. In analysis of
# these missing values, the quantities are tiny (zero in most years, <0.01 Mt in all/most others) and dropping
# them should not have any consequences
L103.ag_Prod_frac_R_C_GLU %>%
left_join(L101.ag_Prod_Mt_R_C_Y, by = c("GCAM_region_ID", "GCAM_commodity")) %>%
mutate(value = value.x * value.y) %>%
filter(year %in% HISTORICAL_YEARS) %>%
select(-value.x, -value.y) %>%
arrange(GLU) -> # so we match old d.s. order
L103.ag_Prod_Mt_R_C_Y_GLU
# Remove crops from the written-out data that are zero in all years
# This is part of the "pruning" process of not creating XML tags for land use types that are non-applicable
remove_all_zeros <- function(x) {
x %>%
group_by(GCAM_region_ID, GCAM_commodity) %>%
summarise(valuesum = sum(value)) %>%
right_join(x, by = c("GCAM_region_ID", "GCAM_commodity")) %>%
filter(valuesum > 0) %>%
select(-valuesum)
}
L103.ag_Prod_Mt_R_C_Y_GLU <- remove_all_zeros(L103.ag_Prod_Mt_R_C_Y_GLU)
# Same operation again
L103.ag_HA_frac_R_C_GLU %>%
left_join(L101.ag_HA_bm2_R_C_Y, by = c("GCAM_region_ID", "GCAM_commodity")) %>%
mutate(value = value.x * value.y) %>%
filter(year %in% HISTORICAL_YEARS) %>%
select(-value.x, -value.y) %>%
arrange(GLU) -> # so we match old d.s. order
L103.ag_HA_bm2_R_C_Y_GLU
L103.ag_HA_bm2_R_C_Y_GLU <- remove_all_zeros(L103.ag_HA_bm2_R_C_Y_GLU)
# Calculate initial yield estimates in kilograms per square meter by region, crop, year, and GLU
# Yield in kilograms per square meter
L103.ag_Prod_Mt_R_C_Y_GLU %>%
left_join(L103.ag_HA_bm2_R_C_Y_GLU, by = c("GCAM_region_ID", "GCAM_commodity", "GLU", "year")) %>%
mutate(value = value.x / value.y) %>%
replace_na(list(value = 0)) %>%
select(-value.x, -value.y) %>%
arrange(GLU) -> # so we match old d.s. order
L103.ag_Yield_kgm2_R_C_Y_GLU
# Aggregate through GLUs to get production by region/crop/year; different from L101 production in that we have now dropped
# some observations that weren't available in the GTAP-based gridded inventories
L103.ag_Prod_Mt_R_C_Y_GLU %>%
group_by(GCAM_region_ID, GCAM_commodity, year) %>%
summarise(value = sum(value)) %>%
      ungroup() %>%
complete(GCAM_region_ID = unique(iso_GCAM_regID$GCAM_region_ID), GCAM_commodity, year, fill = list(value = 0)) %>%
arrange(GCAM_commodity, GCAM_region_ID) -> # so we match old d.s. order
L103.ag_Prod_Mt_R_C_Y
# Produce outputs
L103.ag_Prod_Mt_R_C_Y_GLU %>%
      ungroup() %>%
add_title("Crop production by GCAM region / commodity / year / GLU") %>%
add_units("Mt") %>%
add_comments("Crop production computed based on representative base year, using") %>%
add_comments("GLU-specific production and harvested area in all years") %>%
add_legacy_name("L103.ag_Prod_Mt_R_C_Y_GLU") %>%
add_precursors("L101.ag_Prod_Mt_R_C_Y",
"L102.ag_Prod_Mt_R_C_GLU") ->
L103.ag_Prod_Mt_R_C_Y_GLU
L103.ag_Prod_Mt_R_C_Y %>%
add_title("Crop production by GCAM region / commodity / year") %>%
add_units("Mt") %>%
add_comments("Crop production computed based on representative base year") %>%
add_legacy_name("L103.ag_Prod_Mt_R_C_Y") %>%
same_precursors_as(L103.ag_Prod_Mt_R_C_Y_GLU) %>%
add_precursors("common/iso_GCAM_regID") ->
L103.ag_Prod_Mt_R_C_Y
L103.ag_HA_bm2_R_C_Y_GLU %>%
      ungroup() %>%
add_title("Harvested area by GCAM region / commodity / year / GLU") %>%
add_units("bm2") %>%
add_comments("Agricultural harvested area computed based on representative base year, and") %>%
add_comments("downscaled to regional production and harvested area in all years by GLU") %>%
add_legacy_name("L103.ag_HA_bm2_R_C_Y_GLU") %>%
add_precursors("L101.ag_HA_bm2_R_C_Y",
"L102.ag_HA_bm2_R_C_GLU") ->
L103.ag_HA_bm2_R_C_Y_GLU
L103.ag_Yield_kgm2_R_C_Y_GLU %>%
ungroup() %>%
add_title("Unadjusted agronomic yield by GCAM region / commodity / year / GLU") %>%
add_units("kg/m2") %>%
      add_comments("Agricultural yield computed based on production and harvested area, and") %>%
add_comments("downscaled to regional values for all years") %>%
add_legacy_name("L103.ag_Yield_kgm2_R_C_Y_GLU") %>%
same_precursors_as(L103.ag_HA_bm2_R_C_Y_GLU) %>%
add_precursors("L101.ag_Prod_Mt_R_C_Y",
"L102.ag_Prod_Mt_R_C_GLU") ->
L103.ag_Yield_kgm2_R_C_Y_GLU
return_data(L103.ag_Prod_Mt_R_C_Y_GLU, L103.ag_Prod_Mt_R_C_Y, L103.ag_HA_bm2_R_C_Y_GLU, L103.ag_Yield_kgm2_R_C_Y_GLU)
} else {
stop("Unknown command")
}
}
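Standalone, the share-based downscaling this chunk performs can be sketched with toy inputs (a simplified illustration, not part of the package; `glu_base` and `annual` are made-up data, and the shares are computed with `mutate()` rather than the `summarise()`/`right_join()` pattern used above):

```r
library(dplyr)

# Base-year production by region/crop/GLU (toy numbers)
glu_base <- tibble::tribble(
  ~region, ~crop,  ~GLU,     ~value,
  "USA",   "corn", "GLU223", 20,
  "USA",   "corn", "GLU224", 80
)

# Annual regional totals to be downscaled
annual <- tibble::tribble(
  ~region, ~crop,  ~year, ~value,
  "USA",   "corn", 1990,  200,
  "USA",   "corn", 2000,  300
)

# Each GLU's share of its region/crop total in the base year...
shares <- glu_base %>%
  group_by(region, crop) %>%
  mutate(share = value / sum(value)) %>%
  ungroup() %>%
  select(-value)

# ...applied uniformly to every historical year
downscaled <- shares %>%
  left_join(annual, by = c("region", "crop")) %>%
  mutate(value = share * value)
```

Here GLU223 holds 20% of base-year production, so it receives 20% of the regional total in every year (40 of 200 in 1990, 60 of 300 in 2000), mirroring the GLU223 example in the roxygen details above.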
| /R/zchunk_LA103.ag_R_C_Y_GLU.R | permissive | qingyihou/gcamdata | R | false | false | 8,387 | r |
ggum <- function(x, K=NULL, method="LA", Iters=100, Smpl=1000,
Thin=1, a.s=0.234, temp=1e-2, tmax=NULL,
algo="GA", seed=666, Interval=1e-8){
### Start====
set.seed(seed)
  library(LaplacesDemon)
  library(compiler)
  library(parallel)
  library(tidyr)
CPUs = detectCores(all.tests = FALSE, logical = TRUE) - 1
if(CPUs == 0) CPUs = 1
### Convert data to long format====
lonlong <- gather(data.frame(x), "item", "resp", colnames(x), factor_key=TRUE)
data_long <- data.frame(ID=rep(1:nrow(x), times=ncol(x)),lonlong)
### Assemble data list====
if(is.null(K)) K <- length(table(unlist(c(as.data.frame(x)))))
if (method == "MAP") {
mon.names <- "LL"
} else { mon.names <- "LP" }
parm.names <- as.parm.names(list( theta=rep(0,nrow(x)),
b=rep(0,ncol(x) * {K+1}),
Ds=rep(0,ncol(x)) ))
pos.theta <- grep("theta", parm.names)
pos.b <- grep("b", parm.names)
pos.Ds <- grep("Ds", parm.names)
PGF <- function(Data) {
theta <- rnorm(Data$n)
b <- rnorm(Data$v * {Data$levels+1})
Ds <- rlnorm(Data$v)
return(c(theta, b, Ds))
}
MyData <- list(parm.names=parm.names, mon.names=mon.names,
PGF=PGF, X=data_long, n=nrow(x), v=ncol(x), levels=K,
pos.theta=pos.theta, pos.b=pos.b, pos.Ds=pos.Ds)
is.data(MyData)
### Model====
Model <- function(parm, Data){
## Prior parameters
theta <- parm[Data$pos.theta]
b <- parm[Data$pos.b]
Ds <- interval( parm[Data$pos.Ds], 1e-100, Inf )
parm[Data$pos.Ds] <- Ds
### Log-Priors
theta.prior <- sum(dnorm(theta, mean=0, sd=1, log=T))
b.prior <- sum(dnorm(b , mean=0, sd=1, log=T))
Ds.prior <- sum(dlnorm(Ds , meanlog=0, sdlog=1, log=T))
Lpp <- theta.prior + b.prior + Ds.prior
### Log-Likelihood
thetaLL <- rep(theta, times=Data$v)
bLL <- matrix(rep(b , each=Data$n),
nrow=nrow(Data$X), ncol=Data$levels+1)
DLL <- matrix(rep(Ds , each=Data$n),
nrow=nrow(Data$X), ncol=Data$levels)
# Summation of tau
delta <- bLL[,1]
tauk <- matrixStats::rowCumsums(bLL[,-1])
# Diffs
diff <- sweep(matrix(thetaLL, nrow=nrow(Data$X), ncol=Data$levels) -
matrix(delta, nrow=nrow(Data$X), ncol=Data$levels),
2, c(1:Data$levels), "*")
P1 <- exp(DLL * {diff - tauk})
# Weight diffs
K <- matrix(Data$levels, nrow=nrow(Data$X), ncol=Data$levels)
k <- matrix(rep(c(1:Data$levels), each=nrow(Data$X)),
nrow=nrow(Data$X), ncol=Data$levels)
P2 <- exp(DLL * {{{{2*K} - k - 1} * {diff}} - tauk})
IRF <- {P1 + P2} / matrix(rowSums(P1 + P2), nrow=nrow(Data$X), ncol=Data$levels)
IRF[which(IRF == 1)] <- 1 - 1e-7
LL <- sum( dcat(Data$X[,3], p=IRF, log=T) )
### Log-Posterior
LP <- LL + Lpp
### Estimates
yhat <- tryCatch(qcat(rep(.5, nrow(IRF)), p=IRF),
error=function(e) {
qbinom(rep(.5, nrow(IRF)), Data$levels-1,
rowMeans(IRF)) + min(Data$X[,3])
})
### Output
Modelout <- list(LP=LP, Dev=-2*LL, Monitor=LP, yhat=yhat, parm=parm)
return(Modelout)
}
Model <- compiler::cmpfun(Model)
Initial.Values <- GIV(Model, MyData, PGF=T)
is.model(Model, Initial.Values, MyData)
is.bayesian(Model, Initial.Values, MyData)
### Run!====
if (method=="VB") {
Iters=Iters; Smpl=Smpl
Fit <- VariationalBayes(Model=Model, parm=Initial.Values, Data=MyData,
Covar=NULL, Interval=1e-6, Iterations=Iters,
Method="Salimans2", Samples=Smpl, sir=TRUE,
Stop.Tolerance=1e-5, CPUs=CPUs, Type="PSOCK")
} else if (method=="LA") {
Iters=Iters; Smpl=Smpl
Fit <- LaplaceApproximation(Model, parm=Initial.Values, Data=MyData,
Interval=1e-6, Iterations=Iters,
Method="SPG", Samples=Smpl, sir=TRUE,
CovEst="Identity", Stop.Tolerance=1e-5,
CPUs=CPUs, Type="PSOCK")
} else if (method=="MCMC") {
## Hit-And-Run Metropolis
Iters=Iters; Status=Iters/10; Thin=Thin; A=a.s
Fit <- LaplacesDemon(Model=Model, Data=MyData,
Initial.Values=Initial.Values,
Covar=NULL, Iterations=Iters,
Status=Status, Thinning=Thin,
Algorithm="HARM",
Specs=list(alpha.star=A, B=NULL))
} else if (method=="PMC") {
Iters=Iters; Smpl=Smpl; Thin=Thin
Fit <- PMC(Model=Model, Data=MyData, Initial.Values=Initial.Values,
Covar=NULL, Iterations=Iters, Thinning=Thin, alpha=NULL,
M=2, N=Smpl, nu=1e3, CPUs=CPUs, Type="PSOCK")
} else if (method=="IQ") {
Iters=Iters; Smpl=Smpl
Fit <- IterativeQuadrature(Model=Model, parm=Initial.Values,
Data=MyData, Covar=NULL,
Iterations=Iters, Algorithm="CAGH",
Specs=list(N=3, Nmax=10, Packages=NULL,
Dyn.libs=NULL),
Samples=Smpl, sir=T,
Stop.Tolerance=c(1e-5,1e-15),
Type="PSOCK", CPUs=CPUs)
} else if (method=="MAP") {
## Maximum a Posteriori====
#Iters=100; Smpl=1000
Iters=Iters; Status=Iters/10
Fit <- MAP(Model=Model, parm=Initial.Values, Data=MyData, algo=algo, seed=seed,
maxit=Iters, temp=temp, tmax=tmax, REPORT=Status, Interval=Interval)
} else {stop('Unknown optimization method.')}
### Results====
if (method=="MAP") {
abil = Fit[["Model"]]$parm[pos.theta]
diff = Fit[["Model"]]$parm[pos.b][c(1:MyData$v)]
tau = matrix(Fit[["Model"]]$parm[pos.b][-c(1:MyData$v)], nrow=ncol(x))
rownames(tau) = colnames(x)
colnames(tau) = paste("Answer_Key",1:K,sep="_")
disc = Fit[["Model"]]$parm[pos.Ds]
FI = Fit$FI
Results <- list("Data"=MyData,"Fit"=Fit,"Model"=Model,'abil'=abil,
'diff'=diff,"tau"=tau,"disc"=disc,'FitIndexes'=FI)
} else {
if (method=="PMC") {
abil = Fit$Summary[grep("theta", rownames(Fit$Summary), fixed=TRUE),1]
diff = Fit$Summary[grep("b", rownames(Fit$Summary), fixed=TRUE),1][c(1:MyData$v)]
tau = matrix(Fit$Summary[grep("b", rownames(Fit$Summary),
fixed=TRUE),1][-c(1:MyData$v)],
nrow=ncol(x))
disc = Fit$Summary[grep("Ds", rownames(Fit$Summary), fixed=TRUE),1]
} else {
abil = Fit$Summary1[grep("theta", rownames(Fit$Summary1), fixed=TRUE),1]
diff = Fit$Summary1[grep("b", rownames(Fit$Summary1), fixed=TRUE),1][c(1:MyData$v)]
tau = matrix(Fit$Summary1[grep("b", rownames(Fit$Summary1),
fixed=TRUE),1][-c(1:MyData$v)],
nrow=ncol(x))
disc = Fit$Summary1[grep("Ds", rownames(Fit$Summary1), fixed=TRUE),1]
}
rownames(tau) = colnames(x)
colnames(tau) = paste("Answer_Key",1:K,sep="_")
Dev <- Fit$Deviance
mDD <- Dev - min(Dev)
pDD <- Dev[min(which(mDD < 100)):length(Dev)]
pV <- var(pDD)/2
Dbar <- mean(pDD)
#Dbar = mean(Dev)
#pV <- var(Dev)/2
DIC = list(DIC=Dbar + pV, Dbar=Dbar, pV=pV)
Results <- list("Data"=MyData,"Fit"=Fit,"Model"=Model,'abil'=abil,
'tau'=tau,'diff'=diff,"disc"=disc,'DIC'=DIC)
}
return(Results)
}
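The category probabilities inside `Model` can be sketched for a single respondent and item (toy parameter values; this mirrors the `P1`/`P2`/`IRF` lines above, including the `k`-weighted distance term, rather than restating the textbook GGUM form):

```r
# GGUM-style category probabilities for one person/item,
# mirroring the P1/P2/IRF computation inside Model() (toy parameters).
ggum_irf <- function(theta, delta, tau, D) {
  K    <- length(tau)          # number of response categories
  k    <- seq_len(K)
  tauk <- cumsum(tau)          # cumulative thresholds, as rowCumsums() above
  diff <- k * (theta - delta)  # k-weighted distance, as the sweep() above
  P1   <- exp(D * (diff - tauk))
  P2   <- exp(D * ((2 * K - k - 1) * diff - tauk))
  (P1 + P2) / sum(P1 + P2)     # normalized category probabilities
}

p <- ggum_irf(theta = 0.5, delta = -0.2, tau = c(-1, 0, 1), D = 1.2)
```

By construction the `K` probabilities are positive and sum to one, which is what `dcat()` expects row-wise from `IRF` in the log-likelihood.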
| /R/ggum.R | no_license | vthorrf/birm | R | false | false | 7,726 | r |
# description -------------------------------------------------------------
# scrapes links to laws
# scrapes info on developments pertaining to each law => basis for 2018_law_analysis
# scrapes from each law's development page the links to votes
# scrapes the pdfs behind each vote link => basis for tesseract_extract_individual_votes.R
# setup -------------------------------------------------------------------
library(rvest)
library(tidyverse)
library(glue)
wdr <- getwd()
# scrape links for all laws -----------------------------------------------
page_seq <- 1:17
page_links <- glue("http://parlament.ba/oLaw/GetOLawsByStatus?page={page_seq}&MandateId=4&Status=-1") %>%
enframe(name=NULL)
pb <- progress_estimated(nrow(page_links))
fn_scrap <- function(x){
pb$tick()$print()
x %>%
read_html() %>%
html_nodes("a") %>%
html_attr(.,"href") %>%
enframe(name=NULL) %>%
filter(str_detect(value, "OLawDetails")) %>%
mutate(link_to_law=paste0("http://parlament.ba",value))
}
df_links_to_laws <- page_links$value %>%
set_names() %>%
map_dfr(., possibly(fn_scrap, otherwise=NULL), .id="page_id")
write_csv(df_links_to_laws, paste0(wdr, "/data/2014_2018_links_to_laws.csv"))
df_links_to_laws <- readr::read_csv(paste0(wdr, "/data/2014_2018_links_to_laws.csv"))
nrow(df_links_to_laws) #163 laws
# law timeline ------------------------------------------------------------
df_links_to_laws <- readr::read_csv(paste0(wdr, "/data/2014_2018/2014_2018_links_to_laws.csv"))
pb <- dplyr::progress_estimated(nrow(df_links_to_laws))
fn_case.details <- function(x) {
pb$tick()$print()
x %>%
read_html() %>%
html_nodes("table") %>%
html_table(., trim = T, fill = T) %>%
map(., ~ map(., as_tibble)) %>%
map(., bind_cols) %>%
bind_rows()
}
df_law_details <- df_links_to_laws$link_to_law %>%
set_names() %>%
map_dfr(., possibly(fn_case.details, otherwise = NULL), .id = "link_to_law") %>%
mutate_at(vars(value, value1), iconv, from="UTF-8", to="windows-1253")
write_csv2(df_law_details, paste0(wdr,"/data/2014_2018/2014_2018_law_details.csv"))
#here analysis on aggregate of outcomes
#origin of law
# get links to voting pdfs -------------------------------------------------------
df_links_to_laws <-readr::read_csv(paste0(wdr, "/data/2014_2018_links_to_laws.csv"))
fn_scrap_listing <- function(x) {
pb$tick()$print()
x %>%
read_html() %>%
#html_nodes("li") %>%
html_nodes(xpath="//a[contains(text(), 'Listing')]") %>% #filters links based on text/name of links
html_attr('href') %>% #extracts links
enframe(name=NULL) %>%
mutate(link_to_vote=paste0("http://parlament.ba", value))
}
pb <- dplyr::progress_estimated(nrow(df_links_to_laws))
df_links_to_votes <- df_links_to_laws$link_to_law %>%
set_names() %>%
map_dfr(., possibly(fn_scrap_listing, otherwise=NULL), .id="link_to_law")
#write_csv(df_links_to_votes, path=paste0(wdr, "/data/2014_2018_links_to_votes.csv"))
# download voting records -------------------------------------------------
df_links_to_votes <- readr::read_csv(paste0(wdr, "/data/2014_2018_links_to_votes.csv")) %>%
mutate(law_id=stringr::str_extract(link_to_law, "[:digit:]+")) %>%
mutate(vote_id=stringr::str_extract(link_to_vote, "[:digit:]+"))
length(unique(df_links_to_votes$link_to_law)) #only 151 laws; means 12 laws without votes?
length(unique(df_links_to_votes$link_to_vote)) #793 votes
df_links_to_votes %>%
group_by(link_to_law) %>%
summarise(n_votes_per_law=n()) %>%
mutate(law_id=stringr::str_extract(link_to_law, "[:digit:]+")) %>%
group_by(n_votes_per_law) %>%
summarise(n_laws_in_group=n()) %>%
ggplot()+
geom_bar(aes(x=reorder(n_votes_per_law, -n_laws_in_group), y=n_laws_in_group),
stat="identity")
download_destination <- glue("{wdr}/data/voting_records/law_id_{df_links_to_votes$law_id}_vote_id_{df_links_to_votes$vote_id}.pdf")
walk2(df_links_to_votes$link_to_vote,
      download_destination,
      download.file, mode = "wb")
| /BiHLaws/script/2018_exctract_laws_and_votes.R | no_license | rs2903/BiH | R | false | false | 4,100 | r |
length(unique(df_links_to_votes$link_to_law)) #only 151 laws; means 12 laws without votes?
length(unique(df_links_to_votes$link_to_vote)) #793 votes
df_links_to_votes %>%
group_by(link_to_law) %>%
summarise(n_votes_per_law=n()) %>%
mutate(law_id=stringr::str_extract(link_to_law, "[:digit:]+")) %>%
group_by(n_votes_per_law) %>%
summarise(n_laws_in_group=n()) %>%
ggplot()+
geom_bar(aes(x=reorder(n_votes_per_law, -n_laws_in_group), y=n_laws_in_group),
stat="identity")
download_destination <- glue("{wdr}/data/voting_records/law_id_{df_links_to_votes$law_id}_vote_id_{df_links_to_votes$vote_id}.pdf")
walk2(df_links_to_votes$link_to_vote,
      download_destination,
      download.file, mode = "wb")
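The bulk download above aborts on the first failed request. A hedged sketch of a more robust loop, mirroring the `possibly()` pattern the script already uses for scraping (the names `safe_download` and `dest_files` are illustrative, not from the original):

```r
# wrap download.file so one broken link does not stop the whole batch;
# a failed download returns NULL instead of raising an error
safe_download <- purrr::possibly(function(url, dest) {
  download.file(url, dest, mode = "wb")
}, otherwise = NULL)

dest_files <- glue("{wdr}/data/voting_records/law_id_{df_links_to_votes$law_id}_vote_id_{df_links_to_votes$vote_id}.pdf")
walk2(df_links_to_votes$link_to_vote, dest_files, safe_download)
```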
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/testData.R
\docType{data}
\name{testCounts}
\alias{testCounts}
\title{Counts data used to demonstrate sp.scRNAseq package.}
\format{matrix
\describe{
\item{rownames}{Gene names}
\item{colnames}{Samples. Singlets prefixed with "s" and multiplets "m".}
}}
\usage{
testCounts
}
\value{
Matrix of counts.
}
\description{
Test counts data.
}
\examples{
data(testCounts)
}
| /man/testCounts.Rd | no_license | EngeLab/kNNclassification | R | false | true | 454 | rd |
|
library(ggplot2)
require(devEMF)
emf('180ms.emf')
#png("180ms.emf")
#postscript("plot.eps")
#emf("plot.emf")
# header = TRUE ignores the first line, check.names = FALSE allows '+' in 'C++'
benchmark <- read.table("180ms.dat", header = TRUE, row.names = "vwnd", check.names = FALSE)
# 't()' is matrix transposition, 'beside = TRUE' separates the benchmarks, 'heat' provides nice colors
#barplot(t(as.matrix(benchmark)), beside = TRUE, col = heat.colors(6))
barplot(t(as.matrix(benchmark)), beside = TRUE, col = heat.colors(6), xlab = "VM WINDOW", ylab = "MIGRATION TIME (SEC)")
# 'cex' stands for 'character expansion', 'bty' for 'box type' (we don't want borders)
legend("topright", names(benchmark), cex = 0.9, bty = "n", fill = heat.colors(6))
dev.off() # close the EMF device so the plot file is actually written
| /R/180ms.R | no_license | david78k/migration | R | false | false | 751 | r |
|
# tests retrieval of objective function
# getObjective
test.getObjective <- function()
{
unitTestPath <- get("TestPath", envir = .RNMImportTestEnv)
setNmPath("internalunit", file.path(unitTestPath, "testdata/TestRun") )
run1 <- importNm( "TestData1.ctl", path = file.path(unitTestPath, "testdata/TestRun" ))
# NMBasicModel
checkEquals(getObjective(getProblem(run1), addMinInfo = FALSE), 3228.988 )
obj <- getObjective(run1, addMinInfo = TRUE)
target <- 3228.988
attr(target, "minInfo") <- c("minResult" = "SUCCESSFUL", "numEval" = "143",
"numSigDigits" = "3.5")
checkEquals(obj, target)
# NMSimModel
run2 <- importNm( "TestData1SIM.con", path = file.path(unitTestPath, "testdata/TestSimRun" ))
prob2 <- getProblem(run2)
checkEquals(getObjective(run2, subProblems = 1:5), getObjective(prob2, subProblems = 1:5))
checkEquals(getObjective(prob2, subProblems = c(2, 4)),
structure(c(3575.252, 3606.526), .Names = c("sim2","sim4")),
msg = " |objective functions for simulation problem are correct")
# NMBasicModelNM7
run3 <- importNm( "TestData1.ctl", path = file.path(unitTestPath, "testdata/TestDataNM7" ))
prob3 <- getProblem(run3)
checkEquals(getObjective(run3, method = 1:2), getObjective(prob3, method = 1:2), msg = " |getobjective the same on run and NMBasicModelNM7" )
objTarget <- c(3335.250, 2339.093)
attr(objTarget, "minInfo") <- c("OPTIMIZATION COMPLETED", "STOCHASTIC PORTION COMPLETED")
checkEquals(getObjective(run3, method = 1:2), objTarget)
checkEquals(getObjective(prob3, method = 1, addMinInfo = FALSE), 3335.250 )
checkEquals(getObjective(prob3, method = 2, addMinInfo = FALSE), 2339.093 )
removeNmPath("internalunit")
}
# test get estimate cov
test.getEstimateCov <- function()
{
unitTestPath <- get("TestPath", envir = .RNMImportTestEnv)
run1 <- importNm( "TestData1notab.ctl", path = file.path(unitTestPath, "testdata/TestRunNoTab" ))
prob1 <- getProblem(run1)
checkEquals(getEstimateCov(run1, corMatrix = TRUE) ,getEstimateCov(run1, corMatrix = TRUE))
# expected covariance matrices
expCovMat <- structure(c(0.927, 2.16, 0.0347, 0.00617, 0, 0, 0.00252, 0, 0.0471,
0.00176, 2.16, 12.7, 0.18, 0.00892, 0, 0, 0.00903, 0, 0.39, 0.000474,
0.0347, 0.18, 0.0474, -0.000661, 0, 0, -0.000648, 0, 0.0657,
0.000168, 0.00617, 0.00892, -0.000661, 0.000571, 0, 0, 0.000373,
0, -0.000778, 1.59e-05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.00252, 0.00903, -0.000648, 0.000373, 0,
0, 0.000486, 0, -0.000812, 6.95e-06, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0.0471, 0.39, 0.0657, -0.000778, 0, 0, -0.000812, 0, 0.119,
0.000239, 0.00176, 0.000474, 0.000168, 1.59e-05, 0, 0, 6.95e-06,
0, 0.000239, 1.04e-05), .Dim = c(10L, 10L), .Dimnames = list(
c("TH1", "TH2", "TH3", "OM11", "OM12", "OM13", "OM22", "OM23",
"OM33", "SG11"), c("TH1", "TH2", "TH3", "OM11", "OM12", "OM13",
"OM22", "OM23", "OM33", "SG11")))
expCorMat <- structure(c(1, 0.629, 0.166, 0.268, 0, 0, 0.119, 0, 0.142, 0.567,
0.629, 1, 0.232, 0.105, 0, 0, 0.115, 0, 0.317, 0.0413, 0.166,
0.232, 1, -0.127, 0, 0, -0.135, 0, 0.874, 0.24, 0.268, 0.105,
-0.127, 1, 0, 0, 0.708, 0, -0.0943, 0.206, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.119, 0.115, -0.135,
0.708, 0, 0, 1, 0, -0.107, 0.0978, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.142, 0.317, 0.874, -0.0943, 0, 0, -0.107, 0, 1, 0.215, 0.567,
0.0413, 0.24, 0.206, 0, 0, 0.0978, 0, 0.215, 1), .Dim = c(10L,
10L), .Dimnames = list(c("TH1", "TH2", "TH3", "OM11", "OM12",
"OM13", "OM22", "OM23", "OM33", "SG11"), c("TH1", "TH2", "TH3",
"OM11", "OM12", "OM13", "OM22", "OM23", "OM33", "SG11")))
checkEquals(getEstimateCov(prob1), expCovMat, msg = " |covariance matrix as expected")
test1 <- getEstimateCov(run1, corMatrix = TRUE)
checkEquals(test1, list("covariance" = expCovMat, "correlation" = expCorMat),
msg = " | extracting with both")
# check for appropriate error handling
run2 <- importNm( "TestData1.ctl", path = file.path(unitTestPath, "testdata/TestRun" ))
checkTrue(is.null(getEstimateCov(run2)), msg = " |NULL returned when no parameter covariance matrix found" )
############### now NONMEM 7 run
run3 <- importNm( "TestData1.ctl", path = file.path(unitTestPath, "testdata/TestDataNM7" ))
prob3 <- getProblem(run3)
# for method 2, should be NULL
checkTrue( is.null(getEstimateCov(run3, method = 2) ), msg = " |NULL returned when no parameter covariance matrix found")
expCov2 <- structure(c(1.53, -6.69, 0.00102, 0.00653, 0, 0, -0.00655, 0,
-0.0794, -0.00023, -6.69, 59.4, -0.0543, 0.0591, 0, 0, -0.14,
0, 0.188, 0.00104, 0.00102, -0.0543, 0.0162, -0.000882, 0, 0,
0.000434, 0, 0.00117, 2.73e-05, 0.00653, 0.0591, -0.000882, 0.00182,
0, 0, -0.00151, 0, -0.00198, -1.23e-06, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -0.00655, -0.14, 0.000434,
-0.00151, 0, 0, 0.00233, 0, 0.00143, 2.41e-07, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, -0.0794, 0.188, 0.00117, -0.00198, 0, 0, 0.00143,
0, 0.0403, -4.54e-06, -0.00023, 0.00104, 2.73e-05, -1.23e-06,
0, 0, 2.41e-07, 0, -4.54e-06, 5.56e-07), .Dim = c(10L, 10L), .Dimnames = list(
c("TH1", "TH2", "TH3", "OM11", "OM12", "OM13", "OM22", "OM23",
"OM33", "SG11"), c("TH1", "TH2", "TH3", "OM11", "OM12", "OM13",
"OM22", "OM23", "OM33", "SG11")))
expCor2 <- structure(c(1, -0.702, 0.0065, 0.124, 0, 0, -0.11, 0, -0.32,
-0.25, -0.702, 1, -0.0553, 0.18, 0, 0, -0.377, 0, 0.122, 0.181,
0.0065, -0.0553, 1, -0.162, 0, 0, 0.0705, 0, 0.0457, 0.288, 0.124,
0.18, -0.162, 1, 0, 0, -0.733, 0, -0.232, -0.0388, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -0.11, -0.377,
0.0705, -0.733, 0, 0, 1, 0, 0.148, 0.0067, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, -0.32, 0.122, 0.0457, -0.232, 0, 0, 0.148, 0, 1,
-0.0303, -0.25, 0.181, 0.288, -0.0388, 0, 0, 0.0067, 0, -0.0303,
1), .Dim = c(10L, 10L), .Dimnames = list(c("TH1", "TH2", "TH3",
"OM11", "OM12", "OM13", "OM22", "OM23", "OM33", "SG11"), c("TH1",
"TH2", "TH3", "OM11", "OM12", "OM13", "OM22", "OM23", "OM33",
"SG11")))
cov2Test <- getEstimateCov( prob3, method = 1 )
checkEquals(cov2Test, expCov2, msg = " | covariance matrix OK")
checkEquals(getEstimateCov(prob3, corMatrix = TRUE),
list("covariance" = expCov2, "correlation" = expCor2),
msg = " |retrieving both at the same time works")
}
# tests the following functions:
# getNmVersion
#
test.getNmVersion <- function()
{
unitTestPath <- get("TestPath", envir = .RNMImportTestEnv)
run1 <- importNm( "TestData1notab.ctl", path = file.path(unitTestPath, "testdata/TestRunNoTab" ))
run2 <- importNm( "TestData1SIM.con", path = file.path(unitTestPath, "testdata/TestSimRun" ))
checkEquals( getNmVersion(run1), c(major = "VI", "minor" = "2" ) , " version of run1 is correct")
checkEquals( getNmVersion(run2), c(major = "VI", "minor" = "1" ) , " version of run2 is correct")
prob1 <- getProblem(run1)
prob2 <- getProblem(run2)
checkEquals( getNmVersion(prob1), c(major = "VI", "minor" = "2" ) , " version of run1 problem is correct")
checkEquals( getNmVersion(prob2), c(major = "VI", "minor" = "1" ) , " version of run2 problem is correct")
}
# test the following functions:
# getMethodNames
test.getMethodNames <- function()
{
unitTestPath <- get("TestPath", envir = .RNMImportTestEnv)
run1 <- importNm( "TestData1.ctl", path = file.path(unitTestPath, "testdata/TestDataNM7" ))
prob1 <- getProblem(run1)
# check that getMethodNames works in the same way for runs as for problems
checkEquals(getMethodNames(run1, what = "control"), getMethodNames(prob1, what = "control"))
checkEquals(getMethodNames(run1, what = "report"), getMethodNames(prob1, what = "report"))
checkEquals(getMethodNames(prob1, what = "control"),c("ITS","SAEM"), msg = " |control stream captured correctly" )
checkEquals(getMethodNames(prob1, what = "report"),c( "Iterative Two Stage", "Stochastic Approximation Expectation-Maximization"),
msg = " |report stream captured correctly" )
}
# tests the following functions:
# getFileinfo
test.getFileinfo <- function()
{
unitTestPath <- get("TestPath", envir = .RNMImportTestEnv)
run1 <- importNm( "TestData1.ctl", path = file.path(unitTestPath, "testdata/TestRun" ))
fInfoTest <- getFileinfo(run1)
if (.Platform$OS.type == "windows") {
colnametest <- c("size", "mode", "mtime", "ctime", "atime", "exe", "fileName")
} else {
colnametest <- c("size", "mode", "mtime", "ctime", "atime", "uid", "gid", "uname", "grname", "fileName")
}
checkEquals(colnames(fInfoTest), colnametest, msg = " |correct columns present")
checkEquals(rownames(fInfoTest), c("controlFile", "reportFile" ))
checkEquals(fInfoTest$size, c(941, 7820))
# checkEquals(fInfoTest$mtime, structure(c(1231859719, 1231859719), class = c("POSIXt", "POSIXct")))
checkEquals(tolower(basename(fInfoTest["controlFile","fileName"])), "testdata1.ctl", msg = " | correct control file name" )
checkEquals(tolower(basename(fInfoTest["reportFile","fileName"])),"testdata1.lst" , msg = " | correct report file name" )
}
# tests the getSimInfo function on a NMSimMode object
test.getSimInfo <- function()
{
unitTestPath <- get("TestPath", envir = .RNMImportTestEnv)
run <- importNm( "TestData1SIM.con", path = file.path(unitTestPath, "testdata/TestSimRun" ))
simInfo <- getSimInfo(run)
rawInfo <- attr(simInfo, "rawStatement")
attr(simInfo, "rawStatement") <- NULL
checkEquals(rawInfo, "(20050213) SUBPROBLEMS=5", msg = " | Correct raw statement extracted")
checkEquals(simInfo, c("numSimulations" = 5, "seed1" = 20050213, "seed2" = NA), msg = " | Seeds ")
prob <- getProblem(run)
checkEquals(getSimInfo(prob, addRawInfo = FALSE), c("numSimulations" = 5, "seed1" = 20050213, "seed2" = NA) )
}
| /tests/unittests/runit.retrievemisc.R | no_license | MikeKSmith/RNMImport | R | false | false | 10,154 | r |
|
# Example of converting a data table into
# a contingency table
# A data table is usually presented with
# one patient per row and the variables in columns
library("epiR")
filename <- "MPT0164_GESTANTES_AFolico_B12_rubeola.txt"
gestantes <- read.table(filename, header=TRUE, sep="\t", dec=".")
# adding two columns with category labels
gestantes$HBcat[gestantes$HB < 12] <- "Gold+"
gestantes$HBcat[gestantes$HB >= 12] <- "Gold-"
gestantes$HTcat[gestantes$HT < 37] <- "Anemia+"
gestantes$HTcat[gestantes$HT >= 37] <- "Anemia-"
# converting into a 2x2 contingency table
tabela <- table(gestantes$HTcat, gestantes$HBcat)
# print(tabela)
# applying Bayes' rule
cat("\nApplying Bayes' rule:\n")
out <- epi.tests(tabela, conf.level = 0.95)
print(out)
sumario <- summary(out)
print(sumario)
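`epi.tests()` reports sensitivity, specificity and predictive values computed from the 2x2 table. As a cross-check, the same quantities can be computed by hand from the cell counts — a sketch, assuming the row/column labels produced above ("Anemia+/-" as the test, "Gold+/-" as the gold standard):

```r
# cell counts: rows = test result (HTcat), columns = gold standard (HBcat)
tp <- tabela["Anemia+", "Gold+"]  # true positives
fp <- tabela["Anemia+", "Gold-"]  # false positives
fn <- tabela["Anemia-", "Gold+"]  # false negatives
tn <- tabela["Anemia-", "Gold-"]  # true negatives

sensitivity <- tp / (tp + fn)           # P(test+ | disease+)
specificity <- tn / (tn + fp)           # P(test- | disease-)
prevalence  <- (tp + fn) / sum(tabela)
# Bayes' rule: positive predictive value P(disease+ | test+)
ppv <- sensitivity * prevalence /
  (sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
```

These hand-computed values should match the point estimates printed by `epi.tests()`.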
| /Aula02/R/dataframe_to_table.R | no_license | yadevi/2020 | R | false | false | 825 | r |
|
numSamples <- 1000
obsPerSample <- 1000
### Uniform distribution
resultsUniform <- unlist(lapply(1:numSamples, function(x){
return(mean(
runif(obsPerSample, min = 0, max = 100)
)
)
}))
resultsExponential <- unlist(lapply(1:numSamples, function(x){
return(mean(
rexp(obsPerSample, rate = 1/50)
))
}))
resultsBinom <- unlist(lapply(1:numSamples, function(x){
return(mean(
rbinom(obsPerSample, size = 1, prob = .2)
))
}))
shapiro.test(resultsUniform)
shapiro.test(resultsExponential)
shapiro.test(resultsBinom)
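Beyond the normality tests, a quick numerical check of the CLT is that the spread of the sample means shrinks like sigma/sqrt(n). A sketch for the uniform case (a Uniform(0, 100) draw has standard deviation 100/sqrt(12); `n` below mirrors `obsPerSample`):

```r
n <- 1000  # observations per sample, matching obsPerSample above
theoretical_sd <- (100 / sqrt(12)) / sqrt(n)  # sd of the mean of n Uniform(0,100) draws
empirical_sd <- sd(resultsUniform)
# the two should agree to within sampling noise
cat("theoretical:", round(theoretical_sd, 3), " empirical:", round(empirical_sd, 3), "\n")
```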
| /Notes/2017-02-06 Statistical Foundations for DS and ML, Metis/CentralLimitTheorem.R | no_license | danaoira/misc | R | false | false | 583 | r |