\name{SelectVarDlg}
\alias{SelectVarDlg}
\alias{SelectVarDlg.default}
\alias{SelectVarDlg.factor}
\alias{SelectVarDlg.data.frame}
\alias{ModelDlg}
\title{Select Elements of a Set by Click}
\description{\code{SelectVarDlg()} is a GUI utility that brings up a dialog and lets the user select elements (either variables of a data.frame or
levels of a factor) by point and click in a listbox. The list of selected items is written to the clipboard, so
that the code can afterwards easily be pasted into the source file.\cr
\code{ModelDlg()} helps to compose model formulas based on the variable names of a \code{data.frame}.
Both functions are best used together with the package \code{DescToolsAddIns}, which allows them to be assigned to a keystroke in RStudio.
}
\usage{
SelectVarDlg(x, ...)
\method{SelectVarDlg}{default}(x, useIndex = FALSE, ...)
\method{SelectVarDlg}{factor}(x, ...)
\method{SelectVarDlg}{data.frame}(x, ...)
ModelDlg(x, ...)
}
\arguments{
\item{x}{the object containing the elements to be selected. \code{x} can be a data.frame, a factor or any other vector.
}
\item{useIndex}{logical, determining whether the quoted variable names (default) or the index values should be returned.
}
\item{\dots}{further arguments to be passed to the default method. }
}
\details{
When working with large data.frames with many variables it is often tedious to build subsets by typing the
column names. This is where the function comes in, offering a "point and click" approach for selecting the
columns of interest.
When x is a \code{data.frame} the column names are listed,
when x is a factor the corresponding levels are listed, and in all other cases the list is filled with the unique elements of x.
In the model dialog, the variable names of the selected data.frame are listed on the right, from where they can be inserted into the model box by clicking on a button between the two boxes. Clicking on the \code{+} button, for example, will use + to concatenate the variable names.
\figure{ModelDlg.png}{Model dialog}
After clicking on OK, the formula \code{temperature ~ area + driver + delivery_min, data=d.pizza} will be inserted at the cursor position.
}
\value{
A comma-separated list of the quoted selected values is returned invisibly, and is also
written to the clipboard so that the text can easily be inserted in an editor afterwards.
}
\author{
Andri Signorell <andri@signorell.net>
}
\seealso{\code{\link{select.list}}
}
\examples{
\dontrun{
SelectVarDlg(x = d.pizza$driver)
SelectVarDlg(x = d.pizza, useIndex=TRUE)
SelectVarDlg(d.pizza$driver)
x <- replicate(10, paste(sample(LETTERS, 5, replace = TRUE), collapse=""))
SelectVarDlg(x)
ModelDlg(d.pizza)
}
}
\keyword{ manip }
% Source: /man/SelectVarDlg.Rd (repo: forked-packages/DescTools--2)
######################################
#                                    #
#           DECISION TREE            #
#                                    #
######################################

# By Juanjo Beunza (conceptual help from Enrique Puertas)
# Thanks to Brett Lantz and Hadley Wickham for their inspiration

#### CLASSIFICATION WITH DECISION TREES

# Clean the environment
rm(list = ls())

## STEP 1 - Data acquisition
source("/Users/juanjosebeunzanuin/Documents/ALGORITMOS/Machine Learning Salud-UEM/Frami analisis para REC/I- Preparacion datos tres modelos/Preparacion datos 3 modelos v8.R")

## STEP 2 - Data exploration and preparation

## STEP 3 - Train the model on the training data

###### I- MODEL A
library(C50)
library(ROSE)  # for roc.curve()
str(framiA_train)
framiA$TenYearCHD <- as.factor(framiA$TenYearCHD)
framiA_train$TenYearCHD <- as.factor(framiA_train$TenYearCHD)
framiA_test$TenYearCHD <- as.factor(framiA_test$TenYearCHD)
str(framiA_train)
framiA_model <- C5.0(framiA_train[-9], framiA_train$TenYearCHD)
framiA_model
summary(framiA_model)
# Evaluating model performance (on test dataset)
framiA_pred <- predict(framiA_model, framiA_test)
library(gmodels)
CrossTable(framiA_test$TenYearCHD, framiA_pred,
prop.chisq = FALSE, prop.c = FALSE, prop.r = FALSE,
dnn = c('actual 10yCHD', 'predicted 10yCHD'))
# | predicted 10yCHD
# actual 10yCHD | 0 | 1 | Row Total |
# --------------|-----------|-----------|-----------|
# 0 | 705 | 12 | 717 |
# | 0.831 | 0.014 | |
# --------------|-----------|-----------|-----------|
# 1 | 120 | 11 | 131 |
# | 0.142 | 0.013 | |
# --------------|-----------|-----------|-----------|
# Column Total | 825 | 23 | 848 |
# --------------|-----------|-----------|-----------|
roc.curve(framiA_test$TenYearCHD, framiA_pred) # 0.534
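# Side note: with hard 0/1 predictions, roc.curve() reduces to balanced
# accuracy, so the AUC above can be recovered by hand from the confusion
# table (sens_A / spec_A are just illustrative names):
sens_A <- 11 / 131    # true positives / actual positives
spec_A <- 705 / 717   # true negatives / actual negatives
(sens_A + spec_A) / 2 # ~0.534, matching roc.curve() above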
# Improving model performance
framiA_boost10 <- C5.0(framiA_train[-9], framiA_train$TenYearCHD,
trials = 10)
framiA_boost10
summary(framiA_boost10)
# Let's check the testing dataset
framiA_boost_pred10 <- predict(framiA_boost10, framiA_test)
CrossTable(framiA_test$TenYearCHD, framiA_boost_pred10,
prop.chisq = FALSE, prop.c = FALSE, prop.r = FALSE,
dnn = c('actual 10YearCHD', 'predicted 10YCHD'))
# | predicted 10YCHD
# actual 10YearCHD | 0 | 1 | Row Total |
# -----------------|-----------|-----------|-----------|
# 0 | 711 | 6 | 717 |
# | 0.838 | 0.007 | |
# -----------------|-----------|-----------|-----------|
# 1 | 123 | 8 | 131 |
# | 0.145 | 0.009 | |
# -----------------|-----------|-----------|-----------|
# Column Total | 834 | 14 | 848 |
# -----------------|-----------|-----------|-----------|
roc.curve(framiA_test$TenYearCHD, framiA_boost_pred10) # 0.526
###### II- MODEL B

# Convert the outcome from numeric to factor (in SVM, the outcome is always a factor... otherwise it would be a regression, not a classification)
str(framiB)
str(framiB_train)
str(framiB_test)
framiB$TenYearCHD <- as.factor(framiB$TenYearCHD)
framiB_train$TenYearCHD <- as.factor(framiB_train$TenYearCHD)
framiB_test$TenYearCHD <- as.factor(framiB_test$TenYearCHD)
library(C50)
framiB_model <- C5.0(framiB_train[-9], framiB_train$TenYearCHD)
framiB_model
summary(framiB_model)
# Evaluating model performance (on test dataset)
framiB_pred <- predict(framiB_model, framiB_test)
library(gmodels)
CrossTable(framiB_test$TenYearCHD, framiB_pred,
prop.chisq = FALSE, prop.c = FALSE, prop.r = FALSE,
dnn = c('actual 10yCHD', 'predicted 10yCHD'))
# | framiB_pred
# framiB_test$TenYearCHD | 0 | Row Total |
# -----------------------|-----------|-----------|
# 0 | 642 | 642 |
# | 0.840 | |
# -----------------------|-----------|-----------|
# 1 | 122 | 122 |
# | 0.160 | |
# -----------------------|-----------|-----------|
# Column Total | 764 | 764 |
# -----------------------|-----------|-----------|
roc.curve(framiB_test$TenYearCHD, framiB_pred) # 0.5 (the tree never predicts class 1: sensitivity = 0, specificity = 1)
# Improving model performance
framiB_boost10 <- C5.0(framiB_train[-9], framiB_train$TenYearCHD,
trials = 10)
framiB_boost10
summary(framiB_boost10)
# Let's check the testing dataset
framiB_boost_pred10 <- predict(framiB_boost10, framiB_test)
CrossTable(framiB_test$TenYearCHD, framiB_boost_pred10,
prop.chisq = FALSE, prop.c = FALSE, prop.r = FALSE,
dnn = c('actual 10YearCHD', 'predicted 10YCHD'))
# | framiB_boost_pred10
# framiB_test$TenYearCHD | 0 | Row Total |
# -----------------------|-----------|-----------|
# 0 | 642 | 642 |
# | 0.840 | |
# -----------------------|-----------|-----------|
# 1 | 122 | 122 |
# | 0.160 | |
# -----------------------|-----------|-----------|
# Column Total | 764 | 764 |
# -----------------------|-----------|-----------|
roc.curve(framiB_test$TenYearCHD, framiB_boost_pred10) # 0.500
# Balanced Model B (oversampling)
framiB_train_bal_over$TenYearCHD <- as.factor(framiB_train_bal_over$TenYearCHD)
str(framiB_train_bal_over)
#framiB_test$TenYearCHD <- as.factor(framiB_test$TenYearCHD)
str(framiB_test)
library(C50)
framiB_bal_over_model <- C5.0(framiB_train_bal_over[-9], framiB_train_bal_over$TenYearCHD)
framiB_bal_over_model
summary(framiB_bal_over_model)
# Evaluating model performance (on test dataset)
framiB_bal_over_pred <- predict(framiB_bal_over_model, framiB_test)
library(gmodels)
CrossTable(framiB_test$TenYearCHD, framiB_bal_over_pred,
prop.chisq = FALSE, prop.c = FALSE, prop.r = FALSE,
dnn = c('actual 10yCHD', 'predicted 10yCHD'))
# | predicted 10yCHD
# actual 10yCHD | 0 | 1 | Row Total |
# --------------|-----------|-----------|-----------|
# 0 | 462 | 180 | 642 |
# | 0.605 | 0.236 | |
# --------------|-----------|-----------|-----------|
# 1 | 75 | 47 | 122 |
# | 0.098 | 0.062 | |
# --------------|-----------|-----------|-----------|
# Column Total | 537 | 227 | 764 |
# --------------|-----------|-----------|-----------|
roc.curve(framiB_test$TenYearCHD, framiB_bal_over_pred) # 0.552
# Improving model performance
framiB_bal_over_boost10 <- C5.0(framiB_train_bal_over[-9], framiB_train_bal_over$TenYearCHD,
trials = 10)
framiB_bal_over_boost10
summary(framiB_bal_over_boost10)
# Let's check the testing dataset
framiB_bal_over_boost_pred10 <- predict(framiB_bal_over_boost10, framiB_test)
CrossTable(framiB_test$TenYearCHD, framiB_bal_over_boost_pred10,
prop.chisq = FALSE, prop.c = FALSE, prop.r = FALSE,
dnn = c('actual 10YearCHD', 'predicted 10YCHD'))
# | predicted 10YCHD
# actual 10YearCHD | 0 | 1 | Row Total |
# -----------------|-----------|-----------|-----------|
# 0 | 585 | 57 | 642 |
# | 0.766 | 0.075 | |
# -----------------|-----------|-----------|-----------|
# 1 | 88 | 34 | 122 |
# | 0.115 | 0.045 | |
# -----------------|-----------|-----------|-----------|
# Column Total | 673 | 91 | 764 |
# -----------------|-----------|-----------|-----------|
roc.curve(framiB_test$TenYearCHD, framiB_bal_over_boost_pred10) # 0.595
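# Side note: a quick recap of the four Model B variants, rebuilt from the
# cross-tables above (recap_B is just an illustrative name). Balancing the
# training set trades specificity for sensitivity, which lifts the AUC:
recap_B <- data.frame(model = c("B", "B boost", "B balanced", "B balanced boost"),
                      sens  = c(0, 0, 47, 34) / 122,
                      spec  = c(642, 642, 462, 585) / 642)
recap_B$auc <- (recap_B$sens + recap_B$spec) / 2
round(recap_B$auc, 3)  # 0.500 0.500 0.552 0.595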
# III- MODEL C

# Convert the outcome from numeric to factor (in SVM, the outcome is always a factor... otherwise it would be a regression, not a classification)
#str(frami)
framiC$TenYearCHD <- as.factor(framiC$TenYearCHD)
framiC_train$TenYearCHD <- as.factor(framiC_train$TenYearCHD)
framiC_test$TenYearCHD <- as.factor(framiC_test$TenYearCHD)
library(C50)
framiC_model <- C5.0(framiC_train[-9], framiC_train$TenYearCHD)
framiC_model
summary(framiC_model)
# Evaluating model performance (on test dataset)
framiC_pred <- predict(framiC_model, framiC_test)
library(gmodels)
CrossTable(framiC_test$TenYearCHD, framiC_pred,
prop.chisq = FALSE, prop.c = FALSE, prop.r = FALSE,
dnn = c('actual 10yCHD', 'predicted 10yCHD'))
# | predicted 10yCHD
# actual 10yCHD | 0 | 1 | Row Total |
# --------------|-----------|-----------|-----------|
# 0 | 703 | 14 | 717 |
# | 0.829 | 0.017 | |
# --------------|-----------|-----------|-----------|
# 1 | 120 | 11 | 131 |
# | 0.142 | 0.013 | |
# --------------|-----------|-----------|-----------|
# Column Total | 823 | 25 | 848 |
# --------------|-----------|-----------|-----------|
roc.curve(framiC_test$TenYearCHD, framiC_pred) # 0.532
# Improving model performance
framiC_boost10 <- C5.0(framiC_train[-9], framiC_train$TenYearCHD,
trials = 10)
framiC_boost10
summary(framiC_boost10)
# Let's check the testing dataset
framiC_boost_pred10 <- predict(framiC_boost10, framiC_test)
CrossTable(framiC_test$TenYearCHD, framiC_boost_pred10,
prop.chisq = FALSE, prop.c = FALSE, prop.r = FALSE,
dnn = c('actual 10YearCHD', 'predicted 10YCHD'))
# | predicted 10YCHD
# actual 10YearCHD | 0 | 1 | Row Total |
# -----------------|-----------|-----------|-----------|
# 0 | 708 | 9 | 717 |
# | 0.835 | 0.011 | |
# -----------------|-----------|-----------|-----------|
# 1 | 124 | 7 | 131 |
# | 0.146 | 0.008 | |
# -----------------|-----------|-----------|-----------|
# Column Total | 832 | 16 | 848 |
# -----------------|-----------|-----------|-----------|
roc.curve(framiC_test$TenYearCHD, framiC_boost_pred10) # 0.520
# Source: /III- Frami- Decision Tree v5.R (repo: Juanjobeunza/Aprendizaje-Automatico-FRAMINGHAM)
library(secr)
load("inputs.RData")
# Is the magnitude of detection (g0) a learned response?
mClosedg02 <- secr.fit(cptr_hst, model=g0~b, mask=maskClosed, detectfn=1, CL=FALSE)
saveRDS(mClosedg02, file = "mClosedg02.rds")

# Source: /hpc09.R (repo: samual-williams/Hyaena-density)
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Reproduce simulations for the paper.
# The seed used to create the data is always the same, while changing the seed before the train/test split makes it possible to evaluate results over different splits, as suggested by an anonymous referee.
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
rm(list = ls())
#+++++++++++++++++++++++++
# Takes roughly 10 minutes
#+++++++++++++++++++++++++
nsim = 50
out = array(0, dim = c(nsim,12,5))
seeds = 1:nsim
preds = list()
for(i in 1:nsim){
#this seed will be used only for the splitting
seed = seeds[i]
set.seed(2711)
#++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Scenario 1. Low-rank structures; Y depends on X and Z.
#++++++++++++++++++++++++++++++++++++++++++++++++++++++
k.rid = 10
#n = 500
#p = 1000
n = 1000
p = 200
W = matrix(rnorm(p*k.rid), k.rid)
S = matrix(rnorm(n*k.rid), n)
z=rep(0:1, each=n/2)
lambda = rnorm(k.rid)
A = (S - lambda * z ) %*% W
X = scale(A)
beta = runif(k.rid,-5,5)
y = rnorm(n, mean = (S - lambda * z ) %*% beta )
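# Note on the mechanism: z enters both X and y through lambda, so the batch
# indicator is confounded with the outcome; this is exactly the signal that
# the batch-adjustment methods compared below should remove from X
# (cor_yz is an illustrative name, not used elsewhere):
cor_yz <- cor(y, z)  # typically non-negligible under this mechanism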
set.seed(seed)
id.train = sample(1:n,.75*n)
id.test = setdiff(1:n, id.train)
require(sOG)
#Estimation
K=10
res_sfpl = sOG(X[id.train,], z=z[id.train], K=10, t_val = 7,control = control_sOG(lim_step = 1,lim_max=5000))
res_fpl = OG(X=X[id.train,], z=z[id.train], K=10)
res_comp = svd(t(sva::ComBat(dat = t(X[id.train,]),batch = as.vector(z[id.train]))),nu=K,nv=K)
res_sva = svd(t(sva::psva(dat = t(X[id.train,]),batch = as.factor(z[id.train]),n.sv=K)),nu=K,nv=K)
SVD = svd(X[id.train,], nu = K, nv = K)
#Linear models
y_train = y[id.train]
y_test = y[id.test]
m1 = lm(y_train ~ res_fpl$S)
m2 = lm(y_train ~ res_sfpl$S)
m_comp = lm(y_train ~ res_comp$u)
m_sva = lm(y_train ~ res_sva$u)
m_SVD = lm(y_train ~ SVD$u)
#Test matrices
res_sfpl_test = sOG(X[id.test,], z=z[id.test], K=10, t_val = 5,control = control_sOG(lim_step = 1,lim_max=5000))
res_fpl_test = OG(X=X[id.test,], z=z[id.test], K=10)
res_comp_test = svd(t(sva::ComBat(dat = t(X[id.test,]),batch = as.vector(z[id.test]))),nu=K,nv=K)
res_sva_test = svd(t(sva::psva(dat = t(X[id.test,]),batch = as.factor(z[id.test]),n.sv=K)),nu=K,nv=K)
SVD_test = svd(X[id.test,], nu = K, nv = K)
preds[[i]] = data.frame("COMBAT" = cbind(1,res_comp_test$u) %*% coef(m_comp),
"PSVA" = cbind(1, res_sva_test$u) %*% coef(m_sva),
"OG" = cbind(1, res_fpl_test$S) %*% coef(m1),
"SOG" = cbind(1, res_sfpl_test$S) %*% coef(m2),
"SVD" = cbind(1, SVD_test$u) %*% coef(m_SVD)
#"z" = z[id.test]
)
cat(i,'\n')
}
# Attach z afterwards, re-deriving the identical test split from each stored seed
predsN = list()
for(i in 1:length(preds)){
seed = seeds[i]
set.seed(seed)
id.train = sample(1:n,.75*n)
id.test = setdiff(1:n, id.train)
tmp = preds[[i]]
tmp$z = z[id.test]
predsN[[i]] = tmp
}
cor_res = matrix(unlist(lapply(predsN, function(x) cor(x)[6,-6])),nrow = nsim,byrow = T)
colnames(cor_res) = colnames(predsN[[1]])[-6]
boxplot(cor_res)
df = reshape2::melt(cor_res)
df$Var2 = factor(df$Var2, labels = c("COMBAT", "PSVA", "OG", "SOG", "SVD"))
require(latex2exp)
require(ggplot2)
ggplot(df, aes(x = Var2, y = value)) + geom_boxplot(size = 1.5) + theme_bw(base_size = 18) +
#geom_jitter(aes(x = Var2, y = value), size = 2, alpha = .6, position=position_jitter(0.2)) +
#theme(panel.grid.major = element_blank(),
#panel.grid.minor = element_blank(),
#panel.background = element_blank()) +
scale_y_continuous(expand = c(0, 0.05)) +
#stat_summary(fun.y = mean, geom = "errorbar",
#aes(ymax = ..y.., ymin = ..y.., group = factor(Var2)),
#width = 0.75, linetype = "dashed", position = position_dodge()) +
xlab('')+
ylab(TeX('Correlation between \\hat{Y} and Z'))+
theme(strip.text = element_text(size=16),
#axis.title = element_text(size=18),
axis.text = element_text(size=18),
axis.text.x = element_text(angle = 0),
legend.position = 'none',
panel.spacing=unit(.5, "lines"),
panel.border = element_rect(color = "black", fill = NA, size = .5),
strip.background = element_rect(color = "black", size = .5),
) + guides(color = guide_legend(override.aes = list(size = 5)))
ggsave(file = "boxplot.pdf", width = 20, height = 4.5)
| /SIMULATIONS/SIMULATION_plot_rep.R | no_license | emanuelealiverti/SOG | R | false | false | 4,344 | r | #+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Reproduce simulations for the paper.
# The seed to create the data is always the same, while changing the seed before train/test splitting allows to evaluate results over different splits, as suggested by an anonymous referee
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
rm(list = ls())
#+++++++++++++++++++++++++
# Takes roughly 10 minutes
#+++++++++++++++++++++++++
nsim = 50
out = array(0, dim = c(nsim,12,5))
seeds = 1:nsim
preds = list()
for(i in 1:nsim){
#this seed will be used only for the splitting
seed = seeds[i]
set.seed(2711)
#++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Scenario 1. Low rank structures, Y depend on X and Z.
#++++++++++++++++++++++++++++++++++++++++++++++++++++++
k.rid = 10
#n = 500
#p = 1000
n = 1000
p = 200
W = matrix(rnorm(p*k.rid), k.rid)
S = matrix(rnorm(n*k.rid), n)
z=rep(0:1, each=n/2)
lambda = rnorm(k.rid)
A = (S - lambda * z ) %*% W
X = scale(A)
beta = runif(k.rid,-5,5)
y = rnorm(n, mean = (S - lambda * z ) %*% beta )
set.seed(seed)
id.train = sample(1:n,.75*n)
id.test = setdiff(1:n, id.train)
require(sOG)
#Estimation
K=10
res_sfpl = sOG(X[id.train,], z=z[id.train], K=10, t_val = 7,control = control_sOG(lim_step = 1,lim_max=5000))
res_fpl = OG(X=X[id.train,], z=z[id.train], K=10)
res_comp = svd(t(sva::ComBat(dat = t(X[id.train,]),batch = as.vector(z[id.train]))),nu=K,nv=K)
?sva::sva
res_sva = svd(t(sva::psva(dat = t(X[id.train,]),batch = as.factor(z[id.train]),n.sv=K)),nu=K,nv=K)
SVD = svd(X[id.train,], nu = K, nv = K)
#Linear models
y_train = y[id.train]
y_test = y[id.test]
m1 = lm(y_train ~ res_fpl$S)
m2 = lm(y_train ~ res_sfpl$S)
m_comp = lm(y_train ~ res_comp$u)
m_sva = lm(y_train ~ res_sva$u)
m_SVD = lm(y_train ~ SVD$u)
#Test matrices
res_sfpl_test = sOG(X[id.test,], z=z[id.test], K=10, t_val = 5,control = control_sOG(lim_step = 1,lim_max=5000))
res_fpl_test = OG(X=X[id.test,], z=z[id.test], K=10)
res_comp_test = svd(t(sva::ComBat(dat = t(X[id.test,]),batch = as.vector(z[id.test]))),nu=K,nv=K)
res_sva_test = svd(t(sva::psva(dat = t(X[id.test,]),batch = as.factor(z[id.test]),n.sv=K)),nu=K,nv=K)
SVD_test = svd(X[id.test,], nu = K, nv = K)
preds[[i]] = data.frame("COMBAT" = cbind(1,res_comp_test$u) %*% coef(m_comp),
"PSVA" = cbind(1, res_sva_test$u) %*% coef(m_sva),
"OG" = cbind(1, res_fpl_test$S) %*% coef(m1),
"SOG" = cbind(1, res_sfpl_test$S) %*% coef(m2),
"SVD" = cbind(1, SVD_test$u) %*% coef(m_SVD)
#"z" = z[id.test]
)
cat(i,'\n')
}
#If z is added later
predsN = list()
for(i in 1:length(preds)){
seed = seeds[i]
set.seed(seed)
id.train = sample(1:n,.75*n)
id.test = setdiff(1:n, id.train)
tmp = preds[[i]]
tmp$z = z[id.test]
predsN[[i]] = tmp
}
cor_res = matrix(unlist(lapply(predsN, function(x) cor(x)[6,-6])),nrow = nsim,byrow = T)
colnames(cor_res) = colnames(predsN[[1]])[-6]
boxplot(cor_res)
df = reshape2::melt(cor_res)
df$Var2 = factor(df$Var2, labels = c("COMBAT", "PSVA", "OG", "SOG", "SVD"))
require(latex2exp)
require(ggplot2)
ggplot(df, aes(x = Var2, y = value)) + geom_boxplot(size = 1.5) + theme_bw(base_size = 18) +
#geom_jitter(aes(x = Var2, y = value), size = 2, alpha = .6, position=position_jitter(0.2)) +
#theme(panel.grid.major = element_blank(),
#panel.grid.minor = element_blank(),
#panel.background = element_blank()) +
scale_y_continuous(expand = c(0, 0.05)) +
#stat_summary(fun.y = mean, geom = "errorbar",
#aes(ymax = ..y.., ymin = ..y.., group = factor(Var2)),
#width = 0.75, linetype = "dashed", position = position_dodge()) +
xlab('')+
  ylab(TeX('Correlation between $\\hat{Y}$ and $Z$'))+
theme(strip.text = element_text(size=16),
#axis.title = element_text(size=18),
axis.text = element_text(size=18),
axis.text.x = element_text(angle = 0),
legend.position = 'none',
panel.spacing=unit(.5, "lines"),
panel.border = element_rect(color = "black", fill = NA, size = .5),
strip.background = element_rect(color = "black", size = .5),
) + guides(color = guide_legend(override.aes = list(size = 5)))
ggsave(file = "boxplot.pdf", width = 20, height = 4.5)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/mse.R
\name{logit.mse}
\alias{logit.mse}
\title{Allocate treatments according to the MSE matrix when a logistic model for the response is assumed.
We simulate responses sequentially.}
\usage{
logit.mse(covar, true.beta, init, int = NULL, lossfunc = calc.y.D,
epsilon = 1e-05, same.start = NULL, true.bvcov = NULL, ...)
}
\arguments{
\item{covar}{a dataframe for the covariates}
\item{true.beta}{the true parameter values of the data generating mechanism}
\item{init}{the number of units in the initial design}
\item{int}{set to TRUE to allow for treatment-covariate interactions in the model, NULL otherwise}
\item{lossfunc}{a function for the objective function to minimize}
\item{epsilon}{a small real number used for regularization. If set to zero,
no regularization takes place}
\item{same.start}{set to the initial design if desired or set to NULL otherwise}
\item{true.bvcov}{set to the true values of beta if the mse matrix is to be computed using the true values}
\item{...}{further arguments to be passed to \code{logit.coord} and \code{lossfunc}}
}
\value{
the design matrix D, responses y, all estimates of beta, final estimate of beta, probabilities of treatment
assignment, proportion of favorable responses, value of objective function, trace of var-covar matrix, trace of bias matrix,
trace of mse matrix, determinant of mse matrix
}
\description{
Allocate treatments according to the MSE matrix when a logistic model for the response is assumed.
We simulate responses sequentially.
}
# watch out! at the end remove players with missing values because they might be on loan or otherwise transferred
# ranking is removed from the list to simplify
#data managing (list, clean, reduce) -> check that nrow(df.complete.fanta) corresponds to data_fanta (i.e. same number of players)
#df.complete.serie.a$PLAYER <- sub(".*? (.+)", "\\1", df.complete.serie.a$PLAYER) #remove players' first name
#for future versions it would be better to scrape the names as well, otherwise it is a huge mess (sometimes they change a name on the website, remember Babacar)
#remember that until a player has played at least 1 game he will not be listed on the Serie A website, meaning that the TEAM column will not match while merging dataframes
#!!!correct dataset removing goals, assists and cards from initial games not played [check this every year to correct the df]
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#LIBRARIES
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
library(rvest)
library(XML)
library(dplyr)
library(tidyverse)
library(stringr)
library(rebus)
library(lubridate)
library(plyr) #note: loading plyr after dplyr masks several dplyr verbs; load with care
library(data.table)
library(ggplot2)
library(Rmisc)
library(ggforce)
library(cowplot)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#http://www.legaseriea.it
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#top scores
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.top.scores <- 'http://www.legaseriea.it/en/serie-a/statistics/Gol'#Specifying the url for desired website to be scraped
top.scores.table <- read_html(url.top.scores)#Reading the HTML code from the website
top.scores.html <- html_nodes(top.scores.table,'td')#Using CSS selectors to scrape the rankings section
top.scores <- html_text(top.scores.html)#Converting the ranking data to text
head(top.scores)#Let's have a look at the rankings
top.scores <- str_replace_all(top.scores, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
top.scores <- as.data.frame(top.scores)
top.scores <- data.frame(
top.scores,
ind = rep(1:6, nrow(top.scores))) # create a repeated index
top.scores <-unstack(top.scores, top.scores~ind)
names(top.scores) <- c("RANKING", "TEAM", "PLAYER", "GOALS", "PLAYED", "PENALTIES")#name headers
top.scores$TEAM <- gsub('\\s+', '', top.scores$TEAM)# remove unnecessary space from the beginning
top.scores$PLAYER <- gsub(" $","", top.scores$PLAYER, perl=T)# remove unnecessary space at the end
top.scores
#clean environment
rm(top.scores.html,top.scores.table)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#top assist
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.top.assist <- 'http://www.legaseriea.it/en/serie-a/statistics/NAssistVin'#Specifying the url for desired website to be scraped
top.assist.table <- read_html(url.top.assist)#Reading the HTML code from the website
top.assist.html <- html_nodes(top.assist.table,'td')#Using CSS selectors to scrape the rankings section
top.assist <- html_text(top.assist.html)#Converting the ranking data to text
head(top.assist)#Let's have a look at the rankings
top.assist <- str_replace_all(top.assist, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
top.assist <- as.data.frame(top.assist)
top.assist <- data.frame(
top.assist,
ind = rep(1:4, nrow(top.assist))) # create a repeated index
top.assist <-unstack(top.assist, top.assist~ind)
names(top.assist) <- c("RANKING", "TEAM", "PLAYER", "ASSIST")#name headers
top.assist$TEAM <- gsub('\\s+', '', top.assist$TEAM)# remove unnecessary space from the beginning
top.assist$PLAYER <- gsub(" $","", top.assist$PLAYER, perl=T)# remove unnecessary space at the end
top.assist
#clean environment
rm(top.assist.html,top.assist.table)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#shot
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.shot <- 'http://www.legaseriea.it/en/serie-a/statistics/NTir'#Specifying the url for desired website to be scraped
shot.table <- read_html(url.shot)#Reading the HTML code from the website
shot.html <- html_nodes(shot.table,'td')#Using CSS selectors to scrape the rankings section
shot <- html_text(shot.html)#Converting the ranking data to text
head(shot)#Let's have a look at the rankings
shot <- str_replace_all(shot, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
shot <- as.data.frame(shot)
shot <- data.frame(
shot,
ind = rep(1:6, nrow(shot))) # create a repeated index
shot <-unstack(shot, shot~ind)
names(shot) <- c("RANKING", "TEAM", "PLAYER", "TOTAL.SHOT", "ON.TARGET", "OFF.TARGET")#name headers
shot$TEAM <- gsub('\\s+', '', shot$TEAM)# remove unnecessary space from the beginning
shot$PLAYER <- gsub(" $","", shot$PLAYER, perl=T)# remove unnecessary space at the end
shot
#clean environment
rm(shot.html,shot.table)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#key passes
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.key.passes <- 'http://www.legaseriea.it/en/serie-a/statistics/PassChiave'#Specifying the url for desired website to be scraped
key.passes.table <- read_html(url.key.passes)#Reading the HTML code from the website
key.passes.html <- html_nodes(key.passes.table,'td')#Using CSS selectors to scrape the rankings section
key.passes <- html_text(key.passes.html)#Converting the ranking data to text
head(key.passes)#Let's have a look at the rankings
key.passes <- str_replace_all(key.passes, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
key.passes <- as.data.frame(key.passes)
key.passes <- data.frame(
key.passes,
ind = rep(1:4, nrow(key.passes))) # create a repeated index
key.passes <-unstack(key.passes, key.passes~ind)
names(key.passes) <- c("RANKING", "TEAM", "PLAYER", "KEY.PASSES")#name headers
key.passes$TEAM <- gsub('\\s+', '', key.passes$TEAM)# remove unnecessary space from the beginning
key.passes$PLAYER <- gsub(" $","", key.passes$PLAYER, perl=T)# remove unnecessary space at the end
key.passes
#clean environment
rm(key.passes.html,key.passes.table)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#recoveries
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.recoveries <- 'http://www.legaseriea.it/en/serie-a/statistics/Recuperi'#Specifying the url for desired website to be scraped
recoveries.table <- read_html(url.recoveries)#Reading the HTML code from the website
recoveries.html <- html_nodes(recoveries.table,'td')#Using CSS selectors to scrape the rankings section
recoveries <- html_text(recoveries.html)#Converting the ranking data to text
head(recoveries)#Let's have a look at the rankings
recoveries <- str_replace_all(recoveries, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
recoveries <- as.data.frame(recoveries)
recoveries <- data.frame(
recoveries,
ind = rep(1:4, nrow(recoveries))) # create a repeated index
recoveries <-unstack(recoveries, recoveries~ind)
names(recoveries) <- c("RANKING", "TEAM", "PLAYER", "RECOVERIES")#name headers
recoveries$TEAM <- gsub('\\s+', '', recoveries$TEAM)# remove unnecessary space from the beginning
recoveries$PLAYER <- gsub(" $","", recoveries$PLAYER, perl=T)# remove unnecessary space at the end
recoveries
#clean environment
rm(recoveries.html,recoveries.table)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#km
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.km <- 'http://www.legaseriea.it/en/serie-a/statistics/MediaKm'#Specifying the url for desired website to be scraped
km.table <- read_html(url.km)#Reading the HTML code from the website
km.html <- html_nodes(km.table,'td')#Using CSS selectors to scrape the rankings section
km <- html_text(km.html)#Converting the ranking data to text
head(km)#Let's have a look at the rankings
km <- str_replace_all(km, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
km <- as.data.frame(km)
km <- data.frame(
km,
ind = rep(1:5, nrow(km))) # create a repeated index
km <-unstack(km, km~ind)
names(km) <- c("RANKING", "TEAM", "PLAYER", "AVERAGE.km", "MINUTES")#name headers
km$TEAM <- gsub('\\s+', '', km$TEAM)# remove unnecessary space from the beginning
km$PLAYER <- gsub(" $","", km$PLAYER, perl=T)# remove unnecessary space at the end
km
#clean environment
rm(km.html,km.table)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#fouls against
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.fouls.against <- 'http://www.legaseriea.it/en/serie-a/statistics/NFalSubiti'#Specifying the url for desired website to be scraped
fouls.against.table <- read_html(url.fouls.against)#Reading the HTML code from the website
fouls.against.html <- html_nodes(fouls.against.table,'td')#Using CSS selectors to scrape the rankings section
fouls.against <- html_text(fouls.against.html)#Converting the ranking data to text
head(fouls.against)#Let's have a look at the rankings
fouls.against <- str_replace_all(fouls.against, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
fouls.against <- as.data.frame(fouls.against)
fouls.against <- data.frame(
fouls.against,
ind = rep(1:4, nrow(fouls.against))) # create a repeated index
fouls.against <-unstack(fouls.against, fouls.against~ind)
names(fouls.against) <- c("RANKING", "TEAM", "PLAYER", "FOULS.AGAINST")#name headers
fouls.against$TEAM <- gsub('\\s+', '', fouls.against$TEAM)# remove unnecessary space from the beginning
fouls.against$PLAYER <- gsub(" $","", fouls.against$PLAYER, perl=T)# remove unnecessary space at the end
fouls.against
#clean environment
rm(fouls.against.html,fouls.against.table)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#headed goals
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.headed.goals <- 'http://www.legaseriea.it/en/serie-a/statistics/GolTesta'#Specifying the url for desired website to be scraped
headed.goals.table <- read_html(url.headed.goals)#Reading the HTML code from the website
headed.goals.html <- html_nodes(headed.goals.table,'td')#Using CSS selectors to scrape the rankings section
headed.goals <- html_text(headed.goals.html)#Converting the ranking data to text
head(headed.goals)#Let's have a look at the rankings
headed.goals <- str_replace_all(headed.goals, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
headed.goals <- as.data.frame(headed.goals)
headed.goals <- data.frame(
headed.goals,
ind = rep(1:4, nrow(headed.goals))) # create a repeated index
headed.goals <-unstack(headed.goals, headed.goals~ind)
names(headed.goals) <- c("RANKING", "TEAM", "PLAYER", "HEADED.GOALS")#name headers
headed.goals$TEAM <- gsub('\\s+', '', headed.goals$TEAM)# remove unnecessary space from the beginning
headed.goals$PLAYER <- gsub(" $","", headed.goals$PLAYER, perl=T)# remove unnecessary space at the end
headed.goals
#clean environment
rm(headed.goals.html,headed.goals.table)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#posts/crossbar
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.posts.crossbar <- 'http://www.legaseriea.it/en/serie-a/statistics/Palo'#Specifying the url for desired website to be scraped
posts.crossbar.table <- read_html(url.posts.crossbar)#Reading the HTML code from the website
posts.crossbar.html <- html_nodes(posts.crossbar.table,'td')#Using CSS selectors to scrape the rankings section
posts.crossbar <- html_text(posts.crossbar.html)#Converting the ranking data to text
head(posts.crossbar)#Let's have a look at the rankings
posts.crossbar <- str_replace_all(posts.crossbar, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
posts.crossbar <- as.data.frame(posts.crossbar)
posts.crossbar <- data.frame(
posts.crossbar,
ind = rep(1:4, nrow(posts.crossbar))) # create a repeated index
posts.crossbar <-unstack(posts.crossbar, posts.crossbar~ind)
names(posts.crossbar) <- c("RANKING", "TEAM", "PLAYER", "POSTS.CROSSBAR")#name headers
posts.crossbar$TEAM <- gsub('\\s+', '', posts.crossbar$TEAM)# remove unnecessary space from the beginning
posts.crossbar$PLAYER <- gsub(" $","", posts.crossbar$PLAYER, perl=T)# remove unnecessary space at the end
posts.crossbar
#clean environment
rm(posts.crossbar.html,posts.crossbar.table)
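#The nine blocks above repeat the same scrape -> unstack -> clean steps.
#A sketch of a helper that factors out the shared pattern (the function
#name is new/illustrative; it assumes every statistics page lays its
#table out as the same flat sequence of <td> cells, with one cell per
#entry of col.names per table row):
scrape.stat <- function(url, col.names) {
  cells <- html_text(html_nodes(read_html(url), 'td'))       #flat vector of cells
  cells <- str_replace_all(cells, "[\r\n]", "")              #remove all the tags
  df <- data.frame(cells,
                   ind = rep_len(seq_along(col.names), length(cells))) #repeated column index
  df <- unstack(df, cells ~ ind)                             #one column per statistic
  names(df) <- col.names
  df$TEAM <- gsub('\\s+', '', df$TEAM)                       #remove space at the beginning
  df$PLAYER <- gsub(" $", "", df$PLAYER, perl = TRUE)        #remove space at the end
  df
}
#e.g. shot <- scrape.stat(url.shot,
#                         c("RANKING", "TEAM", "PLAYER", "TOTAL.SHOT", "ON.TARGET", "OFF.TARGET"))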
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#data managing (list, clean, reduce)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
df.list.source <- list(fouls.against = fouls.against, # create a list of dataframes
headed.goals = headed.goals,
key.passes = key.passes,
km = km,
posts.crossbar = posts.crossbar,
recoveries = recoveries,
shot = shot,
top.assist = top.assist,
top.scores = top.scores)
df.list.clean <- lapply(df.list.source, function(x) { x["RANKING"] <- NULL; x }) # remove ranking column otherwise is a mess for merging
rm(fouls.against, headed.goals, key.passes, km, posts.crossbar, recoveries, shot, top.assist, top.scores) #clean environment
rm(url.fouls.against, url.headed.goals, url.key.passes, url.km, url.recoveries, url.posts.crossbar, url.shot, url.top.assist, url.top.scores) # clean environment from urls
#merge all the data set
df.complete.serie.a <- reduce(df.list.clean, full_join)
df.complete.serie.a <- df.complete.serie.a[!duplicated(df.complete.serie.a), ] #remove duplicate rows
team.list <- c("ATALANTA", "BOLOGNA", "BRESCIA", "CAGLIARI", "FIORENTINA", "GENOA", "HELLASVERONA", "INTER", "JUVENTUS", "LAZIO", "LECCE", "MILAN", "NAPOLI", "PARMA", "ROMA", "SAMPDORIA", "SASSUOLO", "SPAL", "TORINO", "UDINESE")
df.complete.serie.a <- df.complete.serie.a[ df.complete.serie.a$TEAM %in% team.list, ] #remove team that are not in the list (for instance transferred players after the first game are reported with the new team even if abroad)
rm(team.list)
#df.complete.serie.a$PLAYER <- sub(".*? (.+)", "\\1", df.complete.serie.a$PLAYER) #remove players name
#merge with data fanta
data_fanta <- read.csv(file.path ("D:\\My Drive\\fanta 2019_2020","DATA_FANTA.csv"), sep=";")
setnames(data_fanta, "GIOCATORE", "PLAYER")#rename GIOCATORE into player
data_fanta$PLAYER <- gsub(" $","", data_fanta$PLAYER, perl=T)# remove unnecessary space from at the end
data_fanta$SQUADRA <- NULL #remove squadra
df.complete.fanta.uncorrected <- list(data_fanta = data_fanta, df.complete.serie.a = df.complete.serie.a)
df.complete.fanta.uncorrected <- reduce(df.complete.fanta.uncorrected, full_join)
df.complete.fanta.uncorrected <- df.complete.fanta.uncorrected[!is.na(df.complete.fanta.uncorrected$FANTA.TEAM),]#remove all the players without a fanta team (safer than -which(), which drops every row when there are no NAs)
rm(data_fanta)
#set column types; convert via as.character() first, since as.numeric() on a factor returns its level codes rather than its values
factor.cols <- c("FANTA.TEAM", "PLAYER", "RUOLO", "TEAM", "FOULS.AGAINST")
num.cols <- c("PREZZO.ASTA", "HEADED.GOALS", "KEY.PASSES", "AVERAGE.km", "MINUTES",
              "POSTS.CROSSBAR", "RECOVERIES", "TOTAL.SHOT", "ON.TARGET", "OFF.TARGET",
              "ASSIST", "GOALS", "PLAYED", "PENALTIES")
df.complete.fanta.uncorrected[factor.cols] <- lapply(df.complete.fanta.uncorrected[factor.cols], as.factor)
df.complete.fanta.uncorrected[num.cols] <- lapply(df.complete.fanta.uncorrected[num.cols],
                                                  function(x) as.numeric(as.character(x)))
#list all the old df
df.backup <- list(df.complete.serie.a = df.complete.serie.a,
df.list.clean = df.list.clean,
df.list.source = df.list.source)
rm(df.complete.serie.a, df.list.clean, df.list.source)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#general graphic parameters
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
position = position_dodge(width = .75) #to have labels centered at graph end in case you need it
width = .75 #to have labels centered at graph end in case you need it
fillpalette <- c("AZTEKA FC" = "orange",# fill color for each team
"NEVERENDING TURD" = "blue",
"CSK LA RISSA" = "indianred3",
"MATTIA TEAM V5" = "white",
"PARTIZAN DEGRADO" = "firebrick",
"LOKOMOTIV SMILLAUS" = "yellow2",
"REAL GHETTO" = "black",
"ASSASSIN CRIN" = "forestgreen")
borderspalette <- c("AZTEKA FC" = "blue",# fill color for each team
"NEVERENDING TURD" = "black",
"CSK LA RISSA" = "gray88",
"MATTIA TEAM V5" = "red",
"PARTIZAN DEGRADO" = "black",
"LOKOMOTIV SMILLAUS" = "darkgreen",
"REAL GHETTO" = "snow2",
"ASSASSIN CRIN" = "black")
fillpalette_line <- c("AZTEKA FC" = "orange",# fill color for each team
"NEVERENDING TURD" = "blue",
"CSK LA RISSA" = "indianred3",
"MATTIA TEAM V5" = "azure3",
"PARTIZAN DEGRADO" = "firebrick",
"LOKOMOTIV SMILLAUS" = "yellow2",
"REAL GHETTO" = "black",
"ASSASSIN CRIN" = "forestgreen")
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#plot.top.scores.team
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
plot.top.scores.team <- aggregate(GOALS ~ FANTA.TEAM, df.complete.fanta.uncorrected, sum) #sum per fanta.team
plot.top.scores.team <- plot.top.scores.team[order(plot.top.scores.team$GOALS, decreasing = F), ] #reorder the sum
x = max(plot.top.scores.team$GOALS) #set the maximum value for the graph
plot.top.scores.team$FANTA.TEAM <- factor(plot.top.scores.team$FANTA.TEAM, levels = plot.top.scores.team$FANTA.TEAM) #use the factor to impose the order in the graph
plot.top.scores.team <- ggplot(plot.top.scores.team, aes(x = FANTA.TEAM, y = GOALS, fill = FANTA.TEAM, color = FANTA.TEAM))+
  stat_summary(fun="sum", geom="bar", size = 2)+ #fun.y was deprecated in ggplot2 3.3
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x, by=1))+
ylab("GOAL SEGNATI")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.scores.team
rm(x)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#plot.top.scores.team.role
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#after a long internal debate with myself I realized that the best solution for this graph is to split it into 3 datasets and then plot everything together
#after a second long internal debate with myself I decided that a dot-line plot better represents the data. I keep the extended version anyway
plot.top.scores.team.role <- aggregate(GOALS ~ FANTA.TEAM + RUOLO, df.complete.fanta.uncorrected, sum) #sum per fanta.team
plot.top.scores.team.role <- plot.top.scores.team.role[order(plot.top.scores.team.role$RUOLO, plot.top.scores.team.role$GOALS),] # reorder based on goals and role
plot.top.scores.team.role.df <- subset(plot.top.scores.team.role, subset = RUOLO == "DF", select = c(FANTA.TEAM,RUOLO,GOALS)) #subset separately each role
plot.top.scores.team.role.cc <- subset(plot.top.scores.team.role, subset = RUOLO == "CC", select = c(FANTA.TEAM,RUOLO,GOALS))
plot.top.scores.team.role.at <- subset(plot.top.scores.team.role, subset = RUOLO == "AT", select = c(FANTA.TEAM,RUOLO,GOALS))
x <- max(plot.top.scores.team.role$GOALS) #set the maximum value for the graph
x.df <- max(plot.top.scores.team.role.df$GOALS)
x.cc <- max(plot.top.scores.team.role.cc$GOALS)
x.at <- max(plot.top.scores.team.role.at$GOALS)
plot.top.scores.team.role.df$FANTA.TEAM <- factor(plot.top.scores.team.role.df$FANTA.TEAM, levels = plot.top.scores.team.role.df$FANTA.TEAM) #use the factor to impose the order in the graph
plot.top.scores.team.role.cc$FANTA.TEAM <- factor(plot.top.scores.team.role.cc$FANTA.TEAM, levels = plot.top.scores.team.role.cc$FANTA.TEAM) #use the factor to impose the order in the graph
plot.top.scores.team.role.at$FANTA.TEAM <- factor(plot.top.scores.team.role.at$FANTA.TEAM, levels = plot.top.scores.team.role.at$FANTA.TEAM) #use the factor to impose the order in the graph
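#the three factor() calls above pin the current row order into the levels ggplot uses on the axis; a minimal
#self-contained illustration of the same trick (helper name and toy data are mine, not part of the original script):
order.for.plot <- function(d, key, value) {
  d <- d[order(d[[value]]), ]                      # sort ascending, as done above for each role subset
  d[[key]] <- factor(d[[key]], levels = d[[key]])  # lock the sorted order into the factor levels
  d
}
#e.g. (not run): order.for.plot(data.frame(FANTA.TEAM = c("B","A","C"), GOALS = c(5, 2, 9)), "FANTA.TEAM", "GOALS")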
#goals.line.plot
plot.top.scores.team.role$RUOLO <- factor(plot.top.scores.team.role$RUOLO, levels=c("DF", "CC", "AT"))# reorder x axis
goals.line.plot <- ggplot(plot.top.scores.team.role,
aes(x = RUOLO, y = GOALS,
group = FANTA.TEAM))+
geom_line(size = 2, aes(color=factor(FANTA.TEAM)))+
geom_point(aes(color=factor(FANTA.TEAM),
fill = factor(FANTA.TEAM)), shape=21, size = 5, stroke = 3)+
scale_fill_manual(values= borderspalette) + #to use my palettes
scale_colour_manual(values= fillpalette_line)+
theme(axis.title.y = element_text(size = 15, face = "bold", color = "black"),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_blank(),
panel.grid.major.y = element_line(size=.1, color="grey88"),
legend.position = "right",
legend.title = element_blank(),
legend.text = element_text(size = 12, face = "bold"))+
scale_y_continuous(breaks = seq(0, x, by=1))+
annotate("rect", xmin = 0.5, xmax = 1.5, ymin=0, ymax=Inf, alpha=0.1, fill="cyan")+#color sectors of background
annotate("rect", xmin = 1.5, xmax = 2.5, ymin=0, ymax=Inf, alpha=0.1, fill="lightgoldenrod")+
annotate("rect", xmin = 2.5, xmax = 3.5, ymin=0, ymax=Inf, alpha=0.1, fill="tomato3")+
ylab ("GOAL SEGNATI")
goals.line.plot
rm(x)
rm(plot.top.scores.team.role)
#plot df
plot.top.scores.team.role.df <- ggplot(plot.top.scores.team.role.df, aes(x = FANTA.TEAM, y = GOALS, fill = FANTA.TEAM, color = FANTA.TEAM))+
stat_summary(fun = "sum", geom = "bar", size = 2)+ # fun.y was renamed to fun in ggplot2 3.3.0
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
plot.title = element_text(size = 40, face = "bold", hjust = 0.5),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x.df, by=1))+
ylab("GOAL SEGNATI")+
ggtitle("DIFESA")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.scores.team.role.df
rm(x.df)
#plot cc
plot.top.scores.team.role.cc <- ggplot(plot.top.scores.team.role.cc, aes(x = FANTA.TEAM, y = GOALS, fill = FANTA.TEAM, color = FANTA.TEAM))+
stat_summary(fun = "sum", geom = "bar", size = 2)+
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
plot.title = element_text(size = 40, face = "bold", hjust = 0.5),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x.cc, by=1))+
ylab("GOAL SEGNATI")+
ggtitle("CENTROCAMPO")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.scores.team.role.cc
rm(x.cc)
#plot at
plot.top.scores.team.role.at <- ggplot(plot.top.scores.team.role.at, aes(x = FANTA.TEAM, y = GOALS, fill = FANTA.TEAM, color = FANTA.TEAM))+
stat_summary(fun = "sum", geom = "bar", size = 2)+
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
plot.title = element_text(size = 40, face = "bold", hjust = 0.5),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x.at, by=1))+
ylab("GOAL SEGNATI")+
ggtitle("ATTACCO")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.scores.team.role.at
rm(x.at)
grid.goals.role<-plot_grid(plot.top.scores.team.role.df,
plot.top.scores.team.role.cc,
plot.top.scores.team.role.at,
nrow = 1)
grid.goals.role
#create a list with all the goals graphs
goals.plot.list <- list(goals.line.plot = goals.line.plot,
plot.top.scores.team = plot.top.scores.team,
plot.top.scores.team.role.at = plot.top.scores.team.role.at,
plot.top.scores.team.role.cc = plot.top.scores.team.role.cc,
plot.top.scores.team.role.df = plot.top.scores.team.role.df,
grid.goals.role = grid.goals.role)
rm(goals.line.plot, plot.top.scores.team, plot.top.scores.team.role.at, plot.top.scores.team.role.cc, plot.top.scores.team.role.df, grid.goals.role)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#plot.top.assist.team
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
plot.top.assist.team <- aggregate(ASSIST ~ FANTA.TEAM, df.complete.fanta.uncorrected, sum) #sum per fanta.team
plot.top.assist.team <- plot.top.assist.team[order(plot.top.assist.team$ASSIST, decreasing = F), ] #reorder the sum
x <- max(plot.top.assist.team$ASSIST) #set the maximum value for the graph
plot.top.assist.team$FANTA.TEAM <- factor(plot.top.assist.team$FANTA.TEAM, levels = plot.top.assist.team$FANTA.TEAM) #use the factor to impose the order in the graph
plot.top.assist.team <- ggplot(plot.top.assist.team, aes(x = FANTA.TEAM, y = ASSIST, fill = FANTA.TEAM, color = FANTA.TEAM))+
stat_summary(fun = "sum", geom = "bar", size = 2)+
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x, by=1))+
ylab("ASSIST")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.assist.team
rm(x)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#plot.top.assist.team.role
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
plot.top.assist.team.role <- aggregate(ASSIST ~ FANTA.TEAM + RUOLO, df.complete.fanta.uncorrected, sum) #sum per fanta.team and role
plot.top.assist.team.role <- plot.top.assist.team.role[order(plot.top.assist.team.role$RUOLO, plot.top.assist.team.role$ASSIST),] # reorder by role, then assists
plot.top.assist.team.role.df <- subset(plot.top.assist.team.role, subset = RUOLO == "DF", select = c(FANTA.TEAM,RUOLO,ASSIST)) #subset separately each role
plot.top.assist.team.role.cc <- subset(plot.top.assist.team.role, subset = RUOLO == "CC", select = c(FANTA.TEAM,RUOLO,ASSIST))
plot.top.assist.team.role.at <- subset(plot.top.assist.team.role, subset = RUOLO == "AT", select = c(FANTA.TEAM,RUOLO,ASSIST))
x <- max(plot.top.assist.team.role$ASSIST) #set the maximum value for the graph
x.df <- max(plot.top.assist.team.role.df$ASSIST)
x.cc <- max(plot.top.assist.team.role.cc$ASSIST)
x.at <- max(plot.top.assist.team.role.at$ASSIST)
plot.top.assist.team.role.df$FANTA.TEAM <- factor(plot.top.assist.team.role.df$FANTA.TEAM, levels = plot.top.assist.team.role.df$FANTA.TEAM) #use the factor to impose the order in the graph
plot.top.assist.team.role.cc$FANTA.TEAM <- factor(plot.top.assist.team.role.cc$FANTA.TEAM, levels = plot.top.assist.team.role.cc$FANTA.TEAM) #use the factor to impose the order in the graph
plot.top.assist.team.role.at$FANTA.TEAM <- factor(plot.top.assist.team.role.at$FANTA.TEAM, levels = plot.top.assist.team.role.at$FANTA.TEAM) #use the factor to impose the order in the graph
#assist.line.plot
plot.top.assist.team.role$RUOLO <- factor(plot.top.assist.team.role$RUOLO, levels=c("DF", "CC", "AT"))# reorder x axis
assist.line.plot <- ggplot(plot.top.assist.team.role,
aes(x = RUOLO, y = ASSIST,
group = FANTA.TEAM))+
geom_line(size = 2, aes(color=factor(FANTA.TEAM)))+
geom_point(aes(color=factor(FANTA.TEAM),
fill = factor(FANTA.TEAM)), shape=21, size = 5, stroke = 3)+
scale_fill_manual(values= borderspalette) + #to use my palettes
scale_colour_manual(values= fillpalette_line)+
theme(axis.title.y = element_text(size = 15, face = "bold", color = "black"),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_blank(),
panel.grid.major.y = element_line(size=.1, color="grey88"),
legend.position = "right",
legend.title = element_blank(),
legend.text = element_text(size = 12, face = "bold"))+
scale_y_continuous(breaks = seq(0, x, by=1))+
annotate("rect", xmin = 0.5, xmax = 1.5, ymin=0, ymax=Inf, alpha=0.1, fill="cyan")+#color sectors of background
annotate("rect", xmin = 1.5, xmax = 2.5, ymin=0, ymax=Inf, alpha=0.1, fill="lightgoldenrod")+
annotate("rect", xmin = 2.5, xmax = 3.5, ymin=0, ymax=Inf, alpha=0.1, fill="tomato3")+
ylab ("ASSIST")
assist.line.plot
rm(x)
rm(plot.top.assist.team.role)
#plot df
plot.top.assist.team.role.df <- ggplot(plot.top.assist.team.role.df, aes(x = FANTA.TEAM, y = ASSIST, fill = FANTA.TEAM, color = FANTA.TEAM))+
stat_summary(fun = "sum", geom = "bar", size = 2)+
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
plot.title = element_text(size = 40, face = "bold", hjust = 0.5),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x.df, by=1))+
ylab("ASSIST")+
ggtitle("DIFESA")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.assist.team.role.df
rm(x.df)
#plot cc
plot.top.assist.team.role.cc <- ggplot(plot.top.assist.team.role.cc, aes(x = FANTA.TEAM, y = ASSIST, fill = FANTA.TEAM, color = FANTA.TEAM))+
stat_summary(fun = "sum", geom = "bar", size = 2)+
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
plot.title = element_text(size = 40, face = "bold", hjust = 0.5),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x.cc, by=1))+
ylab("ASSIST")+
ggtitle("CENTROCAMPO")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.assist.team.role.cc
rm(x.cc)
#plot at
plot.top.assist.team.role.at <- ggplot(plot.top.assist.team.role.at, aes(x = FANTA.TEAM, y = ASSIST, fill = FANTA.TEAM, color = FANTA.TEAM))+
stat_summary(fun = "sum", geom = "bar", size = 2)+
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
plot.title = element_text(size = 40, face = "bold", hjust = 0.5),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x.at, by=1))+
ylab("ASSIST")+
ggtitle("ATTACCO")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.assist.team.role.at
rm(x.at)
grid.assist.role<-plot_grid(plot.top.assist.team.role.df,
plot.top.assist.team.role.cc,
plot.top.assist.team.role.at,
nrow = 1)
grid.assist.role
#create a list with all the assist graphs
assist.plot.list <- list(assist.line.plot = assist.line.plot,
plot.top.assist.team = plot.top.assist.team,
plot.top.assist.team.role.at = plot.top.assist.team.role.at,
plot.top.assist.team.role.cc = plot.top.assist.team.role.cc,
plot.top.assist.team.role.df = plot.top.assist.team.role.df,
grid.assist.role = grid.assist.role)
rm(assist.line.plot, plot.top.assist.team, plot.top.assist.team.role.at, plot.top.assist.team.role.cc, plot.top.assist.team.role.df, grid.assist.role)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#km plots
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
# ranking is removed from the list to simplify
#data managing (list, clean, reduce) -> check thatd df.complete.fanta nrow correspond to data_fata (i.e. same numbers of players)
#df.complete.serie.a$PLAYER <- sub(".*? (.+)", "\\1", df.complete.serie.a$PLAYER) #remove players name
#for future versions would it be better to scrap also the names otherwise is a huge mess (sometimes they change the name on the website remember babacar)
#remember that until each player doesn't play at least 1 game they will be not listed in the seria website, meaning that the column team will not converge while merging dataframes
#!!!correct dataset removing goals, assists and cards from initial games not played [check this every year to correct the df]
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#LIBRARIES
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
library(rvest)
library(XML)
library(dplyr)
library(tidyverse)
library(stringr)
library(rebus)
library(lubridate)
library(plyr) # note: plyr is loaded after dplyr, so plyr masks some dplyr verbs (e.g. summarise); load plyr before dplyr if both are needed
library(data.table)
library(ggplot2)
library(Rmisc)
library(ggforce)
library(cowplot)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#http://www.legaseriea.it
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#top scores
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.top.scores <- 'http://www.legaseriea.it/en/serie-a/statistics/Gol'#Specifying the url for desired website to be scraped
top.scores.table <- read_html(url.top.scores)#Reading the HTML code from the website
top.scores.html <- html_nodes(top.scores.table,'td')#Using CSS selectors to scrape the rankings section
top.scores <- html_text(top.scores.html)#Converting the ranking data to text
head(top.scores)#Let's have a look at the rankings
top.scores <- str_replace_all(top.scores, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
top.scores <- as.data.frame(top.scores)
top.scores <- data.frame(
top.scores,
ind = rep(1:6, length.out = nrow(top.scores))) # create a repeated index (length.out avoids recycling, which would duplicate every row)
top.scores <-unstack(top.scores, top.scores~ind)
names(top.scores) <- c("RANKING", "TEAM", "PLAYER", "GOALS", "PLAYED", "PENALTIES")#name headers
top.scores$TEAM <- gsub('\\s+', '', top.scores$TEAM)# strip whitespace (note: removes all whitespace, not just leading)
top.scores$PLAYER <- gsub(" $","", top.scores$PLAYER, perl=T)# remove trailing space
top.scores
#clean environment
rm(top.scores.html,top.scores.table)
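#NOTE: the scrape -> clean -> unstack -> rename pattern above is repeated verbatim for every statistic below.
#A sketch of a helper that factors out the parsing part (function and argument names are mine, not part of the
#original script; it assumes headers contain TEAM and PLAYER, as all the tables below do):
parse.stat.cells <- function(cells, headers) {
  cells <- gsub("[\r\n]", "", cells)                       # remove the newline tags
  long <- data.frame(cells = cells,
                     ind = rep(seq_along(headers), length.out = length(cells))) # repeated column index
  wide <- unstack(long, cells ~ ind)                       # one column per header
  names(wide) <- headers
  wide$TEAM <- gsub('\\s+', '', wide$TEAM)                 # strip whitespace from TEAM
  wide$PLAYER <- gsub(" $", "", wide$PLAYER, perl = TRUE)  # remove trailing space from PLAYER
  wide
}
#e.g. (not run): parse.stat.cells(html_text(html_nodes(read_html(url.top.scores), 'td')),
#                                 c("RANKING", "TEAM", "PLAYER", "GOALS", "PLAYED", "PENALTIES"))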
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#top assist
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.top.assist <- 'http://www.legaseriea.it/en/serie-a/statistics/NAssistVin'#Specifying the url for desired website to be scraped
top.assist.table <- read_html(url.top.assist)#Reading the HTML code from the website
top.assist.html <- html_nodes(top.assist.table,'td')#Using CSS selectors to scrape the rankings section
top.assist <- html_text(top.assist.html)#Converting the ranking data to text
head(top.assist)#Let's have a look at the rankings
top.assist <- str_replace_all(top.assist, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
top.assist <- as.data.frame(top.assist)
top.assist <- data.frame(
top.assist,
ind = rep(1:4, length.out = nrow(top.assist))) # create a repeated index
top.assist <-unstack(top.assist, top.assist~ind)
names(top.assist) <- c("RANKING", "TEAM", "PLAYER", "ASSIST")#name headers
top.assist$TEAM <- gsub('\\s+', '', top.assist$TEAM)# remove unnecessary space from the beginning
top.assist$PLAYER <- gsub(" $","", top.assist$PLAYER, perl=T)# remove unnecessary space from at the end
top.assist
#clean environment
rm(top.assist.html,top.assist.table)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#shot
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.shot <- 'http://www.legaseriea.it/en/serie-a/statistics/NTir'#Specifying the url for desired website to be scraped
shot.table <- read_html(url.shot)#Reading the HTML code from the website
shot.html <- html_nodes(shot.table,'td')#Using CSS selectors to scrape the rankings section
shot <- html_text(shot.html)#Converting the ranking data to text
head(shot)#Let's have a look at the rankings
shot <- str_replace_all(shot, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
shot <- as.data.frame(shot)
shot <- data.frame(
shot,
ind = rep(1:6, length.out = nrow(shot))) # create a repeated index
shot <-unstack(shot, shot~ind)
names(shot) <- c("RANKING", "TEAM", "PLAYER", "TOTAL.SHOT", "ON.TARGET", "OFF.TARGET")#name headers
shot$TEAM <- gsub('\\s+', '', shot$TEAM)# remove unnecessary space from the beginning
shot$PLAYER <- gsub(" $","", shot$PLAYER, perl=T)# remove unnecessary space from at the end
shot
#clean environment
rm(shot.html,shot.table)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#key passes
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.key.passes <- 'http://www.legaseriea.it/en/serie-a/statistics/PassChiave'#Specifying the url for desired website to be scraped
key.passes.table <- read_html(url.key.passes)#Reading the HTML code from the website
key.passes.html <- html_nodes(key.passes.table,'td')#Using CSS selectors to scrape the rankings section
key.passes <- html_text(key.passes.html)#Converting the ranking data to text
head(key.passes)#Let's have a look at the rankings
key.passes <- str_replace_all(key.passes, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
key.passes <- as.data.frame(key.passes)
key.passes <- data.frame(
key.passes,
ind = rep(1:4, length.out = nrow(key.passes))) # create a repeated index
key.passes <-unstack(key.passes, key.passes~ind)
names(key.passes) <- c("RANKING", "TEAM", "PLAYER", "KEY.PASSES")#name headers
key.passes$TEAM <- gsub('\\s+', '', key.passes$TEAM)# remove unnecessary space from the beginning
key.passes$PLAYER <- gsub(" $","", key.passes$PLAYER, perl=T)# remove unnecessary space from at the end
key.passes
#clean environment
rm(key.passes.html,key.passes.table)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#recoveries
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.recoveries <- 'http://www.legaseriea.it/en/serie-a/statistics/Recuperi'#Specifying the url for desired website to be scraped
recoveries.table <- read_html(url.recoveries)#Reading the HTML code from the website
recoveries.html <- html_nodes(recoveries.table,'td')#Using CSS selectors to scrape the rankings section
recoveries <- html_text(recoveries.html)#Converting the ranking data to text
head(recoveries)#Let's have a look at the rankings
recoveries <- str_replace_all(recoveries, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
recoveries <- as.data.frame(recoveries)
recoveries <- data.frame(
recoveries,
ind = rep(1:4, length.out = nrow(recoveries))) # create a repeated index
recoveries <-unstack(recoveries, recoveries~ind)
names(recoveries) <- c("RANKING", "TEAM", "PLAYER", "RECOVERIES")#name headers
recoveries$TEAM <- gsub('\\s+', '', recoveries$TEAM)# remove unnecessary space from the beginning
recoveries$PLAYER <- gsub(" $","", recoveries$PLAYER, perl=T)# remove unnecessary space from at the end
recoveries
#clean environment
rm(recoveries.html,recoveries.table)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#km
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.km <- 'http://www.legaseriea.it/en/serie-a/statistics/MediaKm'#Specifying the url for desired website to be scraped
km.table <- read_html(url.km)#Reading the HTML code from the website
km.html <- html_nodes(km.table,'td')#Using CSS selectors to scrape the rankings section
km <- html_text(km.html)#Converting the ranking data to text
head(km)#Let's have a look at the rankings
km <- str_replace_all(km, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
km <- as.data.frame(km)
km <- data.frame(
km,
ind = rep(1:5, length.out = nrow(km))) # create a repeated index
km <-unstack(km, km~ind)
names(km) <- c("RANKING", "TEAM", "PLAYER", "AVERAGE.km", "MINUTES")#name headers
km$TEAM <- gsub('\\s+', '', km$TEAM)# remove unnecessary space from the beginning
km$PLAYER <- gsub(" $","", km$PLAYER, perl=T)# remove unnecessary space from at the end
km
#clean environment
rm(km.html,km.table)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#fouls against
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.fouls.against <- 'http://www.legaseriea.it/en/serie-a/statistics/NFalSubiti'#Specifying the url for desired website to be scraped
fouls.against.table <- read_html(url.fouls.against)#Reading the HTML code from the website
fouls.against.html <- html_nodes(fouls.against.table,'td')#Using CSS selectors to scrape the rankings section
fouls.against <- html_text(fouls.against.html)#Converting the ranking data to text
head(fouls.against)#Let's have a look at the rankings
fouls.against <- str_replace_all(fouls.against, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
fouls.against <- as.data.frame(fouls.against)
fouls.against <- data.frame(
fouls.against,
ind = rep(1:4, length.out = nrow(fouls.against))) # create a repeated index
fouls.against <-unstack(fouls.against, fouls.against~ind)
names(fouls.against) <- c("RANKING", "TEAM", "PLAYER", "FOULS.AGAINST")#name headers
fouls.against$TEAM <- gsub('\\s+', '', fouls.against$TEAM)# remove unnecessary space from the beginning
fouls.against$PLAYER <- gsub(" $","", fouls.against$PLAYER, perl=T)# remove unnecessary space from at the end
fouls.against
#clean environment
rm(fouls.against.html,fouls.against.table)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#headed goals
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.headed.goals <- 'http://www.legaseriea.it/en/serie-a/statistics/GolTesta'#Specifying the url for desired website to be scraped
headed.goals.table <- read_html(url.headed.goals)#Reading the HTML code from the website
headed.goals.html <- html_nodes(headed.goals.table,'td')#Using CSS selectors to scrape the rankings section
headed.goals <- html_text(headed.goals.html)#Converting the ranking data to text
head(headed.goals)#Let's have a look at the rankings
headed.goals <- str_replace_all(headed.goals, "[\r\n]" , "")#remove all the tags
#transform the values in a dataframe
headed.goals <- as.data.frame(headed.goals)
headed.goals <- data.frame(
headed.goals,
ind = rep(1:4, length.out = nrow(headed.goals))) # create a repeated index
headed.goals <-unstack(headed.goals, headed.goals~ind)
names(headed.goals) <- c("RANKING", "TEAM", "PLAYER", "HEADED.GOALS")#name headers
headed.goals$TEAM <- gsub('\\s+', '', headed.goals$TEAM)# remove unnecessary space from the beginning
headed.goals$PLAYER <- gsub(" $","", headed.goals$PLAYER, perl=T)# remove unnecessary space from at the end
headed.goals
#clean environment
rm(headed.goals.html,headed.goals.table)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#posts/crossbar
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
url.posts.crossbar <- 'http://www.legaseriea.it/en/serie-a/statistics/Palo'#Specifying the url for desired website to be scraped
posts.crossbar.table <- read_html(url.posts.crossbar)#Reading the HTML code from the website
posts.crossbar.html <- html_nodes(posts.crossbar.table,'td')#Using CSS selectors to scrape the rankings section
posts.crossbar <- html_text(posts.crossbar.html)#Converting the ranking data to text
head(posts.crossbar)#Let's have a look at the rankings
posts.crossbar <- str_replace_all(posts.crossbar, "[\r\n]" , "")#remove carriage returns and newlines
#transform the values in a dataframe
posts.crossbar <- as.data.frame(posts.crossbar)
posts.crossbar <- data.frame(
  posts.crossbar,
  ind = rep_len(1:4, nrow(posts.crossbar))) # repeating 1:4 index, one value per row (rep(1:4, nrow) would be 4x too long and silently recycle the data)
posts.crossbar <- unstack(posts.crossbar, posts.crossbar~ind)
names(posts.crossbar) <- c("RANKING", "TEAM", "PLAYER", "POSTS.CROSSBAR")#name headers
posts.crossbar$TEAM <- gsub('\\s+', '', posts.crossbar$TEAM)# remove all whitespace from team names
posts.crossbar$PLAYER <- gsub(" $","", posts.crossbar$PLAYER, perl=T)# remove unnecessary space at the end
posts.crossbar
#clean environment
rm(posts.crossbar.html,posts.crossbar.table)
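#The scrape-and-reshape blocks above (headed goals, posts/crossbar, and the earlier statistics) repeat the same steps; they could be factored into one helper. A sketch, assuming every legaseriea.it statistics table yields 4 'td' cells per row (ranking, team, player, value), as the code above does; the function and argument names are hypothetical:

```r
# Sketch: reusable scraper for a legaseriea.it statistics table.
scrape.stat <- function(url, value.name) {
  cells <- html_text(html_nodes(read_html(url), 'td')) # scrape all table cells
  cells <- str_replace_all(cells, "[\r\n]", "")        # drop carriage returns and newlines
  df <- data.frame(cells, ind = rep_len(1:4, length(cells)))
  df <- unstack(df, cells ~ ind)                       # 4 cells per row -> 4 columns
  names(df) <- c("RANKING", "TEAM", "PLAYER", value.name)
  df$TEAM <- gsub('\\s+', '', df$TEAM)                 # strip whitespace from team names
  df$PLAYER <- gsub(" $", "", df$PLAYER, perl = TRUE)  # strip trailing space
  df
}
# e.g. headed.goals <- scrape.stat('http://www.legaseriea.it/en/serie-a/statistics/GolTesta', "HEADED.GOALS")
```
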
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#data managing (list, clean, reduce)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
df.list.source <- list(fouls.against = fouls.against, # create a list of dataframes
headed.goals = headed.goals,
key.passes = key.passes,
km = km,
posts.crossbar = posts.crossbar,
recoveries = recoveries,
shot = shot,
top.assist = top.assist,
top.scores = top.scores)
df.list.clean <- lapply(df.list.source, function(x) { x["RANKING"] <- NULL; x }) # remove the ranking column, otherwise it is a mess for merging
rm(fouls.against, headed.goals, key.passes, km, posts.crossbar, recoveries, shot, top.assist, top.scores) #clean environment
rm(url.fouls.against, url.headed.goals, url.key.passes, url.km, url.recoveries, url.posts.crossbar, url.shot, url.top.assist, url.top.scores) # clean environment from urls
#merge all the data set
df.complete.serie.a <- reduce(df.list.clean, full_join)
df.complete.serie.a <- df.complete.serie.a[!duplicated(df.complete.serie.a), ] #remove duplicate rows
team.list <- list("ATALANTA", "BOLOGNA", "BRESCIA", "CAGLIARI", "FIORENTINA", "GENOA", "HELLASVERONA", "INTER", "JUVENTUS", "LAZIO", "LECCE", "MILAN", "NAPOLI", "PARMA", "ROMA", "SAMPDORIA", "SASSUOLO", "SPAL", "TORINO", "UDINESE")
df.complete.serie.a <- df.complete.serie.a[ df.complete.serie.a$TEAM %in% team.list, ] #remove teams that are not in the list (for instance, players transferred after the first game are reported with their new team even if abroad)
rm(team.list)
#df.complete.serie.a$PLAYER <- sub(".*? (.+)", "\\1", df.complete.serie.a$PLAYER) #remove players name
#merge with data fanta
data_fanta <- read.csv(file.path ("D:\\My Drive\\fanta 2019_2020","DATA_FANTA.csv"), sep=";")
setnames(data_fanta, "GIOCATORE", "PLAYER")#rename GIOCATORE into player
data_fanta$PLAYER <- gsub(" $","", data_fanta$PLAYER, perl=T)# remove unnecessary space at the end
data_fanta$SQUADRA <- NULL #remove squadra
df.complete.fanta.uncorrected <- list(data_fanta = data_fanta, df.complete.serie.a = df.complete.serie.a)
df.complete.fanta.uncorrected <- reduce(df.complete.fanta.uncorrected, full_join)
df.complete.fanta.uncorrected <- df.complete.fanta.uncorrected[!is.na(df.complete.fanta.uncorrected$FANTA.TEAM), ]#remove all the players without a fanta team (!is.na() is safe even when nothing is NA, unlike -which(), which would then drop every row)
rm(data_fanta)
#str dataframe
df.complete.fanta.uncorrected$FANTA.TEAM <- as.factor(df.complete.fanta.uncorrected$FANTA.TEAM)
df.complete.fanta.uncorrected$PLAYER <- as.factor(df.complete.fanta.uncorrected$PLAYER)
df.complete.fanta.uncorrected$PREZZO.ASTA <- as.numeric(df.complete.fanta.uncorrected$PREZZO.ASTA)
df.complete.fanta.uncorrected$RUOLO <- as.factor(df.complete.fanta.uncorrected$RUOLO)
df.complete.fanta.uncorrected$TEAM <- as.factor(df.complete.fanta.uncorrected$TEAM)
df.complete.fanta.uncorrected$FOULS.AGAINST <- as.factor(df.complete.fanta.uncorrected$FOULS.AGAINST)
df.complete.fanta.uncorrected$HEADED.GOALS <- as.numeric(df.complete.fanta.uncorrected$HEADED.GOALS)
df.complete.fanta.uncorrected$KEY.PASSES <- as.numeric(df.complete.fanta.uncorrected$KEY.PASSES)
df.complete.fanta.uncorrected$AVERAGE.km <- as.numeric(df.complete.fanta.uncorrected$AVERAGE.km)
df.complete.fanta.uncorrected$MINUTES <- as.numeric(df.complete.fanta.uncorrected$MINUTES)
df.complete.fanta.uncorrected$POSTS.CROSSBAR <- as.numeric(df.complete.fanta.uncorrected$POSTS.CROSSBAR)
df.complete.fanta.uncorrected$RECOVERIES <- as.numeric(df.complete.fanta.uncorrected$RECOVERIES)
df.complete.fanta.uncorrected$TOTAL.SHOT <- as.numeric(df.complete.fanta.uncorrected$TOTAL.SHOT)
df.complete.fanta.uncorrected$ON.TARGET <- as.numeric(df.complete.fanta.uncorrected$ON.TARGET)
df.complete.fanta.uncorrected$OFF.TARGET <- as.numeric(df.complete.fanta.uncorrected$OFF.TARGET)
df.complete.fanta.uncorrected$ASSIST <- as.numeric(df.complete.fanta.uncorrected$ASSIST)
df.complete.fanta.uncorrected$GOALS <- as.numeric(df.complete.fanta.uncorrected$GOALS)
df.complete.fanta.uncorrected$PLAYED <- as.numeric(df.complete.fanta.uncorrected$PLAYED)
df.complete.fanta.uncorrected$PENALTIES <- as.numeric(df.complete.fanta.uncorrected$PENALTIES)
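#The column-by-column conversions above can be collapsed into two lapply() calls; note that as.numeric() on a factor returns the underlying level codes, so going through as.character() first is safer. A sketch, with the column groups taken from the block above:

```r
# Convert groups of columns in one pass instead of line by line.
num.cols <- c("PREZZO.ASTA", "HEADED.GOALS", "KEY.PASSES", "AVERAGE.km", "MINUTES",
              "POSTS.CROSSBAR", "RECOVERIES", "TOTAL.SHOT", "ON.TARGET", "OFF.TARGET",
              "ASSIST", "GOALS", "PLAYED", "PENALTIES")
fac.cols <- c("FANTA.TEAM", "PLAYER", "RUOLO", "TEAM", "FOULS.AGAINST")
# as.character() first guards against factor columns being turned into level codes
df.complete.fanta.uncorrected[num.cols] <-
  lapply(df.complete.fanta.uncorrected[num.cols], function(x) as.numeric(as.character(x)))
df.complete.fanta.uncorrected[fac.cols] <-
  lapply(df.complete.fanta.uncorrected[fac.cols], as.factor)
```
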
#list all the old df
df.backup <- list(df.complete.serie.a = df.complete.serie.a,
df.list.clean = df.list.clean,
df.list.source = df.list.source)
rm(df.complete.serie.a, df.list.clean, df.list.source)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#general graphic parameters
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
position = position_dodge(width = .75) #to have labels centered at graph end in case you need it
width = .75 #to have labels centered at graph end in case you need it
fillpalette <- c("AZTEKA FC" = "orange",# fill color for each team
"NEVERENDING TURD" = "blue",
"CSK LA RISSA" = "indianred3",
"MATTIA TEAM V5" = "white",
"PARTIZAN DEGRADO" = "firebrick",
"LOKOMOTIV SMILLAUS" = "yellow2",
"REAL GHETTO" = "black",
"ASSASSIN CRIN" = "forestgreen")
borderspalette <- c("AZTEKA FC" = "blue",# border color for each team
"NEVERENDING TURD" = "black",
"CSK LA RISSA" = "gray88",
"MATTIA TEAM V5" = "red",
"PARTIZAN DEGRADO" = "black",
"LOKOMOTIV SMILLAUS" = "darkgreen",
"REAL GHETTO" = "snow2",
"ASSASSIN CRIN" = "black")
fillpalette_line <- c("AZTEKA FC" = "orange",# line color for each team
"NEVERENDING TURD" = "blue",
"CSK LA RISSA" = "indianred3",
"MATTIA TEAM V5" = "azure3",
"PARTIZAN DEGRADO" = "firebrick",
"LOKOMOTIV SMILLAUS" = "yellow2",
"REAL GHETTO" = "black",
"ASSASSIN CRIN" = "forestgreen")
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#plot.top.scores.team
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
plot.top.scores.team <- aggregate(GOALS ~ FANTA.TEAM, df.complete.fanta.uncorrected, sum) #sum per fanta.team
plot.top.scores.team <- plot.top.scores.team[order(plot.top.scores.team$GOALS, decreasing = F), ] #reorder the sum
x = max(plot.top.scores.team$GOALS) #set the maximum value for the graph
plot.top.scores.team$FANTA.TEAM <- factor(plot.top.scores.team$FANTA.TEAM, levels = plot.top.scores.team$FANTA.TEAM) #use the factor to impose the order in the graph
plot.top.scores.team <- ggplot(plot.top.scores.team, aes(x = FANTA.TEAM, y = GOALS, fill = FANTA.TEAM, color = FANTA.TEAM))+
  stat_summary(fun = "sum", geom = "bar", size = 2)+ # fun.y was deprecated in ggplot2 3.3; fun is the current argument
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x, by=1))+
ylab("GOAL SEGNATI")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.scores.team
rm(x)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#plot.top.scores.team.role
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#after a long internal debate with myself I realized that the best solution for this graph is to split it into 3 datasets and then plot everything together
#after a second long internal debate with myself I decided that a dot-line plot better represents the data; I keep the extended version anyway
plot.top.scores.team.role <- aggregate(GOALS ~ FANTA.TEAM + RUOLO, df.complete.fanta.uncorrected, sum) #sum per fanta.team
plot.top.scores.team.role <- plot.top.scores.team.role[order(plot.top.scores.team.role$RUOLO, plot.top.scores.team.role$GOALS),] # reorder based on goals and role
plot.top.scores.team.role.df <- subset(plot.top.scores.team.role, subset = RUOLO == "DF", select = c(FANTA.TEAM,RUOLO,GOALS)) #subset separately each role
plot.top.scores.team.role.cc <- subset(plot.top.scores.team.role, subset = RUOLO == "CC", select = c(FANTA.TEAM,RUOLO,GOALS))
plot.top.scores.team.role.at <- subset(plot.top.scores.team.role, subset = RUOLO == "AT", select = c(FANTA.TEAM,RUOLO,GOALS))
x = max(plot.top.scores.team.role$GOALS) #set the maximum value for the graph
x.df = max(plot.top.scores.team.role.df$GOALS)
x.cc = max(plot.top.scores.team.role.cc$GOALS)
x.at = max(plot.top.scores.team.role.at$GOALS)
plot.top.scores.team.role.df$FANTA.TEAM <- factor(plot.top.scores.team.role.df$FANTA.TEAM, levels = plot.top.scores.team.role.df$FANTA.TEAM) #use the factor to impose the order in the graph
plot.top.scores.team.role.cc$FANTA.TEAM <- factor(plot.top.scores.team.role.cc$FANTA.TEAM, levels = plot.top.scores.team.role.cc$FANTA.TEAM) #use the factor to impose the order in the graph
plot.top.scores.team.role.at$FANTA.TEAM <- factor(plot.top.scores.team.role.at$FANTA.TEAM, levels = plot.top.scores.team.role.at$FANTA.TEAM) #use the factor to impose the order in the graph
#goals.line.plot
plot.top.scores.team.role$RUOLO <- factor(plot.top.scores.team.role$RUOLO, levels=c("DF", "CC", "AT"))# reorder x axis
goals.line.plot <- ggplot(plot.top.scores.team.role,
aes(x = RUOLO, y = GOALS,
group = FANTA.TEAM))+
geom_line(size = 2, aes(color=factor(FANTA.TEAM)))+
geom_point(aes(color=factor(FANTA.TEAM),
fill = factor(FANTA.TEAM)), shape=21, size = 5, stroke = 3)+
scale_fill_manual(values= borderspalette) + #to use my palettes
scale_colour_manual(values= fillpalette_line)+
theme(axis.title.y = element_text(size = 15, face = "bold", color = "black"),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_blank(),
panel.grid.major.y = element_line(size=.1, color="grey88"),
legend.position = "right",
legend.title = element_blank(),
legend.text = element_text(size = 12, face = "bold"))+
scale_y_continuous(breaks = seq(0, x, by=1))+
annotate("rect", xmin = 0.5, xmax = 1.5, ymin=0, ymax=Inf, alpha=0.1, fill="cyan")+#color sectors of background
annotate("rect", xmin = 1.5, xmax = 2.5, ymin=0, ymax=Inf, alpha=0.1, fill="lightgoldenrod")+
annotate("rect", xmin = 2.5, xmax = 3.5, ymin=0, ymax=Inf, alpha=0.1, fill="tomato3")+
ylab ("GOAL SEGNATI")
goals.line.plot
rm(x)
rm(plot.top.scores.team.role)
#plot df
plot.top.scores.team.role.df <- ggplot(plot.top.scores.team.role.df, aes(x = FANTA.TEAM, y = GOALS, fill = FANTA.TEAM, color = FANTA.TEAM))+
  stat_summary(fun = "sum", geom = "bar", size = 2)+
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
plot.title = element_text(size = 40, face = "bold", hjust = 0.5),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x.df, by=1))+
ylab("GOAL SEGNATI")+
ggtitle("DIFESA")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.scores.team.role.df
rm(x.df)
#plot cc
plot.top.scores.team.role.cc <- ggplot(plot.top.scores.team.role.cc, aes(x = FANTA.TEAM, y = GOALS, fill = FANTA.TEAM, color = FANTA.TEAM))+
  stat_summary(fun = "sum", geom = "bar", size = 2)+
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
plot.title = element_text(size = 40, face = "bold", hjust = 0.5),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x.cc, by=1))+
ylab("GOAL SEGNATI")+
ggtitle("CENTROCAMPO")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.scores.team.role.cc
rm(x.cc)
#plot at
plot.top.scores.team.role.at <- ggplot(plot.top.scores.team.role.at, aes(x = FANTA.TEAM, y = GOALS, fill = FANTA.TEAM, color = FANTA.TEAM))+
  stat_summary(fun = "sum", geom = "bar", size = 2)+
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
plot.title = element_text(size = 40, face = "bold", hjust = 0.5),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x.at, by=1))+
ylab("GOAL SEGNATI")+
ggtitle("ATTACCO")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.scores.team.role.at
rm(x.at)
grid.goals.role<-plot_grid(plot.top.scores.team.role.df,
plot.top.scores.team.role.cc,
plot.top.scores.team.role.at,
nrow = 1)
grid.goals.role
#create a list with all the goals graphs
goals.plot.list <- list(goals.line.plot = goals.line.plot,
plot.top.scores.team = plot.top.scores.team,
plot.top.scores.team.role.at = plot.top.scores.team.role.at,
plot.top.scores.team.role.cc = plot.top.scores.team.role.cc,
plot.top.scores.team.role.df = plot.top.scores.team.role.df,
grid.goals.role = grid.goals.role)
rm(goals.line.plot, plot.top.scores.team, plot.top.scores.team.role.at, plot.top.scores.team.role.cc, plot.top.scores.team.role.df, grid.goals.role)
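#The per-team bar charts above differ only in the summed variable, the axis label, and the title, so the repeated ggplot() pipeline could be factored out. A sketch reusing the palettes defined earlier; the helper name and arguments are hypothetical, and .data[[value]] assumes a reasonably recent ggplot2:

```r
# Sketch: one helper for the repeated horizontal bar chart per FANTA.TEAM.
team.bar.plot <- function(df, value, ylab.text, title.text = NULL) {
  ggplot(df, aes(x = FANTA.TEAM, y = .data[[value]], fill = FANTA.TEAM, color = FANTA.TEAM)) +
    stat_summary(fun = "sum", geom = "bar", size = 2) +
    theme(axis.title.y = element_blank(),
          axis.text.y = element_text(size = 12, face = "bold", color = "black"),
          axis.title.x = element_text(size = 15, face = "bold", color = "black"),
          axis.text.x = element_text(size = 12, face = "bold", color = "black"),
          panel.background = element_blank(),
          axis.line = element_line(colour = "black"),
          panel.grid.major.x = element_line(colour = "grey"),
          plot.title = element_text(size = 40, face = "bold", hjust = 0.5),
          legend.position = "none") +
    scale_y_continuous(breaks = seq(0, max(df[[value]]), by = 1)) +
    ylab(ylab.text) +
    ggtitle(title.text) +
    coord_flip() +
    scale_fill_manual(values = fillpalette) +
    scale_color_manual(values = borderspalette)
}
# e.g. team.bar.plot(plot.top.scores.team.role.df, "GOALS", "GOAL SEGNATI", "DIFESA")
```
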
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#plot.top.assist.team
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
plot.top.assist.team <- aggregate(ASSIST ~ FANTA.TEAM, df.complete.fanta.uncorrected, sum) #sum per fanta.team
plot.top.assist.team <- plot.top.assist.team[order(plot.top.assist.team$ASSIST, decreasing = F), ] #reorder the sum
x = max(plot.top.assist.team$ASSIST) #set the maximum value for the graph
plot.top.assist.team$FANTA.TEAM <- factor(plot.top.assist.team$FANTA.TEAM, levels = plot.top.assist.team$FANTA.TEAM) #use the factor to impose the order in the graph
plot.top.assist.team <- ggplot(plot.top.assist.team, aes(x = FANTA.TEAM, y = ASSIST, fill = FANTA.TEAM, color = FANTA.TEAM))+
  stat_summary(fun = "sum", geom = "bar", size = 2)+
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x, by=1))+
ylab("ASSIST")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.assist.team
rm(x)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#plot.top.assist.team.role
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
plot.top.assist.team.role <- aggregate(ASSIST ~ FANTA.TEAM + RUOLO, df.complete.fanta.uncorrected, sum) #sum per fanta.team
plot.top.assist.team.role <- plot.top.assist.team.role[order(plot.top.assist.team.role$RUOLO, plot.top.assist.team.role$ASSIST),] # reorder based on goals and role
plot.top.assist.team.role.df <- subset(plot.top.assist.team.role, subset = RUOLO == "DF", select = c(FANTA.TEAM,RUOLO,ASSIST)) #subset separately each role
plot.top.assist.team.role.cc <- subset(plot.top.assist.team.role, subset = RUOLO == "CC", select = c(FANTA.TEAM,RUOLO,ASSIST))
plot.top.assist.team.role.at <- subset(plot.top.assist.team.role, subset = RUOLO == "AT", select = c(FANTA.TEAM,RUOLO,ASSIST))
x = max(plot.top.assist.team.role$ASSIST) #set the maximum value for the graph
x.df = max(plot.top.assist.team.role.df$ASSIST)
x.cc = max(plot.top.assist.team.role.cc$ASSIST)
x.at = max(plot.top.assist.team.role.at$ASSIST)
plot.top.assist.team.role.df$FANTA.TEAM <- factor(plot.top.assist.team.role.df$FANTA.TEAM, levels = plot.top.assist.team.role.df$FANTA.TEAM) #use the factor to impose the order in the graph
plot.top.assist.team.role.cc$FANTA.TEAM <- factor(plot.top.assist.team.role.cc$FANTA.TEAM, levels = plot.top.assist.team.role.cc$FANTA.TEAM) #use the factor to impose the order in the graph
plot.top.assist.team.role.at$FANTA.TEAM <- factor(plot.top.assist.team.role.at$FANTA.TEAM, levels = plot.top.assist.team.role.at$FANTA.TEAM) #use the factor to impose the order in the graph
#assist.line.plot
plot.top.assist.team.role$RUOLO <- factor(plot.top.assist.team.role$RUOLO, levels=c("DF", "CC", "AT"))# reorder x axis
assist.line.plot <- ggplot(plot.top.assist.team.role,
aes(x = RUOLO, y = ASSIST,
group = FANTA.TEAM))+
geom_line(size = 2, aes(color=factor(FANTA.TEAM)))+
geom_point(aes(color=factor(FANTA.TEAM),
fill = factor(FANTA.TEAM)), shape=21, size = 5, stroke = 3)+
scale_fill_manual(values= borderspalette) + #to use my palettes
scale_colour_manual(values= fillpalette_line)+
theme(axis.title.y = element_text(size = 15, face = "bold", color = "black"),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_blank(),
panel.grid.major.y = element_line(size=.1, color="grey88"),
legend.position = "right",
legend.title = element_blank(),
legend.text = element_text(size = 12, face = "bold"))+
scale_y_continuous(breaks = seq(0, x, by=1))+
annotate("rect", xmin = 0.5, xmax = 1.5, ymin=0, ymax=Inf, alpha=0.1, fill="cyan")+#color sectors of background
annotate("rect", xmin = 1.5, xmax = 2.5, ymin=0, ymax=Inf, alpha=0.1, fill="lightgoldenrod")+
annotate("rect", xmin = 2.5, xmax = 3.5, ymin=0, ymax=Inf, alpha=0.1, fill="tomato3")+
ylab ("ASSIST")
assist.line.plot
rm(x)
rm(plot.top.assist.team.role)
#plot df
plot.top.assist.team.role.df <- ggplot(plot.top.assist.team.role.df, aes(x = FANTA.TEAM, y = ASSIST, fill = FANTA.TEAM, color = FANTA.TEAM))+
  stat_summary(fun = "sum", geom = "bar", size = 2)+
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
plot.title = element_text(size = 40, face = "bold", hjust = 0.5),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x.df, by=1))+
ylab("ASSIST")+
ggtitle("DIFESA")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.assist.team.role.df
rm(x.df)
#plot cc
plot.top.assist.team.role.cc <- ggplot(plot.top.assist.team.role.cc, aes(x = FANTA.TEAM, y = ASSIST, fill = FANTA.TEAM, color = FANTA.TEAM))+
  stat_summary(fun = "sum", geom = "bar", size = 2)+
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
plot.title = element_text(size = 40, face = "bold", hjust = 0.5),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x.cc, by=1))+
ylab("ASSIST")+
ggtitle("CENTROCAMPO")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.assist.team.role.cc
rm(x.cc)
#plot at
plot.top.assist.team.role.at <- ggplot(plot.top.assist.team.role.at, aes(x = FANTA.TEAM, y = ASSIST, fill = FANTA.TEAM, color = FANTA.TEAM))+
  stat_summary(fun = "sum", geom = "bar", size = 2)+
theme(axis.title.y = element_blank(),
axis.text.y = element_text(size = 12, face = "bold", color = "black"),
axis.title.x = element_text(size = 15, face = "bold", color = "black"),
axis.text.x = element_text(size = 12, face = "bold", color = "black"),
panel.background = element_blank(),
axis.line = element_line(colour = "black"),
panel.grid.major.x = element_line(colour = "grey"),
plot.title = element_text(size = 40, face = "bold", hjust = 0.5),
legend.position = "none")+
scale_y_continuous(breaks = seq(0, x.at, by=1))+
ylab("ASSIST")+
ggtitle("ATTACCO")+
coord_flip()+
scale_fill_manual(values = fillpalette)+
scale_color_manual(values = borderspalette)
plot.top.assist.team.role.at
rm(x.at)
grid.assist.role<-plot_grid(plot.top.assist.team.role.df,
plot.top.assist.team.role.cc,
plot.top.assist.team.role.at,
nrow = 1)
grid.assist.role
#create a list with all the assist graphs
assist.plot.list <- list(assist.line.plot = assist.line.plot,
plot.top.assist.team = plot.top.assist.team,
plot.top.assist.team.role.at = plot.top.assist.team.role.at,
plot.top.assist.team.role.cc = plot.top.assist.team.role.cc,
plot.top.assist.team.role.df = plot.top.assist.team.role.df,
grid.assist.role = grid.assist.role)
rm(assist.line.plot, plot.top.assist.team, plot.top.assist.team.role.at, plot.top.assist.team.role.cc, plot.top.assist.team.role.df, grid.assist.role)
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#km plots
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------------------------------------------------------------------
notify <- function() {
e <- get("e", parent.frame())
if(e$val == "No") return(TRUE)
good <- FALSE
while(!good) {
# Get info
name <- readline_clean("What is your full name? ")
address <- readline_clean("What is the email address of the person you'd like to notify? ")
# Repeat back to them
message("\nDoes everything look good?\n")
message("Your name: ", name, "\n", "Send to: ", address)
yn <- select.list(c("Yes", "No"), graphics = FALSE)
if(yn == "Yes") good <- TRUE
}
# Get course and lesson names
course_name <- attr(e$les, "course_name")
lesson_name <- attr(e$les, "lesson_name")
subject <- paste(name, "just completed", course_name, "-", lesson_name)
body = ""
# Send email
swirl:::email(address, subject, body)
hrule()
message("I just tried to create a new email with the following info:\n")
message("To: ", address)
message("Subject: ", subject)
message("Body: <empty>")
message("\nIf it didn't work, you can send the same email manually.")
hrule()
# Return TRUE to satisfy swirl and return to course menu
TRUE
}
readline_clean <- function(prompt = "") {
wrapped <- strwrap(prompt, width = getOption("width") - 2)
mes <- stringr::str_c("| ", wrapped, collapse = "\n")
message(mes)
readline()
}
hrule <- function() {
message("\n", paste0(rep("#", getOption("width") - 2), collapse = ""), "\n")
}
# Put custom tests in this file.
# Uncommenting the following line of code will disable
# auto-detection of new variables and thus prevent swirl from
# executing every command twice, which can slow things down.
# AUTO_DETECT_NEWVAR <- FALSE
# However, this means that you should detect user-created
# variables when appropriate. The answer test, creates_new_var()
# can be used for for the purpose, but it also re-evaluates the
# expression which the user entered, so care must be taken.
# Get the swirl state
getState <- function(){
# Whenever swirl is running, its callback is at the top of its call stack.
# Swirl's state, named e, is stored in the environment of the callback.
environment(sys.function(1))$e
}
# Retrieve the log from swirl's state
getLog <- function(){
getState()$log
}
submit_log <- function(){
# Please edit the link below
pre_fill_link <- "https://docs.google.com/forms/d/e/1FAIpQLSfBwgtRYTv4n1YFK9dHwmE6d_jvdOwOBJaB46O1oP6I1l9bHQ/viewform?entry.516137349"
#pre_fill_link <- "https://docs.google.com/forms/d/1ngWrz5A5w5RiNSuqzdotxkzgk0DKU-88FmnTHj20nuI/viewform?entry.1733728592"
# Do not edit the code below
if(!grepl("=$", pre_fill_link)){
pre_fill_link <- paste0(pre_fill_link, "=")
}
p <- function(x, p, f, l = length(x)){if(l < p){x <- c(x, rep(f, p - l))};x}
temp <- tempfile()
log_ <- getLog()
nrow_ <- max(unlist(lapply(log_, length)))
log_tbl <- data.frame(user = rep(log_$user, nrow_),
course_name = rep(log_$course_name, nrow_),
lesson_name = rep(log_$lesson_name, nrow_),
question_number = p(log_$question_number, nrow_, NA),
correct = p(log_$correct, nrow_, NA),
attempt = p(log_$attempt, nrow_, NA),
skipped = p(log_$skipped, nrow_, NA),
datetime = p(log_$datetime, nrow_, NA),
stringsAsFactors = FALSE)
write.csv(log_tbl, file = temp, row.names = FALSE)
encoded_log <- base64encode(temp)
browseURL(paste0(pre_fill_link, encoded_log))
}
| /Courses/IEG301_R_Programming_E/Sequences_of_Numbers/customTests.R | no_license | yusriy/R_swirl | R | false | false | 3,563 | r | notify <- function() {
e <- get("e", parent.frame())
if(e$val == "No") return(TRUE)
good <- FALSE
while(!good) {
# Get info
name <- readline_clean("What is your full name? ")
address <- readline_clean("What is the email address of the person you'd like to notify? ")
# Repeat back to them
message("\nDoes everything look good?\n")
message("Your name: ", name, "\n", "Send to: ", address)
yn <- select.list(c("Yes", "No"), graphics = FALSE)
if(yn == "Yes") good <- TRUE
}
# Get course and lesson names
course_name <- attr(e$les, "course_name")
lesson_name <- attr(e$les, "lesson_name")
subject <- paste(name, "just completed", course_name, "-", lesson_name)
body = ""
# Send email
swirl:::email(address, subject, body)
hrule()
message("I just tried to create a new email with the following info:\n")
message("To: ", address)
message("Subject: ", subject)
message("Body: <empty>")
message("\nIf it didn't work, you can send the same email manually.")
hrule()
# Return TRUE to satisfy swirl and return to course menu
TRUE
}
readline_clean <- function(prompt = "") {
wrapped <- strwrap(prompt, width = getOption("width") - 2)
mes <- stringr::str_c("| ", wrapped, collapse = "\n")
message(mes)
readline()
}
hrule <- function() {
message("\n", paste0(rep("#", getOption("width") - 2), collapse = ""), "\n")
}
# Put custom tests in this file.
# Uncommenting the following line of code will disable
# auto-detection of new variables and thus prevent swirl from
# executing every command twice, which can slow things down.
# AUTO_DETECT_NEWVAR <- FALSE
# However, this means that you should detect user-created
# variables when appropriate. The answer test, creates_new_var()
# can be used for for the purpose, but it also re-evaluates the
# expression which the user entered, so care must be taken.
# Get the swirl state
getState <- function(){
# Whenever swirl is running, its callback is at the top of its call stack.
# Swirl's state, named e, is stored in the environment of the callback.
environment(sys.function(1))$e
}
# Retrieve the log from swirl's state
getLog <- function(){
getState()$log
}
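# Sketch of a custom answer test built on the helpers above. This is an
# illustrative example, not part of swirl's API: the name expr_is_even and
# the even-number check are made up for demonstration. A custom test reads
# the user's result from swirl's state (e$val, as used in notify() above)
# and returns TRUE for a correct answer, FALSE otherwise.
expr_is_even <- function(){
  e <- getState()
  is.numeric(e$val) && length(e$val) == 1 && e$val %% 2 == 0
}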
submit_log <- function(){
# Please edit the link below
pre_fill_link <- "https://docs.google.com/forms/d/e/1FAIpQLSfBwgtRYTv4n1YFK9dHwmE6d_jvdOwOBJaB46O1oP6I1l9bHQ/viewform?entry.516137349"
#pre_fill_link <- "https://docs.google.com/forms/d/1ngWrz5A5w5RiNSuqzdotxkzgk0DKU-88FmnTHj20nuI/viewform?entry.1733728592"
# Do not edit the code below
if(!grepl("=$", pre_fill_link)){
pre_fill_link <- paste0(pre_fill_link, "=")
}
  # p(): pad vector x to length p with fill value f
  p <- function(x, p, f, l = length(x)){if(l < p){x <- c(x, rep(f, p - l))};x}
temp <- tempfile()
log_ <- getLog()
nrow_ <- max(unlist(lapply(log_, length)))
log_tbl <- data.frame(user = rep(log_$user, nrow_),
course_name = rep(log_$course_name, nrow_),
lesson_name = rep(log_$lesson_name, nrow_),
question_number = p(log_$question_number, nrow_, NA),
correct = p(log_$correct, nrow_, NA),
attempt = p(log_$attempt, nrow_, NA),
skipped = p(log_$skipped, nrow_, NA),
datetime = p(log_$datetime, nrow_, NA),
stringsAsFactors = FALSE)
write.csv(log_tbl, file = temp, row.names = FALSE)
  # base64encode() is provided by the base64enc package
  encoded_log <- base64encode(temp)
browseURL(paste0(pre_fill_link, encoded_log))
}
plotdata=function(...){
args=list(...)
createwindow<-function(yheight){
if(is.na(yheight)&&!is.na(plotHeight)&&!is.na(eventMargin)){ yheight=(plotHeight+eventMargin)/(1-labelMargin) }
if(is.na(yheight)&&is.na(timeFile)){ yheight=plotHeight }
#Creating rendering window
graphics.off()
if(outputType=="wmf")
win.metafile(paste(paste(folder,filename, sep=""), "wmf", sep="."),xdim,yheight,12, restoreConsole=FALSE)
if(outputType=="png")
png(paste(paste(folder,filename, sep=""), "png", sep="."),xdim,yheight,12, units="in", restoreConsole=FALSE, res=dpi)
return(yheight)
}
axisscale<-function(smax, smin, smult, sscale, maxticks){
sscale<-10^floor(log(abs(smax-smin))/log(10)-1)
smult=1
if(ceiling((smax-smin)/sscale)<maxticks){
}else if(ceiling((smax-smin)/(2*sscale))<maxticks){
sscale=2*sscale
smult=2
}else if(ceiling((smax-smin)/(3*sscale))<maxticks){
sscale=3*sscale
smult=3
}else if(ceiling((smax-smin)/(5*sscale))<maxticks){
sscale=5*sscale
smult=5
}else if(ceiling((smax-smin)/(10*sscale))<maxticks){
sscale=10*sscale
smult=5
}else if(ceiling((smax-smin)/(20*sscale))<maxticks){
sscale=20*sscale
smult=4
}else if(ceiling((smax-smin)/(30*sscale))<maxticks){
sscale=30*sscale
smult=3
}else{
sscale=50*sscale
smult=5
}
return(c(smax, smin, smult, sscale, maxticks))
}
remainder<-function(fractional){return(fractional-trunc(fractional))}
eventLocation<-function(xstart, xend, mintheta, toright=NA){
if(charHeight>eventMargin)
return(NULL)
if(xstart==xmin)
bin=as.vector(subset(timeline, timeline["start"]<=xend&timeline["start"]>=xstart)[["start"]])
else
bin=as.vector(subset(timeline, timeline["start"]<=xend&timeline["start"]>xstart)[["start"]])
if(length(bin)<1)
return(list())
theta=atan(charWidth/charHeight)-acos(eventMargin/sqrt(charHeight^2+charWidth^2))
if(is.nan(theta)){ theta=pi/2 }
width=charHeight/sin(theta)
lrr=0
lrl=0
if(is.na(toright)){
if(mean(bin)<(xstart+xend)/2)
lrl=max(xinch(charWidth*cos(theta)-width*cos(theta)^2-par()[["mai"]][2]),0)
else
lrr=max(xinch(charWidth*cos(theta)-width*cos(theta)^2-par()[["mai"]][4]),0)
total=xinch(width*length(bin))+lrr+lrl
}else if(toright){
lrr=max(max(xinch(charWidth*cos(theta)-width*cos(theta)^2)-(xmax-xend),0)-xinch(par()[["mai"]][4]),0)
total=xinch(width*length(bin))+lrr
}else if(!toright){
lrl=max(max(xinch(charWidth*cos(theta)-width*cos(theta)^2)-(xstart-xmin),0)-xinch(par()[["mai"]][2]-width),0)
total=xinch(width*length(bin))+lrl
}
if((xend-xstart)<total&&is.na(toright)&&theta==pi/2){
cat(" Too many labels.\n")
return(list(theta=theta, location=xinch(.05)+xstart+seq(0,length(bin)-1)*xinch(width), direction="right"))
}
if((xend-xstart)<total)
return(NULL)
if(length(bin)<2)
# The toright==NA and toright==TRUE cases were identical, so they are merged
if(is.na(toright)||toright){
if(bin[1]>xstart+lrl&bin[1]<xend-lrr-xinch(width)){
return(list(theta=theta, location=bin[1], direction="right"))
}else if(bin[1]<xstart+lrl){
return(list(theta=theta, location=xstart+lrl, direction="right"))
}else{
return(list(theta=theta, location=xend-lrr-xinch(width), direction="right"))
}
}else{
if(bin[1]>xstart+lrl&bin[1]<xend-lrr-xinch(width)){
return(list(theta=theta, location=bin[1], direction="left"))
}else if(bin[1]<xstart+lrl){
return(list(theta=theta, location=xstart+lrl, direction="left"))
}else{
return(list(theta=theta, location=xend-lrr-xinch(width), direction="left"))
}
}
if(length(bin)>=2){
if(is.na(toright)){
binmin=xstart+lrl
binmax=xend-lrr
}else if(toright){
binmin=xstart
binmax=xend-lrr
}else{
binmin=xstart+lrl
binmax=xend
}
binmean=seq(length(bin)-1)*(binmax-binmin)/length(bin)+binmin
lmean=binmean-bin[seq(1,length(bin)-1)]
rmean=binmean-bin[seq(2,length(bin))]
means=lmean/rmean
means[means<1&means>0]=1/means[means<1&means>0]
if(any(means<0)){
binmean=(binmean[means<0])[which.min(abs(binmean[means<0]-(binmin+binmax)/2))]
}else{
binmean=binmean[which.min(means)]
}
if(is.na(toright)){
left=eventLocation(xstart, binmean, theta, FALSE)
right=eventLocation(binmean, xend, theta, TRUE)
}else if(toright){
right=eventLocation(binmean, xend, theta, toright)
left=eventLocation(xstart, binmean, theta, toright)
}else{
left=eventLocation(xstart, binmean, theta, toright)
right=eventLocation(binmean, xend, theta, toright)
}
}
if(is.null(left)||is.null(right)){
if(is.na(toright))
if(mean(bin)>(xstart+xend)/2){
left=eventLocation(xstart, xend, theta, FALSE)
if(!is.null(left))
return(list(theta=theta, location=left[["location"]], direction="left"))
else
return(list(theta=theta, location=-xinch(.05)+xend-seq(length(bin)-1,0)*xinch(width), direction="left"))
}else{
right=eventLocation(xstart, xend, theta, TRUE)
if(!is.null(right))
return(list(theta=theta, location=right[["location"]], direction="right"))
else
return(list(theta=theta, location=xinch(.05)+xstart+seq(0,length(bin)-1)*xinch(width), direction="right"))
}
else
if(toright){
if(mean(bin)<(xstart+xend)/2)
return(list(theta=theta, location=xstart+seq(0,length(bin)-1)*xinch(width), direction="right"))
else
return(list(theta=theta, location=xend-seq(length(bin),1)*xinch(width)-lrr, direction="right"))
}else{
if(mean(bin)>(xstart+xend)/2)
return(list(theta=theta, location=xend-seq(length(bin)-1,0)*xinch(width)-xinch(width), direction="left"))
else
return(list(theta=theta, location=xstart+seq(1,length(bin))*xinch(width)-xinch(width)+lrl, direction="left"))
}
}
if(length(left[["location"]])==0&&is.na(toright))
return(list(theta=theta, location=eventLocation(xstart, xend, theta, FALSE)[["location"]], direction="left"))
else if(length(right[["location"]])==0&&is.na(toright))
return(list(theta=theta, location=eventLocation(xstart, xend, theta, TRUE)[["location"]], direction="right"))
else
return(list(theta=theta, location=c(left[["location"]], right[["location"]]), direction="split", median=binmean))
}
renderlegend<-function(isMulti,isTop, errorBarX, errorBarY){
originalsize=legendSize
originalcols=legendcols
colors=vector()
pics=vector()
colorindex=1
for(datafile in seq(data)){
if(length(datafile)!=0)
for(var in seq(2, dim(data[[datafile]])[2])){
if(!isMulti||((isTop&&altVars[[datafile]][var-1])||(!isTop&&!altVars[[datafile]][var-1]))){
colors=append(colors, hsv(h=remainder(1+primary-bandwidth/2+seq(0,totalvars-1)/(max(totalvars-1,1))*bandwidth), s=.85, v=.85)[colorindex])
pics=append(pics, seq(0, totalvars)[colorindex])
}
colorindex=colorindex+1
}
}
if(totalvars>1){
names=vector()
n=1
if(isTop&&isMulti){
tempymin=altymin
tempymax=altymax
}else{
tempymin=ymin
tempymax=ymax
}
for(file in data){
visiblevars=!isMulti|((isTop&altVars[[n]])|(!isTop&!altVars[[n]]))
namelist=dimnames(data[[n]])[[2]][seq(2, dim(as.matrix(dimnames(data[[n]])[[2]]))[1])]
for(i in seq(length(namelist))){
if(visiblevars[i])
if(length(fileSuffix)<n||length(fileSuffix[n])<i&&!suppNames){
names=append(names, namelist[i])
}else if(suppNames){
names=append(names, fileSuffix[n])
}else{
names=append(names, paste(fileSuffix[n], namelist[i], sep="_"))
}
}
n=n+1
}
names=chartr("_"," ",names)
if(is.na(title)||title=="")
title=NULL
if(is.na(legendPos)){
legenddata=legend(x=xmax/2, y=tempymax/2, names, lwd=1, col=colors[seq(dim(reduxdata)[2]-1)], bg=bgcolor, pch=seq(0, totalvars), ncol=legendcols, plot=FALSE, title=title, pt.cex=1, cex=legendSize)
curMinX=xmin+xinch(.00)+legenddata[[1]][[1]]/2*(1+.05)
curMinY=tempymin+yinch(.00)+legenddata[[1]][[2]]/2*(1+.05)
curMaxX=xmax-xinch(.00)-legenddata[[1]][[1]]/2*(1+.05)
curMaxY=tempymax-yinch(.00)-legenddata[[1]][[2]]/2*(1+.05)
divides=4
maxdist=-10000
lwidth=legenddata[[1]][[1]]*(1/xinch(1))
lheight=legenddata[[1]][[2]]*(1/yinch(1))
distance=100000
increment=divides
while(increment<=32){
hasChanged=FALSE
if(curMinX<curMaxX){
i=1
while(i<=increment){
j=1
xloc=(curMinX+(curMaxX-curMinX)*(i-.5)/increment)
while(j<=increment){
yloc=(curMinY+(curMaxY-curMinY)*(j-.5)/increment)
ex=(c(errorBarX, errorBarX)-xloc)*(1/xinch(1))
ey=(c(errorBarY*(1-singleErrorPercent), errorBarY*(1+singleErrorPercent))-yloc)*(1/yinch(1))
distance=min((sqrt((ex)^2+(ey)^2)-sqrt((lwidth/2)^2+(lwidth/2*ey/ex)^2))[abs(ex/ey)>lwidth/lheight])
distance=min(distance, min((sqrt((ex)^2+(ey)^2)-sqrt((lheight/2*ex/ey)^2+(lheight/2)^2))[abs(ex/ey)<=lwidth/lheight]))
n=1
for(reduxdata in data){
if(length(reduxdata)==0)
next
if(!isMulti|((isTop&altVars[[n]])|(!isTop&!altVars[[n]]))){
dx=((reduxdata[seq(dim(reduxdata)[1])]-xloc)*(1/xinch(1)))
dy=matrix((reduxdata[seq(dim(reduxdata)[1]), seq(2, dim(reduxdata)[2])]-yloc)*(1/yinch(1)))
distance=min(distance, min((sqrt((dx)^2+(dy)^2)-sqrt((lwidth/2)^2+(lwidth/2*dy/dx)^2))[abs(dx/dy)>lwidth/lheight]))
distance=min(distance, min((sqrt((dx)^2+(dy)^2)-sqrt((lheight/2*dx/dy)^2+(lheight/2)^2))[abs(dx/dy)<=lwidth/lheight]))
}
n=n+1
}
if(distance>=maxdist||!hasChanged){
maxdist=distance
boxX1=(curMinX+(curMaxX-curMinX)*(i-1)/increment)
boxY1=(curMinY+(curMaxY-curMinY)*(j-1)/increment)
boxX2=(curMinX+(curMaxX-curMinX)*(i)/increment)
boxY2=(curMinY+(curMaxY-curMinY)*(j)/increment)
hasChanged=TRUE
}
j=j+1
}
i=i+1
}
increment=max(2,trunc(increment/2))
curMinX=boxX1
curMinY=boxY1
curMaxX=boxX2
curMaxY=boxY2
}else{
divides=1000
}
if((curMaxX-curMinX)<(xmax-xmin)/(10*xdim)&&(curMaxY-curMinY)<(tempymax-tempymin)/(10*ydim)){
if(maxdist>0){
cat(" Legend position within accuracy\n")
increment=1000
}else{
cat(" Empty legend position not found, increasing search\n")
legenddata=legend(x=xmax/2, y=tempymax/2, names, lwd=1, col=colors[seq(dim(reduxdata)[2]-1)], bg=bgcolor, pch=seq(0, totalvars), ncol=legendcols, plot=FALSE, title=title, pt.cex=1)
divides=divides*2
increment=divides
curMinX=xmin+legenddata[[1]][[1]]/2*(1+.05)
curMinY=tempymin+legenddata[[1]][[2]]/2*(1+.05)
curMaxX=xmax-legenddata[[1]][[1]]/2*(1+.05)
curMaxY=tempymax-legenddata[[1]][[2]]/2*(1+.05)
}
}
if(divides>32&&legendcols<3){
cat(" No space available, switching columns\n")
divides=4
increment=divides
legendcols=legendcols+1
legenddata=legend(x=xmax/2, y=tempymax/2, names, lwd=1, col=colors[seq(dim(reduxdata)[2]-1)], bg=bgcolor, pch=seq(0, totalvars), ncol=legendcols, plot=FALSE, title=title, pt.cex=1, cex=legendSize)
curMinX=xmin+legenddata[[1]][[1]]/2*(1+.05)
curMinY=tempymin+legenddata[[1]][[2]]/2*(1+.05)
curMaxX=xmax-legenddata[[1]][[1]]/2*(1+.05)
curMaxY=tempymax-legenddata[[1]][[2]]/2*(1+.05)
lwidth=legenddata[[1]][[1]]*(1/xinch(1))
lheight=legenddata[[1]][[2]]*(1/yinch(1))
}else if(divides>32&&legendSize>1/3){
cat(" No space available, shrinking legend\n")
divides=4
increment=divides
legendcols=originalcols
legendSize=legendSize*.75
legenddata=legend(x=xmax/2, y=tempymax/2, names, lwd=1, col=colors[seq(dim(reduxdata)[2]-1)], bg=bgcolor, pch=seq(0, totalvars), ncol=legendcols, plot=FALSE, title=title, pt.cex=1, cex=legendSize)
curMinX=xmin+legenddata[[1]][[1]]/2*(1+.05)
curMinY=tempymin+legenddata[[1]][[2]]/2*(1+.05)
curMaxX=xmax-legenddata[[1]][[1]]/2*(1+.05)
curMaxY=tempymax-legenddata[[1]][[2]]/2*(1+.05)
lwidth=legenddata[[1]][[1]]*(1/xinch(1))
lheight=legenddata[[1]][[2]]*(1/yinch(1))
}
}
xalign=.5
yalign=.5
legendx=(boxX1+(boxX2-boxX1)/2)
legendy=(boxY1+(boxY2-boxY1)/2)
if(order){
tempnames=names[length(names):1]
temppics=pics[length(pics):1]
tempcolors=colors[length(colors):1]
}else{
tempnames=names
temppics=pics
tempcolors=colors
}
if(symbols==0){
legend(x=legendx, y=legendy, tempnames, lty=1, lwd=lineWidth, col=tempcolors, bg=hsv(1,0,1, alpha=.5), xjust=xalign, yjust=yalign, cex=legendSize, text.col=framecolor, ncol=legendcols, bty="o", title=title, pt.cex=1, pt.lwd=lineWidth, box.col=hsv(1,1,0, alpha=.15), box.lwd=lineWidth)
}else{
legend(x=legendx, y=legendy, tempnames, lty=0, col=tempcolors, bg=hsv(1,0,1, alpha=.5), pch=temppics, xjust=xalign, yjust=yalign, cex=legendSize, text.col=framecolor, ncol=legendcols, bty="o", title=title, pt.cex=1, pt.lwd=lineWidth, box.col=hsv(1,1,0, alpha=.15), box.lwd=lineWidth)
if(pointAccent){
legend(x=legendx, y=legendy, tempnames, lty=0, col="black", bg=hsv(1,0,1, alpha=.5), pch=temppics, xjust=xalign, yjust=yalign, cex=legendSize, text.col=framecolor, ncol=legendcols, bty="n", title=title, pt.cex=1, pt.lwd=lineWidth/2)
}
}
}else{
# legendPos was supplied directly; build the legend entries here too,
# since the automatic-placement branch above (which defined them) was skipped
if(order){
tempnames=names[length(names):1]
temppics=pics[length(pics):1]
tempcolors=colors[length(colors):1]
}else{
tempnames=names
temppics=pics
tempcolors=colors
}
legend(x=legendPos[1], y=legendPos[2], tempnames, lwd=0, col=tempcolors, bg=bgcolor, pch=temppics, cex=legendSize, text.col=framecolor, ncol=legendcols, bty="o", title=title, pt.cex=1)
}
}
}
renderaxes<-function(isMulti, isTop){
if(!isTop||!isMulti)
mtext(xaxis, side=1, line=1.5, font=2, col=framecolor)
if(!isTop||!isMulti)
axis(side=1, las=1, tck=.0125, at=seq(xmin, xmax, xscale), mgp=c(3,.5,0), col=framecolor, col.axis=framecolor, lend=endtype, lwd=lineWidth)
else
axis(side=1, las=1, tck=.0125, at=seq(xmin, xmax, xscale), mgp=c(3,.5,0), labels=FALSE, col=framecolor, col.axis=framecolor, lend=endtype, lwd=lineWidth)
axis(side=1, las=1, tck=.0075, at=seq(xmin, xmax, xscale/xmult), mgp=c(3,.5,0), labels=FALSE, col=framecolor, lend=endtype, lwd=lineWidth)
if(isMulti)
if(!isTop)
axis(side=2, las=1, tck=.0125, at=seq(ymin, ymax, yscale), mgp=c(3,.5,0), col=framecolor, col.axis=framecolor, lend=endtype, lwd=lineWidth)
else
axis(side=2, las=1, tck=.0125, at=seq(altymin, altymax, altyscale), mgp=c(3,.5,0), col=framecolor, col.axis=framecolor, lend=endtype, lwd=lineWidth)
else
axis(side=2, las=1, tck=.0125, at=seq(ymin, ymax, yscale), mgp=c(3,.5,0), col=framecolor, col.axis=framecolor, lend=endtype, lwd=lineWidth)
if(isMulti&&isTop)
axis(side=2, las=1, tck=.005, at=seq(altymin, altymax, altyscale/altymult), mgp=c(3,.5,0), labels=FALSE, col=framecolor, lend=endtype, lwd=lineWidth)
else
axis(side=2, las=1, tck=.005, at=seq(ymin, ymax, yscale/ymult), mgp=c(3,.5,0), labels=FALSE, col=framecolor, lend=endtype, lwd=lineWidth)
if(!is.na(axis2a)&&!is.na(axis2b)&&!is.na(axis2lab)){
altAxisNew=round(seq(ymin*axis2a+axis2b, ymax*axis2a+axis2b, (ymax-ymin)*axis2a/yticks)/min(c(yscale*100,1)))*min(c(yscale*100,1))
altAxisOld=seq(ymin, ymax, (ymax-ymin)/yticks)
axis(side=4, at=altAxisOld, labels=altAxisNew, mgp=c(3,.5,0), tck=.01, las=1, yaxs="i", col=framecolor, col.axis=framecolor, lend=endtype, lwd=lineWidth)
mtext(axis2lab, side=4, line=2.5, font=2, col=framecolor)
}
}
renderpoints<-function(isMulti, isTop, isSymbols){
n=1
for(filetable in seq(data)){
reduxdata=data[[filetable]]
if(length(reduxdata)==0)
next
for(var in seq(2,dim(reduxdata)[2])){
if(!isMulti||((altVars[[filetable]][var-1]&&isTop)||(!altVars[[filetable]][var-1]&&!isTop))){
if(!isSymbols){
if(drawLine[filetable])
points(reduxdata[seq(dim(reduxdata)[1]),1], reduxdata[seq(dim(reduxdata)[1]),var], type="l", col=hsv(h=remainder(1+primary-bandwidth/2+seq(0,totalvars-1)/(max(totalvars-1,1))*bandwidth), s=.85, v=.85)[n], lwd=lineWidth)
else
points(reduxdata[seq(dim(reduxdata)[1]),1], reduxdata[seq(dim(reduxdata)[1]),var], type="p", col=hsv(h=remainder(1+primary-bandwidth/2+seq(0,totalvars-1)/(max(totalvars-1,1))*bandwidth), s=.85, v=.85)[n], pch=46, cex=lineWidth)
}else{
points(reduxdata[seq(0, dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1], dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1]/symbols),1],
reduxdata[seq(0, dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1], dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1]/symbols),var],
type="p", col=hsv(h=remainder(1+primary-bandwidth/2+seq(0,totalvars-1)/(max(totalvars-1,1))*bandwidth), s=.85, v=.85)[n], pch=n-1, lwd=lineWidth)
if(pointAccent){
points(reduxdata[seq(0, dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1], dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1]/symbols),1],
reduxdata[seq(0, dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1], dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1]/symbols),var],
type="p", col="black", pch=n-1, lwd=lineWidth/2)
}
}
}
n=n+1
}
}
}
rendererrorbar<-function(isMulti, isTop){
# Default to NA so the return value below is defined even when
# singleErrorPercent is 0 and no bar is drawn
errorBarX=NA
errorBarY=NA
if(singleErrorPercent!=0){
barwidth=.13
datafile=1
newMax=1
lastMax=NULL
while(datafile<=length(data)){
usedVars=!isMulti|((isTop&altVars[[ datafile ]])|(!isTop&!altVars[[ datafile ]]))
if(any(usedVars)){
barwindow=subset(data[[datafile]], data[[datafile]][seq(dim(data.matrix(data[[datafile]]))[1])]<(xmax-(xmax-xmin)*.05)&data[[datafile]][seq(dim(data.matrix(data[[datafile]]))[1])]>(xmin+(xmax-xmin)*.05))
barwindow=barwindow[seq(dim(barwindow)[1]), c( TRUE, usedVars ) ]
var=2
while(var<=dim(barwindow)[2]){
newMax=which.max(abs(barwindow[seq(dim(barwindow)[1]), var]))
if(is.null(lastMax)){
lastMax=barwindow[newMax, var]
errorBarY=as.vector(barwindow[newMax, var])
errorBarX=as.vector(barwindow[newMax, 1])
}
if(abs(barwindow[newMax, var])>abs(lastMax)){
lastMax=barwindow[newMax, var]
errorBarY=as.vector(barwindow[newMax, var])
errorBarX=as.vector(barwindow[newMax, 1])
}
var=var+1
}
}
datafile=datafile+1
}
lines(x=c(errorBarX,errorBarX), y=c((1-singleErrorPercent)*errorBarY, (1+singleErrorPercent)*errorBarY), lend=endtype, xpd=NA, lwd=lineWidth)
lines(x=c(errorBarX-xinch(barwidth),errorBarX+xinch(barwidth)), y=c((1-singleErrorPercent)*errorBarY, (1-singleErrorPercent)*errorBarY), lend=endtype, xpd=NA, lwd=lineWidth)
lines(x=c(errorBarX-xinch(barwidth),errorBarX+xinch(barwidth)), y=c((1+singleErrorPercent)*errorBarY, (1+singleErrorPercent)*errorBarY), lend=endtype, xpd=NA, lwd=lineWidth)
lines(x=c(errorBarX-xinch(barwidth)/2,errorBarX+xinch(barwidth)/2), y=c(errorBarY, errorBarY), lend=endtype, xpd=NA, lwd=lineWidth)
}
return(list(errorBarX, errorBarY))
}
rendertimeline<-function(isMulti){
#Rendering timeline data
offsetsize=(labelMargin*ydim-.04)/(max(overlap)+1)
if(isMulti){
ymaxtemp=altymax
ymintemp=altymin
}else{
ymaxtemp=ymax
ymintemp=ymin
}
y1=ymintemp
y2=ymaxtemp
y4=ymaxtemp+yinch(labelMargin*ydim)
for(n in seq(length(eventLocs[["location"]]))){
y3=ymaxtemp+yinch(offsetsize*overlap[n])
x1=data.matrix(timeline["start"])[n]
x3=eventLocs[["location"]][n]
if(!any(names(timeline)=="end")||singleEvents){
lines(x=c(x1, x1), y=c(y2, y3), xpd=NA, col=hsv(1,0, eventcolorlow+(eventcolorhi-eventcolorlow)*n/length(eventLocs[["location"]])), lwd=lineWidth)
lines(x=c(x1, x3), y=c(y3, y4), xpd=NA, col=hsv(1,0, eventcolorlow+(eventcolorhi-eventcolorlow)*n/length(eventLocs[["location"]])), lwd=lineWidth)
}else{
x2=data.matrix(timeline["end"])[n]
if(x3>x2){
polygon(x=c(x1, x1, x2, x3, x2,x2), y=c(y1, y3, (x2-x3)*(y3-y4)/(x1-x3)+y4, y4, (x2-x3)*(y3-y4)/(x1-x3)+y4, y1), col=eventcolor, xpd=NA, border=eventcolor)
}else if(x3<x1){
polygon(x=c(x1, x1, x3, x1, x2,x2), y=c(y1, (x1-x3)*(y3-y4)/(x2-x3)+y4, y4, (x1-x3)*(y3-y4)/(x2-x3)+y4, y3, y1), col=eventcolor, xpd=NA, border=eventcolor)
}else{
polygon(x=c(x1, x1, x3, x2, x2), y=c(y1, y3, y4, y3, y1), col=eventcolor, xpd=NA, border=eventcolor)
}
}
if(length(eventLocs[["location"]])==1)
angle=0
else
angle=eventLocs[["theta"]]*180/pi
# Both label orientations used the same anchoring, so no branch is needed;
# this also avoids indexing eventLocs[["median"]] when it is absent
text(x=eventLocs[["location"]][n], y=ymaxtemp+yinch(labelMargin*ydim+.03), timeline[n, "event"], xpd=NA, srt=angle, adj=c(0,0), col=framecolor)
}
}
renderablines<-function(isMulti, isTop){
#Rendering timeline data
offsetsize=(labelMargin*ydim-.04)/(max(overlap)+1)
if(isMulti&&isTop){
ymaxtemp=altymax
ymintemp=altymin
}else{
ymaxtemp=ymax
ymintemp=ymin
}
y1=ymintemp
y2=ymaxtemp
for(n in seq(length(eventLocs[["location"]]))){
x1=data.matrix(timeline["start"])[n]
if(!any(names(timeline)=="end")||singleEvents){
abline(v=x1, col=hsv(1,0, eventcolorlow+(eventcolorhi-eventcolorlow)*n/length(eventLocs[["location"]])), lwd=lineWidth)
if(isMulti&&!isTop){
lines(x=c(x1, x1), y=c(y2, y2+yinch(mpMedian*ydim)), xpd=NA, col=hsv(1,0,eventcolorlow+(eventcolorhi-eventcolorlow)*n/length(eventLocs[["location"]])), lwd=lineWidth)
}
if(!is.na(arrowsize))
polygon(x=c(x1, x1+xinch(arrowsize/2), x1-xinch(arrowsize/2), x1), y=c(y1, y1+yinch(arrowsize*.866), y1+yinch(arrowsize*.866), y1), xpd=NA, col="white", lwd=lineWidth, border=hsv(1,0,eventcolorlow+(eventcolorhi-eventcolorlow)*n/length(eventLocs[["location"]])))
}else{
x2=data.matrix(timeline["end"])[n]
polygon(x=c(x1, x1, x2, x2), y=c(y1, y2, y2, y1), col=eventcolor, xpd=NA, border=eventcolor)
}
}
}
renderplotspace<-function(isMulti, isTop){
#Sizing the figure region
tfMar=1
tfFig=1
axMar=2
tpFig=0
tpMar=3
if(!is.na(timeFile)){
tfMar=0
tfFig=1-eventMargin/ydim-labelMargin
}
if(isMulti)
if(isTop){
tpFig=tfFig/2+.50/ydim/2+mpMedian/2
tpMar=0
}else
tfFig=tfFig/2+.50/ydim/2-mpMedian/2
if(!is.na(axis2a)&&!is.na(axis2b)&&!is.na(axis2lab))
axMar=4
if(!is.na(customaxis))
axMar=customAxisMargin
par(fig=c(0,1, tpFig, tfFig), bg=bgcolor, mar=c(tpMar,4,tfMar,axMar)+.01)
#Rendering the scatter plot
if(isTop)
par(new=TRUE)
if(isMulti&&isTop){
plot.default(NULL, type="l", ylab=NA, xlim = c(xmin,xmax), ylim = c(altymin,altymax), axes=FALSE,
frame.plot=FALSE, las=1, tck=.01, mgp=c(3,.5,0), xaxs="i", yaxs="i", font=2)
}else
plot.default(NULL, type="l", ylab=NA, xlim = c(xmin,xmax), ylim = c(ymin,ymax), axes=FALSE,
frame.plot=FALSE, las=1, tck=.01, mgp=c(3,.5,0), xaxs="i", yaxs="i", font=2)
}
rendergrid<-function(isMulti, isTop){
if(isMulti&&isTop){
for(y in seq(altymin, altymax, altyscale))
lines(x=c(xmin, xmax), y=c(y, y), lty=1, col=gridcolor, lend=endtype, xpd=NA, lwd=lineWidth)
for(x in seq(xmin, xmax, xscale))
lines(x=c(x, x), y=c(altymin, altymax), lty=1, col=gridcolor, lend=endtype, xpd=NA, lwd=lineWidth)
}else{
for(y in seq(ymin, ymax, yscale))
lines(x=c(xmin, xmax), y=c(y, y), lty=1, col=gridcolor, lend=endtype, xpd=NA, lwd=lineWidth)
for(x in seq(xmin, xmax, xscale))
lines(x=c(x, x), y=c(ymin, ymax), lty=1, col=gridcolor, lend=endtype, xpd=NA, lwd=lineWidth)
if(isMulti){
for(x in seq(xmin, xmax, xscale))
lines(x=c(x, x), y=c(ymax, ymax+yinch(mpMedian*ydim)), xpd=NA, col=gridcolor, lwd=lineWidth)
}
}
abline(h=0, lty=1, col=framecolor, lend=endtype, lwd=lineWidth)
abline(v=0, lty=1, col=framecolor, lend=endtype, lwd=lineWidth)
if(!is.na(customaxis)){
rawaxis=list()
#Loading Axis file
if(is.null(rawaxis[[customaxis]])){
cat(" Reading axis file ...\n")
rawaxis=append(rawaxis, list(read.csv(customaxis, header = TRUE, strip.white=TRUE)))
names(rawaxis)[length(rawaxis)]=customaxis
}
customaxisorder=order(rawaxis[[customaxis]][["yaxis"]])
cAxis=data.frame(label=rawaxis[[customaxis]][customaxisorder, "label"], yaxis=rawaxis[[customaxis]][customaxisorder, "yaxis"])
if(is.na(cAxisStart)|is.na(cAxisEnd)){
par(cex=.5)
axis(side=4, at=cAxis[[2]], labels=cAxis[[1]], mgp=c(5,.5,0), tck=.01, las=1, yaxs="i", col=framecolor, col.axis=framecolor, lend=endtype, lwd=0, lwd.ticks=lineWidth, cra=c(12,12))
par(cex=1)
}
for(locs in cAxis[[2]]){
if(!is.na(cAxisStart)&!is.na(cAxisEnd)){
lines(y=c(locs, locs), x=c(cAxisStart, cAxisEnd), lwd=lineWidth)
text(x=cAxisEnd, y=locs, labels=cAxis[[1]][match(locs, cAxis[[2]])], pos=4)
}else
abline(h=locs, lwd=lineWidth)
}
}
}
default=alist(datafile=NULL, vars=NULL, yaxis=NA, timeFile=NA, xaxis="Time (s)",
pointsAvg=1, fuzziness=0, yMinDisc=NA, yMaxDisc=NA, legendPos=NA,
times=1, maxXtick=7, maxYtick=7, xdim=7, ydim=NA, labelMargin=.04,
axis2a=NA, suppNames=FALSE, axis2b=0, axis2lab=NA, symbols=10,
filename="Unnamed", fileSuffix=NA, norm2zero=NA, timelineOffset=FALSE,
legendSize=.75, eventMargin=NA, maxDisp=.15, smoothness=10,
maxOverlap=10, eventLabelBias=.5, plotHeight=NA, endTime=NA, startTime=0,
singleErrorPercent=0, timeOffset=0, dataCut=NA, folder="",
singleEvents=TRUE, continuity=0, primary=.95, secondary=.45, bandwidth=.75,
framecolor="black", endtype="square", legendcols=1, bgcolor="white",
gridcolor="#f0f0f0", eventcolorhi=.8, eventcolorlow=.4, mpMedian=.05, title=NA,
outputType="png", lineWidth=2, dataMax=NA, dpi=600, drawLine=FALSE,
pointAccent=TRUE, binsize=1, arrowsize=.15, showSamples=FALSE, order=FALSE,
customaxis=NA, customAxisMargin=2, cAxisStart=NA, cAxisEnd=NA, xvar="time",
altyMaxDisc=NA, altyMinDisc=NA, reverseVars=FALSE)
if(!is.null(args[["dataFile"]])){ dataFile=args[["dataFile"]] }
else if(!is.null(args[[1]])){ dataFile=args[[1]] }
else{ stop("A non-NULL data file is required.") }
if(is.null(args[["vars"]])){
plots=read.csv(dataFile, header=TRUE, fill=TRUE, blank.lines.skip=TRUE)
entry=dim(plots)[1]
}else{
entry=1
plots=NULL
vars=list(vars)
}
rawdata=list()
rawtimeline=list()
while(entry>=1){
for(variable in names(default)){ assign(variable,default[[variable]]) }
for(variable in names(args[names(args)%in%names(default)])){ assign(variable,args[names(args)%in%names(default)][[variable]]) }
if(!is.null(plots)){
curplot=plots[entry, seq(0, (dim(plots)[2]))]
dataFile=vector()
dataFileTemp=vector()
fileSuffix=vector()
varAverageSet=vector()
vars=list()
altVars=list()
timeOffset=vector()
dataCut=vector()
dataCutTemp=vector()
for(variable in names(curplot)[seq(length(curplot))]){
if(!is.na(curplot[[variable]])&&!is.null(curplot[[variable]]))
if(curplot[[variable]]!="")
if(!is.null(default[[variable]]))
assign(variable, as.vector(curplot[[variable]]))
else if(length(grep("^var[[:punct:]]?[[:digit:]]*$", variable))==1){
if(reverseVars){
vars=append(as.vector(curplot[[variable]]), vars)
altVars=append(FALSE, altVars)
dataFile=append(dataFileTemp, dataFile)
dataCut=append(dataCutTemp, dataCut)
}else{
vars=append(vars, as.vector(curplot[[variable]]))
altVars=append(altVars, FALSE)
dataFile=append(dataFile, dataFileTemp)
dataCut=append(dataCut, dataCutTemp)
}
}else if(length(grep("^avg[[:punct:]]?[[:digit:]]*$", variable))==1){
if(reverseVars)
varAverageSet=append(as.vector(curplot[[variable]]), varAverageSet)
else
varAverageSet=append(varAverageSet, as.vector(curplot[[variable]]))
}else if(length(grep("^alt[[:punct:]]?[[:digit:]]*$", variable))==1){
if(reverseVars){
vars=append(as.vector(curplot[[variable]]), vars)
altVars=append(TRUE, altVars)
dataFile=append(dataFileTemp, dataFile)
dataCut=append(dataCutTemp, dataCut)
}else{
vars=append(vars, as.vector(curplot[[variable]]))
altVars=append(altVars, TRUE)
dataFile=append(dataFile, dataFileTemp)
dataCut=append(dataCut, dataCutTemp)
}
}else if(length(grep("^data[[:punct:]]?[[:digit:]]*$", variable))==1){
dataFileTemp=as.vector(curplot[[variable]])
}else if(length(grep("^suffix[[:punct:]]?[[:digit:]]*$", variable))==1)
if(reverseVars)
fileSuffix=append(as.vector(curplot[[variable]]), fileSuffix)
else
fileSuffix=append(fileSuffix, as.vector(curplot[[variable]]))
else if(length(grep("^line[[:punct:]]?[[:digit:]]*$", variable))==1)
if(reverseVars)
drawLine=append(as.vector(curplot[[variable]]), drawLine)
else
drawLine=append(drawLine, as.vector(curplot[[variable]]))
else if(length(grep("^cutoff[[:punct:]]?[[:digit:]]*$", variable))==1)
dataCutTemp=as.vector(curplot[[variable]])
else if(length(grep("^offset[[:punct:]]?[[:digit:]]*$", variable))==1)
if(reverseVars)
timeOffset=append(as.vector(curplot[[variable]]), timeOffset)
else
timeOffset=append(timeOffset, as.vector(curplot[[variable]]))
}
if(length(drawLine)==1){
temp=vector()
for(i in seq(vars)) temp=append(temp, drawLine)
drawLine=temp
}
}
options(warn=-1)
cat("Rendering", filename, "...\n")
#Creating rendering window for reference only
graphics.off()
if(outputType=="wmf")
win.metafile(paste(paste(folder,filename, sep=""), "wmf", sep="."),xdim,4,12, restoreConsole=FALSE)
if(outputType=="png")
png(paste(paste(folder,filename, sep=""), "png", sep="."),xdim,1,12, units="in",restoreConsole=FALSE, res=dpi)
if(!is.na(axis2a)&&!is.na(axis2b)&&!is.na(axis2lab))
par(fig=c(0, 1, 0, 1), bg=bgcolor, mar=c(3,4,1,4)+.01)
else
par(fig=c(0, 1, 0, 1), bg=bgcolor, mar=c(3,4,1,2)+.01)
for(file in dataFile){
if(is.null(rawdata[[file]])){
cat(" Reading data ")
cat(file)
cat("...\n")
rawdata=append(rawdata, list(read.csv(file , header = TRUE, strip.white=TRUE, check.names=FALSE)))
names(rawdata)[length(rawdata)]=file
}
}
#loading data files
datafiles=1
data=list()
times=list()
totalvars=0
for(file in dataFile){
data=append(data, list(rawdata[[file]]))
vars[datafiles]=list(replace(as.numeric(vars[[datafiles]]), is.na(as.numeric(vars[[datafiles]])), match(vars[[datafiles]], dimnames(data[[length(data)]])[[2]])))
vars[datafiles]=list(subset(vars[[datafiles]], !is.na(vars[[datafiles]])))
times=append(times, list(charmatch(tolower(xvar), tolower(dimnames(data[[length(data)]])[[2]]))))
if(is.na(times[[datafiles]])){
cat(" Invalid X variable.\n")
next
}
if(length(timeOffset)==1)
data[[length(data)]][times[[datafiles]]]=data[[length(data)]][times[[datafiles]]]+as.numeric(timeOffset)
else if(!is.na(timeOffset[datafiles]))
data[[length(data)]][times[[datafiles]]]=data[[length(data)]][times[[datafiles]]]+as.numeric(timeOffset[datafiles])
totalvars=totalvars+dim(data.matrix(vars[[datafiles]]))[1]
datafiles=datafiles+1
}
#Trim data to used variables
datafiles=1
xmax=0
xmin=0
if(!is.na(startTime)){xmin=startTime}
scaledata=FALSE
while(datafiles<=length(data)){
if(!is.na(norm2zero)){
if(!is.null(dim(norm2zero))&&dim(norm2zero)[1]==dim(data.matrix(vars[[datafiles]]))[1]){
norm=norm2zero
}else{
if(norm2zero=="B"){
background=data[[datafiles]][is.na(data[[datafiles]][times[[datafiles]]])|data[[datafiles]][times[[datafiles]]]<=0,seq(dim(data[[datafiles]])[2])]
background[times[[datafiles]]]=seq(0, by=0, length.out=dim(background)[1])
}else{
background=data[[datafiles]][is.na(data[[datafiles]][times[[datafiles]]])|data[[datafiles]][times[[datafiles]]]==0,seq(dim(data[[datafiles]])[2])]
}
}
for(var in seq(2, dim(data[[datafiles]])[2])){
data[[datafiles]][seq(dim(data[[datafiles]])[1]),var]=data[[datafiles]][seq(dim(data[[datafiles]])[1]),var]-mean(data.matrix(background[var]))
}
}
data[datafiles]=list(as.matrix(cbind(data[[datafiles]][times[[datafiles]]], data[[datafiles]][vars[[datafiles]]])))
if(dim(data[[datafiles]])[2]<2){
cat(" Invalid Variable.\n")
datafiles=datafiles+1
next
}
data[[datafiles]]=data[[datafiles]][!is.na(data[[datafiles]][,2]),seq(2)]
if(!is.na(yMaxDisc)&!altVars[[datafiles]])
data[[datafiles]]=data[[datafiles]][data[[datafiles]][,2]<yMaxDisc,seq(2)]
if(length(data[[datafiles]])!=0&!is.na(yMinDisc)&!altVars[[datafiles]])
data[[datafiles]]=data[[datafiles]][data[[datafiles]][,2]>yMinDisc,seq(2)]
if(length(data[[datafiles]])!=0&!is.na(altyMaxDisc)&altVars[[datafiles]])
data[[datafiles]]=data[[datafiles]][data[[datafiles]][,2]<altyMaxDisc,seq(2)]
if(length(data[[datafiles]])!=0&!is.na(altyMinDisc)&altVars[[datafiles]])
data[[datafiles]]=data[[datafiles]][data[[datafiles]][,2]>altyMinDisc,seq(2)]
if(length(data[[datafiles]])==0){
cat(" Variable \"")
cat(dimnames(data[[datafiles]])[[2]][2])
cat("\" contains no usable data.\n")
datafiles=datafiles+1
next
}
vars[datafiles]=list(seq(2,dim(data.matrix(vars[[datafiles]]))[1]+1))
data[datafiles]=list(subset(data[[datafiles]], !is.na(data[[datafiles]][seq(dim(data[[datafiles]])[1])])&data[[datafiles]][seq(dim(data[[datafiles]])[1])]>=startTime))
if(any(altVars[[datafiles]])){
scaledata=TRUE
}
if(!is.na(dataCut[datafiles])){
data[datafiles]=list(subset(data[[datafiles]], data[[datafiles]][seq(dim(data[[datafiles]])[1])]<=dataCut[datafiles]))
}
if(length(dataCut)==1){
data[datafiles]=list(subset(data[[datafiles]], data[[datafiles]][seq(dim(data[[datafiles]])[1])]<=dataCut))
}
if(!is.na(endTime))
xmax=endTime
else
xmax=max(c(xmax, data.matrix(data[[datafiles]][seq(dim(data[[datafiles]])[1])])), na.rm=TRUE)
datafiles=datafiles+1
}
if(xmax==xmin){
cat(" Variable contains trivial data.\n")
dev.off(which = dev.cur())
entry=entry-1
next
}
axisvars=axisscale(xmax, xmin, xmult, xscale, maxXtick)
xmax=axisvars[1]
xmin=axisvars[2]
xmult=axisvars[3]
xscale=axisvars[4]
xmin=floor(xmin/xscale)*xscale
xmax=ceiling(xmax/xscale)*xscale
xticks=(xmax-xmin)/xscale
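The floor/ceiling lines above extend the x-limits outward to whole multiples of `xscale` so grid lines land exactly on the plot edges. A minimal standalone sketch of that snap (the sample values are hypothetical):

```r
# Snap an axis range outward to multiples of a tick step; this mirrors the
# floor/ceiling logic above (the sample values are hypothetical).
snap_axis <- function(lo, hi, step) {
  lo2 <- floor(lo / step) * step      # extend down to the nearest grid line
  hi2 <- ceiling(hi / step) * step    # extend up to the nearest grid line
  c(min = lo2, max = hi2, ticks = (hi2 - lo2) / step)
}
snap_axis(-3, 47, 10)   # min -10, max 50, ticks 6
```

Snapping outward (never inward) guarantees the data range stays inside the drawn axis.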
if(length(varAverageSet)>0){
j = 1
k = 1
datatemp=list()
avgnumber=list()
for(var2avg in data){
var2avg[,1]=round(var2avg[,1]/binsize)*binsize
i = 1
while(i < nrow(var2avg)){
if(length(datatemp)==0){
datatemp=list(var2avg[i,])
avgnumber=list(1)
}else{
if(k <= varAverageSet[j]){
matches=match(var2avg[i,1],as.matrix(datatemp[[j]])[,1])
if(is.na(matches)){
datatemp[[j]]=rbind(datatemp[[j]],var2avg[i,])
avgnumber[[j]]=rbind(avgnumber[[j]],1)
}else{
if(is.null(nrow(datatemp[[j]])))
datatemp[[j]][2]=(datatemp[[j]][2]*avgnumber[[j]][matches]+var2avg[i,2])/(avgnumber[[j]][matches]+1)
else
datatemp[[j]][matches,2]=(datatemp[[j]][matches,2]*avgnumber[[j]][matches]+var2avg[i,2])/(avgnumber[[j]][matches]+1)
avgnumber[[j]][matches]=avgnumber[[j]][matches]+1
}
}else{
datatemp=append(datatemp,list(var2avg[i,]))
avgnumber=append(avgnumber, list(1))
k=1
j=j+1
}
}
i=i+1
}
k=k+1
}
if(showSamples)
data=append(data,datatemp)
else
data=datatemp
totalvars=length(data)
vars=as.list(seq(data)*0+2)
altVars=as.list(as.logical(seq(data)*0))
}
datafiles=1
ymin=0
ymax=0
altymin=0
altymax=0
while(datafiles<=length(data)){
if(length(data[[datafiles]])==0){
datafiles=datafiles+1
next
}
#Average Times and Data into bins, each bin is of size pointsAvg seconds
if(pointsAvg>1){
cat(" Averaging...\n")
reduxdata=vector()
for(n in seq(floor(min(data[[datafiles]][seq(dim(data[[datafiles]])[1])])/pointsAvg), ceiling(max(data[[datafiles]][seq(dim(data[[datafiles]])[1])])/pointsAvg))){
reduxdata=rbind(reduxdata,colMeans(subset(data[[datafiles]], (data[[datafiles]][seq(dim(data[[datafiles]])[1])]<=pointsAvg*n)&(data[[datafiles]][seq(dim(data[[datafiles]])[1])]>pointsAvg*(n-1))), na.rm=TRUE))
}
reduxdata=subset(reduxdata,!is.nan(reduxdata[seq(dim(reduxdata)[1])]))
data[datafiles]=list(reduxdata)
}
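The `pointsAvg` branch above collapses each series into fixed-width time bins by repeatedly subsetting one bin and averaging it. The same reduction can be sketched more directly with `tapply` (the input series here is made up for illustration):

```r
# Fixed-width bin averaging, equivalent in spirit to the pointsAvg loop
# above; the input series is hypothetical.
bin_average <- function(times, values, width) {
  bins <- ceiling(times / width)        # bin index for each sample
  cbind(time  = as.numeric(tapply(times,  bins, mean)),
        value = as.numeric(tapply(values, bins, mean)))
}
bin_average(c(1, 2, 3, 11, 12), c(10, 20, 30, 40, 60), 10)
```

Grouping once with `tapply` avoids rescanning the whole series for every bin the way the subset-per-bin loop does.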
if(fuzziness>0){
cat(" Smoothing...\n")
newtimes=seq(from=min(data[[datafiles]][, 1]), to=max(data[[datafiles]][, 1]), length.out=smoothness*dim(data[[datafiles]])[1]-smoothness+1)
oldtimes=data[[datafiles]][, 1]
reduxdata=vector()
if(continuity>0){
oldpoint=data[[datafiles]][1, seq(2, dim(data[[datafiles]])[2])]
for(timeindex in seq(length(newtimes))){
weight=exp(-log((fuzziness+1)/fuzziness)*(oldtimes-newtimes[timeindex])*log((continuity+1)/continuity)*(data[[datafiles]][seq(dim(data[[datafiles]])[1]), seq(2, dim(data[[datafiles]])[2])]-oldpoint))
oldpoint=colSums(weight*data[[datafiles]][seq(dim(data[[datafiles]])[1]), seq(2, dim(data[[datafiles]])[2])])/colSums(weight)
reduxdata=rbind(reduxdata, oldpoint)
}
}else{
for(time in newtimes){
weight=(1+1/fuzziness)^(-abs(oldtimes-time))
reduxdata=rbind(reduxdata, colSums(weight*data[[datafiles]][seq(dim(data[[datafiles]])[1]), seq(2, dim(data[[datafiles]])[2])])/sum(weight))
}
}
reduxdata=cbind(Time=newtimes, reduxdata)
data[datafiles]=list(reduxdata)
}
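The smoothing branch above resamples each series onto a finer time grid and takes a weighted mean at every new time, with weights decaying as `(1+1/fuzziness)^(-|t_old - t_new|)`. A self-contained sketch of that kernel (inputs are hypothetical):

```r
# Exponential-kernel smoother: each resampled point is a weighted mean of
# the old samples, with weights decaying by time distance as in the
# fuzziness branch above (inputs are hypothetical).
kernel_smooth <- function(t_old, y_old, t_new, fuzziness) {
  sapply(t_new, function(t) {
    w <- (1 + 1 / fuzziness)^(-abs(t_old - t))  # decay with |t_old - t|
    sum(w * y_old) / sum(w)                     # weighted mean
  })
}
kernel_smooth(c(0, 1, 2, 3), c(0, 10, 10, 0), c(0, 1.5, 3), fuzziness = 1)
```

Larger `fuzziness` flattens the weight curve, so each output point blends more of its neighbors.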
dataset=as.matrix(data[[datafiles]][, vars[[datafiles]]])
if(singleErrorPercent==0)
barRange=dataset
else
barRange=subset(dataset, data[[datafiles]][seq(dim(data.matrix(dataset))[1])]<(xmax-(xmax-xmin)*.05)&data[[datafiles]][seq(dim(data.matrix(dataset))[1])]>(xmin+(xmax-xmin)*.05))*(1+singleErrorPercent)
datasetDim=seq(dim(dataset)[1])
##Track the global y ranges (primary and alternate axis) across files, including error-bar extents
ymin=min(c(ymin, dataset[ datasetDim, !altVars[[datafiles]] ], barRange[ seq(dim(barRange)[1]), !altVars[[datafiles]] ]))
ymax=max(c(ymax, dataset[ datasetDim, !altVars[[datafiles]] ], barRange[ seq(dim(barRange)[1]), !altVars[[datafiles]] ]))
altymin=min(c(altymin, dataset[ datasetDim, altVars[[datafiles]] ], barRange[ seq(dim(barRange)[1]), altVars[[datafiles]] ]))
altymax=max(c(altymax, dataset[ datasetDim, altVars[[datafiles]] ], barRange[ seq(dim(barRange)[1]), altVars[[datafiles]] ]))
datafiles=datafiles+1
}
if(scaledata)
maxYtick=maxYtick/2
if(!is.na(dataMax))
ymax=dataMax
if(ymax==ymin){
cat(" Variable contains trivial data.\n")
dev.off(which = dev.cur())
entry=entry-1
next
}
axisvars=axisscale(ymax, ymin, ymult, yscale, maxYtick)
ymax=axisvars[1]
ymin=axisvars[2]
ymult=axisvars[3]
yscale=axisvars[4]
if(scaledata){
if(altymax==altymin){
cat(" Variable contains trivial data.\n")
dev.off(which = dev.cur())
entry=entry-1
next
}
axisvars=axisscale(altymax, altymin, altymult, altyscale, maxYtick)
altymax=axisvars[1]
altymin=axisvars[2]
altymult=axisvars[3]
altyscale=axisvars[4]
altymin=floor(altymin/altyscale)*altyscale
altymax=ceiling(altymax/altyscale)*altyscale
altyticks=(altymax-altymin)/altyscale
}
ymin=floor(ymin/yscale)*yscale
ymax=ceiling(ymax/yscale)*yscale
yticks=(ymax-ymin)/yscale
if(is.na(eventMargin)&&!is.na(plotHeight)&&!is.na(ydim)){ eventMargin=ydim-plotHeight-ydim*labelMargin }
if(is.na(eventMargin)){ eventMargin=0 }
if(!is.na(timeFile)){
#Rendering the scatter plot
plot.default(NULL, type="l", ylab=NA, xlim = c(xmin,xmax), ylim = c(ymin,ymax), axes=FALSE,
frame.plot=FALSE, las=1, tck=.01, mgp=c(3,.5,0), xaxs="i", yaxs="i", font=2)
#Loading Timeline file
if(is.null(rawtimeline[[timeFile]])){
cat(" Reading timeline file ...\n")
rawtimeline=append(rawtimeline, list(read.csv(timeFile, header = TRUE, strip.white=TRUE)))
names(rawtimeline)[length(rawtimeline)]=timeFile
}
eventorder=order(rawtimeline[[timeFile]][["start"]])
if(any(names(rawtimeline[[timeFile]])=="end")){
if(length(timeOffset)==0|!timelineOffset){
timeline=data.frame(event=rawtimeline[[timeFile]][eventorder, "event"], start=rawtimeline[[timeFile]][eventorder, "start"], end=rawtimeline[[timeFile]][eventorder, "end"])
}else{
timeline=data.frame(event=rawtimeline[[timeFile]][eventorder, "event"], start=rawtimeline[[timeFile]][eventorder, "start"]+timeOffset, end=rawtimeline[[timeFile]][eventorder, "end"]+timeOffset)
}
}else{
if(length(timeOffset)==0|!timelineOffset){
timeline=data.frame(event=rawtimeline[[timeFile]][eventorder, "event"], start=rawtimeline[[timeFile]][eventorder, "start"])
}else{
timeline=data.frame(event=rawtimeline[[timeFile]][eventorder, "event"], start=rawtimeline[[timeFile]][eventorder, "start"]+timeOffset)
}
}
timeline=subset(timeline, timeline["start"]<=xmax&timeline["start"]>=xmin)
charHeight=strheight("/_GypL", units="inches")+.05
charWidth=max(strwidth(timeline[["event"]], units="inches"))+.05
overlap=NULL
cat(" Calculating label positions ...\n")
oldMargin=NULL
oldLocs=NULL
oldScore=NULL
oldOverlap=NULL
notPass=TRUE
while(notPass){
notPass=FALSE
eventLocs=eventLocation(xmin, xmax, 0, NA)
if(is.null(eventLocs)){
notPass=TRUE
eventMargin=eventMargin+.1
}else{
#Create the vertical displacements that give the timeline leader lines their
#stepped structure; they should be maximally spaced within the label margin
overlap=seq(dim(timeline["start"])[1])*0
#First (right) pass of the line offsetting process:
#evaluates whether (left) labels to (right) lines should be displaced further up
n=2
while(n<=length(eventLocs[["location"]])){
if(eventLocs[["location"]][n]<=data.matrix(timeline["start"])[n-1]){
overlap[n]=overlap[n-1]+1
}
n=n+1
}
#Second (left) pass of the line offsetting process
#evaluates whether (right) labels to (left) lines should be displaced further up
n=length(eventLocs[["location"]])-1
while(n>=1){
if(eventLocs[["location"]][n]>=data.matrix(timeline["start"])[n+1]){
overlap[n]=overlap[n+1]+1
}
n=n-1
}
if(max(sqrt(((timeline[["start"]]-eventLocs[["location"]])/(xmax-xmin)/maxDisp)^2+(overlap/maxOverlap)^2))>1){
oldMargin=c(eventMargin, oldMargin)
oldScore=c(max(sqrt(((timeline[["start"]]-eventLocs[["location"]])/(xmax-xmin)/maxDisp)^2+(overlap/maxOverlap)^2)), oldScore)
oldLocs=c(list(eventLocs), oldLocs)
oldOverlap=c(list(overlap), oldOverlap)
notPass=TRUE
eventMargin=eventMargin+.1
if(eventLocs[["theta"]]==pi/2||(!is.na(ydim)&&!is.na(plotHeight)&&eventMargin>=ydim-plotHeight)){
eventLocs=oldLocs[[which.min(oldScore)]]
eventMargin=oldMargin[which.min(oldScore)]
overlap=oldOverlap[[which.min(oldScore)]]
notPass=FALSE
}
}
}
}
}
ydim=createwindow(ydim)
renderplotspace(scaledata, FALSE)
rendergrid(scaledata, FALSE)
if(!is.na(timeFile))
renderablines(scaledata, FALSE)
errorBarX=NULL
errorBarY=NULL
errorBar=rendererrorbar(scaledata, FALSE)
renderpoints(scaledata, FALSE, FALSE)
if(totalvars>1)
renderpoints(scaledata, FALSE, TRUE)
renderaxes(scaledata, FALSE)
renderlegend(scaledata, FALSE, errorBar[[1]], errorBar[[2]])
if(scaledata){
renderplotspace(scaledata, TRUE)
rendergrid(scaledata, TRUE)
if(!is.na(timeFile))
renderablines(scaledata, TRUE)
errorBar=rendererrorbar(scaledata, TRUE)
renderpoints(scaledata, TRUE, FALSE)
renderpoints(scaledata, TRUE, TRUE)
renderaxes(scaledata, TRUE)
renderlegend(scaledata, TRUE, errorBar[[1]], errorBar[[2]])
}
if(!is.na(timeFile))
rendertimeline(scaledata)
renderplotspace(FALSE, TRUE)
mtext(yaxis, side=2, line=3, font=2, col=framecolor)
dev.off(which = dev.cur())
entry=entry-1
}
}
#plotdata("PA Plots 2.csv", maxXtick=10, folder="Plot Output/", lineWidth=2, outputType="png", drawLine=FALSE, maxDisp=.3, framecolor="black")
#plotdata("Mystery Plot Setup.csv", maxXtick=10, folder="Plot Output/", lineWidth=2, framecolor="black", outputType="png", drawLine=FALSE, eventcolorhi=.30, eventcolorlow=.30)
#plotdata("BasementInput.csv", folder="646B_Plots/")
#plotdata("BP plots Avg.csv", folder="Plot Output/")
#plotdata("FOS RAW.csv", folder="", drawLine=TRUE)
plotdata("219_test1_input.csv", maxXtick=10, outputType="png", symbols=10, lineWidth=2, drawLine=TRUE)
#plotdata("644A_HFinput.csv", maxXtick=10, outputType="png", symbols=10, lineWidth=2, drawLine=TRUE)
#plotdata("Lawson RFID.csv", arrowsize=NA, outputType="png")
#plotdata("parkway ter/PT Plot.csv", arrowsize=NA, reverseVars=FALSE)

plotdata=function(...){
args=list(...)
createwindow<-function(yheight){
if(is.na(yheight)&&!is.na(plotHeight)&&!is.na(eventMargin)){ yheight=(plotHeight+eventMargin)/(1-labelMargin) }
if(is.na(yheight)&&is.na(timeFile)){ yheight=plotHeight }
#Creating rendering window
graphics.off()
if(outputType=="wmf")
win.metafile(paste(paste(folder,filename, sep=""), "wmf", sep="."),xdim,yheight,12, restoreConsole=FALSE)
if(outputType=="png")
png(paste(paste(folder,filename, sep=""), "png", sep="."),xdim,yheight,12, units="in", restoreConsole=FALSE, res=dpi)
return(yheight)
}
axisscale<-function(smax, smin, smult, sscale, maxticks){
sscale<-10^floor(log(abs(smax-smin))/log(10)-1)
smult=1
if(ceiling((smax-smin)/sscale)<maxticks){
}else if(ceiling((smax-smin)/(2*sscale))<maxticks){
sscale=2*sscale
smult=2
}else if(ceiling((smax-smin)/(3*sscale))<maxticks){
sscale=3*sscale
smult=3
}else if(ceiling((smax-smin)/(5*sscale))<maxticks){
sscale=5*sscale
smult=5
}else if(ceiling((smax-smin)/(10*sscale))<maxticks){
sscale=10*sscale
smult=5
}else if(ceiling((smax-smin)/(20*sscale))<maxticks){
sscale=20*sscale
smult=4
}else if(ceiling((smax-smin)/(30*sscale))<maxticks){
sscale=30*sscale
smult=3
}else{
sscale=50*sscale
smult=5
}
return(c(smax, smin, smult, sscale, maxticks))
}
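`axisscale()` above implements a nice-number tick search: starting from a power of ten one decade below the range, it walks the 1-2-3-5-10-20-30-50 multiplier progression until the tick count drops under `maxticks`. The selection can be condensed as a standalone sketch returning only the chosen step (not the function's full return contract; the sample call is illustrative):

```r
# Nice-number tick step: smallest multiplier in the 1-2-3-5-... progression
# whose step keeps the tick count under maxticks (condensed from axisscale).
nice_step <- function(smin, smax, maxticks) {
  base <- 10^floor(log10(abs(smax - smin)) - 1)  # one decade below the range
  for (m in c(1, 2, 3, 5, 10, 20, 30, 50)) {
    if (ceiling((smax - smin) / (m * base)) < maxticks) return(m * base)
  }
  50 * base                                      # fall back to the coarsest step
}
nice_step(0, 47, 10)   # 10, giving 5 major ticks
```

Restricting steps to 1, 2, 3, and 5 times a power of ten keeps axis labels at round, easily read values.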
remainder<-function(fractional){return(fractional-trunc(fractional))}
eventLocation<-function(xstart, xend, mintheta, toright=NA){
if(charHeight>eventMargin)
return(NULL)
if(xstart==xmin)
bin=as.vector(subset(timeline, timeline["start"]<=xend&timeline["start"]>=xstart)[["start"]])
else
bin=as.vector(subset(timeline, timeline["start"]<=xend&timeline["start"]>xstart)[["start"]])
if(length(bin)<1)
return(list())
theta=atan(charWidth/charHeight)-acos(eventMargin/sqrt(charHeight^2+charWidth^2))
if(is.nan(theta)){ theta=pi/2 }
width=charHeight/sin(theta)
lrr=0
lrl=0
if(is.na(toright)){
if(mean(bin)<(xstart+xend)/2)
lrl=max(xinch(charWidth*cos(theta)-width*cos(theta)^2-par()[["mai"]][2]),0)
else
lrr=max(xinch(charWidth*cos(theta)-width*cos(theta)^2-par()[["mai"]][4]),0)
total=xinch(width*length(bin))+lrr+lrl
}else if(toright){
lrr=max(max(xinch(charWidth*cos(theta)-width*cos(theta)^2)-(xmax-xend),0)-xinch(par()[["mai"]][4]),0)
total=xinch(width*length(bin))+lrr
}else if(!toright){
lrl=max(max(xinch(charWidth*cos(theta)-width*cos(theta)^2)-(xstart-xmin),0)-xinch(par()[["mai"]][2]-width),0)
total=xinch(width*length(bin))+lrl
}
if((xend-xstart)<total&&is.na(toright)&&theta==pi/2){
cat(" Too many labels.\n")
return(list(theta=theta, location=xinch(.05)+xstart+seq(0,length(bin)-1)*xinch(width), direction="right"))
}
if((xend-xstart)<total)
return(NULL)
if(length(bin)<2)
if(is.na(toright)){
if(bin[1]>xstart+lrl&bin[1]<xend-lrr-xinch(width)){
return(list(theta=theta, location=bin[1], direction="right"))
}else if(bin[1]<xstart+lrl){
return(list(theta=theta, location=xstart+lrl, direction="right"))
}else{
return(list(theta=theta, location=xend-lrr-xinch(width), direction="right"))
}
}else if(toright){
if(bin[1]>xstart+lrl&bin[1]<xend-lrr-xinch(width)){
return(list(theta=theta, location=bin[1], direction="right"))
}else if(bin[1]<xstart+lrl){
return(list(theta=theta, location=xstart+lrl, direction="right"))
}else{
return(list(theta=theta, location=xend-lrr-xinch(width), direction="right"))
}
}else{
if(bin[1]>xstart+lrl&bin[1]<xend-lrr-xinch(width)){
return(list(theta=theta, location=bin[1], direction="left"))
}else if(bin[1]<xstart+lrl){
return(list(theta=theta, location=xstart+lrl, direction="left"))
}else{
return(list(theta=theta, location=xend-lrr-xinch(width), direction="left"))
}
}
if(length(bin)>=2){
if(is.na(toright)){
binmin=xstart+lrl
binmax=xend-lrr
}else if(toright){
binmin=xstart
binmax=xend-lrr
}else{
binmin=xstart+lrl
binmax=xend
}
binmean=seq(length(bin)-1)*(binmax-binmin)/length(bin)+binmin
lmean=binmean-bin[seq(1,length(bin)-1)]
rmean=binmean-bin[seq(2,length(bin))]
means=lmean/rmean
means[means<1&means>0]=1/means[means<1&means>0]
if(any(means<0)){
binmean=(binmean[means<0])[which.min(abs(binmean[means<0]-(binmin+binmax)/2))]
}else{
binmean=binmean[which.min(means)]
}
if(is.na(toright)){
left=eventLocation(xstart, binmean, theta, FALSE)
right=eventLocation(binmean, xend, theta, TRUE)
}else if(toright){
right=eventLocation(binmean, xend, theta, toright)
left=eventLocation(xstart, binmean, theta, toright)
}else{
left=eventLocation(xstart, binmean, theta, toright)
right=eventLocation(binmean, xend, theta, toright)
}
}
if(is.null(left)||is.null(right)){
if(is.na(toright))
if(mean(bin)>(xstart+xend)/2){
left=eventLocation(xstart, xend, theta, FALSE)
if(!is.null(left))
return(list(theta=theta, location=left[["location"]], direction="left"))
else
return(list(theta=theta, location=-xinch(.05)+xend-seq(length(bin)-1,0)*xinch(width), direction="left"))
}else{
right=eventLocation(xstart, xend, theta, TRUE)
if(!is.null(right))
return(list(theta=theta, location=right[["location"]], direction="right"))
else
return(list(theta=theta, location=xinch(.05)+xstart+seq(0,length(bin)-1)*xinch(width), direction="right"))
}
else
if(toright){
if(mean(bin)<(xstart+xend)/2)
return(list(theta=theta, location=xstart+seq(0,length(bin)-1)*xinch(width), direction="right"))
else
return(list(theta=theta, location=xend-seq(length(bin),1)*xinch(width)-lrr, direction="right"))
}else{
if(mean(bin)>(xstart+xend)/2)
return(list(theta=theta, location=xend-seq(length(bin)-1,0)*xinch(width)-xinch(width), direction="left"))
else
return(list(theta=theta, location=xstart+seq(1,length(bin))*xinch(width)-xinch(width)+lrl, direction="left"))
}
}
if(length(left[["location"]])==0&&is.na(toright))
return(list(theta=theta, location=eventLocation(xstart, xend, theta, FALSE)[["location"]], direction="left"))
else if(length(right[["location"]])==0&&is.na(toright))
return(list(theta=theta, location=eventLocation(xstart, xend, theta, TRUE)[["location"]], direction="right"))
else
return(list(theta=theta, location=c(left[["location"]], right[["location"]]), direction="split", median=binmean))
}
renderlegend<-function(isMulti,isTop, errorBarX, errorBarY){
originalsize=legendSize
originalcols=legendcols
colors=vector()
pics=vector()
colorindex=1
for(datafile in seq(data)){
if(length(datafile)!=0)
for(var in seq(2, dim(data[[datafile]])[2])){
if(!isMulti||((isTop&&altVars[[datafile]][var-1])||(!isTop&&!altVars[[datafile]][var-1]))){
colors=append(colors, hsv(h=remainder(1+primary-bandwidth/2+seq(0,totalvars-1)/(max(totalvars-1,1))*bandwidth), s=.85, v=.85)[colorindex])
pics=append(pics, seq(0, totalvars)[colorindex])
}
colorindex=colorindex+1
}
}
if(totalvars>1){
names=vector()
n=1
if(isTop&&isMulti){
tempymin=altymin
tempymax=altymax
}else{
tempymin=ymin
tempymax=ymax
}
for(file in data){
visiblevars=!isMulti|((isTop&altVars[[n]])|(!isTop&!altVars[[n]]))
namelist=dimnames(data[[n]])[[2]][seq(2, dim(as.matrix(dimnames(data[[n]])[[2]]))[1])]
for(i in seq(length(namelist))){
if(visiblevars[i])
if(length(fileSuffix)<n||length(fileSuffix[n])<i&&!suppNames){
names=append(names, namelist[i])
}else if(suppNames){
names=append(names, fileSuffix[n])
}else{
names=append(names, paste(fileSuffix[n], namelist[i], sep="_"))
}
}
n=n+1
}
names=chartr("_"," ",names)
if(is.na(title)||title=="")
title=NULL
if(is.na(legendPos)){
legenddata=legend(x=xmax/2, y=tempymax/2, names, lwd=1, col=colors[seq(dim(reduxdata)[2]-1)], bg=bgcolor, pch=seq(0, totalvars), ncol=legendcols, plot=FALSE, title=title, pt.cex=1, cex=legendSize)
curMinX=xmin+xinch(.00)+legenddata[[1]][[1]]/2*(1+.05)
curMinY=tempymin+yinch(.00)+legenddata[[1]][[2]]/2*(1+.05)
curMaxX=xmax-xinch(.00)-legenddata[[1]][[1]]/2*(1+.05)
curMaxY=tempymax-yinch(.00)-legenddata[[1]][[2]]/2*(1+.05)
divides=4
maxdist=-10000
lwidth=legenddata[[1]][[1]]*(1/xinch(1))
lheight=legenddata[[1]][[2]]*(1/yinch(1))
distance=100000
increment=divides
while(increment<=32){
hasChanged=FALSE
if(curMinX<curMaxX){
i=1
while(i<=increment){
j=1
xloc=(curMinX+(curMaxX-curMinX)*(i-.5)/increment)
while(j<=increment){
yloc=(curMinY+(curMaxY-curMinY)*(j-.5)/increment)
ex=(c(errorBarX, errorBarX)-xloc)*(1/xinch(1))
ey=(c(errorBarY*(1-singleErrorPercent), errorBarY*(1+singleErrorPercent))-yloc)*(1/yinch(1))
distance=min((sqrt((ex)^2+(ey)^2)-sqrt((lwidth/2)^2+(lwidth/2*ey/ex)^2))[abs(ex/ey)>lwidth/lheight])
distance=min(distance, min((sqrt((ex)^2+(ey)^2)-sqrt((lheight/2*ex/ey)^2+(lheight/2)^2))[abs(ex/ey)<=lwidth/lheight]))
n=1
for(reduxdata in data){
if(length(reduxdata)==0)
next
if(!isMulti|((isTop&altVars[[n]])|(!isTop&!altVars[[n]]))){
dx=((reduxdata[seq(dim(reduxdata)[1])]-xloc)*(1/xinch(1)))
dy=matrix((reduxdata[seq(dim(reduxdata)[1]), seq(2, dim(reduxdata)[2])]-yloc)*(1/yinch(1)))
distance=min(distance, min((sqrt((dx)^2+(dy)^2)-sqrt((lwidth/2)^2+(lwidth/2*dy/dx)^2))[abs(dx/dy)>lwidth/lheight]))
distance=min(distance, min((sqrt((dx)^2+(dy)^2)-sqrt((lheight/2*dx/dy)^2+(lheight/2)^2))[abs(dx/dy)<=lwidth/lheight]))
}
n=n+1
}
if(distance>=maxdist||!hasChanged){
maxdist=distance
boxX1=(curMinX+(curMaxX-curMinX)*(i-1)/increment)
boxY1=(curMinY+(curMaxY-curMinY)*(j-1)/increment)
boxX2=(curMinX+(curMaxX-curMinX)*(i)/increment)
boxY2=(curMinY+(curMaxY-curMinY)*(j)/increment)
hasChanged=TRUE
}
j=j+1
}
i=i+1
}
increment=max(2,trunc(increment/2))
curMinX=boxX1
curMinY=boxY1
curMaxX=boxX2
curMaxY=boxY2
}else{
divides=1000
}
if((curMaxX-curMinX)<(xmax-xmin)/(10*xdim)&&(curMaxY-curMinY)<(tempymax-tempymin)/(10*ydim)){
if(maxdist>0){
cat(" Legend position within accuracy\n")
increment=1000
}else{
cat(" Empty legend position not found, increasing search\n")
legenddata=legend(x=xmax/2, y=tempymax/2, names, lwd=1, col=colors[seq(dim(reduxdata)[2]-1)], bg=bgcolor, pch=seq(0, totalvars), ncol=legendcols, plot=FALSE, title=title, pt.cex=1)
divides=divides*2
increment=divides
curMinX=xmin+legenddata[[1]][[1]]/2*(1+.05)
curMinY=tempymin+legenddata[[1]][[2]]/2*(1+.05)
curMaxX=xmax-legenddata[[1]][[1]]/2*(1+.05)
curMaxY=tempymax-legenddata[[1]][[2]]/2*(1+.05)
}
}
if(divides>32&&legendcols<3){
cat(" No space available, switching columns\n")
divides=4
increment=divides
legendcols=legendcols+1
legenddata=legend(x=xmax/2, y=tempymax/2, names, lwd=1, col=colors[seq(dim(reduxdata)[2]-1)], bg=bgcolor, pch=seq(0, totalvars), ncol=legendcols, plot=FALSE, title=title, pt.cex=1, cex=legendSize)
curMinX=xmin+legenddata[[1]][[1]]/2*(1+.05)
curMinY=tempymin+legenddata[[1]][[2]]/2*(1+.05)
curMaxX=xmax-legenddata[[1]][[1]]/2*(1+.05)
curMaxY=tempymax-legenddata[[1]][[2]]/2*(1+.05)
lwidth=legenddata[[1]][[1]]*(1/xinch(1))
lheight=legenddata[[1]][[2]]*(1/yinch(1))
}else if(divides>32&&legendSize>1/3){
cat(" No space available, shrinking legend\n")
divides=4
increment=divides
legendcols=originalcols
legendSize=legendSize*.75
legenddata=legend(x=xmax/2, y=tempymax/2, names, lwd=1, col=colors[seq(dim(reduxdata)[2]-1)], bg=bgcolor, pch=seq(0, totalvars), ncol=legendcols, plot=FALSE, title=title, pt.cex=1, cex=legendSize)
curMinX=xmin+legenddata[[1]][[1]]/2*(1+.05)
curMinY=tempymin+legenddata[[1]][[2]]/2*(1+.05)
curMaxX=xmax-legenddata[[1]][[1]]/2*(1+.05)
curMaxY=tempymax-legenddata[[1]][[2]]/2*(1+.05)
lwidth=legenddata[[1]][[1]]*(1/xinch(1))
lheight=legenddata[[1]][[2]]*(1/yinch(1))
}
}
xalign=.5
yalign=.5
legendx=(boxX1+(boxX2-boxX1)/2)
legendy=(boxY1+(boxY2-boxY1)/2)
if(order){
tempnames=names[length(names):1]
temppics=pics[length(pics):1]
tempcolors=colors[length(colors):1]
}else{
tempnames=names
temppics=pics
tempcolors=colors
}
if(symbols==0){
legend(x=legendx, y=legendy, tempnames, lty=1, lwd=lineWidth, col=tempcolors, bg=hsv(1,0,1, alpha=.5), xjust=xalign, yjust=yalign, cex=legendSize, text.col=framecolor, ncol=legendcols, bty="o", title=title, pt.cex=1, pt.lwd=lineWidth, box.col=hsv(1,1,0, alpha=.15), box.lwd=lineWidth)
}else{
legend(x=legendx, y=legendy, tempnames, lty=0, col=tempcolors, bg=hsv(1,0,1, alpha=.5), pch=temppics, xjust=xalign, yjust=yalign, cex=legendSize, text.col=framecolor, ncol=legendcols, bty="o", title=title, pt.cex=1, pt.lwd=lineWidth, box.col=hsv(1,1,0, alpha=.15), box.lwd=lineWidth)
if(pointAccent){
legend(x=legendx, y=legendy, tempnames, lty=0, col="black", bg=hsv(1,0,1, alpha=.5), pch=temppics, xjust=xalign, yjust=yalign, cex=legendSize, text.col=framecolor, ncol=legendcols, bty="n", title=title, pt.cex=1, pt.lwd=lineWidth/2)
}
}
}else{
#names/colors/pics are used directly here; the reordered temp* vectors only exist in the auto-placement branch
legend(x=legendPos[1], y=legendPos[2], names, lwd=0, col=colors, bg=bgcolor, pch=pics, cex=legendSize, text.col=framecolor, ncol=legendcols, bty="o", title=title, pt.cex=1)
}
}
}
renderaxes<-function(isMulti, isTop){
if(!isTop||!isMulti)
mtext(xaxis, side=1, line=1.5, font=2, col=framecolor)
if(!isTop||!isMulti)
axis(side=1, las=1, tck=.0125, at=seq(xmin, xmax, xscale), mgp=c(3,.5,0), col=framecolor, col.axis=framecolor, lend=endtype, lwd=lineWidth)
else
axis(side=1, las=1, tck=.0125, at=seq(xmin, xmax, xscale), mgp=c(3,.5,0), labels=FALSE, col=framecolor, col.axis=framecolor, lend=endtype, lwd=lineWidth)
axis(side=1, las=1, tck=.0075, at=seq(xmin, xmax, xscale/xmult), mgp=c(3,.5,0), labels=FALSE, col=framecolor, lend=endtype, lwd=lineWidth)
if(isMulti)
if(!isTop)
axis(side=2, las=1, tck=.0125, at=seq(ymin, ymax, yscale), mgp=c(3,.5,0), col=framecolor, col.axis=framecolor, lend=endtype, lwd=lineWidth)
else
axis(side=2, las=1, tck=.0125, at=seq(altymin, altymax, altyscale), mgp=c(3,.5,0), col=framecolor, col.axis=framecolor, lend=endtype, lwd=lineWidth)
else
axis(side=2, las=1, tck=.0125, at=seq(ymin, ymax, yscale), mgp=c(3,.5,0), col=framecolor, col.axis=framecolor, lend=endtype, lwd=lineWidth)
if(isMulti&&isTop)
axis(side=2, las=1, tck=.005, at=seq(altymin, altymax, altyscale/altymult), mgp=c(3,.5,0), labels=FALSE, col=framecolor, lend=endtype, lwd=lineWidth)
else
axis(side=2, las=1, tck=.005, at=seq(ymin, ymax, yscale/ymult), mgp=c(3,.5,0), labels=FALSE, col=framecolor, lend=endtype, lwd=lineWidth)
if(!is.na(axis2a)&&!is.na(axis2b)&&!is.na(axis2lab)){
altAxisNew=round(seq(ymin*axis2a+axis2b, ymax*axis2a+axis2b, (ymax-ymin)*axis2a/yticks)/min(c(yscale*100,1)))*min(c(yscale*100,1))
altAxisOld=seq(ymin, ymax, (ymax-ymin)/yticks)
axis(side=4, at=altAxisOld, labels=altAxisNew, mgp=c(3,.5,0), tck=.01, las=1, yaxs="i", col=framecolor, col.axis=framecolor, lend=endtype, lwd=lineWidth)
mtext(axis2lab, side=4, line=2.5, font=2, col=framecolor)
}
}
renderpoints<-function(isMulti, isTop, isSymbols){
n=1
for(filetable in seq(data)){
reduxdata=data[[filetable]]
if(length(reduxdata)==0)
next
for(var in seq(2,dim(reduxdata)[2])){
if(!isMulti||((altVars[[filetable]][var-1]&&isTop)||(!altVars[[filetable]][var-1]&&!isTop))){
if(!isSymbols){
if(drawLine[filetable])
points(reduxdata[seq(dim(reduxdata)[1]),1], reduxdata[seq(dim(reduxdata)[1]),var], type="l", col=hsv(h=remainder(1+primary-bandwidth/2+seq(0,totalvars-1)/(max(totalvars-1,1))*bandwidth), s=.85, v=.85)[n], lwd=lineWidth)
else
points(reduxdata[seq(dim(reduxdata)[1]),1], reduxdata[seq(dim(reduxdata)[1]),var], type="p", col=hsv(h=remainder(1+primary-bandwidth/2+seq(0,totalvars-1)/(max(totalvars-1,1))*bandwidth), s=.85, v=.85)[n], pch=46, cex=lineWidth)
}else{
points(reduxdata[seq(0, dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1], dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1]/symbols),1],
reduxdata[seq(0, dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1], dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1]/symbols),var],
type="p", col=hsv(h=remainder(1+primary-bandwidth/2+seq(0,totalvars-1)/(max(totalvars-1,1))*bandwidth), s=.85, v=.85)[n], pch=n-1, lwd=lineWidth)
if(pointAccent){
points(reduxdata[seq(0, dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1], dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1]/symbols),1],
reduxdata[seq(0, dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1], dim(data.matrix(reduxdata[seq(dim(reduxdata)[1])]))[1]/symbols),var],
type="p", col="black", pch=n-1, lwd=lineWidth/2)
}
}
}
n=n+1
}
}
}
rendererrorbar<-function(isMulti, isTop){
boxX1=0
boxX2=0
boxY1=0
boxY2=0
if(singleErrorPercent!=0){
barwidth=.13
datafile=1
newMax=1
lastMax=NULL
while(datafile<=length(data)){
usedVars=!isMulti|((isTop&altVars[[ datafile ]])|(!isTop&!altVars[[ datafile ]]))
if(any(usedVars)){
barwindow=subset(data[[datafile]], data[[datafile]][seq(dim(data.matrix(data[[datafile]]))[1])]<(xmax-(xmax-xmin)*.05)&data[[datafile]][seq(dim(data.matrix(data[[datafile]]))[1])]>(xmin+(xmax-xmin)*.05))
barwindow=barwindow[seq(dim(barwindow)[1]), c( TRUE, usedVars ) ]
var=2
while(var<=dim(barwindow)[2]){
newMax=which.max(abs(barwindow[seq(dim(barwindow)[1]), var]))
if(is.null(lastMax)){
lastMax=barwindow[newMax, var]
errorBarY=as.vector(barwindow[newMax, var])
errorBarX=as.vector(barwindow[newMax, 1])
}
if(abs(barwindow[newMax, var])>abs(lastMax)){
lastMax=barwindow[newMax, var]
errorBarY=as.vector(barwindow[newMax, var])
errorBarX=as.vector(barwindow[newMax, 1])
}
var=var+1
}
}
datafile=datafile+1
}
lines(x=c(errorBarX,errorBarX), y=c((1-singleErrorPercent)*errorBarY, (1+singleErrorPercent)*errorBarY), lend=endtype, xpd=NA, lwd=lineWidth)
lines(x=c(errorBarX-xinch(barwidth),errorBarX+xinch(barwidth)), y=c((1-singleErrorPercent)*errorBarY, (1-singleErrorPercent)*errorBarY), lend=endtype, xpd=NA, lwd=lineWidth)
lines(x=c(errorBarX-xinch(barwidth),errorBarX+xinch(barwidth)), y=c((1+singleErrorPercent)*errorBarY, (1+singleErrorPercent)*errorBarY), lend=endtype, xpd=NA, lwd=lineWidth)
lines(x=c(errorBarX-xinch(barwidth)/2,errorBarX+xinch(barwidth)/2), y=c(errorBarY, errorBarY), lend=endtype, xpd=NA, lwd=lineWidth)
}
return(list(errorBarX, errorBarY))
}
rendertimeline<-function(isMulti){
#Rendering timeline data
offsetsize=(labelMargin*ydim-.04)/(max(overlap)+1)
if(isMulti){
ymaxtemp=altymax
ymintemp=altymin
}else{
ymaxtemp=ymax
ymintemp=ymin
}
y1=ymintemp
y2=ymaxtemp
y4=ymaxtemp+yinch(labelMargin*ydim)
for(n in seq(length(eventLocs[["location"]]))){
y3=ymaxtemp+yinch(offsetsize*overlap[n])
x1=data.matrix(timeline["start"])[n]
x3=eventLocs[["location"]][n]
if(!any(names(timeline)=="end")||singleEvents){
lines(x=c(x1, x1), y=c(y2, y3), xpd=NA, col=hsv(1,0, eventcolorlow+(eventcolorhi-eventcolorlow)*n/length(eventLocs[["location"]])), lwd=lineWidth)
lines(x=c(x1, x3), y=c(y3, y4), xpd=NA, col=hsv(1,0, eventcolorlow+(eventcolorhi-eventcolorlow)*n/length(eventLocs[["location"]])), lwd=lineWidth)
}else{
x2=data.matrix(timeline["end"])[n]
if(x3>x2){
polygon(x=c(x1, x1, x2, x3, x2,x2), y=c(y1, y3, (x2-x3)*(y3-y4)/(x1-x3)+y4, y4, (x2-x3)*(y3-y4)/(x1-x3)+y4, y1), col=eventcolor, xpd=NA, border=eventcolor)
}else if(x3<x1){
polygon(x=c(x1, x1, x3, x1, x2,x2), y=c(y1, (x1-x3)*(y3-y4)/(x2-x3)+y4, y4, (x1-x3)*(y3-y4)/(x2-x3)+y4, y3, y1), col=eventcolor, xpd=NA, border=eventcolor)
}else{
polygon(x=c(x1, x1, x3, x2, x2), y=c(y1, y3, y4, y3, y1), col=eventcolor, xpd=NA, border=eventcolor)
}
}
if(length(eventLocs[["location"]])==1)
angle=0
else
angle=eventLocs[["theta"]]*180/pi
#Left- and right-direction labels currently share the same anchoring
text(x=eventLocs[["location"]][n], y=ymaxtemp+yinch(labelMargin*ydim+.03), timeline[n, "event"], xpd=NA, srt=angle, adj=c(0,0), col=framecolor)
}
}
renderablines<-function(isMulti, isTop){
#Rendering timeline data
offsetsize=(labelMargin*ydim-.04)/(max(overlap)+1)
if(isMulti&&isTop){
ymaxtemp=altymax
ymintemp=altymin
}else{
ymaxtemp=ymax
ymintemp=ymin
}
y1=ymintemp
y2=ymaxtemp
for(n in seq(length(eventLocs[["location"]]))){
x1=data.matrix(timeline["start"])[n]
if(!any(names(timeline)=="end")||singleEvents){
abline(v=x1, col=hsv(1,0, eventcolorlow+(eventcolorhi-eventcolorlow)*n/length(eventLocs[["location"]])), lwd=lineWidth)
if(isMulti&&!isTop){
lines(x=c(x1, x1), y=c(y2, y2+yinch(mpMedian*ydim)), xpd=NA, col=hsv(1,0,eventcolorlow+(eventcolorhi-eventcolorlow)*n/length(eventLocs[["location"]])), lwd=lineWidth)
}
if(!is.na(arrowsize))
polygon(x=c(x1, x1+xinch(arrowsize/2), x1-xinch(arrowsize/2), x1), y=c(y1, y1+yinch(arrowsize*.866), y1+yinch(arrowsize*.866), y1), xpd=NA, col="white", lwd=lineWidth, border=hsv(1,0,eventcolorlow+(eventcolorhi-eventcolorlow)*n/length(eventLocs[["location"]])))
}else{
x2=data.matrix(timeline["end"])[n]
polygon(x=c(x1, x1, x2, x2), y=c(y1, y2, y2, y1), col=eventcolor, xpd=NA, border=eventcolor)
}
}
}
renderplotspace<-function(isMulti, isTop){
#Sizing the figure region
tfMar=1
tfFig=1
axMar=2
tpFig=0
tpMar=3
if(!is.na(timeFile)){
tfMar=0
tfFig=1-eventMargin/ydim-labelMargin
}
if(isMulti)
if(isTop){
tpFig=tfFig/2+.50/ydim/2+mpMedian/2
tpMar=0
}else
tfFig=tfFig/2+.50/ydim/2-mpMedian/2
if(!is.na(axis2a)&&!is.na(axis2b)&&!is.na(axis2lab))
axMar=4
if(!is.na(customaxis))
axMar=customAxisMargin
par(fig=c(0,1, tpFig, tfFig), bg=bgcolor, mar=c(tpMar,4,tfMar,axMar)+.01)
#Rendering the scatter plot
if(isTop)
par(new=TRUE)
if(isMulti&&isTop){
plot.default(NULL, type="l", ylab=NA, xlim = c(xmin,xmax), ylim = c(altymin,altymax), axes=FALSE,
frame.plot=FALSE, las=1, tck=.01, mgp=c(3,.5,0), xaxs="i", yaxs="i", font=2)
}else
plot.default(NULL, type="l", ylab=NA, xlim = c(xmin,xmax), ylim = c(ymin,ymax), axes=FALSE,
frame.plot=FALSE, las=1, tck=.01, mgp=c(3,.5,0), xaxs="i", yaxs="i", font=2)
}
rendergrid<-function(isMulti, isTop){
if(isMulti&&isTop){
for(y in seq(altymin, altymax, altyscale))
lines(x=c(xmin, xmax), y=c(y, y), lty=1, col=gridcolor, lend=endtype, xpd=NA, lwd=lineWidth)
for(x in seq(xmin, xmax, xscale))
lines(x=c(x, x), y=c(altymin, altymax), lty=1, col=gridcolor, lend=endtype, xpd=NA, lwd=lineWidth)
}else{
for(y in seq(ymin, ymax, yscale))
lines(x=c(xmin, xmax), y=c(y, y), lty=1, col=gridcolor, lend=endtype, xpd=NA, lwd=lineWidth)
for(x in seq(xmin, xmax, xscale))
lines(x=c(x, x), y=c(ymin, ymax), lty=1, col=gridcolor, lend=endtype, xpd=NA, lwd=lineWidth)
if(isMulti){
for(x in seq(xmin, xmax, xscale))
lines(x=c(x, x), y=c(ymax, ymax+yinch(mpMedian*ydim)), xpd=NA, col=gridcolor, lwd=lineWidth)
}
}
abline(h=0, lty=1, col=framecolor, lend=endtype, lwd=lineWidth)
abline(v=0, lty=1, col=framecolor, lend=endtype, lwd=lineWidth)
if(!is.na(customaxis)){
rawaxis=list()
#Loading Axis file
if(is.null(rawaxis[[customaxis]])){
cat(" Reading axis file ...\n")
rawaxis=append(rawaxis, list(read.csv(customaxis, header = TRUE, strip.white=TRUE)))
names(rawaxis)[length(rawaxis)]=customaxis
}
customaxisorder=order(rawaxis[[customaxis]][["yaxis"]])
cAxis=data.frame(label=rawaxis[[customaxis]][customaxisorder, "label"], yaxis=rawaxis[[customaxis]][customaxisorder, "yaxis"])
if(is.na(cAxisStart)|is.na(cAxisEnd)){
par(cex=.5)
axis(side=4, at=cAxis[[2]], labels=cAxis[[1]], mgp=c(5,.5,0), tck=.01, las=1, yaxs="i", col=framecolor, col.axis=framecolor, lend=endtype, lwd=0, lwd.ticks=lineWidth, cra=c(12,12))
par(cex=1)
}
for(locs in cAxis[[2]]){
if(!is.na(cAxisStart)&!is.na(cAxisEnd)){
lines(y=c(locs, locs), x=c(cAxisStart, cAxisEnd), lwd=lineWidth)
text(x=cAxisEnd, y=locs, labels=cAxis[[1]][match(locs, cAxis[[2]])], pos=4)
}else
abline(h=locs, lwd=lineWidth)
}
}
}
default=alist(datafile=NULL, vars=NULL, yaxis=NA, timeFile=NA, xaxis="Time (s)",
pointsAvg=1, fuzziness=0, yMinDisc=NA, yMaxDisc=NA, legendPos=NA,
times=1, maxXtick=7, maxYtick=7, xdim=7, ydim=NA, labelMargin=.04,
axis2a=NA, suppNames=FALSE, axis2b=0, axis2lab=NA, symbols=10,
filename="Unnamed", fileSuffix=NA, norm2zero=NA, timelineOffset=FALSE,
legendSize=.75, eventMargin=NA, maxDisp=.15, smoothness=10,
maxOverlap=10, eventLabelBias=.5, plotHeight=NA, endTime=NA, startTime=0,
singleErrorPercent=0, timeOffset=0, dataCut=NA, folder="",
singleEvents=TRUE, continuity=0, primary=.95, secondary=.45, bandwidth=.75,
framecolor="black", endtype="square", legendcols=1, bgcolor="white",
gridcolor="#f0f0f0", eventcolorhi=.8, eventcolorlow=.4, mpMedian=.05, title=NA,
outputType="png", lineWidth=2, dataMax=NA, dpi=600, drawLine=FALSE,
pointAccent=TRUE, binsize=1, arrowsize=.15, showSamples=FALSE, order=FALSE,
customaxis=NA, customAxisMargin=2, cAxisStart=NA, cAxisEnd=NA, xvar="time",
altyMaxDisc=NA, altyMinDisc=NA, reverseVars=FALSE)
if(!is.null(args[["dataFile"]])){ dataFile=args[["dataFile"]] }
else if(!is.null(args[[1]])){ dataFile=args[[1]] }
else{ stop("A non-NULL data file is required.") }
if(is.null(args[["vars"]])){
plots=read.csv(dataFile, header=TRUE, fill=TRUE, blank.lines.skip=TRUE)
entry=dim(plots)[1]
}else{
entry=1
plots=NULL
vars=list(vars)
}
rawdata=list()
rawtimeline=list()
while(entry>=1){
for(variable in names(default)){ assign(variable,default[[variable]]) }
for(variable in names(args[names(args)%in%names(default)])){ assign(variable,args[names(args)%in%names(default)][[variable]]) }
if(!is.null(plots)){
curplot=plots[entry, seq_len(dim(plots)[2])]
dataFile=vector()
dataFileTemp=vector()
fileSuffix=vector()
varAverageSet=vector()
vars=list()
altVars=list()
timeOffset=vector()
dataCut=vector()
dataCutTemp=vector()
for(variable in names(curplot)[seq(length(curplot))]){
if(!is.na(curplot[[variable]])&&!is.null(curplot[[variable]]))
if(curplot[[variable]]!="")
if(!is.null(default[[variable]]))
assign(variable, as.vector(curplot[[variable]]))
else if(length(grep("^var[[:punct:]]?[[:digit:]]*$", variable))==1){
if(reverseVars){
vars=append(as.vector(curplot[[variable]]), vars)
altVars=append(FALSE, altVars)
dataFile=append(dataFileTemp, dataFile)
dataCut=append(dataCutTemp, dataCut)
}else{
vars=append(vars, as.vector(curplot[[variable]]))
altVars=append(altVars, FALSE)
dataFile=append(dataFile, dataFileTemp)
dataCut=append(dataCut, dataCutTemp)
}
}else if(length(grep("^avg[[:punct:]]?[[:digit:]]*$", variable))==1){
if(reverseVars)
varAverageSet=append(as.vector(curplot[[variable]]), varAverageSet)
else
varAverageSet=append(varAverageSet, as.vector(curplot[[variable]]))
}else if(length(grep("^alt[[:punct:]]?[[:digit:]]*$", variable))==1){
if(reverseVars){
vars=append(as.vector(curplot[[variable]]), vars)
altVars=append(TRUE, altVars)
dataFile=append(dataFileTemp, dataFile)
dataCut=append(dataCutTemp, dataCut)
}else{
vars=append(vars, as.vector(curplot[[variable]]))
altVars=append(altVars, TRUE)
dataFile=append(dataFile, dataFileTemp)
dataCut=append(dataCut, dataCutTemp)
}
}else if(length(grep("^data[[:punct:]]?[[:digit:]]*$", variable))==1){
dataFileTemp=as.vector(curplot[[variable]])
}else if(length(grep("^suffix[[:punct:]]?[[:digit:]]*$", variable))==1)
if(reverseVars)
fileSuffix=append(as.vector(curplot[[variable]]), fileSuffix)
else
fileSuffix=append(fileSuffix, as.vector(curplot[[variable]]))
else if(length(grep("^line[[:punct:]]?[[:digit:]]*$", variable))==1)
if(reverseVars)
drawLine=append(as.vector(curplot[[variable]]), drawLine)
else
drawLine=append(drawLine, as.vector(curplot[[variable]]))
else if(length(grep("^cutoff[[:punct:]]?[[:digit:]]*$", variable))==1)
dataCutTemp=as.vector(curplot[[variable]])
else if(length(grep("^offset[[:punct:]]?[[:digit:]]*$", variable))==1)
if(reverseVars)
timeOffset=append(as.vector(curplot[[variable]]), timeOffset)
else
timeOffset=append(timeOffset, as.vector(curplot[[variable]]))
}
if(length(drawLine)==1){
temp=vector()
for(i in seq(vars)) temp=append(temp, drawLine)
drawLine=temp
}
}
options(warn=-1)
cat("Rendering", filename, "...\n")
#Creating rendering window for reference only
graphics.off()
if(outputType=="wmf")
win.metafile(paste(paste(folder,filename, sep=""), "wmf", sep="."),xdim,4,12, restoreConsole=FALSE)
if(outputType=="png")
png(paste(paste(folder,filename, sep=""), "png", sep="."),xdim,1,12, units="in",restoreConsole=FALSE, res=dpi)
if(!is.na(axis2a)&&!is.na(axis2b)&&!is.na(axis2lab))
par(fig=c(0, 1, 0, 1), bg=bgcolor, mar=c(3,4,1,4)+.01)
else
par(fig=c(0, 1, 0, 1), bg=bgcolor, mar=c(3,4,1,2)+.01)
for(file in dataFile){
if(is.null(rawdata[[file]])){
cat(" Reading data ")
cat(file)
cat("...\n")
rawdata=append(rawdata, list(read.csv(file , header = TRUE, strip.white=TRUE, check.names=FALSE)))
names(rawdata)[length(rawdata)]=file
}
}
#loading data files
datafiles=1
data=list()
times=list()
totalvars=0
for(file in dataFile){
data=append(data, list(rawdata[[file]]))
vars[datafiles]=list(replace(as.numeric(vars[[datafiles]]), is.na(as.numeric(vars[[datafiles]])), match(vars[[datafiles]], dimnames(data[[length(data)]])[[2]])))
vars[datafiles]=list(subset(vars[[datafiles]], !is.na(vars[[datafiles]])))
times=append(times, list(charmatch(tolower(xvar), tolower(dimnames(data[[length(data)]])[[2]]))))
if(is.na(times[[datafiles]])){
cat(" Invalid X variable.\n")
next
}
if(length(timeOffset)==1)
data[[length(data)]][times[[datafiles]]]=data[[length(data)]][times[[datafiles]]]+as.numeric(timeOffset)
else if(!is.na(timeOffset[datafiles]))
data[[length(data)]][times[[datafiles]]]=data[[length(data)]][times[[datafiles]]]+as.numeric(timeOffset[datafiles])
totalvars=totalvars+dim(data.matrix(vars[[datafiles]]))[1]
datafiles=datafiles+1
}
#Trim data to used variables
datafiles=1
xmax=0
xmin=0
if(!is.na(startTime)){xmin=startTime}
scaledata=FALSE
while(datafiles<=length(data)){
if(!is.na(norm2zero)){
if(!is.null(dim(norm2zero))&&dim(norm2zero)[1]==dim(data.matrix(vars[[datafiles]]))[1]){
norm=norm2zero
}else{
if(norm2zero=="B"){
background=data[[datafiles]][is.na(data[[datafiles]][times[[datafiles]]])|data[[datafiles]][times[[datafiles]]]<=0,seq(dim(data[[datafiles]])[2])]
background[times[[datafiles]]]=seq(0, by=0, length.out=dim(background)[1])
}else{
background=data[[datafiles]][is.na(data[[datafiles]][times[[datafiles]]])|data[[datafiles]][times[[datafiles]]]==0,seq(dim(data[[datafiles]])[2])]
}
}
for(var in seq(2, dim(data[[datafiles]])[2])){
data[[datafiles]][seq(dim(data[[datafiles]])[1]),var]=data[[datafiles]][seq(dim(data[[datafiles]])[1]),var]-mean(data.matrix(background[var]))
}
}
data[datafiles]=list(as.matrix(cbind(data[[datafiles]][times[[datafiles]]], data[[datafiles]][vars[[datafiles]]])))
if(dim(data[[datafiles]])[2]<2){
cat(" Invalid Variable.\n")
datafiles=datafiles+1
next
}
data[[datafiles]]=data[[datafiles]][!is.na(data[[datafiles]][,2]),seq(2)]
if(!is.na(yMaxDisc)&!altVars[[datafiles]])
data[[datafiles]]=data[[datafiles]][data[[datafiles]][,2]<yMaxDisc,seq(2)]
if(length(data[[datafiles]])!=0&!is.na(yMinDisc)&!altVars[[datafiles]])
data[[datafiles]]=data[[datafiles]][data[[datafiles]][,2]>yMinDisc,seq(2)]
if(length(data[[datafiles]])!=0&!is.na(altyMaxDisc)&altVars[[datafiles]])
data[[datafiles]]=data[[datafiles]][data[[datafiles]][,2]<altyMaxDisc,seq(2)]
if(length(data[[datafiles]])!=0&!is.na(altyMinDisc)&altVars[[datafiles]])
data[[datafiles]]=data[[datafiles]][data[[datafiles]][,2]>altyMinDisc,seq(2)]
if(length(data[[datafiles]])==0){
cat(" Variable \"")
cat(dimnames(data[[datafiles]])[[2]][2])
cat("\" contains no usable data.\n")
datafiles=datafiles+1
next
}
vars[datafiles]=list(seq(2,dim(data.matrix(vars[[datafiles]]))[1]+1))
data[datafiles]=list(subset(data[[datafiles]], !is.na(data[[datafiles]][seq(dim(data[[datafiles]])[1])])&data[[datafiles]][seq(dim(data[[datafiles]])[1])]>=startTime))
if(any(altVars[[datafiles]])){
scaledata=TRUE
}
if(!is.na(dataCut[datafiles])){
data[datafiles]=list(subset(data[[datafiles]], data[[datafiles]][seq(dim(data[[datafiles]])[1])]<=dataCut[datafiles]))
}
if(length(dataCut)==1){
data[datafiles]=list(subset(data[[datafiles]], data[[datafiles]][seq(dim(data[[datafiles]])[1])]<=dataCut))
}
if(!is.na(endTime))
xmax=endTime
else
xmax=max(c(xmax, data.matrix(data[[datafiles]][seq(dim(data[[datafiles]])[1])])), na.rm=TRUE)
datafiles=datafiles+1
}
if(xmax==xmin){
cat(" Variable contains trivial data.\n")
dev.off(which = dev.cur())
entry=entry-1
next
}
axisvars=axisscale(xmax, xmin, xmult, xscale, maxXtick)
xmax=axisvars[1]
xmin=axisvars[2]
xmult=axisvars[3]
xscale=axisvars[4]
xmin=floor(xmin/xscale)*xscale
xmax=ceiling(xmax/xscale)*xscale
xticks=(xmax-xmin)/xscale
if(length(varAverageSet)>0){
j = 1
k = 1
datatemp=list()
avgnumber=list()
for(var2avg in data){
var2avg[,1]=round(var2avg[,1]/binsize)*binsize
i = 1
while(i < nrow(var2avg)){
if(length(datatemp)==0){
datatemp=list(var2avg[i,])
avgnumber=list(1)
}else{
if(k <= varAverageSet[j]){
matches=match(var2avg[i,1],as.matrix(datatemp[[j]])[,1])
if(is.na(matches)){
datatemp[[j]]=rbind(datatemp[[j]],var2avg[i,])
avgnumber[[j]]=rbind(avgnumber[[j]],1)
}else{
if(is.null(nrow(datatemp[[j]])))
datatemp[[j]][2]=(datatemp[[j]][2]*avgnumber[[j]][matches]+var2avg[i,2])/(avgnumber[[j]][matches]+1)
else
datatemp[[j]][matches,2]=(datatemp[[j]][matches,2]*avgnumber[[j]][matches]+var2avg[i,2])/(avgnumber[[j]][matches]+1)
avgnumber[[j]][matches]=avgnumber[[j]][matches]+1
}
}else{
datatemp=append(datatemp,list(var2avg[i,]))
avgnumber=append(avgnumber, list(1))
k=1
j=j+1
}
}
i=i+1
}
k=k+1
}
if(showSamples)
data=append(data,datatemp)
else
data=datatemp
totalvars=length(data)
vars=as.list(seq(data)*0+2)
altVars=as.list(as.logical(seq(data)*0))
}
datafiles=1
ymin=0
ymax=0
altymin=0
altymax=0
while(datafiles<=length(data)){
if(length(data[[datafiles]])==0){
datafiles=datafiles+1
next
}
#Average Times and Data into bins, each bin is of size pointsAvg seconds
if(pointsAvg>1){
cat(" Averaging...\n")
reduxdata=vector()
for(n in seq(floor(min(data[[datafiles]][seq(dim(data[[datafiles]])[1])])/pointsAvg), ceiling(max(data[[datafiles]][seq(dim(data[[datafiles]])[1])])/pointsAvg))){
reduxdata=rbind(reduxdata,mean(data.frame(subset(data[[datafiles]], (data[[datafiles]][seq(dim(data[[datafiles]])[1])]<=pointsAvg*n)&(data[[datafiles]][seq(dim(data[[datafiles]])[1])]>pointsAvg*(n-1))), check.names=FALSE), na.rm=TRUE))
}
reduxdata=subset(reduxdata,!is.nan(reduxdata[seq(dim(reduxdata)[1])]))
data[datafiles]=list(reduxdata)
}
if(fuzziness>0){
cat(" Smoothing...\n")
newtimes=seq(from=min(data[[datafiles]][, 1]), to=max(data[[datafiles]][, 1]), length.out=smoothness*dim(data[[datafiles]])[1]-smoothness+1)
oldtimes=data[[datafiles]][, 1]
reduxdata=vector()
if(continuity>0){
oldpoint=data[[datafiles]][1, seq(2, dim(data[[datafiles]])[2])]
for(timeindex in seq(length(newtimes))){
weight=exp(-log((fuzziness+1)/fuzziness)*(oldtimes-newtimes[timeindex])*log((continuity+1)/continuity)*(data[[datafiles]][seq(dim(data[[datafiles]])[1]), seq(2, dim(data[[datafiles]])[2])]-oldpoint))
oldpoint=colSums(weight*data[[datafiles]][seq(dim(data[[datafiles]])[1]), seq(2, dim(data[[datafiles]])[2])])/colSums(weight)
reduxdata=rbind(reduxdata, oldpoint)
}
}else{
for(time in newtimes){
weight=(1+1/fuzziness)^(-abs(oldtimes-time))
reduxdata=rbind(reduxdata, colSums(weight*data[[datafiles]][seq(dim(data[[datafiles]])[1]), seq(2, dim(data[[datafiles]])[2])])/sum(weight))
}
}
reduxdata=cbind(Time=newtimes, reduxdata)
data[datafiles]=list(reduxdata)
}
dataset=as.matrix(data[[datafiles]][, vars[[datafiles]]])
if(singleErrorPercent==0)
barRange=dataset
else
barRange=subset(dataset, data[[datafiles]][seq(dim(data.matrix(dataset))[1])]<(xmax-(xmax-xmin)*.05)&data[[datafiles]][seq(dim(data.matrix(dataset))[1])]>(xmin+(xmax-xmin)*.05))*(1+singleErrorPercent)
datasetDim=seq(dim(dataset)[1])
##Updating the global y-axis ranges (primary and alternate) from this dataset
ymin=min(c(ymin, dataset[ datasetDim, !altVars[[datafiles]] ], barRange[ seq(dim(barRange)[1]), !altVars[[datafiles]] ]))
ymax=max(c(ymax, dataset[ datasetDim, !altVars[[datafiles]] ], barRange[ seq(dim(barRange)[1]), !altVars[[datafiles]] ]))
altymin=min(c(altymin, dataset[ datasetDim, altVars[[datafiles]] ], barRange[ seq(dim(barRange)[1]), altVars[[datafiles]] ]))
altymax=max(c(altymax, dataset[ datasetDim, altVars[[datafiles]] ], barRange[ seq(dim(barRange)[1]), altVars[[datafiles]] ]))
datafiles=datafiles+1
}
if(scaledata)
maxYtick=maxYtick/2
if(!is.na(dataMax))
ymax=dataMax
if(ymax==ymin){
cat(" Variable contains trivial data.\n")
dev.off(which = dev.cur())
entry=entry-1
next
}
axisvars=axisscale(ymax, ymin, ymult, yscale, maxYtick)
ymax=axisvars[1]
ymin=axisvars[2]
ymult=axisvars[3]
yscale=axisvars[4]
if(scaledata){
if(altymax==altymin){
cat(" Variable contains trivial data.\n")
dev.off(which = dev.cur())
entry=entry-1
next
}
axisvars=axisscale(altymax, altymin, altymult, altyscale, maxYtick)
altymax=axisvars[1]
altymin=axisvars[2]
altymult=axisvars[3]
altyscale=axisvars[4]
altymin=floor(altymin/altyscale)*altyscale
altymax=ceiling(altymax/altyscale)*altyscale
altyticks=(altymax-altymin)/altyscale
}
ymin=floor(ymin/yscale)*yscale
ymax=ceiling(ymax/yscale)*yscale
yticks=(ymax-ymin)/yscale
if(is.na(eventMargin)&&!is.na(plotHeight)&&!is.na(ydim)){ eventMargin=ydim-plotHeight-ydim*labelMargin }
if(is.na(eventMargin)){ eventMargin=0 }
if(!is.na(timeFile)){
#Rendering the scatter plot
plot.default(NULL, type="l", ylab=NA, xlim = c(xmin,xmax), ylim = c(ymin,ymax), axes=FALSE,
frame.plot=FALSE, las=1, tck=.01, mgp=c(3,.5,0), xaxs="i", yaxs="i", font=2)
#Loading Timeline file
if(is.null(rawtimeline[[timeFile]])){
cat(" Reading timeline file ...\n")
rawtimeline=append(rawtimeline, list(read.csv(timeFile, header = TRUE, strip.white=TRUE)))
names(rawtimeline)[length(rawtimeline)]=timeFile
}
eventorder=order(rawtimeline[[timeFile]][["start"]])
if(any(names(rawtimeline[[timeFile]])=="end")){
if(length(timeOffset)==0|!timelineOffset){
timeline=data.frame(event=rawtimeline[[timeFile]][eventorder, "event"], start=rawtimeline[[timeFile]][eventorder, "start"], end=rawtimeline[[timeFile]][eventorder, "end"])
}else{
timeline=data.frame(event=rawtimeline[[timeFile]][eventorder, "event"], start=rawtimeline[[timeFile]][eventorder, "start"]+timeOffset, end=rawtimeline[[timeFile]][eventorder, "end"]+timeOffset)
}
}else{
if(length(timeOffset)==0|!timelineOffset){
timeline=data.frame(event=rawtimeline[[timeFile]][eventorder, "event"], start=rawtimeline[[timeFile]][eventorder, "start"])
}else{
timeline=data.frame(event=rawtimeline[[timeFile]][eventorder, "event"], start=rawtimeline[[timeFile]][eventorder, "start"]+timeOffset)
}
}
timeline=subset(timeline, timeline["start"]<=xmax&timeline["start"]>=xmin)
charHeight=strheight("/_GypL", units="inches")+.05
charWidth=max(strwidth(timeline[["event"]], units="inches"))+.05
overlap=NULL
cat(" Calculating label positions ...\n")
oldMargin=NULL
oldLocs=NULL
oldScore=NULL
oldOverlap=NULL
notPass=TRUE
while(notPass){
notPass=FALSE
eventLocs=eventLocation(xmin, xmax, 0, NA)
if(is.null(eventLocs)){
notPass=TRUE
eventMargin=eventMargin+.1
}else{
#Creating a list of vertical displacements for line structure on timeline data
#they should be maximally spaced in the label margin
overlap=seq(dim(timeline["start"])[1])*0
#First (right) pass of the line offsetting process,
#evaluates whether (left) labels to (right) lines should be displaced further up
n=2
while(n<=length(eventLocs[["location"]])){
if(eventLocs[["location"]][n]<=data.matrix(timeline["start"])[n-1]){
overlap[n]=overlap[n-1]+1
}
n=n+1
}
#Second (left) pass of the line offsetting process
#evaluates whether (right) labels to (left) lines should be displaced further up
n=length(eventLocs[["location"]])-1
while(n>=1){
if(eventLocs[["location"]][n]>=data.matrix(timeline["start"])[n+1]){
overlap[n]=overlap[n+1]+1
}
n=n-1
}
if(max(sqrt(((timeline[["start"]]-eventLocs[["location"]])/(xmax-xmin)/maxDisp)^2+(overlap/maxOverlap)^2))>1){
oldMargin=c(eventMargin, oldMargin)
oldScore=c(max(sqrt(((timeline[["start"]]-eventLocs[["location"]])/(xmax-xmin)/maxDisp)^2+(overlap/maxOverlap)^2)), oldScore)
oldLocs=c(list(eventLocs), oldLocs)
oldOverlap=c(list(overlap), oldOverlap)
notPass=TRUE
eventMargin=eventMargin+.1
if(eventLocs[["theta"]]==pi/2||(!is.na(ydim)&&!is.na(plotHeight)&&eventMargin>=ydim-plotHeight)){
eventLocs=oldLocs[[which.min(oldScore)]]
eventMargin=oldMargin[which.min(oldScore)]
overlap=oldOverlap[[which.min(oldScore)]]
notPass=FALSE
}
}
}
}
}
ydim=createwindow(ydim)
renderplotspace(scaledata, FALSE)
rendergrid(scaledata, FALSE)
if(!is.na(timeFile))
renderablines(scaledata, FALSE)
errorBarX=NULL
errorBarY=NULL
errorBar=rendererrorbar(scaledata, FALSE)
renderpoints(scaledata, FALSE, FALSE)
if(totalvars>1)
renderpoints(scaledata, FALSE, TRUE)
renderaxes(scaledata, FALSE)
renderlegend(scaledata, FALSE, errorBar[[1]], errorBar[[2]])
if(scaledata){
renderplotspace(scaledata, TRUE)
rendergrid(scaledata, TRUE)
if(!is.na(timeFile))
renderablines(scaledata, TRUE)
errorBar=rendererrorbar(scaledata, TRUE)
renderpoints(scaledata, TRUE, FALSE)
renderpoints(scaledata, TRUE, TRUE)
renderaxes(scaledata, TRUE)
renderlegend(scaledata, TRUE, errorBar[[1]], errorBar[[2]])
}
if(!is.na(timeFile))
rendertimeline(scaledata)
renderplotspace(FALSE, TRUE)
mtext(yaxis, side=2, line=3, font=2, col=framecolor)
dev.off(which = dev.cur())
entry=entry-1
}
}
#plotdata("PA Plots 2.csv", maxXtick=10, folder="Plot Output/", lineWidth=2, outputType="png", drawLine=FALSE, maxDisp=.3, framecolor="black")
#plotdata("Mystery Plot Setup.csv", maxXtick=10, folder="Plot Output/", lineWidth=2, framecolor="black", outputType="png", drawLine=FALSE, eventcolorhi=.30, eventcolorlow=.30)
#plotdata("BasementInput.csv", folder="646B_Plots/")
#plotdata("BP plots Avg.csv", folder="Plot Output/")
#plotdata("FOS RAW.csv", folder="", drawLine=TRUE)
plotdata("219_test1_input.csv", maxXtick=10, outputType="png", symbols=10, lineWidth=2, drawLine=TRUE)
#plotdata("644A_HFinput.csv", maxXtick=10, outputType="png", symbols=10, lineWidth=2, drawLine=TRUE)
#plotdata("Lawson RFID.csv", arrowsize=NA, outputType="png")
#plotdata("parkway ter/PT Plot.csv", arrowsize=NA, reverseVars=FALSE)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/file_io.R
\name{import_raw}
\alias{import_raw}
\title{Function for reading raw data.}
\usage{
import_raw(
file_name,
file_path = NULL,
chan_nos = NULL,
recording = NULL,
participant_id = character(1),
fast_bdf = TRUE
)
}
\arguments{
\item{file_name}{File to import. Should include file extension.}
\item{file_path}{Path to file name, if not included in filename.}
\item{chan_nos}{Channels to import. All channels are included by default.}
\item{recording}{Name of the recording. By default, the filename will be
used.}
\item{participant_id}{Identifier for the participant.}
\item{fast_bdf}{New, faster method for loading BDF files. Experimental.}
}
\description{
Currently BDF/EDF, 32-bit .CNT, and Brain Vision Analyzer files are
supported. Filetype is determined by the file extension. The \code{edfReader}
package is used to load BDF/EDF files, whereas custom code is used for .CNT
and BVA files. The function creates an \code{eeg_data} structure for
subsequent use.
}
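\examples{
\dontrun{
# A minimal sketch of typical usage; the file name below is a
# placeholder for illustration, not data shipped with the package.
eeg <- import_raw("sub-01_task-rest.bdf", participant_id = "sub-01")
}
}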
\author{
Matt Craddock, \email{matt@mattcraddock.com}
}
% File: /man/import_raw.Rd (repo: muschellij2/eegUtils, license: permissive)
hpc<-"./data/household_power_consumption.txt"
file.info(hpc)
data <- read.csv(hpc, sep=";", na.strings="?", stringsAsFactors = FALSE)
object.size(data)
dataPart <- data[data$Date == "1/2/2007" | data$Date == "2/2/2007",]
globalActivePower <- as.numeric(dataPart$Global_active_power)
datetime <- strptime(paste(dataPart$Date, dataPart$Time, sep=" "), "%d/%m/%Y %H:%M:%S")
subMetering1 <- as.numeric(dataPart$Sub_metering_1)
subMetering2 <- as.numeric(dataPart$Sub_metering_2)
subMetering3 <- as.numeric(dataPart$Sub_metering_3)
voltage <- as.numeric(dataPart$Voltage)
globalReactivePower<-as.numeric(dataPart$Global_reactive_power)
png("plot4.png", width=480, height=480)
par(mfcol = c(2,2))
plot(datetime, globalActivePower, type="l", xlab="", ylab="Global Active Power")
plot(datetime, subMetering1, type="l", ylab="Energy sub metering", xlab="")
lines(datetime, subMetering2, type="l", col="red")
lines(datetime, subMetering3, type="l", col="blue")
legend("topright", c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"), lty=1, lwd=2, col=c("black", "red", "blue"), bty = "o", box.col = "white")
plot(datetime, voltage, type="l", xlab="datetime", ylab = "Voltage")
plot(datetime, globalReactivePower, type = "l", xlab = "datetime", ylab = "global_reactive_power")
dev.off()
# File: /plot_4.R (repo: chr7stos/ExData_Plotting1, license: none declared)
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
args <- commandArgs(TRUE)
VERSION <- args[1]
dst_dir <- paste0("libarrow/arrow-", VERSION)
arrow_repo <- "https://dl.bintray.com/ursalabs/arrow-r/libarrow/"
apache_src_url <- paste0(
"https://archive.apache.org/dist/arrow/arrow-", VERSION,
"/apache-arrow-", VERSION, ".tar.gz"
)
options(.arrow.cleanup = character()) # To collect dirs to rm on exit
on.exit(unlink(getOption(".arrow.cleanup")))
env_is <- function(var, value) identical(tolower(Sys.getenv(var)), value)
# * no download, build_ok: Only build with local git checkout
# * download_ok, no build: Only use prebuilt binary, if found
# * neither: Get the arrow-without-arrow package
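# For example (hypothetical invocation, not an official recipe):
#   LIBARROW_DOWNLOAD=true LIBARROW_BUILD=false R CMD INSTALL .
# would try a prebuilt binary if one exists and otherwise fall back to the
# arrow-without-arrow package rather than compiling from source.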
download_ok <- env_is("LIBARROW_DOWNLOAD", "true") || !(tolower(Sys.getenv("LIBARROW_BINARY")) %in% c("", "false"))
build_ok <- !env_is("LIBARROW_BUILD", "false")
# For local debugging, set ARROW_R_DEV=TRUE to make this script print more
quietly <- !env_is("ARROW_R_DEV", "true")
download_binary <- function(os = identify_os()) {
libfile <- tempfile()
if (!is.null(os)) {
binary_url <- paste0(arrow_repo, "bin/", os, "/arrow-", VERSION, ".zip")
try(
download.file(binary_url, libfile, quiet = quietly),
silent = quietly
)
if (file.exists(libfile)) {
cat(sprintf("*** Successfully retrieved C++ binaries for %s\n", os))
} else {
cat(sprintf("*** No C++ binaries found for %s\n", os))
libfile <- NULL
}
} else {
libfile <- NULL
}
libfile
}
# Function to figure out which flavor of binary we should download, if at all.
# By default (unset or "FALSE"), it will not download a precompiled library,
# but you can override this by setting the env var LIBARROW_BINARY to:
# * `TRUE` (not case-sensitive), to try to discover your current OS, or
# * some other string, presumably a related "distro-version" that has binaries
# built that work for your OS
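# For example (values are illustrative, not a complete list of builds):
#   Sys.setenv(LIBARROW_BINARY = "TRUE")         # try to detect this OS
#   Sys.setenv(LIBARROW_BINARY = "ubuntu-18.04") # force a specific flavor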
identify_os <- function(os = Sys.getenv("LIBARROW_BINARY", Sys.getenv("LIBARROW_DOWNLOAD"))) {
if (tolower(os) %in% c("", "false")) {
# Env var says not to download a binary
return(NULL)
} else if (!identical(tolower(os), "true")) {
# Env var provided an os-version to use--maybe you're on Ubuntu 18.10 but
# we only build for 18.04 and that's fine--so use what the user set
return(os)
}
if (nzchar(Sys.which("lsb_release"))) {
distro <- tolower(system("lsb_release -is", intern = TRUE))
os_version <- system("lsb_release -rs", intern = TRUE)
# In the future, we may be able to do some mapping of distro-versions to
# versions we built for, since there's no way we'll build for everything.
os <- paste0(distro, "-", os_version)
} else if (file.exists("/etc/os-release")) {
os_release <- readLines("/etc/os-release")
vals <- sub("^.*=(.*)$", "\\1", os_release)
names(vals) <- sub("^(.*)=.*$", "\\1", os_release)
distro <- gsub('"', '', vals["ID"])
if (distro == "ubuntu") {
# Keep major.minor version
version_regex <- '^"?([0-9]+\\.[0-9]+).*"?.*$'
} else {
# Only major version number
version_regex <- '^"?([0-9]+).*"?.*$'
}
os_version <- sub(version_regex, "\\1", vals["VERSION_ID"])
os <- paste0(distro, "-", os_version)
} else if (file.exists("/etc/system-release")) {
# Something like "CentOS Linux release 7.7.1908 (Core)"
system_release <- tolower(utils::head(readLines("/etc/system-release"), 1))
# Extract from that the distro and the major version number
os <- sub("^([a-z]+) .* ([0-9]+).*$", "\\1-\\2", system_release)
} else {
cat("*** Unable to identify current OS/version\n")
os <- NULL
}
# Now look to see if we can map this os-version to one we have binaries for
os <- find_available_binary(os)
os
}
find_available_binary <- function(os) {
# Download a csv that maps one to the other, columns "actual" and "use_this"
u <- "https://raw.githubusercontent.com/ursa-labs/arrow-r-nightly/master/linux/distro-map.csv"
lookup <- try(utils::read.csv(u, stringsAsFactors = FALSE), silent = quietly)
if (!inherits(lookup, "try-error") && os %in% lookup$actual) {
new <- lookup$use_this[lookup$actual == os]
if (length(new) == 1 && !is.na(new)) { # Just some sanity checking
cat(sprintf("*** Using %s binary for %s\n", new, os))
os <- new
}
}
os
}
download_source <- function() {
tf1 <- tempfile()
src_dir <- NULL
| /r/tools/linuxlibs.R | permissive | Zxy675272387/arrow | R | false | false | 8,856 | r |
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
args <- commandArgs(TRUE)
VERSION <- args[1]
dst_dir <- paste0("libarrow/arrow-", VERSION)
arrow_repo <- "https://dl.bintray.com/ursalabs/arrow-r/libarrow/"
apache_src_url <- paste0(
"https://archive.apache.org/dist/arrow/arrow-", VERSION,
"/apache-arrow-", VERSION, ".tar.gz"
)
options(.arrow.cleanup = character()) # To collect dirs to rm on exit
on.exit(unlink(getOption(".arrow.cleanup")))
env_is <- function(var, value) identical(tolower(Sys.getenv(var)), value)
# * no download, build_ok: Only build with local git checkout
# * download_ok, no build: Only use prebuilt binary, if found
# * neither: Get the arrow-without-arrow package
download_ok <- env_is("LIBARROW_DOWNLOAD", "true") || !(tolower(Sys.getenv("LIBARROW_BINARY")) %in% c("", "false"))
build_ok <- !env_is("LIBARROW_BUILD", "false")
# For local debugging, set ARROW_R_DEV=TRUE to make this script print more
quietly <- !env_is("ARROW_R_DEV", "true")
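For quick reference, here is a standalone sketch (hypothetical environment values, base R only) of how the three flags above are derived; it mirrors the definitions in this script:

```r
# Standalone sketch of the flag logic above (hypothetical env values).
env_is <- function(var, value) identical(tolower(Sys.getenv(var)), value)

# Simulate a user who wants downloads but no source build:
Sys.setenv(LIBARROW_DOWNLOAD = "TRUE", LIBARROW_BUILD = "false")

download_ok <- env_is("LIBARROW_DOWNLOAD", "true") ||
  !(tolower(Sys.getenv("LIBARROW_BINARY")) %in% c("", "false"))
build_ok <- !env_is("LIBARROW_BUILD", "false")

stopifnot(download_ok, !build_ok)
Sys.unsetenv(c("LIBARROW_DOWNLOAD", "LIBARROW_BUILD"))
```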
download_binary <- function(os = identify_os()) {
libfile <- tempfile()
if (!is.null(os)) {
binary_url <- paste0(arrow_repo, "bin/", os, "/arrow-", VERSION, ".zip")
try(
download.file(binary_url, libfile, quiet = quietly),
silent = quietly
)
if (file.exists(libfile)) {
cat(sprintf("*** Successfully retrieved C++ binaries for %s\n", os))
} else {
cat(sprintf("*** No C++ binaries found for %s\n", os))
libfile <- NULL
}
} else {
libfile <- NULL
}
libfile
}
# Function to figure out which flavor of binary we should download, if at all.
# By default (unset or "FALSE"), it will not download a precompiled library,
# but you can override this by setting the env var LIBARROW_BINARY to:
# * `TRUE` (not case-sensitive), to try to discover your current OS, or
# * some other string, presumably a related "distro-version" that has binaries
# built that work for your OS
identify_os <- function(os = Sys.getenv("LIBARROW_BINARY", Sys.getenv("LIBARROW_DOWNLOAD"))) {
if (tolower(os) %in% c("", "false")) {
# Env var says not to download a binary
return(NULL)
} else if (!identical(tolower(os), "true")) {
# Env var provided an os-version to use--maybe you're on Ubuntu 18.10 but
# we only build for 18.04 and that's fine--so use what the user set
return(os)
}
if (nzchar(Sys.which("lsb_release"))) {
distro <- tolower(system("lsb_release -is", intern = TRUE))
os_version <- system("lsb_release -rs", intern = TRUE)
# In the future, we may be able to do some mapping of distro-versions to
# versions we built for, since there's no way we'll build for everything.
os <- paste0(distro, "-", os_version)
} else if (file.exists("/etc/os-release")) {
os_release <- readLines("/etc/os-release")
vals <- sub("^.*=(.*)$", "\\1", os_release)
names(vals) <- sub("^(.*)=.*$", "\\1", os_release)
distro <- gsub('"', '', vals["ID"])
if (distro == "ubuntu") {
# Keep major.minor version
version_regex <- '^"?([0-9]+\\.[0-9]+).*"?.*$'
} else {
# Only major version number
version_regex <- '^"?([0-9]+).*"?.*$'
}
os_version <- sub(version_regex, "\\1", vals["VERSION_ID"])
os <- paste0(distro, "-", os_version)
} else if (file.exists("/etc/system-release")) {
# Something like "CentOS Linux release 7.7.1908 (Core)"
system_release <- tolower(utils::head(readLines("/etc/system-release"), 1))
# Extract from that the distro and the major version number
os <- sub("^([a-z]+) .* ([0-9]+).*$", "\\1-\\2", system_release)
} else {
cat("*** Unable to identify current OS/version\n")
os <- NULL
}
# Now look to see if we can map this os-version to one we have binaries for
os <- find_available_binary(os)
os
}
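The `/etc/os-release` branch above can be exercised in isolation; this sketch (sample file content, not read from disk) shows the distro and version extraction:

```r
# Sample /etc/os-release lines (hypothetical content, not read from disk):
os_release <- c('NAME="Ubuntu"', 'ID=ubuntu', 'VERSION_ID="18.04.3"')
vals <- sub("^.*=(.*)$", "\\1", os_release)
names(vals) <- sub("^(.*)=.*$", "\\1", os_release)
distro <- gsub('"', '', vals["ID"])
# Ubuntu keeps major.minor, as in identify_os():
os_version <- sub('^"?([0-9]+\\.[0-9]+).*"?.*$', "\\1", vals["VERSION_ID"])
stopifnot(unname(distro) == "ubuntu", unname(os_version) == "18.04")
stopifnot(paste0(distro, "-", os_version) == "ubuntu-18.04")
```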
find_available_binary <- function(os) {
# Download a csv that maps one to the other, columns "actual" and "use_this"
u <- "https://raw.githubusercontent.com/ursa-labs/arrow-r-nightly/master/linux/distro-map.csv"
lookup <- try(utils::read.csv(u, stringsAsFactors = FALSE), silent = quietly)
if (!inherits(lookup, "try-error") && os %in% lookup$actual) {
new <- lookup$use_this[lookup$actual == os]
if (length(new) == 1 && !is.na(new)) { # Just some sanity checking
cat(sprintf("*** Using %s binary for %s\n", new, os))
os <- new
}
}
os
}
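A toy run of the mapping logic above, using a made-up lookup table instead of the downloaded CSV:

```r
# Made-up stand-in for the distro-map CSV (illustration only):
lookup <- data.frame(actual = c("ubuntu-18.10", "centos-8"),
                     use_this = c("ubuntu-18.04", "centos-7"),
                     stringsAsFactors = FALSE)
os <- "ubuntu-18.10"
if (os %in% lookup$actual) {
  new <- lookup$use_this[lookup$actual == os]
  if (length(new) == 1 && !is.na(new)) os <- new  # same sanity check as above
}
stopifnot(os == "ubuntu-18.04")
```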
download_source <- function() {
tf1 <- tempfile()
src_dir <- NULL
source_url <- paste0(arrow_repo, "src/arrow-", VERSION, ".zip")
try(
download.file(source_url, tf1, quiet = quietly),
silent = quietly
)
if (!file.exists(tf1)) {
# Try for an official release
try(
download.file(apache_src_url, tf1, quiet = quietly),
silent = quietly
)
}
if (file.exists(tf1)) {
cat("*** Successfully retrieved C++ source\n")
src_dir <- tempfile()
unzip(tf1, exdir = src_dir)
unlink(tf1)
# These scripts need to be executable
system(sprintf("chmod 755 %s/cpp/build-support/*.sh", src_dir))
options(.arrow.cleanup = c(getOption(".arrow.cleanup"), src_dir))
# The actual src is in cpp
src_dir <- paste0(src_dir, "/cpp")
}
src_dir
}
find_local_source <- function(arrow_home = Sys.getenv("ARROW_HOME", "..")) {
if (file.exists(paste0(arrow_home, "/cpp/src/arrow/api.h"))) {
# We're in a git checkout of arrow, so we can build it
cat("*** Found local C++ source\n")
return(paste0(arrow_home, "/cpp"))
} else {
return(NULL)
}
}
build_libarrow <- function(src_dir, dst_dir) {
# We'll need to compile R bindings with these libs, so delete any .o files
system("rm src/*.o", ignore.stdout = quietly, ignore.stderr = quietly)
# Set up make for parallel building
makeflags <- Sys.getenv("MAKEFLAGS")
if (makeflags == "") {
makeflags <- sprintf("-j%s", parallel::detectCores())
Sys.setenv(MAKEFLAGS = makeflags)
}
if (!quietly) {
cat("*** Building with MAKEFLAGS=", makeflags, "\n")
}
# Check for libarrow build dependencies:
# * cmake
cmake <- ensure_cmake()
build_dir <- tempfile()
options(.arrow.cleanup = c(getOption(".arrow.cleanup"), build_dir))
env_vars <- sprintf(
"SOURCE_DIR=%s BUILD_DIR=%s DEST_DIR=%s CMAKE=%s",
src_dir, build_dir, dst_dir, cmake
)
cat("**** arrow", ifelse(quietly, "", paste("with", env_vars)), "\n")
system(
paste(env_vars, "inst/build_arrow_static.sh"),
ignore.stdout = quietly, ignore.stderr = quietly
)
}
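The MAKEFLAGS defaulting in `build_libarrow()` can be reproduced on its own (note: `parallel::detectCores()` may return `NA` on unusual platforms, so the assertion below is kept loose):

```r
# Standalone sketch of the MAKEFLAGS defaulting used in build_libarrow():
Sys.unsetenv("MAKEFLAGS")
makeflags <- Sys.getenv("MAKEFLAGS")
if (makeflags == "") {
  makeflags <- sprintf("-j%s", parallel::detectCores())
}
stopifnot(nzchar(makeflags), startsWith(makeflags, "-j"))
```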
ensure_cmake <- function() {
cmake <- Sys.which("cmake")
if (!nzchar(cmake)) {
# If not found, download it
cat("**** cmake\n")
CMAKE_VERSION <- Sys.getenv("CMAKE_VERSION", "3.16.2")
cmake_binary_url <- paste0(
"https://github.com/Kitware/CMake/releases/download/v", CMAKE_VERSION,
"/cmake-", CMAKE_VERSION, "-Linux-x86_64.tar.gz"
)
cmake_tar <- tempfile()
cmake_dir <- tempfile()
try(
download.file(cmake_binary_url, cmake_tar, quiet = quietly),
silent = quietly
)
untar(cmake_tar, exdir = cmake_dir)
unlink(cmake_tar)
options(.arrow.cleanup = c(getOption(".arrow.cleanup"), cmake_dir))
cmake <- paste0(
cmake_dir,
"/cmake-", CMAKE_VERSION, "-Linux-x86_64",
"/bin/cmake"
)
}
cmake
}
#####
if (!file.exists(paste0(dst_dir, "/include/arrow/api.h"))) {
# If we're working in a local checkout and have already built the libs, we
# don't need to do anything. Otherwise,
# (1) Look for a prebuilt binary for this version
bin_file <- src_dir <- NULL
if (download_ok) {
bin_file <- download_binary()
}
if (!is.null(bin_file)) {
# Extract them
dir.create(dst_dir, showWarnings = !quietly, recursive = TRUE)
unzip(bin_file, exdir = dst_dir)
unlink(bin_file)
} else if (build_ok) {
# (2) Find source and build it
if (download_ok) {
src_dir <- download_source()
}
if (is.null(src_dir)) {
src_dir <- find_local_source()
}
if (!is.null(src_dir)) {
cat("*** Building C++ libraries\n")
build_libarrow(src_dir, dst_dir)
}
} else {
cat("*** Proceeding without C++ dependencies\n")
}
}
|
library(caret)
timestamp <- format(Sys.time(), "%Y_%m_%d_%H_%M")
model <- "oblique.tree"
#########################################################################
set.seed(1)
training <- twoClassSim(50, linearVars = 2)
testing <- twoClassSim(500, linearVars = 2)
trainX <- training[, -ncol(training)]
trainY <- training$Class
cctrl1 <- trainControl(method = "cv", number = 3, returnResamp = "all",
classProbs = TRUE,
summaryFunction = twoClassSummary)
cctrl2 <- trainControl(method = "LOOCV",
classProbs = TRUE, summaryFunction = twoClassSummary)
cctrl3 <- trainControl(method = "none",
classProbs = TRUE, summaryFunction = twoClassSummary)
cctrlR <- trainControl(method = "cv", number = 3, returnResamp = "all", search = "random")
set.seed(849)
test_class_cv_model <- train(trainX, trainY,
method = "oblique.tree",
trControl = cctrl1,
metric = "ROC",
preProc = c("center", "scale"))
set.seed(849)
test_class_cv_form <- train(Class ~ ., data = training,
method = "oblique.tree",
trControl = cctrl1,
metric = "ROC",
preProc = c("center", "scale"))
test_class_pred <- predict(test_class_cv_model, testing[, -ncol(testing)])
test_class_prob <- predict(test_class_cv_model, testing[, -ncol(testing)], type = "prob")
test_class_pred_form <- predict(test_class_cv_form, testing[, -ncol(testing)])
test_class_prob_form <- predict(test_class_cv_form, testing[, -ncol(testing)], type = "prob")
set.seed(849)
test_class_rand <- train(trainX, trainY,
method = "oblique.tree",
trControl = cctrlR,
tuneLength = 4)
set.seed(849)
test_class_loo_model <- train(trainX, trainY,
method = "oblique.tree",
trControl = cctrl2,
metric = "ROC",
preProc = c("center", "scale"))
set.seed(849)
test_class_none_model <- train(trainX, trainY,
method = "oblique.tree",
trControl = cctrl3,
tuneGrid = test_class_cv_model$bestTune,
metric = "ROC",
preProc = c("center", "scale"))
test_class_none_pred <- predict(test_class_none_model, testing[, -ncol(testing)])
test_class_none_prob <- predict(test_class_none_model, testing[, -ncol(testing)], type = "prob")
test_levels <- levels(test_class_cv_model)
if(!all(levels(trainY) %in% test_levels))
cat("wrong levels")
#########################################################################
tests <- grep("test_", ls(), fixed = TRUE, value = TRUE)
sInfo <- sessionInfo()
save(list = c(tests, "sInfo", "timestamp"),
file = file.path(getwd(), paste(model, ".RData", sep = "")))
q("no")
| /RegressionTests/Code/oblique.tree.R | no_license | Ragyi/caret | R | false | false | 3,103 | r |
with(aba82b38b17a34db288ba911df4ff00a9, {ROOT <- 'D:/SEMOSS/SEMOSS_v4.0.0_x64/semosshome/db/Atadata2__3b3e4a3b-d382-4e98-9950-9b4e8b308c1c/version/1c4fa71c-191c-4da9-8102-b247ffddc5d3';rm(list=ls())});
| /1c4fa71c-191c-4da9-8102-b247ffddc5d3/R/Temp/aAuqFYlg43Bp5.R | no_license | ayanmanna8/test | R | false | false | 201 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/bayes.lm.R
\name{bayes.lm}
\alias{bayes.lm}
\title{Bayesian inference for multiple linear regression}
\usage{
bayes.lm(
formula,
data,
subset,
na.action,
model = TRUE,
x = FALSE,
y = FALSE,
center = TRUE,
prior = NULL,
sigma = FALSE
)
}
\arguments{
\item{formula}{an object of class \code{\link[stats]{formula}} (or one that can be coerced to that class): a symbolic
description of the model to be fitted. The details of model specification are given under `Details'.}
\item{data}{an optional data frame, list or environment (or object coercible by \code{\link[base]{as.data.frame}} to a
data frame) containing the variables in the model. If not found in data, the variables are taken
from \code{environment(formula)}, typically the environment from which \code{bayes.lm} is called.}
\item{subset}{an optional vector specifying a subset of observations to be used in the fitting process.}
\item{na.action}{a function which indicates what should happen when the data contain \code{NA}s. The
default is set by the \code{\link[stats]{na.action}} setting of options, and is \code{\link[stats]{na.fail}}
if that is unset. The `factory-fresh' default is \code{\link[stats]{na.omit}}. Another possible value
is \code{NULL}, no action. Value \code{\link[stats]{na.exclude}} can be useful.}
\item{model, x, y}{logicals. If \code{TRUE} the corresponding components of the fit (the model frame, the model matrix, the response)
are returned.}
\item{center}{logical or numeric. If \code{TRUE} then the covariates will be centered on their means to make them
orthogonal to the intercept. This probably makes no sense for models with factors, and if the argument
is numeric then it contains a vector of covariate indices to be centered (not implemented yet).}
\item{prior}{A list containing \code{b0} (a vector of prior coefficients for \eqn{\beta}{beta}) and \code{V0} (a prior covariance matrix). If \code{NULL}, a flat prior is used.}
\item{sigma}{the population standard deviation of the errors. If \code{FALSE} then this is estimated from the residual sum of squares from the ML fit.}
}
\value{
\code{bayes.lm} returns an object of class \code{Bolstad}.
The \code{summary} function is used to obtain and print a summary of the results much like the usual
summary from a linear regression using \code{\link[stats]{lm}}.
The generic accessor functions \code{coef, fitted.values and residuals}
extract various useful features of the value returned by \code{bayes.lm}. Note that the residuals
are computed at the posterior mean values of the coefficients.
An object of class "Bolstad" from this function is a list containing at least the following components:
\item{coefficients}{a named vector of coefficients which contains the posterior mean}
\item{post.var}{a matrix containing the posterior variance-covariance matrix of the coefficients}
\item{post.sd}{sigma}
\item{residuals}{the residuals, that is response minus fitted values (computed at the posterior mean)}
\item{fitted.values}{the fitted mean values (computed at the posterior mean)}
\item{df.residual}{the residual degrees of freedom}
\item{call}{the matched call}
\item{terms}{the \code{\link[stats]{terms}} object used}
\item{y}{if requested, the response used}
\item{x}{if requested, the model matrix used}
\item{model}{if requested (the default), the model frame used}
\item{na.action}{(where relevant) information returned by \code{model.frame} on the special
handling of \code{NA}s}
}
\description{
bayes.lm is used to fit linear models in the Bayesian paradigm. It can be used to carry out regression,
single stratum analysis of variance and analysis of covariance (although these are not tested). This
documentation is shamelessly adapted from the \code{\link[stats]{lm}} documentation.
}
\details{
Models for \code{bayes.lm} are specified symbolically. A typical model has the form
\code{response ~ terms} where \code{response} is the (numeric) response vector and \code{terms} is a
series of terms which specifies a linear predictor for \code{response}. A terms specification of the
form \code{first + second} indicates all the terms in \code{first} together with all the terms in
\code{second} with duplicates removed. A specification of the form \code{first:second} indicates the
set of terms obtained by taking the interactions of all terms in \code{first} with all terms in
\code{second}. The specification \code{first*second} indicates the cross of \code{first} and \code{second}.
This is the same as \code{first + second + first:second}.
See \code{\link[stats]{model.matrix}} for some further details. The terms in the formula will be
re-ordered so that main effects come first, followed by the interactions, all second-order,
all third-order and so on: to avoid this pass a \code{terms} object as the formula
(see \code{\link[stats]{aov}} and \code{demo(glm.vr)} for an example).
A formula has an implied intercept term. To remove this use either \code{y ~ x - 1} or
\code{y ~ 0 + x}. See \code{\link[stats]{formula}} for more details of allowed formulae.
\code{bayes.lm} calls the lower level function \code{lm.fit} to obtain the maximum likelihood estimates
and to carry out the actual numerical computations. For programming only, you may consider doing
likewise.
\code{subset} is evaluated in the same way as variables in formula, that is first in data and
then in the environment of formula.
}
\examples{
data(bears)
bears = subset(bears, Obs.No==1)
bears = bears[,-c(1,2,3,11,12)]
bears = bears[ ,c(7, 1:6)]
bears$Sex = bears$Sex - 1
log.bears = data.frame(log.Weight = log(bears$Weight), bears[,2:7])
b0 = rep(0, 7)
V0 = diag(rep(1e6,7))
fit = bayes.lm(log(Weight)~Sex+Head.L+Head.W+Neck.G+Length+Chest.G, data = bears,
prior = list(b0 = b0, V0 = V0))
summary(fit)
print(fit)
## Dobson (1990) Page 9: Plant Weight Data:
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- gl(2, 10, 20, labels = c("Ctl","Trt"))
weight <- c(ctl, trt)
lm.D9 <- lm(weight ~ group)
bayes.D9 <- bayes.lm(weight ~ group)
summary(lm.D9)
summary(bayes.D9)
}
\keyword{misc}
| /man/bayes.lm.Rd | no_license | cran/Bolstad | R | false | true | 6,255 | rd |
# Read in data from file located in the working directory
ex_data <- read.csv("./household_power_consumption.txt",sep=";",stringsAsFactors = FALSE,
colClasses=c("character","character",rep("numeric",7)),na.strings = "?")
# Concatenate date and time columns
ex_data$DateTime <- paste(ex_data$Date , ex_data$Time)
# Convert DateTime column to DateTime format
ex_data$DateTime <- strptime(ex_data$DateTime , "%d/%m/%Y %H:%M:%S")
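As a side check (not part of the original script), the format string passed to `strptime` above can be verified on a sample value before touching the real data:

```r
# Sanity-check the date-time format used above on a sample string:
stopifnot(!is.na(strptime("1/2/2007 00:00:00", "%d/%m/%Y %H:%M:%S")))
```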
# Subset data based on date range 2007-02-01 - 2007-02-02
ex_data <- subset(ex_data, DateTime >= '2007-02-01 00:00:00' & DateTime <= '2007-02-02 23:59:59')
# Open new PNG file
png(file = "plot4.png", width = 480, height = 480, type = "cairo", bg = "transparent")
# Setup plotarea
par(mfrow=c(2,2))
# Generate 1st Graph
plot(ex_data$DateTime, ex_data$Global_active_power, type="l",
ylab = "Global Active Power", xlab="")
# Generate 2nd Graph
plot(ex_data$DateTime, ex_data$Voltage, type="l",
ylab = "Voltage", xlab="datetime")
# Generate 3rd Graph
plot(ex_data$DateTime, ex_data$Sub_metering_1, type="l", ylab = "Energy sub metering", xlab="")
lines(ex_data$DateTime, ex_data$Sub_metering_2, col="Red")
lines(ex_data$DateTime, ex_data$Sub_metering_3, col="Blue")
legend("topright",c("Sub_metering_1","Sub_metering_2","Sub_metering_3"),lty=1,
col=c("Black","Red","Blue"), bty="n")
# Generate 4th Graph
plot(ex_data$DateTime, ex_data$Global_reactive_power, type="l",
ylab = "Global_reactive_power", xlab="datetime",yaxt="n")
axis(2, at = seq(0, 0.5, by = 0.1))
# Close off output to file
dev.off()
| /plot4.R | no_license | rebecca44/ExData_Plotting1 | R | false | false | 1,576 | r |
context("use non-latin text (requires poppler-data)")
test_that("reading Chinese text", {
suppressMessages(text <- pdf_text("chinese.pdf"))
expect_length(text, 1)
expect_equal(Encoding(text), "UTF-8")
expect_match(text, "\u98A8\u96AA\u56E0\u7D20")
# Check fonts
fonts <- pdf_fonts("chinese.pdf")
expect_equal(nrow(fonts), 3)
expect_true(all(fonts$embedded))
expect_match(fonts$name[2], "MSung")
expect_match(fonts$name[3], "MHei")
})
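A base-R-only aside (no PDF required) showing why the `Encoding()` expectation above holds: strings created from `\u` escapes are marked UTF-8 by R itself:

```r
# Strings built from \u escapes carry a UTF-8 encoding mark in R:
x <- "\u98A8\u96AA"
stopifnot(identical(Encoding(x), "UTF-8"))
```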
| /tests/testthat/test-chinese.R | permissive | ropensci/pdftools | R | false | false | 456 | r |
library(carcass)
### Name: searches
### Title: Data of a searcher efficiency trial
### Aliases: searches
### Keywords: datasets
### ** Examples
data(searches)
str(searches)
search.efficiency(searches)
| /data/genthat_extracted_code/carcass/examples/searches.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 208 | r |
###-*- R -*-
###--- This "foo.Rin" script is only used to create the real script "foo.R" :
###--- We need to use such a long "real script" instead of a for loop,
###--- because "error --> jump_to_toplevel", i.e., outside any loop.
core.pkgs <- local({x <- installed.packages(.Library)
unname(x[x[,"Priority"] %in% "base", "Package"])})
core.pkgs <-
core.pkgs[- match(c("methods", "parallel", "tcltk", "stats4"), core.pkgs, 0)]
## move methods to the end because it has side effects (overrides primitives)
## stats4 requires methods
core.pkgs <- c(core.pkgs, "methods", "stats4")
stop.list <- vector("list", length(core.pkgs))
names(stop.list) <- core.pkgs
## -- Stop List for base/graphics/utils:
edit.int <- c("fix", "edit", "edit.data.frame", "edit.matrix",
"edit.default", "vi", "file.edit",
"emacs", "pico", "xemacs", "xedit", "RSiteSearch", "help.request")
## warning: readLines will work, but read all the rest of the script
## warning: trace will load methods.
## warning: rm and remove zap c0, l0, m0, df0
## warning: parent.env(NULL) <- NULL creates a loop
## warning: browseVignettes launches many browser processes.
## news, readNEWS, rtags are slow, and R-only code.
misc.int <- c("browser", "browseVignettes", "bug.report", "checkCRAN",
"getCRANmirrors", "lazyLoad", "menu", "repeat",
"readLines", "package.skeleton", "trace", "recover",
"rm", "remove", "parent.env<-",
"builtins", "data", "help", "news", "rtags", "vignette",
"installed.packages")
inet.list <- c(apropos("download\\."),
apropos("^url\\."), apropos("\\.url"),
apropos("packageStatus"),
paste(c("CRAN", "install", "update", "old"), "packages", sep="."))
socket.fun <- apropos("socket")
## "Interactive" ones:
dev.int <- c("X11", "x11", "pdf", "postscript",
"xfig", "jpeg", "png", "pictex", "quartz",
"svg", "tiff", "cairo_pdf", "cairo_ps",
"getGraphicsEvent")
misc.2 <- c("asS4", "help.start", "browseEnv", "make.packages.html",
"gctorture", "q", "quit", "restart", "try",
"read.fwf", "source",## << MM thinks "FIXME"
"data.entry", "dataentry", "de", apropos("^de\\."),
"chooseCRANmirror", "setRepositories", "select.list", "View")
if(.Platform$OS.type == "windows") {
dev.int <- c(dev.int, "bmp", "windows", "win.graph", "win.print",
"win.metafile")
misc.2 <- c(misc.2, "file.choose", "choose.files", "choose.dir",
"setWindowTitle", "loadRconsole",
"arrangeWindows", "getWindowsHandles")
}
stop.list[["base"]] <-
if(nchar(Sys.getenv("R_TESTLOTS"))) {## SEVERE TESTING, try almost ALL
c(edit.int, misc.int)
} else {
c(inet.list, socket.fun, edit.int, misc.int, misc.2)
}
## S4 group generics should not be called directly
## and doing so sometimes leads to infinite recursion
s4.group.generics <- c("Arith", "Compare", "Ops", "Logic", "Math", "Math2", "Summary", "Complex")
## warning: browseAll will tend to read all the script and/or loop forever
stop.list[["methods"]] <- c("browseAll", "recover", s4.group.generics)
stop.list[["tools"]] <- c("write_PACKAGES", # problems with Packages/PACKAGES
"update_PACKAGES",
"testInstalledBasic",
"testInstalledPackages", # runs whole suite
"readNEWS", # slow, pure R code
"findHTMLlinks", "pskill",
"texi2dvi", "texi2pdf", # hang on Windows
"getVignetteInfo", # very slow on large installation
"CRAN_package_db", # slow, pure R code
"CRAN_check_results",
"CRAN_check_details",
"CRAN_memtest_notes",
"summarize_CRAN_check_status"
)
stop.list[["grDevices"]] <- dev.int
stop.list[["utils"]] <- c("Rprof", "aspell", # hangs on Windows
"winProgressBar",
inet.list, socket.fun, edit.int, misc.int, misc.2)
sink("no-segfault.R")
if(.Platform$OS.type == "unix") cat('options(pager = "cat")\n')
if(.Platform$OS.type == "windows") cat('options(pager = "console")\n')
cat('options(error=expression(NULL))',
"# don't stop on error in batch\n##~~~~~~~~~~~~~~\n")
cat(".proctime00 <- proc.time()\n")
for (pkg in core.pkgs) {
cat("### Package ", pkg, "\n",
"### ", rep("~",nchar(pkg)), "\n", collapse="", sep="")
pkgname <- paste("package", pkg, sep=":")
this.pos <- match(paste("package", pkg, sep=":"), search())
lib.not.loaded <- is.na(this.pos)
if(lib.not.loaded) {
library(pkg, character = TRUE, warn.conflicts = FALSE)
cat("library(", pkg, ")\n")
}
this.pos <- match(paste("package", pkg, sep=":"), search())
for(nm in ls(pkgname)) { # <==> no '.<name>' functions are tested here
if(!(nm %in% stop.list[[pkg]]) &&
is.function(f <- get(nm, pos = pkgname))) {
cat("\n## ", nm, " :\n")
cat("c0 <- character(0)\n",
"l0 <- logical(0)\n",
"m0 <- matrix(1,0,0)\n",
"df0 <- as.data.frame(c0)\n", sep="")
cat("f <- get(\"",nm,"\", pos = '", pkgname, "')\n", sep="")
cat("f()\nf(NULL)\nf(,NULL)\nf(NULL,NULL)\n",
"f(list())\nf(l0)\nf(c0)\nf(m0)\nf(df0)\nf(FALSE)\n",
"f(list(),list())\nf(l0,l0)\nf(c0,c0)\n",
"f(df0,df0)\nf(FALSE,FALSE)\n",
sep="")
}
}
if(lib.not.loaded) {
detach(pos=this.pos)
cat("detach(pos=", this.pos, ")\n", sep="")
}
cat("\n##__________\n\n")
}
cat("proc.time() - .proctime00\n")
# file: /R-Portable-Win/tests/no-segfault.Rin (repo: sdownin/sequencer, license: permissive)
rm(list=ls())
library(ggplot2)  # needed for ggplot(), element_text(), margin() below
load("results/concomitant.combined.results2.rdata")
model.index = 3
predictions.all.glm = lapply(preds.glm[model.index], function(x) x@predictions[[1]])
predictions.all.glm = do.call(c,predictions.all.glm)
labels.all.glm = lapply(preds.glm[model.index], function(x) x@labels[[1]])
labels.all.glm = do.call(c,labels.all.glm)
deciles.glm = quantile(predictions.all.glm[labels.all.glm==2],seq(from=0,to=1,by=0.1))
ppvs.glm = mapply(function(a,b){
sum(labels.all.glm[predictions.all.glm<=b&predictions.all.glm>a]==2)/sum(predictions.all.glm<=b&predictions.all.glm>a)
},deciles.glm[-length(deciles.glm)],deciles.glm[-1])
predictions.all.xgb = lapply(preds.xgb[model.index], function(x) x@predictions[[1]])
predictions.all.xgb = do.call(c,predictions.all.xgb)
labels.all.xgb = lapply(preds.xgb[model.index], function(x) x@labels[[1]])
labels.all.xgb = do.call(c,labels.all.xgb)
deciles.xgb = quantile(predictions.all.xgb[labels.all.xgb==2],seq(from=0,to=1,by=0.1))
ppvs.xgb = mapply(function(a,b){
sum(labels.all.xgb[predictions.all.xgb<=b&predictions.all.xgb>a]==2)/sum(predictions.all.xgb<=b&predictions.all.xgb>a)
},deciles.xgb[-length(deciles.xgb)],deciles.xgb[-1])
load("results/results.3layer.gru.concomitant.subset3.rdata")
predictions.all.rnn = pred@predictions[[1]]
labels.all.rnn = pred@labels[[1]]
deciles.rnn = quantile(predictions.all.rnn[labels.all.rnn==1],seq(from=0,to=1,by=0.1))
ppvs.rnn = mapply(function(a,b){
sum(labels.all.rnn[predictions.all.rnn<=b&predictions.all.rnn>a]==1)/sum(predictions.all.rnn<=b&predictions.all.rnn>a)
},deciles.rnn[-length(deciles.rnn)],deciles.rnn[-1])
data = data.frame(percentile=rep(0:9*10,3),ppv=c(ppvs.glm,ppvs.xgb,ppvs.rnn),model=c(rep("glm",10),rep("xgb",10),rep("rnn",10)))
png("figures/patient_ppv.png",width=800,height=600)
p <- ggplot(data,aes(x=percentile,y=ppv,color=factor(model,levels=c("glm","xgb","rnn"))))+geom_line(size=1)+ylim(0,1)+
  scale_x_continuous(breaks=seq(from=0,to=90,by=10))+
  xlab("Percentile")+ylab("PPV")+
  labs(color="Model")+
  scale_color_manual(breaks=c("glm","xgb","rnn"),labels=c("GLM","XGBoost","RNN"),values=c("black","red","green"))+
  theme(axis.text=element_text(size=20),
        axis.title=element_text(size=24),
        legend.title=element_text(size=24),
        legend.text=element_text(size=20),
        legend.justification=c(1,0),
        legend.position=c(1,0),
        legend.box.margin=margin(c(10,10,10,10)))
print(p)  # explicit print() so the plot is written to the png device when run non-interactively
dev.off()
# file: /generate_figure_6.R (repo: harshblue/shockalert-documented, no license)
tcga_path<-"/project/huff/huff/immune_checkpoint/data"
exp_TP<-readr::read_rds(file.path(tcga_path,"pancancer23_exp_tumorPurity.ras.gz"))
gene_list_path <- "/project/huff/huff/immune_checkpoint/checkpoint/20171021_checkpoint"
gene_list <- read.table(file.path(gene_list_path, "gene_list_type"),header=T)
gene_list$symbol<-as.character(gene_list$symbol)
out_path<-"/project/huff/huff/immune_checkpoint/result_20171025/e_1_tumor_purity"
#-----------------------------------------------------------get exp and TP of genes in gene list
gene_list %>%
dplyr::select(symbol) %>%
dplyr::inner_join(exp_TP,by="symbol") ->genelist_exp_TP
#-----------------------------------------------------------get result
#calculate correlation between gene and purity
genelist_exp_TP %>%
dplyr::mutate(chrun=rnorm(length(.$purity),sd=1e-6)) %>%
dplyr::mutate(purity=purity+chrun) %>%
dplyr::mutate(exp=exp+chrun) %>%
dplyr::group_by(cancer_types,symbol) %>%
dplyr::do(
broom::tidy(
tryCatch(
cor.test(.$exp,.$purity,method = c("spearman")),
error = function(e){1},
warning = function(e){1})
)
) %>%
dplyr::mutate(spm_cor=estimate) %>%
dplyr::select(cancer_types,symbol,spm_cor,p.value,method) %>%
dplyr::ungroup()->genelist_exp_purity_spm_cor_result
#filter significant result by correlation and pvalue.
genelist_exp_purity_spm_cor_result %>%
dplyr::filter(abs(spm_cor)>=0.2 & p.value<=0.05) ->sig_gene_cor_tumorPurity
sig_gene_cor_tumorPurity %>%
readr::write_rds(file.path(out_path,"e_01_gene_sig_cor_tumorPurity.rds.gz"),compress = "gz")
#get rank
fn_gene_rank<-function(pattern){
pattern %>%
dplyr::rowwise() %>%
dplyr::do(
symbol=.$symbol,
rank=unlist(.[-1],use.names = F) %>% sum(na.rm=T)
) %>%
dplyr::ungroup() %>%
tidyr::unnest() %>%
dplyr::arrange(rank)
}
fn_cancer_rank<-function(pattern){
pattern %>%
dplyr::summarise_if(.predicate = is.numeric, dplyr::funs(sum(., na.rm = T))) %>%
tidyr::gather(key = cancer_types, value = rank) %>%
dplyr::arrange(rank)
}
sig_gene_cor_tumorPurity %>%
dplyr::mutate(cor_1=ifelse(spm_cor>0,1,-1)) %>%
dplyr::select(cancer_types,symbol,cor_1) %>%
tidyr::spread(cancer_types,cor_1) %>%
fn_gene_rank() %>%
dplyr::inner_join(gene_list,by="symbol") %>%
dplyr::select(symbol,rank,col,size) %>%
dplyr::arrange(col,abs(rank)) ->gene_rank
gene_rank$col %>% as.character() ->gene_rank$col
gene_rank$size %>% as.character() ->gene_rank$size
sig_gene_cor_tumorPurity %>%
dplyr::mutate(cor_1=ifelse(spm_cor>0,1,-1)) %>%
dplyr::select(cancer_types,symbol,cor_1) %>%
tidyr::spread(cancer_types,cor_1) %>%
fn_cancer_rank() ->cancer_rank
#draw pic
library(ggplot2)
sig_gene_cor_tumorPurity %>%
dplyr::mutate(dir=ifelse(spm_cor>0,"Pos","Neg")) %>%
dplyr::mutate(p.value=ifelse(p.value<=1e-10,1e-10,p.value)) %>%
ggplot(aes(x=cancer_types,y=symbol)) +
geom_point(aes(size = -log10(p.value), col = spm_cor))+
scale_color_gradient2(
low = "blue",
mid = "white",
high = "red",
midpoint = 0,
na.value = "white",
breaks = c(0.6,0.4,0.2,0,-0.2,-0.4,0.5,-0.6,-0.8,-1),
labels = c(0.6,0.4,0.2,0,-0.2,-0.4,0.5,-0.6,-0.8,-1),
name = "Spearman"
) +
scale_size_continuous(
name = "-Log10(P)",
breaks = c(2.5,5,7.5,10),
labels = c(2.5,5,7.5,10)
)+
scale_y_discrete(limits=gene_rank$symbol) +
scale_x_discrete(limits=cancer_rank$cancer_types)+
theme(
panel.background = element_rect(colour = "black",fill = "white"),
panel.grid = element_line(colour = "grey", linetype = "dashed"),
panel.grid.major = element_line(
colour = "grey",
linetype = "dashed",
size = 0.2
),
axis.title = element_blank(),
axis.ticks = element_line(color = "black"),
axis.text.y = element_text(colour = gene_rank$col,face = gene_rank$size),
axis.text.x = element_text(angle = 45,hjust = 1),
legend.text = element_text(size = 10),
legend.title = element_text(size = 12),
legend.key = element_rect(fill = "white", colour = "black")
)->p;p
ggsave(
filename = "fig_01_e_tumorPurity_sig_genes.pdf",
plot = p,
device = "pdf",
width = 8,
height = 8,
path = out_path
)
save(list=ls(),file=file.path(out_path,"e_01_tumorPurity.Rdata"))
# file: /e_01_tumorPurity.R (repo: flower1996/immune-cp, no license)
# For each covtype, test that
# * theta estimation is well performed (in the sense that a given theta is not to badly estimated: theta/10 < mean(theta_hat) < 10*theta )
# * MSE is not too hight
# @see http://journal.r-project.org/archive/2011-1/RJournal_2011-1_Wickham.pdf
require("testthat")
build.fun <- function(d, covtype, coef.trend=0, coef.cov = .1, coef.var=1, control=NULL,plot=FALSE, n.grid=10,...){
X <-data.frame(make.grid(c(0,1),d))
names(X) <- paste("x",1:d,sep="")
k <- km(design=X, response=(rep(0,2^d)),
covtype=covtype, coef.trend=coef.trend, coef.cov = rep(coef.cov,d), coef.var=rep(coef.var,d),
control=control,...)
if (plot) sectionview.km(k,center=rep(0.5,d))
# create an uncond. process with these properties
n.grid = min(n.grid,floor(40^(1/d))) # limit the number of simulated points to 40 (numerical issues)
.k.X = as.matrix(make.grid(seq(0,1,length=n.grid),d))
.k.Y = simulate(k,cond=FALSE,newdata=.k.X,checkNames=FALSE)
# create a basic interpolation function from the previous uncond. process
f <- function(x) {
x=array(x)
dist= (rowSums((as.matrix(.k.X,ncol=d)-t(matrix(as.matrix(x,ncol=d),ncol=nrow(.k.X),nrow=d)))^2))^.5
if(min(dist)<1E-6) return(.k.Y[which.min(dist)])
return(sum(.k.Y/dist)/sum(1/dist))
# sort.dist=sort(dist, index.return = TRUE)
#
# if (sort.dist$x[1]<1E-6) return(.k.Y[sort.dist$ix[1]])
#
# x1=.k.X[sort.dist$ix[1],]
# x2=.k.X[sort.dist$ix[2],]
# y1=.k.Y[sort.dist$ix[1]]
# y2=.k.Y[sort.dist$ix[2]]
# return( (y1*sort.dist$x[2]+y2*sort.dist$x[1]) / (sort.dist$x[1]+sort.dist$x[2]) )
}
if (plot) {
sectionview.fun(fun=f,center=rep(0.5,d),add=TRUE,npoints=10*n.grid,col='red')
if (d==1) points(.k.X,.k.Y,col='red')
}
return(f)
}
# plot(Vectorize(build.fun(d=1,covtype="gauss")))
# plot(Vectorize(build.fun(d=1,covtype="matern3_2")))
# plot(Vectorize(build.fun(d=1,covtype="exp")))
# sectionview3d.fun(build.fun(d=2,covtype="gauss",coef.cov=.9))
context("Testing kriging range estimation")
theta_estim <- function(d,npoints,covtype,theta,control=NULL,plot=FALSE,msg=FALSE,...) {
f = build.fun(d=d,
covtype=covtype,coef.cov=theta,
control=control,plot=plot,n.grid=npoints,...)
estim = estim.theta(d=d,npoints=npoints,f=f,
covtype=covtype,
control=control,...)
# to clean bad kriging (= theta << 1)
estim = estim[which(rowSums(estim)>0.0001),]
if (msg) {
cat(paste(sep="","Kriging (cov ",covtype,") ",d,"D function, on ",npoints," conditional points, ",list(...),"\n"))
print(summary(estim,na.rm=TRUE))
}
if (plot) hist(estim)
return(estim)
}
covtypes=c("gauss","exp","matern3_2","matern5_2")
for (covtype in covtypes) {
theta=0.1
theta_est = theta_estim(d=1, npoints=20, covtype = covtype, theta = theta, control=list(logLikFailOver=TRUE))
test_that(desc=paste("CovEstimTheta:1D,covtype=",covtype,",theta=",theta),expect_that( sign(mean(theta_est) - 0.1*theta),equals(1)))
test_that(desc=paste("CovEstimTheta:1D,covtype=",covtype,",theta=",theta),expect_that( sign(10*theta - mean(theta_est)),equals(1)))
theta_est = theta_estim(d=2, npoints=10, covtype = covtype, theta = theta, control=list(logLikFailOver=TRUE))
test_that(desc=paste("CovEstimTheta:2D,covtype=",covtype,",theta=",theta),expect_that( sign(mean(theta_est) - 0.1*theta),equals(1)))
test_that(desc=paste("CovEstimTheta:2D,covtype=",covtype,",theta=",theta),expect_that( sign(10*theta - mean(theta_est)),equals(1)))
}
context("Testing kriging MSE")
mse_estim <- function(d,npoints,covtype,theta,control=NULL,plot=FALSE,msg=FALSE,...){
f = build.fun(d=d,
covtype=covtype,
control=control,plot=plot,...)
estim = mse(d=d,npoints=npoints,f=f,
covtype=covtype,
control=control,...)
if (msg) {
cat(paste(sep="","Kriging (cov ",covtype,") ",d,"D function, on ",npoints," conditional points, ",list(...),"\n"))
print(summary(estim,na.rm=TRUE))
}
if (plot) hist(estim)
return(estim)
}
covtypes=c("gauss","exp","matern3_2","matern5_2")
for (covtype in covtypes) {
theta=0.1
mse_est = mse_estim(d=1, npoints=20, covtype = covtype, theta = theta, control=list(logLikFailOver=TRUE))
test_that(desc=paste("CovEstimMSE:1D,covtype=",covtype,",theta=",theta),expect_that( sign(mean(mse_est,na.rm=TRUE) - 10),equals(-1)))
}
covtypes=c("gauss","exp","matern3_2","matern5_2")
for (covtype in covtypes) {
theta=0.1
mse_est = mse_estim(d=2, npoints=10, covtype = covtype, theta = theta, control=list(logLikFailOver=TRUE))
test_that(desc=paste("CovEstimMSE:2D,covtype=",covtype,",theta=",theta),expect_that( sign(mean(mse_est,na.rm=TRUE) - 10),equals(-1)))
}
# file: /tests-opt/test-covtype.R (repo: IRSN/DiceKriging, no license)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/applicationinsights_operations.R
\name{applicationinsights_list_components}
\alias{applicationinsights_list_components}
\title{Lists the auto-grouped, standalone, and custom components of the
application}
\usage{
applicationinsights_list_components(
ResourceGroupName,
MaxResults = NULL,
NextToken = NULL,
AccountId = NULL
)
}
\arguments{
\item{ResourceGroupName}{[required] The name of the resource group.}
\item{MaxResults}{The maximum number of results to return in a single call. To retrieve
the remaining results, make another call with the returned \code{NextToken}
value.}
\item{NextToken}{The token to request the next page of results.}
\item{AccountId}{The AWS account ID for the resource group owner.}
}
\description{
Lists the auto-grouped, standalone, and custom components of the application.
See \url{https://www.paws-r-sdk.com/docs/applicationinsights_list_components/} for full documentation.
}
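% A minimal usage sketch (assumes the paws SDK is installed and AWS
% credentials are configured; the resource group name is hypothetical):
\examples{
\dontrun{
svc <- applicationinsights()
svc$list_components(
  ResourceGroupName = "my-resource-group",
  MaxResults = 20
)
}
}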
\keyword{internal}
# Time course plots -----------------------------------------------------------
#
# - This script plots the raw eye-tracking time course data and the
# model estimates from a growth curve analysis (GCA)
#
# -----------------------------------------------------------------------------
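# The GCA fits plotted below are on the empirical-logit scale (the
# "Empirical logit of looks to target" axis). For reference, a minimal
# sketch of that transform (standard definition; `y` = target fixations
# per bin and `n` = total samples per bin are illustrative names, not
# columns assumed to exist in stress50):
emp_logit <- function(y, n) log((y + 0.5) / (n - y + 0.5))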
# Source scripts and load models ----------------------------------------------
source(here::here("scripts", "00_load_libs.R"))
source(here::here("scripts", "01_helpers.R"))
#source(here::here("scripts", "02_load_data.R"))
stress50 <- read_csv(here("data", "clean", "stress_50ms_final_onsetc3updated.csv"))
# Get path to saved models
gca_mods_path <- here("mods", "stress", "gca", "LL_changes")
# Load models as lists and store in global environment
load(paste0(gca_mods_path, "/gca_mon_mods.Rdata"))
load(paste0(gca_mods_path, "/gca_l2_mods.Rdata"))
load(paste0(gca_mods_path, "/nested_model_l2.Rdata"))
load(paste0(gca_mods_path, "/model_preds.Rdata"))
load(paste0(gca_mods_path, "/gca_l2_mods_ints.Rdata"))
# load(paste0(gca_mods_path, "/model_preds_ints.Rdata"))
load(paste0(gca_mods_path, "/model_preds_l1ot2.Rdata"))
list2env(gca_mon_mods, globalenv())
list2env(gca_l2_mods, globalenv())
list2env(model_preds, globalenv())
list2env(gca_l2_mods_ints, globalenv())
list2env(model_preds_l1ot2, globalenv())
# Set path for saving figs
figs_path <- here("figs", "stress", "gca", "LL_changes")
# -----------------------------------------------------------------------------
# Plot raw data ---------------------------------------------------------------
# stress50$cond <- as.factor(as.character(stress50$cond))
#
# condition_names <- c(
# `1` = 'Present',
# `2` = 'Preterit'
# )
#
#
# stress_p1 <- stress50 %>%
# #na.omit(.) %>%
# filter(., time_zero >= -10, time_zero <= 20) %>%
# mutate(., l1 = fct_relevel(l1, "es", "en", "ma")) %>%
# ggplot(., aes(x = time_zero, y = target_prop, fill = l1)) +
# facet_grid(. ~ cond, labeller = as_labeller(condition_names)) +
# geom_hline(yintercept = 0.5, color = 'white', size = 3) +
# geom_vline(xintercept = 0, color = 'grey40', lty = 3) +
# geom_vline(xintercept = 4, color = 'grey40', lty = 3) +
# stat_summary(fun.y = "mean", geom = "line", size = 1) +
# stat_summary(fun.data = mean_cl_boot, geom = 'pointrange', size = 0.5,
# stroke = 0.5, pch = 21) +
# scale_fill_brewer(palette = 'Set1', name = "L1",
# labels = c("ES", "EN", "MA")) +
# scale_x_continuous(breaks = c(-10, 0, 10, 20),
# labels = c("-500", "0", "500", "1000")) +
# labs(y = 'Proportion of target fixations',
# x = 'Time relative to target syllable offset (ms)',
# caption = "Mean +/- 95% CI") +
# annotate("text", x = 3.3, y = 0.02, label = '200ms',
# angle = 90, size = 3, hjust = 0) +
# theme_grey(base_size = 12, base_family = "Times")
#
# l1_names <- c(
# `es` = 'Spanish speakers',
# `en` = 'English speakers',
# `ma` = 'Mandarin speakers'
# )
#
# stress_p2 <- stress50 %>%
# #na.omit(.) %>%
# filter(., time_zero >= -10, time_zero <= 20) %>%
# mutate(., l1 = fct_relevel(l1, "es", "en", "ma")) %>%
# ggplot(., aes(x = time_zero, y = target_prop, fill = cond)) +
# facet_grid(. ~ l1, labeller = as_labeller(l1_names)) +
# geom_hline(yintercept = 0.5, color = 'white', size = 3) +
# geom_vline(xintercept = 0, color = 'grey40', lty = 3) +
# geom_vline(xintercept = 4, color = 'grey40', lty = 3) +
# stat_summary(fun.y = "mean", geom = "line", size = 1) +
# stat_summary(fun.data = mean_cl_boot, geom = 'pointrange', size = 0.5,
# stroke = 0.5, pch = 21) +
# scale_fill_brewer(palette = 'Set1', name = "Tense",
# labels = c("Present", "Preterit")) +
# scale_x_continuous(breaks = c(-10, 0, 10, 20),
# labels = c("-500", "0", "500", "1000")) +
# labs(y = 'Proportion of target fixations',
# x = 'Time relative to target syllable offset (ms)',
# caption = "Mean +/- 95% CI") +
# annotate("text", x = 3.3, y = 0.02, label = '200ms',
# angle = 90, size = 3, hjust = 0) +
# theme_grey(base_size = 12, base_family = "Times")
#
#
# ggsave('stress_l1.png',
# plot = stress_p1, dpi = 600, device = "png",
# path = figs_path,
# height = 3.5, width = 8.5, units = 'in')
# ggsave('stress_tense.png',
# plot = stress_p1, dpi = 600, device = "png",
# path = figs_path,
# height = 3.5, width = 8.5, units = 'in')
#
#
#
# # -----------------------------------------------------------------------------
# Plot GCA nested-interactions --------------------------------------------------------------------
# L2 speakers
# All three variables
base_l2_ints <- fits_all_l2_ints %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Present", "Preterite"),
# Stress = fct_relevel(Stress, 'Present'),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) + #fill = Stress, color = Stress,
facet_grid(`Spanish use` ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
geom_line(size = 0.35) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
# scale_color_brewer(palette = "Set1", name = "Condition") +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points")) #cm
library(grid)
library(gridExtra)
stress_l2_ints <- grid.arrange(base_l2_ints, #en_gca_plot <-
bottom = textGrob('Spanish use', rot = 270,
x = 0.98, y = 4.3, gp = gpar(fontsize = 9,
fontfamily = 'Times')))
ggsave(paste0(figs_path, "/stress_l2_ints.png"), stress_l2_ints, width = 180,
height = 120, units = "mm", dpi = 600)
# Proficiency and use
prof_plot <- fits_all_l2_ints %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Oxytone", "Paroxytone"),
# Stress = fct_relevel(Stress, 'Paroxytone')
) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
# geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
stat_summary(fun = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") +
theme(
legend.position = 'bottom')
use_plot <- fits_all_l2_ints %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Oxytone", "Paroxytone"),
# Stress = fct_relevel(Stress, 'Paroxytone')
) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish use`)) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
# geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
stat_summary(fun = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") +
theme(
legend.position = 'bottom')
ggsave(paste0(figs_path, "/prof_ints.png"), prof_plot, width = 180,
height = 120, units = "mm", dpi = 600)
ggsave(paste0(figs_path, "/use_ints.png"), use_plot, width = 180,
height = 120, units = "mm", dpi = 600)
# 2-way interactions
profxl1 <- fits_all_l2_ints %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) +
facet_grid(. ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
stat_summary(fun = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points"))
usexl1 <- fits_all_l2_ints %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish use`)) +
facet_grid(. ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
stat_summary(fun = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points"))
delexuse <- fits_all_l2_ints %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) +
facet_grid(. ~ `Spanish use`) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
stat_summary(fun = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
ggtitle("Spanish use") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points"),
plot.title = element_text(hjust = 0.5, size = 8))
ggsave(paste0(figs_path, "/prof_l1_ints.png"), profxl1, width = 180,
height = 120, units = "mm", dpi = 600)
ggsave(paste0(figs_path, "/use__l1_ints.png"), usexl1, width = 180,
height = 120, units = "mm", dpi = 600)
ggsave(paste0(figs_path, "/prof_use_ints.png"), delexuse, width = 200,
height = 80, units = "mm", dpi = 600)
# Plot GCA 3-way interaction straight away --------------------------------------------------------------------
# Monolingual Spanish speakers
stress_mon <- fits_all_mon %>%
# mutate(condition = if_else(condition_sum == 1, "Present", "Preterit"),
# condition = fct_relevel(condition, "Present"))%>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin)) + #, fill = condition, color = condition
geom_hline(yintercept = 0, lty = 3, size = 0.4) +
geom_vline(xintercept = 4, lty = 3, size = 0.4) +
stat_summary(fun = "mean", geom = "line", size = 1) +
geom_ribbon(alpha = 0.2, color = "grey", show.legend = F) +
# stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
# alpha = 0.5) +
# geom_point(aes(color = condition), size = 1.3, show.legend = F) +
# geom_point(aes(color = condition), size = 0.85, show.legend = F) +
scale_x_continuous(breaks = c(-4, 0, 4, 8, 12),
labels = c("-200", "0", "200", "400", "600")) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_big + legend_adj #+ labs(color = "Condition")
# L2 speakers
base_l2 <- fits_all_l2 %>%
mutate(Proficiency = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Present", "Preterite"),
# Stress = fct_relevel(Stress, 'Present'),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = Proficiency)) + #fill = Stress, color = Stress,
facet_grid(`Spanish use` ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
geom_line(size = 0.35) +
scale_x_continuous(breaks = c(-4, 0, 4, 8, 12),
labels = c("-200", "0", "200", "400", "600")) +
# scale_color_brewer(palette = "Set1", name = "Condition") +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to target syllable offset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points")) #cm
stress_l2 <- grid.arrange(base_l2, #en_gca_plot <-
bottom = textGrob('Spanish use', rot = 270,
x = .96, y = 4, gp = gpar(fontsize = 9,
fontfamily = 'Times')))
ggsave(paste0(figs_path, "/stress_mon.png"), stress_mon, width = 180,
height = 120, units = "mm", dpi = 600)
ggsave(paste0(figs_path, "/stress_l2.png"), stress_l2, width = 180,
height = 120, units = "mm", dpi = 600)
# --------------------------------------------------------------------
# All three variables
base_l2_l1ot2 <- fits_all_l2_l1ot2 %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Present", "Preterite"),
# Stress = fct_relevel(Stress, 'Present'),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) + #fill = Stress, color = Stress,
facet_grid(`Spanish use` ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
geom_line(size = 0.35) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
# scale_color_brewer(palette = "Set1", name = "Condition") +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points")) #cm
stress_l2_l1ot2 <- grid.arrange(base_l2_l1ot2,
bottom = textGrob('Spanish use', rot = 270, x = 0.98, y = 4.3,
gp = gpar(fontsize = 9, fontfamily = 'Times')))
ggsave(paste0(figs_path, "/stress_l2_l1ot2.png"), stress_l2_l1ot2, width = 180,
height = 120, units = "mm", dpi = 600)
# Proficiency and use
prof_plot <- fits_all_l2_l1ot2 %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Oxytone", "Paroxytone"),
# Stress = fct_relevel(Stress, 'Paroxytone')
) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
# geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
stat_summary(fun = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") +
theme(
legend.position = 'bottom')
use_plot <- fits_all_l2_l1ot2 %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Oxytone", "Paroxytone"),
# Stress = fct_relevel(Stress, 'Paroxytone')
) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish use`)) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
# geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
stat_summary(fun = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") +
theme(
legend.position = 'bottom')
ggsave(paste0(figs_path, "/prof_l1ot2.png"), prof_plot, width = 180,
height = 120, units = "mm", dpi = 600)
ggsave(paste0(figs_path, "/use_l1ot2.png"), use_plot, width = 180,
height = 120, units = "mm", dpi = 600)
# 2-way interactions
profxl1 <- fits_all_l2_l1ot2 %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) +
facet_grid(. ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
stat_summary(fun = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points"))
usexl1 <- fits_all_l2_l1ot2 %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish use`)) +
facet_grid(. ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
stat_summary(fun = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points"))
delexuse <- fits_all_l2_l1ot2 %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) +
facet_grid(. ~ `Spanish use`) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
stat_summary(fun = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
ggtitle("Spanish use") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points"),
plot.title = element_text(hjust = 0.5, size = 8))
ggsave(paste0(figs_path, "/prof_l1_l1ot2.png"), profxl1, width = 180,
height = 120, units = "mm", dpi = 600)
ggsave(paste0(figs_path, "/use__l1_l1ot2.png"), usexl1, width = 180,
height = 120, units = "mm", dpi = 600)
ggsave(paste0(figs_path, "/prof_use_l1ot2.png"), delexuse, width = 200,
height = 80, units = "mm", dpi = 600)
# Switch prof and use in 3-way plot
fits_all_l2_l1ot2 %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Present", "Preterite"),
# Stress = fct_relevel(Stress, 'Present'),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish use`)) + #fill = Stress, color = Stress,
facet_grid(`Spanish proficiency` ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
geom_line(size = 0.35) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
# scale_color_brewer(palette = "Set1", name = "Condition") +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points"))
# overlay model on empirical data
fits_all_l2_overlay <- fits_all_l2_l1ot2
fits_all_l2_overlay$target_prop <- exp(fits_all_l2_overlay$fit)
fits_all_l2_overlay$l1 <- fits_all_l2_overlay$l1_sum
fits_all_l2_overlay %<>% mutate(l1 = if_else(l1 == -1, "English", "Chinese"),
l1 = fct_relevel(l1, "English", "Chinese"))
gca <- fits_all_l2_overlay %>%
# mutate(`Spanish proficiency` = as.factor(DELE_z),
# `Spanish use` = as.factor(use_z)) %>%
ggplot(., aes(x = time_zero, y = target_prop, #ymax = ymax, ymin = ymin,
lty = l1)) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
# geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
stat_summary(fun = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
labs(x = "Time (ms) relative to verb's final syllable onset",
y = "Exponential value of\nfixation probability",
lty = "L1") +
theme_grey(base_size = 10, base_family = "Times") +
theme(
legend.position = 'bottom')
tc <- stress50 %>%
filter(time_zero > -5 & time_zero < 5, l1 != 'es') %>%
ggplot(., aes(x = time_zero, y = target_count, lty = l1)) +
geom_vline(xintercept = 4, lty = 3, color = 'grey40') +
# geom_hline(yintercept = 0.5, lty = 3) +
stat_summary(fun = mean, geom = "line") +
scale_linetype_discrete(name = "L1",
breaks = c('en', 'ma'),
labels = c("English", 'Chinese')) +
scale_x_continuous(breaks = c(-8, -6, -4, -2, 0, 2, 4, 6, 8),
labels = c("-400", "-300", "-200", "-100", "0", "100", "200", "300", "400")) +
# ggtitle("Time course per verbal tense") +
# xlab("Time in 50 ms bins (0 = marker time before accounting for 200 ms processing)") +
labs(y = "Count of fixations\non target") +
# annotate("text", x = 3.65, y = 0.53, label = '200ms',
# angle = 90, size = 3, hjust = 0) +
theme_grey(base_size = 10, base_family = "Times") +
theme(legend.position = 'none',
axis.title.x=element_blank(),
axis.text.x=element_blank())
grid.newpage()
grid.draw(rbind(ggplotGrob(tc), ggplotGrob(gca), size = "last"))
stress50 %>%
filter(time_zero > -5 & time_zero < 5, l1 != 'es') %>%
ggplot(., aes(x = time_zero, y = target_count, color = l1)) +
geom_vline(xintercept = 4, lty = 3) +
#geom_hline(yintercept = 0.5, lty = 3) +
stat_summary(fun = mean, geom = "line") +
geom_smooth(method = "loess", se = FALSE) +
# geom_line(data = fits_all_l2_overlay,
# aes(stat_summary(fun.y = "mean", geom = "line", size = .5)))
scale_color_discrete(name="L1",
breaks = c('en', 'ma'),
labels = c("English", 'Chinese')) +
scale_x_continuous(breaks = c(-8, -6, -4, -2, 0, 2, 4, 6, 8),
labels = c("-400", "-300", "-200", "-100", "0", "100", "200", "300", "400")) +
# ggtitle("Time course per verbal tense") +
# xlab("Time in 50 ms bins (0 = marker time before accounting for 200 ms processing)") +
labs(y = "Proportion of fixations on target") +
# annotate("text", x = 3.65, y = 0.53, label = '200ms',
# angle = 90, size = 3, hjust = 0) +
theme_grey(base_size = 10, base_family = "Times") +
theme(axis.title.x=element_blank(),
axis.text.x=element_blank())
| /scripts/stress/analyses/06_plots_gca_use_LL.R | no_license | laurafdeza/Dissertation | R | false | false | 29,474 | r | # Time course plots -----------------------------------------------------------
#
# - This script plots the raw eye tracking time course data and the
# model estimates from a growth curve analysis
#
# -----------------------------------------------------------------------------
# Source scripts and load models ----------------------------------------------
source(here::here("scripts", "00_load_libs.R"))
source(here::here("scripts", "01_helpers.R"))
#source(here::here("scripts", "02_load_data.R"))
stress50 <- read_csv(here("data", "clean", "stress_50ms_final_onsetc3updated.csv"))
# Get path to saved models
gca_mods_path <- here("mods", "stress", "gca", "LL_changes")
# Load models as lists and store in global environment
load(paste0(gca_mods_path, "/gca_mon_mods.Rdata"))
load(paste0(gca_mods_path, "/gca_l2_mods.Rdata"))
load(paste0(gca_mods_path, "/nested_model_l2.Rdata"))
load(paste0(gca_mods_path, "/model_preds.Rdata"))
load(paste0(gca_mods_path, "/gca_l2_mods_ints.Rdata"))
# load(paste0(gca_mods_path, "/model_preds_ints.Rdata"))
load(paste0(gca_mods_path, "/model_preds_l1ot2.Rdata"))
list2env(gca_mon_mods, globalenv())
list2env(gca_l2_mods, globalenv())
list2env(model_preds, globalenv())
list2env(gca_l2_mods_ints, globalenv())
list2env(model_preds_l1ot2, globalenv())
# Set path for saving figs
figs_path <- here("figs", "stress", "gca", "LL_changes")
# -----------------------------------------------------------------------------
# Plot raw data ---------------------------------------------------------------
# stress50$cond <- as.factor(as.character(stress50$cond))
#
# condition_names <- c(
# `1` = 'Present',
# `2` = 'Preterit'
# )
#
#
# stress_p1 <- stress50 %>%
# #na.omit(.) %>%
# filter(., time_zero >= -10, time_zero <= 20) %>%
# mutate(., l1 = fct_relevel(l1, "es", "en", "ma")) %>%
# ggplot(., aes(x = time_zero, y = target_prop, fill = l1)) +
# facet_grid(. ~ cond, labeller = as_labeller(condition_names)) +
# geom_hline(yintercept = 0.5, color = 'white', size = 3) +
# geom_vline(xintercept = 0, color = 'grey40', lty = 3) +
# geom_vline(xintercept = 4, color = 'grey40', lty = 3) +
# stat_summary(fun.y = "mean", geom = "line", size = 1) +
# stat_summary(fun.data = mean_cl_boot, geom = 'pointrange', size = 0.5,
# stroke = 0.5, pch = 21) +
# scale_fill_brewer(palette = 'Set1', name = "L1",
# labels = c("ES", "EN", "MA")) +
# scale_x_continuous(breaks = c(-10, 0, 10, 20),
# labels = c("-500", "0", "500", "1000")) +
# labs(y = 'Proportion of target fixations',
# x = 'Time relative to target syllable offset (ms)',
# caption = "Mean +/- 95% CI") +
# annotate("text", x = 3.3, y = 0.02, label = '200ms',
# angle = 90, size = 3, hjust = 0) +
# theme_grey(base_size = 12, base_family = "Times")
#
# l1_names <- c(
# `es` = 'Spanish speakers',
# `en` = 'English speakers',
# `ma` = 'Mandarin speakers'
# )
#
# stress_p2 <- stress50 %>%
# #na.omit(.) %>%
# filter(., time_zero >= -10, time_zero <= 20) %>%
# mutate(., l1 = fct_relevel(l1, "es", "en", "ma")) %>%
# ggplot(., aes(x = time_zero, y = target_prop, fill = cond)) +
# facet_grid(. ~ l1, labeller = as_labeller(l1_names)) +
# geom_hline(yintercept = 0.5, color = 'white', size = 3) +
# geom_vline(xintercept = 0, color = 'grey40', lty = 3) +
# geom_vline(xintercept = 4, color = 'grey40', lty = 3) +
# stat_summary(fun.y = "mean", geom = "line", size = 1) +
# stat_summary(fun.data = mean_cl_boot, geom = 'pointrange', size = 0.5,
# stroke = 0.5, pch = 21) +
# scale_fill_brewer(palette = 'Set1', name = "Tense",
# labels = c("Present", "Preterit")) +
# scale_x_continuous(breaks = c(-10, 0, 10, 20),
# labels = c("-500", "0", "500", "1000")) +
# labs(y = 'Proportion of target fixations',
# x = 'Time relative to target syllable offset (ms)',
# caption = "Mean +/- 95% CI") +
# annotate("text", x = 3.3, y = 0.02, label = '200ms',
# angle = 90, size = 3, hjust = 0) +
# theme_grey(base_size = 12, base_family = "Times")
#
#
# ggsave('stress_l1.png',
# plot = stress_p1, dpi = 600, device = "png",
# path = figs_path,
# height = 3.5, width = 8.5, units = 'in')
# ggsave('stress_tense.png',
#        plot = stress_p2, dpi = 600, device = "png",
# path = figs_path,
# height = 3.5, width = 8.5, units = 'in')
#
#
#
# # -----------------------------------------------------------------------------
# Plot GCA nested-interactions --------------------------------------------------------------------
# L2 speakers
# All three variables
base_l2_ints <- fits_all_l2_ints %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Present", "Preterite"),
# Stress = fct_relevel(Stress, 'Present'),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) + #fill = Stress, color = Stress,
facet_grid(`Spanish use` ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
geom_line(size = 0.35) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
# scale_color_brewer(palette = "Set1", name = "Condition") +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points")) #cm
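The y-axis of these plots is labeled "Empirical logit of looks to target"; as a reference, here is a minimal base-R sketch of that transform (the +0.5 smoothing constant is an assumption about how `fit` was computed upstream):

```r
# Empirical logit: log-odds of target fixations, with 0.5 added to both
# counts so bins with 0% or 100% looks stay finite. Values are illustrative.
hits   <- 37   # fixations on target in one time bin
trials <- 50   # total fixations in that bin
elog <- log((hits + 0.5) / (trials - hits + 0.5))
elog  # ~1.02; positive values mean more looks to target than elsewhere
```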
library(grid)
library(gridExtra)
stress_l2_ints <- grid.arrange(base_l2_ints, #en_gca_plot <-
bottom = textGrob('Spanish use', rot = 270,
x = 0.98, y = 4.3, gp = gpar(fontsize = 9,
fontfamily = 'Times')))
ggsave(paste0(figs_path, "/stress_l2_ints.png"), stress_l2_ints, width = 180,
height = 120, units = "mm", dpi = 600)
# Proficiency and use
prof_plot <- fits_all_l2_ints %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Oxytone", "Paroxytone"),
# Stress = fct_relevel(Stress, 'Paroxytone')
) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
# geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
stat_summary(fun.y = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") +
theme(
legend.position = 'bottom')
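`mean_cl_boot` (ggplot2's wrapper around `Hmisc::smean.cl.boot`) supplies a nonparametric bootstrap confidence interval for the mean at each time bin; a self-contained base-R sketch of the same idea:

```r
# Percentile bootstrap CI for a mean (toy data, 1000 resamples).
set.seed(42)
x <- rbinom(100, size = 1, prob = 0.6)  # e.g. target fixations in one bin
boots <- replicate(1000, mean(sample(x, replace = TRUE)))
ci <- quantile(boots, c(0.025, 0.975))  # bounds analogous to ymin/ymax
```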
use_plot <- fits_all_l2_ints %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Oxytone", "Paroxytone"),
# Stress = fct_relevel(Stress, 'Paroxytone')
) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish use`)) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
# geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
stat_summary(fun.y = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") +
theme(
legend.position = 'bottom')
ggsave(paste0(figs_path, "/prof_ints.png"), prof_plot, width = 180,
height = 120, units = "mm", dpi = 600)
ggsave(paste0(figs_path, "/use_ints.png"), use_plot, width = 180,
height = 120, units = "mm", dpi = 600)
# 2-way interactions
profxl1 <- fits_all_l2_ints %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) +
facet_grid(. ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
stat_summary(fun.y = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points"))
usexl1 <- fits_all_l2_ints %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish use`)) +
facet_grid(. ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
stat_summary(fun.y = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points"))
delexuse <- fits_all_l2_ints %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) +
facet_grid(. ~ `Spanish use`) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
stat_summary(fun.y = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
ggtitle("Spanish use") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points"),
plot.title = element_text(hjust = 0.5, size = 8))
ggsave(paste0(figs_path, "/prof_l1_ints.png"), profxl1, width = 180,
height = 120, units = "mm", dpi = 600)
ggsave(paste0(figs_path, "/use__l1_ints.png"), usexl1, width = 180,
height = 120, units = "mm", dpi = 600)
ggsave(paste0(figs_path, "/prof_use_ints.png"), delexuse, width = 200,
height = 80, units = "mm", dpi = 600)
# Plot GCA 3-way interaction straight away --------------------------------------------------------------------
# Monolingual Spanish speakers
stress_mon <- fits_all_mon %>%
# mutate(condition = if_else(condition_sum == 1, "Present", "Preterit"),
# condition = fct_relevel(condition, "Present"))%>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin)) + #, fill = condition, color = condition
geom_hline(yintercept = 0, lty = 3, size = 0.4) +
geom_vline(xintercept = 4, lty = 3, size = 0.4) +
stat_summary(fun.y = "mean", geom = "line", size = 1) +
geom_ribbon(alpha = 0.2, color = "grey", show.legend = F) +
# stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
# alpha = 0.5) +
# geom_point(aes(color = condition), size = 1.3, show.legend = F) +
# geom_point(aes(color = condition), size = 0.85, show.legend = F) +
scale_x_continuous(breaks = c(-4, 0, 4, 8, 12),
labels = c("-200", "0", "200", "400", "600")) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_big + legend_adj #+ labs(color = "Condition")
# L2 speakers
base_l2 <- fits_all_l2 %>%
mutate(Proficiency = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Present", "Preterite"),
# Stress = fct_relevel(Stress, 'Present'),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = Proficiency)) + #fill = Stress, color = Stress,
facet_grid(`Spanish use` ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
geom_line(size = 0.35) +
scale_x_continuous(breaks = c(-4, 0, 4, 8, 12),
labels = c("-200", "0", "200", "400", "600")) +
# scale_color_brewer(palette = "Set1", name = "Condition") +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to target syllable offset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points")) #cm
stress_l2 <- grid.arrange(base_l2, #en_gca_plot <-
bottom = textGrob('Spanish use', rot = 270,
x = .96, y = 4, gp = gpar(fontsize = 9,
fontfamily = 'Times')))
ggsave(paste0(figs_path, "/stress_mon.png"), stress_mon, width = 180,
height = 120, units = "mm", dpi = 600)
ggsave(paste0(figs_path, "/stress_l2.png"), stress_l2, width = 180,
height = 120, units = "mm", dpi = 600)
# --------------------------------------------------------------------
# All three variables
base_l2_l1ot2 <- fits_all_l2_l1ot2 %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Present", "Preterite"),
# Stress = fct_relevel(Stress, 'Present'),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) + #fill = Stress, color = Stress,
facet_grid(`Spanish use` ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
geom_line(size = 0.35) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
# scale_color_brewer(palette = "Set1", name = "Condition") +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points")) #cm
stress_l2_l1ot2 <- grid.arrange(base_l2_l1ot2,
bottom = textGrob('Spanish use', rot = 270, x = 0.98, y = 4.3,
gp = gpar(fontsize = 9, fontfamily = 'Times')))
ggsave(paste0(figs_path, "/stress_l2_l1ot2.png"), stress_l2_l1ot2, width = 180,
height = 120, units = "mm", dpi = 600)
# Proficiency and use
prof_plot <- fits_all_l2_l1ot2 %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Oxytone", "Paroxytone"),
# Stress = fct_relevel(Stress, 'Paroxytone')
) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
# geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
stat_summary(fun.y = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") +
theme(
legend.position = 'bottom')
use_plot <- fits_all_l2_l1ot2 %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Oxytone", "Paroxytone"),
# Stress = fct_relevel(Stress, 'Paroxytone')
) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish use`)) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
# geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
stat_summary(fun.y = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") +
theme(
legend.position = 'bottom')
ggsave(paste0(figs_path, "/prof_l1ot2.png"), prof_plot, width = 180,
height = 120, units = "mm", dpi = 600)
ggsave(paste0(figs_path, "/use_l1ot2.png"), use_plot, width = 180,
height = 120, units = "mm", dpi = 600)
# 2-way interactions
profxl1 <- fits_all_l2_l1ot2 %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) +
facet_grid(. ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
stat_summary(fun.y = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points"))
usexl1 <- fits_all_l2_l1ot2 %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish use`)) +
facet_grid(. ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
stat_summary(fun.y = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points"))
delexuse <- fits_all_l2_l1ot2 %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish proficiency`)) +
facet_grid(. ~ `Spanish use`) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
stat_summary(fun.y = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
ggtitle("Spanish use") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points"),
plot.title = element_text(hjust = 0.5, size = 8))
ggsave(paste0(figs_path, "/prof_l1_l1ot2.png"), profxl1, width = 180,
height = 120, units = "mm", dpi = 600)
ggsave(paste0(figs_path, "/use__l1_l1ot2.png"), usexl1, width = 180,
height = 120, units = "mm", dpi = 600)
ggsave(paste0(figs_path, "/prof_use_l1ot2.png"), delexuse, width = 200,
height = 80, units = "mm", dpi = 600)
# Switch prof and use in 3-way plot
fits_all_l2_l1ot2 %>%
mutate(`Spanish proficiency` = as.factor(DELE_z),
`Spanish use` = as.factor(use_z),
# Stress = if_else(condition_sum == 1, "Present", "Preterite"),
# Stress = fct_relevel(Stress, 'Present'),
L1 = if_else(l1_sum == 1, "Mandarin Chinese", "English"),
L1 = fct_relevel(L1, "English")) %>%
ggplot(., aes(x = time_zero, y = fit, ymax = ymax, ymin = ymin,
lty = `Spanish use`)) + #fill = Stress, color = Stress,
facet_grid(`Spanish proficiency` ~ L1) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
geom_line(size = 0.35) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
# scale_color_brewer(palette = "Set1", name = "Condition") +
scale_linetype_manual(values=c("solid", "dashed", 'dotted')) +
labs(x = "Time (ms) relative to final syllable onset",
y = "Empirical logit of looks to target") +
theme_grey(base_size = 10, base_family = "Times") + legend_adj_2 +
theme(legend.position = 'bottom', #c(0.1, 0.9)
plot.margin = margin(5.5, 20, 5.5, 5.5, "points"))
# overlay model on empirical data
fits_all_l2_overlay <- fits_all_l2_l1ot2
fits_all_l2_overlay$target_prop <- exp(fits_all_l2_overlay$fit)
fits_all_l2_overlay$l1 <- fits_all_l2_overlay$l1_sum
fits_all_l2_overlay %<>% mutate(l1 = if_else(l1 == -1, "English", "Chinese"),
l1 = fct_relevel(l1, "English", "Chinese"))
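For reference on the back-transform used in this overlay: `exp()` of an empirical-logit fit yields an odds-like quantity (matching the "Exponential value" axis label below), whereas the inverse-logit `plogis()` would return a probability. A short sketch of the distinction, with an arbitrary example value:

```r
fit  <- 0.8          # an example value on the empirical-logit (log-odds) scale
odds <- exp(fit)     # what this overlay plots as target_prop
prob <- plogis(fit)  # inverse logit: exp(fit) / (1 + exp(fit))
```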
gca <- fits_all_l2_overlay %>%
# mutate(`Spanish proficiency` = as.factor(DELE_z),
# `Spanish use` = as.factor(use_z)) %>%
ggplot(., aes(x = time_zero, y = target_prop, #ymax = ymax, ymin = ymin,
lty = l1)) +
geom_hline(yintercept = 0, size = .5, color = "grey40", linetype = 'dotted') +
geom_vline(xintercept = 4, size = .5, color = "grey40", linetype = 'dotted') +
# geom_ribbon(alpha = 0.2, color = NA, show.legend = F) +
stat_summary(fun.y = "mean", geom = "line", size = .5) +
stat_summary(fun.data = mean_cl_boot, geom = 'ribbon',fun.args=list(conf.int=0.95),
alpha = 0.2) +
scale_x_continuous(breaks = c(-4, -2, 0, 2, 4),
labels = c("-200", "-100", "0", "100", "200")) +
labs(x = "Time (ms) relative to verb's final syllable onset",
y = "Exponential value of\nfixation probability",
lty = "L1") +
theme_grey(base_size = 10, base_family = "Times") +
theme(
legend.position = 'bottom')
tc <- stress50 %>%
filter(time_zero > -5 & time_zero < 5, l1 != 'es') %>%
ggplot(., aes(x = time_zero, y = target_count, lty = l1)) +
geom_vline(xintercept = 4, lty = 3, color = 'grey40') +
# geom_hline(yintercept = 0.5, lty = 3) +
stat_summary(fun.y = mean, geom = "line") +
scale_color_discrete(name="L1",
breaks = c('en', 'ma'),
labels = c("English", 'Chinese')) +
scale_x_continuous(breaks = c(-8, -6, -4, -2, 0, 2, 4, 6, 8),
labels = c("-400", "-300", "-200", "-100", "0", "100", "200", "300", "400")) +
# ggtitle("Time course per verbal tense") +
# xlab("Time in 50 ms bins (0 = marker time before accounting for 200 ms processing)") +
labs(y = "Count of fixations\non target") +
# annotate("text", x = 3.65, y = 0.53, label = '200ms',
# angle = 90, size = 3, hjust = 0) +
theme_grey(base_size = 10, base_family = "Times") +
theme(legend.position = 'none',
axis.title.x=element_blank(),
axis.text.x=element_blank())
grid.newpage()
grid.draw(rbind(ggplotGrob(tc), ggplotGrob(gca), size = "last"))
stress50 %>%
filter(time_zero > -5 & time_zero < 5, l1 != 'es') %>%
ggplot(., aes(x = time_zero, y = target_count, color = l1)) +
geom_vline(xintercept = 4, lty = 3) +
#geom_hline(yintercept = 0.5, lty = 3) +
stat_summary(fun.y = mean, geom = "line") +
geom_smooth(method = "loess", se = FALSE) +
# geom_line(data = fits_all_l2_overlay,
# aes(stat_summary(fun.y = "mean", geom = "line", size = .5)))
scale_color_discrete(name="L1",
breaks = c('en', 'ma'),
labels = c("English", 'Chinese')) +
scale_x_continuous(breaks = c(-8, -6, -4, -2, 0, 2, 4, 6, 8),
labels = c("-400", "-300", "-200", "-100", "0", "100", "200", "300", "400")) +
# ggtitle("Time course per verbal tense") +
# xlab("Time in 50 ms bins (0 = marker time before accounting for 200 ms processing)") +
labs(y = "Proportion of fixations on target") +
# annotate("text", x = 3.65, y = 0.53, label = '200ms',
# angle = 90, size = 3, hjust = 0) +
theme_grey(base_size = 10, base_family = "Times") +
theme(axis.title.x=element_blank(),
axis.text.x=element_blank())
library(fdapace)
### Name: CreateCovPlot
### Title: Create the covariance surface plot based on the results from
### FPCA() or FPCder().
### Aliases: CreateCovPlot
### ** Examples
set.seed(1)
n <- 20
pts <- seq(0, 1, by=0.05)
sampWiener <- Wiener(n, pts)
sampWiener <- Sparsify(sampWiener, pts, 10)
res <- FPCA(sampWiener$Ly, sampWiener$Lt,
list(dataType='Sparse', error=FALSE, kernel='epan', verbose=TRUE))
CreateCovPlot(res)
### Source: /data/genthat_extracted_code/fdapace/examples/CreateCovPlot.Rd.R (repo: surayaaramli/typeRrh, no license)
pcor.test <- function(x,y,z,use="mat",method="p",na.rm=T){
# The partial correlation coefficient between x and y given z
#
# pcor.test is free and comes with ABSOLUTELY NO WARRANTY.
#
# x and y should be vectors
#
# z should be either a vector or a matrix
#
# use: There are two methods to calculate the partial correlation coefficient.
# One is by using variance-covariance matrix ("mat") and the other is by using recursive formula ("rec").
# Default is "mat".
#
# method: There are three ways to calculate the correlation coefficient,
# which are Pearson's ("p"), Spearman's ("s"), and Kendall's ("k") methods.
# The last two methods which are Spearman's and Kendall's coefficient are based on the non-parametric analysis.
# Default is "p".
#
# na.rm: If na.rm is TRUE, all samples with missing values are deleted from the whole dataset (x, y, z) up front.
#        Otherwise, missing samples are removed only when each correlation coefficient is calculated.
#        Either way, the sample size used for the p-value is the number of samples left after removing
#        all missing samples from the whole dataset.
#        Default is TRUE.
x <- c(x)
y <- c(y)
z <- as.data.frame(z)
if(use == "mat"){
p.use <- "Var-Cov matrix"
pcor = pcor.mat(x,y,z,method=method,na.rm=na.rm)
}else if(use == "rec"){
p.use <- "Recursive formula"
pcor = pcor.rec(x,y,z,method=method,na.rm=na.rm)
}else{
stop("\'use\' should be either \"rec\" or \"mat\"!\n")
}
# print the method
if(gregexpr("p",method)[[1]][1] == 1){
p.method <- "Pearson"
}else if(gregexpr("s",method)[[1]][1] == 1){
p.method <- "Spearman"
}else if(gregexpr("k",method)[[1]][1] == 1){
p.method <- "Kendall"
}else{
stop("\'method\' should be \"pearson\" or \"spearman\" or \"kendall\"!\n")
}
# sample number
n <- dim(na.omit(data.frame(x,y,z)))[1]
# given variables' number
gn <- dim(z)[2]
# p-value
if(p.method == "Kendall"){
statistic <- pcor/sqrt(2*(2*(n-gn)+5)/(9*(n-gn)*(n-1-gn)))
p.value <- 2*pnorm(-abs(statistic))
}else{
statistic <- pcor*sqrt((n-2-gn)/(1-pcor^2))
p.value <- 2*pnorm(-abs(statistic))
}
data.frame(estimate=pcor,p.value=p.value,statistic=statistic,n=n,gn=gn,Method=p.method,Use=p.use)
}
# By using var-cov matrix
pcor.mat <- function(x,y,z,method="p",na.rm=T){
x <- c(x)
y <- c(y)
z <- as.data.frame(z)
if(dim(z)[2] == 0){
stop("There should be given data\n")
}
data <- data.frame(x,y,z)
if(na.rm == T){
data = na.omit(data)
}
xdata <- na.omit(data.frame(data[,c(1,2)]))
Sxx <- cov(xdata,xdata,m=method)
xzdata <- na.omit(data)
xdata <- data.frame(xzdata[,c(1,2)])
zdata <- data.frame(xzdata[,-c(1,2)])
Sxz <- cov(xdata,zdata,m=method)
zdata <- na.omit(data.frame(data[,-c(1,2)]))
Szz <- cov(zdata,zdata,m=method)
# is Szz positive definite?
zz.ev <- eigen(Szz)$values
if(min(zz.ev)[1]<0){
stop("\'Szz\' is not positive definite!\n")
}
# partial correlation
Sxx.z <- Sxx - Sxz %*% solve(Szz) %*% t(Sxz)
rxx.z <- cov2cor(Sxx.z)[1,2]
rxx.z
}
# By using recursive formula
pcor.rec <- function(x,y,z,method="p",na.rm=T){
#
x <- c(x)
y <- c(y)
z <- as.data.frame(z)
if(dim(z)[2] == 0){
stop("There should be given data\n")
}
data <- data.frame(x,y,z)
if(na.rm == T){
data = na.omit(data)
}
# recursive formula
if(dim(z)[2] == 1){
tdata <- na.omit(data.frame(data[,1],data[,2]))
rxy <- cor(tdata[,1],tdata[,2],m=method)
tdata <- na.omit(data.frame(data[,1],data[,-c(1,2)]))
rxz <- cor(tdata[,1],tdata[,2],m=method)
tdata <- na.omit(data.frame(data[,2],data[,-c(1,2)]))
ryz <- cor(tdata[,1],tdata[,2],m=method)
rxy.z <- (rxy - rxz*ryz)/( sqrt(1-rxz^2)*sqrt(1-ryz^2) )
return(rxy.z)
}else{
x <- c(data[,1])
y <- c(data[,2])
z0 <- c(data[,3])
zc <- as.data.frame(data[,-c(1,2,3)])
rxy.zc <- pcor.rec(x,y,zc,method=method,na.rm=na.rm)
rxz0.zc <- pcor.rec(x,z0,zc,method=method,na.rm=na.rm)
ryz0.zc <- pcor.rec(y,z0,zc,method=method,na.rm=na.rm)
rxy.z <- (rxy.zc - rxz0.zc*ryz0.zc)/( sqrt(1-rxz0.zc^2)*sqrt(1-ryz0.zc^2) )
return(rxy.z)
}
}
# Source: /Gene correlation my cancer type/partial coefficient function.r (repo: PinarSiyah/Archived-R-files, no license)
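The header comments above describe two equivalent routes to a partial correlation; a self-contained base-R sketch showing that the variance-covariance route (as in `pcor.mat`) and the recursive formula (as in `pcor.rec`) agree on toy data:

```r
# Partial correlation of x and y given a single z, computed two ways.
set.seed(1)
z <- rnorm(200); x <- z + rnorm(200); y <- z + rnorm(200)
# 1) Var-cov route: Sxx.z = Sxx - Sxz %*% solve(Szz) %*% t(Sxz)
S <- cov(cbind(x, y, z))
Sxz <- S[1:2, 3, drop = FALSE]
Sxx.z <- S[1:2, 1:2] - Sxz %*% solve(S[3, 3, drop = FALSE]) %*% t(Sxz)
p_mat <- cov2cor(Sxx.z)[1, 2]
# 2) Recursive formula on pairwise correlations
r <- cor(cbind(x, y, z))
p_rec <- (r[1, 2] - r[1, 3] * r[2, 3]) /
  (sqrt(1 - r[1, 3]^2) * sqrt(1 - r[2, 3]^2))
```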
# --------------------------------------------------
# David Phillips
#
# 8/1/2017
# Map/graph Uganda and DRC dah data from aiddata.org
# --------------------------------------------------
# TO DO
# make this work for DRC too
# take into account redistricting
# --------------------
# Set up R
rm(list=ls())
library(data.table)
library(reshape2)
library(stringr)
library(ggplot2)
library(gridExtra)
library(RColorBrewer)
# --------------------
# -----------------------------------------------------------------------------
# Files and directories
# data directory
j = ifelse(Sys.info()[1]=='Windows', 'J:', '/home/j')
dir = paste0(j, '/Project/Evaluation/GF/resource_tracking/')
# which country?
iso3 = 'uga'
# input files
inFile = paste0(dir, iso3, '/prepped/uga_aiddata.csv')
# mapping from district names to numbers
idsFile = paste0(dir, '../mapping/', iso3, '/', iso3, '_geographies_map.csv')
# alternate names for districts
altsFile = paste0(dir, '../mapping/', iso3, '/', iso3, '_alternate_dist_names.csv')
# shape data
shapeFile = paste0(dir, '../mapping/', iso3, '/', iso3, '_dist112_map.rdata')
# output files
outFile = paste0(dir, iso3, '/visualizations/', iso3, '_aiddata.pdf')
# -----------------------------------------------------------------------------
# -------------------------------------------------------------------------------------------
# Load/prep data
# load data
data = fread(inFile)
# subset observations
data = data[!is.na(nyears) & !is.na(annual_commitments)]
# expand projects to project-years
data = data[rep(seq(.N), nyears)]
data[, year:=seq(.N), by=c('project_id','gazetteer_adm_name')]
data[, year:=transactions_start_year+year-1]
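# Quick illustration of the row-expansion idiom above (a sketch with toy values):
# rep(seq(.N), nyears) repeats each row index nyears times, turning projects into project-years.
toy <- data.table(project_id = 1:2, nyears = c(2, 3))
toy[rep(seq(.N), nyears)] # 2 + 3 = 5 rows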
# parse out locations from gazetteer_adm_name
vars = c('planet','continent','country','province','district','subdistrict1','subdistrict2')
data[, (vars):=tstrsplit(gazetteer_adm_name, '|', fixed=TRUE)]
# fix errors in data
data[district=='St. Kizito Hospital - Matany', subdistrict1:='St. Kizito Hospital - Matany']
data[district=='St. Kizito Hospital - Matany', district:='Napak']
data[district=='Kisozi', subdistrict1:='Kisozi']
data[district=='Kisozi', district:='Gomba']
data[district=='Mutundwe', subdistrict1:='Mutundwe']
data[district=='Mutundwe', district:='Wakiso']
data[district=='Muyenga', subdistrict1:='Muyenga']
data[district=='Muyenga', district:='Kampala']
data[province=='Agwiciri', subdistrict1:='Agwiciri']
data[province=='Agwiciri', district:='Apac']
data[province=='Agwiciri', province:='Northern']
data[province=='Kampala', district:='Kampala']
data[province=='Kampala', province:='Central']
data[province=='Kyabakuza', subdistrict1:='Kyabakuza']
data[province=='Kyabakuza', district:='Masaka']
data[province=='Kyabakuza', province:='Central']
data[province=='Nabirumba', subdistrict1:='Nabirumba']
data[province=='Nabirumba', district:='Kamuli']
data[province=='Nabirumba', province:='Eastern']
data[province=='Namugongo Point', subdistrict1:='Namugongo Point']
data[province=='Namugongo Point', district:='Wakiso']
data[province=='Namugongo Point', province:='Central']
# assume missing location is national level
data[is.na(province) & is.na(district) & is.na(subdistrict1) &
is.na(subdistrict2) & is.na(latitude), country:='Uganda']
# collapse to location-year-donor and location-year
data[is.na(subdistrict1) & is.na(subdistrict2), latitude:=NA]
data[is.na(subdistrict1) & is.na(subdistrict2), longitude:=NA]
byvars = c('year','country','province','district',
'subdistrict1','subdistrict2','latitude','longitude')
donorData = data[, list(annual_commitments=sum(annual_commitments)), by=c(byvars,'donors')]
data = data[, list(annual_commitments=sum(annual_commitments)), by=byvars]
# express in millions
donorData[, annual_commitments:=annual_commitments/1000000]
data[, annual_commitments:=annual_commitments/1000000]
# identify level
data[!is.na(subdistrict1) | !is.na(subdistrict2), level:='Sub-District']
data[(is.na(subdistrict1) & is.na(subdistrict2)) & !is.na(district), level:='District']
data[(is.na(subdistrict1) & is.na(subdistrict2) & is.na(district)) & !is.na(province), level:='Region']
data[(is.na(subdistrict1) & is.na(subdistrict2) & is.na(district) & is.na(province)) & !is.na(country), level:='National']
# -------------------------------------------------------------------------------------------
# ----------------------------
# Load/prep shape data
# load shape data
load(shapeFile)
# prep shape data
map = data.table(fortify(map))
# ----------------------------
# ----------------------------------------------------------------------------
# Merge data to shape data
# load district codes
ids = fread(idsFile)
# prep ids
ids[, dist112:=as.character(dist112)]
ids[, region4:=as.character(region4)]
# correct district/region names
data[, district:=gsub(' District', '', district)]
data[, province:=gsub(' Region', '', province)]
donorData[, district:=gsub(' District', '', district)]
donorData[, province:=gsub(' Region', '', province)]
# confirm no mismatching district names
miss = data[!district %in% ids$dist112_name & !is.na(district)]$district
if (length(miss)!=0) {
stop('Error: some district names not in map!')
}
miss = data[!province %in% ids$region4_name & !is.na(province)]$province
if (length(miss)!=0) {
stop('Error: some province names not in map!')
}
# add ids to data
distids = unique(ids[, c('dist112_name','dist112'), with=FALSE])
provids = unique(ids[, c('region4_name','region4'), with=FALSE])
data = merge(data, distids, by.x='district', by.y='dist112_name', all.x=TRUE)
data = merge(data, provids, by.x='province', by.y='region4_name', all.x=TRUE)
# subset data to different levels
distdata = data[!is.na(district) & is.na(subdistrict1) & is.na(subdistrict2)]
distdata = distdata[, c('dist112','district','year','annual_commitments'), with=FALSE]
# expand map by years
distmap = copy(map)
minyear = min(data$year)
maxyear = max(data$year)
distmap = distmap[rep(seq(.N), maxyear-minyear+1)]
distmap[, year:=rep(seq(minyear, maxyear), each=nrow(map))]
# add data to map
distmap = merge(distmap, distdata, by.x=c('id','year'),
by.y=c('dist112','year'), all.x=TRUE)
# ----------------------------------------------------------------------------
# ---------------------------------------------------------------------------------------------
# Set up to graph
# clean up donor names
donorData[donors=='Global Fund for HIV, TB and Malaria', donors:='GFATM']
donorData[donors=='United Nations Development Programme|Sweden|Norway|World Health Organization|Ireland', donors:='UNDP']
donorData[donors=='Norway|European Union', donors:='Norway']
donorData[donors=='International Development Association', donors:='IDA']
donorData[donors=='United Nations Populations Fund', donors:='UNPF']
donorData[donors=='United Nations Children\'s Fund|United Nations Development Programme', donors:='UNICEF']
donorData[donors=='Denmark/DANIDA', donors:='Denmark']
donorData[donors=='United Kingdom|Germany', donors:='UK & Germany']
donorData[donors=='United States of America', donors:='USA']
# set aside subdistrict data for points
subdata = data[!is.na(subdistrict1) | !is.na(subdistrict2)]
# map colors
mapColors = brewer.pal(10, 'RdYlBu')
# scatterplot colors
pointColors = c('National'='#3C3530', 'Region'='#AACD6E', 'District'='#F16B6F', 'Sub-District'='#77AAAD')
# bar colors
barColors = brewer.pal(length(unique(donorData$donors)), 'Paired')
barColors[barColors=='#FFFF99'] = 'grey50'
names(barColors)=unique(donorData$donors)[order(unique(donorData$donors))]
# ---------------------------------------------------------------------------------------------
# ---------------------------------------------------------------------------------------------
# Make graphs
# map district-level commitments and lower in pairs of two
years = unique(data[!is.na(district)]$year)
years = years[order(years)]
idx = which(years %in% years[c(TRUE, FALSE)])
mapList = lapply(idx, function(i) {
ggplot(distmap[year %in% years[c(i,i+1)]], aes(x=long, y=lat, group=group, fill=annual_commitments)) +
geom_polygon() +
geom_point(data=subdata[year %in% years[c(i,i+1)]],
aes(x=longitude, y=latitude, color=annual_commitments, group=NULL)) +
facet_wrap(~year) +
theme_minimal() +
geom_path(color='grey65', size=.05) +
coord_fixed(ratio=1) +
scale_x_continuous('', breaks=NULL) + scale_y_continuous('', breaks=NULL) +
labs(title='Aid Data Records of Annual Commitments', subtitle='District-Level and Below') +
scale_fill_gradientn(paste('Annual\nCommitments\n(Millions $)'), colours=mapColors, na.value='white') +
scale_color_gradientn(paste('Annual\nCommitments\n(Millions $)'), colours=mapColors) +
theme(plot.title=element_text(hjust=0.5), plot.subtitle=element_text(hjust=0.5))
})
# time series by district
tsPlots = lapply(unique(ids$dist112_name), function(d) {
ggplot(data=NULL, aes(y=annual_commitments, x=year)) +
geom_bar(data=donorData[district==d | is.na(district)],
aes(fill=donors, shape=NULL, color=NULL), position='stack', stat='identity') +
geom_point(data=data[district==d | is.na(district)],
aes(shape=level), size=2.25) +
labs(title='Aid Data Records of Annual Commitments', subtitle=paste(d,'District'),
y='Annual Commitments (Millions $)', x='') +
scale_fill_manual('Donor', values=barColors) +
scale_shape_manual('Level', values=c('National'=15, 'Region'=3, 'District'=19,'Sub-District'=17)) +
theme_bw() +
theme(plot.title=element_text(hjust=0.5), plot.subtitle=element_text(hjust=0.5))
})
# ---------------------------------------------------------------------------------------------
# -------------------------------------------------
# Store graphs
pdf(outFile, height=6, width=9)
for(i in seq(length(mapList))) print(mapList[[i]])
for(i in seq(length(tsPlots))) print(tsPlots[[i]])
dev.off()
# -------------------------------------------------
| /resource_tracking/visualizations/graph_aiddata.r | no_license | Guitlle/gf | R | false | false | 10,186 | r |
with(abb166f9823444beabdb1018258f23163, {ROOT <- 'D:/SEMOSS_v4.0.0_x64/SEMOSS_v4.0.0_x64/semosshome/db/Atadata2__3b3e4a3b-d382-4e98-9950-9b4e8b308c1c/version/1c4fa71c-191c-4da9-8102-b247ffddc5d3';options(digits.secs=NULL);FRAME950866[,(c('DATA_COLLECTION_TIME')) := lapply(.SD, function(x) as.POSIXct(fast_strptime(x, format='%Y-%m-%d %H:%M:%S'))), .SDcols = c('DATA_COLLECTION_TIME')]}); | /1c4fa71c-191c-4da9-8102-b247ffddc5d3/R/Temp/asTL1E5BqlVuN.R | no_license | ayanmanna8/test | R | false | false | 388 | r |
#' @param n [\code{integer}]\cr
#' Number of nodes.
| /man-roxygen/arg_n.R | permissive | jakobbossek/grapherator | R | false | false | 54 | r |
#
# brabio_gantt.R
#
# The MIT License
#
# Copyright (c) 2013 Yuki Fujiwara
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# brabio_gantt.R
#
# An R script that takes a CSV exported from Brabio! (http://brabio.jp/), a cloud-based
# project management tool, and makes it easy to read as a single chart.
#
library(ggplot2)
library(reshape)
library(scales)
#
# Settings for using Japanese text with ggplot
#
## Font settings for Mac
# Reference: http://kohske.wordpress.com/2011/02/26/using-cjk-fonts-in-r-and-ggplot2/
if (Sys.info()[['sysname']] == "Darwin") {
quartzFonts(HiraKaku=quartzFont(rep("HiraKakuProN-W3", 4)))
}
## Custom theme settings
# Reference: http://d.hatena.ne.jp/triadsou/20100528/1275042816
# Theme with a white-ish background
my_theme_bw <- function (base_size = 25, base_family = "") {
theme_bw(base_size = base_size, base_family = base_family) %+replace%
theme(
axis.text.x = element_text(angle=45, vjust = 0.5) # tilt the x-axis text by 45 degrees
)
}
# Theme with a gray-ish background
my_theme_gray <- function (base_size = 25, base_family = "") {
theme_gray(base_size = base_size, base_family = base_family) %+replace%
theme(
axis.text.x = element_text(angle=45, vjust = 0.5) # tilt the x-axis text by 45 degrees
)
}
#
# Function for reading a Brabio CSV file
#
read.brabio.csv <- function (filename = "") {
plan <- read.csv(filename)
tasks <- plan[plan$X.タイプ == "task", ] # extract only tasks; milestones are ignored
tasks.nrow<- nrow(tasks)
tasks.name <- tasks$タイトル
tasks.start.date <- tasks$開始日
tasks.end.date <- tasks$〆切日
dfr <- data.frame(
name = factor(tasks.name, levels = rev(tasks.name)),
start.date = tasks.start.date,
end.date = tasks.end.date,
colour = factor(1:tasks.nrow)
)
mdfr <- melt(dfr, measure.vars = c("start.date", "end.date"))
return (mdfr)
}
#
# Plot a Gantt chart from a Brabio CSV
#
plot.brabio.gantt <- function (input.csv, output, is.output = FALSE, theme.function = my_theme_gray) {
## Run
mdfr <- read.brabio.csv(input.csv)
## Plot the chart
base.plot <- ggplot(mdfr, aes(as.Date(value, "%Y/%m/%d"), name, colour = colour)) +
geom_line(size = 6) +
xlab("") + ylab("") +
scale_x_date(labels = date_format("%Y年%m月")) + theme(legend.position="none")
if (is.output) {
# When writing the chart to an image file
my.plot <- base.plot + theme.function()
ggsave(output, my.plot, family="Japan1GothicBBB")
} else {
# When displaying the chart directly
my.plot <- base.plot + theme.function(base_family = "HiraKaku")
}
return (my.plot)
}
#
# main
#
# Specify file names
input.csv <- "brabio_export.csv" # input (CSV)
output <- "brabio_gantt.pdf" # output (format is auto-detected from the file extension)
is.output = TRUE # TRUE to write an image file, FALSE to display the chart directly
theme.function <- my_theme_gray # specify the theme
myplot <- plot.brabio.gantt(input.csv = input.csv, output = output, is.output = is.output, theme.function = theme.function)
# If you only load this file with source()
# source() alone does not plot automatically, so copy the line below, paste it into the console and run it
myplot
| /brabio_gantt.R | permissive | sky-y/brabio_gantt_r | R | false | false | 4,436 | r |
# Saving data in time series format #
?ts
USGDP <- ts(US_GDP[,2], start=c(1929,1),end = c(1992,1) ,frequency=1)
USGDP <- ts(US_GDP[,2], start=c(1929,1),frequency=1)# frequency =1 for yearly interval
plot(USGDP)
## In the data, 1980 is missing, so when we specify the end as 1992 there is one datapoint too few
# and the last datapoint (1992) ends up wrong: ts() simply lays the values out sequentially at
# frequency = 1 and does not know about the gap.
Shoe <- ts(Shoe_Sales[,3], start=c(2011,1), frequency=12)
plot(Shoe)
# Here the data starts in January 2011, so we use (2011, 1), with '1' indicating January.
# frequency = 12 shows that the data is monthly.
Income <- ts(Quarterly_Income[,3], start=c(2000,4), frequency=4)
plot(Income)
# Now we imported a data in which the time interval is quarterly. SO the frequency = 4
Champagne <- ts(Champagne[,2], start=c(1964, 1), frequency=12)
plot(Champagne)
# In the above data, the year and month are specified in the 1st column.So,the data is in 2nd column
# and frequency is in terms of months i.e., frequency =12
AirPax <- ts(AirPassenger[,2], start=c(1949, 1), frequency=12)
plot(AirPax)
# It's same as that of the previous data
# Plot for seasonality
monthplot(Champagne)
monthplot(AirPax)
monthplot(Income)
# monthplot() is not only used for monthly seasonality but for any seasonal frequency.
# Since the frequency was specified when the ts object was created, it plots the seasonal
# component of each season across all years, together with that season's average (for the
# champagne data: one panel per month, e.g. the average over all Januaries considered).
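# The averaging claim above can be cross-checked by hand (a sketch; assumes the monthly
# 'Champagne' ts object from earlier): group the observations by their position in the cycle.
monthly.means <- tapply(Champagne, cycle(Champagne), mean)
monthly.means # element 1 = mean of all Januaries, element 2 = all Februaries, etc.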
## For better deduction, keep both plots in a window,and try to explain the relationship.
par(mfrow=c(1,2))
plot(Champagne)
monthplot(Champagne)
plot(AirPax)
monthplot(AirPax)
plot(Income)
monthplot(Income)
# Import Bond_yield data#
# Convert to a timeseries object explicitly stating start and end points and frequency
BY <- ts(Bond_Yield[,-1],start = c(1900,1),end = c(1970,1),frequency = 1)
plot(BY)
# Moving Average with different periods.
# install.packages(forecast)
library(forecast)
BY3 <- ma(BY,order= 3)
BY5 <-ma(BY,order= 5)
BY9 <- ma(BY,order= 9)
BY15 <-ma(BY,order= 15)
BY19 <- ma(BY,order= 19)
BY41 <- ma(BY,order= 41)
BY51 <- ma(BY,order= 51)
par(mfrow=c(1,1))
?ts.plot
ts.plot(BY,BY3,BY9,BY19,BY5,BY15,BY41,BY51,lty=c(1:8),col= c('black','red','dark blue','forest green','green','yellow','orange','magenta'))
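# Side note (a sketch): for odd orders, forecast::ma() is a centered moving average and is
# equivalent to stats::filter() with equal weights 1/k, which makes the smoothing explicit.
BY3.check <- stats::filter(BY, rep(1/3, 3), sides = 2)
# BY3.check matches BY3 up to the NA padding at both ends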
?stl
# TS decomposition
IncDec<-stl(Income[,1], s.window='p') #constant seasonality
plot(IncDec) # the rectangle on the right-hand side indicates the scale: the smaller the
# rectangle, the larger the range of values in that panel.
IncDec
IncDec7<-stl(Income[,1], s.window=7) #seasonality changes
plot(IncDec7)
IncDec7
ch7<- stl(Champagne[,1], s.window=7, robust=T) # seasonality changes
plot(ch7,main = "7")
ch7
ch5<- stl(Champagne[,1], s.window=5, robust=T)
plot(ch5,main="5")
ch5
ch2<- stl(Champagne[,1], s.window=2, robust=T)
plot(ch2,main="2")
ch2
DeseasonRevenue <- (IncDec7$time.series[,2]+IncDec7$time.series[,3])
ts.plot(DeseasonRevenue, Income, col=c("red", "blue"), main="Comparison of Revenue and Deseasonalized Revenue")
# DeseasonRevenue removes the seasonal component from the original series, keeping Trend + Irregularity.
# In the same way, the sum of Seasonality and Irregularity (removing the Trend) would be the detrended revenue.
# The stl function is designed for additive time series, not multiplicative ones, so a log transform is used below.
# Analysis of a multiplicative series
logAirPax <- log(AirPax[,1])
logAirPaxDec <- stl(logAirPax, s.window="p")
logAirPaxDec$time.series[1:12,1]
AirPaxSeason <- exp(logAirPaxDec$time.series[1:12,1])
plot(AirPaxSeason, type="l")
plot(logAirPaxDec)
# Here we used log because it converts multiplication into addition i.e.,
# x = abc -> log x = log a + log b + log c
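# The identity behind this, log(a*b*c) = log(a) + log(b) + log(c), is easy to verify numerically:
all.equal(log(2*3*4), log(2) + log(3) + log(4)) # TRUE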
#or
plot(decompose(AirPax,type = "multiplicative")) # "multiplicative" because the seasonal variance
# grows with the level of the series. (stl generally works better than decompose.)
# Dividing a time series into train and test
IncomeTrain <- window(Income, start=c(2000,4), end=c(2012,4), frequency=4)
IncomeTest <- window(Income, start=c(2013,1), frequency=4)
# Model fit and forecast
IncTrn7<-stl(IncomeTrain, s.window=7)
fcst.Inc.stl <- forecast(IncTrn7, method="rwdrift", h=5)
Vec<- cbind(IncomeTest,fcst.Inc.stl$mean)
ts.plot(Vec, col=c("blue", "red"), main="Quarterly Income: Actual vs Forecast")
MAPE <- mean(abs(Vec[,1]-Vec[,2])/Vec[,1])
MAPE
# Exponential smoothing
#install.packages("fpp2")
library(fpp2)
?ses # h- number of periods for forecasting.
fcoil <- ses(oil, h=3)
fcoil # Lo80 & Hi80 gives the 80% confidence interval values.
plot(oil)
plot(fcoil)
names(fcoil)
fcoil$model
#Simple exponential smoothing
#Call:
# ses(y = oil, h = 3)
#Smoothing parameters:
# alpha = 0.9999
#Initial states:
# l = 110.8832
#sigma: 49.0507
#AIC AICc BIC
#576.1569 576.6903 581.8324
fcoil$mean # gives the forecasted values including year.
fcoil$level # confidence levels
fcoil$fitted # fitted values (one-step-ahead forecasts) after weighting; each is the forecast of the next observation in the original data.
fcoil$method # simple or optimal exponential smoothing.
fcoil$x # x is the original value.
fcoil$series # the data considered
fcoil$residuals
plot(fcoil$residuals) # the residual or the difference between the forecast and the original
# Seasonality is included. The Champagne dataset is taken.
library(readxl)
Champagne <- read_excel("Champagne.xlsx")
View(Champagne)
Champagne <- ts(Champagne[,2], start=c(1964, 1), frequency=12)
new <- hw(Champagne,h=24) # With seasonality, h should cover at least one full seasonal cycle (here 1 year = 12 months, since h counts months)
plot(new)
new$model
# When we actually forecast, we should split the data into train and test sets and develop a model.
# Now let's try with multiplicative seasonality.
library(readxl)
AirPassenger <- read_excel("AirPassenger.xlsx")
View(AirPassenger)
# Here we can keep 24 months as test data
AirPax <- ts(AirPassenger[,2], start=c(1949, 1), frequency=12)
AirPax1 <- window(AirPax, start=c(1949,1), end=c(1958,12))
AirPaxHO <- window(AirPax, start=c(1959,1), end=c(1960,12))
AirPax.fc <- hw(AirPax1,seasonal = "m",h=24)
plot(AirPax.fc)
AirPax.fc$model
AirPax.fc$mean
#alpha = 0.3477
#beta = 2e-04
#gamma = 0.6519
Vec<- cbind(AirPaxHO,AirPax.fc$mean)
ts.plot(Vec, col=c("blue", "red"), main="Quarterly Income: Actual vs Forecast")
MAPE <- mean(abs(Vec[,1]-Vec[,2])/Vec[,1])
MAPE #0.07314531
# To find a better combination that gives the least MAPE
AirPax.fc <- hw(AirPax1,seasonal = "m",alpha =0.03,beta = 0.002,gamma = 0.001,h=24)
Vec<- cbind(AirPaxHO,AirPax.fc$mean)
ts.plot(Vec, col=c("blue", "red"), main="Quarterly Income: Actual vs Forecast")
MAPE <- mean(abs(Vec[,1]-Vec[,2])/Vec[,1])
MAPE #0.03314312
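# (Illustrative aside, assuming the forecast/fpp2 packages loaded above: instead of
# hand-tuning alpha/beta/gamma, ets() estimates the smoothing parameters by maximum
# likelihood; "MAM" here is an assumed error/trend/season specification.)
AirPax.ets <- ets(AirPax1, model = "MAM")  # multiplicative error & season, additive trend
fc.ets <- forecast(AirPax.ets, h = 24)     # compare fc.ets$mean against AirPaxHO as above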
## For the final prediction we use the full data as training and supply the alpha, beta, gamma
# values that were found to be ideal.
AirPax.fclast <- hw(AirPax,seasonal = "m",alpha =0.03,beta = 0.002,gamma = 0.001,h=12)
# Now we use that model to predict 1961's data
plot(AirPax.fclast)
AirPax.fclast$mean
Champ1 <- window(Champagne, start=c(1964,1), end=c(1970,12))
ChampHO <- window(Champagne, start=c(1971,1), end=c(1972,9))
Champ.fc <- hw(Champ1, h=21)
Champ1.stl <- stl(Champ1, s.window = 5)
fcst.Champ1.stl <- forecast(Champ1.stl, method="rwdrift", h=21)
# Stationarity & ARIMA
BASF <- ts(GermanMonthlyAverageStockPrice[,9], start=c(1981,1), frequency=12)
library(tseries)
adf.test(BASF)
acf(BASF, lag = 50)
BASF.arima.fit <- arima(BASF, c(1, 1, 0))
BASF.arima.fit
Box.test(BASF.arima.fit$residuals, lag=30, type="Ljung-Box")
plot(forecast(BASF.arima.fit, h=6))
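# (Illustrative aside, not in the original file, assuming the forecast package is
# loaded: auto.arima() searches over (p,d,q) orders automatically instead of
# fixing c(1,1,0) by hand.)
BASF.auto <- auto.arima(BASF)
plot(forecast(BASF.auto, h=6))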
| /BACP.R | no_license | nagapavannukala/machinelearning_R | R | false | false | 8,042 | r | # Saving data in time series format #
?ts
USGDP <- ts(US_GDP[,2], start=c(1929,1),end = c(1992,1) ,frequency=1)
USGDP <- ts(US_GDP[,2], start=c(1929,1),frequency=1)# frequency =1 for yearly interval
plot(USGDP)
## In the data, 1980 is missing, so when we specify the end as 1992, due to the one missing
# datapoint the last datapoint (1992) is allocated 0 (since ts converts an ordinary dataset into
# a time series of consecutive observations with frequency = 1)
Shoe <- ts(Shoe_Sales[,3], start=c(2011,1), frequency=12)
plot(Shoe)
# Here the data starts from January 2011, so we use start = c(2011, 1), with '1' indicating January.
# frequency = 12 shows that months are considered.
Income <- ts(Quarterly_Income[,3], start=c(2000,4), frequency=4)
plot(Income)
# Now we imported data in which the time interval is quarterly, so the frequency = 4
Champagne <- ts(Champagne[,2], start=c(1964, 1), frequency=12)
plot(Champagne)
# In the above data, the year and month are specified in the 1st column. So the data is in the 2nd column
# and the frequency is in terms of months, i.e., frequency = 12
AirPax <- ts(AirPassenger[,2], start=c(1949, 1), frequency=12)
plot(AirPax)
# It's same as that of the previous data
# Plot for seasonality
monthplot(Champagne)
monthplot(AirPax)
monthplot(Income)
# monthplot is not only used to determine the seasonality of months but also the seasonality
# of other seasonal factors. Since we specified the frequency earlier, it gives the
# average of the seasonal components over all years considered (for example, the Champagne plot
# shows the average for each month, i.e., the average over all Januaries of the years considered.)
## For better deduction, keep both plots in one window and try to explain the relationship.
par(mfrow=c(1,2))
plot(Champagne)
monthplot(Champagne)
plot(AirPax)
monthplot(AirPax)
plot(Income)
monthplot(Income)
# Import Bond_yield data#
# Convert to a timeseries object explicitly stating start and end points and frequency
BY <- ts(Bond_Yield[,-1],start = c(1900,1),end = c(1970,1),frequency = 1)
plot(BY)
# Moving Average with different periods.
# install.packages("forecast")
library(forecast)
BY3 <- ma(BY,order= 3)
BY5 <-ma(BY,order= 5)
BY9 <- ma(BY,order= 9)
BY15 <-ma(BY,order= 15)
BY19 <- ma(BY,order= 19)
BY41 <- ma(BY,order= 41)
BY51 <- ma(BY,order= 51)
par(mfrow=c(1,1))
?ts.plot
|
#' ocs4RLogger
#'
#' @docType class
#' @export
#' @keywords logger
#' @return Object of \code{\link{R6Class}} for modelling a simple logger
#' @format \code{\link{R6Class}} object.
#'
#' @section Abstract Methods:
#' \describe{
#' \item{\code{INFO(text)}}{
#' Logger to report information. Used internally
#' }
#' \item{\code{WARN(text)}}{
#' Logger to report warnings. Used internally
#' }
#' \item{\code{ERROR(text)}}{
#' Logger to report errors. Used internally
#' }
#' }
#'
#' @note Logger class used internally by ocs4R
#'
ocs4RLogger <- R6Class("ocs4RLogger",
portable = TRUE,
public = list(
#logger
verbose.info = FALSE,
verbose.debug = FALSE,
loggerType = NULL,
logger = function(type, text){
if(self$verbose.info){
cat(sprintf("[ocs4R][%s] %s - %s \n", type, self$getClassName(), text))
}
},
INFO = function(text){self$logger("INFO", text)},
WARN = function(text){self$logger("WARN", text)},
ERROR = function(text){self$logger("ERROR", text)},
initialize = function(logger = NULL){
#logger
if(!missing(logger)){
if(!is.null(logger)){
self$loggerType <- toupper(logger)
if(!(self$loggerType %in% c("INFO","DEBUG"))){
stop(sprintf("Unknown logger type '%s'", logger))
}
if(self$loggerType == "INFO"){
self$verbose.info = TRUE
}else if(self$loggerType == "DEBUG"){
self$verbose.info = TRUE
self$verbose.debug = TRUE
}
}
}
},
#getClassName
getClassName = function(){
return(class(self)[1])
},
#getClass
getClass = function(){
class <- eval(parse(text=self$getClassName()))
return(class)
}
)
) | /R/ocs4R_logger.R | no_license | cran/ocs4R | R | false | false | 1,864 | r | #' ocs4RLogger
library(rOptManifold)  # load the package providing grassmannQ/grassmannSub and the solvers
set.seed(120)
nn=20
A=matrix(runif(nn^2),nn,nn)
A=A+t(A)
problem=grassmannQ(n=nn,p=2)
problem["obj"]=function(X){
-sum(diag(t(X)%*%A%*%X))
}
#set gradient function
problem["grad"]=function(X){
-2*A%*%X
}
#set hessian function
problem["hessian"]=function(X,Z){
-2*A%*%Z
}
problem["retraction"]="QR"
problem["control","tol"]=0.001
problem["control","Delta0"]=5
problem["control","DeltaMax"]=10
problem["control","rhoMin"]=0.01
problem["control","iterMax"]=1000
problem["control","alpha"]=1
problem["control","iterSubMax"]=1000
problem["control","particleNum"]=100
problem["control","conjMethod"]="FR"
checkGradient(problem)
checkHessian(problem)
res=particleSwarm(problem)
-res$optValue
sum(eigen(A)$values[1:2])
res$NumIter
#eigen(A)$vectors[,1:2]
#res$optY[[1]]
#svd(cbind(eigen(A)$vectors[,1:2],res$optY[[1]]))
set.seed(120)
nn=20
A=matrix(runif(nn^2),nn,nn)
A=A+t(A)
problem=grassmannSub(n=nn,r=2)
problem["obj"]=function(X){
-sum(diag(A%*%X))
}
#set gradient function
problem["grad"]=function(X){
-A
}
#set hessian function
problem["hessian"]=function(X,Z){
-matrix(0,nn,nn)
}
problem["retraction"]="Cayley"
problem["control","tol"]=0.1
problem["control","Delta0"]=3
problem["control","DeltaMax"]=10
problem["control","rhoMin"]=0.05
problem["control","iterMax"]=2000
problem["control","alpha"]=5
problem["control","iterSubMax"]=1000
problem["control","particleNum"]=100
problem["control","conjMethod"]="FR"
checkGradient(problem)
checkHessian(problem)
res=conjugateGradient(problem)
-res$optValue
sum(eigen(A)$values[1:2])
res$NumIter
| /tests/invariantSubspace.R | no_license | Kejun2013/rOptManifold | R | false | false | 1,555 | r | set.seed(120)
nn=20
A=matrix(runif(nn^2),nn,nn)
A=A+t(A)
problem=grassmannQ(n=nn,p=2)
problem["obj"]=function(X){
-sum(diag(t(X)%*%A%*%X))
}
#set gradient function
problem["grad"]=function(X){
-2*A%*%X
}
#set hessian function
problem["hessian"]=function(X,Z){
-2*A%*%Z
}
problem["retraction"]="QR"
problem["control","tol"]=0.001
problem["control","Delta0"]=5
problem["control","DeltaMax"]=10
problem["control","rhoMin"]=0.01
problem["control","iterMax"]=1000
problem["control","alpha"]=1
problem["control","iterSubMax"]=1000
problem["control","particleNum"]=100
problem["control","conjMethod"]="FR"
checkGradient(problem)
checkHessian(problem)
res=particleSwarm(problem)
-res$optValue
sum(eigen(A)$values[1:2])
res$NumIter
#eigen(A)$vectors[,1:2]
#res$optY[[1]]
#svd(cbind(eigen(A)$vectors[,1:2],res$optY[[1]]))
set.seed(120)
nn=20
A=matrix(runif(nn^2),nn,nn)
A=A+t(A)
problem=grassmannSub(n=nn,r=2)
problem["obj"]=function(X){
-sum(diag(A%*%X))
}
#set gradient function
problem["grad"]=function(X){
-A
}
#set hessian function
problem["hessian"]=function(X,Z){
-matrix(0,nn,nn)
}
problem["retraction"]="Cayley"
problem["control","tol"]=0.1
problem["control","Delta0"]=3
problem["control","DeltaMax"]=10
problem["control","rhoMin"]=0.05
problem["control","iterMax"]=2000
problem["control","alpha"]=5
problem["control","iterSubMax"]=1000
problem["control","particleNum"]=100
problem["control","conjMethod"]="FR"
checkGradient(problem)
checkHessian(problem)
res=conjugateGradient(problem)
-res$optValue
sum(eigen(A)$values[1:2])
res$NumIter
|
library(nimble)
source(system.file(file.path('tests', 'testthat', 'AD_test_utils.R'), package = 'nimble'))
EDopt <- nimbleOptions("enableDerivs")
BMDopt <- nimbleOptions("buildModelDerivs")
nimbleOptions(enableDerivs = TRUE)
nimbleOptions(buildModelDerivs = TRUE)
nimbleOptions(useADcholAtomic = TRUE)
nimbleOptions(useADsolveAtomic = TRUE)
nimbleOptions(useADmatMultAtomic = TRUE)
nimbleOptions(useADmatInverseAtomic = TRUE)
relTol <- eval(formals(test_ADModelCalculate)$relTol)
relTol[3] <- 1e-6
relTol[4] <- 1e-4
verbose <- FALSE
context("Testing of derivatives for calculate() for nimbleModel with various mv distributions")
code <- nimbleCode({
Sigma1[1:n,1:n] <- exp(-dist[1:n,1:n]/rho)
Q[1:n,1:n] <- inverse(Sigma1[1:n, 1:n])
y[4, 1:n] ~ dmnorm(mu4[1:n], Q[1:n,1:n])
Uprec[1:n, 1:n] <- chol(Q[1:n,1:n])
Ucov[1:n, 1:n] <- chol(Sigma1[1:n,1:n])
y[5, 1:n] ~ dmnorm(mu5[1:n], cholesky = Uprec[1:n,1:n], prec_param = 1)
y[6, 1:n] ~ dmnorm(mu6[1:n], cholesky = Ucov[1:n,1:n], prec_param = 0)
W1[1:n, 1:n] ~ dinvwish(R = R[1:n,1:n], df = nu)
UR[1:n, 1:n] <- chol(R[1:n,1:n])
US[1:n, 1:n] <- chol(inverse(R[1:n,1:n]))
W2[1:n, 1:n] ~ dinvwish(cholesky = UR[1:n,1:n], df = nu, scale_param = 0)
W3[1:n, 1:n] ~ dinvwish(cholesky = US[1:n,1:n], df = nu, scale_param = 1)
W4[1:n, 1:5] ~ dwish(R[1:n, 1:n], df = nu)
W5[1:n, 1:5] ~ dwish(cholesky = UR[1:n, 1:n], df = nu, scale_param = 0)
W6[1:n, 1:5] ~ dwish(cholesky = US[1:n, 1:n], df = nu, scale_param = 1)
mu4[1:n] ~ dmnorm(z[1:n], W4[1:n,1:n])
mu5[1:n] ~ dmnorm(z[1:n], W5[1:n,1:n])
mu6[1:n] ~ dmnorm(z[1:n], W6[1:n,1:n])
rho ~ dgamma(2, 3)
nu ~ dunif(0, 100)
})
set.seed(1)
n <- 5
locs <- runif(n)
dd <- fields::rdist(locs)
R <- crossprod(matrix(rnorm(n^2), n, n))
model <- nimbleModel(code, constants = list(n = n),
inits = list(dist = dd, R = R, nu = 8, rho = rgamma(1, 1, 1),
z = rep(1, n)))
model$simulate()
model$calculate()
model$setData('y')
newDist <- as.matrix(dist(runif(n)))
newR <- crossprod(matrix(rnorm(n*n), n))
newW4 <- crossprod(matrix(rnorm(n*n), n))
newW5 <- crossprod(matrix(rnorm(n*n), n))
newW6 <- crossprod(matrix(rnorm(n*n), n))
relTolTmp <- relTol
relTolTmp[1] <- 1e-14
relTolTmp[2] <- 1e-6
relTolTmp[3] <- 1e-1
relTolTmp[4] <- 1e-1
relTolTmp[5] <- 1e-11
## rOutput2d11 result can be wildly out of tolerance, so not checking it.
test_ADModelCalculate(model, newUpdateNodes = list(nu = 12.1, dist = newDist, R = newR, W4 = newW4, W5 = newW5, W6 = newW6),
x = 'prior', absTolThreshold = 1e-12, checkCompiledValuesIdentical = FALSE,
useParamTransform = TRUE, useFasterRderivs = TRUE, checkDoubleUncHessian = FALSE,
relTol = relTolTmp, verbose = verbose,
name = 'various multivariate dists')
## 1310 seconds.
## This segfaults as of 2023-05-08 (and did as well 2023-03-25), with libnimble.a (or with libnimble.so).
## There is no call to `clearCompiled` in the models testing, so that shouldn't be the issue.
## Not sure how able to get the timing above.
nimbleOptions(enableDerivs = EDopt)
nimbleOptions(buildModelDerivs = BMDopt)
| /packages/nimble/tests/testthat/test-ADmodels-bigmv.R | permissive | nimble-dev/nimble | R | false | false | 3,351 | r | source(system.file(file.path('tests', 'testthat', 'AD_test_utils.R'), package = 'nimble'))
|
#' Inserts a widget
#' @description Wrapper around insertUI(widgetUI(...)) and callModule(widget, ...).
#'@export
insert_widget <- function(id,
args,
datasets,
selector = "#placeholder",
where = "beforeEnd",
buttons = c("close","edit"),
size = list(width = 500, height = 450,
margin = 10, padding = 25,
padding_bottom = 100),
session
){
ui <- widgetUI(session$ns(id),
args = args,
datasets = datasets,
buttons = buttons,
widget_size = size)
insertUI(selector, where = where, ui = ui)
callModule(widget,
id,
args = args,
datasets = datasets,
id_copy = id)
}
#' Loads all widgets from a JSON
#'@export
insert_saved_widgets <- function(dash, datasets, buttons = "", session = getDefaultReactiveDomain(), ...){
if(is.character(dash))dash <- jsonlite::fromJSON(dash)
settings <- list()
for(i in seq_along(dash)){
new_id <- uuid::UUIDgenerate()
insert_widget(new_id, dash[[i]], datasets, buttons = buttons, session = session, ...)
settings[[new_id]] <- dash[[i]]
}
return(settings)
}
| /R/insert_widget.R | no_license | moturoa/shintodashboard | R | false | false | 1,444 | r | #' Inserts a widget
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/firstlib.R
\name{ecos.control}
\alias{ecos.control}
\title{Return the default optimization parameters for ECOS}
\usage{
ecos.control(
maxit = 100L,
feastol = 1e-08,
reltol = 1e-08,
abstol = 1e-08,
feastol_inacc = 1e-04,
abstol_inacc = 5e-05,
reltol_inacc = 5e-05,
verbose = 0L,
mi_max_iters = 1000L,
mi_int_tol = 1e-04,
mi_abs_eps = 1e-06,
mi_rel_eps = 1e-06
)
}
\arguments{
\item{maxit}{the maximum number of iterations for ecos, default 100L}
\item{feastol}{the tolerance on the primal and dual residual, default 1e-8}
\item{reltol}{the relative tolerance on the duality gap, default 1e-8}
\item{abstol}{the absolute tolerance on the duality gap, default 1e-8}
\item{feastol_inacc}{the tolerance on the primal and dual residual if reduced precision, default 1e-4}
\item{abstol_inacc}{the absolute tolerance on the duality gap if reduced precision, default 5e-5}
\item{reltol_inacc}{the relative tolerance on the duality gap if reduced precision, default 5e-5}
\item{verbose}{verbosity level, default 0L. A verbosity level of 1L will show more detail, but clutter session transcript.}
\item{mi_max_iters}{the maximum number of branch and bound iterations (mixed integer problems only), default 1000L}
\item{mi_int_tol}{the integer tolerance (mixed integer problems only), default 1e-4}
\item{mi_abs_eps}{the absolute tolerance between upper and lower bounds (mixed integer problems only), default 1e-6}
\item{mi_rel_eps}{the relative tolerance, \eqn{(U-L)/L}, between upper and lower bounds (mixed integer problems only), default 1e-6}
}
\value{
a list with the following elements:
\describe{
\item{FEASTOL}{ the tolerance on the primal and dual residual, parameter \code{feastol}}
\item{ABSTOL}{ the absolute tolerance on the duality gap, parameter \code{abstol}}
\item{RELTOL}{ the relative tolerance on the duality gap, parameter \code{reltol}}
\item{FEASTOL_INACC}{ the tolerance on the primal and dual residual if reduced precisions, parameter \code{feastol_inacc}}
\item{ABSTOL_INACC}{ the absolute tolerance on the duality gap if reduced precision, parameter \code{abstol_inacc}}
\item{RELTOL_INACC}{ the relative tolerance on the duality gap if reduced precision, parameter \code{reltol_inacc}}
\item{MAXIT}{ the maximum number of iterations for ecos, parameter \code{maxit}}
\item{MI_MAX_ITERS}{ the maximum number of branch and bound iterations (mixed integer problems only), parameter \code{mi_max_iters}}
\item{MI_INT_TOL}{ the integer tolerence (mixed integer problems only), parameter \code{mi_int_tol}}
\item{MI_ABS_EPS}{ the absolute tolerance between upper and lower bounds (mixed integer problems only), parameter \code{mi_abs_eps}}
\item{MI_REL_EPS}{ the relative tolerance, \eqn{(U-L)/L}, between upper and lower bounds (mixed integer problems only), parameter \code{mi_rel_eps}}
\item{VERBOSE}{ verbosity level, parameter \code{verbose}}
}
}
\description{
This is used to control the behavior of the underlying optimization code.
}
| /man/ecos.control.Rd | no_license | bnaras/ECOSolveR | R | false | true | 3,107 | rd | % Generated by roxygen2: do not edit by hand
|
library(tidyr)
library(ggplot2)
library(lubridate)
library(dplyr)
library(scales)
library(RColorBrewer)
library(ggrepel)
# library(ggmap)
library(ggpubr)
library(treemap)
library(gghighlight)
# library(ggExtra)
# raw_df <- read.csv('fb_anova.csv')
raw_df <- read.csv('rt11.csv')
raw_df <- raw_df[1:4]
raw_df$Date2 <- mdy(raw_df$Date)
raw_df$Date3 <- as.Date(raw_df$Date,format = "%m/%d/%y")
raw_df$Month = strftime(raw_df$Date3, '%b')
raw_df$Year = strftime(raw_df$Date3, '%Y')
# raw_df$Month = format(as.Date(raw_df$Date3), "%b")
raw_df$Month= factor(raw_df$Month, levels = month.abb)
ggplot(filter(raw_df,Size=='6oz' | Size=='2oz'), aes(x=Date2, y = Chips.Sold, group=Chip),size=5) +
geom_line(aes(color=Chip), alpha=.5, size=1) + facet_wrap(~Size, nrow = 2) +
theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
theme_pubclean() + scale_x_date(date_breaks = "3 months", date_minor_breaks = "1 month", labels=date_format("%b%y")) +
scale_y_continuous(name="Average Sold",breaks=seq(0, 30000, 2000))+ labs(title = "Average Sold by Type",subtitle = "2 and 6 Oz Only")
sumdf <- raw_df %>%
group_by(Date2, Size) %>%
summarise(mean.sold = mean(sum(Chips.Sold)))  # mean() of the single per-group sum is just the sum
ggplot(filter(sumdf,mean.sold>0), aes(x=Date2, y = mean.sold, group=Size),size=5) +
geom_line(alpha=.5, size=1, color="#3B5998", lineend="round") + facet_wrap(~Size, ncol = 2) +
theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
theme_pubclean() + scale_x_date(date_breaks = "3 months", date_minor_breaks = "2 months", labels=date_format("%b%y")) +
scale_y_continuous(name="Average Sold",breaks=seq(0, 30000, 2000)) + labs(title = "Average Sold by Size")
sumdf <- raw_df %>%
group_by(Date2) %>%
summarise(mean.sold = mean(sum(Chips.Sold)))
ggplot(filter(sumdf,mean.sold>0), aes(x=Date2, y = mean.sold),size=5) +
geom_line(alpha=.5, size=1, color="#3B5998", lineend="round") + #facet_wrap(~Size, ncol = 2) +
theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
theme_pubclean() + scale_x_date(name = "", date_breaks = "3 months", date_minor_breaks = "1 month", labels=date_format("%b")) +
scale_y_continuous(name="Average Sold",breaks=seq(0, 30000, 2000)) + labs(title = "Average Sold by Month")
bymon = raw_df
sumdf <- bymon %>%
group_by(Month,Year) %>%
summarise(mean.sold = mean(sum(Chips.Sold)))
ggplot(filter(sumdf,mean.sold>0), aes(x=Month, y = mean.sold),size=5) +
geom_col(alpha=.7, size=1, fill="#3B5998") + facet_wrap(~Year, ncol = 1) +
theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
theme_pubclean() +
scale_y_continuous(name="Average Sold",breaks=seq(0, 30000, 5000)) + labs(title = "Average Sold by Month/Year")
bymon = raw_df
x <- bymon %>%
group_by(Month) %>%
summarise(mean.sold = mean(sum(Chips.Sold)))
x <- filter(x, mean.sold>0)
x <-ungroup(x)
ggplot(x, aes(x=Month, y = mean.sold, group=1)) +geom_smooth()
bymon = raw_df
sumdf1 <- bymon %>%
group_by(Year,Month,Chip) %>%
summarise(mean.sold = sum(Chips.Sold)) ##MPG Took out mean
sumdf2 <- sumdf1 %>%
group_by(Month,Chip) %>%
summarise(mean.sold = mean.sold)##MPG Took out mean; identity summarise keeps one row per Year within each Month/Chip group
#With Chip filler
ggplot(filter(sumdf2,mean.sold>0), aes(x=Month, y = mean.sold, fill=Chip),size=5) +
geom_col(alpha=.7, size=1) +
theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
theme_pubclean() +scale_fill_brewer(palette = "Set3")+
scale_y_continuous(name="Average Sold",breaks=seq(0, 60000, 1000)) + labs(title = "Average Sold by Month")
sumdf1 <- bymon %>%
group_by(Month) %>%
summarise(mean.sold = sum(Chips.Sold))
ggplot(filter(sumdf1,mean.sold>0), aes(x=Month, y = mean.sold),size=5) +
geom_col(alpha=.7, size=1, fill="#3B5998") +
theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
theme_pubclean() +
scale_y_continuous(name="Average Sold",breaks=seq(0, 60000, 1000)) + labs(title = "Average Sold by Month")
#Using the data on actual wholesale purchases during this same time period, confirm (or refute)
#that these are accurate market share predictions for these flavors.
percent_chip <- filter(raw_df, raw_df$Year == 2006 | raw_df$Year == 2007 | raw_df$Year == 2008)
percent_chip <- percent_chip %>%  # pipe the filtered '06-'08 subset; piping raw_df here discarded the year filter
group_by(Chip) %>%
summarise(sum=sum(Chips.Sold))
percent_chip <- filter(percent_chip, sum>0)
percent_chip$total = sum(percent_chip$sum)
percent_chip$Percentage = percent_chip$sum / percent_chip$total
percent_chip$label <- paste(percent_chip$Chip,'\n', (round(percent_chip$Percentage*100,1)), '%')
percent_chip$per <- paste((round(percent_chip$Percentage*100,1)), '%')
# treemap
# treemap(percent_chip,
# index="Chip",
# vSize="sum",
# type="index"
# )
treemap(percent_chip,
index="label",
vSize="sum",
type="index",
palette="Set3",
title="Route 11 Chip Popularity '06-'08",
fontsize.title=20
)
ggplot(percent_chip, aes(reorder(Chip,-sum), sum)) +
geom_col(alpha=.7, size=1, fill="#3B5998") + coord_flip()+
geom_text(aes(label = per), hjust = -0.003, size=4) +
theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
theme_pubclean() + labs(title = "Route 11 Chip Popularity '06-'08") + labs(y="Units Sold", x="")
percent_chip0910 <- filter(raw_df, raw_df$Year == 2009 | raw_df$Year == 2010)
percent_chip0910 <- percent_chip0910 %>%
group_by(Chip) %>%
summarise(sum=sum(Chips.Sold))
percent_chip0910 <- filter(percent_chip0910, sum>0)
percent_chip0910$total = sum(percent_chip0910$sum)
percent_chip0910$Percentage = percent_chip0910$sum / percent_chip0910$total
percent_chip0910$label <- paste(percent_chip0910$Chip,'\n', (round(percent_chip0910$Percentage*100,1)), '%')
percent_chip0910$per <- paste((round(percent_chip0910$Percentage*100,1)), '%')
treemap(percent_chip0910,
index="label",
vSize="sum",
type="index",
palette="Set3",
title="Route 11 Chip Popularity '09-'10",
fontsize.title=20
)
ggplot(percent_chip0910, aes(reorder(Chip,-sum), sum)) +
geom_col(alpha=.7, size=1, fill="#3B5998") + coord_flip()+
geom_text(aes(label = per), hjust = -.01, size=4) +
theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
theme_pubclean() + labs(title = "Route 11 Chip Popularity '09-'10") + labs(y="Units Sold", x="")
## How did the price increase in January 2008 affect the company's sales?
soldbyyear = raw_df %>%
group_by(Year,Month) %>%
summarise(sold=sum(Chips.Sold))
soldbyyear = filter(soldbyyear, sold>0)
soldbyyear1 <- soldbyyear %>% group_by(Year) %>% mutate(cumsum = cumsum(sold))
soldbyyear1 <-soldbyyear1 %>%
mutate(label = if_else(Month == "Dec", as.character(Year), NA_character_))
soldbyyear1 <- ungroup(soldbyyear1)  # assign back; ungroup() does not modify in place
#Plot by year
ggplot(soldbyyear1, aes(x=Month, y = cumsum, group=Year,color=Year)) +
geom_line( alpha=.5, size=3, lineend="round") +
geom_label_repel(aes(label = label),nudge_x = 100, nudge_y = 40,na.rm = TRUE)+
scale_color_discrete(guide = FALSE) + labs(title = "Decrease in Sales in 2008", y="Cumulative Sold") + scale_y_continuous(labels = scales::number_format(big.mark = ',')) +
theme_pubclean()
# ggplot(filter(sumdf2,mean.sold>0), aes(x=Month, y = mean.sold, fill=Chip),size=5) +
# geom_col(alpha=.7, size=1)
# How did eliminating some of its flavors affect its sales?
chipsoldbyyear = raw_df %>%
group_by(Chip,Year) %>%
summarise(sum=sum(Chips.Sold))
chipsoldbyyear = filter(chipsoldbyyear, sum>0)
ggplot(chipsoldbyyear, aes(reorder(Chip,-sum), sum)) +
geom_col(alpha=.7, fill="#3B5998") + coord_flip()+
# geom_text(aes(label = per), hjust = -.01, size=4) +
theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
theme_pubclean() + labs(title = "Route 11 Chip Popularity '06-'10") + labs(y="Units Sold", x="")
# Got rid of
ggplot(chipsoldbyyear, aes(x=Year, y=sum, fill=Chip)) +
geom_col(alpha=.7) +
theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
theme_pubclean() + labs(title = "Effect of Eliminating Flavors",
subtitle = "Green Chile Enchilada, Garlic & Herb, and Sweet Potato Cinnamon Sugar ") +
labs(y="Units Sold", x="")+
scale_y_continuous(labels = scales::number_format(big.mark = ','))+
gghighlight(Chip == "Green Chile Enchilada"
| Chip == "Garlic and Herb"
| Chip == "Sweet Potato Cinnamon Sugar",label_key = Month)
#Thinking about
ggplot(chipsoldbyyear, aes(x=Year, y=sum, fill=Chip)) +
geom_col(alpha=.7) +
theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
theme_pubclean() + labs(title = "Effect of Eliminating Flavors",
subtitle = "Chesapeake Crab and Salt and Vinegar ") +
labs(y="Units Sold", x="")+
scale_y_continuous(labels = scales::number_format(big.mark = ',')) +
gghighlight(Chip == "Chesapeake Crab"
| Chip == "Salt and Vinegar" ,label_key = Month)
profitability = raw_df %>%
group_by(Year,Chip, Size) %>%
summarise(sum=sum(Chips.Sold)) ##MPG Took out mean
profitability = filter(profitability, sum>0)
ggplot(profitability, aes(x=Year, y=sum, fill=Size)) +
geom_col(alpha=.7) + theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
theme_pubclean() + labs(title = "Cases Sold By Size") + labs(y="Cases Sold", x="")+
scale_y_continuous(labels = scales::number_format(big.mark = ','))
profitability$caseCost = NA
profitability$unitPrice = NA
profitability = filter(profitability, sum>0)
# Wholesale case cost: small bags ship 30 to a case at $0.36/bag, large bags 12 to a case at $0.81/bag (see casepack notes below)
profitability$caseCost[profitability$Size == '2oz'] <- .36*30
profitability$caseCost[profitability$Size == '6oz'] <- .81*12
profitability$caseCost[profitability$Size == '1.5oz'] <- .36*30
profitability$caseCost[profitability$Size == '5oz'] <- .81*12
# profitability$casepack[profitability$Size == '2oz'] <- 30
# profitability$casepack[profitability$Size == '6oz'] <- 12
# profitability$casepack[profitability$Size == '1.5oz'] <- 30
# profitability$casepack[profitability$Size == '5oz'] <- 12
profitability$unitPrice[profitability$Size == '2oz'] <- 20
profitability$unitPrice[profitability$Size == '6oz'] <- 18
profitability$unitPrice[profitability$Size == '2oz' & profitability$Year<2008] <- 18
profitability$unitPrice[profitability$Size == '6oz'& profitability$Year<2008] <- 16
profitability$unitPrice[profitability$Size == '5oz'] <- 26.40
profitability$unitPrice[profitability$Size == '1.5oz'] <- 22
#Without discount
# profitability$profit = (profitability$sum * profitability$unitPrice) - (profitability$sum*profitability$caseCost )
profitability$income = (profitability$sum/2 * profitability$unitPrice) - (profitability$sum/2*profitability$caseCost ) + (profitability$sum/2 * profitability$unitPrice*.75) - (profitability$sum/2*profitability$caseCost )
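# The income line above assumes half of the cases sell at full wholesale price
# and half at a 25% discount (0.75 * unitPrice), with caseCost paid on every
# case. It simplifies to income = sum * (0.875 * unitPrice - caseCost); the
# check below is a sketch added for verification, not part of the original
# analysis (example values: a post-2008 6oz case, P = 18, C = 0.81*12 = 9.72).
stopifnot(isTRUE(all.equal(
  100/2*18 - 100/2*9.72 + 100/2*18*.75 - 100/2*9.72,
  100 * (0.875*18 - 9.72))))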
ggplot(profitability, aes(x=Year, y=income, fill=Chip)) +
geom_col(alpha=.7) + theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
theme_pubclean() + labs(title = "Income", size=20) +# labs(y="Units Sold", x="")+
scale_y_continuous(labels = scales::number_format(big.mark = ',', prefix = '$')) + labs(x="")
profitability %>%
group_by(Year) %>%
summarise(total=sum(income)) %>%
ggplot( aes(x=Year, y=total)) +
geom_col(alpha=.7, fill="chocolate3") + theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
geom_text(aes(x = Year, y = total, label = paste0('$ ', as.character(formatC(as.numeric(total), format="f", digits=0, big.mark=","))), vjust=-.5)) +
theme_pubclean() + labs(title = "Income", size=20) +# labs(y="Units Sold", x="")+
scale_y_continuous(labels = scales::number_format(big.mark = ',', prefix = '$')) + labs(x="")
profitability1 = profitability %>%
group_by(Year) %>%
summarise(Profit=sum(income)-1000000)
profitability2 <- profitability1 %>%
mutate(pos = Profit >= 0)
profitability2 <- profitability2 %>% mutate(cumulativeProfit = cumsum(Profit))
profitability2 <-profitability2 %>%
mutate(label = if_else(Year==2010, paste0('Cumulative Profit\n', '$', as.character(formatC(as.numeric(cumulativeProfit), format="f", digits=0, big.mark=","))), NA_character_))
ggplot(profitability2, aes(x=Year, y=Profit, fill=pos)) +
geom_col(alpha=.7, position="identity") + theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) + theme_pubclean() + labs(title = "Rte 11 Profit") +
scale_y_continuous(name='Net Profit',breaks=seq(-800000, 200000, 200000), limits = c(-900000,200000), labels = scales::number_format(big.mark = ',', prefix = '$ ')) + labs(x="") +
geom_text(aes(x = Year, y = Profit, label = paste0('$ ', as.character(formatC(as.numeric(Profit), format="f", digits=0, big.mark=","))), vjust=-.5)) +
scale_fill_manual(values = c("red", "dark green"), guide=FALSE) +
stat_smooth(geom='line',alpha=.4,data = profitability2, mapping=aes(x=Year, y=cumulativeProfit, group=1),alpha=.8,size=2, color='blue') +
geom_label_repel(data = profitability2, aes(x=Year, y=cumulativeProfit, label = label),color='blue', fill="grey",nudge_x = -.8,na.rm = TRUE)
## INCOME PER CHIP???
perchipsold = raw_df %>%
group_by(Chip, Year) %>%
summarise(sum=mean(sum(Chips.Sold)))
perchipsold %>%
mutate(Chip = fct_reorder(Chip, sum)) %>%
ggplot(aes(x=Chip, y=sum, fill=Year)) +
geom_col(alpha=.7) + theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
theme_pubclean() + labs(title = "Income before Operating Cost", size=20) +# labs(y="Units Sold", x="")+
scale_y_continuous(labels = scales::number_format(big.mark = ',', prefix = '$')) + labs(x="") + theme(axis.text.x = element_text(angle=45,hjust=1, vjust=.9))
perchip = raw_df %>%
group_by(Chip,Size) %>%
summarise(sum=mean(sum(Chips.Sold)))
perchip= filter(perchip,sum>0)
perchip$caseCost = NA
perchip$unitPrice = NA
perchip$caseCost[perchip$Size == '2oz'] <- .36*30
perchip$caseCost[perchip$Size == '6oz'] <- .81*12
perchip$caseCost[perchip$Size == '1.5oz'] <- .36*30
perchip$caseCost[perchip$Size == '5oz'] <- .81*12
perchip$unitPrice[perchip$Size == '2oz'] <- 20
perchip$unitPrice[perchip$Size == '6oz'] <- 18
# note: perchip is grouped by Chip/Size only and has no Year column, so the two
# pre-2008 price adjustments below match zero rows and have no effect here
perchip$unitPrice[perchip$Size == '2oz' & perchip$Year<2008] <- 18
perchip$unitPrice[perchip$Size == '6oz'& perchip$Year<2008] <- 16
perchip$unitPrice[perchip$Size == '5oz'] <- 26.40
perchip$unitPrice[perchip$Size == '1.5oz'] <- 22
perchip$income = (perchip$sum/2 * perchip$unitPrice) - (perchip$sum/2*perchip$caseCost ) + (perchip$sum/2 * perchip$unitPrice*.75) - (perchip$sum/2*perchip$caseCost )
test=perchip
perchip1 = perchip %>%
group_by(Chip) %>%
summarise(metric=sum(income))  # was sum(income, total=sum(sum)), which wrongly added total units sold into the income sum
test = test %>%
group_by(Chip) %>%
summarise( metric=sum(sum))
# combined=cbind(perchip1,test$total)
test$label='Total Sold'
perchip1$label='Income'
combined=rbind(perchip1,test)
# ggplot(combined, aes(reorder(Chip,-Income), Chip)) +
# geom_col(alpha=.7) + theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
# theme_pubclean() + labs(title = "Cases Sold By Flavor") + labs(y="Cases Sold", x="")+
# scale_y_continuous(labels = scales::number_format(big.mark = ',')) + theme(axis.text.x = element_text(angle=45,hjust=1, vjust=.9))
combined %>%
mutate(Chip = fct_reorder(Chip, desc(metric))) %>%
ggplot(aes(x=Chip, y=metric,fill=label)) +
geom_col(alpha=.7, position='dodge') + theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
theme_pubclean() + labs(title = "Income by chip ('06-'10)", size=20) +# labs(y="Units Sold", x="")+
scale_y_continuous(labels = scales::number_format(big.mark = ',', prefix = '$')) + labs(x="") + theme(axis.text.x = element_text(angle=45,hjust=1, vjust=.9))+ theme(legend.title = element_blank())
# ggplot(combined, aes(x=Chip, y=metric,fill=label)) +
# geom_col(alpha=.7, position='dodge') + theme(legend.position="bottom" ,plot.title = element_text(size=15, face="bold")) +
# theme_pubclean() + labs(title = "Income by chip ('06-'10)", size=20) +# labs(y="Units Sold", x="")+
# scale_y_continuous(labels = scales::number_format(big.mark = ',', prefix = '$')) + labs(x="") + theme(axis.text.x = element_text(angle=45,hjust=1, vjust=.9))
| /bar_totalsold_rte11.R | no_license | greermp/Rte11Chips | R | false | false | 16,821 | r |
|
\name{collections.string}
\alias{collections.string}
%
\title{Create a string with a Kindle collections object}
\description{Returns a string containing a JSON object with the collections.}
\usage{collections.string(kindle_dir, lang='en-US')}
\arguments{
\item{kindle_dir}{directory where the Kindle collections are located; use single / and not \\\\ in the directory path}
\item{lang}{language, default 'en-US'}
}
%
\details{
The directory with Kindle collections must have the following structure:
\itemize{
\item Kindle Root Dir
\itemize{
\item /documents
\itemize{
\item /Collection1
\itemize{
\item /document1.pdf
}
\item /Collection2
}
\item /audible
\itemize{
\item /Collection3
\item /Collection4
}
\item /system
}
}
Collection folder names will become collection labels.
}
\value{A string containing the JSON object with the collections.}
\references{}
\author{Juraj Medzihorsky}
\note{}
%
\seealso{
\code{\link[RKindle:collections]{collections}}
% \code{\link[rjson:toJSON]{toJSON}}
% \code{\link[RKindle:hash.names]{hash.names}}
}
\examples{}
%
\keyword{ Kindle }
%\keyword{ Collection }
%\keyword{ Collections }
| /man/collections.string.Rd | no_license | jmedzihorsky/RKindle | R | false | false | 1,179 | rd |
source("functions/downscaling/aux_functions/categorize.bn.R")
source("functions/downscaling/aux_functions/predict.DBN.R")
library(bnlearn)   # bn / bn.fit objects; as.grain() conversion
library(gRain)     # compile(), setEvidence(), querygrain()
library(parallel)  # detectCores(), makeCluster(), clusterExport(), parApply()
downscale.BN <- function(downscale.BN, global,
prediction.type = "probabilities", event = "1", threshold.vector = NULL,
parallelize = FALSE, n.cores = NULL , cluster.type = "PSOCK"){
# Parallelize = TRUE should help a lot when lots of evidences are provided.
# cluster.type Accepts "PSOCK" and "FORK". "FORK" cannot be used in Windows systems.
# prediction.type Options are "event" "probabilities" "probabilities.list"
# "event" returns a binary prediction based on threshold.vector. By default threshold.vector
# is set to NULL, which will use as threshold 1-MP where MP is the marginal probability,
# for each node. If downscale.BN has no $marginals value or threshold.vector is NULL,
# "probabilities" setting will apply.
# "probabilities" returns the probabilities as a matrix where dimensions are [obs, cat, node]
# requires predictand nodes to have the same categories
# "probabilities.list" returns a list of nodes with their probability tables.
# Warning: Beware of the nodes ordering if set to FALSE!
#
BN <- downscale.BN$BN
BN.fit <- downscale.BN$BN.fit
clusterS <- downscale.BN$clusterS
categorization.type <- downscale.BN$categorization.type
Nglobal <- downscale.BN$Nglobals
predictors <- names(BN$nodes)[1:Nglobal]
predictands <- names(BN$nodes)[- (1:Nglobal) ]
categorization.attributes <- downscale.BN$categorization.attributes
if (categorization.type != "no"){
print("Categorizing data...")
categorized <- categorize.bn(global, type = categorization.type, cat.args = categorization.attributes,
ncategories = NULL,
clustering.args.list = NULL,
clusterS = clusterS,
parallelize = parallelize, cluster.type = cluster.type, n.cores = n.cores,
training.phase = FALSE)
if (categorization.type == "atmosphere"){categorized <- as.matrix(categorized)}
print("Done...")
} else { categorized <- as.matrix(preprocess( global, rm.na = TRUE, rm.na.mode = "observations")[[1]]) }
print("Compiling junction...")
junction <- compile( as.grain(BN.fit) )
print("Done.")
print("Propagating evidence and computing Probability Tables...")
if ( parallelize == TRUE) {
if ( is.null(n.cores) ){
n.cores <- floor(detectCores()-1)
}
# Initiate cluster
cl <- makeCluster( n.cores, type = cluster.type )
if (cluster.type == "PSOCK") {
clusterExport(cl, list("setEvidence", "querygrain" , "predict.DBN") , envir = environment())
clusterExport(cl, list( "junction", "predictors" , "predictands", "categorized") , envir = environment())
}
PT <- parApply(cl, categorized, MARGIN = 1, FUN = predict.DBN,
predictors = predictors, junction = junction , predictands = predictands )
stopCluster(cl)
}
else { # Do not parallelize
PT <- apply(categorized, MARGIN = 1, FUN = predict.DBN,
predictors = predictors, junction = junction , predictands = predictands )
}
print("Done.")
if ( prediction.type == "probabilities.list" ) {
return(PT)
}
else {
# Node re-ordering due to bnlearn disordering
downscaled <- aperm(simplify2array( sapply(PT , simplify2array, simplify = FALSE) , higher = TRUE ) , c(3,1,2))
PT <- downscaled[,,match(predictands, colnames(downscaled[1,,]))]
if ( prediction.type == "event" & ( !(is.null(downscale.BN$marginals)) | !(is.null(threshold.vector)) ) ){
if (is.null(threshold.vector)){ threshold.vector <- 1 - downscale.BN$marginals[event, ] }
return( is.mostLikely(PT, event = event, threshold.vector = threshold.vector) )
}
else {
return(PT)
}
}
}
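# A minimal usage sketch (hypothetical object names; `trained` stands for the
# list produced by the training step, with $BN, $BN.fit, $Nglobals, etc.):
# trained <- ...  # output of the BN training function
# probs  <- downscale.BN(trained, global = new.global.fields,
#                        prediction.type = "probabilities")
# events <- downscale.BN(trained, global = new.global.fields,
#                        prediction.type = "event", event = "1")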
| /functions/downscaling/downscale.BN.R | permissive | shepherdmeng/BNdownscale | R | false | false | 4,110 | r |
library(pacman)
pacman::p_load(readxl, tidyverse, ggplot2, plotly, patchwork)
# Function to identify NAs in the columns of a data frame
funcaoNA <- function(df){
library(pacman)
pacman::p_load(dplyr, tibble)
index_col_na <- NULL
quantidade_na <- NULL
for (i in 1:ncol(df)) {
if(sum(is.na(df[,i])) > 0) {
index_col_na[i] <- i
quantidade_na[i] <- sum(is.na(df[,i]))
}
}
resultados <- data.frame(index_col_na,quantidade_na)
resultados <- resultados %>% filter(quantidade_na>0)
return(resultados)
}
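# Quick sanity check of the helper on a built-in dataset: airquality has
# missing values in Ozone (column 1) and Solar.R (column 2), so the call
# below should report those two column indices with their NA counts.
funcaoNA(airquality)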
# Import and preprocessing of the data
dados <- read_xlsx("D:/Projetos_em_R/Diversos/Dados/temperatura_porto_alegre.xlsx")
dplyr::glimpse(dados)
dados$codigo_estacao <- base::factor(dados$codigo_estacao)
dados$temp_inst <- base::as.numeric(dados$temp_inst)
dados$temp_max <- base::as.numeric(dados$temp_max)
dados$temp_min <- base::as.numeric(dados$temp_min)
dados$pto_orvalho_inst <- base::as.numeric(dados$pto_orvalho_inst)
dados$pto_orvalho_max <- base::as.numeric(dados$pto_orvalho_max)
dados$pto_orvalho_min <- base::as.numeric(dados$pto_orvalho_min)
dados$pressao <- base::as.numeric(dados$pressao)
dados$pressao_max <- base::as.numeric(dados$pressao_max)
dados$pressao_min <- base::as.numeric(dados$pressao_min)
dados$vento_direcao <- base::factor(dados$vento_direcao)
dados$vento_rajada <- base::as.numeric(dados$vento_rajada)
dados$radiacao <- base::as.numeric(dados$radiacao)
dados$precipitacao <- base::as.numeric(dados$precipitacao)
dados$regiao <- base::factor(dados$regiao)
# Checking for NAs
funcaoNA(dados)
# Scatter plot (instant temperature x dew point x hour)
dispersaoTemperatura <- ggplot2::ggplot(data = dados) +
ggplot2::geom_point(mapping = aes(x = umid_inst, y = pto_orvalho_inst,
colour = temp_inst, size = hora)) +
ggplot2::labs(x = "Umidade", y = "Ponto Orvalho") +
ggplot2::theme_classic()
plotly::ggplotly(dispersaoTemperatura)
# Scatter plot with fitted line (wind speed x instant humidity)
dispersaoVento <- ggplot2::ggplot(data = dados) +
ggplot2::geom_point(mapping = aes(x = vento_vel, y = umid_inst,
colour = pressao, size = radiacao)) +
ggplot2::geom_abline(intercept = 80, slope = 0.05) +
ggplot2::labs(x = "Velocidade do Vento", y = "Umidade") +
ggplot2::theme_classic()
dispersaoTemperatura / dispersaoVento
# Computing the intercept and slope of the regression line
# (umid_inst ~ vento_vel matches the y ~ x orientation of the scatter plots,
# so geom_abline() draws the line in the same coordinates as geom_smooth())
coeficiente <- stats::coef(lm(umid_inst ~ vento_vel, data = dados))
dispersaoVento2 <- ggplot2::ggplot(data = dados) +
ggplot2::geom_point(mapping = aes(x = vento_vel, y = umid_inst,
colour = pressao, size = radiacao)) +
ggplot2::geom_abline(intercept = coeficiente[1], slope = coeficiente[2]) +
ggplot2::labs(x = "Velocidade do Vento", y = "Umidade") +
ggplot2::theme_classic()
dispersaoVento3 <- ggplot2::ggplot(data = dados, aes(x = vento_vel, y = umid_inst)) +
ggplot2::geom_point(mapping = aes(colour = pressao, size = radiacao)) +
geom_smooth(method = "lm", se = TRUE) +
ggplot2::labs(x = "Velocidade do Vento", y = "Umidade") +
ggplot2::theme_classic()
dispersaoVento / dispersaoVento2 / dispersaoVento3
# BoxPlot
boxPlot <- ggplot2::ggplot(data = dados, aes(regiao, umid_inst)) +
ggplot2::geom_boxplot(mapping = aes(colour = regiao), fill = "#FFEBCD",
notch = TRUE, show.legend = TRUE, position = "dodge2") +
ggplot2::labs(x = "Regiao", y = "Umidade") +
ggplot2::theme_classic()
boxPlotPontos <- ggplot2::ggplot(data = dados, aes(regiao, umid_inst)) +
ggplot2::geom_boxplot(mapping = aes(colour = regiao), fill = "#FFEBCD",
notch = TRUE, show.legend = TRUE, position = "dodge2") +
ggplot2::geom_jitter(width = 0.2) +
ggplot2::labs(x = "Regiao", y = "Umidade") +
ggplot2::theme_classic()
boxPlotEstacoes <- ggplot2::ggplot(data = dados, aes(regiao, umid_inst)) +
ggplot2::geom_boxplot(mapping = aes(colour = codigo_estacao), fill = "#F5FFFA",
notch = FALSE, show.legend = TRUE, position = "dodge2") +
ggplot2::scale_alpha_continuous() +
ggplot2::labs(x = "Regiao", y = "Umidade") +
ggplot2::theme_classic()
dispersaoVento3 + (boxPlotPontos / boxPlotEstacoes)
# Bar charts
graficoBarra <- ggplot2::ggplot(dados) +
ggplot2::geom_bar(aes(y = codigo_estacao))
graficoBarra2 <- ggplot2::ggplot(dados, aes(umid_inst)) +
ggplot2::geom_bar(aes(fill = codigo_estacao)) +
ggplot2::theme_classic() +
ggplot2::theme(legend.position = "top")
(dispersaoVento3 + graficoBarra2) / (boxPlotPontos + boxPlotEstacoes)
# Histogram and frequency polygons
histograma <- ggplot2::ggplot(dados, aes(umid_inst,
colour = regiao)) +
ggplot2::geom_histogram(binwidth = 1.0) +
ggplot2::theme_classic()
densidade1 <- ggplot2::ggplot(dados, aes(umid_inst,
after_stat(density),
colour = codigo_estacao)) +
ggplot2::geom_freqpoly(binwidth = 1.0) +
ggplot2::theme_classic()
densidade2 <- ggplot2::ggplot(dados, aes(umid_inst,after_stat(density))) +
ggplot2::geom_freqpoly(binwidth = 6.0) +
ggplot2::theme_classic()
# Geometric shapes (violin plot with jittered points)
graficoGeom <- ggplot2::ggplot(data = dados, aes(factor(regiao),umid_inst)) +
ggplot2::geom_violin(aes(fill = regiao),
draw_quantiles = c(0.25, 0.50, 0.75)) +
ggplot2::geom_jitter(width = 0.2) +
ggplot2::labs(x = "Regiao", y = "Umidade") +
ggplot2::theme_classic()
plotly::ggplotly(graficoGeom)
# Heat map
heatMap <- ggplot2::ggplot(data = dados, aes(codigo_estacao, regiao,
fill = umid_inst)) +
ggplot2::geom_tile() +
ggplot2::scale_fill_distiller(palette = "RdPu") +
ggplot2::labs(x = "Regiao", y = "Umidade") +
ggplot2::theme_classic()
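# Note: heatMap is only assigned above, so nothing is drawn at this point;
# evaluate the object (or hand it to plotly) to actually display it:
heatMap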
| /Graficos_gerais/template_ggplot2.R | no_license | joscelino/Graficos_em_R | R | false | false | 5,895 | r |
#' HCRTools: a streamlined way to develop and deploy models
#'
#' HCRTools provides a clean interface that lets you create and compare multiple
#' models on your data, and then deploy the model that is most accurate.
#'
#' This is done in a two-step process:
#'
#' \itemize{
#' \item Use \code{\link{LassoDevelopment}} or
#' \code{\link{RandomForestDevelopment}} to test and
#' compare models based on your data.
#' \item Once you've determined which model is best, use
#' \code{\link{LassoDeployment}} or \code{\link{RandomForestDeployment}} to
#' create a final model, automatically save it, predict against test data, and
#' push predicted values into SQL Server.
#' }
#' @references \url{http://healthcareml.org/}
#' @seealso \code{\link{LinearMixedModelDevelopment}}
#' @seealso \code{\link{LinearMixedModelDeployment}}
#' @seealso \code{\link{RiskAdjustedComparisons}}
#' @seealso \code{\link{imputeColumn}}
#' @seealso \code{\link{groupedLOCF}}
#' @seealso \code{\link{selectData}}
#' @seealso \code{\link{writeData}}
#' @seealso \code{\link{orderByDate}}
#' @seealso \code{\link{isBinary}}
#' @seealso \code{\link{removeRowsWithNAInSpecCol}}
#' @seealso \code{\link{countPercentEmpty}}
#' @seealso \code{\link{removeColsWithAllSameValue}}
#' @seealso \code{\link{returnColsWithMoreThanFiftyCategories}}
#' @seealso \code{\link{findTrends}}
#' @seealso \code{\link{convertDateTimeColToDummies}}
#' @seealso \code{\link{countDaysSinceFirstDate}}
#' @seealso \code{\link{calculateTargetedCorrelations}}
#' @seealso \code{\link{calculateAllCorrelations}}
#' @docType package
#' @name HCRTools
NULL
| /R/HCRTools.R | no_license | braydengramse/HCRTools | R | false | false | 1,601 | r |
library(itsadug)
### Name: get_random
### Title: Get coefficients for the random intercepts and random slopes.
### Aliases: get_random
### ** Examples
data(simdat)
## Not run:
##D # Condition as factor, to have a random intercept
##D # for illustration purposes:
##D simdat$Condition <- as.factor(simdat$Condition)
##D
##D # Model with random effect and interactions:
##D m2 <- bam(Y ~ s(Time) + s(Trial)
##D + ti(Time, Trial)
##D + s(Condition, bs='re')
##D + s(Time, Subject, bs='re'),
##D data=simdat)
##D
##D # extract all random effects combined:
##D newd <- get_random(m2)
##D head(newd)
##D
##D # extract coefficients for the random intercept for Condition:
##D # Make bar plot:
##D barplot(newd[[1]])
##D abline(h=0)
##D
##D # or select:
##D get_random(m2, cond=list(Condition=c('2','3')))
## End(Not run)
| /data/genthat_extracted_code/itsadug/examples/get_random.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 828 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/survey-tidiers.R
\name{glance.svyglm}
\alias{glance.svyglm}
\title{Glance at a(n) svyglm object}
\usage{
\method{glance}{svyglm}(x, maximal = x, ...)
}
\arguments{
\item{x}{A \code{svyglm} object returned from \code{\link[survey:svyglm]{survey::svyglm()}}.}
\item{maximal}{A \code{svyglm} object corresponding to the maximal
model against which to compute the BIC. See Lumley and Scott
(2015) for details. Defaults to \code{x}, which is equivalent
to not using a maximal model.}
\item{...}{Additional arguments. Not used. Needed to match generic
signature only. \strong{Cautionary note:} Misspelled arguments will be
absorbed in \code{...}, where they will be ignored. If the misspelled
argument has a default value, the default value will be used.
For example, if you pass \code{conf.level = 0.9}, all computation will
proceed using \code{conf.level = 0.95}. Additionally, if you pass
\code{newdata = my_tibble} to an \code{\link[=augment]{augment()}} method that does not
accept a \code{newdata} argument, it will use the default value for
the \code{data} argument.}
}
\description{
Glance accepts a model object and returns a \code{\link[tibble:tibble]{tibble::tibble()}}
with exactly one row of model summaries. The summaries are typically
goodness of fit measures, p-values for hypothesis tests on residuals,
or model convergence information.
Glance never returns information from the original call to the modeling
function. This includes the name of the modeling function or any
arguments passed to the modeling function.
Glance does not calculate summary measures. Rather, it farms out these
computations to appropriate methods and gathers the results together.
Sometimes a goodness of fit measure will be undefined. In these cases
the measure will be reported as \code{NA}.
Glance returns the same number of columns regardless of whether the
model matrix is rank-deficient or not. If so, entries in columns
that no longer have a well-defined value are filled in with an \code{NA}
of the appropriate type.
}
\examples{
library(survey)
set.seed(123)
data(api)
# survey design
dstrat <-
svydesign(
id = ~1,
strata = ~stype,
weights = ~pw,
data = apistrat,
fpc = ~fpc
)
# model
m <- survey::svyglm(
formula = sch.wide ~ ell + meals + mobility,
design = dstrat,
family = quasibinomial()
)
glance(m)
}
\references{
Lumley T, Scott A (2015). AIC and BIC for modelling with complex
survey data. \emph{Journal of Survey Statistics and Methodology}, 3(1).
\url{https://doi.org/10.1093/jssam/smu021}.
}
\seealso{
\code{\link[survey:svyglm]{survey::svyglm()}}, \code{\link[stats:glm]{stats::glm()}}, \link[survey:anova.svyglm]{survey::anova.svyglm}
Other lm tidiers:
\code{\link{augment.glm}()},
\code{\link{augment.lm}()},
\code{\link{glance.glm}()},
\code{\link{glance.lm}()},
\code{\link{glance.summary.lm}()},
\code{\link{tidy.glm}()},
\code{\link{tidy.lm.beta}()},
\code{\link{tidy.lm}()},
\code{\link{tidy.mlm}()},
\code{\link{tidy.summary.lm}()}
}
\concept{lm tidiers}
\value{
A \code{\link[tibble:tibble]{tibble::tibble()}} with exactly one row and columns:
\item{AIC}{Akaike's Information Criterion for the model.}
\item{BIC}{Bayesian Information Criterion for the model.}
\item{deviance}{Deviance of the model.}
\item{df.null}{Degrees of freedom used by the null model.}
\item{df.residual}{Residual degrees of freedom.}
\item{null.deviance}{Deviance of the null model.}
}
| /man/glance.svyglm.Rd | no_license | norberello/broom | R | false | true | 3,511 | rd |
# - option to remove an account from account_list -
# NB: this option is probably not yet fail-proof
observeEvent(input$remove, {
  if (is.null(rv$account_list)) {
    rv$infoText <- "There are no accounts in your account_list yet to remove"
    return()
  }
  account_list <- rv$account_list
  # coerce the text input to numeric; non-numeric input becomes NA,
  # which then fails the %in% check below
  Index <- as.numeric(input$index)
  if (!Index %in% seq_along(account_list)) {
    rv$infoText <- "The given index was not an index in your account_list, please correct input."
    return()
  }
  account_list <- account_list[-Index]
  # store an empty list as NULL so the is.null() check above keeps working
  if (length(account_list) == 0) {
    rv$account_list <- NULL
  } else {
    rv$account_list <- account_list
  }
  rv$infoText <- paste("removed the account that had the index:", Index)
})
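# A minimal sketch (not part of the original file) of the stricter index
# check the "not yet fail-proof" note above hints at: parse the text input
# defensively so non-numeric, fractional, or out-of-range values are rejected
# up front. The helper name `parse_index` is hypothetical.
parse_index <- function(x, n) {
  idx <- suppressWarnings(as.numeric(x))
  if (length(idx) != 1 || is.na(idx) || idx %% 1 != 0 || idx < 1 || idx > n) {
    return(NA_integer_)
  }
  as.integer(idx)
}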
# -- | /account_viewer/Tabs_server/Tab04_RemoveAccount.R | no_license | TBrach/shiny_apps | R | false | false | 966 | r |
# nasty code
x = rnorm(100,3.4,2)
plot(x)
| /R/processedCode/myfirstStep.R | no_license | nelsonauner/rgraphicsgallery | R | false | false | 45 | r |
#Quiz 1
#setwd("~/Data_Science/Coursera/Data_Science_Specialization-John_Hopkins_University/R_Programming/R_Programming_R/Quiz_1/")
Tab <- read.table("hw1_data.csv", sep = ",", header = TRUE)
head(Tab)
tail(Tab)
names(Tab)
dim(Tab)
nrow(Tab)
Tab$Ozone[47]                 # Ozone value in row 47
sum(is.na(Tab$Ozone))         # number of missing Ozone values
mean(Tab$Ozone, na.rm = TRUE) # mean Ozone, ignoring NAs
# mean solar radiation where Ozone > 31 and Temp > 90
Tab_sub1 <- Tab[which(Tab$Ozone > 31 & Tab$Temp > 90), ]
mean(Tab_sub1$Solar.R)
# mean temperature in June (Month == 6)
Tab_sub2 <- Tab[which(Tab$Month == 6), ]
mean(Tab_sub2$Temp)
# maximum Ozone in May (Month == 5)
Tab_sub3 <- Tab[which(Tab$Month == 5), ]
max(Tab_sub3$Ozone, na.rm = TRUE)
Tab[1:2, ]                    # first two rows
| /Quiz_1/quiz_1.R | no_license | raghulmz/R_programming_coursera | R | false | false | 498 | r |
\name{viewKEGG}
\alias{viewKEGG}
\title{Visualize enriched KEGG pathways}
\usage{
viewKEGG(obj, pathwayID, foldChange, color.low = "green",
color.high = "red", kegg.native = TRUE, out.suffix = "clusterProfiler")
}
\arguments{
\item{obj}{enrichResult object}
\item{pathwayID}{pathway ID or index}
\item{foldChange}{fold change values}
\item{color.low}{color of low foldChange genes}
\item{color.high}{color of high foldChange genes}
\item{kegg.native}{logical; if \code{TRUE}, render the result on the
native KEGG pathway graph (see the pathview package)}
\item{out.suffix}{suffix of output file}
}
\description{
viewKEGG visualizes enriched KEGG pathways. It works with an
enrichResult object to visualize the enriched KEGG pathway.
}
\references{
Luo et al. (2013) Pathview: an R/Bioconductor package for
pathway-based data integration and visualization.
\emph{Bioinformatics} (Oxford, England), 29:14 1830--1831,
2013. ISSN 1367-4803
\url{http://bioinformatics.oxfordjournals.org/content/abstract/29/14/1830.abstract}
PMID: 23740750
}
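% Hypothetical usage sketch (not in the original file): assumes `kk' is an
% enrichResult object from a KEGG enrichment analysis and `fold_changes' a
% fold-change vector named by gene ID; both names are illustrative only.
\examples{
\dontrun{
kk <- enrichKEGG(de_genes)
viewKEGG(kk, pathwayID = 1, foldChange = fold_changes)
}
}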
| /2X/2.14/clusterProfiler/man/viewKEGG.Rd | no_license | GuangchuangYu/bioc-release | R | false | false | 1,041 | rd |
\name{explorespec}
\alias{explorespec}
\title{Plot spectral curves}
\usage{
explorespec(rspecdata, by = 1, scale = c("equal", "free"),
legpos = "topright", ...)
}
\arguments{
\item{rspecdata}{(required) a data frame, possibly an
object of class \code{rspec} that has wavelength range in
the first column, named 'wl', and spectral measurements
in the remaining columns.}
\item{by}{number of spectra to include in each graph
(defaults to 1)}
\item{scale}{defines how the y-axis should be scaled.
\code{'free'}: panels can vary in the range of the
y-axis; \code{'equal'}: all panels have the y-axis with
the same range.}
\item{legpos}{legend position control. Either a vector
containing \code{x} and \code{y} coordinates or a single
keyword from the list: \code{"bottomright"},
\code{"bottom"}, \code{"bottomleft"}, \code{"left"},
\code{"topleft"}, \code{"top"}, \code{"topright"},
\code{"right"} and \code{"center"}.}
\item{...}{additional parameters to be passed to plot}
}
\value{
Spectral curve plots
}
\description{
Plots one or multiple spectral curves in the same graph to
rapidly compare groups of spectra.
}
\note{
Number of plots presented per page depends on the number of
graphs produced.
}
\examples{
\dontrun{
data(sicalis)
explorespec(sicalis, 3)
explorespec(sicalis, 3, ylim=c(0,100), legpos=c(500,80))}
}
\author{
Pierre-Paul Bitton \email{bittonp@uwindsor.ca}
}
| /man/explorespec.Rd | no_license | biocrazy/pavo | R | false | false | 1,418 | rd |
library(tidyverse)
library(magrittr)
library(sf)
library(DT)
library(leaflet)
library(shiny)
library(shinydashboard)
library(htmltools)
library(scales)
elections <- read_rds('output/elections_nest.rds') %>%
mutate(race_num = as.numeric(str_remove_all(race, 'house(0)*')))
state_2018_approx <- read_rds('output/approx_2018_shp.rds')
candidates_2018 <- read_rds('output/house_candidates_2018.rds') %>%
mutate(Party = if_else(Party == 'Democratic Party', 'Dem', 'GOP'))
candidates_2016 <- read_csv('input/house_candidates.csv')
house_full <- st_read(dsn = 'output/house_f', layer = 'house_map_shp') %>%
mutate(DIST = paste0('house',str_pad(DISTRICT, 3, 'left', '0')))
aggregate_2016 <- read_csv('input/ky_house_agg.csv') %>%
rename(race_num = District) %>%
mutate(race_num = map_dbl(race_num, ~as.numeric(paste(str_extract_all(., '\\d', T), collapse = ''))),
District = paste0('house', map_chr(race_num, ~paste(str_extract_all(., '\\d', T), collapse = ''))),
         agg_d_pct = Dem / (Dem + GOP))
incumbents <- read_csv('input/house_incumbents.csv')
# elections <- read_rds('predict_2018/output/elections_nest.rds')
# state_2018_approx <- read_rds('predict_2018/output/approx_2018_shp.rds')
# candidates_2018 <- read_rds('predict_2018/output/house_candidates_2018.rds')
# candidates_2016 <- read_csv('predict_2018/input/house_candidates.csv')
# house_full <- st_read(dsn = 'predict_2018/shapefiles/house_f', layer = 'house_map_shp') %>%
# mutate(DIST = paste0('house',str_pad(DISTRICT, 3, 'left', '0')))
# aggregate_2016 <- read_csv('predict_2018/input/ky_house_agg.csv') %>%
# rename(race_num = District) %>%
# mutate(race_num = map_dbl(race_num, ~as.numeric(paste(str_extract_all(., '\\d', T), collapse = ''))),
# District = paste0('house', map_chr(race_num, ~paste(str_extract_all(., '\\d', T), collapse = ''))),
# agg_d_pct = Dem / (Dem +GOP))
# incumbents <- read_csv('predict_2018/input/house_incumbents.csv')
pol_pal <- function(x) case_when(x == 'GOP' ~ '#de2d26',
x == 'Dem' ~ '#3182bd',
T ~ 'white')
make_table_18 <- function(race_int, d_delta, r_delta){
elections %>%
filter(race_num == race_int) %>%
select(data) %>%
unnest() %>%
mutate(`Margin 16` = Dem - GOP,
`Winner 16` = case_when(`Margin 16` < 0 ~ 'GOP',
`Margin 16` > 0 ~ 'Dem',
T ~ 'Tie'),
`Margin 16` = abs(`Margin 16`)) %>%
select(Precinct, Registered, `Dem 16` = Dem, `GOP 16` = GOP, `Winner 16`) %>%
mutate(`Dem 18` = round(`Dem 16` * (1 + d_delta), 0),
`GOP 18` = round(`GOP 16` * (1 + r_delta),0),
`Margin 18` = `Dem 18` - `GOP 18`,
`Winner 18` = case_when(`Margin 18` < 0 ~ 'GOP',
`Margin 18` > 0 ~ 'Dem',
T ~ 'Tie'))
}
make_leaflet_18 <- function(race_int, d_delta, r_delta){
df <- make_table_18(race_int, d_delta, r_delta)
df_l <- inner_join(state_2018_approx, df, by = "Precinct")
df_centroid <- df_l %>%
mutate(lon = map_dbl(geometry, ~st_centroid(.x)[[1]]),
lat = map_dbl(geometry, ~st_centroid(.x)[[2]])) %>%
select(-geometry)
df_centroid <- tibble(Precinct = df_centroid$Precinct,
lon = df_centroid$lon,
lat = df_centroid$lat,
margin = df_centroid$`Margin 18`,
winner = df_centroid$`Winner 18`) %>%
mutate(lab = map2(Precinct, margin,
function(p,m) htmltools::HTML(str_glue('<b>{str_sub(p, 6)}</b><br>
<b>Margin:</b> {scales::comma_format()(m)}'))))
leaflet(df_l) %>%
addTiles() %>%
addPolygons(
fillOpacity = 0,
color = 'black',
weight = 1
) %>%
addCircleMarkers(data = df_centroid, radius = ~(margin / 25),
color = ~pol_pal(winner),
label = ~lab,
fillOpacity = 0.8)
}
ui <- dashboardPage(
dashboardHeader(title = 'Kentucky Election, 2018', titleWidth = 300),
dashboardSidebar(
sidebarMenu(
menuItem('Introduction', tabName = 'intro'),
menuItem('Statewide', tabName = 'statewide'),
menuItem('District', tabName = 'district')
),
selectInput('district_select', 'Select District', choices = 1:100),
sliderInput('dem_delta', 'Change in Democratic Turnout', min = -100, max = 10, value = -30),
sliderInput('gop_delta', 'Change in GOP Turnout', min = -100, max = 10, value = -45)
),
dashboardBody(
tabItems(
tabItem(tabName = 'intro', uiOutput('app_intro')),
tabItem(tabName = 'district',
fluidRow(
box(width = 12, solidHeader = F,
uiOutput('inc_district'),
uiOutput('inc_name'),
uiOutput('inc_party'),
uiOutput('inc_counties'),
box(width = 12, title = '2018 Candidates',
DT::dataTableOutput('candidates_2018'))
),
align = 'center'
),
fluidRow(
box(width = 6, solidHeader = F, leafletOutput('district_leaflet')),
box(width = 6, title = 'Projected Results', tableOutput('district_results'))
),
fluidRow(
box(width = 12, solidHeader = F, title = 'Precinct Detail',
DT::dataTableOutput('precinct_dt'))
)
),
tabItem(tabName = 'statewide',
fluidRow(
box(width = 12, leafletOutput('state_leaflet')),
box(width = 12, DT::dataTableOutput('state_table'))
)
)
)
)
)
server <- function(input, output){
output$inc_district <- renderUI({
    h1(str_glue('District {input$district_select}'))
})
output$inc_name <- renderUI({
df <- incumbents %>%
filter(DISTRICT == reactive({input$district_select})())
h2(str_glue('{df$First[1]} {df$Last[1]}'))
})
output$inc_party <- renderUI({
df <- incumbents %>%
filter(DISTRICT == reactive({input$district_select})())
h3(if_else(df$PARTY[1] == '(D)', 'Democratic', 'Republican'))
})
output$inc_counties <- renderUI({
df <- incumbents %>%
filter(DISTRICT == reactive({input$district_select})())
h4(df$COUNTIES[1])
})
output$candidates_2018 <- DT::renderDataTable({
candidates_2018 %>%
filter(District == reactive({input$district_select})()) %>%
select(-District) %>%
mutate_at(c('Website', 'Facebook', 'Twitter'), ~ifelse(is.na(.),'',str_glue('<a href = {.}>link</a>'))) %>%
datatable(options = list(dom = 't', ordering = F), rownames = F, escape = F)
})
output$district_leaflet <- renderLeaflet({
shp <- filter(elections, race_num == reactive({input$district_select})())$shp[1] %>% if_else(is.na(.), F, .)
if(shp == T){
make_leaflet_18(reactive({input$district_select})(),
reactive({input$dem_delta})() / 100,
reactive({input$gop_delta})() / 100)
}else{
leaflet(house_full %>% filter(DISTRICT == reactive({input$district_select})())) %>%
addTiles() %>%
addPolygons(
fillOpacity = 0,
color = 'black',
weight = 4
)
}
})
output$district_results <- renderTable({
race_df <- aggregate_2016 %>%
filter(race_num == reactive({input$district_select})())
d_16 <- race_df$Dem[1]
d_18 <- race_df$Dem[1] * (1 + (reactive({input$dem_delta})() / 100))
r_16 <- race_df$GOP[1]
r_18 <- race_df$GOP[1] * (1 + (reactive({input$gop_delta})() / 100))
turnout_delta <- ((r_18 + d_18) - (r_16 + d_16)) / (r_16 + d_16)
win_16 <- case_when(d_16 > r_16 ~ 'Dem',
d_16 < r_16 ~ 'GOP',
T ~ 'Tie')
win_18 <- case_when(d_18 > r_18 ~ 'Dem',
d_18 < r_18 ~ 'GOP',
T ~ 'Tie')
tibble(Year = c('2016', '2018'),
`Democratic Votes` = c(str_glue('{comma_format()(d_16)}, ({percent_format()(race_df$agg_d_pct[1])})'),
str_glue('{comma_format()(round(d_18,0))}, ({percent_format()(d_18 / (d_18 + r_18))})')),
`Republican Votes` = c(str_glue('{comma_format()(r_16)}, ({percent_format()(1 - race_df$agg_d_pct)})'),
str_glue('{comma_format()(round(r_18,0))}, ({percent_format()(r_18 / (d_18 + r_18))})')),
Winner = c(win_16, win_18),
`Turnout Change` = c('', percent_format()(turnout_delta))
)
})
output$precinct_dt <- DT::renderDataTable({
if(reactive({input$district_select})() %in% elections$race_num){
precinct_df <- elections %>%
filter(race_num == reactive({input$district_select})()) %>%
select(data) %>%
unnest() %>%
mutate(Registered = comma_format()(Registered),
`Dem 18` = Dem * (1+(reactive({input$dem_delta})() / 100)),
`GOP 18` = GOP * (1+(reactive({input$gop_delta})() / 100)),
`Turnout Change` = percent_format()(((`Dem 18` + `GOP 18`) - (Dem + GOP)) / (Dem + GOP)),
Precinct = str_sub(Precinct, 6),
               `Dem Margin 16` = str_glue('{round(Dem - GOP, 0)} ({percent_format()((Dem - GOP) / (Dem + GOP))})'),
               `Dem 16` = str_glue('{Dem} ({percent_format()(d_pct)})'),
               `GOP 16` = str_glue('{GOP} ({percent_format()(1 - d_pct)})'),
               `Dem Margin 18` = str_glue('{round(`Dem 18` - `GOP 18`,0)} ({percent_format()((`Dem 18` - `GOP 18`) / (`Dem 18` + `GOP 18`))})'),
               Dem_18 = str_glue('{round(`Dem 18`,0)} ({percent_format()(`Dem 18` / (`Dem 18` + `GOP 18`))})'),
               `GOP 18` = str_glue('{round(`GOP 18`,0)} ({percent_format()(`GOP 18` / (`Dem 18` + `GOP 18`))})')) %>%
select(Precinct, Registered, `Dem 16`, `GOP 16`, `Dem Margin 16`, `Dem 18` = Dem_18, `GOP 18`, `Dem Margin 18`,
`Turnout Change`) %>%
datatable(options = list(pageLength = 1000, dom = 't'), rownames = F)
}else{
datatable(tibble(`Precinct Data Unavailable` = ''), options = list(dom = 't'), rownames = F)
}
})
output$state_leaflet <- renderLeaflet({
df_centroid <- house_full %>%
left_join(aggregate_2016 %>%
mutate(Dem = Dem * (1 + (reactive({input$dem_delta})() / 100)),
GOP = GOP * (1 + (reactive({input$gop_delta})() / 100)),
margin = Dem - GOP,
winner = case_when(margin > 0 ~ 'Dem',
margin < 0 ~ 'GOP',
T ~ 'Tie'),
margin = abs(margin),
race_num = as.character(race_num)),
by = c('DISTRICT' = 'race_num')) %>%
mutate(lon = map_dbl(geometry, ~st_centroid(.x)[[1]]),
lat = map_dbl(geometry, ~st_centroid(.x)[[2]])) %>%
select(-geometry)
df_centroid <- tibble(District = df_centroid$DISTRICT,
lon = df_centroid$lon,
lat = df_centroid$lat,
margin = round(df_centroid$margin,0),
winner = df_centroid$winner) %>%
mutate(lab = map2(District, margin,
function(p,m) htmltools::HTML(str_glue('<b>District: </b>{p}<br>
<b>Margin:</b> {scales::comma_format()(m)}'))))
leaflet(house_full) %>%
addTiles() %>%
addPolygons(
fillOpacity = 0,
color = 'black',
weight = 1
) %>%
addCircleMarkers(data = df_centroid, radius = ~(margin / 1000),
color = ~pol_pal(winner),
label = ~lab,
fillOpacity = 0.8)
})
output$state_table <- DT::renderDataTable({
state_df <- aggregate_2016 %>%
mutate(Dem18 = Dem * (1 + (reactive({input$dem_delta})() / 100)),
GOP18 = GOP * (1 + (reactive({input$gop_delta})() / 100)),
margin = Dem18 - GOP18,
winner = case_when(margin > 0 ~ 'Dem',
margin < 0 ~ 'GOP',
T ~ 'Tie'),
margin = abs(margin),
race_num = as.character(race_num))
dem_seats <- state_df %>% mutate(d_seat = Dem18 > GOP18) %>% filter(d_seat == T) %>% nrow()
gop_seats <- state_df %>% mutate(d_seat = Dem18 > GOP18) %>% filter(d_seat == F) %>% nrow()
wi_k_d <- state_df %>% mutate(d_seat = Dem18 > GOP18) %>% filter(d_seat == T, margin < 1000) %>% nrow()
wi_k_r <- state_df %>% mutate(d_seat = Dem18 > GOP18) %>% filter(d_seat == F, margin < 1000) %>% nrow()
turnout_delta <- ((sum(state_df$Dem18) + sum(state_df$GOP18)) -
(sum(state_df$Dem) + sum(state_df$GOP))) / (sum(state_df$Dem) + sum(state_df$GOP))
tibble(`Democratic Seats` = dem_seats,
`GOP Seats` = gop_seats,
`Dem Seats, <1000 Vote Margin` = wi_k_d,
`GOP Seats, <1000 Vote Margin` = wi_k_r,
`Total Turnout Change` = percent_format()(turnout_delta)) %>%
datatable(options = list(dom = 't', ordering = F), rownames = F)
})
output$app_intro <- renderUI({
tags$div(
tags$h1('Predicting the 2018 Kentucky House of Representatives'),
      tags$p('This application uses election data from 2016 to assist in forming predictions about the 2018 election for the Kentucky House of Representatives. Use the sliders in the sidebar to estimate how turnout will change from the 2016 election for Democrats and Republicans. The defaults are set to a 30% decrease in turnout for Democrats and a 45% decrease in turnout for Republicans -- this results in a total decrease in turnout between 2016 and 2018 of 38.4%. That number is in line with the 39% average decrease in turnout between Presidential elections and midterm elections which has occurred over the past 4 cycles. I selected -30% for Democrats and -45% for Republicans based very loosely on the special election in District 49 which occurred earlier this year, as well as a gut feeling, and dumb optimism.'),
tags$h2('Elements'),
      tags$p('The first page of this application is a map of all the House districts in Kentucky, with a circle drawn on top of the center of the district. The color of the circle indicates the candidate who the sliders in the sidebar predict will win the election, and the size of the circle indicates the margin of victory. I’ve also included a table at the bottom which shows the number of races which fell within 1,000 votes.'),
      tags$p('The second page of the application is a district detail. Each district has different elements. For many districts, especially those entirely within Fayette, Jefferson, Boone, Kenton, Campbell, and Pendleton counties, there are detailed maps which show the impacts of your predictions on each individual precinct. Unfortunately, not every district has a precinct map -- while I’ve managed to receive 2016 precinct maps from the counties listed above, the last statewide precinct map was created based on the 2015 election. Any district which has seen its precincts change between 2015 and now will not have a precinct-level map.'),
      tags$p('The second page also includes other information. At the top, I’ve included the information I found in February of 2018 about the candidates running in each district (email me at rkahne@gmail.com if you would like to see an update). I’ve also included an aggregate table showing the impact of your predictions on the race as a whole, and a detailed table of the impact of your predictions on each individual precinct. However, the Secretary of State only has precinct-level election results in tabular format for 83 of the 100 districts. '),
tags$h2('Caveats and Thanks'),
tags$p('For districts that went uncontested in 2016, I used the US Senate data as a proxy. That might make results in Lexington and surrounding areas a little wonky, as Jim Gray greatly outperformed many House candidates in those areas.'),
tags$p('The idea for this app came from Troy Ransdell, who built a really great Excel tool that formed a lot of logic that went into creating this application. Troy is the best!'),
tags$p('This app was created by me, Robert Kahne. Feel free to use any information you find in it anywhere you like, but please provide a citation.')
)
})
}
shinyApp(ui = ui, server = server)
| /app.R | no_license | rkahne/ky-house-18 | R | false | false | 16,842 | r | library(tidyverse)
library(magrittr)
library(sf)
library(DT)
library(leaflet)
library(shiny)
library(shinydashboard)
library(htmltools)
library(scales)
elections <- read_rds('output/elections_nest.rds') %>%
mutate(race_num = as.numeric(str_remove_all(race, 'house(0)*')))
state_2018_approx <- read_rds('output/approx_2018_shp.rds')
candidates_2018 <- read_rds('output/house_candidates_2018.rds') %>%
mutate(Party = if_else(Party == 'Democratic Party', 'Dem', 'GOP'))
candidates_2016 <- read_csv('input/house_candidates.csv')
house_full <- st_read(dsn = 'output/house_f', layer = 'house_map_shp') %>%
mutate(DIST = paste0('house',str_pad(DISTRICT, 3, 'left', '0')))
aggregate_2016 <- read_csv('input/ky_house_agg.csv') %>%
rename(race_num = District) %>%
mutate(race_num = map_dbl(race_num, ~as.numeric(paste(str_extract_all(., '\\d', T), collapse = ''))),
District = paste0('house', map_chr(race_num, ~paste(str_extract_all(., '\\d', T), collapse = ''))),
agg_d_pct = Dem / (Dem +GOP))
incumbents <- read_csv('input/house_incumbents.csv')
# elections <- read_rds('predict_2018/output/elections_nest.rds')
# state_2018_approx <- read_rds('predict_2018/output/approx_2018_shp.rds')
# candidates_2018 <- read_rds('predict_2018/output/house_candidates_2018.rds')
# candidates_2016 <- read_csv('predict_2018/input/house_candidates.csv')
# house_full <- st_read(dsn = 'predict_2018/shapefiles/house_f', layer = 'house_map_shp') %>%
# mutate(DIST = paste0('house',str_pad(DISTRICT, 3, 'left', '0')))
# aggregate_2016 <- read_csv('predict_2018/input/ky_house_agg.csv') %>%
# rename(race_num = District) %>%
# mutate(race_num = map_dbl(race_num, ~as.numeric(paste(str_extract_all(., '\\d', T), collapse = ''))),
# District = paste0('house', map_chr(race_num, ~paste(str_extract_all(., '\\d', T), collapse = ''))),
# agg_d_pct = Dem / (Dem +GOP))
# incumbents <- read_csv('predict_2018/input/house_incumbents.csv')
pol_pal <- function(x) case_when(x == 'GOP' ~ '#de2d26',
x == 'Dem' ~ '#3182bd',
T ~ 'white')
make_table_18 <- function(race_int, d_delta, r_delta){
elections %>%
filter(race_num == race_int) %>%
select(data) %>%
unnest() %>%
mutate(`Margin 16` = Dem - GOP,
`Winner 16` = case_when(`Margin 16` < 0 ~ 'GOP',
`Margin 16` > 0 ~ 'Dem',
T ~ 'Tie'),
`Margin 16` = abs(`Margin 16`)) %>%
select(Precinct, Registered, `Dem 16` = Dem, `GOP 16` = GOP, `Winner 16`) %>%
mutate(`Dem 18` = round(`Dem 16` * (1 + d_delta), 0),
`GOP 18` = round(`GOP 16` * (1 + r_delta),0),
`Margin 18` = `Dem 18` - `GOP 18`,
`Winner 18` = case_when(`Margin 18` < 0 ~ 'GOP',
`Margin 18` > 0 ~ 'Dem',
T ~ 'Tie'))
}
make_leaflet_18 <- function(race_int, d_delta, r_delta){
df <- make_table_18(race_int, d_delta, r_delta)
df_l <- inner_join(state_2018_approx, df, by = "Precinct")
df_centroid <- df_l %>%
mutate(lon = map_dbl(geometry, ~st_centroid(.x)[[1]]),
lat = map_dbl(geometry, ~st_centroid(.x)[[2]])) %>%
select(-geometry)
df_centroid <- tibble(Precinct = df_centroid$Precinct,
lon = df_centroid$lon,
lat = df_centroid$lat,
margin = df_centroid$`Margin 18`,
winner = df_centroid$`Winner 18`) %>%
mutate(lab = map2(Precinct, margin,
function(p,m) htmltools::HTML(str_glue('<b>{str_sub(p, 6)}</b><br>
<b>Margin:</b> {scales::comma_format()(m)}'))))
leaflet(df_l) %>%
addTiles() %>%
addPolygons(
fillOpacity = 0,
color = 'black',
weight = 1
) %>%
addCircleMarkers(data = df_centroid, radius = ~(margin / 25),
color = ~pol_pal(winner),
label = ~lab,
fillOpacity = 0.8)
}
ui <- dashboardPage(
dashboardHeader(title = 'Kentucky Election, 2018', titleWidth = 300),
dashboardSidebar(
sidebarMenu(
menuItem('Introduction', tabName = 'intro'),
menuItem('Statewide', tabName = 'statewide'),
menuItem('District', tabName = 'district')
),
selectInput('district_select', 'Select District', choices = 1:100),
sliderInput('dem_delta', 'Change in Democratic Turnout', min = -100, max = 10, value = -30),
sliderInput('gop_delta', 'Change in GOP Turnout', min = -100, max = 10, value = -45)
),
dashboardBody(
tabItems(
tabItem(tabName = 'intro', uiOutput('app_intro')),
tabItem(tabName = 'district',
fluidRow(
box(width = 12, solidHeader = F,
uiOutput('inc_district'),
uiOutput('inc_name'),
uiOutput('inc_party'),
uiOutput('inc_counties'),
box(width = 12, title = '2018 Candidates',
DT::dataTableOutput('candidates_2018'))
),
align = 'center'
),
fluidRow(
box(width = 6, solidHeader = F, leafletOutput('district_leaflet')),
box(width = 6, title = 'Projected Results', tableOutput('district_results'))
),
fluidRow(
box(width = 12, solidHeader = F, title = 'Precinct Detail',
DT::dataTableOutput('precinct_dt'))
)
),
tabItem(tabName = 'statewide',
fluidRow(
box(width = 12, leafletOutput('state_leaflet')),
box(width = 12, DT::dataTableOutput('state_table'))
)
)
)
)
)
server <- function(input, output){
output$inc_district <- renderUI({
h1(str_glue('Distrct {input$district_select}'))
})
output$inc_name <- renderUI({
df <- incumbents %>%
filter(DISTRICT == reactive({input$district_select})())
h2(str_glue('{df$First[1]} {df$Last[1]}'))
})
output$inc_party <- renderUI({
df <- incumbents %>%
filter(DISTRICT == reactive({input$district_select})())
h3(if_else(df$PARTY[1] == '(D)', 'Democratic', 'Republican'))
})
output$inc_counties <- renderUI({
df <- incumbents %>%
filter(DISTRICT == reactive({input$district_select})())
h4(df$COUNTIES[1])
})
output$candidates_2018 <- DT::renderDataTable({
candidates_2018 %>%
filter(District == reactive({input$district_select})()) %>%
select(-District) %>%
mutate_at(c('Website', 'Facebook', 'Twitter'), ~ifelse(is.na(.),'',str_glue('<a href = {.}>link</a>'))) %>%
datatable(options = list(dom = 't', ordering = F), rownames = F, escape = F)
})
output$district_leaflet <- renderLeaflet({
shp <- filter(elections, race_num == reactive({input$district_select})())$shp[1] %>% if_else(is.na(.), F, .)
if(shp == T){
make_leaflet_18(reactive({input$district_select})(),
reactive({input$dem_delta})() / 100,
reactive({input$gop_delta})() / 100)
}else{
leaflet(house_full %>% filter(DISTRICT == reactive({input$district_select})())) %>%
addTiles() %>%
addPolygons(
fillOpacity = 0,
color = 'black',
weight = 4
)
}
})
output$district_results <- renderTable({
race_df <- aggregate_2016 %>%
filter(race_num == reactive({input$district_select})())
d_16 <- race_df$Dem[1]
d_18 <- race_df$Dem[1] * (1 + (reactive({input$dem_delta})() / 100))
r_16 <- race_df$GOP[1]
r_18 <- race_df$GOP[1] * (1 + (reactive({input$gop_delta})() / 100))
turnout_delta <- ((r_18 + d_18) - (r_16 + d_16)) / (r_16 + d_16)
win_16 <- case_when(d_16 > r_16 ~ 'Dem',
d_16 < r_16 ~ 'GOP',
T ~ 'Tie')
win_18 <- case_when(d_18 > r_18 ~ 'Dem',
d_18 < r_18 ~ 'GOP',
T ~ 'Tie')
tibble(Year = c('2016', '2018'),
`Democratic Votes` = c(str_glue('{comma_format()(d_16)}, ({percent_format()(race_df$agg_d_pct[1])})'),
str_glue('{comma_format()(round(d_18,0))}, ({percent_format()(d_18 / (d_18 + r_18))})')),
`Republican Votes` = c(str_glue('{comma_format()(r_16)}, ({percent_format()(1 - race_df$agg_d_pct)})'),
str_glue('{comma_format()(round(r_18,0))}, ({percent_format()(r_18 / (d_18 + r_18))})')),
Winner = c(win_16, win_18),
`Turnout Change` = c('', percent_format()(turnout_delta))
)
})
output$precinct_dt <- DT::renderDataTable({
if(reactive({input$district_select})() %in% elections$race_num){
precinct_df <- elections %>%
filter(race_num == reactive({input$district_select})()) %>%
select(data) %>%
unnest() %>%
mutate(Registered = comma_format()(Registered),
`Dem 18` = Dem * (1+(reactive({input$dem_delta})() / 100)),
`GOP 18` = GOP * (1+(reactive({input$gop_delta})() / 100)),
`Turnout Change` = percent_format()(((`Dem 18` + `GOP 18`) - (Dem + GOP)) / (Dem + GOP)),
Precinct = str_sub(Precinct, 6),
`Dem Margin 16` = str_glue('{round(Dem - GOP,)} ({percent_format()((Dem - GOP) / (Dem + GOP))})'),
`Dem 16` = str_glue('{Dem} ({percent_format()(d_pct)})'),
`GOP 16` = str_glue('{GOP} ({percent_format()(d_pct)})'),
`Dem Margin 18` = str_glue('{round(`Dem 18` - `GOP 18`,0)} ({percent_format()((`Dem 18` - `GOP 18`) / (`Dem 18` + `GOP 18`))})'),
Dem_18 = str_glue('{round(`Dem 18`,0)} ({percent_format()(`Dem 18` / (`Dem 18` + `GOP 18`))})'),
`GOP 18` = str_glue('{round(`GOP 18`,0)} ({percent_format()(`Dem 18` / (`Dem 18` + `GOP 18`))})')) %>%
select(Precinct, Registered, `Dem 16`, `GOP 16`, `Dem Margin 16`, `Dem 18` = Dem_18, `GOP 18`, `Dem Margin 18`,
`Turnout Change`) %>%
datatable(options = list(pageLength = 1000, dom = 't'), rownames = F)
}else{
datatable(tibble(`Precinct Data Unavailable` = ''), options = list(dom = 't'), rownames = F)
}
})
output$state_leaflet <- renderLeaflet({
df_centroid <- house_full %>%
left_join(aggregate_2016 %>%
mutate(Dem = Dem * (1 + (reactive({input$dem_delta})() / 100)),
GOP = GOP * (1 + (reactive({input$gop_delta})() / 100)),
margin = Dem - GOP,
winner = case_when(margin > 0 ~ 'Dem',
margin < 0 ~ 'GOP',
T ~ 'Tie'),
margin = abs(margin),
race_num = as.character(race_num)),
by = c('DISTRICT' = 'race_num')) %>%
mutate(lon = map_dbl(geometry, ~st_centroid(.x)[[1]]),
lat = map_dbl(geometry, ~st_centroid(.x)[[2]])) %>%
select(-geometry)
df_centroid <- tibble(District = df_centroid$DISTRICT,
lon = df_centroid$lon,
lat = df_centroid$lat,
margin = round(df_centroid$margin,0),
winner = df_centroid$winner) %>%
mutate(lab = map2(District, margin,
function(p,m) htmltools::HTML(str_glue('<b>District: </b>{p}<br>
<b>Margin:</b> {scales::comma_format()(m)}'))))
leaflet(house_full) %>%
addTiles() %>%
addPolygons(
fillOpacity = 0,
color = 'black',
weight = 1
) %>%
addCircleMarkers(data = df_centroid, radius = ~(margin / 1000),
color = ~pol_pal(winner),
label = ~lab,
fillOpacity = 0.8)
})
output$state_table <- DT::renderDataTable({
state_df <- aggregate_2016 %>%
mutate(Dem18 = Dem * (1 + (reactive({input$dem_delta})() / 100)),
GOP18 = GOP * (1 + (reactive({input$gop_delta})() / 100)),
margin = Dem18 - GOP18,
winner = case_when(margin > 0 ~ 'Dem',
margin < 0 ~ 'GOP',
T ~ 'Tie'),
margin = abs(margin),
race_num = as.character(race_num))
dem_seats <- state_df %>% mutate(d_seat = Dem18 > GOP18) %>% filter(d_seat == T) %>% nrow()
gop_seats <- state_df %>% mutate(d_seat = Dem18 > GOP18) %>% filter(d_seat == F) %>% nrow()
wi_k_d <- state_df %>% mutate(d_seat = Dem18 > GOP18) %>% filter(d_seat == T, margin < 1000) %>% nrow()
wi_k_r <- state_df %>% mutate(d_seat = Dem18 > GOP18) %>% filter(d_seat == F, margin < 1000) %>% nrow()
turnout_delta <- ((sum(state_df$Dem18) + sum(state_df$GOP18)) -
(sum(state_df$Dem) + sum(state_df$GOP))) / (sum(state_df$Dem) + sum(state_df$GOP))
tibble(`Democratic Seats` = dem_seats,
`GOP Seats` = gop_seats,
`Dem Seats, <1000 Vote Margin` = wi_k_d,
`GOP Seats, <1000 Vote Margin` = wi_k_r,
`Total Turnout Change` = percent_format()(turnout_delta)) %>%
datatable(options = list(dom = 't', ordering = F), rownames = F)
})
output$app_intro <- renderUI({
tags$div(
tags$h1('Predicting the 2018 Kentucky House of Representatives'),
tags$p('This application uses election data from 2016 to assist in forming predictions about the 2018 election for the Kentucky House of Representatives. Use the sliders in the sidebar to estimate how turnout will change from the 2016 election for Democrats and Republicans. The defaults are set to a 30% decrease in turnout for Democrats and a 45% decrease in turnout for Republicans -- this results in a total decrease in turnout between 2016 and 2018 of 38.4%. That number is in line with the 39% average decrease in turnout between Presidential elections and midterm elections which has occurred over the past 4 cycles. I selected -30% for Democrats and -45% for Republicans based very loosely on the special election in District 49 which occurred earlier this year, as well as a gut feeling and dumb optimism.'),
tags$h2('Elements'),
tags$p('The first page of this application is a map of all the House districts in Kentucky, with a circle drawn on top of the center of the district. The color of the circle indicates the candidate who the sliders in the sidebar predict will win the election, and the size of the circles indicates the margin of victory. I’ve also included a table at the bottom which shows the number of races which fell within 1,000 votes.'),
tags$p('The second page of the application is a district detail. Each district has different elements. For many districts, especially those entirely within Fayette, Jefferson, Boone, Kenton, Campbell, and Pendleton counties, there are detailed maps which show the impacts of your predictions on each individual precinct. Unfortunately, not every district has a precinct map -- while I’ve managed to receive 2016 precinct maps from the counties listed above, the last statewide precinct map was created based on the 2015 election. Any district whose precincts have changed between 2015 and now will not have a precinct-level map.'),
tags$p('The second page also includes other information. At the top, I’ve included the information I found in February of 2018 about the candidates running in each district (email me at rkahne@gmail.com if you would like to see an update). I’ve also included an aggregate table showing the impact of your predictions on the race as a whole, and a detailed table of the impact of your predictions on each individual precinct. However, the Secretary of State only has precinct-level election results in tabular format for 83 of the 100 districts.'),
tags$h2('Caveats and Thanks'),
tags$p('For districts that went uncontested in 2016, I used the US Senate data as a proxy. That might make results in Lexington and surrounding areas a little wonky, as Jim Gray greatly outperformed many House candidates in those areas.'),
tags$p('The idea for this app came from Troy Ransdell, who built a really great Excel tool that formed a lot of logic that went into creating this application. Troy is the best!'),
tags$p('This app was created by me, Robert Kahne. Feel free to use any information you find in it anywhere you like, but please provide a citation.')
)
})
}
shinyApp(ui = ui, server = server)
|
#' ba_trials
#'
#' Lists trial summaries available on a brapi server
#'
#' @param con list, brapi connection object
#' @param programDbId character; program filter to only return trials associated
#' with given program database identifier; default: ""
#' @param locationDbId character, location filter to only return trials associated
#' with given location database identifier; default: ""
#' @param active logical; filter active status; default: "any", other possible
#' values TRUE/FALSE
#' @param sortBy character; name of the field to sort by; default: ""
#' @param sortOrder character; sort order direction; default: "", possible values
#' "asc"/"desc"
#' @param pageSize integer, items per page to be returned; default: 1000
#' @param page integer, the requested page to be returned; default: 0 (1st page)
#' @param rclass character, class of the object to be returned; default: "tibble"
#' , possible other values: "json"/"list"/"data.frame"
#'
#' @return An object of class as defined by rclass containing trial summaries.
#'
#' @note Tested against: BMS, sweetpotatobase, test-server
#' @note BrAPI Version: 1.0, 1.1, 1.2
#' @note BrAPI Status: active
#'
#' @author Reinhard Simon, Maikel Verouden
#' @references \href{https://github.com/plantbreeding/API/blob/V1.2/Specification/Trials/ListTrialSummaries.md}{github}
#' @family trials
#' @family phenotyping
#' @example inst/examples/ex-ba_trials.R
#' @import tibble
#' @export
ba_trials <- function(con = NULL,
programDbId = "",
locationDbId = "",
active = "any",
sortBy = "",
sortOrder = "",
pageSize = 1000,
page = 0,
rclass = "tibble") {
ba_check(con = con, verbose = FALSE, brapi_calls = "trials")
stopifnot(is.character(programDbId))
stopifnot(is.character(locationDbId))
stopifnot(is.logical(active) || active == "any")
stopifnot(is.character(sortBy))
stopifnot(is.character(sortOrder))
if (programDbId == "") {
ba_message('Consider specifying other parameters like "programDbId"!\n')
}
check_paging(pageSize = pageSize, page = page)
check_rclass(rclass = rclass)
brp <- get_brapi(con = con)
ptrials <- paste0(brp, "trials?")
pprogramDbId <- ifelse(programDbId != "",
paste0("programDbId=", programDbId, "&"), "")
plocationDbId <- ifelse(locationDbId != "",
paste0("locationDbId=", locationDbId, "&"), "")
pactive <- ifelse(active != "any", paste0("active=", tolower(active), "&"), "")
psortBy <- ifelse(sortBy != "", paste0("sortBy=", sortBy, "&"), "")
psortOrder <- ifelse(sortOrder != "", paste0("sortOrder=", sortOrder, "&"), "")
ppage <- ifelse(is.numeric(page), paste0("page=", page, "&"), "")
ppageSize <- ifelse(is.numeric(pageSize),
paste0("pageSize=", pageSize, "&"), "")
if (page == 0 && pageSize == 1000) {
ppage <- ""
ppageSize <- ""
}
ptrials <- sub("[/?&]$",
"",
paste0(ptrials,
pprogramDbId,
plocationDbId,
pactive,
psortBy,
psortOrder,
ppageSize,
ppage))
try({
res <- brapiGET(url = ptrials, con = con)
res2 <- httr::content(x = res, as = "text", encoding = "UTF-8")
out <- NULL
if (rclass %in% c("list", "json")) {
out <- dat2tbl(res = res2, rclass = rclass)
}
if (rclass %in% c("data.frame", "tibble")) {
out <- trl2tbl2(res = res2, rclass = rclass)
}
class(out) <- c(class(out), "ba_trials")
show_metadata(res)
return(out)
})
}
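The query-string assembly above (optional parameters each contribute a `name=value&` fragment or an empty string, and the dangling separator is stripped at the end) can be exercised in isolation. A small sketch; the base URL below is a made-up placeholder, not a real endpoint:

```r
# Standalone sketch of the URL construction pattern used in ba_trials()
brp <- "https://example.org/brapi/v1/"         # hypothetical server root
ptrials <- paste0(brp, "trials?")
pprogramDbId <- paste0("programDbId=", "17", "&")
psortBy <- ""                                  # omitted parameters contribute ""
url <- sub("[/?&]$", "", paste0(ptrials, pprogramDbId, psortBy))
# url: "https://example.org/brapi/v1/trials?programDbId=17"
```

The `sub("[/?&]$", "", ...)` call removes exactly one trailing `&`, `?`, or `/`, so a request with no parameters collapses back to the bare `trials` path.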
| /R/ba_trials.R | no_license | ClayBirkett/brapi | R | false | false | 3,857 | r |
# Title: script to prep data
# Description: recode shot data for five Golden State Warriors players and combine into a single data file
# Input: per-player CSV files in ../data/
# Output: ../data/shots-data.csv and summary text files in ../output/
curry <- read.csv("../data/stephen-curry.csv",stringsAsFactors = FALSE)
thompson <- read.csv("../data/klay-thompson.csv",stringsAsFactors = FALSE)
green <- read.csv("../data/draymond-green.csv",stringsAsFactors = FALSE)
durant <- read.csv("../data/kevin-durant.csv",stringsAsFactors = FALSE)
iguodala <- read.csv("../data/andre-iguodala.csv",stringsAsFactors = FALSE)
curry$name <- "Stephen Curry"
thompson$name <- "Klay Thompson"
durant$name <- "Kevin Durant"
green$name <- "Draymond Green"
iguodala$name <- "Andre Iguodala"
curry$shot_made_flag[curry$shot_made_flag == 'y'] <- 'shot_yes'
curry$shot_made_flag[curry$shot_made_flag == 'n'] <- 'shot_no'
thompson$shot_made_flag[thompson$shot_made_flag == 'y'] <- 'shot_yes'
thompson$shot_made_flag[thompson$shot_made_flag == 'n'] <- 'shot_no'
durant$shot_made_flag[durant$shot_made_flag == 'y'] <- 'shot_yes'
durant$shot_made_flag[durant$shot_made_flag == 'n'] <- 'shot_no'
green$shot_made_flag[green$shot_made_flag == 'y'] <- 'shot_yes'
green$shot_made_flag[green$shot_made_flag == 'n'] <- 'shot_no'
iguodala$shot_made_flag[iguodala$shot_made_flag == 'y'] <- 'shot_yes'
iguodala$shot_made_flag[iguodala$shot_made_flag == 'n'] <- 'shot_no'
curry$minute <- curry$period * 12 - curry$minutes_remaining
thompson$minute <- thompson$period * 12 - thompson$minutes_remaining
durant$minute <- durant$period * 12 - durant$minutes_remaining
green$minute <- green$period * 12 - green$minutes_remaining
iguodala$minute <- iguodala$period * 12 - iguodala$minutes_remaining
sink(file = '../output/andre-iguodala-summary.txt')
summary(iguodala)
sink()
sink(file = '../output/draymond-green-summary.txt')
summary(green)
sink()
sink(file = '../output/stephen-curry-summary.txt')
summary(curry)
sink()
sink(file = '../output/klay-thompson-summary.txt')
summary(thompson)
sink()
sink(file = '../output/kevin-durant-summary.txt')
summary(durant)
sink()
df <- rbind(iguodala,green,curry,thompson,durant)
write.csv(
x = df, # R object to be exported
file = '../data/shots-data.csv' # file path
)
sink(file = '../output/shots-data-summary.txt')
summary(df)
sink()
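The five near-identical per-player blocks above could be collapsed into a single helper applied to each data frame. A sketch of that refactor; the function name is my own, not part of the original script:

```r
# Hypothetical helper consolidating the repeated recoding steps:
# add the player name, recode shot_made_flag, and compute the game minute
prep_player <- function(df, player_name) {
  df$name <- player_name
  df$shot_made_flag <- ifelse(df$shot_made_flag == "y", "shot_yes", "shot_no")
  df$minute <- df$period * 12 - df$minutes_remaining
  df
}
# e.g.
# curry <- prep_player(read.csv("../data/stephen-curry.csv",
#                               stringsAsFactors = FALSE), "Stephen Curry")
```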
| /workout01/code/make-shots-data-script.R | no_license | stat133-sp19/hw-stat133-merryle-wang | R | false | false | 2,189 | r |
p <- 1/3
#p(X=r) = pq^(r-1)
result <- p * (1-p)^4
cat(round(result,3))
# -----------------------------------------
#R function dgeom(x, prob) is the probability of x failures prior to the first success (note the difference) when the probability of success is prob.
p <- 1/3
x <- 5 - 1
result <- dgeom(x, p)
cat(round(result,3))
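Both approaches should agree, since `dgeom(r - 1, p)` counts the `r - 1` failures before the first success while the manual formula is P(X = r) = p(1-p)^(r-1). A quick cross-check, added here for illustration and not part of the original solution:

```r
# The manual geometric formula and dgeom() give the same probability
p <- 1/3
r <- 5
stopifnot(all.equal(p * (1 - p)^(r - 1), dgeom(r - 1, p)))
```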
| /Day 4: Geometric Distribution I.R | no_license | cc59chong/Hacker-10-Days-of-Statistics--R | R | false | false | 332 | r |
# Inputs:
# inputs/data_proc/Data_BART_demographics_20190630_raw.RData
# - df_nactives_raw
# - df_nretirees_raw
#
# - agecuts_actives
# - yoscuts_actives
#
# - agecuts_retirees
# Outputs:
# - imputed member data in tidy format
# - df_n_servRet_fillin
# - df_n_disbRet_occ_fillin
# - df_n_disbRet_nonocc_fillin
# - df_n_beneficiaries_fillin
#*******************************************************************************
# ## Global settings ####
#*******************************************************************************
dir_data <- "inputs/data_proc/"
#*******************************************************************************
# ## Loading data ####
#*******************************************************************************
load(paste0(dir_data, "Data_BART_demographics_20190630_raw.RData"))
#*******************************************************************************
# ## Local tools ####
#*******************************************************************************
# Interpolation of actives
fillin.actives.spreadyos.splineage <- function(lactives) {
# salary:
# first spread uniformly within age.cell-yos.cell group (same salary for all)
# then for every yos, estimate salary for each age using a spline - adjust endpoints first for plausibility
# finally, adjust resulting salary within each age.cell-yos.cell proportionately to hit total payroll values from grouped data
# then add ea to the data
# nactives: spread uniformly within age.cell-yos.cell group (same nactives for all), then add ea to the data
adf <- lactives$actives.yos
agecuts <- lactives$agecuts
yoscuts <- lactives$yoscuts
#eacuts <- lactives$eacuts
minage <- min(agecuts$agelb)
maxage <- max(agecuts$ageub)
minyos <- min(yoscuts$yoslb)
maxyos <- max(yoscuts$yosub)
planname <- paste0(adf$planname[1])
AV_date <- adf$AV_date[1]
# adf %>% select(age, ea, salary) %>% spread(ea, salary)
# adf %>% select(age, ea, nactives) %>% spread(ea, nactives)
# create a master grouped data frame
adf.g <- adf %>% select(-planname, -age, -yos, nactives.cell=nactives, salary.cell=salary) %>%
mutate(pay.cell=nactives.cell * salary.cell) %>%
mutate(ageidx = findInterval(age.cell, agecuts$agelb),
age.lb = agecuts$agelb[ageidx],
age.ub = agecuts$ageub[ageidx],
yosidx = findInterval(yos.cell, yoscuts$yoslb),
yos.lb = yoscuts$yoslb[yosidx],
yos.ub = yoscuts$yosub[yosidx]) %>%
select(age.cell, yos.cell, age.lb, age.ub, yos.lb, yos.ub, nactives.cell, salary.cell, pay.cell)
# expand the grouped data frame to all allowable age-yos combinations
xpnd <- function(df) {
# expand to all age-yos combinations but only keep those where ea>=20 or, if there are no such records,
# keep the records with max ea
df2 <- expand.grid(age=df$age.lb:df$age.ub, yos=df$yos.lb:df$yos.ub) %>%
mutate(ea=age - yos) %>%
filter((ea >= 20) | (ea<20 & ea==max(ea))) %>%
select(-ea)
return(df2)
}
adf.x <- adf.g %>% rowwise() %>%
do(cbind(., xpnd(.))) %>%
ungroup %>% # get rid of rowwise
group_by(age.cell, yos.cell) %>%
mutate(n.cell=n()) %>%
select(age, yos, everything()) %>%
arrange(age, yos)
# work with the expanded data
# we have to anchor the endpoints with reasonable values BEFORE computing the spline
adjustends <- function(age, salary) {
# the basic idea is that if an endpoint is NA, insert a plausible value
# simple rule: if spline first or last value falls within +/ 50% of the nearest nonNA value, use spline estimate
# otherwise use the capped value
firstnonna <- salary[which.min(is.na(salary))]
lastnonna <- rev(salary)[which.min(is.na(rev(salary)))]
bound <- .5
firstrange <- c(firstnonna * bound, firstnonna * (1 + bound))
lastrange <- c(lastnonna * bound, lastnonna * (1 + bound))
cap <- function(sal, range) {
cappedval <- max(sal, range[1])
cappedval <- min(cappedval, range[2])
return(cappedval)
}
salary.est <- spline(age, salary, xout=age)$y # what does spline think naively?
salary.adjusted <- salary
if(is.na(salary[1])) salary.adjusted[1] <- cap(salary.est[1], firstrange)
ilast <- length(salary)
if(is.na(salary[ilast])) salary.adjusted[ilast] <- cap(salary.est[ilast], lastrange)
return(salary.adjusted)
}
# test out adjustends
# fs <- function(age, sal) return(spline(age, sal, xout=age)$y) # spline doesn't seem to work with dplyr if not in function
# # various salaries to try out
# salary <- seq(20, 50, length.out = 10)
# salary <- c(20, NA, 30, NA, 40, NA, 50, NA, NA, 80)
# salary <- c(20, NA, 30, NA, 40, NA, 50, NA, NA, 30)
# salary <- c(NA, NA, 30, NA, 40, NA, 50, NA, NA, 30)
# salary <- c(NA, NA, 30, NA, 40, NA, 50, NA, NA, NA)
# salary <- c(NA, 10, 30, NA, 40, NA, 50, 80, NA, NA)
# age <- 21:30
# d <- data_frame(age, salary, saladj=adjustends(age, salary)) %>%
# mutate(sal.spline=fs(age, salary),
# saladj.spline=fs(age, saladj))
# d
# qplot(age, value, data=gather(d, variable, value, -age), colour=variable, geom=c("point", "line")) + scale_x_continuous(breaks=0:100) + geom_hline(y=0)
spline.y2 <- function(age, salary, safesalary) {
# safesalary is what we use if salary has no data
if(all(is.na(salary))) {
print("AllNA")
salary <- safesalary
}
salary.adjusted <- adjustends(age, salary)
sp.out <- spline(age, salary.adjusted, xout=age)
salout <- sp.out$y
return(salout)
}
adf.x3 <- adf.x %>% ungroup %>% # MUST be ungrouped or ifelse won't work if there is only one rec in a group
mutate(nactives=nactives.cell / n.cell, # always spread nactives uniformly
salary.group=ifelse(age==age.cell & yos==yos.cell, salary.cell, NA),
salary.group=ifelse(salary.group==0, NA, salary.group),
salary.agecell=ifelse(age==age.cell, salary.cell, NA)) %>% # Yimeng's first step
group_by(yos) %>%
arrange(age) %>%
mutate(salary.spline.adjep=spline.y2(age, salary.agecell, salary.cell)) %>% # Yimeng's 2nd step with endpoint adjustment
group_by(age.cell, yos.cell) %>%
mutate(planname=planname,
AV_date = AV_date,
pay.unadj=sum(salary.spline.adjep * nactives),
adjust=pay.cell / pay.unadj,
salary.final=salary.spline.adjep * adjust,
pay.adj=sum(salary.final * nactives),
ea=age - yos
#ea.cell=eacuts$stub[findInterval(ea, eacuts$lb)]
)
return(adf.x3)
}
# data structure of the input list of fillin.actives.spreadyos.splineage
# list name: lactives
# $agecuts:
# - age.cell
# - agelb
# - ageub
# $yoscuts:
# - yos.cell
# - yoslb
# - yosub
# $actives.yos
# - age.cell
# - yos.cell
# - age
# - yos
# - planname
# - nactives
# - salary
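A minimal input list matching the structure documented above can be built directly. The values below are invented for illustration; real inputs come from the AV extraction step upstream:

```r
# Toy lactives object shaped for fillin.actives.spreadyos.splineage()
# (all numbers are made up)
toy_lactives <- list(
  agecuts = data.frame(age.cell = c(22, 27), agelb = c(20, 25), ageub = c(24, 29)),
  yoscuts = data.frame(yos.cell = c(2, 7),  yoslb = c(0, 5),  yosub = c(4, 9)),
  actives.yos = data.frame(
    AV_date  = as.Date("2019-06-30"),
    planname = "toy",
    age.cell = c(22, 27), yos.cell = c(2, 7),
    age = c(22, 27), yos = c(2, 7),
    nactives = c(100, 80), salary = c(50000, 65000)
  )
)
```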
# Interpolation of retirees
fillin.retirees.splineage <- function(list_data) {
# list_data <- lretirees
rdf <- select(list_data$data, planname, age, N, V) # keep only the vars we want
agecuts <- list_data$agecuts
planname <- paste0(rdf$planname[1], "")
# name_N <- list_data$varNames["name_N"]
# name_V <- list_data$varNames["name_V"]
AV_date <- list_data$data$AV_date[1]
name_N <- list_data$varNames$name_N
name_V <- list_data$varNames$name_V
# add group ranges to the retirees data frame
combo <- rdf %>%
mutate(totben=N * V) %>%
mutate(ageidx=findInterval(age, agecuts$agelb),
age.lb=agecuts$agelb[ageidx],
age.ub=agecuts$ageub[ageidx]) %>%
arrange(age)
# get avg benefits by age, via spline
avgben <- splong(select(combo, age, V) %>% as.data.frame, "age", min(combo$age.lb):max(combo$age.ub))
# force benefit to be non-negative DJB added 10/30/2015
avgben <- avgben %>% mutate(V=ifelse(V<0, 0, V))
guessdf <- data.frame(age=min(combo$age.lb):max(combo$age.ub)) %>%
mutate(ageidx=findInterval(age, agecuts$agelb),
age.cell=combo$age[match(ageidx, combo$ageidx)],
N.cell=combo$N[match(ageidx, combo$ageidx)],
V.cell=combo$V[match(ageidx, combo$ageidx)]) %>%
group_by(age.cell) %>%
mutate(n.cell=n(),
N=N.cell / n.cell, # spread nretirees evenly
adjV=avgben$V[match(age, avgben$age)], # get the spline-based avg benefit
adjtotben=N * adjV)
# refine the guess by adjusting ensure that we hit the right total benefits in each group
guessdf2 <- guessdf %>% group_by(age.cell) %>%
mutate(adjust=mean(N.cell * V.cell) / sum(adjtotben),
V=adjV*adjust,
totben=N * V)
rdf.fillin <- guessdf2 %>%
mutate(AV_date = AV_date,
planname = planname) %>%
mutate(across(all_of(c("N", "V")), na2zero)) %>%
select(AV_date, planname, age.cell, age, N, V) %>%
ungroup
#plyr::rename(c("N" = list_data$varNames["name_N"])))
names(rdf.fillin)[names(rdf.fillin) == "N"] <- name_N
names(rdf.fillin)[names(rdf.fillin) == "V"] <- name_V
return(rdf.fillin)
}
# data structure of the input list of fillin.retirees.splineage
# list name: ldata
# $agecuts:
# - age.cell
# - agelb
# - ageub
# $data
# - age.cell
# - age
# - planname
# - N
# - V
# $varNames
# - name_N
# - name_V
# get_agecuts <- function(df){
# df %>%
# mutate(age.cell = (age_lb + age_ub)/ 2) %>%
# select(age.cell, agelb = age_lb, ageub = age_ub)
# }
#*******************************************************************************
# Initial actives ####
#*******************************************************************************
# The output data frame includes active members of all tiers.
# df_nactives_raw
# agecuts_actives
# yoscuts_actives
# Prepare data for the interpolation function
make_lactives <- function(df, agecuts, yoscuts){
lactives <- list(
agecuts = agecuts,
yoscuts = yoscuts,
actives.yos =
df %>%
select(
AV_date,
planname = grp,
age.cell,
yos.cell,
nactives,
salary
) %>%
mutate(age = age.cell, yos = yos.cell)
)
}
fillin_actives <- function(df){
fillin.actives.spreadyos.splineage(df) %>%
ungroup %>%
select(AV_date,
grp = planname,
age,
yos,
ea,
nactives,
salary = salary.final) %>%
#mutate_at(vars(nactives, salary), funs(na2zero))
mutate(across(all_of(c("nactives", "salary")), na2zero)) %>%
arrange(ea, yos)
}
## Try using a single group
# lactives <- filter(df_nactives_raw, grp == "inds") %>% make_lactives(agecuts_actives, yoscuts_actives)
# lactives
#
# nactives_fillin <- fillin_actives(lactives)
# nactives_fillin
## looping through all groups
df_nactives_fillin <-
df_nactives_raw %>%
split(.$grp) %>%
map( ~ make_lactives(.x, agecuts_actives, yoscuts_actives) %>% fillin_actives) %>%
bind_rows()
# Examine results
# actives_fillin_spread <-
# df_nactives_fillin %>%
# filter(grp == "inds") %>%
# select(ea, age, nactives) %>%
# spread(age, nactives)
#
# salary_fillin_spread <-
# df_nactives_fillin %>%
# filter(grp == "inds") %>%
# select(ea, age, salary) %>%
# spread(age, salary)
# actives_fillin %>%
# summarise(avg.sal = sum(nactives * salary) / sum(nactives))
#*******************************************************************************
# Initial retirees and beneficiaries ####
#*******************************************************************************
## Local helper functions
make_lretirees <- function(df, agecuts, ben_type){
# data structure of the input list of fillin.actives.spreadyos.splineag
# list name: ldata
# $agecuts:
# - age.cell
# - agelb
# - ageub
# $data
# - age.cell
# - age
# - planname
# - N
# - V
# $varNames
# - name_N
# - name_V
name_V <- paste0("benefit_", ben_type)
name_N <- paste0("n_", ben_type)
data <- list(
agecuts = agecuts,
data =
df %>%
select(
AV_date,
planname = grp,
age.cell,
!!name_N,
!!name_V
) %>%
rename(N = !!name_N,
V = !!name_V) %>%
mutate(age = age.cell),
varNames = list(name_N = name_N,
name_V = name_V)
)
}
fillin_retirees <- function(df){
fillin.retirees.splineage(df) %>%
ungroup %>%
select(AV_date,
grp = planname,
age,
everything()) %>%
# mutate_at(vars(nactives, salary), funs(na2zero))
# mutate(across(all_of(c("nactives", "salary")), na2zero)) %>%
arrange(age)
}
## Try using a single group
lretirees <- filter(df_nretirees_raw, grp == "misc") %>% make_lretirees(agecuts_retirees, "beneficiaries")
lretirees %>% fillin_retirees()
## Loop through all groups for each type of benefit
df_n_servRet_fillin <-
df_nretirees_raw %>%
split(.$grp) %>%
map( ~ make_lretirees(.x, agecuts_retirees, "servRet") %>% fillin_retirees) %>%
bind_rows()
df_n_disbRet_occ_fillin <-
df_nretirees_raw %>%
split(.$grp) %>%
map( ~ make_lretirees(.x, agecuts_retirees, "disbRet_occ") %>% fillin_retirees) %>%
bind_rows()
df_n_disbRet_nonocc_fillin <-
df_nretirees_raw %>%
split(.$grp) %>%
map( ~ make_lretirees(.x, agecuts_retirees, "disbRet_nonocc") %>% fillin_retirees) %>%
bind_rows()
df_n_beneficiaries_fillin <-
df_nretirees_raw %>%
split(.$grp) %>%
map( ~ make_lretirees(.x, agecuts_retirees, "beneficiaries") %>% fillin_retirees) %>%
bind_rows()
df_n_deathBen_occ_fillin <-
df_nretirees_raw %>%
split(.$grp) %>%
map( ~ make_lretirees(.x, agecuts_retirees, "death_occ") %>% fillin_retirees) %>%
bind_rows()
df_n_deathBen_nonocc_fillin <-
df_nretirees_raw %>%
split(.$grp) %>%
map( ~ make_lretirees(.x, agecuts_retirees, "death_nonocc") %>% fillin_retirees) %>%
bind_rows()
df_n_servRet_fillin
df_n_disbRet_occ_fillin
df_n_disbRet_nonocc_fillin
df_n_beneficiaries_fillin
df_n_deathBen_occ_fillin
df_n_deathBen_nonocc_fillin
#*******************************************************************************
# Review and save results ####
#*******************************************************************************
# df_nactives_fillin
# df_n_servRet_fillin
# df_n_disbRet_occ_fillin
# df_n_disbRet_nonocc_fillin
# df_n_beneficiaries_fillin
save(
df_nactives_fillin,
df_n_servRet_fillin,
df_n_disbRet_occ_fillin,
df_n_disbRet_nonocc_fillin,
df_n_beneficiaries_fillin,
df_n_deathBen_occ_fillin,
df_n_deathBen_nonocc_fillin,
file = paste0(dir_data, "Data_BART_demographics_20190630_fillin.RData")
)
| /inputs/Data_imputation_demographics_20190630.R | no_license | yimengyin16/model_BART | R | false | false | 14,637 | r |
# Inputs:
# inputs/data_proc/Data_BART_demographics_20190630_raw.RData
# - df_nactives_raw
# - df_nretirees_raw
#
# - agecuts_actives
# - yoscuts_actives
#
# - agecuts_retirees
# Outputs:
# - imputed member data in tidy format
# - df_n_servRet_fillin
# - df_n_disbRet_occ_fillin
# - df_n_disbRet_nonocc_fillin
# - df_n_beneficiaries_fillin
#*******************************************************************************
# ## Global settings ####
#*******************************************************************************
dir_data <- "inputs/data_proc/"
#*******************************************************************************
# ## Loading data ####
#*******************************************************************************
load(paste0(dir_data, "Data_BART_demographics_20190630_raw.RData"))
#*******************************************************************************
# ## Local tools ####
#*******************************************************************************
# Interpolation of actives
fillin.actives.spreadyos.splineage <- function(lactives) {
# salary:
# first spread uniformly within age.cell-yos.cell group (same salary for all)
# then for every yos, estimate salary for each age using a spline - adjust endpoints first for plausibility
# finally, adjust resulting salary within each age.cell-yos.cell proportionately to hit total payroll values from grouped data
# then add ea to the data
# nactives: spread uniformly within age.cell-yos.cell group (same nactives for all), then add ea to the data
lactives
adf <- lactives$actives.yos
agecuts <- lactives$agecuts
yoscuts <- lactives$yoscuts
#eacuts <- lactives$eacuts
minage <- min(agecuts$agelb)
maxage <- max(agecuts$ageub)
minyos <- min(yoscuts$yoslb)
maxyos <- max(yoscuts$yosub)
planname <- paste0(adf$planname[1])
AV_date <- adf$AV_date[1]
# adf %>% select(age, ea, salary) %>% spread(ea, salary)
# adf %>% select(age, ea, nactives) %>% spread(ea, nactives)
# create a master grouped data frame
adf.g <- adf %>% select(-planname, -age, -yos, nactives.cell=nactives, salary.cell=salary) %>%
mutate(pay.cell=nactives.cell * salary.cell) %>%
mutate(ageidx = findInterval(age.cell, agecuts$agelb),
age.lb = agecuts$agelb[ageidx],
age.ub = agecuts$ageub[ageidx],
yosidx = findInterval(yos.cell, yoscuts$yoslb),
yos.lb = yoscuts$yoslb[yosidx],
yos.ub = yoscuts$yosub[yosidx]) %>%
select(age.cell, yos.cell, age.lb, age.ub, yos.lb, yos.ub, nactives.cell, salary.cell, pay.cell)
adf.g
# expand the grouped data frame to all allowable age-yos combinations
xpnd <- function(df) {
# expand to all age-yos combinations but only keep those where ea>=15 or, if there are no such records,
# keep the recrods with max ea
df2 <- expand.grid(age=df$age.lb:df$age.ub, yos=df$yos.lb:df$yos.ub) %>%
mutate(ea=age - yos) %>%
filter((ea >= 20) | (ea<20 & ea==max(ea))) %>%
select(-ea)
return(df2)
}
adf.x <- adf.g %>% rowwise() %>%
do(cbind(., xpnd(.))) %>%
ungroup %>% # get rid of rowwise
group_by(age.cell, yos.cell) %>%
mutate(n.cell=n()) %>%
select(age, yos, everything()) %>%
arrange(age, yos)
# work with the expanded data
# we have to anchor the endpoints with reasonable values BEFORE computing the spline
adjustends <- function(age, salary) {
# the basic idea is that if an endpoint is NA, insert a plausible value
# simple rule: if spline first or last value falls within +/ 50% of the nearest nonNA value, use spline estimate
# otherwise use the capped value
firstnonna <- salary[which.min(is.na(salary))]
lastnonna <- rev(salary)[which.min(is.na(rev(salary)))]
bound <- .5
firstrange <- c(firstnonna * bound, firstnonna * (1 + bound))
lastrange <- c(lastnonna * bound, lastnonna * (1 + bound))
cap <- function(sal, range) {
cappedval <- max(sal, range[1])
cappedval <- min(cappedval, range[2])
return(cappedval)
}
salary.est <- spline(age, salary, xout=age)$y # what does spline think naively?
salary.adjusted <- salary
if(is.na(salary[1])) salary.adjusted[1] <- cap(salary.est[1], firstrange)
ilast <- length(salary)
  if(is.na(salary[ilast])) salary.adjusted[ilast] <- cap(salary.est[ilast], lastrange)
return(salary.adjusted)
}
# test out adjustends
# fs <- function(age, sal) return(spline(age, sal, xout=age)$y) # spline doesn't seem to work with dplyr if not in function
# # various salaries to try out
# salary <- seq(20, 50, length.out = 10)
# salary <- c(20, NA, 30, NA, 40, NA, 50, NA, NA, 80)
# salary <- c(20, NA, 30, NA, 40, NA, 50, NA, NA, 30)
# salary <- c(NA, NA, 30, NA, 40, NA, 50, NA, NA, 30)
# salary <- c(NA, NA, 30, NA, 40, NA, 50, NA, NA, NA)
# salary <- c(NA, 10, 30, NA, 40, NA, 50, 80, NA, NA)
# age <- 21:30
# d <- data_frame(age, salary, saladj=adjustends(age, salary)) %>%
# mutate(sal.spline=fs(age, salary),
# saladj.spline=fs(age, saladj))
# d
# qplot(age, value, data=gather(d, variable, value, -age), colour=variable, geom=c("point", "line")) + scale_x_continuous(breaks=0:100) + geom_hline(y=0)
spline.y2 <- function(age, salary, safesalary) {
# safesalary is what we use if salary has no data
if(all(is.na(salary))) {
print("AllNA")
salary <- safesalary
}
salary.adjusted <- adjustends(age, salary)
sp.out <- spline(age, salary.adjusted, xout=age)
salout <- sp.out$y
return(salout)
}
adf.x3 <- adf.x %>% ungroup %>% # MUST be ungrouped or ifelse won't work if there is only one rec in a group
mutate(nactives=nactives.cell / n.cell, # always spread nactives uniformly
salary.group=ifelse(age==age.cell & yos==yos.cell, salary.cell, NA),
salary.group=ifelse(salary.group==0, NA, salary.group),
salary.agecell=ifelse(age==age.cell, salary.cell, NA)) %>% # Yimeng's first step
group_by(yos) %>%
arrange(age) %>%
mutate(salary.spline.adjep=spline.y2(age, salary.agecell, salary.cell)) %>% # Yimeng's 2nd step with endpoint adjustment
group_by(age.cell, yos.cell) %>%
mutate(planname=planname,
AV_date = AV_date,
pay.unadj=sum(salary.spline.adjep * nactives),
adjust=pay.cell / pay.unadj,
salary.final=salary.spline.adjep * adjust,
pay.adj=sum(salary.final * nactives),
ea=age - yos
#ea.cell=eacuts$stub[findInterval(ea, eacuts$lb)]
)
return(adf.x3)
}
# data structure of the input list of fillin.actives.spreadyos.splineage
# list name: lactives
# $agecuts:
# - age.cell
# - agelb
# - ageub
# $yoscuts:
# - yos.cell
# - yoslb
# - yosub
# $actives.yos
# - age.cell
# - yos.cell
# - age
# - yos
# - planname
# - nactives
# - salary
# Interpolation of retirees
fillin.retirees.splineage <- function(list_data) {
# list_data <- lretirees
rdf <- select(list_data$data, planname, age, N, V) # keep only the vars we want
agecuts <- list_data$agecuts
planname <- paste0(rdf$planname[1], "")
# name_N <- list_data$varNames["name_N"]
# name_V <- list_data$varNames["name_V"]
AV_date <- list_data$data$AV_date[1]
name_N <- list_data$varNames$name_N
name_V <- list_data$varNames$name_V
# add group ranges to the retirees data frame
combo <- rdf %>%
mutate(totben=N * V) %>%
mutate(ageidx=findInterval(age, agecuts$agelb),
age.lb=agecuts$agelb[ageidx],
age.ub=agecuts$ageub[ageidx]) %>%
arrange(age)
# get avg benefits by age, via spline
avgben <- splong(select(combo, age, V) %>% as.data.frame, "age", min(combo$age.lb):max(combo$age.ub))
# force benefit to be non-negative DJB added 10/30/2015
avgben <- avgben %>% mutate(V=ifelse(V<0, 0, V))
guessdf <- data.frame(age=min(combo$age.lb):max(combo$age.ub)) %>%
mutate(ageidx=findInterval(age, agecuts$agelb),
age.cell=combo$age[match(ageidx, combo$ageidx)],
N.cell=combo$N[match(ageidx, combo$ageidx)],
V.cell=combo$V[match(ageidx, combo$ageidx)]) %>%
group_by(age.cell) %>%
mutate(n.cell=n(),
N=N.cell / n.cell, # spread nretirees evenly
adjV=avgben$V[match(age, avgben$age)], # get the spline-based avg benefit
adjtotben=N * adjV)
  # refine the guess by adjusting to ensure that we hit the right total benefits in each group
guessdf2 <- guessdf %>% group_by(age.cell) %>%
mutate(adjust=mean(N.cell * V.cell) / sum(adjtotben),
V=adjV*adjust,
totben=N * V)
rdf.fillin <- guessdf2 %>%
mutate(AV_date = AV_date,
planname = planname) %>%
mutate(across(all_of(c("N", "V")), na2zero)) %>%
select(AV_date, planname, age.cell, age, N, V) %>%
ungroup
#plyr::rename(c("N" = list_data$varNames["name_N"])))
names(rdf.fillin)[names(rdf.fillin) == "N"] <- name_N
names(rdf.fillin)[names(rdf.fillin) == "V"] <- name_V
return(rdf.fillin)
}
# data structure of the input list of fillin.retirees.splineage
# list name: ldata
# $agecuts:
# - age.cell
# - agelb
# - ageub
# $data
# - age.cell
# - age
# - planname
# - N
# - V
# $varNames
# - name_N
# - name_V
# get_agecuts <- function(df){
# df %>%
# mutate(age.cell = (age_lb + age_ub)/ 2) %>%
# select(age.cell, agelb = age_lb, ageub = age_ub)
# }
#*******************************************************************************
# Initial actives ####
#*******************************************************************************
# The output data frame includes active members of all tiers.
# df_nactives_raw
# agecuts_actives
# yoscuts_actives
# Prepare data for the interpolation function
make_lactives <- function(df, agecuts, yoscuts){
lactives <- list(
agecuts = agecuts,
yoscuts = yoscuts,
actives.yos =
df %>%
select(
AV_date,
planname = grp,
age.cell,
yos.cell,
nactives,
        salary
) %>%
mutate(age = age.cell, yos = yos.cell)
)
  lactives
}
fillin_actives <- function(df){
fillin.actives.spreadyos.splineage(df) %>%
ungroup %>%
select(AV_date,
grp = planname,
age,
yos,
ea,
nactives,
salary = salary.final) %>%
#mutate_at(vars(nactives, salary), funs(na2zero))
mutate(across(all_of(c("nactives", "salary")), na2zero)) %>%
arrange(ea, yos)
}
## Try using a single group
# lactives <- filter(df_nactives_raw, grp == "inds") %>% make_lactives(agecuts_actives, yoscuts_actives)
# lactives
#
# nactives_fillin <- fillin_actives(lactives)
# nactives_fillin
## looping through all groups
df_nactives_fillin <-
df_nactives_raw %>%
split(.$grp) %>%
map( ~ make_lactives(.x, agecuts_actives, yoscuts_actives) %>% fillin_actives) %>%
bind_rows()
# Examine results
# actives_fillin_spread <-
# df_nactives_fillin %>%
# filter(grp == "inds") %>%
# select(ea, age, nactives) %>%
# spread(age, nactives)
#
# salary_fillin_spread <-
# df_nactives_fillin %>%
# filter(grp == "inds") %>%
# select(ea, age, salary) %>%
# spread(age, salary)
# actives_fillin %>%
# summarise(avg.sal = sum(nactives * salary) / sum(nactives))
#*******************************************************************************
# Initial retirees and beneficiaries ####
#*******************************************************************************
## Local helper functions
make_lretirees <- function(df, agecuts, ben_type){
  # data structure of the input list of fillin.retirees.splineage
# list name: ldata
# $agecuts:
# - age.cell
# - agelb
# - ageub
# $data
# - age.cell
# - age
# - planname
# - N
# - V
# $varNames
# - name_N
# - name_V
name_V <- paste0("benefit_", ben_type)
name_N <- paste0("n_", ben_type)
data <- list(
agecuts = agecuts,
data =
df %>%
select(
AV_date,
planname = grp,
age.cell,
!!name_N,
!!name_V
) %>%
rename(N = !!name_N,
V = !!name_V) %>%
mutate(age = age.cell),
varNames = list(name_N = name_N,
name_V = name_V)
)
  data
}
fillin_retirees <- function(df){
fillin.retirees.splineage(df) %>%
ungroup %>%
select(AV_date,
grp = planname,
age,
everything()) %>%
# mutate_at(vars(nactives, salary), funs(na2zero))
# mutate(across(all_of(c("nactives", "salary")), na2zero)) %>%
arrange(age)
}
## Try using a single group
lretirees <- filter(df_nretirees_raw, grp == "misc") %>% make_lretirees(agecuts_retirees, "beneficiaries")
lretirees %>% fillin_retirees()
## Loop through all groups for each type of benefit
df_n_servRet_fillin <-
df_nretirees_raw %>%
split(.$grp) %>%
map( ~ make_lretirees(.x, agecuts_retirees, "servRet") %>% fillin_retirees) %>%
bind_rows()
df_n_disbRet_occ_fillin <-
df_nretirees_raw %>%
split(.$grp) %>%
map( ~ make_lretirees(.x, agecuts_retirees, "disbRet_occ") %>% fillin_retirees) %>%
bind_rows()
df_n_disbRet_nonocc_fillin <-
df_nretirees_raw %>%
split(.$grp) %>%
map( ~ make_lretirees(.x, agecuts_retirees, "disbRet_nonocc") %>% fillin_retirees) %>%
bind_rows()
df_n_beneficiaries_fillin <-
df_nretirees_raw %>%
split(.$grp) %>%
map( ~ make_lretirees(.x, agecuts_retirees, "beneficiaries") %>% fillin_retirees) %>%
bind_rows()
df_n_deathBen_occ_fillin <-
df_nretirees_raw %>%
split(.$grp) %>%
map( ~ make_lretirees(.x, agecuts_retirees, "death_occ") %>% fillin_retirees) %>%
bind_rows()
df_n_deathBen_nonocc_fillin <-
df_nretirees_raw %>%
split(.$grp) %>%
map( ~ make_lretirees(.x, agecuts_retirees, "death_nonocc") %>% fillin_retirees) %>%
bind_rows()
df_n_servRet_fillin
df_n_disbRet_occ_fillin
df_n_disbRet_nonocc_fillin
df_n_beneficiaries_fillin
df_n_deathBen_occ_fillin
df_n_deathBen_nonocc_fillin
#*******************************************************************************
# Review and save results ####
#*******************************************************************************
# df_nactives_fillin
# df_n_servRet_fillin
# df_n_disbRet_occ_fillin
# df_n_disbRet_nonocc_fillin
# df_n_beneficiaries_fillin
save(
df_nactives_fillin,
df_n_servRet_fillin,
df_n_disbRet_occ_fillin,
df_n_disbRet_nonocc_fillin,
df_n_beneficiaries_fillin,
df_n_deathBen_occ_fillin,
df_n_deathBen_nonocc_fillin,
file = paste0(dir_data, "Data_BART_demographics_20190630_fillin.RData")
)
|
\name{locfdrFit}
\alias{locfdrFit}
\title{Modified version of Efron's locfdr}
\usage{
locfdrFit(zz, bre = 120, df = 7, pct = 0, pct0 = 1/4,
nulltype = 1, type = 0, plot = 1, mult, mlests,
main = " ", sw = 0, verbose = T)
}
\arguments{
\item{zz}{A vector of summary statistics, one for each
case under simultaneous consideration. The calculations
assume a large number of cases, say length(zz) exceeding
200. Results may be improved by transforming zz so that
its elements are theoretically distributed as N(0, 1)
under the null hypothesis. See the \code{locfdr} vignette
for tips on creating \code{zz}.}
\item{bre}{Number of breaks in the discretization of the
z-score axis, or a vector of breakpoints fully describing
the discretization. If length(zz) is small, such as when
the number of cases is less than about 1000, set
\code{bre} to a number lower than the default of 120. The
tornado package keeps this at its default.}
\item{df}{Degrees of freedom for fitting the estimated
density f(z). The tornado package keeps this at its
default.}
\item{pct}{Excluded tail proportions of zz's when fitting
f(z). \code{pct=0} includes full range of zz's.
\code{pct} can also be a 2-vector, describing the fitting
range. The tornado package keeps this at its default.}
\item{pct0}{Proportion of the zz distribution used in
fitting the null density f0(z) by central matching. If a
2-vector, e.g. \code{pct0=c(0.25,0.60)}, the range
\code{[pct0[1], pct0[2]]} is used. If a scalar,
\code{[pct0, 1-pct0]} is used. The tornado package keeps
this at its default.}
\item{nulltype}{Type of null hypothesis assumed in
estimating f0(z), for use in the fdr calculations. 0 is
the theoretical null N(0, 1), 1 is maximum likelihood
estimation, 2 is central matching estimation, 3 is a
split normal version of 2. The tornado package fixes this
at 1.}
\item{type}{Type of fitting used for f; 0 is a natural
spline, 1 is a polynomial, in either case with degrees of
freedom \code{df} [so total degrees of freedom including
the intercept is \code{df+1}.] The tornado package fixes
this at 0.}
\item{plot}{Plots desired. 0 gives no plots. 1 gives
single plot showing the histogram of zz and fitted
densities f and p0 f0. 2 also gives plot of fdr, and the
right and left tail area Fdr curves. 3 gives instead the
f1 cdf of the estimated fdr curve; plot=4 gives all three
plots. The tornado package allows choices 0 and 1;
equivalent to \code{plots = F} and \code{plots = T} in
\code{getParams}.}
\item{mult}{Optional scalar multiple (or vector of
multiples) of the sample size for calculation of the
corresponding hypothetical Efdr value(s). This is not
used in the tornado package.}
\item{mlests}{Optional vector of initial values for
(delta0, sigma0) in the maximum likelihood iteration. The
tornado package includes these values in an updated run
of \code{locfdrFit} if they are suggested via warning in
the first run.}
\item{main}{Main heading for the histogram plot when
\code{plot}>0.}
\item{sw}{Determines the type of output desired. 2 gives
a list consisting of the last 5 values listed under Value
below. 3 gives the square matrix of dimension
\code{bre}-1 representing the influence function of
log(fdr). Any other value of sw returns a list consisting
of the first 5 (6 if \code{mult} is supplied) values
listed below. The tornado package fixes this at 0.}
\item{verbose}{if TRUE, various messages are printed
onscreen during \code{getParams}.}
}
\value{
See the \code{locfdr} manual for returned values.
\code{locfdrFit} returns the following additional
elements: \item{yt }{bin heights} \item{mlest.lo }{if a
warning is issued that \code{mlest} should be used in a
re-run of \code{locfdrFit}, the suggested first element
of \code{mlest}.} \item{mlest.hi }{if a warning is issued
that \code{mlest} should be used in a re-run of
\code{locfdrFit}, the suggested second element of
\code{mlest}.} \item{needsfix }{0 if there is no warning
to re-run with the \code{mlest} parameters, 1 otherwise}
\item{nulldens }{a rough estimate of y-axis values for
f0(x)} \item{fulldens }{a rough estimate of y-axis values
for f(x)}
}
\description{
Compute local false discovery rates, following the
definitions and description in references listed below.
Exactly the same as \code{locfdr}, but returns extra
information.
}
\details{
Generally not used directly in the tornado package, but
is a workhorse for \code{getParams}.
}
\author{
Bradley Efron, slight modifications by Alyssa Frazee
}
\references{
http://cran.r-project.org/web/packages/locfdr/locfdr.pdf
}
\seealso{
\code{getParams}
}
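\examples{
% A hypothetical usage sketch (not from the original file); it assumes
% \code{locfdrFit} follows \code{locfdr}'s calling convention described above.
\dontrun{
z <- rnorm(2000)  # simulated z-scores, N(0, 1) under the null
fit <- locfdrFit(z, nulltype = 1, plot = 0)
fit$needsfix      # 1 if a re-run with suggested starting values is advised
}
}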
| /man/locfdrFit.Rd | no_license | BioinformaticsArchive/derfinder | R | false | false | 4,733 | rd |
# compute the expected number of cache misses
# based on the derived formula
expected_cm = function(n, s, d) {
index = seq(1, n);
  prob_arr = 1 - (1-s)^512 * (1 - (d-s)^index)^512
return (sum(prob_arr))
}
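# Sanity check on the formula (hypothetical parameter values, not from the
# original analysis): the expected miss count is bounded by n and grows with
# the bit density d.
stopifnot(expected_cm(16, 0.01, 0.02) <= expected_cm(16, 0.01, 0.10),
          expected_cm(16, 0.01, 0.10) <= 16)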
# %row% is an array with 0 and non-zero entries
# %d% is the bit density - the number of bits set to 1 because of noise and signals in the entire array
make_noise = function(row, d) {
r_row = row
n = length(r_row)
s = sum(r_row != 0)/n
thold = d-s
empty_idx = which(r_row == 0)
noise = as.numeric(runif(length(empty_idx)) < thold)
r_row[empty_idx] = noise
return (r_row)
}
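# Quick check of make_noise (hypothetical 4-bit row): existing signal bits are
# never overwritten; only empty slots may receive noise, so the result stays 0/1.
set.seed(42)
out = make_noise(c(1, 0, 0, 1), d = 0.75)
stopifnot(all(out[c(1, 4)] == 1), all(out %in% c(0, 1)))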
# term table per single posting
make_term_table = function(K, N, s, d){
# K - number of rows
# N - number of documents
# s - signal (local signal)
# d - bit density (signal and noise)
table = matrix(0, nrow=K, ncol=N)
# position of the match documents
sig = which(runif(N) < s)
# add signal columns
table[,sig] = 1
# add noise bits
  table = apply(table, 1, function(x) make_noise(x, d)) # apply() over rows returns results as columns, hence the transpose below
table = t(table)
return(table)
}
# returns an array with the number of misses per cache line
count_cache_misses = function(table) {
# an array to keep track of the cache misses
N = ncol(table)
cache_misses = rep(NA, N/512)
for (i in 1:length(cache_misses)) {
l_idx = (i-1)*512 + 1
r_idx = i*512
    accumulator = table[1,l_idx:r_idx] # initially the accumulator is the first row
for (j in 1:nrow(table) ) {
cache_line = table[j,l_idx:r_idx]
accumulator = bitwAnd(accumulator, cache_line)
if (sum(accumulator)==0) {
cache_misses[i] = j
break
}
if (j == nrow(table)) {
cache_misses[i] =j
}
}
}
return(cache_misses)
}
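# Note: bitwAnd() operates elementwise on integer vectors, which is what lets
# each AND above intersect a whole 512-bit cache line at once. A tiny
# illustration (invented values):
stopifnot(identical(bitwAnd(c(1L, 0L, 1L), c(1L, 1L, 0L)), c(1L, 0L, 0L)))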
# given a term table returns the ratio between nr of cache misses
# when using linear vs random (multiple terms)
linear_vs_random = function(table, verbose = getOption("verbose", FALSE)) {
# returns an array with the number of misses per cache line
cache_misses1 = count_cache_misses(table) # <- linear procedure
K = nrow(table)
r_order = sample(1:K)
r_table = table[r_order,] # <- randomize the table
cache_misses2 = count_cache_misses(r_table) # <- random procedure
ratio = mean(cache_misses1)/mean(cache_misses2)
if (verbose == FALSE) {
return(ratio)
}
print("The linear approach has on average ")
print(ratio)
print(" times more cache misses than the randomized approach")
p1 = qplot(cache_misses1, geom="histogram",
main = "Linear intersections",
xlab = "Number of cache misses per cache line",
fill = I("blue"),
col = I("red"))
p2 = qplot(cache_misses2, geom="histogram",
main = "Random intersections",
xlab = "Number of cache misses per cache line",
fill = I("blue"),
col = I("red"))
multiplot(p1, p2, cols = 1)
}
# Multiple plot function
#
# ggplot objects can be passed in ..., or to plotlist (as a list of ggplot objects)
# - cols: Number of columns in layout
# - layout: A matrix specifying the layout. If present, 'cols' is ignored.
#
# If the layout is something like matrix(c(1,2,3,3), nrow=2, byrow=TRUE),
# then plot 1 will go in the upper left, 2 will go in the upper right, and
# 3 will go all the way across the bottom.
#
multiplot <- function(..., plotlist=NULL, file, cols=1, layout=NULL) {
library(grid)
# Make a list from the ... arguments and plotlist
plots <- c(list(...), plotlist)
numPlots = length(plots)
# If layout is NULL, then use 'cols' to determine layout
if (is.null(layout)) {
# Make the panel
# ncol: Number of columns of plots
# nrow: Number of rows needed, calculated from # of cols
layout <- matrix(seq(1, cols * ceiling(numPlots/cols)),
ncol = cols, nrow = ceiling(numPlots/cols))
}
if (numPlots==1) {
print(plots[[1]])
} else {
# Set up the page
grid.newpage()
pushViewport(viewport(layout = grid.layout(nrow(layout), ncol(layout))))
# Make each plot, in the correct location
for (i in 1:numPlots) {
# Get the i,j matrix positions of the regions that contain this subplot
matchidx <- as.data.frame(which(layout == i, arr.ind = TRUE))
print(plots[[i]], vp = viewport(layout.pos.row = matchidx$row,
layout.pos.col = matchidx$col))
}
}
}
| /Functions.R | no_license | mcurmei627/RabBIT | R | false | false | 4,546 | r |
# This is a helper function that is used while loading the job from historyServer
transposeListOfLists <- function(listoflist)
{
result<-list()
  for(i in seq_along(listoflist))
{
    for(j in seq_along(listoflist[[i]]))
{
result[[names(listoflist[[i]][j])]]<-c(result[[names(listoflist[[i]][j])]],listoflist[[i]][[j]])
}
}
result
}
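# Quick check (hypothetical input): a list of two "rows" with fields a and b
# becomes a list of two named vectors, one per field.
stopifnot(identical(
  transposeListOfLists(list(list(a = 1, b = 2), list(a = 3, b = 4))),
  list(a = c(1, 3), b = c(2, 4))
))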
cutNamesNode02<-function(name)
{
substr(name,5,6)
}
cutNamesUnicum<-function(name)
{
substr(name,4,6)
}
| /RProjects/Utils.R | permissive | abdul-git/yarn-monitoring | R | false | false | 465 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/DatabaseLinke.R
\name{link_MGI.JAX}
\alias{link_MGI.JAX}
\title{link_MGI.JAX}
\usage{
link_MGI.JAX(
vector_of_gene_symbols,
writeOut = b.dbl.writeOut,
Open = b.dbl.Open
)
}
\arguments{
\item{vector_of_gene_symbols}{A character vector of gene symbols for which to generate MGI JAX links.}
\item{writeOut}{A logical indicating whether to write the generated links to a file, default: FALSE.}
\item{Open}{A logical indicating whether to open the generated links in a web browser, default: TRUE.}
\item{MGI_search_prefix}{The base URL for MGI JAX search.}
\item{MGI_search_suffix}{The suffix for MGI JAX search.}
}
\description{
Generate MGI JAX mouse genomics search query links for a list of gene symbols.
}
\examples{
geneSymbols = c('Sox2', 'Actb'); link_MGI.JAX(geneSymbols); link_MGI.JAX(geneSymbols, Open = TRUE)
}
| /man/link_MGI.JAX.Rd | permissive | vertesy/DatabaseLinke.R | R | false | true | 907 | rd |
sample_data <- lapply(1:100, function(i) road_safety_greater_manchester) %>%
do.call(rbind, .)
f1 <- function() {
geo_to_h3(sample_data, 8) %>%
table() %>%
tibble::as_tibble(n = "count")
}
f2 <- function() {
count_points(sample_data, 8) %>%
tibble::as_tibble()
}
microbenchmark::microbenchmark(
r = f1(),
js = f2(),
times = 3
)
| /sandbox/benchmark-count-points.R | no_license | federicotallis/h3forr | R | false | false | 355 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/gen_lookup_tax_CO_par_win_mac.R
\name{gen_lookup_tax_CO_par_win_mac}
\alias{gen_lookup_tax_CO_par_win_mac}
\title{This function generates the lookup table with a tax policy for KS.}
\usage{
gen_lookup_tax_CO_par_win_mac(
tax_amount = 1,
DSSAT_files = "./input_files/DSSAT_files",
price_file = "./input_files/crop_prices.csv",
fixed_cost_file = "./input_files/fixed_cost_input.csv",
pumping_cost = 3.21,
default_well_capacity_col_name = "Well_Capacity(gpm)",
soil_moisture_targets = c(25, 35, 45, 55, 65, 75),
IFREQ_seq = 2,
IFREQ_interpolate = 0.1,
maximum_well_capacity = 1000,
well_capacity_intervals = 20,
number_of_quarters = 6,
num_clusters = 12
)
}
\arguments{
\item{tax_amount}{is the amount of tax on the unit of groundwater extracted. Defaults to 1.}
\item{DSSAT_files}{is the directory where DSSAT files are located. Defaults to "C:/Users/manirad/Dropbox/DSSAT subregions work pre-2018 annual meeting/subregion KS files/outputs_for_econ/revised".}
\item{price_file}{is the file that includes crop prices. Defaults to "C:/Users/manirad/Dropbox/DSSAT subregions work pre-2018 annual meeting/subregion KS files/crop_prices.csv".}
\item{fixed_cost_file}{is the file that includes fixed (per-acre) costs of production. Defaults to "C:/Users/manirad/Downloads/test/fixed_cost_input.csv".}
\item{pumping_cost}{is the cost of pumping an acre-inch of groundwater excluding taxes. Defaults to 3.21, the cost of pumping an acre-inch of groundwater in parts of Kansas.}
\item{default_well_capacity_col_name}{is the name of the well capacity column generated from the MODFLOW model. Defaults to 'Well_Capacity(gpm)' as this is the original column name we started with.}
\item{soil_moisture_targets}{is the vector of soil moisture targets generated by the DSSAT model. Defaults to c(25, 35, 45, 55, 65, 75).}
\item{IFREQ_seq}{is the difference between two consecutive irrigation frequencies (IFREQ) in DSSAT. Defaults to 2.}
\item{IFREQ_interpolate}{is the size of the interpolation interval. Defaults to 0.1, so that an IFREQ_seq of 2 adds 0, 0.1, 0.2, ..., 1.9.}
\item{maximum_well_capacity}{is the maximum well capacity to be included in the lookup table. Defaults to 1000 gpm.}
\item{well_capacity_intervals}{is the size of the interpolation interval for well capacity. Defaults to 20 gpm.}
\item{number_of_quarters}{is the maximum number of quarters that need to be considered in the lookup table generation.}
\item{num_clusters}{is the number of cores for parallelization.}
}
\value{
The generated lookup table.
}
\description{
This function generates the lookup table with a tax policy for KS.
}
\examples{
\dontrun{
gen_lookup_tax_CO_par_win_mac(tax_amount = 2)
}
}
| /man/gen_lookup_tax_CO_par_win_mac.Rd | no_license | manirouhirad/MODSSAT | R | false | true | 2,776 | rd |
|
#!/usr/bin/env Rscript
# Helper functions to make plots
library(ggplot2)
library(ggvenn)
stack_hists <- function(vector1, vector2, name1 = "freebayes", name2 = "mpileup") {
# Generates a stacked histogram (ggplot2) from 2 vectors
df1 <- data.frame(name1, vector1)
df2 <- data.frame(name2, vector2)
names(df1) <- c("x", "y")
names(df2) <- c("x", "y")
df <- rbind(df1, df2)
gh <- ggplot(df, aes(y, fill = x)) +
geom_histogram() +
ylab("Frequency") +
labs(fill = 'Method')
return(gh)
}
transform_counts <- function(df) {
# Takes a df containing counts and categories (1/1, 1/0, 0/1) and creates data to generate a venn diagram from.
# https://stackoverflow.com/questions/26761754/create-venn-diagram-using-existing-counts-in-r
sets = vector(mode = 'list', length = ncol(df) - 1)
names(sets) = names(df)[1:(ncol(df) - 1)]
lastElement = 0
for (i in 1:nrow(df)) {
elements = lastElement:(lastElement + df[i, ncol(df)] - 1)
lastElement = elements[length(elements)] + 1
for (j in 1:(ncol(df) - 1)) {
if (df[i, j] == 1) {
sets[[j]] = c(sets[[j]], elements)
}
}
}
return(sets)
}
venn_by_counts <- function(count1, count2, common, name1 = "freebayes", name2 = "mpileup") {
# Generates a venn diagram from raw count numbers
df.counts <- as.data.frame(rbind(c(1, 1, common), c(1, 0, count1), c(0, 1, count2)))
names(df.counts) <- c(name1, name2, "counts")
return(ggvenn(transform_counts(df.counts), fill_color = c("red", "blue")))
}
barplot_by_indiv <- function(df1, df2, name1 = "freebayes", name2 = "mpileup") {
# Generates a side by side bar plot (ggplot) from 2 dataframes
ns <- c(names(df1), "fill")
df1 <- cbind(df1, rep(name1, nrow(df1)))
df2 <- cbind(df2, rep(name2, nrow(df2)))
names(df1) <- ns
names(df2) <- ns
df <- rbind(df1, df2)
gb <- ggplot(data = df, aes(x = INDV, y = MEAN_DEPTH, fill = fill)) +
geom_bar(stat = "identity", position = position_dodge())
return(gb)
}
| /funcs.R | no_license | llalon/VCFComp | R | false | false | 1,987 | r |
|
CNOT3_21 <- function(a){
  # 4x4 identity matrix
  I4 = matrix(c(1,0,0,0,
                0,1,0,0,
                0,0,1,0,
                0,0,0,1), nrow=4, ncol=4)
  # 2x2 identity matrix
  I = matrix(c(1,0,0,1), nrow=2, ncol=2)
  # tensor the single-qubit identity with the 2-qubit CNOT to build the 3-qubit gate
  l = TensorProd(I, CNOT2_10(I4))
  # apply the gate to the state vector
  result = l %*% a
  result
}
| /R/CNOT3_21.R | no_license | tvganesh/QCSim | R | false | false | 248 | r |
|
# CustID, DateMatches, SpendMatches, DateSpendMatches
# 8, 0, 0, 0
# 159, 3, 3, 1
m <- read.csv("cum3.csv")
fac <- as.integer( 10 * (m$CustID-1) / max(m$CustID) )
m$Bin <- fac
dateSpendMatches <- c(0, diff( m$DateSpendMatches ) )
dateMatches <- c(0, diff( m$DateMatches ) )
spendMatches <- c(0, diff( m$SpendMatches ) )
plot(tapply( dateSpendMatches, fac, mean ), type="b", main="%DateSpendMatches by CustID Decile",xlab="CustID Decile",ylab="%Matches")
plot(tapply( dateMatches, fac, mean ), type="b", main="%DateMatches by CustID Decile",xlab="CustID Decile",ylab="%Matches")
plot(tapply( spendMatches, fac, mean ), type="b", main="%SpendMatches by CustID Decile",xlab="CustID Decile",ylab="%Matches")
tapply( dateSpendMatches, fac, mean )
tapply( dateMatches, fac, mean )
tapply( spendMatches, fac, mean )
plot(tapply( m$Bin, fac, length))
plot(tapply( m$Bin, fac, length), type="b", main="Entries by CustID Decile", xlab="CustID Decile", ylab="#Entries")
| /Supermarket/knn/matchbins.R | no_license | chrishefele/kaggle-sample-code | R | false | false | 982 | r |
|
source("set_wdir.r")
# Setting the current dir
set_wdir()
# Get data
input_data <- paste(getwd(),"berkeley.csv",sep="/")
Data <- read.csv(input_data, header=TRUE)
# Show data
head(Data)
plot(Data[,1],Data[,4])
attach(mtcars)
par(mfrow=c(2,2))
male_data <- Data[Data[,'Gender'] == 'Male',]
male_data
plot(male_data[,1], male_data[,4], main = 'Male', xlab = 'Admit', ylab = 'Frequence')
plot(male_data[3:4], main = 'Male', xlab = 'Department', ylab = 'Frequence')
depts <- c('A','B','C','D','E','F')
dept_a_admt <- subset(male_data, Dept == 'A')
dept_a_admt <- nrow(subset(dept_a_admt, Admit == 'Admitted'))
dept_b_admt <- subset(male_data, Dept == 'B')
dept_b_admt <- nrow(subset(dept_b_admt, Admit == 'Admitted'))
dept_c_admt <- subset(male_data, Dept == 'C')
dept_c_admt <- nrow(subset(dept_c_admt, Admit == 'Admitted'))
dept_d_admt <- subset(male_data, Dept == 'D')
dept_d_admt <- nrow(subset(dept_d_admt, Admit == 'Admitted'))
dept_e_admt <- subset(male_data, Dept == 'E')
dept_e_admt <- nrow(subset(dept_e_admt, Admit == 'Admitted'))
dept_f_admt <- subset(male_data, Dept == 'F')
dept_f_admt <- nrow(subset(dept_f_admt, Admit == 'Admitted'))
m_depts_df <- data.frame(matrix(ncol = 2, nrow = 6))
m_depts_df[,1] <- depts
m_depts_df[,2] <- c(dept_a_admt, dept_b_admt, dept_c_admt,
                    dept_d_admt, dept_e_admt, dept_f_admt)
fem_data <- Data[Data[,'Gender'] == 'Female',]
fem_data
plot(fem_data[,1], fem_data[,4], main = 'Female', xlab = 'Admit', ylab = 'Frequence')
plot(fem_data[3:4], main = 'Female', xlab = 'Department', ylab = 'Frequence')
# TODO: CREATE A FUNCTION
depts <- c('A','B','C','D','E','F')
dept_a_admt <- subset(fem_data, Dept == 'A')
dept_a_admt <- nrow(subset(dept_a_admt, Admit == 'Admitted'))
dept_b_admt <- subset(fem_data, Dept == 'B')
dept_b_admt <- nrow(subset(dept_b_admt, Admit == 'Admitted'))
dept_c_admt <- subset(fem_data, Dept == 'C')
dept_c_admt <- nrow(subset(dept_c_admt, Admit == 'Admitted'))
dept_d_admt <- subset(fem_data, Dept == 'D')
dept_d_admt <- nrow(subset(dept_d_admt, Admit == 'Admitted'))
dept_e_admt <- subset(fem_data, Dept == 'E')
dept_e_admt <- nrow(subset(dept_e_admt, Admit == 'Admitted'))
dept_f_admt <- subset(fem_data, Dept == 'F')
dept_f_admt <- nrow(subset(dept_f_admt, Admit == 'Admitted'))
f_depts_df <- data.frame(matrix(ncol = 2, nrow = 6))
f_depts_df[,1] <- depts
f_depts_df[,2] <- c(dept_a_admt, dept_b_admt, dept_c_admt,
                    dept_d_admt, dept_e_admt, dept_f_admt)
m_depts_df
f_depts_df | /1-IntroToDataScience/1-berkeley_data/berkeley-analysis.r | no_license | jonathanalmd/Udacity-DataScience-Nanodegree | R | false | false | 3,370 | r |
source("set_wdir.r")
# Setting the current dir
set_wdir()
# Get data
input_data <- paste(getwd(),"berkeley.csv",sep="/")
Data <- read.csv(input_data, header=TRUE)
# Show data
head(Data)
plot(Data[,1],Data[,4])
attach(mtcars)
par(mfrow=c(2,2))
male_data <- Data[Data[,'Gender'] == 'Male',]
male_data
plot(male_data[,1], male_data[,4], main = 'Male', xlab = 'Admit', ylab = 'Frequence')
plot(male_data[3:4], main = 'Male', xlab = 'Department', ylab = 'Frequence')
depts <- c('A','B','C','D','E','F')
dept_a_admt <- subset(fem_data, Dept == 'A')
dept_a_admt <- nrow(subset(dept_a_admt,Admit == 'Admitted'))
dept_b_admt <- subset(fem_data, Dept == 'B')
dept_b_admt <- nrow(subset(dept_b_admt,Admit == 'Admitted'))
dept_c_admt <- subset(fem_data, Dept == 'C')
dept_c_admt <- nrow(subset(dept_c_admt,Admit == 'Admitted'))
dept_d_admt <- subset(fem_data, Dept == 'D')
dept_d_admt <- nrow(subset(dept_d_admt,Admit == 'Admitted'))
dept_e_admt <- subset(fem_data, Dept == 'E')
dept_e_admt <- nrow(subset(dept_e_admt,Admit == 'Admitted'))
dept_f_admt <- subset(fem_data, Dept == 'F')
dept_f_admt <- nrow(subset(dept_f_admt,Admit == 'Admitted'))
m_depts_df <- data.frame(matrix(ncol = 2, nrow = 6))
m_depts_df[,1] <- col1
m_depts_df[,2] <- c(dept_a_admt,dept_b_admt,dept_c_admt,
dept_d_admt,dept_e_admt,dept_f_admt)
fem_data <- Data[Data[,'Gender'] == 'Female',]
fem_data
plot(fem_data[,1], fem_data[,4], main = 'Female', xlab = 'Admit', ylab = 'Frequence')
plot(fem_data[3:4], main = 'Female', xlab = 'Department', ylab = 'Frequence')
# TODO: CREATE A FUNCTION
depts <- c('A','B','C','D','E','F')
dept_a_admt <- subset(male_data, Dept == 'A')
dept_a_admt <- nrow(subset(dept_a_admt,Admit == 'Admitted'))
dept_b_admt <- subset(male_data, Dept == 'B')
dept_b_admt <- nrow(subset(dept_b_admt,Admit == 'Admitted'))
dept_c_admt <- subset(male_data, Dept == 'C')
dept_c_admt <- nrow(subset(dept_c_admt,Admit == 'Admitted'))
dept_d_admt <- subset(male_data, Dept == 'D')
dept_d_admt <- nrow(subset(dept_d_admt,Admit == 'Admitted'))
dept_e_admt <- subset(male_data, Dept == 'E')
dept_e_admt <- nrow(subset(dept_e_admt,Admit == 'Admitted'))
dept_f_admt <- subset(male_data, Dept == 'F')
dept_f_admt <- nrow(subset(dept_f_admt,Admit == 'Admitted'))
f_depts_df <- data.frame(matrix(ncol = 2, nrow = 6))
f_depts_df[,1] <- col1
f_depts_df[,2] <- c(dept_a_admt,dept_b_admt,dept_c_admt,
dept_d_admt,dept_e_admt,dept_f_admt)
depts <- c('A','B','C','D','E','F')
dept_a_admt <- subset(fem_data, Dept == 'A')
dept_a_admt <- nrow(subset(dept_a_admt,Admit == 'Admitted'))
dept_b_admt <- subset(fem_data, Dept == 'B')
dept_b_admt <- nrow(subset(dept_b_admt,Admit == 'Admitted'))
dept_c_admt <- subset(fem_data, Dept == 'C')
dept_c_admt <- nrow(subset(dept_c_admt,Admit == 'Admitted'))
dept_d_admt <- subset(fem_data, Dept == 'D')
dept_d_admt <- nrow(subset(dept_d_admt,Admit == 'Admitted'))
dept_e_admt <- subset(fem_data, Dept == 'E')
dept_e_admt <- nrow(subset(dept_e_admt,Admit == 'Admitted'))
dept_f_admt <- subset(fem_data, Dept == 'F')
dept_f_admt <- nrow(subset(dept_f_admt,Admit == 'Admitted'))
f_depts_df <- data.frame(matrix(ncol = 2, nrow = 6))
f_depts_df[,1] <- col1
f_depts_df[,2] <- c(dept_a_admt,dept_b_admt,dept_c_admt,
dept_d_admt,dept_e_admt,dept_f_admt)
m_depts_df
f_depts_df |
shinyUI(
fluidPage(
tags$head(includeScript("google-analytics.js")),
titlePanel("Happiness Report"),
sidebarLayout(
sidebarPanel(
textInput(inputId="text1", label = "Enter your code here"),
actionButton('Submit','Submit'),
textOutput("code_error"),
h4(textOutput("hello")),
h5(textOutput("per_act")),
br(),
h5(textOutput("per_loc"))
),
mainPanel(
h3('1. Summary and Statistics'),
br(),
          p("On average, each person responded about 20 times to our emails. We sent around 65 emails to each person, so this is an average response rate of 1 out of every 3 emails. The overall average happiness score is 68.6, with a standard deviation of 8.8. Assuming people's scores follow a normal distribution, about 66% of all scores fall between 60 and 76."),
div("People score their happiness differently. For example, an 80 could be someone's mean, or it could be someone else's maximum score. Therefore, comparing scores by itself does not tell us that one person is happier than another. Instead, we should have a reference point for each person. So, in some of the analyses below, we use a normalized score to compare happiness across users. A normalized score assumes a mean of 0 and a standard deviation of 1. With everyone having a mean happiness of 0, it's easier to tell if one person is relatively happier."),
br(),
tableOutput("summary_table"),
br(),
h3('2. Happiness Score Distribution: You vs. all Users'),
plotOutput("Q1_Dist"),
p("In Grey: The average person has a mean of 68.6, and a standard deviation of 8.8. We generated a normal distribution with these values."),
          p("If your own distribution of responses is to the right of the average person's distribution, you score yourself happier than most people (in this survey anyway)."),
br(),
h3('3. Happiness Swings'),
plotOutput("Var_Dist"),
br(),
p("Each bar shows a different person's variance in his/her responses. Variance is a measure of how much the score deviates from the mean."),
textOutput("happiness_swings_text"),
br(),
br(),
h3("4. Would you rather be doing something other than what you're doing right now?"),
h5("If it's not a 'No', it's probably a 'Yes'."),
plotOutput("preference_hist"),
br(),
p("When people answer 'No', they are happier (average is at 0). Intuitively, that makes sense. If you'd rather not be doing something else, you're probably a little happier than if you answered 'Yes'. When people answer 'Not Sure', they report lower than average scores."),
p("Just like a definite 'Yes', If you're 'Not sure', you'll probably be happier doing something else."),
br(),
br(),
h3('5. Happiest Time'),
h5('How much does your happiness change over the week? And, during the day? Also, how do you compare to other people?'),
plotOutput("Var_Time_Dist"),
p("People report higher scores on Friday, Saturday, and Sunday (Go figure)! Check out that change between Thursday and Friday. It's like a switch is flipped and people are just happier. "),
p("Mondays are the worst. By a lot. People dislike Mondays more than they like any other day"),
          p("As for the time of day: Mornings are pretty neutral, right? Nope. There are three clusters of responses in the Morning group: one group really likes mornings, another group is indifferent, and the third group really dislikes mornings. All of this averages out to a neutral feeling (around 0.0 mean variation for the group), so it's hard to see the disparity here."),
br(),
br(),
h3("6. Shopping makes you smile, work does not (surprise, surprise)."),
h4("Exercising feels better than eating? Could it be ... ? "),
br(),
h5("The text is ordered from happiest (top, pink, above average) to unhappiest (bottom, blue, below average)."),
plotOutput("loc_act_text",height=500),
p("The above charts show the aggregated result from all users. You can see your own activity/location preference summary on the top left panel. If you don't see it, we don't have sufficient data for you*."),
br(),
br(),
h3("7. Happiest Location & Activity Pair"),
h4("If you are slacking off at work, you are way better off chatting with someone than browsing internet. Heck, browsing internet is in the negative, regardless of where you are."),
h4("Exercising trumps eating, again!"),
h4("Stay home and cook people! Eating out does not agree with you."),
plotOutput("loc_act_p",height=600),
p("The above chart shows the aggregated result from all users. We do not have this for individuals due to lack of data."),
br(),
br(),
p("*Notes:
For group happiness rankings by location and activity, we filtered out the options that fewer than 5 users had selected.
e.g. if there is only one entry for 'Grooming' at 'work', and for some odd reason it's scored really high, it won't be counted because there are no other 'Grooming' at 'work' responses. We will need the same responses from at least 5 different users to calculate the normalized mean, and include it in the ranking.
For all analyses comparing means, we have ran ANOVA tests showing that the means are different, with a significant p value.
For all analyses, users' data are included only if they have submitted at least 5 responses.")
)
)
)
)
| /ui.R | no_license | zyenge/happy_shiny | R | false | false | 5,465 | r |
|
hpc <- read.table("household_power_consumption.txt", header=TRUE, sep=";", stringsAsFactors=FALSE) #read data into R
hpsub <- hpc[hpc$Date %in% c("1/2/2007", "2/2/2007"),] #extract required time period
time <- strptime(paste(hpsub$Date, hpsub$Time), format="%d/%m/%Y %H:%M:%S") #combine date and time columns
png("plot4.png", width = 480, height = 480) #plot graphs
par(mfrow = c(2, 2))
plot(time, as.numeric(hpsub$Global_active_power), type="l",
     ylab="Global Active Power", xlab="", cex=0.2)
plot(time, as.numeric(hpsub$Voltage), type="l",
     ylab="Voltage", xlab="datetime")
plot(time, as.numeric(hpsub$Sub_metering_1), type="l",
     ylab="Energy Submetering", xlab="")
lines(time, as.numeric(hpsub$Sub_metering_2), type="l", col="red")
lines(time, as.numeric(hpsub$Sub_metering_3), type="l", col="blue")
legend("topright", c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"), lty=1, lwd=2.5,
       col=c("black", "red", "blue"), bty = "n")
plot(time, as.numeric(hpsub$Global_reactive_power), type="l", xlab="datetime",
     ylab="Global_reactive_power")
dev.off() | /plot4.R | no_license | andersonhaynes/ExData_Plotting1 | R | false | false | 980 | r |
# prior comparison
source('requirements.R')
model_files = list.files('output/spatiophylogenetic_modelling/featurewise/',
pattern = "[:alnum:]*_kappa_2_sigma_1.15_pcprior.*.qs",
full.names = TRUE)
model_output_list = lapply(model_files, function(m){
model = qread(m)
dd = model[[1]]$hyper_sample
binomial_error = pi^2 / 3
# Calculate h^2
(1 / dd) / (rowSums(1 / dd) + 1 + binomial_error)
})
names(model_output_list) = basename(model_files)
model_output = map_df(model_output_list, ~as.data.frame(.x), .id="id")
model_output$feature = str_extract(model_output$id, pattern = "[:alnum:]*")
model_output$settings = str_extract(model_output$id, "kappa_\\d+([.,]\\d+)?_sigma_\\d+([.,]\\d+)?")
model_output$prior = str_extract(model_output$id, "pcprior\\d+([.,]\\d+)?")
colnames(model_output) = str_replace_all(colnames(model_output),
pattern = " ",
replacement = ".")
model_summary =
model_output %>%
group_by(feature, prior) %>%
summarise(Spatial_estimate = mean(Precision.for.spatial_id),
Phylogenetic_estimate = mean(Precision.for.phylo_id),
.groups = "drop")
model_long = pivot_longer(model_summary, cols = c("Spatial_estimate", "Phylogenetic_estimate"))
col_vector <- c("purple4", "turquoise3")
model_long$name <- str_replace_all(model_long$name, "_", " ")
p = ggplot(model_long, aes(x = prior, y = value, group = feature, color = name)) +
geom_point(alpha = 0.6) +
geom_line(alpha = 0.4, size = 1) +
ylim(c(0, 1)) +
ylab("Spatiophylogenetic parameter estimates") +
xlab("Penalizing Complexity Priors") +
scale_x_discrete(labels = c('0.01',
'0.1',
'0.5',
'0.99')) +
theme_classic() +
scale_color_manual(values = col_vector) +
theme(legend.position = 'None') +
facet_wrap(~name)
ggsave(plot = p,
filename = "output/spatiophylogenetic_modelling/prior_effects.png",
width = 150,
height = 100, units = "mm")
| /R_grambank/spatiophylogenetic_modelling/analysis/featurewise/summarise_modelprior.R | no_license | grambank/grambank-analysed | R | false | false | 2,129 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils.R
\name{checkErrorsInLogFiles}
\alias{checkErrorsInLogFiles}
\title{checkErrorsInLogFiles}
\usage{
checkErrorsInLogFiles(logFns, errorKeys = c("error", "fail", "except"))
}
\arguments{
\item{logFns}{(vector of) log/text file names}

\item{errorKeys}{keywords whose occurrence in a line flags it as reporting an error}
}
\value{
...
}
\description{
Check and report lines containing error keywords in text files
}
\author{
Fabian Mueller
}
| /man/checkErrorsInLogFiles.Rd | no_license | MPIIComputationalEpigenetics/muPipeR | R | false | true | 437 | rd |
context("lawn_area")
poly <- lawn_data$poly
multi <- lawn_data$multipoly
pt <- lawn_point(c(-71.4226, 41.4945))
test_that("lawn_area works", {
expect_is(lawn_area(poly), "numeric")
expect_is(lawn_area(multi), "numeric")
expect_is(lawn_area(pt), "integer")
expect_equal(round(lawn_area(poly)/1000000), 12391 )
expect_equal(round(lawn_area(multi)/1000000), 24779 )
expect_equal(lawn_area(pt), 0 )
})
test_that("lawn_area fails correctly", {
expect_error(lawn_area(), "argument \"input\" is missing, with no default")
})
| /tests/testthat/test-area.R | permissive | jhollist/lawn | R | false | false | 535 | r |
## File Name: tamaan.3pl.mixture.R
## File Version: 9.07
#######################################################################
# tamaan 3PL mixture module
tamaan.3pl.mixture <- function( res0 , anal.list , con , ... )
{
if ( ! is.null( anal.list$NSTARTS ) ){
NSTARTS <- anal.list$NSTARTS
} else {
NSTARTS <- c(0,0)
}
#*** initial gammaslope estimate
# different starts if NSTARTS > 0
con0 <- con
con0$maxiter <- NSTARTS[2]
con0$progress <- FALSE
devmin <- 1E100
itempartable <- res0$itempartable_MIXTURE
itempartable.int <- itempartable[ itempartable$int == 1 , ]
itempartable.slo <- itempartable[ itempartable$slo == 1 , ]
gammaslope0 <- itempartable$val
resp <- res0$resp
items0 <- res0$items
# initial values
I <- ncol(resp)
beta0 <- sapply( 1:I , FUN = function(ii){
ncat.ii <- items0[ii , "ncat"] - 1
l1 <- rep(0,ncat.ii)
for (hh in 1:ncat.ii){
l1[hh] <- stats::qlogis( mean( resp[,ii] >= hh , na.rm=TRUE ) / ncat.ii )
}
return(l1)
} )
beta0 <- unlist( beta0)
B0 <- length(beta0)
ncl <- anal.list$NCLASSES
if (NSTARTS[1] > 0 ){
for (nn in 1:(NSTARTS[1]) ){
gammaslope <- gammaslope0
gammaslope[ itempartable.int$index ] <- rep( beta0 , ncl ) +
stats::rnorm( ncl*B0 , mean=0, sd = log(1+nn^(1/5) ) )
N0 <- nrow(itempartable.slo)
if ( ! res0$raschtype ){
gammaslope[ itempartable.slo$index ] <- stats::runif( N0 , max(.2,1-nn/5) , min( 1.8 , 1+nn/5) )
}
# delta.inits
if (nn==1){ delta.inits <- NULL }
res <- tam.mml.3pl(resp= res0$resp , E=res0$E , skillspace="discrete" ,
theta.k= res0$theta.k , gammaslope=gammaslope ,
gammaslope.constr.V = res0$gammaslope.constr.V,
gammaslope.constr.c = res0$gammaslope.constr.c,
notA= TRUE , control=con0 , delta.inits=delta.inits ,
delta.designmatrix = res0$delta.designmatrix ,
delta.fixed = res0$delta.fixed ,
gammaslope.fixed = res0$gammaslope.fixed ,
... )
if (con$progress){
cat( paste0( "*** Random Start " , nn ,
" | Deviance = " , round( res$deviance , 2 ) , "\n") )
utils::flush.console()
}
if ( res$deviance < devmin ){
devmin <- res$deviance
gammaslope.min <- res$gammaslope
delta.min <- res$delta
}
}
}
# use inits or best solution from random starts
if (NSTARTS[1] > 0 ){
gammaslope <- gammaslope.min
delta.inits <- delta.min
} else {
gammaslope <- NULL
delta.inits <- NULL
}
res <- tam.mml.3pl(resp= res0$resp , E=res0$E , skillspace="discrete" ,
theta.k= res0$theta.k , gammaslope=gammaslope ,
gammaslope.fixed = res0$gammaslope.fixed ,
gammaslope.constr.V = res0$gammaslope.constr.V,
gammaslope.constr.c = res0$gammaslope.constr.c,
notA= TRUE , delta.inits = delta.inits ,
delta.fixed = res0$delta.fixed ,
control=con ,
delta.designmatrix = res0$delta.designmatrix ,
... )
#*****************************************
# processing output
# probabilities mixture distributions
itempartable <- res0$itempartable_MIXTURE
theta_MIXTURE <- res0$theta_MIXTURE
TG <- nrow(theta_MIXTURE)
TP <- ncl*TG
pi.k <- res$pi.k
D <- ncol(theta_MIXTURE )
G <- 1
# mixture probabilities
probs_MIXTURE <- rep(NA,ncl)
names(probs_MIXTURE) <- paste0("Cl" , 1:ncl )
moments_MIXTURE <- as.list( 1:ncl )
for (cl in 1:ncl){
cl.index <- 1:TG + (cl-1)*TG
probs_MIXTURE[cl] <- sum(pi.k[ cl.index , 1 ] )
pi.ktemp <- pi.k[ cl.index ,,drop=FALSE]
pi.ktemp <- pi.ktemp / colSums( pi.ktemp)
moments_MIXTURE[[cl]] <- tam_mml_3pl_distributionmoments( D =D ,
G =G , pi.k=pi.ktemp , theta.k=theta_MIXTURE )
}
# item parameters
res$probs_MIXTURE <- probs_MIXTURE
res$moments_MIXTURE <- moments_MIXTURE
ipar <- res0$itempartable_MIXTURE
p11 <- strsplit( paste(ipar$parm) , split="_Cl" )
ipar$parm0 <- unlist( lapply( p11 , FUN = function(pp){ pp[1] } ) )
ipar$est <- gammaslope[ ipar$index ]
# res$itempartable1_MIXTURE <- ipar
res$gammaslope <- gammaslope
# second item parameter table
ipar2 <- ipar[ ipar$Class == 1 , c("item" , "parm0")]
colnames(ipar2)[2] <- "parm"
for (cl in 1:ncl){
ipar2[ , paste0("Cl" , cl ) ] <- ipar[ ipar$Class == cl , "est" ]
}
res$itempartable_MIXTURE <- ipar2
#---- individual class probabilities
res$ind_classprobs <- tamaan_3pl_mixture_individual_class_probabilities(hwt=res$hwt,
NCLASSES=anal.list$NCLASSES)
#----- output
res$tamaan.method <- "tam.mml.3pl"
return(res)
}
#######################################################################
| /R/tamaan.3pl.mixture.R | no_license | yaozeyang90/TAM | R | false | false | 4,624 | r |
#problem 1
year = seq(1993,2010,1)
storesOpen = c(38,45,61,80,108,141,186,241,311,396,519,629,721,809,888,971,1037,1100)
storesYearZero = year - 1993
storesYearZero
#plot(year,storesYearZero)
plot(ts(storesOpen))
#linear trend
linTrendEq = lm(storesOpen~storesYearZero)
predictLin = predict(linTrendEq)
lines(ts(predictLin),type="l",col="red",lwd=3)
predict1 = 18
predict2 = 19
predict2011 = linTrendEq[[1]][2]*predict1 + linTrendEq[[1]][1]
predict2012 = linTrendEq[[1]][2]*predict2 + linTrendEq[[1]][1]
predict2011
predict2012
#quadratic trend
quadTrendEq = lm(storesOpen~storesYearZero + I(storesYearZero^2))
predictQuad = predict(quadTrendEq)
lines(ts(predictQuad),type="l",col="green",lwd=3)
predict2011 = quadTrendEq[[1]][3]*(predict1^2) + quadTrendEq[[1]][2]*predict1 + quadTrendEq[[1]][1]
predict2012 = quadTrendEq[[1]][3]*(predict2^2) + quadTrendEq[[1]][2]*predict2 + quadTrendEq[[1]][1]
predict2011
predict2012
#exponential trend
expTrendEq = lm(log10(storesOpen)~storesYearZero)
predictExp = predict(expTrendEq)
predictR = 10^predictExp
lines(ts(predictR),type="l",col="blue",lwd=3)
beta0 = 10^expTrendEq[[1]][1]
beta1 = 10^expTrendEq[[1]][2]
predict2011 = (beta1^predict1) * beta0
predict2012 = (beta1^predict2) * beta0
predict2011
predict2012
#each forecast is different because each model uses a different calculation
#based on the graph, the quadratic trend is the most accurate because it most closely matches with the given plot
#problem 2
baseballYears = seq(2000,2010,1)
baseballSalary = c(1.99,2.29,2.38,2.58,2.49,2.63,2.83,2.92,3.13,3.26,3.27)
baseballYearZero = baseballYears - 2000
plot(ts(baseballSalary))
predict1 = 11
#linear trend
baseballLin = lm(baseballSalary~baseballYearZero)
predictBbLin = predict(baseballLin)
lines(ts(predictBbLin),type="l",col="red",lwd=3)
predictBb2011 = baseballLin[[1]][2]*predict1 + baseballLin[[1]][1]
predictBb2011
#quadratic trend
baseballQuad = lm(baseballSalary~baseballYearZero + I(baseballYearZero^2))
predictBbQuad = predict(baseballQuad)
lines(ts(predictBbQuad),type="l",col="green",lwd=3)
predictBb2011 = baseballQuad[[1]][3]*(predict1^2) + baseballQuad[[1]][2]*predict1 + baseballQuad[[1]][1]
predictBb2011
#exponential trend
baseballExp = lm(log10(baseballSalary)~baseballYearZero)
predictBbExp = predict(baseballExp)
predictS = 10^predictBbExp
lines(ts(predictS),type="l",col="blue",lwd=3)
beta0 = 10^baseballExp[[1]][1]
beta1 = 10^baseballExp[[1]][2]
predictBb2011 = (beta1^predict1) * beta0
predictBb2011
#all 3 models are very close in their predictions, but based on the graph, the exponential model fits the best
#problem 3
travelData = read.csv(file="/Users/Josh/Google Drive/TimeSeriesAnalysis/Assignments/Assignment4/travel.csv",header=TRUE)
plot(ts(travelData$Travel),type='o')
dataPts = dim(travelData)[1] - 1
travelData$codedMonth = seq(0,dataPts,1)
jan = rep(c(1,0,0,0,0,0,0,0,0,0,0,0),6)
jan = c(jan,c(1,0,0,0,0,0,0,0))
travelData$Jan = jan
feb = rep(c(0,1,0,0,0,0,0,0,0,0,0,0),6)
feb = c(feb,c(0,1,0,0,0,0,0,0))
travelData$Feb = feb
mar = rep(c(0,0,1,0,0,0,0,0,0,0,0,0),6)
mar = c(mar,c(0,0,1,0,0,0,0,0))
travelData$Mar = mar
apr = rep(c(0,0,0,1,0,0,0,0,0,0,0,0),6)
apr = c(apr,c(0,0,0,1,0,0,0,0))
travelData$Apr = apr
may = rep(c(0,0,0,0,1,0,0,0,0,0,0,0),6)
may = c(may,c(0,0,0,0,1,0,0,0))
travelData$May = may
jun = rep(c(0,0,0,0,0,1,0,0,0,0,0,0),6)
jun = c(jun,c(0,0,0,0,0,1,0,0))
travelData$Jun = jun
jul = rep(c(0,0,0,0,0,0,1,0,0,0,0,0),6)
jul = c(jul,c(0,0,0,0,0,0,1,0))
travelData$Jul = jul
aug = rep(c(0,0,0,0,0,0,0,1,0,0,0,0),6)
aug = c(aug,c(0,0,0,0,0,0,0,1))
travelData$Aug = aug
sept = rep(c(0,0,0,0,0,0,0,0,1,0,0,0),6)
sept = c(sept,c(0,0,0,0,0,0,0,0))
travelData$Sept = sept
oct = rep(c(0,0,0,0,0,0,0,0,0,1,0,0),6)
oct = c(oct,c(0,0,0,0,0,0,0,0))
travelData$Oct = oct
nov = rep(c(0,0,0,0,0,0,0,0,0,0,1,0),6)
nov = c(nov,c(0,0,0,0,0,0,0,0))
travelData$Nov = nov
travelModel = lm(log10(travelData$Travel)~travelData$codedMonth + travelData$Jan + travelData$Feb + travelData$Mar + travelData$Apr + travelData$May + travelData$Jun + travelData$Jul +
                   travelData$Aug + travelData$Sept + travelData$Oct + travelData$Nov)
p = predict(travelModel)
pr = 10^p
lines(ts(pr),col="red", type="o")
#prediction for august
fitValueAug = pr[80]
fitValueAug
#predictions for last four months
septPredic = 10^(travelModel$coefficients[1] + travelModel$coefficients[2]*80 + travelModel$coefficients[11])
octPredic = 10^(travelModel$coefficients[1] + travelModel$coefficients[2]*81 + travelModel$coefficients[12])
novPredic = 10^(travelModel$coefficients[1] + travelModel$coefficients[2]*82 + travelModel$coefficients[13])
decPredic = 10^(travelModel$coefficients[1] + travelModel$coefficients[2]*83)
septPredic
octPredic
novPredic
decPredic
#problem 4
silverQ = read.csv(file="/Users/Josh/Google Drive/TimeSeriesAnalysis/Assignments/Assignment4/quarter.csv",header=TRUE)
plot(ts(silverQ$Price),type='o')
dataPts = dim(silverQ)[1] - 1
silverQ$codedQuarter = seq(0,dataPts,1)
silverQ$q1 = c(1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0)
silverQ$q2 = c(0,1,0,0, 0,1,0,0, 0,1,0,0, 0,1,0,0, 0,1,0,0, 0,1,0,0)
silverQ$q3 = c(0,0,1,0, 0,0,1,0, 0,0,1,0, 0,0,1,0, 0,0,1,0, 0,0,1,0)
silverQ
silverQModel = lm(log10(silverQ$Price)~silverQ$codedQuarter + silverQ$q1 + silverQ$q2 + silverQ$q3)
s = predict(silverQModel)
sr = 10^s
lines(ts(sr),col="green",type='o')
silverQModel$coefficients
#fitted value for last quarter of 2009
fitValue2009 = sr[24]
fitValue2009
#forecasts for all four quarters of 2010
q1Predic = 10^(silverQModel$coefficients[1] + silverQModel$coefficients[2]*24 + silverQModel$coefficients[3])
q2Predic = 10^(silverQModel$coefficients[1] + silverQModel$coefficients[2]*25 + silverQModel$coefficients[4])
q3Predic = 10^(silverQModel$coefficients[1] + silverQModel$coefficients[2]*26 + silverQModel$coefficients[5])
q4Predic = 10^(silverQModel$coefficients[1] + silverQModel$coefficients[2]*27)
q1Predic
q2Predic
q3Predic
q4Predic
#these forecasts were not as accurate as they were for the monthly above, because an exponential model doesn't fit this data as well
| /TimeSeries4.R | no_license | joshgraves3/Time-Series-Analysis | R | false | false | 6,152 | r |
# Yige Wu @ WashU Mar 2019
## to show the downstream effects of TP53 mutations
# source ------------------------------------------------------------------
setwd(dir = "Box Sync/")
source("./cptac2p_analysis/phospho_network/phospho_network_plotting.R")
source("./cptac2p_analysis/phospho_network/phospho_network_shared.R")
source("./cptac2p_analysis/p53/TP53_shared.R")
# input mutational associations -------------------------------------------
mut_impact_tab <- fread(input = "./cptac2p/analysis_results/p53/tables/test_mut_impact_proteome_TP53/TP53_mut_impact_proteome_RNA_cptac2p_cptac3_tab.txt", data.table = F)
# filter for downstream ---------------------------------------------------
mut_impact_tab_downstream <- mut_impact_tab %>%
filter(SUB_GENE.is_downstream == T)
mut_impact_tab %>%
filter(cancer == "CCRCC" & variant_class == "truncation")
mut_impact_tab %>%
filter(SUB_GENE == "ESR1")
## make sure every cancer type is represented in each facet
mut_impact_tab_downstream2add <- mut_impact_tab_downstream %>%
filter(cancer == "CCRCC" & variant_class == "missense") %>%
mutate(variant_class = "truncation") %>%
mutate(meddiff = NA) %>%
mutate(fdr = NA)
mut_impact_tab_downstream <- rbind(mut_impact_tab_downstream, mut_impact_tab_downstream2add)
# set variables -----------------------------------------------------------
gene_altered <- "TP53"
fdr_thres2p <- 0.05
sig_colors <- c("black", "white")
names(sig_colors) <- c(paste0('FDR<', fdr_thres2p), paste0('FDR>', fdr_thres2p))
expression_cap <- 1.5
expression_types2p <- c("RNA", "PRO")
# input TF relation table ----------------------------------------------------------
# TF_tab <- fread(input = "./Ding_Lab/Projects_Current/TP53_shared_data/resources/PPI/TF_interactions.txt", data.table = F)
TF_tab <- fread(input = "./Ding_Lab/Projects_Current/TP53_shared_data/resources/PPI/TF_interactions_TP53_manual.txt", data.table = F)
TF_tab <- TF_tab %>%
mutate(TF_effect = Effect)
TF_tab %>%
filter(source_genesymbol == "TP53" & target_genesymbol == "ESR1")
TF_tab %>%
filter(source_genesymbol == gene_altered) %>%
select(TF_effect) %>%
table()
for (TF_effect_tmp in unique(TF_tab$TF_effect)) {
tab2p <- mut_impact_tab_downstream %>%
filter(SUB_GENE %in% TF_tab$target_genesymbol[TF_tab$source_genesymbol == gene_altered & TF_tab$TF_effect == TF_effect_tmp]) %>%
filter(affected_exp_type %in% expression_types2p)
## only plot genes with significant results
affected_genes2p <- unique(tab2p$SUB_GENE[tab2p$fdr_sig & order(tab2p$meddiff)])
tab2p <- tab2p %>%
filter(SUB_GENE %in% affected_genes2p)
tab2p$meddiff_capped <- add_cap(x = tab2p$meddiff, cap = expression_cap)
tab2p$log10_FDR <- -log10(x = tab2p$fdr)
tab2p$cancer <- order_cancer_rev(x = tab2p$cancer)
tab2p$affected_exp_type <- factor(x = tab2p$affected_exp_type, levels = expression_types2p)
tab2p$variant_class <- order_variant_class(x = tab2p$variant_class)
tab2p$SUB_GENE <- factor(x = tab2p$SUB_GENE, levels = affected_genes2p)
p <- ggplot()
p <- p + geom_point(data = tab2p, mapping = aes(x = SUB_GENE, y = cancer, fill = meddiff_capped, size = log10_FDR,
colour = ifelse(fdr_sig == TRUE, paste0('FDR<', fdr_thres2p), paste0('FDR>', fdr_thres2p))),
alpha = 0.6, shape = 22)
p <- p + facet_grid(affected_exp_type + variant_class ~ ., drop=F, space = "free",scales = "free")
p <- p + scale_color_manual(values = sig_colors)
p = p + scale_fill_gradientn(name= "Mut-WT\nExpression\nFold Change", na.value=NA, colours=RdBu1024, limit=c(-expression_cap,expression_cap))
p <- p + xlab("Affected Gene") + ylab("Cancer Type")
p <- p + guides(colour = guide_legend(title = "FDR"))
p <- p + theme_bw()
p <- p + theme(panel.spacing.x = unit(0, "lines"), panel.spacing.y = unit(0, "lines"))
p <- p + theme(axis.text.y = element_text(size = 10, face = "bold"))
p <- p + theme(axis.text.x = element_text(size = 10, face = "bold", angle = 90, hjust = 0.5, vjust = 0.5))
p <- p + theme(panel.grid.major = element_line(size = 0.5, linetype = 'solid', colour = "grey80"))
p <- p + theme(strip.text.x = element_text(angle = 90))
p
fn <- paste0(makeOutDir(resultD = resultD), gene_altered, "_" , TF_effect_tmp, "_targets_expression_changes.pdf")
if (TF_effect_tmp == "Unknown") {
ggsave(filename = fn,
width = 10, height = 6)
}
if (TF_effect_tmp == "Activation" | TF_effect_tmp == "Inhibition") {
ggsave(filename = fn,
width = 10, height = 6)
}
}
| /p53/figures/grid_bubble_TP53_downstream.R | no_license | ding-lab/phospho-signaling | R | false | false | 4,570 | r | # Yige Wu @ WashU Mar 2019
## to show the downstream effects of TP53 mutations
# source ------------------------------------------------------------------
setwd(dir = "Box Sync/")
source("./cptac2p_analysis/phospho_network/phospho_network_plotting.R")
source("./cptac2p_analysis/phospho_network/phospho_network_shared.R")
source("./cptac2p_analysis/p53/TP53_shared.R")
# input mutational associations -------------------------------------------
mut_impact_tab <- fread(input = "./cptac2p/analysis_results/p53/tables/test_mut_impact_proteome_TP53/TP53_mut_impact_proteome_RNA_cptac2p_cptac3_tab.txt", data.table = F)
# filter for downstrea ----------------------------------------------------
mut_impact_tab_downstream <- mut_impact_tab %>%
filter(SUB_GENE.is_downstream == T)
mut_impact_tab %>%
filter(cancer == "CCRCC" & variant_class == "truncation")
mut_impact_tab %>%
filter(SUB_GENE == "ESR1")
## make sure every cancer type is presented in each facet
mut_impact_tab_downstream2add <- mut_impact_tab_downstream %>%
filter(cancer == "CCRCC" & variant_class == "missense") %>%
mutate(variant_class = "truncation") %>%
mutate(meddiff = NA) %>%
mutate(fdr = NA)
mut_impact_tab_downstream <- rbind(mut_impact_tab_downstream, mut_impact_tab_downstream2add)
# set variables -----------------------------------------------------------
gene_altered <- "TP53"
fdr_thres2p <- 0.05
sig_colors <- c("black", "white")
names(sig_colors) <- c(paste0('FDR<', fdr_thres2p), paste0('FDR>', fdr_thres2p))
expression_cap <- 1.5
expression_types2p <- c("RNA", "PRO")
# input TF relation table ----------------------------------------------------------
# TF_tab <- fread(input = "./Ding_Lab/Projects_Current/TP53_shared_data/resources/PPI/TF_interactions.txt", data.table = F)
TF_tab <- fread(input = "./Ding_Lab/Projects_Current/TP53_shared_data/resources/PPI/TF_interactions_TP53_manual.txt", data.table = F)
TF_tab <- TF_tab %>%
mutate(TF_effect = Effect)
TF_tab %>%
filter(source_genesymbol == "TP53" & target_genesymbol == "ESR1")
TF_tab %>%
filter(source_genesymbol == gene_altered) %>%
select(TF_effect) %>%
table()
for (TF_effect_tmp in unique(TF_tab$TF_effect)) {
tab2p <- mut_impact_tab_downstream %>%
filter(SUB_GENE %in% TF_tab$target_genesymbol[TF_tab$source_genesymbol == gene_altered & TF_tab$TF_effect == TF_effect_tmp]) %>%
filter(affected_exp_type %in% expression_types2p)
## only plot genes with significant results
affected_genes2p <- unique(tab2p$SUB_GENE[tab2p$fdr_sig & order(tab2p$meddiff)])
tab2p <- tab2p %>%
filter(SUB_GENE %in% affected_genes2p)
tab2p$meddiff_capped <- add_cap(x = tab2p$meddiff, cap = expression_cap)
tab2p$log10_FDR <- -log10(x = tab2p$fdr)
tab2p$cancer <- order_cancer_rev(x = tab2p$cancer)
tab2p$affected_exp_type <- factor(x = tab2p$affected_exp_type, levels = expression_types2p)
tab2p$variant_class <- order_variant_class(x = tab2p$variant_class)
tab2p$SUB_GENE <- factor(x = tab2p$SUB_GENE, levels = affected_genes2p)
p <- ggplot()
p <- p + geom_point(data = tab2p, mapping = aes(x = SUB_GENE, y = cancer, fill = meddiff_capped, size = log10_FDR,
colour = ifelse(fdr_sig == TRUE, paste0('FDR<', fdr_thres2p), paste0('FDR>', fdr_thres2p))),
alpha = 0.6, shape = 22)
p <- p + facet_grid(affected_exp_type + variant_class ~ ., drop=F, space = "free",scales = "free")
p <- p + scale_color_manual(values = sig_colors)
p = p + scale_fill_gradientn(name= "Mut-WT\nExpression\nFold Change", na.value=NA, colours=RdBu1024, limit=c(-expression_cap,expression_cap))
p <- p + xlab("Affected Gene") + ylab("Cancer Type")
p <- p + guides(colour = guide_legend(title = "FDR"))
p <- p + theme_bw()
p <- p + theme(panel.spacing.x = unit(0, "lines"), panel.spacing.y = unit(0, "lines"))
p <- p + theme(axis.text.y = element_text(size = 10, face = "bold"))
p <- p + theme(axis.text.x = element_text(size = 10, face = "bold", angle = 90, hjust = 0.5, vjust = 0.5))
p <- p + theme(panel.grid.major = element_line(size = 0.5, linetype = 'solid', colour = "grey80"))
p <- p + theme(strip.text.x = element_text(angle = 90))
print(p)  # explicit print: objects do not auto-print inside a for loop
fn <- paste0(makeOutDir(resultD = resultD), gene_altered, "_" , TF_effect_tmp, "_targets_expression_changes.pdf")
if (TF_effect_tmp %in% c("Unknown", "Activation", "Inhibition")) {
ggsave(filename = fn, plot = p, width = 10, height = 6)
}
}
|
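The plotting loop above calls an `add_cap()` helper defined elsewhere in that repository; only the call signature comes from the code above, and the body below is an assumption about what a fill-scale capping helper plausibly does:

```r
# Hypothetical sketch: clamp values into [-cap, cap] so extreme fold changes
# do not dominate the fill scale (body is assumed, not the repository's code)
add_cap <- function(x, cap) {
  pmax(pmin(x, cap), -cap)
}
add_cap(c(-5, -0.3, 0.8, 7), cap = 2)
#> [1] -2.0 -0.3  0.8  2.0
```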
maximum.dist <- function(data.x, data.y=data.x, rank=FALSE){
dx <- dim(data.x)
dy <- dim(data.y)
if(is.null(dx) & is.null(dy) ){
x.lab <- names(data.x)
y.lab <- names(data.y)
if(rank){
nx <- length(data.x)
ny <- length(data.y)
if((nx==ny) && all(data.x==data.y)){
rx <- rank(data.x, na.last="keep", ties.method="average")/(nx+1)
ry <- rx
}
else{
rxy <- rank(c(data.x, data.y), na.last="keep", ties.method="average")/(nx + ny+1)
rx <- rxy[1:nx]
ry <- rxy[-(1:nx)]
}
mdist <- abs(outer(rx, ry, FUN="-"))
}
else mdist <- abs(outer(data.x, data.y, FUN="-"))
}
else{
if(is.null(dx) & !is.null(dy)){
data.y <- data.matrix(data.y)
if(is.list(data.x)) data.x <- data.matrix(data.x)
else data.x <- matrix(data.x, nrow=1)
}
if(!is.null(dx) & is.null(dy)){
data.x <- data.matrix(data.x)
if(is.list(data.y)) data.y <- data.matrix(data.y)
else data.y <- matrix(data.y, nrow=1)
}
else {
data.x <- data.matrix(data.x)
data.y <- data.matrix(data.y)
}
x.lab <- rownames(data.x)
y.lab <- rownames(data.y)
nx <- nrow(data.x)
ny <- nrow(data.y)
if(rank){
if(all(dx==dy)){
if(all(data.x==data.y)){
rx <- apply(data.x, 2, rank, na.last="keep", ties.method="average")/(nx+1)
ry <- rx
}
else {
rxy <- apply(rbind(data.x,data.y), 2, rank, na.last="keep", ties.method="average")/(nx+ny+1)
rx <- rxy[1:nx,, drop=FALSE]
ry <- rxy[-(1:nx), , drop=FALSE]
}
}
else{
rxy <- apply(rbind(data.x,data.y), 2, rank, na.last="keep", ties.method="average")/(nx+ny+1)
rx <- rxy[1:nx, , drop=FALSE]
ry <- rxy[-(1:nx), , drop=FALSE]
}
}
else{
rx <- data.x
ry <- data.y
}
mdist <- matrix(0, nx, ny)
for(i in 1:nx){
dd <- abs(rx[i,] - t(ry))
mdist[i,] <- apply(dd, 2, max, na.rm=TRUE)
}
}
dimnames(mdist) <- list(x.lab, y.lab)
mdist
}
| /StatMatch_1.3.0/R/maximum.dist.R | no_license | marcellodo/StatMatch | R | false | false | 2,482 | r | maximum.dist <- function(data.x, data.y=data.x, rank=FALSE){
dx <- dim(data.x)
dy <- dim(data.y)
if(is.null(dx) & is.null(dy) ){
x.lab <- names(data.x)
y.lab <- names(data.y)
if(rank){
nx <- length(data.x)
ny <- length(data.y)
if((nx==ny) && all(data.x==data.y)){
rx <- rank(data.x, na.last="keep", ties.method="average")/(nx+1)
ry <- rx
}
else{
rxy <- rank(c(data.x, data.y), na.last="keep", ties.method="average")/(nx + ny+1)
rx <- rxy[1:nx]
ry <- rxy[-(1:nx)]
}
mdist <- abs(outer(rx, ry, FUN="-"))
}
else mdist <- abs(outer(data.x, data.y, FUN="-"))
}
else{
if(is.null(dx) & !is.null(dy)){
data.y <- data.matrix(data.y)
if(is.list(data.x)) data.x <- data.matrix(data.x)
else data.x <- matrix(data.x, nrow=1)
}
if(!is.null(dx) & is.null(dy)){
data.x <- data.matrix(data.x)
if(is.list(data.y)) data.y <- data.matrix(data.y)
else data.y <- matrix(data.y, nrow=1)
}
else {
data.x <- data.matrix(data.x)
data.y <- data.matrix(data.y)
}
x.lab <- rownames(data.x)
y.lab <- rownames(data.y)
nx <- nrow(data.x)
ny <- nrow(data.y)
if(rank){
if(all(dx==dy)){
if(all(data.x==data.y)){
rx <- apply(data.x, 2, rank, na.last="keep", ties.method="average")/(nx+1)
ry <- rx
}
else {
rxy <- apply(rbind(data.x,data.y), 2, rank, na.last="keep", ties.method="average")/(nx+ny+1)
rx <- rxy[1:nx,, drop=FALSE]
ry <- rxy[-(1:nx), , drop=FALSE]
}
}
else{
rxy <- apply(rbind(data.x,data.y), 2, rank, na.last="keep", ties.method="average")/(nx+ny+1)
rx <- rxy[1:nx, , drop=FALSE]
ry <- rxy[-(1:nx), , drop=FALSE]
}
}
else{
rx <- data.x
ry <- data.y
}
mdist <- matrix(0, nx, ny)
for(i in 1:nx){
dd <- abs(rx[i,] - t(ry))
mdist[i,] <- apply(dd, 2, max, na.rm=TRUE)
}
}
dimnames(mdist) <- list(x.lab, y.lab)
mdist
}
|
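The distance computed by `maximum.dist()` above is the L-infinity (Chebyshev) metric: for each pair of records it takes the maximum absolute difference across variables, optionally on ranks. A tiny self-contained illustration of the multivariate branch:

```r
x <- matrix(c(1, 4,
              2, 9), nrow = 2, byrow = TRUE)  # two records, two variables
y <- matrix(c(0, 5), nrow = 1)                # one donor record
# maximum absolute componentwise difference, as in the row loop above
apply(x, 1, function(row) max(abs(row - y)))
#> [1] 1 4
```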
\name{MakeReadable}
\alias{MakeReadable}
\title{Convert line breaks in vignette documentation}
\usage{
MakeReadable(pkg)
}
\arguments{
\item{pkg}{The package to investigate for vignette source files.}
}
\value{
Nothing in the workspace. All files are stored in a vignettes folder within MyBrailleR.
}
\description{
The Rnw files used for vignettes use Linux style line breaks that make reading vignette source files difficult for Windows users. A Python script is called which converts the line breaks and saves the vignette source in the user's MyBrailleR folder.
}
\details{
Must have Python 3.8 installed for this function to work.
}
\author{
A. Jonathan R. Godfrey
}
| /man/MakeReadable.Rd | no_license | cran/BrailleR | R | false | false | 694 | rd | \name{MakeReadable}
\alias{MakeReadable}
\title{Convert line breaks in vignette documentation}
\usage{
MakeReadable(pkg)
}
\arguments{
\item{pkg}{The package to investigate for vignette source files.}
}
\value{
Nothing in the workspace. All files are stored in a vignettes folder within MyBrailleR.
}
\description{
The Rnw files used for vignettes use Linux style line breaks that make reading vignette source files difficult for Windows users. A Python script is called which converts the line breaks and saves the vignette source in the user's MyBrailleR folder.
}
\details{
Must have Python 3.8 installed for this function to work.
}
\author{
A. Jonathan R. Godfrey
}
|
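The Rd above documents converting Unix line endings so Windows users can read vignette sources, delegating to a Python script; the same conversion can be sketched in base R without Python (the function and file names here are illustrative, not BrailleR's implementation):

```r
# Read a file with any line endings and rewrite it with CRLF ("\r\n"),
# which is what Windows editors such as Notepad expect.
unix_to_crlf <- function(infile, outfile) {
  lines <- readLines(infile)            # readLines strips existing endings
  con <- file(outfile, open = "wb")     # binary mode: no extra translation
  writeLines(lines, con, sep = "\r\n")  # re-join each line with CRLF
  close(con)
}
```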
## Function accepts a valid matrix as a parameter
## returns inverse of matrix
## Uses the Solve function to invert a matrix
## Assumes a matrix is always invertible; only works with square matrices
## Usage Example
## my_matrix<-matrix(rnorm(1:16),4,4)
## ds<-makeCacheMatrix(my_matrix)
## cacheSolve(ds)
makeCacheMatrix <- function(x = matrix()) {
m <- NULL
setMatrix <- function(y) {
x <<- y
m <<- NULL
}
getMatrix <- function() x
setSolve <- function(solve) m <<- solve
getSolve <- function() m
list(setMatrix = setMatrix, getMatrix = getMatrix
, setSolve = setSolve, getSolve = getSolve)
}
## Function returns the inverse value of a matrix.
## Value is cached for future references.
cacheSolve <- function(x, ...) {
#Check the matrix is square
if(ncol(x$getMatrix())==nrow(x$getMatrix()))
{
## Return a matrix that is the inverse of 'x'
m <- x$getSolve()
#If value exists retrieve from cache
if(!is.null(m)) {
message("Getting cached data...")
}
#Else calculate inverse value
else{
data <- x$getMatrix()
m <- solve(data, ...)
x$setSolve(m)
}
}
else{
message("Not a square matrix, cannot be inverted...")
m<-NULL
}
return(m)
}
| /cachematrix.R | no_license | barkhaj/ProgrammingAssignment2 | R | false | false | 1,395 | r | ## Function accepts a valid matrix as a parameter
## returns inverse of matrix
## Uses the Solve function to invert a matrix
## Assumes a matrix is always invertible; only works with square matrices
## Usage Example
## my_matrix<-matrix(rnorm(1:16),4,4)
## ds<-makeCacheMatrix(my_matrix)
## cacheSolve(ds)
makeCacheMatrix <- function(x = matrix()) {
m <- NULL
setMatrix <- function(y) {
x <<- y
m <<- NULL
}
getMatrix <- function() x
setSolve <- function(solve) m <<- solve
getSolve <- function() m
list(setMatrix = setMatrix, getMatrix = getMatrix
, setSolve = setSolve, getSolve = getSolve)
}
## Function returns the inverse value of a matrix.
## Value is cached for future references.
cacheSolve <- function(x, ...) {
#Check the matrix is square
if(ncol(x$getMatrix())==nrow(x$getMatrix()))
{
## Return a matrix that is the inverse of 'x'
m <- x$getSolve()
#If value exists retrieve from cache
if(!is.null(m)) {
message("Getting cached data...")
}
#Else calculate inverse value
else{
data <- x$getMatrix()
m <- solve(data, ...)
x$setSolve(m)
}
}
else{
message("Not a square matrix, cannot be inverted...")
m<-NULL
}
return(m)
}
|
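The pair of functions above caches a computed inverse inside a closure via the `<<-` superassignment operator; the same idiom in a condensed, self-contained form:

```r
# Minimal sketch of the closure-caching pattern used above
make_cached_inverse <- function(mat) {
  inv <- NULL
  function() {
    if (is.null(inv)) {
      message("computing inverse...")
      inv <<- solve(mat)    # stored in the enclosing environment
    }
    inv
  }
}
get_inv <- make_cached_inverse(diag(2, 2))
get_inv()  # first call computes and caches the inverse
get_inv()  # second call returns the cached value, no message
```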
\name{StockmanMacLeodJohnson10degConeFundamentals1993}
\alias{StockmanMacLeodJohnson10degConeFundamentals1993}
\title{Stockman, MacLeod & Johnson (1993) cone fundamentals}
\usage{StockmanMacLeodJohnson10degConeFundamentals1993}
\description{\code{StockmanMacLeodJohnson10degConeFundamentals1993} Stockman,
MacLeod & Johnson (1993) 2-deg cone fundamentals based on the
CIE 10-deg CMFs (adjusted to 2-deg).}
\format{
This data frame contains the following data:
\describe{
\item{wlnm}{ wavelength (nm)}
\item{L10}{ L-cone spectral sensitivity, L10(lambda)}
\item{M10}{ M-cone spectral sensitivity, M10(lambda)}
\item{S10}{ S-cone spectral sensitivity, S10(lambda)}
}
}
\source{
The Colour & Vision Research laboratory(CVRL)
Institute of Ophthalmology, University College London
www.cvrl.org
}
\references{
The Colour & Vision Research laboratory(CVRL)
Institute of Ophthalmology, University College London
www.cvrl.org
}
\author{Jose Gama}
\examples{
data(StockmanMacLeodJohnson10degConeFundamentals1993)
StockmanMacLeodJohnson10degConeFundamentals1993
}
\keyword{datasets}
| /man/StockmanMacLeodJohnson10degConeFundamentals1993.Rd | no_license | playwar/colorscience | R | false | false | 1,095 | rd | \name{StockmanMacLeodJohnson10degConeFundamentals1993}
\alias{StockmanMacLeodJohnson10degConeFundamentals1993}
\title{Stockman, MacLeod & Johnson (1993) cone fundamentals}
\usage{StockmanMacLeodJohnson10degConeFundamentals1993}
\description{\code{StockmanMacLeodJohnson10degConeFundamentals1993} Stockman,
MacLeod & Johnson (1993) 2-deg cone fundamentals based on the
CIE 10-deg CMFs (adjusted to 2-deg).}
\format{
This data frame contains the following data:
\describe{
\item{wlnm}{ wavelength (nm)}
\item{L10}{ L-cone spectral sensitivity, L10(lambda)}
\item{M10}{ M-cone spectral sensitivity, M10(lambda)}
\item{S10}{ S-cone spectral sensitivity, S10(lambda)}
}
}
\source{
The Colour & Vision Research laboratory(CVRL)
Institute of Ophthalmology, University College London
www.cvrl.org
}
\references{
The Colour & Vision Research laboratory(CVRL)
Institute of Ophthalmology, University College London
www.cvrl.org
}
\author{Jose Gama}
\examples{
data(StockmanMacLeodJohnson10degConeFundamentals1993)
StockmanMacLeodJohnson10degConeFundamentals1993
}
\keyword{datasets}
|
library(caret)
require(Rmosek)
Sigmoid <- function(X)
{
1/(1 + exp(-X))
}
ProjectSolution <- function()
{
currentWorkDir <- getwd()
setwd('C:\\Users\\new\\Desktop\\Studies\\Sem2\\ML\\Project\\BlogFeedback')
trainAll <- read.csv('blogData_train.csv', header=FALSE)
set.seed(123)
trainAllSample <- trainAll[sample(nrow(trainAll), 5000), ]
trInd <- sample(5000, 3750)
trainAllSampleTr <- trainAllSample[trInd, ]
trainAllSampleTst <- trainAllSample[-trInd, ]
print(nrow(trainAllSampleTr))
optModel <- list()
optModel$sense <- "min"
#optModel$c <- c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
require(Rmosek)
xMatrix <- data.matrix(trainAllSampleTr[,51:60])
yMatrix <- data.matrix(trainAllSampleTr[,"V281"])
#cMatrix <- cbind(yMatrix, xMatrix)
cMatrix <- crossprod(xMatrix,yMatrix)
optModel$c <- as.vector(cMatrix)
X2Matrix <- crossprod(xMatrix, xMatrix)
X2Matrix[upper.tri(X2Matrix)]<-0
inds <- which(X2Matrix != 0, arr.ind=TRUE)
optModel$qobj <- list(i = inds[,1], j = inds[,2], v = X2Matrix[inds])
optModel$A <- Matrix( rep(0, 10), nrow = 1, byrow=TRUE, sparse=TRUE )
blc<- c(-Inf)
buc<- c(Inf)
#blc<-as.vector(yMatrix)
#buc<-as.vector(yMatrix)
optModel$bc <- rbind(blc, buc)
#blx<-c(min(cMatrix[,1]), min(cMatrix[,2]), min(cMatrix[,3]), min(cMatrix[,4]), min(cMatrix[,5]), min(cMatrix[,6]), min(cMatrix[,7]), min(cMatrix[,8]), min(cMatrix[,9]), min(cMatrix[,10]))
#bux<-c(max(cMatrix[,1]), max(cMatrix[,2]), max(cMatrix[,3]), max(cMatrix[,4]), max(cMatrix[,5]), max(cMatrix[,6]), max(cMatrix[,7]), max(cMatrix[,8]), max(cMatrix[,9]), max(cMatrix[,10]))
blx<-rep(-Inf, 10)
bux<-rep(Inf, 10)
optModel$bx<-rbind(blx, bux)
r<-mosek(optModel)
#print(r)
print("Train Data - Mosek Error")
CheckErrorMosekParams(trainAllSampleTr[, 51:60], trainAllSampleTr[, "V281"])
print("Test Data - Mosek Error")
CheckErrorMosekParams(trainAllSampleTst[, 51:60], trainAllSampleTst[, "V281"])
CheckLMErrors(trainAllSampleTr, trainAllSampleTst)
}
CheckErrorMosekParams <- function(data, target){
yHatMat = data[,1]*-0.20473016 + data[,2]*-0.28727579 + data[,3]*0.02768148 + data[,4]*0.21594995 + data[,5]*0.00000000 + data[,6]*2.30850597 + data[,7]*-3.28659353 + data[,8]*-1.41793199 + data[,9]*-1.78338969
#print(yHatMat)
errors <- (target - yHatMat)*(target - yHatMat)
#print(errors)
mse <- mean(errors)
print(mse)
}
CheckLMErrors <- function(data, testD) {
datafr <- data[, c(51:60, 281)]
print(colnames(datafr))
lmodel <- lm(V281~., data=datafr)
print("Train Data - LM Error")
print(mean(lmodel$residuals^2))
#print(lmodel)
predicted <- predict(lmodel, testD[, c(51:60, 281)], se.fit=TRUE)
MSE = mean((predicted$fit - testD[,281])^2)
print("Test Data - LM Error")
print(MSE)
} | /sol.r | no_license | DeepakMR23/MLProject2 | R | false | false | 2,874 | r | library(caret)
require(Rmosek)
Sigmoid <- function(X)
{
1/(1 + exp(-X))
}
ProjectSolution <- function()
{
currentWorkDir <- getwd()
setwd('C:\\Users\\new\\Desktop\\Studies\\Sem2\\ML\\Project\\BlogFeedback')
trainAll <- read.csv('blogData_train.csv', header=FALSE)
set.seed(123)
trainAllSample <- trainAll[sample(nrow(trainAll), 5000), ]
trInd <- sample(5000, 3750)
trainAllSampleTr <- trainAllSample[trInd, ]
trainAllSampleTst <- trainAllSample[-trInd, ]
print(nrow(trainAllSampleTr))
optModel <- list()
optModel$sense <- "min"
#optModel$c <- c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
require(Rmosek)
xMatrix <- data.matrix(trainAllSampleTr[,51:60])
yMatrix <- data.matrix(trainAllSampleTr[,"V281"])
#cMatrix <- cbind(yMatrix, xMatrix)
cMatrix <- crossprod(xMatrix,yMatrix)
optModel$c <- as.vector(cMatrix)
X2Matrix <- crossprod(xMatrix, xMatrix)
X2Matrix[upper.tri(X2Matrix)]<-0
inds <- which(X2Matrix != 0, arr.ind=TRUE)
optModel$qobj <- list(i = inds[,1], j = inds[,2], v = X2Matrix[inds])
optModel$A <- Matrix( rep(0, 10), nrow = 1, byrow=TRUE, sparse=TRUE )
blc<- c(-Inf)
buc<- c(Inf)
#blc<-as.vector(yMatrix)
#buc<-as.vector(yMatrix)
optModel$bc <- rbind(blc, buc)
#blx<-c(min(cMatrix[,1]), min(cMatrix[,2]), min(cMatrix[,3]), min(cMatrix[,4]), min(cMatrix[,5]), min(cMatrix[,6]), min(cMatrix[,7]), min(cMatrix[,8]), min(cMatrix[,9]), min(cMatrix[,10]))
#bux<-c(max(cMatrix[,1]), max(cMatrix[,2]), max(cMatrix[,3]), max(cMatrix[,4]), max(cMatrix[,5]), max(cMatrix[,6]), max(cMatrix[,7]), max(cMatrix[,8]), max(cMatrix[,9]), max(cMatrix[,10]))
blx<-rep(-Inf, 10)
bux<-rep(Inf, 10)
optModel$bx<-rbind(blx, bux)
r<-mosek(optModel)
#print(r)
print("Train Data - Mosek Error")
CheckErrorMosekParams(trainAllSampleTr[, 51:60], trainAllSampleTr[, "V281"])
print("Test Data - Mosek Error")
CheckErrorMosekParams(trainAllSampleTst[, 51:60], trainAllSampleTst[, "V281"])
CheckLMErrors(trainAllSampleTr, trainAllSampleTst)
}
CheckErrorMosekParams <- function(data, target){
yHatMat = data[,1]*-0.20473016 + data[,2]*-0.28727579 + data[,3]*0.02768148 + data[,4]*0.21594995 + data[,5]*0.00000000 + data[,6]*2.30850597 + data[,7]*-3.28659353 + data[,8]*-1.41793199 + data[,9]*-1.78338969
#print(yHatMat)
errors <- (target - yHatMat)*(target - yHatMat)
#print(errors)
mse <- mean(errors)
print(mse)
}
CheckLMErrors <- function(data, testD) {
datafr <- data[, c(51:60, 281)]
print(colnames(datafr))
lmodel <- lm(V281~., data=datafr)
print("Train Data - LM Error")
print(mean(lmodel$residuals^2))
#print(lmodel)
predicted <- predict(lmodel, testD[, c(51:60, 281)], se.fit=TRUE)
MSE = mean((predicted$fit - testD[,281])^2)
print("Test Data - LM Error")
print(MSE)
} |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/paws.ses_operations.R
\name{create_configuration_set_tracking_options}
\alias{create_configuration_set_tracking_options}
\title{Creates an association between a configuration set and a custom domain for open and click event tracking}
\usage{
create_configuration_set_tracking_options(ConfigurationSetName,
TrackingOptions)
}
\arguments{
\item{ConfigurationSetName}{[required] The name of the configuration set that the tracking options should be associated with.}
\item{TrackingOptions}{[required]}
}
\description{
Creates an association between a configuration set and a custom domain for open and click event tracking.
}
\details{
By default, images and links used for tracking open and click events are hosted on domains operated by Amazon SES. You can configure a subdomain of your own to handle these events. For information about using custom domains, see the \href{http://docs.aws.amazon.com/ses/latest/DeveloperGuide/configure-custom-open-click-domains.html}{Amazon SES Developer Guide}.
}
\section{Accepted Parameters}{
\preformatted{create_configuration_set_tracking_options(
ConfigurationSetName = "string",
TrackingOptions = list(
CustomRedirectDomain = "string"
)
)
}
}
| /service/paws.ses/man/create_configuration_set_tracking_options.Rd | permissive | CR-Mercado/paws | R | false | true | 1,275 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/paws.ses_operations.R
\name{create_configuration_set_tracking_options}
\alias{create_configuration_set_tracking_options}
\title{Creates an association between a configuration set and a custom domain for open and click event tracking}
\usage{
create_configuration_set_tracking_options(ConfigurationSetName,
TrackingOptions)
}
\arguments{
\item{ConfigurationSetName}{[required] The name of the configuration set that the tracking options should be associated with.}
\item{TrackingOptions}{[required]}
}
\description{
Creates an association between a configuration set and a custom domain for open and click event tracking.
}
\details{
By default, images and links used for tracking open and click events are hosted on domains operated by Amazon SES. You can configure a subdomain of your own to handle these events. For information about using custom domains, see the \href{http://docs.aws.amazon.com/ses/latest/DeveloperGuide/configure-custom-open-click-domains.html}{Amazon SES Developer Guide}.
}
\section{Accepted Parameters}{
\preformatted{create_configuration_set_tracking_options(
ConfigurationSetName = "string",
TrackingOptions = list(
CustomRedirectDomain = "string"
)
)
}
}
|
myData = read.table("household_power_consumption.txt",header = T,sep = ';')
head(myData)
date1 <- as.Date("2007-02-01",format = "%Y-%m-%d")
date2 <- as.Date("2007-02-02",format = "%Y-%m-%d")
myData <- transform(myData,Date = as.Date(Date,format = "%d/%m/%Y"))
relevantData <- myData[myData$Date==date1 | myData$Date==date2,]
hist(as.numeric(as.character(relevantData$Global_active_power)),main = 'Global Active Power',col = 'red',xlab = 'Global Active Power(kilowatts)')
dev.copy(png,file = 'plot1.png',width = 480,height = 480,units = 'px')
dev.off() | /Assignment1/plot1.R | no_license | agankur21/datasciencecoursera | R | false | false | 551 | r | myData = read.table("household_power_consumption.txt",header = T,sep = ';')
head(myData)
date1 <- as.Date("2007-02-01",format = "%Y-%m-%d")
date2 <- as.Date("2007-02-02",format = "%Y-%m-%d")
myData <- transform(myData,Date = as.Date(Date,format = "%d/%m/%Y"))
relevantData <- myData[myData$Date==date1 | myData$Date==date2,]
hist(as.numeric(as.character(relevantData$Global_active_power)),main = 'Global Active Power',col = 'red',xlab = 'Global Active Power(kilowatts)')
dev.copy(png,file = 'plot1.png',width = 480,height = 480,units = 'px')
dev.off() |
# Peer Graded Assignment
setwd("C:/Users/Ando/Dropbox/Coursera/Reproducible Research/Peer Graded Assignment")
# Loading data
df <- read.csv(file="activity.csv", header=T, sep=",", stringsAsFactors=F)
str(df)
# 2. Histogram of the total number of steps (tns) taken each day
tns <- with(data=df, tapply(steps, date, sum, na.rm=T ))
br <- seq(0, 25000, 1000)
hist(tns, col="steelblue", breaks = br, xlab="Total number of steps",
main="Histogram of total number of steps (per day)")
# 3. Mean and median number of steps taken each day
coef <- c(Mean.of.Steps = mean(tns), Median.of.Steps = median(tns))
# 4. Time series plot of the average number of steps taken
tns.avg <- with(data=df, tapply(steps, date, mean, na.rm=T))
plot(as.Date(unique(df$date)), tns.avg, t="h", lwd="3",
ylab="Average number of steps", xlab="Date")
points(as.Date(unique(df$date)), tns.avg, pch=19)
# The 5-minute interval that, on average, contains the maximum number
# of steps
int5min <- with(data=df, tapply(steps, interval, mean, na.rm=T))
max(int5min)
# Code to describe and show a strategy for imputing missing data
sum(!complete.cases(df))
ix <- is.na(df$steps)
library(dplyr)
df.m <- tbl_df(df)
df.m <- df.m %>%
group_by(interval) %>%
mutate(avg = mean(steps, na.rm=T))
df.m$steps[ix] <- df.m$avg[ix]
tns.m <- with(data=df.m, tapply(steps, date, sum))
hist(tns.m, col="steelblue", breaks = br, xlab="Total number of steps",
main="Histogram of total number of steps (per day)")
coef.m <- c(Mean.of.Steps = mean(tns.m), Median.of.Steps = median(tns.m))
# Are there differences in activity patterns between weekdays and weekends?
wd <- weekdays(as.Date(df$date))
dummy <- as.factor(wd %in% c("Saturday", "Sunday"))  # %in%, not ==: == would recycle the two names
levels(dummy) <- c("weekday", "weekend")
table(dummy)
df.m <- cbind(df.m, dummy)
library(lattice)
df.n <- df.m %>%
group_by(dummy, interval) %>%
mutate(m = mean(steps))
xyplot(data=df.n, m~interval|dummy, layout=c(1, 2), t="s",
ylab="Number of steps", xlab="Interval")
| /Script.R | no_license | andronikoss/RepData_PeerAssessment1 | R | false | false | 2,066 | r | # Peer Graded Assignment
setwd("C:/Users/Ando/Dropbox/Coursera/Reproducible Research/Peer Graded Assignment")
# Loading data
df <- read.csv(file="activity.csv", header=T, sep=",", stringsAsFactors=F)
str(df)
# 2. Histogram of the total number of steps (tns) taken each day
tns <- with(data=df, tapply(steps, date, sum, na.rm=T ))
br <- seq(0, 25000, 1000)
hist(tns, col="steelblue", breaks = br, xlab="Total number of steps",
main="Histogram of total number of steps (per day)")
# 3. Mean and median number of steps taken each day
coef <- c(Mean.of.Steps = mean(tns), Median.of.Steps = median(tns))
# 4. Time series plot of the average number of steps taken
tns.avg <- with(data=df, tapply(steps, date, mean, na.rm=T))
plot(as.Date(unique(df$date)), tns.avg, t="h", lwd="3",
ylab="Average number of steps", xlab="Date")
points(as.Date(unique(df$date)), tns.avg, pch=19)
# The 5-minute interval that, on average, contains the maximum number
# of steps
int5min <- with(data=df, tapply(steps, interval, mean, na.rm=T))
max(int5min)
# Code to describe and show a strategy for imputing missing data
sum(!complete.cases(df))
ix <- is.na(df$steps)
library(dplyr)
df.m <- tbl_df(df)
df.m <- df.m %>%
group_by(interval) %>%
mutate(avg = mean(steps, na.rm=T))
df.m$steps[ix] <- df.m$avg[ix]
tns.m <- with(data=df.m, tapply(steps, date, sum))
hist(tns.m, col="steelblue", breaks = br, xlab="Total number of steps",
main="Histogram of total number of steps (per day)")
coef.m <- c(Mean.of.Steps = mean(tns.m), Median.of.Steps = median(tns.m))
# Are there differences in activity patterns between weekdays and weekends?
wd <- weekdays(as.Date(df$date))
dummy <- as.factor(wd %in% c("Saturday", "Sunday"))  # %in%, not ==: == would recycle the two names
levels(dummy) <- c("weekday", "weekend")
table(dummy)
df.m <- cbind(df.m, dummy)
library(lattice)
df.n <- df.m %>%
group_by(dummy, interval) %>%
mutate(m = mean(steps))
xyplot(data=df.n, m~interval|dummy, layout=c(1, 2), t="s",
ylab="Number of steps", xlab="Interval")
|
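The weekday/weekend split above hinges on comparing a vector of day names against two values; `==` recycles its right-hand side elementwise, so `%in%` is the operator that expresses set membership:

```r
wd <- c("Monday", "Saturday", "Sunday", "Tuesday")
wd == c("Saturday", "Sunday")    # recycled pairwise comparison: all FALSE here
wd %in% c("Saturday", "Sunday")  # set membership: FALSE TRUE TRUE FALSE
```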
p <- ggplot(mtcars, aes(mpg, wt)) +
geom_point(aes(colour = factor(cyl)))
cols <- c("8" = "red", "4" = "blue", "6" = "darkgreen", "10" = "orange")
p <- p + scale_colour_manual(values = cols, limits = c("4", "6", "8", "10"))
| /ggplot2/Scales/scale_manual/example8.R | no_license | plotly/ssim_baselines | R | false | false | 227 | r | p <- ggplot(mtcars, aes(mpg, wt)) +
geom_point(aes(colour = factor(cyl)))
cols <- c("8" = "red", "4" = "blue", "6" = "darkgreen", "10" = "orange")
p <- p + scale_colour_manual(values = cols, limits = c("4", "6", "8", "10"))
|
source('paste.r')
source('paste.r')
a<-"manisha"
b<-"saini"
paste(a,b)
source('paste.r')
source('paste.r')
q()
5
5.5
-5.5
5/3
class(5)
class(5.5)
class(-5.5)
class(5/3)
# logical data type in R
class(TRUE)
class(FALSE)
a<-8; b<-9
a>b
class(a>b)
b>a
class(b>a)
# character data type in R
'A'
"A"
"manisha"
"@"
class('A')
class("A")
class("manisha")
class("@")
a<- "A"
class(a)
# complex data type in R
1+2i
4-5i
class(1+2i)
class(4-5i)
# integer data type in R
class(2L)
class("2L")
a<-as.integer(2)
class(a)
a<-2
class(2)
# data type conversions in R
a<-6
class(a)
a1<-as.character(a)
a1
class(a1)
# conversion of character data type to logical data type
a2<-as.logical(a1)
class(a2)
# conversion of logical data type to complex data type
a3<-as.complex(a2)
a3
class(a3)
| /Data_types.r | no_license | SainiManisha/r-tutorial | R | false | false | 846 | r | source('paste.r')
source('paste.r')
a<-"manisha"
b<-"saini"
paste(a,b)
source('paste.r')
source('paste.r')
q()
5
5.5
-5.5
5/3
class(5)
class(5.5)
class(-5.5)
class(5/3)
# logical data type in R
class(TRUE)
class(FALSE)
a<-8; b<-9
a>b
class(a>b)
b>a
class(b>a)
# character data type in R
'A'
"A"
"manisha"
"@"
class('A')
class("A")
class("manisha")
class("@")
a<- "A"
class(a)
# complex data type in R
1+2i
4-5i
class(1+2i)
class(4-5i)
# integer data type in R
class(2L)
class("2L")
a<-as.integer(2)
class(a)
a<-2
class(2)
# data type conversions in R
a<-6
class(a)
a1<-as.character(a)
a1
class(a1)
# conversion of character data type to logical data type
a2<-as.logical(a1)
class(a2)
# conversion of logical data type to complex data type
a3<-as.complex(a2)
a3
class(a3)
|
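One conversion pitfall worth noting alongside the coercion examples above: a character string that does not parse as the target type yields `NA` (with a warning for numerics) rather than an error:

```r
as.numeric("6")     # 6
as.numeric("six")   # NA, with a warning: NAs introduced by coercion
as.logical("6")     # NA -- only "TRUE"/"FALSE"-style strings convert
as.logical("TRUE")  # TRUE
```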
## Master data
if (Sys.info()['sysname'] == "Linux") {
pp <- read.csv("~/Dropbox/Shared/Data/pptreemas10bv.csv")
} else
pp <- read.csv("C:/Users/noah/Dropbox/Shared/Data/pptreemas10bv.csv")
## Data pertaining to crown shapes:
## - Data from years 86/87 for plots in H/M/L
## - Only data from EAST
## - Length along two perpendicular axes
## - Direction of long axis
## -
p86 <- !is.na(pp$CLONG86)
p87 <- !is.na(pp$CLONG87)
## Total number of trees with crown data from both years by plot
table(pp[!is.na(pp$CLONG86) | !is.na(pp$CLONG87), "PPLOT"]) # Long axis data
table(pp[!is.na(pp$CPERP86) | !is.na(pp$CPERP87), "PPLOT"]) # Perp axis data
table(pp[!is.na(pp$CAZLNG86) | !is.na(pp$CAZLNG87), "PPLOT"]) # Long axis direction
## Number of trees in each plot by year
table(pp[p86, "PPLOT"]) # 1986
table(pp[p87, "PPLOT"]) # 1987
## Number of trees by elevation (both years)
table(pp[p86 | p87, "ELEVCL"])
## Number of trees by aspect (both years)
table(pp[p86 | p87, "ASPCL"]) # Only east side
| /surround/rewrite/scratch.R | no_license | nverno/neighborhoods | R | false | false | 1,008 | r | ## Master data
if (Sys.info()['sysname'] == "Linux") {
pp <- read.csv("~/Dropbox/Shared/Data/pptreemas10bv.csv")
} else
pp <- read.csv("C:/Users/noah/Dropbox/Shared/Data/pptreemas10bv.csv")
## Data pertaining to crown shapes:
## - Data from years 86/87 for plots in H/M/L
## - Only data from EAST
## - Length along two perpendicular axes
## - Direction of long axis
## -
p86 <- !is.na(pp$CLONG86)
p87 <- !is.na(pp$CLONG87)
## Total number of trees with crown data from both years by plot
table(pp[!is.na(pp$CLONG86) | !is.na(pp$CLONG87), "PPLOT"]) # Long axis data
table(pp[!is.na(pp$CPERP86) | !is.na(pp$CPERP87), "PPLOT"]) # Perp axis data
table(pp[!is.na(pp$CAZLNG86) | !is.na(pp$CAZLNG87), "PPLOT"]) # Long axis direction
## Number of trees in each plot by year
table(pp[p86, "PPLOT"]) # 1986
table(pp[p87, "PPLOT"]) # 1987
## Number of trees by elevation (both years)
table(pp[p86 | p87, "ELEVCL"])
## Number of trees by aspect (both years)
table(pp[p86 | p87, "ASPCL"]) # Only east side
|
mydata<-read.table("household_power_consumption.txt",header=TRUE,quote="",sep=";",dec=".",stringsAsFactors = FALSE)
step1<-subset(mydata,as.Date(Date,"%d/%m/%Y")=="2007-02-02" | as.Date(Date,"%d/%m/%Y")=="2007-02-01")
step1$Time<-strftime(strptime(step1$Time,"%R"),"%H:%M")
install.packages("stringr")
library(stringr)
step1$Time<-str_replace(step1$Time,":",".")
step2<-data.frame(cbind(step1$Time,step1$Global_active_power))
names(step2)<-c("readingTime","activePower")
step2$readingTime<-as.numeric(as.character(step2$readingTime))
step2$activePower<-as.numeric(as.character(step2$activePower))
step3<-step2[order(step2$readingTime),]
par(mfrow=c(1,1),mar=c(4,4,2,1))
hist(step3$activePower,main="Global Active Power",ylab="Frequency",xlab="Global Active Power (kilowatts)",col="red")
dev.copy(png,file="plot1.png",width=480,height=480)
dev.off()
| /plot1.R | no_license | mgoldade/ExData_Plotting1 | R | false | false | 866 | r | mydata<-read.table("household_power_consumption.txt",header=TRUE,quote="",sep=";",dec=".",stringsAsFactors = FALSE)
step1<-subset(mydata,as.Date(Date,"%d/%m/%Y")=="2007-02-02" | as.Date(Date,"%d/%m/%Y")=="2007-02-01")
step1$Time<-strftime(strptime(step1$Time,"%R"),"%H:%M")
install.packages("stringr")
library(stringr)
step1$Time<-str_replace(step1$Time,":",".")
step2<-data.frame(cbind(step1$Time,step1$Global_active_power))
names(step2)<-c("readingTime","activePower")
step2$readingTime<-as.numeric(as.character(step2$readingTime))
step2$activePower<-as.numeric(as.character(step2$activePower))
step3<-step2[order(step2$readingTime),]
par(mfrow=c(1,1),mar=c(4,4,2,1))
hist(step3$activePower,main="Global Active Power",ylab="Frequency",xlab="Global Active Power (kilowatts)",col="red")
dev.copy(png,file="plot1.png",width=480,height=480)
dev.off()
|
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/datasets.R
\docType{data}
\name{TF_CENSUS_2011_HC10_L}
\alias{TF_CENSUS_2011_HC10_L}
\title{TF_CENSUS_2011_HC10_L}
\description{
census2011: TF_CENSUS_2011_HC10_L. More information about this data can be found in the inst/docs folder and at \url{http://statbel.fgov.be/nl/statistieken/opendata/datasets/census2011}
}
\examples{
data(TF_CENSUS_2011_HC10_L)
str(TF_CENSUS_2011_HC10_L)
}
\references{
\url{http://statbel.fgov.be/nl/statistieken/opendata/home}, \url{http://statbel.fgov.be/nl/statistieken/opendata/datasets/census2011}
}
| /BelgiumStatistics/man/TF_CENSUS_2011_HC10_L.Rd | permissive | 0tertra/BelgiumStatistics | R | false | false | 621 | rd | % Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/datasets.R
\docType{data}
\name{TF_CENSUS_2011_HC10_L}
\alias{TF_CENSUS_2011_HC10_L}
\title{TF_CENSUS_2011_HC10_L}
\description{
census2011: TF_CENSUS_2011_HC10_L. More information about this data can be found in the inst/docs folder and at \url{http://statbel.fgov.be/nl/statistieken/opendata/datasets/census2011}
}
\examples{
data(TF_CENSUS_2011_HC10_L)
str(TF_CENSUS_2011_HC10_L)
}
\references{
\url{http://statbel.fgov.be/nl/statistieken/opendata/home}, \url{http://statbel.fgov.be/nl/statistieken/opendata/datasets/census2011}
}
|
X <- nnTensor::toyModel("NMF")
out1 <- myNMF(X, k=10)
expect_equivalent(dim(out1), c(100, 10))
| /tests/testthat/test_myNMF.R | permissive | rikenbit/mwTensor | R | false | false | 97 | r | X <- nnTensor::toyModel("NMF")
out1 <- myNMF(X, k=10)
expect_equivalent(dim(out1), c(100, 10))
|
testlist <- list(hi = 9.61276249046606e+281, lo = 9.612762490466e+281, mu = 9.61276249046606e+281, sig = 9.61276249046606e+281)
result <- do.call(gjam:::tnormRcpp,testlist)
str(result) | /gjam/inst/testfiles/tnormRcpp/libFuzzer_tnormRcpp/tnormRcpp_valgrind_files/1610047183-test.R | no_license | akhikolla/updated-only-Issues | R | false | false | 189 | r | testlist <- list(hi = 9.61276249046606e+281, lo = 9.612762490466e+281, mu = 9.61276249046606e+281, sig = 9.61276249046606e+281)
result <- do.call(gjam:::tnormRcpp,testlist)
str(result) |
# function for plotting (requires the 'dygraphs' package, which also provides the `%>%` pipe)
library(dygraphs)
dyplot <- function(correl_data){
dygraph(data=correl_data, main = "Cross-correlation") %>%
dyRangeSelector(fillColor = "grey") %>%
dyHighlight(highlightSeriesOpts = list(strokeWidth = 3),
highlightCircleSize = 4,
highlightSeriesBackgroundAlpha = 0.75) %>%
dyLegend(show = "follow", hideOnMouseOut = T, width = 300,
labelsSeparateLines= TRUE) %>%
dyAxis("y", label = "Cross-correlation of WTI", axisLabelFontSize=15,labelWidth=20, axisLabelWidth=40) %>%
dyAxis("x", label = "Date", axisLabelFontSize=20,labelWidth=30, axisLabelWidth=60)
}
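
# A minimal usage sketch (hypothetical data): dyplot() expects a time-indexed
# object such as an xts/zoo series whose columns are cross-correlation values.
# The series names and values below are invented for illustration only.
#
#   library(dygraphs)
#   library(xts)
#
#   dates <- seq(as.Date("2020-01-01"), by = "month", length.out = 24)
#   correl_data <- xts(cbind(Brent = runif(24, -1, 1), Gold = runif(24, -1, 1)),
#                      order.by = dates)
#
#   dyplot(correl_data)  # renders an interactive dygraph in the viewer/browser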
# source: /dyplot.r (banna11/efrp_r_BanocziHeilmannReizinger)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{san_joaquin_river_instream}
\alias{san_joaquin_river_instream}
\title{San Joaquin River Instream Flow to Habitat Area Relationship}
\format{dataframe with 14 rows and 5 variables:
\describe{
\item{flow_cfs}{integer flow value in cubic feet per second}
\item{FR_spawn_wua}{spawning WUA in square feet per 1000 feet}
\item{FR_fry_wua}{fry (up to 50 mm) WUA in square feet per 1000 feet}
\item{FR_juv_wua}{juvenile WUA in square feet per 1000 feet}
\item{watershed}{name of watershed}
}}
\source{
Sadie Gill
}
\usage{
san_joaquin_river_instream
}
\description{
A dataset containing the Weighted Usable Area (WUA) in square feet per 1000 feet
as a function of flow in cubic feet per second
}
\details{
The Lower San Joaquin River instream rearing habitat has not been modeled.
The WUA values were estimated using the mean WUA at each flow from the Merced, Stanislaus,
and Tuolumne Rivers.
}
\examples{
san_joaquin_river_instream
}
\keyword{datasets}
% source: /man/san_joaquin_river_instream.Rd (FlowWest/cvpiaHabitat)
# Copyright 2017 Province of British Columbia
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
#' Extract monthly levels information from the HYDAT database
#'
#' Tidy data of monthly river or lake level information from the DLY_LEVELS HYDAT table. \code{station_number} and
#' \code{prov_terr_state_loc} can both be supplied. If both are omitted, all values from the \code{hy_stations}
#' table are returned, which is a very large result for \code{hy_monthly_levels}.
#'
#' @inheritParams hy_stations
#' @param start_date Leave blank if all dates are required. Date format needs to be in YYYY-MM-DD. Date is inclusive.
#' @param end_date Leave blank if all dates are required. Date format needs to be in YYYY-MM-DD. Date is inclusive.
#'
#' @return A tibble of monthly levels.
#'
#' @format A tibble with 8 variables:
#' \describe{
#' \item{STATION_NUMBER}{Unique 7 digit Water Survey of Canada station number}
#' \item{YEAR}{Year of record.}
#' \item{MONTH}{Numeric month value}
#'   \item{FULL_MONTH}{Logical value indicating whether there is a full record for MONTH}
#' \item{NO_DAYS}{Number of days in that month}
#' \item{Sum_stat}{Summary statistic being used.}
#' \item{Value}{Value of the measurement in metres.}
#'   \item{Date_occurred}{Observation date. Formatted as a Date class. MEAN is a monthly summary
#'   and therefore has an NA value for Date.}
#' }
#'
#' @examples
#' \dontrun{
#' hy_monthly_levels(station_number = c("02JE013","08MF005"),
#' start_date = "1996-01-01", end_date = "2000-01-01")
#'
#' hy_monthly_levels(prov_terr_state_loc = "PE")
#' }
#' @family HYDAT functions
#' @source HYDAT
#' @export
hy_monthly_levels <- function(station_number = NULL,
hydat_path = NULL,
prov_terr_state_loc = NULL, start_date ="ALL", end_date = "ALL") {
if (!is.null(station_number) && station_number == "ALL") {
    stop("Deprecated behaviour. Omit the station_number = \"ALL\" argument. See ?hy_monthly_levels for examples.")
}
## Read in database
hydat_con <- hy_src(hydat_path)
if (!dplyr::is.src(hydat_path)) {
on.exit(hy_src_disconnect(hydat_con))
}
if (start_date == "ALL" & end_date == "ALL") {
message("No start and end dates specified. All dates available will be returned.")
} else {
    ## When we want date constraints we need to break apart the dates because SQL has no native date format
## Start
start_year <- lubridate::year(start_date)
start_month <- lubridate::month(start_date)
start_day <- lubridate::day(start_date)
## End
end_year <- lubridate::year(end_date)
end_month <- lubridate::month(end_date)
end_day <- lubridate::day(end_date)
}
## Check date is in the right format
if (start_date != "ALL" | end_date != "ALL") {
if (is.na(as.Date(start_date, format = "%Y-%m-%d")) | is.na(as.Date(end_date, format = "%Y-%m-%d"))) {
stop("Invalid date format. Dates need to be in YYYY-MM-DD format")
}
if (start_date > end_date) {
stop("start_date is after end_date. Try swapping values.")
}
}
## Determine which stations we are querying
stns <- station_choice(hydat_con, station_number, prov_terr_state_loc)
## Creating rlang symbols
sym_YEAR <- sym("YEAR")
sym_STATION_NUMBER <- sym("STATION_NUMBER")
sym_variable <- sym("variable")
sym_temp <- sym("temp")
sym_temp2 <- sym("temp2")
## Data manipulations to make it "tidy"
monthly_levels <- dplyr::tbl(hydat_con, "DLY_LEVELS")
monthly_levels <- dplyr::filter(monthly_levels, !!sym_STATION_NUMBER %in% stns)
  ## Do the initial subset to take advantage of dbplyr only issuing the SQL query when it has to
if (start_date != "ALL" | end_date != "ALL") {
monthly_levels <- dplyr::filter(monthly_levels, !!sym_YEAR >= start_year &
!!sym_YEAR <= end_year)
#monthly_levels <- dplyr::filter(monthly_levels, MONTH >= start_month &
# MONTH <= end_month)
}
monthly_levels <- dplyr::select(monthly_levels, .data$STATION_NUMBER:.data$MAX)
monthly_levels <- dplyr::collect(monthly_levels)
if(is.data.frame(monthly_levels) && nrow(monthly_levels)==0)
{stop("This station is not present in HYDAT")}
## Need to rename columns for gather
colnames(monthly_levels) <- c("STATION_NUMBER","YEAR","MONTH", "PRECISION_CODE", "FULL_MONTH", "NO_DAYS", "MEAN_Value",
"TOTAL_Value", "MIN_DAY","MIN_Value", "MAX_DAY","MAX_Value")
monthly_levels <- tidyr::gather(monthly_levels, !!sym_variable, !!sym_temp, -(.data$STATION_NUMBER:.data$NO_DAYS))
monthly_levels <- tidyr::separate(monthly_levels, !!sym_variable, into = c("Sum_stat","temp2"), sep = "_")
monthly_levels <- tidyr::spread(monthly_levels, !!sym_temp2, !!sym_temp)
  ## Convert into an R Date for the date of occurrence.
monthly_levels <- dplyr::mutate(monthly_levels, Date_occurred = lubridate::ymd(
paste0(.data$YEAR, "-", .data$MONTH, "-", .data$DAY), quiet = TRUE))
  ## TODO: dates can convert incorrectly. Make sure NA DAYs aren't converted into dates
monthly_levels <- dplyr::select(monthly_levels, -.data$DAY)
monthly_levels <- dplyr::mutate(monthly_levels, FULL_MONTH = .data$FULL_MONTH == 1)
## What stations were missed?
differ_msg(unique(stns), unique(monthly_levels$STATION_NUMBER))
monthly_levels
}
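
# Standalone illustration (toy data, not part of tidyhydat): how the
# gather/separate/spread chain above reshapes the wide MEAN/MIN/MAX columns
# into one tidy row per summary statistic.
#
#   library(tidyr)
#
#   # Toy stand-in for the collected DLY_LEVELS columns after renaming
#   wide <- data.frame(STATION_NUMBER = "02JE013", YEAR = 1996, MONTH = 1,
#                      MEAN_Value = 1.2, MIN_DAY = 5, MIN_Value = 0.9,
#                      MAX_DAY = 20, MAX_Value = 1.8)
#
#   tidy <- wide |>
#     gather(variable, temp, -(STATION_NUMBER:MONTH)) |>
#     separate(variable, into = c("Sum_stat", "temp2"), sep = "_") |>
#     spread(temp2, temp)
#
#   tidy  # three rows (MAX, MEAN, MIN) with DAY and Value columns; MEAN has DAY = NA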
# source: /R/hy_monthly_levels.R (nunofernandes-plight/tidyhydat)
NEI <- readRDS("summarySCC_PM25.rds")
SCC <- readRDS("Source_Classification_Code.rds")
selected = grep("motor.*vehicle",SCC$Short.Name,ignore.case = T)
selected = SCC[selected,'SCC']
data=NEI[ NEI$SCC %in% selected & NEI$fips=="24510",]
data=aggregate(data$Emissions,by=list(data$year),FUN = sum)
names(data)=c('year','total_emissions')
data$year=as.factor(data$year)
png('plot5.png')
barplot(data$total_emissions,names.arg = data$year,xlab='year',
ylab='total_emissions',main='total emission for each year')
dev.off()
# source: /exdata-016/project2/plot5.R (wujohn1990/coursera_courses)
\name{echin2}
\docType{data}
\alias{echin2}
\title{Life History Data on Echinacea angustifolia}
\description{
Data on life history traits for the narrow-leaved purple coneflower
\emph{Echinacea angustifolia}
}
\usage{data(echin2)}
\format{
A data frame with records for 557 plants observed over five years.
Data are already in \dQuote{long} format; no need to reshape.
\describe{
\item{resp}{Response vector.}
\item{varb}{Categorical. Gives node of graphical model corresponding
to each component of \code{resp}. See details below.}
\item{root}{All ones. Root variables for graphical model.}
\item{id}{Categorical. Indicates individual plants.}
\item{flat}{Categorical. Position in growth chamber.}
\item{row}{Categorical. Row in the field.}
\item{posi}{Numerical. Position within row in the field.}
\item{crosstype}{Categorical. See details.}
\item{yearcross}{Categorical. Year in which cross was done.}
}
}
\details{
The levels of \code{varb} indicate nodes of the graphical model to which
the corresponding elements of the response vector \code{resp} belong.
This is the typical \dQuote{long} format produced by the R \code{reshape}
function. For each individual, there are several response variables.
All response variables are combined in one vector \code{resp}.
The variable \code{varb} indicates which \dQuote{original} variable
the number was for. The variable \code{id} indicates which individual
the number was for. The levels of \code{varb}, which are the names
of the \dQuote{original} variables are
\describe{
\item{lds1}{Survival for the first month in the growth
chamber.}
\item{lds2}{Ditto for 2nd month in the growth chamber.}
\item{lds3}{Ditto for 3rd month in the growth chamber.}
\item{ld01}{Survival for first year in the field.}
\item{ld02}{Ditto for 2nd year in the field.}
\item{ld03}{Ditto for 3rd year in the field.}
\item{ld04}{Ditto for 4th year in the field.}
\item{ld05}{Ditto for 5th year in the field.}
\item{roct2003}{Rosette count, measure of size and vigor,
recorded for 3rd year in the field.}
\item{roct2004}{Ditto for 4th year in the field.}
\item{roct2005}{Ditto for 5th year in the field.}
}
These data are complicated by the experiment being done in two parts.
Plants start their life indoors in a growth chamber. The predictor
variable \code{flat} only makes sense during this time in which three
response variables \code{lds1}, \code{lds2}, and \code{lds3} are observed.
After three months in the growth chamber, the plants (if they
survived, i. e., if \code{lds3 == 1}) were planted in an experimental
field plot outdoors. The variables \code{row} and \code{posi} only make
sense during this time in which all of the rest of the response variables
are observed. Because of certain predictor variables only making sense
with respect to certain components of the response vector, the R formula
mini-language is unable to cope, and model matrices must be constructed
"by hand".
\emph{Echinacea angustifolia} is native to North American tallgrass prairie,
which was once extensive but now exists only in isolated remnants.
To evaluate the effects of different mating regimes on the fitness of
resulting progeny, crosses were conducted to produce progeny of (a) mates
from different remnants, (b) mates chosen at random from the same remnant,
and (c) mates known to share maternal parent. These three categories are
the three levels of \code{crosstype}.
}
\source{
Stuart Wagenius,
\url{https://www.chicagobotanic.org/research/staff/wagenius}
}
\references{
These data are analyzed in the following.
Shaw, R. G., Geyer, C. J., Wagenius, S., Hangelbroek, H. H.,
and Etterson, J. R. (2008)
Unifying life history analyses for inference of fitness and population growth.
\emph{American Naturalist}, \bold{172}, E35-E47.
\doi{10.1086/588063}.
}
\examples{
data(echin2)
}
\keyword{datasets}
% source: /man/echin2.Rd (cran/aster)
rm(list=ls())
library("ggpubr")
setwd("C:/Users/kaylee/Documents/Kaylee_Stuff/Smith_Lab/Data/Mouse_Validation_Data/Fetal_EPO")
PCR_file <- "FetLivEPO_Ctcalcs_file1.csv"
data <- read.csv(PCR_file, header=TRUE)
data_frame <- data.frame(data)
MD <- data_frame[is.element(data_frame$Biological.Sets, "MD"),]
EtOH <- data_frame[is.element(data_frame$Biological.Sets, "EtOH"),]
sets <- data.frame(MD$Cq.Averages)
sets$EtOH <- EtOH$Cq.Averages
colnames(sets)[1] <- "MD" #renaming the first column
treatments <- c(sets$MD, sets$EtOH)
tech_repl <- 9 #number of biological replicates
set_nums <- 1 #number of technical plate replicates
reps <- tech_repl*set_nums #reps should match the number of rows
groups <- c(rep("MD", reps), rep("EtOH", reps))
join <- data.frame(treatments, groups)
#Checking for the assumption of equal variance
bartlett.test(treatments, groups)
#checking for assumption of normality
shapiro.test(data_frame$Cq.Averages)
hist(data_frame$Cq.Averages)
qqnorm(data_frame$Cq.Averages)
qqline(data_frame$Cq.Averages)
#student's t-test for normal data
t.test(sets$MD, sets$EtOH)
#Wilcoxon rank-sum test for non-normal data
wilcox.test(sets$MD, sets$EtOH, alternative = "two.sided", exact = FALSE)
#Outlier Test
summary(MD$Cq.Averages)
summary(EtOH$Cq.Averages)
library(ggplot2)
order <- c("MD", "EtOH")
p <- ggplot(join, aes(x=groups, y=treatments)) +
geom_boxplot(outlier.shape=NA) + #avoid plotting outliers twice
geom_jitter(position=position_jitter(width=.1, height=0)) +
theme(plot.title = element_text(hjust = 0.5, size=18),
axis.title.x = element_blank(), #gets rid of x-axis label
panel.grid.major = element_blank(), #gets rid of major gridlines
panel.grid.minor = element_blank(), #gets rid of minor gridlines
panel.background = element_blank(), #turns background white instead of gray
axis.line = element_line(colour = "black"),
legend.position="none", #gets rid of legend
axis.text=element_text(size=18, color = "black"), #sets size of x and y axis labels
axis.title=element_text(size=14,face="bold")) +
scale_x_discrete (limits = order)
p+labs(title=("Fetal Liver Hepcidin Expression: Ct Values"), y = ("Fetal Hepc/Gapdh dCt")) +
scale_fill_manual(values = c("#3399cc", "#0000FF"))
# source: /FetLivEPO_Stats.R (kayleehelfrich/MouseProjectGraphs)
get_emails_webpage <- function(urls, return.df=F, email.href=F) {
  # scrape emails from webpages
library(XML)
library(stringr)
library(RCurl)
# define email pattern
emailpattern <- '([a-zA-Z0-9._-]*-)?[[:alnum:]\\-_.%+]+@[[:alnum:]\\-_.%+]+\\.[[:alpha:]]+'
emails <- list()
# download cert if required
  # download CA certificate bundle if required (needed for HTTPS via RCurl on Windows)
  if(.Platform$OS.type == "windows") { if(!file.exists("cacert.pem")) download.file(url="http://curl.haxx.se/ca/cacert.pem", destfile="cacert.pem") }
  # download webpages and extract emails
  for(i in 1:length(urls)) {
    if(email.href==F){
      web.temp <- try(gsub("mailto:(.*?)@", "", getURL(urls[i], cainfo = "cacert.pem")))
      emails[i] <- try(str_extract_all(web.temp, emailpattern))
cat(".")
} else {
web.temp <- try(as.character(xpathSApply(htmlParse(urls[i]), "//a/@href")))
emails[[i]] <- try(as.list(gsub("mailto:", "", web.temp[grep("mailto:", web.temp)])))
cat(".")
}
}
if(return.df==T) {
# create dataframe with aggregate results
max.length <- max(unlist(lapply(emails, length)))
emails.df <- do.call(rbind.data.frame, lapply(emails, function(v) { c(v, rep(NA, max.length-length(v)))}))
colnames(emails.df) <- paste0("Email_", seq(1:max.length))
results <- cbind(data.frame(URL=as.character(urls)), emails.df)
return(results)
} else {
return(unlist(emails))
}
}
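
# Hypothetical usage sketch: the URLs below are placeholders, and scraping
# real pages requires network access.
#
#   urls <- c("https://example.com/contact", "https://example.com/team")
#
#   # Flat vector of every address found across both pages
#   addresses <- get_emails_webpage(urls)
#
#   # Or one row per URL, with Email_1, Email_2, ... columns padded with NA
#   address_df <- get_emails_webpage(urls, return.df = TRUE)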
# source: /get_emails_webpage.R (toledobastos/socmedtools)
library(shiny)
library(ggplot2)
data(diamonds)
fitModel <<- lm(price ~ carat + cut + clarity + color, data = diamonds)
shinyServer(
function(input, output) {
prediction <- reactive({
prediction_parameters<-data.frame(carat=input$carat, cut = input$cut,
clarity = input$clarity,color = input$color)
value <- predict(fitModel,prediction_parameters)
if (value < 0) {
cat('Error: No data or Prediction Value < 0. Please try again.')
} else if (value > 0) {
cat(round(value,0), "$ \n")
} })
output$prediction <- renderPrint({prediction()})
}
)
# source: /server.R (JuliasData/DevDataApp_RCode)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Generics.R, R/InputData.R
\name{WriteSparse}
\alias{WriteSparse}
\alias{writesparse}
\alias{WriteSparse,LTMG-method}
\title{Title}
\usage{
WriteSparse(object, ...)
writesparse(object = NULL, path = "./", gene.name = TRUE, cell.name = FALSE)
\S4method{WriteSparse}{LTMG}(object = NULL, path = "./", gene.name = TRUE, cell.name = FALSE)
}
\arguments{
\item{object}{an \code{LTMG} object}
\item{...}{additional arguments passed to methods}
\item{path}{output path}
\item{gene.name}{whether to output gene names for the sparse matrix}
\item{cell.name}{whether to output cell names for the sparse matrix}
}
\value{
}
\description{
Write the object's data to disk as a sparse matrix.
}
| /man/WriteSparse.Rd | no_license | BMEngineeR/scGNNLTMG | R | false | true | 613 | rd |
#Loading complete data
C_data <- read.csv("household_power_consumption.txt", header = T, sep = ';', na.strings = "?", nrows = 2075259, check.names = F, stringsAsFactors = F, comment.char = "", quote = '\"')
C_data$Date <- as.Date(C_data$Date, format = "%d/%m/%Y")
#Subsetting the data
data <- subset(C_data, subset = (Date >= "2007-02-01" & Date <= "2007-02-02"))
datetime <- paste(as.Date(data$Date), data$Time)
data$Datetime <- as.POSIXct(datetime)
# Generating Plot 2
plot(data$Global_active_power ~ data$Datetime, type = "l", ylab = "Global Active Power (kilowatts)", xlab = "")
#copying the plot into the file plot2.png
dev.copy(png,file="plot2.png")
dev.off() | /figure/plot2.R | no_license | saran4599/ExData_Plotting1 | R | false | false | 668 | r |
# line plot, type, plot type
x <- c(1:10)
y <- x^2+10
# basic form: points
plot(x,y, type="p")
# none
plot(x,y, type="n")
# points joined by lines
plot(x,y, type="b")
# step plot
plot(x,y, type="S")
# vertical lines
plot(x,y, type="h")
# number of plots per page (2 x 4 grid)
par(mfrow=c(2,4))
for (i in 1:8) {
plot(x,y, type="p", col="blue", pch=i)
}
# type
types=c("p", "l", "o", "b", "C", "S", "s", "h")
for (i in 1:8) {
plot(x,y, type=types[i], col="blue", pch=i)
} | /inflearn_lecture/lec 1-24 [Plot type].R | permissive | cutz-j/R-Programming | R | false | false | 426 | r |
# authors: Andres Pitta, Braden Tam, Serhiy Pokrovskyy
# date: 2020-01-24
"This script wrangles the data for ML purposes. It takes the following arguments:
the file were the root file is,
the path where the test and train dataset is going to be saved,
the train/test set split in decimal numbers,
a variable where 'YES' = remove outliers and 'NO' do not remove.
and a target
Usage: wrangling.R [--DATA_FILE_PATH=<DATA_FILE_PATH>] [--TRAIN_FILE_PATH=<TRAIN_FILE_PATH>] [--TEST_FILE_PATH=<TEST_FILE_PATH>] [--TARGET=<TARGET>][--REMOVE_OUTLIERS=<REMOVE_OUTLIERS>] [--TRAIN_SIZE=<TRAIN_SIZE>]
Options:
--DATA_FILE_PATH=<DATA_FILE_PATH> Path (including filename) to retrieve the csv file. [default: data/vehicles.csv]
--TRAIN_FILE_PATH=<TRAIN_FILE_PATH> Path (including filename) to print the train portion as a csv file. [default: data/vehicles_train.csv]
--TEST_FILE_PATH=<TEST_FILE_PATH> Path (including filename) to print the test portion as a csv file. [default: data/vehicles_test.csv]
--TARGET=<TARGET> Name of the response variable to use. [default: price]
--REMOVE_OUTLIERS=<REMOVE_OUTLIERS> Logical value that takes YES as value if the outliers should be remove, NO otherwise. [default: YES]
--TRAIN_SIZE=<TRAIN_SIZE> Decimal value for the train/test split. [default: 0.8]
" -> doc
library(tidyverse, quietly = TRUE)
library(docopt, quietly = TRUE)
library(readr, quietly = TRUE)
library(stats, quietly = TRUE)
set.seed(1234)
options(readr.num_columns = 0)
opt <- docopt(doc)
main <- function(data_path, train_path, test_path, target, remove_outliers, train_size) {
print("Loading data...")
data <- load(data_path)
print("Splitting data...")
list_traintest <- split_data(data, train_size)
print("Wrangling train data...")
wrangled_train <- wrangling(list_traintest[[1]], target, remove_outliers)
print("Wrangling test data...")
wrangled_test <- wrangling(list_traintest[[2]], target, remove_outliers)
# Might need to rethink this line:
# filter(target < max(list_traintest[[1]][[target]]))
print("Saving train data...")
print_dataset(train_path, wrangled_train)
print("Saving test data...")
print_dataset(test_path, wrangled_test)
print("DONE")
}
#' Loads the data from a provided path
#'
#' @param data_path path from where to load the data
#' @return the data
#' @examples
#' load('../data/vehicles.csv')
load <- function(data_path) {
if (file.exists(data_path)) {
readr::read_csv(data_path) %>% select('year',
'price',
'odometer',
'manufacturer',
'transmission',
'fuel',
'paint_color',
'cylinders',
'drive',
'size',
'state',
'condition',
'title_status')
}
else {
stop("The path does not exist")
}
}
#' Data wrangling function. Fills all NAs in character variables with 'No value'
#' and removes observations above the 99th percentile of the target variable.
#'
#' @param data data to perform data wrangling on
#' @param target response variable
#' @param remove_outliers "YES" to remove values over the 99th percentile, "NO" otherwise
#' @return the wrangled data
#' @examples
#' wrangling(vehicles, "price", "YES")
wrangling <- function(data, target, remove_outliers) {
for (i in names(data)) {
if (is.character(data[[i]])) {
data[[i]] <- if_else(is.na(data[[i]]), "No value", data[[i]])
}
}
if (remove_outliers == "YES") {
data_filtered <- data[data[[target]] <= quantile(data[[target]], c(0.99)),]
data_filtered <- data_filtered[data_filtered[[target]] > 10,]
data_filtered <- data_filtered %>% filter(odometer > 0)
} else {
data_filtered <- data
}
return(data_filtered)
}
#' Splits the data given a train_size parameter
#'
#' @param data data to split
#' @param train_size split size
#' @return a list with the train and test data frames
#' @examples
#' split_data(vehicles, 0.9)
split_data <- function(data, train_size) {
train_size <- as.double(train_size)
data$state <- toupper(data$state)
if (train_size >= 0 && train_size <= 1) {
sample_size <- floor(as.double(train_size) * nrow(data))
train_id <- sample(seq_len(nrow(data)), size = sample_size)
train <- data[train_id,]
test <- data[-train_id,]
list_results <- list(train, test)
return(list_results)
}
else {
stop("The train_size parameter is not valid")
}
}
#' Prints the data the a provided paths
#'
#' @param data_path Path to print the data
#' @param data Dataset to print
#' @param replace Replace existing file or stop
#' @return Nothing
#' @examples
#' print_train_test('../data/vehicles_train.csv',vehicles_train)
print_dataset <- function(data_path, data, replace = TRUE) {
if (file.exists(data_path) && !replace) {
print("The file already exists")
}
else {
write_csv(data, data_path)
}
}
main(opt$DATA_FILE_PATH, opt$TRAIN_FILE_PATH, opt$TEST_FILE_PATH, opt$TARGET, opt$REMOVE_OUTLIERS, opt$TRAIN_SIZE)
| /scripts/wrangling.R | permissive | bradentam/DSCI_522_Group-308_Used-Cars | R | false | false | 5,430 | r |
#!/share/apps/R-3.2.2_gcc/bin/Rscript
library(data.table)
library(reshape2)
library(dplyr)
library(foreach)
library(iterators)
library(chron)
source_path = "/home/hnoorazar/cleaner_codes/core.R"
source(source_path)
raw_data_dir = "/data/hydro/users/Hossein/codling_moth/local/raw/"
write_path = "/data/hydro/users/Hossein/codling_moth_new/local/processed/" # base output path used in the loops below
param_dir = "/home/hnoorazar/cleaner_codes/parameters/"
file_prefix = "data_"
file_list = "local_list"
conn = file(paste0(param_dir, file_list), open = "r")
locations = readLines(conn)
close(conn)
ClimateGroup = list("Historical", "2040's", "2060's", "2080's")
cellByCounty = data.table(read.csv(paste0(param_dir, "CropParamCRB.csv")))
start_h = 1979
end_h = 2015
start_f = 2006
end_f = 2099
categories = c("historical")
cod_param <- "CodlingMothparameters.txt"
for(category in categories) {
if(category == "historical") {
for(location in locations) {
filename = paste0(category, "/", file_prefix, location)
temp_data <- produce_CMPOP_local(input_folder= raw_data_dir,
filename=filename,
param_dir = param_dir,
cod_moth_param_name = cod_param,
start_year=start_h, end_year=end_h,
lower=10, upper=31.11,
location = location, category = category)
write_dir = paste0(write_path, "historical_CMPOP/")
dir.create(file.path(write_dir), recursive = TRUE)
write.table(temp_data, file = paste0(write_dir, "CMPOP_", location),
sep = ",",
row.names = FALSE,
col.names = TRUE)
}
}
## FUTURE CMPOP (not reached here: categories only contains "historical"; would also need 'versions' defined)
else{
for(version in versions){
for(location in locations) {
filename = paste0(category, "/", version, "/", file_prefix, location)
temp_data <- produce_CMPOP(input_folder=raw_data_dir, filename=filename,
param_dir=param_dir,
cod_moth_param_name=cod_param,
start_year=start_f, end_year=end_f,
lower=10, upper=31.11,
location=location, category=category)
write_dir = paste0(write_path, "/", "future_CMPOP/", category, "/", version)
dir.create(file.path(write_dir), recursive = TRUE)
write.table(temp_data, file = paste0(write_dir, "/CMPOP_", location),
sep = ",",
row.names = FALSE,
col.names = TRUE)
}
}
}
} | /codling_moth/code/drivers/local_historical/LH_CMPOP_shrunk.R | permissive | HNoorazar/Ag | R | false | false | 2,755 | r |
L3train<- read.csv("./Datasets/loan_high_train.csv")
L3test <- read.csv("./Datasets/loan_high_test.csv")
L1train <- read.csv("./Datasets/loan_low_train.csv")
L1test <- read.csv("./Datasets/loan_low_test.csv")
L2train <- read.csv("./Datasets/loan_med_train.csv")
L2test <- read.csv("./Datasets/loan_med_test.csv")
Ltrain <- read.csv("./Datasets/loan_all_train.csv")
Ltest <- read.csv("./Datasets/loan_all_test.csv")
library(caret)
xgb_class <- function(data_train, data_test, para) {
result <- c()
acc <- c()
rec <- c()
output <- data.frame()
library(foreach)
library(doParallel)
cl <- makePSOCKcluster(detectCores())
registerDoParallel(cl)
#.combine = "cbind", .multicombine = TRUE,
result <- foreach(i = 1:nrow(para), .combine = "rbind", .packages = c("xgboost"), .verbose = TRUE) %dopar% {
dtrain <- xgb.DMatrix(data.matrix(subset(data_train, select = -c(loan_status))), label = data_train[, "loan_status"])
cv <- xgb.cv(data = dtrain, nrounds = 5, nfold = 5, metrics = list("error"),
max_depth = para[i,]$depth, eta = para[i,]$eta, min_child_weight = para[i,]$min_child_weight,
objective = "binary:logistic", gamma = para[i,]$gamma, verbose = TRUE)
sum(1 - cv$evaluation_log[, 2]) / 5 # mean accuracy over the 5 CV rounds
}
stopCluster(cl)
best <- para[order(result[,1], decreasing = T)[1],]
best$mean_accuracy <- max(result)
return(best)
}
cf <- function(L3, L3train, L3test, from, to, by){
fit <- c()
acc <- c()
rec <- c()
pred <- c()
itera <- seq(from, to, by = by)
trn <- as.factor(L3train[, "loan_status"])
tet <- as.factor(L3test[,"loan_status"])
library(foreach)
library(doParallel)
cl <- makePSOCKcluster(detectCores())
registerDoParallel(cl)
ite <- foreach(i = 1:length(itera), .combine = "rbind", .packages = c("xgboost"), .verbose = TRUE) %dopar% {
fit <- xgboost(data = data.matrix(subset(L3train, select = -c(loan_status))),
label = L3train[, "loan_status"], max_depth = L3$depth, eta = L3$eta,
min_child_weight = L3$min_child_weight,
objective = "binary:logistic", gamma = L3$gamma, nrounds = itera[i])
pred <- predict(fit, data.matrix(subset(L3test, select = -c(loan_status))))
pred <- ifelse(pred >= .5, 1, 0 )
acc[i] <- mean(as.factor(pred) == tet)
conf <- table(pred, L3test$loan_status)
rec[i] <- conf[2,2]/(conf[1,2] + conf[2,2])
data.frame(itera[i], acc[i], rec[i])
}
stopCluster(cl)
return(ite)
}
para3 <- expand.grid(max_depth = c(8,9,10), eta = c(0.3,0.4,0.2), min_child_weight = c(10,9,8), gamma = c(0,1,5,10))
names(para3) <- c("depth", "eta", "min_child_weight","gamma")
para2 <- expand.grid(max_depth = c(9,10), eta = c(0.37,0.4,0.42), min_child_weight = c(6,7,8), gamma = c(0.25,1,.5))
names(para2) <- c("depth", "eta", "min_child_weight","gamma")
para1 <- expand.grid(max_depth = c(9,10), eta = c(0.37,0.4,0.42), min_child_weight = c(6,7,8), gamma = c(0.3,.2))
names(para1) <- c("depth", "eta", "min_child_weight","gamma")
para <- expand.grid(max_depth = c(9,10), eta = c(0.37,0.42), min_child_weight = c(10,9,8), gamma = c(0.6,1,1.3))
names(para) <- c("depth", "eta", "min_child_weight","gamma")
L3 <- xgb_class(L3train, L3test, para3)
L3
L3_ite <- cf(L3, L3train, L3test, 1, 100, 5)
L3_ite
L2 <- xgb_class(L2train, L2test, para2)
L2
L2_ite <- cf(L2, L2train, L2test, 10, 500,10)
L2_ite
L1 <- xgb_class(L1train, L1test, para1)
L1
L1_ite <- cf(L1, L1train, L1test, 1, 320, 5)
L1_ite
L <- xgb_class(Ltrain, Ltest, para)
L
L_ite <- cf(L, Ltrain, Ltest, 10, 800, 50)
L_ite
library(ggplot2)
library(tidyr)
par(mfrow=c(2,2))
L3_data <-
data.frame(
Accuracy3 = L3_ite[,2],
Recall3 = L3_ite[,3],
iter = L3_ite[,1])
L3_data %>%
gather(High,rate, Accuracy3, Recall3) %>%
ggplot(aes(x=iter, y=rate, colour = High)) +
geom_line()
L2_data <-
data.frame(
Accuracy2 = L2_ite[,2],
Recall2 = L2_ite[,3],
iter = L2_ite[,1])
L2_data %>%
gather(Middle,rate, Accuracy2, Recall2) %>%
ggplot(aes(x=iter, y=rate, colour = Middle)) +
geom_line()
L1_data <-
data.frame(
Accuracy1 = L1_ite[,2],
Recall1 = L1_ite[,3],
iter = L1_ite[,1])
L1_data %>%
gather(Low,rate, Accuracy1, Recall1) %>%
ggplot(aes(x=iter, y=rate, colour = Low)) +
geom_line()
L_data <-
data.frame(
Accuracy = L_ite[,2],
Recall = L_ite[,3],
iter = L_ite[,1])
L_data %>%
gather(All,rate, Accuracy, Recall) %>%
ggplot(aes(x=iter, y=rate, colour = All)) +
geom_line()
| /lib/xgb.R | no_license | TZstatsADS/Fall2018-project5-sec1-grp6 | R | false | false | 5,078 | r |
# Requires the tidyverse (dplyr, readr, tidyr, purrr) for %>%, read_csv,
# gather, transpose, etc.; digest is used via digest::digest below.
library(tidyverse)

git.describe<-function(){
system2(
"git",
"git",
c("describe", "--tags", "--always", "--long", "--dirty='-UNCOMMITTED'"),
stdout=T
)
}
VERSIONTOOLSFILE="HaloXi.v1"
VERSIONTOOLS=paste0(VERSIONTOOLSFILE,".",git.describe())
DATE <- function(){gsub("-", "", Sys.Date())}
fixColNames <- function(ss) {
gsub(" ","_",ss) %>% gsub("_\\(.*\\)$","",.)
}
read_halo <- function(ff,...) {
read_csv(ff,...) %>% rename_all(~fixColNames(.))
}
generateCellUUID <- function(dat,cols.UUID){
lapply(
transpose(dat[,cols.UUID]),
function(x){digest::digest(paste(x,collapse=";"),algo="sha1")}
) %>%
unlist
}
load_halo <- function(hfile,cols.UUID,sampleName,cols.extra) {
if(missing(cols.UUID)) {
stop("\n\nFATAL ERROR::tools.R::load_halo\ncols.UUID Missing\n")
}
if(missing(sampleName)) {
sampleName=basename(hfile) %>% gsub("\\.csv.*","",.)
}
dd=read_halo(hfile) %>% mutate(Sample=sampleName)
dd$UUID=generateCellUUID(dd,cols.UUID)
cell.data=dd %>% select(UUID,Sample,XMin,XMax,YMin,YMax)
marker.data=dd %>%
select(UUID,matches("_Positive_Classification")) %>%
gather(Marker,Positive,-UUID) %>%
mutate(Marker=gsub("_\\(.*","",Marker)) %>%
mutate(MarkerNorm=toupper(Marker))
markerPos=marker.data %>%
group_by(UUID) %>%
summarize(MarkerPos=paste0(sort(MarkerNorm[Positive==1]),collapse=";")) %>%
ungroup
cell.data=left_join(cell.data,markerPos)
if(!missing(cols.extra)) {
extra.data=dd %>% select(UUID,all_of(cols.extra))
cell.data=left_join(cell.data,extra.data)
}
obj=list(cell.data=cell.data,marker.data=marker.data,VERSION=VERSIONTOOLS,DATE=DATE())
obj
}
| /tools.R | no_license | soccin/HaloXi | R | false | false | 1,770 | r |