content large_stringlengths 0 6.46M | path large_stringlengths 3 331 | license_type large_stringclasses 2 values | repo_name large_stringlengths 5 125 | language large_stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 4 6.46M | extension large_stringclasses 75 values | text stringlengths 0 6.46M |
|---|---|---|---|---|---|---|---|---|---|
## plot2.R script file written by Dave Martin
## 5/5/2014
## Programming Assignment 1, plot 2
## Exploratory Data Analysis
## read in the data
dframe <- read.table("household_power_consumption.txt", header = T,
colClasses = c("character", "character", "numeric", "numeric", "numeric", "numeric", "numeric", "numeric", "numeric"),
na.strings = "?", sep=";")
## get a datetime
dframe$DT <- strptime(paste(dframe$Date, dframe$Time, sep=" "), format = "%d/%m/%Y %H:%M:%S")
## get start and end date:time
startdate <- as.POSIXlt("2007-02-01 00:00:00")
enddate <- as.POSIXlt("2007-02-02 23:59:59")
## subset the two days
subdf <- subset(dframe, dframe$DT >= startdate & dframe$DT <= enddate)
png("plot2.png", width=480, height=480)
plot(subdf$DT, subdf$Global_active_power, type="l", xlab="", ylab="Global Active Power (kilowatts)")
dev.off()
| /Plot2.R | no_license | davekohlmartin/ExData_Plotting1 | R | false | false | 886 | r | ## plot2.R script file written by Dave Martin
## 5/5/2014
## Programming Assignment 1, plot 2
## Exploratory Data Analysis
## read in the data
dframe <- read.table("household_power_consumption.txt", header = T,
colClasses = c("character", "character", "numeric", "numeric", "numeric", "numeric", "numeric", "numeric", "numeric"),
na.strings = "?", sep=";")
## get a datetime
dframe$DT <- strptime(paste(dframe$Date, dframe$Time, sep=" "), format = "%d/%m/%Y %H:%M:%S")
## get start and end date:time
startdate <- as.POSIXlt("2007-02-01 00:00:00")
enddate <- as.POSIXlt("2007-02-02 23:59:59")
## subset the two days
subdf <- subset(dframe, dframe$DT >= startdate & dframe$DT <= enddate)
png("plot2.png", width=480, height=480)
plot(subdf$DT, subdf$Global_active_power, type="l", xlab="", ylab="Global Active Power (kilowatts)")
dev.off()
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/timepoints.R
\name{timepoints.blockForest}
\alias{timepoints.blockForest}
\alias{timepoints}
\title{blockForest timepoints}
\usage{
\method{timepoints}{blockForest}(x, ...)
}
\arguments{
\item{x}{blockForest Survival forest object.}
\item{...}{Further arguments passed to or from other methods.}
}
\value{
Unique death times
}
\description{
Extract unique death times of blockForest Survival forest
}
\seealso{
\code{\link{blockForest}}
}
\author{
Marvin N. Wright
}
| /man/timepoints.blockForest.Rd | no_license | bips-hb/blockForest | R | false | true | 546 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/timepoints.R
\name{timepoints.blockForest}
\alias{timepoints.blockForest}
\alias{timepoints}
\title{blockForest timepoints}
\usage{
\method{timepoints}{blockForest}(x, ...)
}
\arguments{
\item{x}{blockForest Survival forest object.}
\item{...}{Further arguments passed to or from other methods.}
}
\value{
Unique death times
}
\description{
Extract unique death times of blockForest Survival forest
}
\seealso{
\code{\link{blockForest}}
}
\author{
Marvin N. Wright
}
|
\name{mcCovMat}
\alias{mcCovMat}
\alias{mcCovMat.mleDb}
\alias{mcCovMat.mleBb}
\alias{mcCovMat.Dbdpars}
\alias{mcCovMat.Bbdpars}
\alias{mcCovMat.default}
\title{
Monte Carlo estimation of a covariance matrix.
}
\description{
Calculate an estimate of the covariance matrix for the parameter
estimates of a db or beta binomial distribution via simulation.
}
\usage{
mcCovMat(object, nsim = 100, seed=NULL, maxit=1000)
\method{mcCovMat}{mleDb}(object, nsim = 100, seed=NULL, maxit=1000)
\method{mcCovMat}{mleBb}(object, nsim = 100, seed=NULL, maxit=1000)
\method{mcCovMat}{Dbdpars}(object, nsim = 100, seed=NULL, maxit=1000)
\method{mcCovMat}{Bbdpars}(object, nsim = 100, seed=NULL, maxit=1000)
\method{mcCovMat}{default}(object, nsim = 100, seed=NULL, maxit=1000)
}
\arguments{
\item{object}{
An object of class either \code{"mleDb"}, \code{"mleBb"},
\code{Dbdpars} or \code{Bbdpars}. In the first two cases such an
object would be returned by the function \code{\link{mleDb}()} or
by \code{\link{mleBb}()}. In the second two cases such an object
would be returned by the function \code{\link{makeDbdpars}()}
or by \code{\link{makeBbdpars}()}.
}
\item{nsim}{
Integer scalar. The number of simulations to be used to produce
the Monte Carlo estimate of the covariance matrix.
}
\item{seed}{
Integer scalar. The seed for the random number generator. If not
specified it is randomly sampled from the sequence \code{1:1e5}.
}
\item{maxit}{
Integer scalar. The maximum number of iterations to be undertaken
by \code{\link{optim}()} when fitting models to the simulated data.
}
}
\details{
The procedure is to simulate \code{nsim} data sets, all of
the same size. This will be the size of the data set to which
\code{object} was fitted), in the case of the \code{"mleDb"} and
\code{"mleBb"} methods, and will be the value of the \code{ndata}
argument supplied to the \dQuote{\code{make}} function in the
case of the \code{"Dbdpars"} and \code{"Bbdpars"} methods. The
simulations are from models determined by the parameter value
contained in \code{object}.
From each such simulated data, parameter estimates are obtained.
The covariance matrix of these latter parameter estimates
(adjusted for the fact that the true parameters are known in
a simulation) is taken to be the required covariance matrix
estimated.
The default method simply throws an error.
}
\value{
A two-by-two positive definite (with any luck!) numeric matrix.
It is an estimate of the covariance matrix of the parameter estimates.
It has an attribute \code{"seed"} which is the seed that was used
for the random number generator. This is either the value of the
argument \code{seed} or (if this argument was left \code{NULL}) the
value that was randomly sampled from \code{1:1e5}.
}
\author{Rolf Turner
\email{r.turner@auckland.ac.nz}
}
\seealso{
\code{link{aHess}()}
\code{link{nHess}()}
\code{link{vcov.mleDb}()}
\code{link{vcov.mleBb}()}
}
\examples{
X <- hmm.discnp::SydColDisc
X$y <- as.numeric(X$y)
X <- split(X,f=with(X,interaction(locn,depth)))
x <- X[[19]]$y
fit <- mleDb(x, ntop=5)
set.seed(42)
CM.m <- mcCovMat(fit,nsim=500) # Lots of simulations!
CM.a <- vcov(fit)
CM.n <- solve(nHess(fit,x))
cat("Monte Carlo:\n\n")
print(CM.m)
cat("Analytic:\n\n")
print(CM.a)
cat("Numeric:\n\n")
print(CM.n)
X <- hrsRcePred
top1e <- X[X$sbjType=="Expert","top1"]
fit <- mleBb(top1e,size=10)
CM.m <- mcCovMat(fit,nsim=500) # Lots of simulations!
CM.a <- vcov(fit)
CM.n <- solve(nHess(fit,top1e))
cat("Monte Carlo:\n\n")
print(CM.m)
cat("Analytic:\n\n")
print(CM.a)
cat("Numeric:\n\n")
print(CM.n)
}
\concept{ covariance estimation }
\concept{ inference }
| /man/mcCovMat.Rd | no_license | cran/dbd | R | false | false | 3,768 | rd | \name{mcCovMat}
\alias{mcCovMat}
\alias{mcCovMat.mleDb}
\alias{mcCovMat.mleBb}
\alias{mcCovMat.Dbdpars}
\alias{mcCovMat.Bbdpars}
\alias{mcCovMat.default}
\title{
Monte Carlo estimation of a covariance matrix.
}
\description{
Calculate an estimate of the covariance matrix for the parameter
estimates of a db or beta binomial distribution via simulation.
}
\usage{
mcCovMat(object, nsim = 100, seed=NULL, maxit=1000)
\method{mcCovMat}{mleDb}(object, nsim = 100, seed=NULL, maxit=1000)
\method{mcCovMat}{mleBb}(object, nsim = 100, seed=NULL, maxit=1000)
\method{mcCovMat}{Dbdpars}(object, nsim = 100, seed=NULL, maxit=1000)
\method{mcCovMat}{Bbdpars}(object, nsim = 100, seed=NULL, maxit=1000)
\method{mcCovMat}{default}(object, nsim = 100, seed=NULL, maxit=1000)
}
\arguments{
\item{object}{
An object of class either \code{"mleDb"}, \code{"mleBb"},
\code{Dbdpars} or \code{Bbdpars}. In the first two cases such an
object would be returned by the function \code{\link{mleDb}()} or
by \code{\link{mleBb}()}. In the second two cases such an object
would be returned by the function \code{\link{makeDbdpars}()}
or by \code{\link{makeBbdpars}()}.
}
\item{nsim}{
Integer scalar. The number of simulations to be used to produce
the Monte Carlo estimate of the covariance matrix.
}
\item{seed}{
Integer scalar. The seed for the random number generator. If not
specified it is randomly sampled from the sequence \code{1:1e5}.
}
\item{maxit}{
Integer scalar. The maximum number of iterations to be undertaken
by \code{\link{optim}()} when fitting models to the simulated data.
}
}
\details{
The procedure is to simulate \code{nsim} data sets, all of
the same size. This will be the size of the data set to which
\code{object} was fitted), in the case of the \code{"mleDb"} and
\code{"mleBb"} methods, and will be the value of the \code{ndata}
argument supplied to the \dQuote{\code{make}} function in the
case of the \code{"Dbdpars"} and \code{"Bbdpars"} methods. The
simulations are from models determined by the parameter value
contained in \code{object}.
From each such simulated data, parameter estimates are obtained.
The covariance matrix of these latter parameter estimates
(adjusted for the fact that the true parameters are known in
a simulation) is taken to be the required covariance matrix
estimated.
The default method simply throws an error.
}
\value{
A two-by-two positive definite (with any luck!) numeric matrix.
It is an estimate of the covariance matrix of the parameter estimates.
It has an attribute \code{"seed"} which is the seed that was used
for the random number generator. This is either the value of the
argument \code{seed} or (if this argument was left \code{NULL}) the
value that was randomly sampled from \code{1:1e5}.
}
\author{Rolf Turner
\email{r.turner@auckland.ac.nz}
}
\seealso{
\code{link{aHess}()}
\code{link{nHess}()}
\code{link{vcov.mleDb}()}
\code{link{vcov.mleBb}()}
}
\examples{
X <- hmm.discnp::SydColDisc
X$y <- as.numeric(X$y)
X <- split(X,f=with(X,interaction(locn,depth)))
x <- X[[19]]$y
fit <- mleDb(x, ntop=5)
set.seed(42)
CM.m <- mcCovMat(fit,nsim=500) # Lots of simulations!
CM.a <- vcov(fit)
CM.n <- solve(nHess(fit,x))
cat("Monte Carlo:\n\n")
print(CM.m)
cat("Analytic:\n\n")
print(CM.a)
cat("Numeric:\n\n")
print(CM.n)
X <- hrsRcePred
top1e <- X[X$sbjType=="Expert","top1"]
fit <- mleBb(top1e,size=10)
CM.m <- mcCovMat(fit,nsim=500) # Lots of simulations!
CM.a <- vcov(fit)
CM.n <- solve(nHess(fit,top1e))
cat("Monte Carlo:\n\n")
print(CM.m)
cat("Analytic:\n\n")
print(CM.a)
cat("Numeric:\n\n")
print(CM.n)
}
\concept{ covariance estimation }
\concept{ inference }
|
library(dataCompareR)
### Name: coerceFactorsToChar
### Title: coerceFactorsToChar: convert all factor type fields to
### characters
### Aliases: coerceFactorsToChar
### ** Examples
## Not run: coerceFactorsToChar(iris)
| /data/genthat_extracted_code/dataCompareR/examples/coerceFactorsToChar.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 229 | r | library(dataCompareR)
### Name: coerceFactorsToChar
### Title: coerceFactorsToChar: convert all factor type fields to
### characters
### Aliases: coerceFactorsToChar
### ** Examples
## Not run: coerceFactorsToChar(iris)
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/LearnerRegr.R
\docType{data}
\name{LearnerRegr}
\alias{LearnerRegr}
\title{Regression Learner}
\format{\link[R6:R6Class]{R6::R6Class} object inheriting from \link{Learner}.}
\description{
This Learner specializes \link{Learner} for regression problems.
Predefined learners can be found in the \link{Dictionary} \link{mlr_learners}.
}
\section{Construction}{
\preformatted{l = LearnerRegr$new(id, param_set = ParamSet$new(), param_vals = list(), predict_types = character(),
feature_types = character(), properties = character(), data_formats = "data.table", packages = character())
}
For a description of the arguments, see \link{Learner}.
\code{task_type} is set to \code{"regr"}.
Possible values for \code{predict_types} are a subset of \code{c("response", "se")}.
}
\section{Fields}{
See \link{Learner}.
}
\section{Methods}{
All methods of \link{Learner}, and additionally:
\itemize{
\item \code{new_prediction(task, response = NULL, prob = NULL)}\cr
(\link{Task}, \code{numeric()}, \code{numeric()}) -> \link{PredictionRegr}\cr
This method is intended to be called in \code{predict()} to create a \link{PredictionRegr} object.
Uses \code{task} to extract \code{row_ids}.
To manually construct a \link{PredictionRegr} object, see its constructor.
}
}
\examples{
# get all regression learners from mlr_learners:
lrns = mlr_learners$mget(mlr_learners$keys("^regr"))
names(lrns)
# get a specific learner from mlr_learners:
lrn = mlr_learners$get("regr.rpart")
print(lrn)
}
\seealso{
Example regression learner: \code{\link[=mlr_learners_regr.rpart]{regr.rpart}}.
Other Learner: \code{\link{LearnerClassif}},
\code{\link{Learner}}, \code{\link{mlr_learners}}
}
\concept{Learner}
\keyword{datasets}
| /man/LearnerRegr.Rd | permissive | vpolisky/mlr3 | R | false | true | 1,791 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/LearnerRegr.R
\docType{data}
\name{LearnerRegr}
\alias{LearnerRegr}
\title{Regression Learner}
\format{\link[R6:R6Class]{R6::R6Class} object inheriting from \link{Learner}.}
\description{
This Learner specializes \link{Learner} for regression problems.
Predefined learners can be found in the \link{Dictionary} \link{mlr_learners}.
}
\section{Construction}{
\preformatted{l = LearnerRegr$new(id, param_set = ParamSet$new(), param_vals = list(), predict_types = character(),
feature_types = character(), properties = character(), data_formats = "data.table", packages = character())
}
For a description of the arguments, see \link{Learner}.
\code{task_type} is set to \code{"regr"}.
Possible values for \code{predict_types} are a subset of \code{c("response", "se")}.
}
\section{Fields}{
See \link{Learner}.
}
\section{Methods}{
All methods of \link{Learner}, and additionally:
\itemize{
\item \code{new_prediction(task, response = NULL, prob = NULL)}\cr
(\link{Task}, \code{numeric()}, \code{numeric()}) -> \link{PredictionRegr}\cr
This method is intended to be called in \code{predict()} to create a \link{PredictionRegr} object.
Uses \code{task} to extract \code{row_ids}.
To manually construct a \link{PredictionRegr} object, see its constructor.
}
}
\examples{
# get all regression learners from mlr_learners:
lrns = mlr_learners$mget(mlr_learners$keys("^regr"))
names(lrns)
# get a specific learner from mlr_learners:
lrn = mlr_learners$get("regr.rpart")
print(lrn)
}
\seealso{
Example regression learner: \code{\link[=mlr_learners_regr.rpart]{regr.rpart}}.
Other Learner: \code{\link{LearnerClassif}},
\code{\link{Learner}}, \code{\link{mlr_learners}}
}
\concept{Learner}
\keyword{datasets}
|
# edit distance function #
# counts the number of differences between two strings of equal length
# to be used for e.g. index pool designs in Illumina seq
# INPUT
# a=string1,
# b = string2, can be a string vector!!!
# OUTPUT - integer, edit distance (number of changes needed to get from a to b)
# if b is a string vector, a list is returned with position(minpos), edit distance (minedist) and sequence (minseq) of the most similar string in the vector
## USAGE
# edist(a, b)
# for a list of a arguments, map_df(a, edist, b)
# or even cbind(celero, map_df(celero$index1, edist, xt$i7_bases))
###
## ADVANCED USAGE
# to check one character vector (charvec) of indices for the next closest index:
# map_df(1:length(charvec), function(x) { edist(charvec[x], charvec[-x]) })
###
edist <- function(a, b) {
require(stringr)
#if(!is.character(a) | !is.character(b)) stop("Both arguments must be characters!")
#if(nchar(a) != nchar(b)) stop("The two strings have to be of the same length!")
a <- toupper(a)
b <- toupper(b)
if(length(b) > 1) {
countslist <- lapply(str_split(b, ""), str_count, unlist(str_split(a, "")))
sumslist <- lapply(countslist, function(x) { length(x) - sum(x) } )
#unlist(sumslist)
return(
list(
qseq = a,
minhitseq = b[which.min(unlist(sumslist))],
minhitpos = which.min(unlist(sumslist)),
minhitedist = min(unlist(sumslist))
)
)
} else {
countvector <- str_count(string = unlist(str_split(a, "")), pattern = unlist(str_split(b, "")))
return(
length(countvector) - sum(countvector)
)
}
}
| /bin/edist.R | no_license | angelovangel/etc | R | false | false | 1,629 | r | # edit distance function #
# counts the number of differences between two strings of equal length
# to be used for e.g. index pool designs in Illumina seq
# INPUT
# a=string1,
# b = string2, can be a string vector!!!
# OUTPUT - integer, edit distance (number of changes needed to get from a to b)
# if b is a string vector, a list is returned with position(minpos), edit distance (minedist) and sequence (minseq) of the most similar string in the vector
## USAGE
# edist(a, b)
# for a list of a arguments, map_df(a, edist, b)
# or even cbind(celero, map_df(celero$index1, edist, xt$i7_bases))
###
## ADVANCED USAGE
# to check one character vector (charvec) of indices for the next closest index:
# map_df(1:length(charvec), function(x) { edist(charvec[x], charvec[-x]) })
###
edist <- function(a, b) {
require(stringr)
#if(!is.character(a) | !is.character(b)) stop("Both arguments must be characters!")
#if(nchar(a) != nchar(b)) stop("The two strings have to be of the same length!")
a <- toupper(a)
b <- toupper(b)
if(length(b) > 1) {
countslist <- lapply(str_split(b, ""), str_count, unlist(str_split(a, "")))
sumslist <- lapply(countslist, function(x) { length(x) - sum(x) } )
#unlist(sumslist)
return(
list(
qseq = a,
minhitseq = b[which.min(unlist(sumslist))],
minhitpos = which.min(unlist(sumslist)),
minhitedist = min(unlist(sumslist))
)
)
} else {
countvector <- str_count(string = unlist(str_split(a, "")), pattern = unlist(str_split(b, "")))
return(
length(countvector) - sum(countvector)
)
}
}
|
#
# This is the user-interface definition of a Shiny web application. You can
# run the application by clicking 'Run App' above.
#
# Find out more about building applications with Shiny here:
#
# http://shiny.rstudio.com/
#
library(shiny)
# Define UI for application that draws a histogram
shinyUI(fluidPage(
includeCSS("styles.css"),
titlePanel("Predictive Typing Model Demo"),
fluidRow(
column(12, textInput("text","Type something in the text box below and click on the button", width = 450))
),
fluidRow(
column(12, submitButton("Guess the next word!"))
),
fluidRow(
column(12, textOutput("response"))
)
)) | /webapp/PredictiveTyping/ui.R | no_license | jedlejedi/DataScienceCapstone | R | false | false | 685 | r | #
# This is the user-interface definition of a Shiny web application. You can
# run the application by clicking 'Run App' above.
#
# Find out more about building applications with Shiny here:
#
# http://shiny.rstudio.com/
#
library(shiny)
# Define UI for application that draws a histogram
shinyUI(fluidPage(
includeCSS("styles.css"),
titlePanel("Predictive Typing Model Demo"),
fluidRow(
column(12, textInput("text","Type something in the text box below and click on the button", width = 450))
),
fluidRow(
column(12, submitButton("Guess the next word!"))
),
fluidRow(
column(12, textOutput("response"))
)
)) |
flip<-structure(
function # Flip an array.
##description<<
## Flip an array along some dimension(s), e.g. flip a matrix upside
## down or left to right.
(x, ##<< an array
dim = 1 ##<< dimension(s) along which to flip (in sequence), defaults
## to 1 (i.e. flip rows).
) {
if (!is.array(x)) x<-as.array(x)
if (max(dim)>length(dim(x))) {
stop('\'dim\' argument is ',max(dim),', but \'x\' has only ',length(dim(x)),' dimension(s).')
}
for (d in dim) {
if (dim(x)[d]>0) {
x<-eval(parse(text=paste(
'x[',
paste(rep(',',d-1),collapse=''),
'dim(x)[d]:1',
paste(rep(',',length(dim(x))-d),collapse=''),
',drop=FALSE]',sep='')))
#rownames(tmp)<-rev(rownames(x))
}
}
return(x)
### The array \code{x} having the \code{dim} dimension flipped.
},ex=function() {
# flip a matrix
x<-matrix(1:6,2)
x
# flip upside down
flip(x,1)
# flip left to right
flip(x,2)
# flip both upside down and left to right
flip(x,1:2)
# flip a vector
v<-1:10
v
flip(v)
# flip an array
a<-array(1:prod(2:4),2:4)
a
# flip along dim 1
flip(a,1)
# flip along dim 2
flip(a,2)
# flip along dim 3
flip(a,3)
})
| /R/flip.R | no_license | tsieger/tsiMisc | R | false | false | 1,190 | r | flip<-structure(
function # Flip an array.
##description<<
## Flip an array along some dimension(s), e.g. flip a matrix upside
## down or left to right.
(x, ##<< an array
dim = 1 ##<< dimension(s) along which to flip (in sequence), defaults
## to 1 (i.e. flip rows).
) {
if (!is.array(x)) x<-as.array(x)
if (max(dim)>length(dim(x))) {
stop('\'dim\' argument is ',max(dim),', but \'x\' has only ',length(dim(x)),' dimension(s).')
}
for (d in dim) {
if (dim(x)[d]>0) {
x<-eval(parse(text=paste(
'x[',
paste(rep(',',d-1),collapse=''),
'dim(x)[d]:1',
paste(rep(',',length(dim(x))-d),collapse=''),
',drop=FALSE]',sep='')))
#rownames(tmp)<-rev(rownames(x))
}
}
return(x)
### The array \code{x} having the \code{dim} dimension flipped.
},ex=function() {
# flip a matrix
x<-matrix(1:6,2)
x
# flip upside down
flip(x,1)
# flip left to right
flip(x,2)
# flip both upside down and left to right
flip(x,1:2)
# flip a vector
v<-1:10
v
flip(v)
# flip an array
a<-array(1:prod(2:4),2:4)
a
# flip along dim 1
flip(a,1)
# flip along dim 2
flip(a,2)
# flip along dim 3
flip(a,3)
})
|
model_kmeans_janis_steroids <- function(node_data, goal_country,maxl){
kmeansdata <- node_data %>% select(
node,
period,
overall_flux_n,
ratio,
links_tot,
links_net,
between,
eigen_w,
ave_influx_n,
ave_outflux_n
)
kmeansdata <- kmeansdata %>% mutate(countrytime = paste(as.character(node),as.character(period),sep='_')) %>%
select(-node,-period)
row.names(kmeansdata) <- kmeansdata$countrytime
kmeansdata <- kmeansdata %>% select(-countrytime)
list_of_countries <-node_data %>% select(node)
set.seed(42) #Ensure reproducibility
ncmax <- 40 #Pick a cluster number.
wss <- (nrow(kmeansdata)-1)*sum(apply(kmeansdata,2,var))
for (i in 2:ncmax) wss[i] <- sum(kmeans(kmeansdata, centers=i, iter.max = 25)$withinss)
pl1 <- plot(1:ncmax, log10(wss), type="b", xlab="Number of Clusters", ylab="Within groups sum of squares")
#Automatically determine optimum number of clusters: very slowwwwww
#d_clust <- Mclust(as.matrix(kmeansdata), G=15:ncmax)
#m.best <- dim(d_clust$z)[2]
#cat("model-based optimal number of clusters:", m.best, "\n")
#pl2 <- plot(d_clust)
set.seed(42) #Ensure reproducibility
nc <- 25
fit <- kmeans(kmeansdata, nc, iter.max = 25) #K-means
aggregate(kmeansdata,by=list(fit$cluster),FUN=mean)
kmeansres <- data.frame(node_data, fit$cluster) %>% rename(cluster = fit.cluster)
# For a given country and a period, find the most `similar' countries according to cluster classification
all_periods <- sort(unique(kmeansres$period))
#print(all_periods)
i <- 0
c1 <- character(1)
c2 <- list()
for (goal_period in all_periods){
tmp <- kmeansres %>% filter(node == goal_country) %>% filter(period == goal_period)
if(nrow(tmp)>0){
the_cluster <- tmp$cluster
partners <- kmeansres %>% filter(cluster == the_cluster) %>% filter(period == goal_period)
partners <- unique(partners$node)
partners <- setdiff(partners,goal_country)
#print(partners)
i <- i + 1
c1[i] <- goal_period
c2[[i]] <- unlist(partners)
}
}
# Most often similar countries
ddd <- as.data.frame(table(cbind(unlist(c2)))) %>% arrange(desc(Freq))
simlist <- as.character(ddd[1:min(maxl,length(ddd$Var1)),1])
ans <- list(kmeansres,simlist,pl1)
return(ans)
}
| /handover/modeling/model_kmeans_janis_steroids.R | no_license | Waztom/FSA | R | false | false | 2,356 | r | model_kmeans_janis_steroids <- function(node_data, goal_country,maxl){
kmeansdata <- node_data %>% select(
node,
period,
overall_flux_n,
ratio,
links_tot,
links_net,
between,
eigen_w,
ave_influx_n,
ave_outflux_n
)
kmeansdata <- kmeansdata %>% mutate(countrytime = paste(as.character(node),as.character(period),sep='_')) %>%
select(-node,-period)
row.names(kmeansdata) <- kmeansdata$countrytime
kmeansdata <- kmeansdata %>% select(-countrytime)
list_of_countries <-node_data %>% select(node)
set.seed(42) #Ensure reproducibility
ncmax <- 40 #Pick a cluster number.
wss <- (nrow(kmeansdata)-1)*sum(apply(kmeansdata,2,var))
for (i in 2:ncmax) wss[i] <- sum(kmeans(kmeansdata, centers=i, iter.max = 25)$withinss)
pl1 <- plot(1:ncmax, log10(wss), type="b", xlab="Number of Clusters", ylab="Within groups sum of squares")
#Automatically determine optimum number of clusters: very slowwwwww
#d_clust <- Mclust(as.matrix(kmeansdata), G=15:ncmax)
#m.best <- dim(d_clust$z)[2]
#cat("model-based optimal number of clusters:", m.best, "\n")
#pl2 <- plot(d_clust)
set.seed(42) #Ensure reproducibility
nc <- 25
fit <- kmeans(kmeansdata, nc, iter.max = 25) #K-means
aggregate(kmeansdata,by=list(fit$cluster),FUN=mean)
kmeansres <- data.frame(node_data, fit$cluster) %>% rename(cluster = fit.cluster)
# For a given country and a period, find the most `similar' countries according to cluster classification
all_periods <- sort(unique(kmeansres$period))
#print(all_periods)
i <- 0
c1 <- character(1)
c2 <- list()
for (goal_period in all_periods){
tmp <- kmeansres %>% filter(node == goal_country) %>% filter(period == goal_period)
if(nrow(tmp)>0){
the_cluster <- tmp$cluster
partners <- kmeansres %>% filter(cluster == the_cluster) %>% filter(period == goal_period)
partners <- unique(partners$node)
partners <- setdiff(partners,goal_country)
#print(partners)
i <- i + 1
c1[i] <- goal_period
c2[[i]] <- unlist(partners)
}
}
# Most often similar countries
ddd <- as.data.frame(table(cbind(unlist(c2)))) %>% arrange(desc(Freq))
simlist <- as.character(ddd[1:min(maxl,length(ddd$Var1)),1])
ans <- list(kmeansres,simlist,pl1)
return(ans)
}
|
#' @title Predicts ethnicity using placental DNA methylation microarray data
#'
#' @description Uses 1860 CpGs to predict self-reported ethnicity on placental
#' microarray data.
#'
#' @details Predicts self-reported ethnicity from 3 classes: Africans, Asians,
#' and Caucasians, using placental DNA methylation data measured on the Infinium
#' 450k/EPIC methylation array. Will return membership probabilities that often
#' reflect genetic ancestry composition.
#'
#' The input data should contain all 1860 predictors (cpgs) of the final GLMNET
#' model.
#'
#' It's recommended to use the same normalization methods used on the training
#' data: NOOB and BMIQ.
#'
#' @param betas n x m dataframe of methylation values on the beta scale (0, 1),
#' where the variables are arranged in rows, and samples in columns. Should
#' contain all 1860 predictors and be normalized with NOOB and BMIQ.
#' @param threshold A probability threshold ranging from (0, 1) to call samples
#' 'ambiguous'. Defaults to 0.75.
#'
#' @return a [tibble][tibble::tibble-package]
#' @examples
#' ## To predict ethnicity on 450k/850k samples
#'
#' # Load placenta DNAm data
#' data(plBetas)
#' predictEthnicity(plBetas)
#'
#' @export predictEthnicity
#' @export pl_infer_ethnicity
#' @aliases pl_infer_ethnicity
predictEthnicity <- function(betas, threshold = 0.75) {
data(ethnicityCpGs, envir=environment())
pf <- intersect(rownames(betas), ethnicityCpGs)
if (length(pf) < length(ethnicityCpGs)) {
warning(paste(
"Only", length(pf), "out of",
length(ethnicityCpGs), "present."
))
} else {
message(paste(length(pf), "of 1860 predictors present."))
}
# subset down to 1860 final features
betas <- t(betas[pf, ])
# This code is modified from glmnet v3.0.2, GPL-2 license
# modifications include reducing the number of features from the original
# training set, to only where coefficients != 0 (1860 features)
# These modifications were made to significantly reduce memory size of the
# internal object `nbeta`
# see https://glmnet.stanford.edu/ for original glmnet package
npred <- nrow(betas) # number of samples
dn <- list(names(nbeta), "1", dimnames(betas)[[1]])
dp <- array(0, c(nclass, nlambda, npred), dimnames = dn) # set up results
# cross product with coeeficients
for (i in seq(nclass)) {
fitk <- methods::cbind2(1, betas) %*%
matrix(nbeta[[i]][c("(Intercept)", colnames(betas)), ])
dp[i, , ] <- dp[i, , ] + t(as.matrix(fitk))
}
# probabilities
pp <- exp(dp)
psum <- apply(pp, c(2, 3), sum)
probs <- data.frame(aperm(
pp / rep(psum, rep(nclass, nlambda * npred)),
c(3, 1, 2)
))
colnames(probs) <- paste0("Prob_", dn[[1]])
# classification
link <- aperm(dp, c(3, 1, 2))
dpp <- aperm(dp, c(3, 1, 2))
preds <- data.frame(apply(dpp, 3, glmnet_softmax))
colnames(preds) <- "Predicted_ethnicity_nothresh"
# combine and apply thresholding
p <- cbind(preds, probs)
p$Highest_Prob <- apply(p[, 2:4], 1, max)
p$Predicted_ethnicity <- ifelse(
p$Highest_Prob < threshold, "Ambiguous",
as.character(p$Predicted_ethnicity_nothresh)
)
p$Sample_ID <- rownames(p)
p <- p[, c(7, 1, 6, 2:5)]
return(tibble::as_tibble(p))
}
# This code is copied directly from glmnet v3.0.2, GPL-2 license
# see https://glmnet.stanford.edu/ for original glmnet package
# The authors and copy right holders include:
# Jerome Friedman [aut], Trevor Hastie [aut, cre], Rob Tibshirani [aut],
# Balasubramanian Narasimhan [aut], Kenneth Tay [aut], Noah Simon [aut],
# Junyang Qian [ctb]
glmnet_softmax <- function(x, ignore_labels = FALSE) {
d <- dim(x)
dd <- dimnames(x)[[2]]
if (is.null(dd) || !length(dd)) {
ignore_labels <- TRUE
}
nas <- apply(is.na(x), 1, any)
if (any(nas)) {
pclass <- rep(NA, d[1])
if (sum(nas) < d[1]) {
pclass2 <- glmnet_softmax(x[!nas, ], ignore_labels)
pclass[!nas] <- pclass2
if (is.factor(pclass2)) {
pclass <- factor(
pclass,
levels = seq(d[2]),
labels = levels(pclass2)
)
}
}
} else {
maxdist <- x[, 1]
pclass <- rep(1, d[1])
for (i in seq(2, d[2])) {
l <- x[, i] > maxdist
pclass[l] <- i
maxdist[l] <- x[l, i]
}
dd <- dimnames(x)[[2]]
if (!ignore_labels) {
pclass <- factor(pclass, levels = seq(d[2]), labels = dd)
}
}
pclass
}
pl_infer_ethnicity <- function(betas, threshold = 0.75) {
.Deprecated("predictEthnicity")
predictEthnicity(betas = betas, threshold = threshold)
}
| /R/predictEthnicity.R | no_license | wvictor14/planet | R | false | false | 4,881 | r | #' @title Predicts ethnicity using placental DNA methylation microarray data
#'
#' @description Uses 1860 CpGs to predict self-reported ethnicity on placental
#' microarray data.
#'
#' @details Predicts self-reported ethnicity from 3 classes: Africans, Asians,
#' and Caucasians, using placental DNA methylation data measured on the Infinium
#' 450k/EPIC methylation array. Will return membership probabilities that often
#' reflect genetic ancestry composition.
#'
#' The input data should contain all 1860 predictors (cpgs) of the final GLMNET
#' model.
#'
#' It's recommended to use the same normalization methods used on the training
#' data: NOOB and BMIQ.
#'
#' @param betas n x m dataframe of methylation values on the beta scale (0, 1),
#' where the variables are arranged in rows, and samples in columns. Should
#' contain all 1860 predictors and be normalized with NOOB and BMIQ.
#' @param threshold A probability threshold ranging from (0, 1) to call samples
#' 'ambiguous'. Defaults to 0.75.
#'
#' @return a [tibble][tibble::tibble-package]
#' @examples
#' ## To predict ethnicity on 450k/850k samples
#'
#' # Load placenta DNAm data
#' data(plBetas)
#' predictEthnicity(plBetas)
#'
#' @export predictEthnicity
#' @export pl_infer_ethnicity
#' @aliases pl_infer_ethnicity
predictEthnicity <- function(betas, threshold = 0.75) {
data(ethnicityCpGs, envir=environment())
pf <- intersect(rownames(betas), ethnicityCpGs)
if (length(pf) < length(ethnicityCpGs)) {
warning(paste(
"Only", length(pf), "out of",
length(ethnicityCpGs), "present."
))
} else {
message(paste(length(pf), "of 1860 predictors present."))
}
# subset down to 1860 final features
betas <- t(betas[pf, ])
# This code is modified from glmnet v3.0.2, GPL-2 license
# modifications include reducing the number of features from the original
# training set, to only where coefficients != 0 (1860 features)
# These modifications were made to significantly reduce memory size of the
# internal object `nbeta`
# see https://glmnet.stanford.edu/ for original glmnet package
npred <- nrow(betas) # number of samples
dn <- list(names(nbeta), "1", dimnames(betas)[[1]])
dp <- array(0, c(nclass, nlambda, npred), dimnames = dn) # set up results
# cross product with coeeficients
for (i in seq(nclass)) {
fitk <- methods::cbind2(1, betas) %*%
matrix(nbeta[[i]][c("(Intercept)", colnames(betas)), ])
dp[i, , ] <- dp[i, , ] + t(as.matrix(fitk))
}
# probabilities
pp <- exp(dp)
psum <- apply(pp, c(2, 3), sum)
probs <- data.frame(aperm(
pp / rep(psum, rep(nclass, nlambda * npred)),
c(3, 1, 2)
))
colnames(probs) <- paste0("Prob_", dn[[1]])
# classification
link <- aperm(dp, c(3, 1, 2))
dpp <- aperm(dp, c(3, 1, 2))
preds <- data.frame(apply(dpp, 3, glmnet_softmax))
colnames(preds) <- "Predicted_ethnicity_nothresh"
# combine and apply thresholding
p <- cbind(preds, probs)
p$Highest_Prob <- apply(p[, 2:4], 1, max)
p$Predicted_ethnicity <- ifelse(
p$Highest_Prob < threshold, "Ambiguous",
as.character(p$Predicted_ethnicity_nothresh)
)
p$Sample_ID <- rownames(p)
p <- p[, c(7, 1, 6, 2:5)]
return(tibble::as_tibble(p))
}
# This code is copied directly from glmnet v3.0.2, GPL-2 license
# see https://glmnet.stanford.edu/ for original glmnet package
# The authors and copy right holders include:
# Jerome Friedman [aut], Trevor Hastie [aut, cre], Rob Tibshirani [aut],
# Balasubramanian Narasimhan [aut], Kenneth Tay [aut], Noah Simon [aut],
# Junyang Qian [ctb]
glmnet_softmax <- function(x, ignore_labels = FALSE) {
d <- dim(x)
dd <- dimnames(x)[[2]]
if (is.null(dd) || !length(dd)) {
ignore_labels <- TRUE
}
nas <- apply(is.na(x), 1, any)
if (any(nas)) {
pclass <- rep(NA, d[1])
if (sum(nas) < d[1]) {
pclass2 <- glmnet_softmax(x[!nas, ], ignore_labels)
pclass[!nas] <- pclass2
if (is.factor(pclass2)) {
pclass <- factor(
pclass,
levels = seq(d[2]),
labels = levels(pclass2)
)
}
}
} else {
maxdist <- x[, 1]
pclass <- rep(1, d[1])
for (i in seq(2, d[2])) {
l <- x[, i] > maxdist
pclass[l] <- i
maxdist[l] <- x[l, i]
}
dd <- dimnames(x)[[2]]
if (!ignore_labels) {
pclass <- factor(pclass, levels = seq(d[2]), labels = dd)
}
}
pclass
}
pl_infer_ethnicity <- function(betas, threshold = 0.75) {
.Deprecated("predictEthnicity")
predictEthnicity(betas = betas, threshold = threshold)
}
|
##
## Exported symobls in package `sourcetools`
##
## Exported package methods
tokenize_string <- function (string)
{
.Call("sourcetools_tokenize_string", as.character(string),
PACKAGE = "sourcetools")
}
read <- function (path)
{
path <- normalizePath(path, mustWork = TRUE)
.Call("sourcetools_read", path, PACKAGE = "sourcetools")
}
tokenize_file <- function (path)
{
path <- normalizePath(path, mustWork = TRUE)
.Call("sourcetools_tokenize_file", path, PACKAGE = "sourcetools")
}
tokenize <- function (file = "", text = NULL)
{
if (is.null(text))
text <- read(file)
tokenize_string(text)
}
read_lines <- function (path)
{
path <- normalizePath(path, mustWork = TRUE)
.Call("sourcetools_read_lines", path, PACKAGE = "sourcetools")
}
read_lines_bytes <- function (path)
{
path <- normalizePath(path, mustWork = TRUE)
.Call("sourcetools_read_lines_bytes", path, PACKAGE = "sourcetools")
}
read_bytes <- function (path)
{
path <- normalizePath(path, mustWork = TRUE)
.Call("sourcetools_read_bytes", path, PACKAGE = "sourcetools")
}
## Package Data
# none
## Package Info
.skeleton_package_title = "Tools for Reading, Tokenizing and Parsing R Code"
.skeleton_package_version = "0.1.5"
.skeleton_package_depends = ""
.skeleton_package_imports = ""
## Internal
.skeleton_version = 5
## EOF | /testData/r_skeletons/sourcetools.R | permissive | yoongkang0122/r4intellij | R | false | false | 1,391 | r | ##
## Exported symobls in package `sourcetools`
##
## Exported package methods
tokenize_string <- function (string)
{
.Call("sourcetools_tokenize_string", as.character(string),
PACKAGE = "sourcetools")
}
read <- function (path)
{
path <- normalizePath(path, mustWork = TRUE)
.Call("sourcetools_read", path, PACKAGE = "sourcetools")
}
tokenize_file <- function (path)
{
path <- normalizePath(path, mustWork = TRUE)
.Call("sourcetools_tokenize_file", path, PACKAGE = "sourcetools")
}
tokenize <- function (file = "", text = NULL)
{
if (is.null(text))
text <- read(file)
tokenize_string(text)
}
read_lines <- function (path)
{
path <- normalizePath(path, mustWork = TRUE)
.Call("sourcetools_read_lines", path, PACKAGE = "sourcetools")
}
read_lines_bytes <- function (path)
{
path <- normalizePath(path, mustWork = TRUE)
.Call("sourcetools_read_lines_bytes", path, PACKAGE = "sourcetools")
}
read_bytes <- function (path)
{
path <- normalizePath(path, mustWork = TRUE)
.Call("sourcetools_read_bytes", path, PACKAGE = "sourcetools")
}
## Package Data
# none
## Package Info
.skeleton_package_title = "Tools for Reading, Tokenizing and Parsing R Code"
.skeleton_package_version = "0.1.5"
.skeleton_package_depends = ""
.skeleton_package_imports = ""
## Internal
.skeleton_version = 5
## EOF |
setwd("/Users/nuhabintayyash/GPcounts/paper_notebooks/tradeSeq/")
library(here)
library(slingshot)
library(RColorBrewer)
library(mgcv)
library(tradeSeq)
library(edgeR)
library(rafalib)
library(wesanderson)
library(ggplot2)
library(cowplot)
library(iCOBRA)
library(scales)
library(dplyr)
### prepare performance plots ####
cols <- c(rep(c("#C6DBEF", "#08306B"), each = 3), "#4292C6", "#4daf4a",
"#e41a1c", "#e78ac3", "#ff7f00", "darkgoldenrod1")
names(cols) <- c("tradeSeq_slingshot_end", "tradeSeq_GPfates_end", "tradeSeq_3_knots",
"tradeSeq_slingshot_pattern", "tradeSeq_GPfates_pattern",
"tradeSeq_10_knots", "tradeSeq_5_knots", "GPcounts_NB",
"tradeSeq_true_10k", "GPcounts_true_Gaussian", "GPcounts_Gaussian", "GPcounts_true_NB")
linetypes <- c(rep(c("solid", "solid", "solid"), 2), rep("solid", 7))
names(linetypes) <- c("tradeSeq_slingshot_end", "tradeSeq_GPfates_end", "tradeSeq_3_knots",
"tradeSeq_slingshot_pattern", "tradeSeq_GPfates_pattern",
"tradeSeq_10_knots", "tradeSeq_5_knots", "GPcounts_NB",
"tradeSeq_true_10k", "GPcounts_true_Gaussian", "GPcounts_Gaussian", "GPcounts_true_NB")
theme_set(theme_bw())
theme_update(legend.position = "none",
panel.grid.major.y = element_blank(),
panel.grid.minor.y = element_blank(),
panel.grid.major.x = element_line(linetype = "dashed", colour = "black"),
panel.grid.minor.x = element_line(linetype = "dashed", colour = "grey"),
axis.title.x = element_text(size = rel(1)),
axis.title.y = element_text(size = rel(1)),
axis.text.x = element_text(size = rel(.8)),
axis.text.y = element_text(size = rel(.8))
)
dir <-("/Users/nuhabintayyash/GPcounts/paper_notebooks/tradeSeq/")
cobraFiles <- list.files(dir, pattern="cobra*", full.names=TRUE)
cobraFiles <- cobraFiles[c(1,3:10,2)] #order from 1 to 10
plotPerformanceCurve <- function(cobraObject){
cn <- colnames(pval(cobraObject))
cn <- gsub(cn,pattern="tradeR",replacement="tradeSeq")
colnames(pval(cobraObject)) <- cn
colnames(cobraObject@pval) <- gsub(colnames(cobraObject@pval),pattern="tradeR",replacement="tradeSeq")
cobraObject <- calculate_adjp(cobraObject)
cobraObject <- calculate_performance(cobraObject, binary_truth = "status")
DyntoyPlot <- data.frame(FDP = cobraObject@fdrtprcurve$FDR,
TPR = cobraObject@fdrtprcurve$TPR,
method = cobraObject@fdrtprcurve$method,
cutoff = cobraObject@fdrtprcurve$CUTOFF)
pDyntoy <- ggplot(DyntoyPlot, aes(x = FDP, y = TPR, col = method)) +
geom_path(size = 1, aes(linetype = method)) +
xlab("FDP") +
scale_x_continuous(limits = c(0, 1), breaks = c(0.01, 0.05, 0.1, 0.5, 1),
minor_breaks = c(0:5) * .1) +
scale_y_continuous(limits = c(0, 1)) +
scale_color_manual(values = cols, breaks = names(cols)) +
scale_linetype_manual(values = linetypes, breaks = names(linetypes)) + ylab("TPR")
pDyntoy
}
for(ii in 1:length(cobraFiles)){
cobra <- readRDS(cobraFiles[ii])
assign(paste0("bifplot",ii),plotPerformanceCurve(cobra))
}
p1 <- plot_grid(bifplot1, bifplot2, bifplot3, bifplot4, bifplot5,
bifplot6, bifplot7, bifplot8, bifplot9, bifplot10,
nrow=2, ncol=5)
p1
plotsBif <- sapply(cobraFiles, function(file){
cobra <- readRDS(file)
plotPerformanceCurve(cobra)
})
resPlots <- plotsBif[seq(1,length(plotsBif),by=9)] # get relevant data frame
pAll <- ggplot(resPlots[[1]],
aes(x = FDP, y = TPR, col = method)) +
geom_path(size = 1, aes(linetype = method)) +
scale_color_manual(values = cols, breaks = names(cols)) +
scale_linetype_manual(values = linetypes, breaks = names(linetypes))
legend_all <- get_legend(pAll + labs(col = "", linetype = "") +
theme(legend.position = "bottom",
legend.text = element_text(size = 14)))
#legend.key.width = unit(1.5, "cm"),
pLeg <- plot_grid(pAll, legend_all, rel_heights=c(1,0.2), nrow=2, ncol=1)
pLeg
#### plot with trajectories
FQnorm <- function(counts){
rk <- apply(counts,2,rank,ties.method='min')
counts.sort <- apply(counts,2,sort)
refdist <- apply(counts.sort,1,median)
norm <- apply(rk,2,function(r){ refdist[r] })
rownames(norm) <- rownames(counts)
return(norm)
}
pal <- wes_palette("Zissou1", 12, type = "continuous")
dataAll <- readRDS("/Users/nuhabintayyash/GPcounts/paper_notebooks/tradeSeq/datasets_for_koen.rds")
for(datasetIter in c(1:10)){
data <- dataAll[[datasetIter]]
counts <- t(data$counts)
# get milestones
gid <- data$prior_information$groups_id
gid <- gid[match(colnames(counts),gid$cell_id),]
pal <- wes_palette("Zissou1", 12, type = "continuous")
truePseudotime <- data$prior_information$timecourse_continuous
g <- Hmisc::cut2(truePseudotime,g=12)
# quantile normalization
normCounts <- FQnorm(counts)
## dim red
pca <- prcomp(log1p(t(normCounts)), scale. = FALSE)
rd <- pca$x[,1:2]
#plot(rd, pch=16, asp = 1, col=pal[g])
library(princurve)
pcc <- principal_curve(rd, smoother="periodic_lowess")
#lines(x=pcc$s[order(pcc$lambda),1], y=pcc$s[order(pcc$lambda),2], col="red", lwd=2)
ggTraj <- ggplot(as.data.frame(rd), aes(x = PC1, y = PC2)) +
geom_point(col = pal[g], size=1) +
theme_classic() +
geom_path(x = pcc$s[order(pcc$lambda),1],
y = pcc$s[order(pcc$lambda),2])
assign(paste0("trajplot",datasetIter),ggTraj)
}
### fixed aspect ratio
p1 <- plot_grid(trajplot1 + coord_fixed(), trajplot2 + coord_fixed(), trajplot3 + coord_fixed(), trajplot4 + coord_fixed(), trajplot5 + coord_fixed(),
bifplot1 + coord_fixed(), bifplot2 + coord_fixed(), bifplot3 + coord_fixed(), bifplot4 + coord_fixed(), bifplot5 + coord_fixed(),
nrow=2, ncol=5)#, rel_heights=c(0.8,1,0.8,1))
pLeg1 <- plot_grid(p1, legend_all, rel_heights=c(1,0.15), nrow=2, ncol=1)
pLeg1
ggsave("/Users/nuhabintayyash/GPcounts/paper_notebooks/tradeSeq/individualPerformance_cyclic1To5_v2IncludingEdgeR.pdf", width = unit(15, "in"), height = unit(10, "in"), scale = .7)
## fixed aspect ratio
dev.new()
p2 <- plot_grid(trajplot6 + coord_fixed(), trajplot7 + coord_fixed(), trajplot8 + coord_fixed(), trajplot9 + coord_fixed(), trajplot10 + coord_fixed(),
bifplot6 + coord_fixed(), bifplot7 + coord_fixed(), bifplot8 + coord_fixed(), bifplot9 + coord_fixed(), bifplot10 + coord_fixed(),
nrow=2, ncol=5, rel_heights=c(0.8,1))
pLeg2 <- plot_grid(p2, legend_all, rel_heights=c(1,0.15), nrow=2, ncol=1)
pLeg2
ggsave("/Users/nuhabintayyash/GPcounts/paper_notebooks/tradeSeq/individualPerformance_cyclic6To10_v2IncludingEdgeR.pdf", width = unit(15, "in"), height = unit(10, "in"), scale = .7)
# ### mean plot
resList <- c()
resList2 <- c()
alphaSeq <- c(seq(1e-16,1e-9,length=100),seq(5e-9,1e-3,length=250),seq(5e-2,1,length=500))
alphaSeq2 <- c(seq(-20,0,length=100),seq(0,3,length=250),seq(3,50,length=500))
for(ii in 1:length(cobraFiles)){
cobra <- readRDS(cobraFiles[ii])
pvals <- pval(cobra)
colnames(pvals) <- gsub(colnames(pvals),pattern="tradeR",replacement="tradeSeq")
truths <- as.logical(truth(cobra)[,1])
# performance for all p-value based methods
hlp <- apply(pvals,2,function(x){
pOrder <- order(x,decreasing=FALSE)
padj <- p.adjust(x,"fdr")
df <- as.data.frame(t(sapply(alphaSeq, function(alpha){
R <- which(padj <= alpha)
fdr <- sum(!truths[R])/length(R) #false over rejected
tpr <- sum(truths[R])/sum(truths) #TP over all true
c(fdr=fdr, tpr=tpr, cutoff=alpha)
})))
})
scores <- score(cobra)
# performance for all score-value based methods
hlp2 <- apply(scores,2,function(x){
pOrder <- order(scores[,1],decreasing=FALSE)
df <- as.data.frame(t(sapply(alphaSeq2, function(alpha2){
R <- which(x >= alpha2)
fdr <- sum(!truths[R])/length(R) #false over rejected
tpr <- sum(truths[R])/sum(truths) #TP over all true
c(fdr=fdr, tpr=tpr, cutoff=alpha2)
})))
})
# summarize
dfIter <- do.call(rbind,hlp)
dfIter2 <- do.call(rbind,hlp2)
dfIter$method=rep(colnames(pvals),each=length(alphaSeq))
dfIter2$method=rep(colnames(scores),each=length(alphaSeq))
dfIter$dataset <- ii
resList[[ii]] <- dfIter
dfIter2$dataset <- ii
resList2[[ii]] <- dfIter2
}
# #### across all datasets
library(tidyverse)
df <- as_tibble(do.call(rbind,resList))
df <- df %>% group_by(method,cutoff) %>%
summarize(meanTPR=mean(tpr,na.rm=TRUE),
meanFDR=mean(fdr,na.rm=TRUE)) %>% arrange(method,cutoff)
df2 <- as_tibble(do.call(rbind,resList2))
df2 <- df2 %>% group_by(method,cutoff) %>%
summarize(meanTPR=mean(tpr,na.rm=TRUE),
meanFDR=mean(fdr,na.rm=TRUE)) %>% arrange(method,cutoff)
pMeanAll <- ggplot(df, aes(x=meanFDR, y=meanTPR, col=method)) + geom_path(size = 1) +
scale_x_continuous(limits = c(0, 1), breaks = c(0.01, 0.05, 0.1, 0.5, 1),
minor_breaks = c(0:5) * .1) +
scale_y_continuous(limits = c(0, 1)) +
scale_color_manual(values = cols, breaks = names(cols)) +
scale_linetype_manual(values = linetypes, breaks = names(linetypes)) #+ xlab("FDR") + ylab("TPR")
pMeanLegAll <- plot_grid(pMeanAll, legend_all, rel_heights=c(1,0.15), nrow=2, ncol=1)
pMeanLegAll
pMeanAll2 <- ggplot(df2, aes(x=meanFDR, y=meanTPR, col=method)) + geom_path(size = 1) +
scale_x_continuous(limits = c(0, 1), breaks = c(0.01, 0.05, 0.1, 0.5, 1),
minor_breaks = c(0:5) * .1) +
scale_y_continuous(limits = c(0, 1)) +
scale_color_manual(values = cols, breaks = names(cols)) +
scale_linetype_manual(values = linetypes, breaks = names(linetypes))# + xlab("FDR") + ylab("TPR")
pMeanLegAll2 <- plot_grid(pMeanAll2, legend_all, rel_heights=c(1,0.15), nrow=2, ncol=1)
pMeanLegAll2
df_all <- rbind(df, df2)
pMeanAll3 <- ggplot(df_all, aes(x=meanFDR, y=meanTPR, col=method)) + geom_path(size = 1) +
scale_x_continuous(limits = c(0, .6), breaks = c(0.01, 0.05, 0.1, 0.5, 1),
minor_breaks = c(0:5) * .1) +
scale_y_continuous(limits = c(0, 1)) +
scale_color_manual(values = cols, breaks = names(cols)) +
scale_linetype_manual(values = linetypes, breaks = names(linetypes)) +xlab("") + ylab("")+theme(axis.text.x = element_text(
size=18, angle=-90), axis.text.y = element_text( size=18, angle=0))
#+ xlab("FDR") + ylab("TPR")
#pMeanLegAll3 <- plot_grid(pMeanAll3, legend_all, rel_heights=c(1,0.15), nrow=2, ncol=1) + guides(fill=guide_legend(nrow=2,byrow=TRUE))
pMeanLegAll3
saveRDS(pMeanAll3, file="/Users/nuhabintayyash/GPcounts/paper_notebooks/tradeSeq/pMeanCycle_v2_GPcounts.rds")
| /paper_notebooks/tradeSeq/performanceTIPlotCyclic_acrossSimulations.R | permissive | ManchesterBioinference/GPcounts | R | false | false | 11,040 | r | setwd("/Users/nuhabintayyash/GPcounts/paper_notebooks/tradeSeq/")
library(here)
library(slingshot)
library(RColorBrewer)
library(mgcv)
library(tradeSeq)
library(edgeR)
library(rafalib)
library(wesanderson)
library(ggplot2)
library(cowplot)
library(iCOBRA)
library(scales)
library(dplyr)
### prepare performance plots ####
cols <- c(rep(c("#C6DBEF", "#08306B"), each = 3), "#4292C6", "#4daf4a",
"#e41a1c", "#e78ac3", "#ff7f00", "darkgoldenrod1")
names(cols) <- c("tradeSeq_slingshot_end", "tradeSeq_GPfates_end", "tradeSeq_3_knots",
"tradeSeq_slingshot_pattern", "tradeSeq_GPfates_pattern",
"tradeSeq_10_knots", "tradeSeq_5_knots", "GPcounts_NB",
"tradeSeq_true_10k", "GPcounts_true_Gaussian", "GPcounts_Gaussian", "GPcounts_true_NB")
linetypes <- c(rep(c("solid", "solid", "solid"), 2), rep("solid", 7))
names(linetypes) <- c("tradeSeq_slingshot_end", "tradeSeq_GPfates_end", "tradeSeq_3_knots",
"tradeSeq_slingshot_pattern", "tradeSeq_GPfates_pattern",
"tradeSeq_10_knots", "tradeSeq_5_knots", "GPcounts_NB",
"tradeSeq_true_10k", "GPcounts_true_Gaussian", "GPcounts_Gaussian", "GPcounts_true_NB")
theme_set(theme_bw())
theme_update(legend.position = "none",
panel.grid.major.y = element_blank(),
panel.grid.minor.y = element_blank(),
panel.grid.major.x = element_line(linetype = "dashed", colour = "black"),
panel.grid.minor.x = element_line(linetype = "dashed", colour = "grey"),
axis.title.x = element_text(size = rel(1)),
axis.title.y = element_text(size = rel(1)),
axis.text.x = element_text(size = rel(.8)),
axis.text.y = element_text(size = rel(.8))
)
dir <-("/Users/nuhabintayyash/GPcounts/paper_notebooks/tradeSeq/")
cobraFiles <- list.files(dir, pattern="cobra*", full.names=TRUE)
cobraFiles <- cobraFiles[c(1,3:10,2)] #order from 1 to 10
plotPerformanceCurve <- function(cobraObject){
cn <- colnames(pval(cobraObject))
cn <- gsub(cn,pattern="tradeR",replacement="tradeSeq")
colnames(pval(cobraObject)) <- cn
colnames(cobraObject@pval) <- gsub(colnames(cobraObject@pval),pattern="tradeR",replacement="tradeSeq")
cobraObject <- calculate_adjp(cobraObject)
cobraObject <- calculate_performance(cobraObject, binary_truth = "status")
DyntoyPlot <- data.frame(FDP = cobraObject@fdrtprcurve$FDR,
TPR = cobraObject@fdrtprcurve$TPR,
method = cobraObject@fdrtprcurve$method,
cutoff = cobraObject@fdrtprcurve$CUTOFF)
pDyntoy <- ggplot(DyntoyPlot, aes(x = FDP, y = TPR, col = method)) +
geom_path(size = 1, aes(linetype = method)) +
xlab("FDP") +
scale_x_continuous(limits = c(0, 1), breaks = c(0.01, 0.05, 0.1, 0.5, 1),
minor_breaks = c(0:5) * .1) +
scale_y_continuous(limits = c(0, 1)) +
scale_color_manual(values = cols, breaks = names(cols)) +
scale_linetype_manual(values = linetypes, breaks = names(linetypes)) + ylab("TPR")
pDyntoy
}
for(ii in 1:length(cobraFiles)){
cobra <- readRDS(cobraFiles[ii])
assign(paste0("bifplot",ii),plotPerformanceCurve(cobra))
}
p1 <- plot_grid(bifplot1, bifplot2, bifplot3, bifplot4, bifplot5,
bifplot6, bifplot7, bifplot8, bifplot9, bifplot10,
nrow=2, ncol=5)
p1
plotsBif <- sapply(cobraFiles, function(file){
cobra <- readRDS(file)
plotPerformanceCurve(cobra)
})
resPlots <- plotsBif[seq(1,length(plotsBif),by=9)] # get relevant data frame
pAll <- ggplot(resPlots[[1]],
aes(x = FDP, y = TPR, col = method)) +
geom_path(size = 1, aes(linetype = method)) +
scale_color_manual(values = cols, breaks = names(cols)) +
scale_linetype_manual(values = linetypes, breaks = names(linetypes))
legend_all <- get_legend(pAll + labs(col = "", linetype = "") +
theme(legend.position = "bottom",
legend.text = element_text(size = 14)))
#legend.key.width = unit(1.5, "cm"),
pLeg <- plot_grid(pAll, legend_all, rel_heights=c(1,0.2), nrow=2, ncol=1)
pLeg
#### plot with trajectories
FQnorm <- function(counts){
rk <- apply(counts,2,rank,ties.method='min')
counts.sort <- apply(counts,2,sort)
refdist <- apply(counts.sort,1,median)
norm <- apply(rk,2,function(r){ refdist[r] })
rownames(norm) <- rownames(counts)
return(norm)
}
pal <- wes_palette("Zissou1", 12, type = "continuous")
dataAll <- readRDS("/Users/nuhabintayyash/GPcounts/paper_notebooks/tradeSeq/datasets_for_koen.rds")
for(datasetIter in c(1:10)){
data <- dataAll[[datasetIter]]
counts <- t(data$counts)
# get milestones
gid <- data$prior_information$groups_id
gid <- gid[match(colnames(counts),gid$cell_id),]
pal <- wes_palette("Zissou1", 12, type = "continuous")
truePseudotime <- data$prior_information$timecourse_continuous
g <- Hmisc::cut2(truePseudotime,g=12)
# quantile normalization
normCounts <- FQnorm(counts)
## dim red
pca <- prcomp(log1p(t(normCounts)), scale. = FALSE)
rd <- pca$x[,1:2]
#plot(rd, pch=16, asp = 1, col=pal[g])
library(princurve)
pcc <- principal_curve(rd, smoother="periodic_lowess")
#lines(x=pcc$s[order(pcc$lambda),1], y=pcc$s[order(pcc$lambda),2], col="red", lwd=2)
ggTraj <- ggplot(as.data.frame(rd), aes(x = PC1, y = PC2)) +
geom_point(col = pal[g], size=1) +
theme_classic() +
geom_path(x = pcc$s[order(pcc$lambda),1],
y = pcc$s[order(pcc$lambda),2])
assign(paste0("trajplot",datasetIter),ggTraj)
}
### fixed aspect ratio
p1 <- plot_grid(trajplot1 + coord_fixed(), trajplot2 + coord_fixed(), trajplot3 + coord_fixed(), trajplot4 + coord_fixed(), trajplot5 + coord_fixed(),
bifplot1 + coord_fixed(), bifplot2 + coord_fixed(), bifplot3 + coord_fixed(), bifplot4 + coord_fixed(), bifplot5 + coord_fixed(),
nrow=2, ncol=5)#, rel_heights=c(0.8,1,0.8,1))
pLeg1 <- plot_grid(p1, legend_all, rel_heights=c(1,0.15), nrow=2, ncol=1)
pLeg1
ggsave("/Users/nuhabintayyash/GPcounts/paper_notebooks/tradeSeq/individualPerformance_cyclic1To5_v2IncludingEdgeR.pdf", width = unit(15, "in"), height = unit(10, "in"), scale = .7)
## fixed aspect ratio
dev.new()
p2 <- plot_grid(trajplot6 + coord_fixed(), trajplot7 + coord_fixed(), trajplot8 + coord_fixed(), trajplot9 + coord_fixed(), trajplot10 + coord_fixed(),
bifplot6 + coord_fixed(), bifplot7 + coord_fixed(), bifplot8 + coord_fixed(), bifplot9 + coord_fixed(), bifplot10 + coord_fixed(),
nrow=2, ncol=5, rel_heights=c(0.8,1))
pLeg2 <- plot_grid(p2, legend_all, rel_heights=c(1,0.15), nrow=2, ncol=1)
pLeg2
ggsave("/Users/nuhabintayyash/GPcounts/paper_notebooks/tradeSeq/individualPerformance_cyclic6To10_v2IncludingEdgeR.pdf", width = unit(15, "in"), height = unit(10, "in"), scale = .7)
# ### mean plot
resList <- c()
resList2 <- c()
alphaSeq <- c(seq(1e-16,1e-9,length=100),seq(5e-9,1e-3,length=250),seq(5e-2,1,length=500))
alphaSeq2 <- c(seq(-20,0,length=100),seq(0,3,length=250),seq(3,50,length=500))
for(ii in 1:length(cobraFiles)){
cobra <- readRDS(cobraFiles[ii])
pvals <- pval(cobra)
colnames(pvals) <- gsub(colnames(pvals),pattern="tradeR",replacement="tradeSeq")
truths <- as.logical(truth(cobra)[,1])
# performance for all p-value based methods
hlp <- apply(pvals,2,function(x){
pOrder <- order(x,decreasing=FALSE)
padj <- p.adjust(x,"fdr")
df <- as.data.frame(t(sapply(alphaSeq, function(alpha){
R <- which(padj <= alpha)
fdr <- sum(!truths[R])/length(R) #false over rejected
tpr <- sum(truths[R])/sum(truths) #TP over all true
c(fdr=fdr, tpr=tpr, cutoff=alpha)
})))
})
scores <- score(cobra)
# performance for all score-value based methods
hlp2 <- apply(scores,2,function(x){
pOrder <- order(scores[,1],decreasing=FALSE)
df <- as.data.frame(t(sapply(alphaSeq2, function(alpha2){
R <- which(x >= alpha2)
fdr <- sum(!truths[R])/length(R) #false over rejected
tpr <- sum(truths[R])/sum(truths) #TP over all true
c(fdr=fdr, tpr=tpr, cutoff=alpha2)
})))
})
# summarize
dfIter <- do.call(rbind,hlp)
dfIter2 <- do.call(rbind,hlp2)
dfIter$method=rep(colnames(pvals),each=length(alphaSeq))
dfIter2$method=rep(colnames(scores),each=length(alphaSeq))
dfIter$dataset <- ii
resList[[ii]] <- dfIter
dfIter2$dataset <- ii
resList2[[ii]] <- dfIter2
}
# #### across all datasets
library(tidyverse)
df <- as_tibble(do.call(rbind,resList))
df <- df %>% group_by(method,cutoff) %>%
summarize(meanTPR=mean(tpr,na.rm=TRUE),
meanFDR=mean(fdr,na.rm=TRUE)) %>% arrange(method,cutoff)
df2 <- as_tibble(do.call(rbind,resList2))
df2 <- df2 %>% group_by(method,cutoff) %>%
summarize(meanTPR=mean(tpr,na.rm=TRUE),
meanFDR=mean(fdr,na.rm=TRUE)) %>% arrange(method,cutoff)
pMeanAll <- ggplot(df, aes(x=meanFDR, y=meanTPR, col=method)) + geom_path(size = 1) +
scale_x_continuous(limits = c(0, 1), breaks = c(0.01, 0.05, 0.1, 0.5, 1),
minor_breaks = c(0:5) * .1) +
scale_y_continuous(limits = c(0, 1)) +
scale_color_manual(values = cols, breaks = names(cols)) +
scale_linetype_manual(values = linetypes, breaks = names(linetypes)) #+ xlab("FDR") + ylab("TPR")
pMeanLegAll <- plot_grid(pMeanAll, legend_all, rel_heights=c(1,0.15), nrow=2, ncol=1)
pMeanLegAll
pMeanAll2 <- ggplot(df2, aes(x=meanFDR, y=meanTPR, col=method)) + geom_path(size = 1) +
scale_x_continuous(limits = c(0, 1), breaks = c(0.01, 0.05, 0.1, 0.5, 1),
minor_breaks = c(0:5) * .1) +
scale_y_continuous(limits = c(0, 1)) +
scale_color_manual(values = cols, breaks = names(cols)) +
scale_linetype_manual(values = linetypes, breaks = names(linetypes))# + xlab("FDR") + ylab("TPR")
pMeanLegAll2 <- plot_grid(pMeanAll2, legend_all, rel_heights=c(1,0.15), nrow=2, ncol=1)
pMeanLegAll2
df_all <- rbind(df, df2)
pMeanAll3 <- ggplot(df_all, aes(x=meanFDR, y=meanTPR, col=method)) + geom_path(size = 1) +
scale_x_continuous(limits = c(0, .6), breaks = c(0.01, 0.05, 0.1, 0.5, 1),
minor_breaks = c(0:5) * .1) +
scale_y_continuous(limits = c(0, 1)) +
scale_color_manual(values = cols, breaks = names(cols)) +
scale_linetype_manual(values = linetypes, breaks = names(linetypes)) +xlab("") + ylab("")+theme(axis.text.x = element_text(
size=18, angle=-90), axis.text.y = element_text( size=18, angle=0))
#+ xlab("FDR") + ylab("TPR")
#pMeanLegAll3 <- plot_grid(pMeanAll3, legend_all, rel_heights=c(1,0.15), nrow=2, ncol=1) + guides(fill=guide_legend(nrow=2,byrow=TRUE))
pMeanLegAll3
saveRDS(pMeanAll3, file="/Users/nuhabintayyash/GPcounts/paper_notebooks/tradeSeq/pMeanCycle_v2_GPcounts.rds")
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/RCy3-deprecated.R
\name{cyPlot}
\alias{cyPlot}
\alias{cyPlotdefunct}
\title{DEFUNCT: cyPlot}
\usage{
cyPlotdefunct
}
\value{
None
}
\description{
This function is defunct and will be removed in the next release.
}
| /man/cyPlot-defunct.Rd | permissive | olbeimarton/RCy3 | R | false | true | 292 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/RCy3-deprecated.R
\name{cyPlot}
\alias{cyPlot}
\alias{cyPlotdefunct}
\title{DEFUNCT: cyPlot}
\usage{
cyPlotdefunct
}
\value{
None
}
\description{
This function is defunct and will be removed in the next release.
}
|
#' A function to combine otu tables
#'
#' This function takes two or more otu tables as a list of matrices, and combines them.
#' Must keep rows as samples and taxa as columns (or vice versa, must be consistent)
#' Must provide a list of otu matrices as input (i.e., not a list of dataframes)
#'
#' @param otu_tabs a list of otu matrices to be combined
#' @export
#' @examples
#' otu_tab1 <- matrix(
#' c(2, 4, 3, 1, 5, 7),
#' nrow=3, ncol=2)
#' otu_tab2 <- matrix(
#' c(3, 5, 1, 3, 5, 8),
#' nrow=2, ncol=3)
#' rownames(otu_tab1) <- c("otu1","otu2","otu3")
#' colnames(otu_tab1) <- c("sample1","sample2")
#' rownames(otu_tab2) <- c("otu3","otu4")
#' colnames(otu_tab2) <- c("sample3","sample4","sample5")
#'
#' otu_tabs <- list(otu_tab1, otu_tab2)
#' combine_otu_tables(otu_tabs)
combine_otu_tables <- function(otu_tabs){
sample_list <- lapply(otu_tabs, rownames) # next create a list of rownames
taxa_list <- lapply(otu_tabs, colnames) # create a list of colnames (taxa here)
all_samp_names <- Reduce(union, sample_list) #then get the union of the list, samples as rows here
all_tax_names <- Reduce(union, taxa_list)
all_samples <- matrix(0, nrow=length(all_samp_names), ncol=length(all_tax_names),
dimnames=list(all_samp_names, all_tax_names)) #create matrix of correct dimensions
all_samples[rownames(otu_tabs[[1]]), colnames(otu_tabs[[1]])] <- otu_tabs[[1]] #add in first otu table
#loop through the rest of the otu tables, adding each
for (i in 2:length(otu_tabs)) {
all_samples[rownames(otu_tabs[[i]]), colnames(otu_tabs[[i]])] <- otu_tabs[[i]]
}
return(all_samples)
}
# need more inforamtion to describe otu table format
# otu_tabs <- list(otu_table1, otu_table2)
# combined_otu_table <- combine_otu_tables(otu_tabs)
| /R/combine_otu_tables.R | no_license | cjschulz/micromixR | R | false | false | 1,875 | r | #' A function to combine otu tables
#'
#' This function takes two or more otu tables as a list of matrices, and combines them.
#' Must keep rows as samples and taxa as columns (or vice versa, must be consistent)
#' Must provide a list of otu matrices as input (i.e., not a list of dataframes)
#'
#' @param otu_tabs a list of otu matrices to be combined
#' @export
#' @examples
#' otu_tab1 <- matrix(
#' c(2, 4, 3, 1, 5, 7),
#' nrow=3, ncol=2)
#' otu_tab2 <- matrix(
#' c(3, 5, 1, 3, 5, 8),
#' nrow=2, ncol=3)
#' rownames(otu_tab1) <- c("otu1","otu2","otu3")
#' colnames(otu_tab1) <- c("sample1","sample2")
#' rownames(otu_tab2) <- c("otu3","otu4")
#' colnames(otu_tab2) <- c("sample3","sample4","sample5")
#'
#' otu_tabs <- list(otu_tab1, otu_tab2)
#' combine_otu_tables(otu_tabs)
combine_otu_tables <- function(otu_tabs){
sample_list <- lapply(otu_tabs, rownames) # next create a list of rownames
taxa_list <- lapply(otu_tabs, colnames) # create a list of colnames (taxa here)
all_samp_names <- Reduce(union, sample_list) #then get the union of the list, samples as rows here
all_tax_names <- Reduce(union, taxa_list)
all_samples <- matrix(0, nrow=length(all_samp_names), ncol=length(all_tax_names),
dimnames=list(all_samp_names, all_tax_names)) #create matrix of correct dimensions
all_samples[rownames(otu_tabs[[1]]), colnames(otu_tabs[[1]])] <- otu_tabs[[1]] #add in first otu table
#loop through the rest of the otu tables, adding each
for (i in 2:length(otu_tabs)) {
all_samples[rownames(otu_tabs[[i]]), colnames(otu_tabs[[i]])] <- otu_tabs[[i]]
}
return(all_samples)
}
# need more inforamtion to describe otu table format
# otu_tabs <- list(otu_table1, otu_table2)
# combined_otu_table <- combine_otu_tables(otu_tabs)
|
# Copyright 2001 by Nicholas Lewin-Koh, modified RSB 2016-05-31
#
#soi.graph <- function(tri.nb, coords){
# x <- coords
# if (!is.matrix(x)) stop("Data not in matrix form")
# if (any(is.na(x))) stop("Data cannot include NAs")
# np<-length(tri.nb)
# noedges<-0
# rad<-nearneigh<-rep(0,np)
# neigh<-unlist(tri.nb)
# noneigh<-unlist(lapply(tri.nb,length))
# g1<-g2<-rep(0,sum(noneigh))
# storage.mode(x) <- "double"
# called Computational Geometry in C functions banned by Debian admins
# answ<-.C("compute_soi", np=as.integer(np), from=as.integer(g1),
# to=as.integer(g2), nedges=as.integer(noedges),
# notri.nb=as.integer(noneigh), tri.nb=as.integer(neigh),
# nn=as.integer(nearneigh),
# circles=as.double(rad), x=x[,1], y=x[,2],
# PACKAGE="spdep")
# answ$from<-answ$from[1:answ$nedges]
# answ$to<-answ$to[1:answ$nedges]
# answ<-list(np=answ$np,nedges=answ$nedges,
# from=answ$from,to=answ$to,circles=answ$circ)
# attr(answ, "call") <- match.call()
# class(answ)<-c("Graph","SOI")
# answ
#}
soi.graph <- function(tri.nb, coords, quadsegs=10){
obj <- NULL
if (!is.matrix(coords)) obj <- coords
if (inherits(obj, "SpatialPoints")) {
obj <- sf::st_geometry(sf::st_as_sf(obj))
}
if (inherits(obj, "sfc")) {
if (!inherits(obj, "sfc_POINT"))
stop("Point geometries required")
if (attr(obj, "n_empty") > 0L)
stop("Empty geometries found")
if (!is.na(sf::st_is_longlat(obj)) && sf::st_is_longlat(obj))
warning("tri2nb: coordinates should be planar")
coords <- sf::st_coordinates(obj)
}
if (!is.matrix(coords)) stop("Data not in matrix form")
if (any(is.na(coords))) stop("Data cannot include NAs")
stopifnot(length(tri.nb) == nrow(coords))
if (requireNamespace("dbscan", quietly = TRUE)) {
dists_1 <- dbscan::kNN(coords, k=1)$dist[,1]
} else {
stop("dbscan required")
}
if (is.null(obj)) obj <- st_geometry(st_as_sf(as.data.frame(coords),
coords=1:2))
if(inherits(obj, "sfc")) {
bobj <- st_buffer(obj, dist=dists_1, nQuadSegs=quadsegs)
gI <- st_intersects(bobj)
}
gI_1 <- lapply(1:length(gI), function(i) {
x <- gI[[i]][match(tri.nb[[i]], gI[[i]])]; x[!is.na(x)]
})
answ <- list(np=length(tri.nb),nedges=length(unlist(gI_1)),
from=rep(1:length(tri.nb), times=sapply(gI_1, length)), to=unlist(gI_1),
circles=dists_1)
attr(answ, "call") <- match.call()
class(answ) <- c("Graph","SOI")
answ
}
| /R/soi.R | no_license | jeffcsauer/spdep | R | false | false | 2,478 | r | # Copyright 2001 by Nicholas Lewin-Koh, modified RSB 2016-05-31
#
#soi.graph <- function(tri.nb, coords){
# x <- coords
# if (!is.matrix(x)) stop("Data not in matrix form")
# if (any(is.na(x))) stop("Data cannot include NAs")
# np<-length(tri.nb)
# noedges<-0
# rad<-nearneigh<-rep(0,np)
# neigh<-unlist(tri.nb)
# noneigh<-unlist(lapply(tri.nb,length))
# g1<-g2<-rep(0,sum(noneigh))
# storage.mode(x) <- "double"
# called Computational Geometry in C functions banned by Debian admins
# answ<-.C("compute_soi", np=as.integer(np), from=as.integer(g1),
# to=as.integer(g2), nedges=as.integer(noedges),
# notri.nb=as.integer(noneigh), tri.nb=as.integer(neigh),
# nn=as.integer(nearneigh),
# circles=as.double(rad), x=x[,1], y=x[,2],
# PACKAGE="spdep")
# answ$from<-answ$from[1:answ$nedges]
# answ$to<-answ$to[1:answ$nedges]
# answ<-list(np=answ$np,nedges=answ$nedges,
# from=answ$from,to=answ$to,circles=answ$circ)
# attr(answ, "call") <- match.call()
# class(answ)<-c("Graph","SOI")
# answ
#}
soi.graph <- function(tri.nb, coords, quadsegs=10){
obj <- NULL
if (!is.matrix(coords)) obj <- coords
if (inherits(obj, "SpatialPoints")) {
obj <- sf::st_geometry(sf::st_as_sf(obj))
}
if (inherits(obj, "sfc")) {
if (!inherits(obj, "sfc_POINT"))
stop("Point geometries required")
if (attr(obj, "n_empty") > 0L)
stop("Empty geometries found")
if (!is.na(sf::st_is_longlat(obj)) && sf::st_is_longlat(obj))
warning("tri2nb: coordinates should be planar")
coords <- sf::st_coordinates(obj)
}
if (!is.matrix(coords)) stop("Data not in matrix form")
if (any(is.na(coords))) stop("Data cannot include NAs")
stopifnot(length(tri.nb) == nrow(coords))
if (requireNamespace("dbscan", quietly = TRUE)) {
dists_1 <- dbscan::kNN(coords, k=1)$dist[,1]
} else {
stop("dbscan required")
}
if (is.null(obj)) obj <- st_geometry(st_as_sf(as.data.frame(coords),
coords=1:2))
if(inherits(obj, "sfc")) {
bobj <- st_buffer(obj, dist=dists_1, nQuadSegs=quadsegs)
gI <- st_intersects(bobj)
}
gI_1 <- lapply(1:length(gI), function(i) {
x <- gI[[i]][match(tri.nb[[i]], gI[[i]])]; x[!is.na(x)]
})
answ <- list(np=length(tri.nb),nedges=length(unlist(gI_1)),
from=rep(1:length(tri.nb), times=sapply(gI_1, length)), to=unlist(gI_1),
circles=dists_1)
attr(answ, "call") <- match.call()
class(answ) <- c("Graph","SOI")
answ
}
|
# This script generates the random forest model used in this package
mtcars_rf <- randomForest::randomForest(mpg ~ cyl + wt, data = mtcars)
# Warning about "partial argument match of 'along' to 'along.with'" here
# It doesn't seem to affect the results and this is just a toy example.
# Ignoring.
usethis::use_data(mtcars_rf, internal = TRUE, overwrite = TRUE)
| /inst/generate-random-forest-model.R | permissive | mdneuzerling/ImportingRandomForest | R | false | false | 363 | r | # This script generates the random forest model used in this package
mtcars_rf <- randomForest::randomForest(mpg ~ cyl + wt, data = mtcars)
# Warning about "partial argument match of 'along' to 'along.with'" here
# It doesn't seem to affect the results and this is just a toy example.
# Ignoring.
usethis::use_data(mtcars_rf, internal = TRUE, overwrite = TRUE)
|
# Required R packages
library(ape)
library(phytools)
install.packages('TreeDist')
library(TreeDist)
# load treespace and packages for plotting:
library(treespace)
library(phylogram)
library(phangorn)
library(seqinr)
library(adegraphics)
library(adegenet)
library(apTreeshape)
library(ggtree)
# Set seed for reproducibility
set.seed(1)
# Load metadata file
#SRA_metadata <- read.csv("Salmonella_outbreak_SRA_metadata.csv", header = FALSE)
# Read in phylogenetic trees
lyve_tree <- read.tree(file = "Pipeline_results/Old_results/Ecoli_romaine_outbreak/exported_trees/lyveset.newick")
# kSNP3 tree.NJ.tre, tree.ML.tre, tree.core.tre, tree.parsimony.tre
ksnp_tree <- read.tree(file = "Pipeline_results/Old_results/Ecoli_romaine_outbreak/exported_trees/ksnp3.newick")
# Cfsan
cfsan_tree <- read.tree(file = "Pipeline_results/Old_results/Ecoli_romaine_outbreak/exported_trees/cfsan.newick")
# Enterobase
#enterobase_tree <- read.tree(file = "Pipeline_results/Old_resultsE_coli_romaine_outbreak/etoki/enterobase_SRA_phylo_tree.nwk")
enterobase_tree <- read.tree(file = "Pipeline_results/Old_results/Ecoli_romaine_outbreak/exported_trees/enterobase.newick")
# Combine trees
combined_trees <- c(lyve_tree,ksnp_tree,cfsan_tree,enterobase_tree)
# Combine trees from single dataset into vector
dataset1_tree_vector <- c(lyve_tree,ksnp_tree,cfsan_tree,enterobase_tree)
dataset1_tree_vector <- c(as.phylo(lyve_tree),as.phylo(ksnp_tree),as.phylo(cfsan_tree),as.phylo(enterobase_tree))
# From this point on, you have all of the phylogenetic trees loaded
# and can perform any further analysis you wish.
# Alot of the code below is still experimental and needs improvement.
#
##
### Code for subsetting trees with unmatched nodes
## Need to automate this
#
## Check for sample matches, between each tree
all_SRA_to_drop = c()
SRA_to_drop <- unique(enterobase_tree$tip.label[! enterobase_tree$tip.label %in% cfsan_tree$tip.label])
all_SRA_to_drop = c(all_SRA_to_drop,SRA_to_drop)
SRA_to_drop <- unique(enterobase_tree$tip.label[! enterobase_tree$tip.label %in% ksnp_tree$tip.label])
all_SRA_to_drop = c(all_SRA_to_drop,SRA_to_drop)
SRA_to_drop <- unique(cfsan_tree$tip.label[! cfsan_tree$tip.label %in% enterobase_tree$tip.label])
all_SRA_to_drop = c(all_SRA_to_drop,SRA_to_drop)
SRA_to_drop <- unique(lyve_tree$tip.label[! lyve_tree$tip.label %in% ksnp_tree$tip.label])
SRA_to_drop <- unique(ksnp_tree$tip.label[! ksnp_tree$tip.label %in% lyve_tree$tip.label])
all_SRA_to_drop <- unique(all_SRA_to_drop)
lyve_tree <- drop.tip(combined_trees[[1]], all_SRA_to_drop)
ksnp_tree <- drop.tip(combined_trees[[2]], all_SRA_to_drop)
cfsan_tree <- drop.tip(combined_trees[[3]], all_SRA_to_drop)
enterobase_tree <- drop.tip(combined_trees[[4]], all_SRA_to_drop)
# Add root to tree
lyve_tree_rooted <- root(lyve_tree,1, r = TRUE)
ksnp_tree_rooted <- root(ksnp_tree,1, r = TRUE)
cfsan_tree_rooted <- root(cfsan_tree,1, r = TRUE)
enterobase_tree_rooted <- root(enterobase_tree,1, r = TRUE)
combined_trees_clean <- c(lyve_tree,ksnp_tree,cfsan_tree,enterobase_tree)
#
##
### TreeDist
## Generalized Robinson-Foulds distance
#
VisualizeMatching(SharedPhylogeneticInfo, lyve_tree, ksnp_tree,
Plot = TreeDistPlot, matchZeros = FALSE)
SharedPhylogeneticInfo(lyve_tree, ksnp_tree)
MutualClusteringInfo(lyve_tree, ksnp_tree)
NyeSimilarity(lyve_tree, ksnp_tree)
JaccardRobinsonFoulds(lyve_tree, ksnp_tree)
MatchingSplitDistance(lyve_tree, ksnp_tree)
MatchingSplitInfoDistance(lyve_tree, ksnp_tree)
VisualizeMatching(JaccardRobinsonFoulds, lyve_tree, ksnp_tree,
Plot = TreeDistPlot, matchZeros = FALSE)
#
##
### TreeDist
## Using a suitable distance metric, projecting distances
#
# Tree colors
library('TreeTools', quietly = TRUE, warn.conflicts = FALSE)
treeNumbers <- c(1:4)
spectrum <- viridisLite::plasma(4)
treeCols <- spectrum[treeNumbers]
# calculate distances
distances <- ClusteringInfoDistance(combined_trees_clean)
distances <- RobinsonFoulds(combined_trees_clean)
distances <- as.dist(Quartet::QuartetDivergence(Quartet::ManyToManyQuartetAgreement(combined_trees_clean), similarity = FALSE))
# Projecting distances
#Then we need to reduce the dimensionality of these distances. We’ll start out with a 12-dimensional projection; if needed, we can always drop higher dimensions.
#Principal components analysis is quick and performs very well:
projection <- cmdscale(distances, k = 3)
# Alternative projection methods do exist, and sometimes give slightly better projections. isoMDS() performs non-metric multidimensional scaling (MDS) with the Kruskal-1 stress function (Kruskal, 1964):
kruskal <- MASS::isoMDS(distances, k = 3)
projection <- kruskal$points
#whereas sammon(), one of many metric MDS methods, uses Sammon’s stress function (Sammon, 1969):
sammon <- MASS::sammon(distances, k = 3)
projection <- sammon$points
#That’s a good start. It is tempting to plot the first two dimensions arising from this projection and be done:
par(mar = rep(0, 4))
plot(projection,
asp = 1, # Preserve aspect ratio - do not distort distances
ann = FALSE, axes = FALSE, # Don't label axes: dimensions are meaningless
col = treeCols, pch = 16
)
#
## Identifying clusters
#
# A quick visual inspection suggests at least two clusters, with the possibility of further subdivision
# of the brighter trees. But visual inspection can be highly misleading (Smith, 2021).
# We must take a statistical approach. A combination of partitioning around medoids and hierarchical
# clustering with minimax linkage will typically find a clustering solution that is close to optimal,
# if one exists (Smith, 2021).
library(protoclust)
possibleClusters <- 3:10
# Had to choose static K value
pamClusters <- lapply(possibleClusters, function (x) cluster::pam(distances, k = 3))
pamSils <- vapply(pamClusters, function (pamCluster) {
mean(cluster::silhouette(pamCluster)[, 3])
}, double(1))
bestPam <- which.max(pamSils)
pamSil <- pamSils[bestPam]
pamCluster <- pamClusters[[bestPam]]$cluster
hTree <- protoclust::protoclust(distances)
hClusters <- lapply(possibleClusters, function (k) cutree(hTree, k = 3))
hSils <- vapply(hClusters, function (hCluster) {
mean(cluster::silhouette(hCluster, distances)[, 3])
}, double(1))
bestH <- which.max(hSils)
hSil <- hSils[bestH]
hCluster <- hClusters[[bestH]]
plot(pamSils ~ possibleClusters,
xlab = 'Number of clusters', ylab = 'Silhouette coefficient',
ylim = range(c(pamSils, hSils)))
points(hSils ~ possibleClusters, pch = 2)
legend('topright', c('PAM', 'Hierarchical'), pch = 1:2)
# Silhouette coefficients of < 0.25 suggest that structure is not meaningful; > 0.5 denotes good evidence
# of clustering, and > 0.7 strong evidence (Kaufman & Rousseeuw, 1990). The evidence for the visually
# apparent clustering is not as strong as it first appears. Let’s explore our two-cluster hierarchical
# clustering solution anyway.
cluster <- hClusters[[2 - 1]]
#We can visualize the clustering solution as a tree:
class(hTree) <- 'hclust'
par(mar = c(0, 0, 0, 0))
plot(hTree, labels = FALSE, main = '')
points(seq_along(trees), rep(1, length(trees)), pch = 16,
col = spectrum[hTree$order])
#Another thing we may wish to do is to take the consensus of each cluster:
par(mfrow = c(1, 2), mar = rep(0.2, 4))
col1 <- spectrum[mean(treeNumbers[cluster == 1])]
col2 <- spectrum[mean(treeNumbers[cluster == 2])]
plot(consensus(trees[cluster == 1]), edge.color = col1, edge.width = 2, tip.color = col1)
plot(consensus(trees[cluster == 2]), edge.color = col2, edge.width = 2, tip.color = col2)
# Validating a projection
# Now let’s evaluate whether our plot of tree space is representative. First we want to know how many dimensions are necessary to adequately represent the true distances between trees. We hope for a trustworthiness × continuity score of > 0.9 for a usable projection, or > 0.95 for a good one.
library(TreeTools)
# ProjectionQuality doesn't work with regular TreeDist
#remotes::install_github("ms609/TreeDist")
txc <- vapply(1:12, function (k) {
newDist <- dist(projection[, seq_len(k)])
TreeTools::ProjectionQuality(distances, newDist, 10)['TxC']
}, 0)
plot(txc, xlab = 'Dimension')
abline(h = 0.9, lty = 2)
# To help establish visually what structures are more likely to be genuine, we might also choose to calculate a minimum spanning tree:
mstEnds <- MSTEdges(distances)
# Let’s plot the first five dimensions of our tree space, highlighting the convex hulls of our clusters:
plotSeq <- matrix(0, 5, 5)
plotSeq[upper.tri(plotSeq)] <- seq_len(5 * (5 - 1) / 2)
plotSeq <- t(plotSeq[-5, -1])
plotSeq[c(5, 10, 15)] <- 11:13
layout(plotSeq)
par(mar = rep(0.1, 4))
for (i in 2:4) for (j in seq_len(i - 1)) {
# Set up blank plot
plot(projection[, j], projection[, i], ann = FALSE, axes = FALSE, frame.plot = TRUE,
type = 'n', asp = 1, xlim = range(projection), ylim = range(projection))
# Plot MST
apply(mstEnds, 1, function (segment)
lines(projection[segment, j], projection[segment, i], col = "#bbbbbb", lty = 1))
# Add points
points(projection[, j], projection[, i], pch = 16, col = treeCols)
# Mark clusters
for (clI in unique(cluster)) {
inCluster <- cluster == clI
clusterX <- projection[inCluster, j]
clusterY <- projection[inCluster, i]
hull <- chull(clusterX, clusterY)
polygon(clusterX[hull], clusterY[hull], lty = 1, lwd = 2,
border = '#54de25bb')
}
}
# Annotate dimensions
plot(0, 0, type = 'n', ann = FALSE, axes = FALSE)
text(0, 0, 'Dimension 2')
plot(0, 0, type = 'n', ann = FALSE, axes = FALSE)
text(0, 0, 'Dimension 3')
plot(0, 0, type = 'n', ann = FALSE, axes = FALSE)
text(0, 0, 'Dimension 4')
#
##
###
#### Info on tree
###
##
#
# Quick summary of #nodes, tips, branch lengths, etc
summary(lyve_tree)
sum(lyve_tree$edge.length)
summary(cfsan_tree)
sum(cfsan_tree$edge.length)
summary(enterobase_tree)
sum(enterobase_tree$edge.length)
summary(ksnp_tree)
sum(ksnp_tree$edge.length)
#
### Calculate co-speciation (RF distance) between trees
#
## Need to:
# Make function to automate these pairwise comparisons within a vector of trees
# Save results in table
# Compare trees with cospeciation
# Test using Robinson-Foulds metric (RF)
cospeciation(ksnp_tree, cfsan_tree, distance = c("RF"), method=c("permutation"), nsim = 1000)
cospeciation(ksnp_tree, enterobase_tree, distance = c("RF"), method=c("permutation"), nsim = 1000)
cospeciation(lyve_tree,ksnp_tree, distance = c("RF"), method=c("permutation"), nsim = 1000)
cospeciation(lyve_tree,cfsan_tree, distance = c("RF"), method=c("permutation"), nsim = 1000)
cospeciation(lyve_tree,enterobase_tree, distance = c("RF"), method=c("permutation"), nsim = 1000)
cospeciation(cfsan_tree, enterobase_tree, distance = c("RF"), method=c("permutation"), nsim = 1000)
# Test using Subtree pruning and regrafting (SPR)
cospeciation(ksnp_tree, cfsan_tree, distance = c("SPR"), method=c("permutation"), nsim = 1000)
cospeciation(ksnp_tree, enterobase_tree, distance = c("SPR"), method=c("permutation"), nsim = 1000)
cospeciation(lyve_tree,ksnp_tree, distance = c("SPR"), method=c("permutation"), nsim = 1000)
cospeciation(lyve_tree,cfsan_tree, distance = c("SPR"), method=c("permutation"), nsim = 1000)
cospeciation(lyve_tree,enterobase_tree, distance = c("SPR"), method=c("permutation"), nsim = 1000)
cospeciation(cfsan_tree, enterobase_tree, distance = c("SPR"), method=c("permutation"), nsim = 1000)
# Example plot of cospeciation results
plot(cospeciation(ksnp_tree, enterobase_tree, distance = c("RF"),method=c("permutation"), nsim = 1000))
#
## Compare trees with all.equal.phylo
#
all.equal.phylo(lyve_tree,ksnp_tree)
all.equal.phylo(lyve_tree,cfsan_tree)
all.equal.phylo(lyve_tree,enterobase_tree)
all.equal.phylo(ksnp_tree, cfsan_tree)
all.equal.phylo(ksnp_tree, enterobase_tree)
all.equal.phylo(cfsan_tree, enterobase_tree)
#
## Plots -
#
comparePhylo(lyve_tree,ksnp_tree, plot=TRUE)
comparePhylo(lyve_tree,cfsan_tree, plot=TRUE)
comparePhylo(lyve_tree,enterobase_tree, plot=TRUE)
comparePhylo(ksnp_tree, cfsan_tree, plot=TRUE)
comparePhylo(ksnp_tree, enterobase_tree, plot=TRUE)
comparePhylo(cfsan_tree, enterobase_tree, plot=TRUE)
#
## Plots -line connecting nodes between trees
# http://phytools.org/mexico2018/ex/12/Plotting-methods.html
# Compare trees with all.equal.phylo
plot(cophylo(lyve_tree,ksnp_tree))
plot(cophylo(lyve_tree,cfsan_tree))
plot(cophylo(lyve_tree,enterobase_tree))
plot(cophylo(ksnp_tree, cfsan_tree))
plot(cophylo(ksnp_tree, enterobase_tree))
plot(cophylo(cfsan_tree, enterobase_tree))
# Side-by-side comparison with phylogenetic trees
obj <- cophylo(ksnp_tree, cfsan_tree, print= TRUE)
plot(obj,link.type="curved",link.lwd=3,link.lty="solid",
link.col="grey",fsize=0.8)
nodelabels.cophylo(which="left",frame="circle",cex=0.8)
nodelabels.cophylo(which="right",frame="circle",cex=0.8)
#
##
###
#### Combine rooted and cleaned trees
###
##
#
combined_rooted_trees <- c(ksnp_tree_rooted,enterobase_tree_rooted,cfsan_tree_rooted, lyve_tree_rooted)
combined_cleaned_trees <- c(ksnp_tree,enterobase_tree,cfsan_tree, lyve_tree)
names(combined_cleaned_trees) <- c("ksnp","enterobase","cfsan","lyveset")
densityTree(combined_rooted_trees,type="cladogram",nodes="intermediate")
densityTree(combined_cleaned_trees,type="cladogram",nodes="intermediate")
densityTree(combined_rooted_trees,use.edge.length=FALSE,type="phylogram",nodes="inner", alpha = 0.3)
# Load updated metadata file
#SRA_metadata <- read.csv("SRA_present.csv", header = FALSE, stringsAsFactors = FALSE)
# Calculate related tree distance
#relatedTreeDist(combined_trees, as.data.frame(SRA_metadata), checkTrees = TRUE)
#write.csv(lyve_tree$tip.label, "lyve_tree_nodes.csv")
#write.csv(ksnp_tree$tip.label, "ksnp3_nodes.csv")
# png(filename = "List_NY_lyveset_tree.png", res = 300,width = 800, height = 800)
# plotTree(lyve_tree, label.offset =1)
###
####
####### Treespace
####
###
# https://cran.r-project.org/web/packages/treespace/vignettes/introduction.html
library(treespace)
combined_treespace <- treespace(combined_rooted_trees, nf=3) # , return.tree.vectors = TRUE
#test <- as.treeshape(dataset1_tree_vector)
table.image(combined_treespace$D)
table.value(combined_treespace$D, nclass=5, method="color", symbol="circle", col=redpal(6))
plotGroves(combined_treespace$pco,lab.show=TRUE, lab.cex=1.5)
combined_treespace_groves <- findGroves(combined_treespace)
plotGrovesD3(combined_treespace_groves)
#aldous.test(combined_rooted_trees)
colless.test(combined_treespace_groves, alternative="greater")
likelihood.test(combined_treespace, alternative='greater')
#
##
### Test OTU grouping - not fully working yet.
##
# Enrique working on this
# Mutate to create new column with selected outbreak group
SRA_metadata <- as_tibble(SRA_metadata)
SRA_metadata <- SRA_metadata %>% mutate(Group = ifelse(SNP.cluster == "PDS000000366.382" , "Outbreak", "Other"))
outbreak_group <- SRA_metadata %>%
filter(SNP.cluster == "PDS000000366.382") %>%
select(Newick_label)
#Just the sample Ids
lyve_tree_w_meta <- groupOTU(lyve_tree ,outbreak_group, group_name = "Outbreak")
p <- ggtree(lyve_tree_w_meta, aes(color=Outbreak)) +
scale_color_manual(values = c("#efad29", "#63bbd4")) +
geom_nodepoint(color="black", size=0.1) +
geom_tiplab(size=2, color="black")
p
| /example_Ecoli_romaine_outbreak_phylogenetic_tree_analysis.R | no_license | TheNoyesLab/FMPRE_WGS_project | R | false | false | 15,839 | r | # Required R packages
library(ape)
library(phytools)
install.packages('TreeDist')
library(TreeDist)
# load treespace and packages for plotting:
library(treespace)
library(phylogram)
library(phangorn)
library(seqinr)
library(adegraphics)
library(adegenet)
library(apTreeshape)
library(ggtree)
# Set seed for reproducibility
set.seed(1)
# Load metadata file
#SRA_metadata <- read.csv("Salmonella_outbreak_SRA_metadata.csv", header = FALSE)
# Read in phylogenetic trees
lyve_tree <- read.tree(file = "Pipeline_results/Old_results/Ecoli_romaine_outbreak/exported_trees/lyveset.newick")
# kSNP3 tree.NJ.tre, tree.ML.tre, tree.core.tre, tree.parsimony.tre
ksnp_tree <- read.tree(file = "Pipeline_results/Old_results/Ecoli_romaine_outbreak/exported_trees/ksnp3.newick")
# Cfsan
cfsan_tree <- read.tree(file = "Pipeline_results/Old_results/Ecoli_romaine_outbreak/exported_trees/cfsan.newick")
# Enterobase
#enterobase_tree <- read.tree(file = "Pipeline_results/Old_resultsE_coli_romaine_outbreak/etoki/enterobase_SRA_phylo_tree.nwk")
enterobase_tree <- read.tree(file = "Pipeline_results/Old_results/Ecoli_romaine_outbreak/exported_trees/enterobase.newick")
# Combine trees
combined_trees <- c(lyve_tree,ksnp_tree,cfsan_tree,enterobase_tree)
# Combine trees from single dataset into vector
dataset1_tree_vector <- c(lyve_tree,ksnp_tree,cfsan_tree,enterobase_tree)
dataset1_tree_vector <- c(as.phylo(lyve_tree),as.phylo(ksnp_tree),as.phylo(cfsan_tree),as.phylo(enterobase_tree))
# From this point on, you have all of the phylogenetic trees loaded
# and can perform any further analysis you wish.
# Alot of the code below is still experimental and needs improvement.
#
##
### Code for subsetting trees with unmatched nodes
## Need to automate this
#
## Check for sample matches, between each tree
all_SRA_to_drop = c()
SRA_to_drop <- unique(enterobase_tree$tip.label[! enterobase_tree$tip.label %in% cfsan_tree$tip.label])
all_SRA_to_drop = c(all_SRA_to_drop,SRA_to_drop)
SRA_to_drop <- unique(enterobase_tree$tip.label[! enterobase_tree$tip.label %in% ksnp_tree$tip.label])
all_SRA_to_drop = c(all_SRA_to_drop,SRA_to_drop)
SRA_to_drop <- unique(cfsan_tree$tip.label[! cfsan_tree$tip.label %in% enterobase_tree$tip.label])
all_SRA_to_drop = c(all_SRA_to_drop,SRA_to_drop)
SRA_to_drop <- unique(lyve_tree$tip.label[! lyve_tree$tip.label %in% ksnp_tree$tip.label])
SRA_to_drop <- unique(ksnp_tree$tip.label[! ksnp_tree$tip.label %in% lyve_tree$tip.label])
all_SRA_to_drop <- unique(all_SRA_to_drop)
lyve_tree <- drop.tip(combined_trees[[1]], all_SRA_to_drop)
ksnp_tree <- drop.tip(combined_trees[[2]], all_SRA_to_drop)
cfsan_tree <- drop.tip(combined_trees[[3]], all_SRA_to_drop)
enterobase_tree <- drop.tip(combined_trees[[4]], all_SRA_to_drop)
# Add root to tree
lyve_tree_rooted <- root(lyve_tree,1, r = TRUE)
ksnp_tree_rooted <- root(ksnp_tree,1, r = TRUE)
cfsan_tree_rooted <- root(cfsan_tree,1, r = TRUE)
enterobase_tree_rooted <- root(enterobase_tree,1, r = TRUE)
combined_trees_clean <- c(lyve_tree,ksnp_tree,cfsan_tree,enterobase_tree)
#
##
### TreeDist
## Generalized Robinson-Foulds distance
#
VisualizeMatching(SharedPhylogeneticInfo, lyve_tree, ksnp_tree,
Plot = TreeDistPlot, matchZeros = FALSE)
SharedPhylogeneticInfo(lyve_tree, ksnp_tree)
MutualClusteringInfo(lyve_tree, ksnp_tree)
NyeSimilarity(lyve_tree, ksnp_tree)
JaccardRobinsonFoulds(lyve_tree, ksnp_tree)
MatchingSplitDistance(lyve_tree, ksnp_tree)
MatchingSplitInfoDistance(lyve_tree, ksnp_tree)
VisualizeMatching(JaccardRobinsonFoulds, lyve_tree, ksnp_tree,
Plot = TreeDistPlot, matchZeros = FALSE)
#
##
### TreeDist
## Using a suitable distance metric, projecting distances
#
# Tree colors
library('TreeTools', quietly = TRUE, warn.conflicts = FALSE)
treeNumbers <- c(1:4)
spectrum <- viridisLite::plasma(4)
treeCols <- spectrum[treeNumbers]
# calculate distances
distances <- ClusteringInfoDistance(combined_trees_clean)
distances <- RobinsonFoulds(combined_trees_clean)
distances <- as.dist(Quartet::QuartetDivergence(Quartet::ManyToManyQuartetAgreement(combined_trees_clean), similarity = FALSE))
# Projecting distances
#Then we need to reduce the dimensionality of these distances. We’ll start out with a 12-dimensional projection; if needed, we can always drop higher dimensions.
#Principal components analysis is quick and performs very well:
projection <- cmdscale(distances, k = 3)
# Alternative projection methods do exist, and sometimes give slightly better projections. isoMDS() performs non-metric multidimensional scaling (MDS) with the Kruskal-1 stress function (Kruskal, 1964):
kruskal <- MASS::isoMDS(distances, k = 3)
projection <- kruskal$points
#whereas sammon(), one of many metric MDS methods, uses Sammon’s stress function (Sammon, 1969):
sammon <- MASS::sammon(distances, k = 3)
projection <- sammon$points
#That’s a good start. It is tempting to plot the first two dimensions arising from this projection and be done:
par(mar = rep(0, 4))
plot(projection,
asp = 1, # Preserve aspect ratio - do not distort distances
ann = FALSE, axes = FALSE, # Don't label axes: dimensions are meaningless
col = treeCols, pch = 16
)
#
## Identifying clusters
#
# A quick visual inspection suggests at least two clusters, with the possibility of further subdivision
# of the brighter trees. But visual inspection can be highly misleading (Smith, 2021).
# We must take a statistical approach. A combination of partitioning around medoids and hierarchical
# clustering with minimax linkage will typically find a clustering solution that is close to optimal,
# if one exists (Smith, 2021).
library(protoclust)
possibleClusters <- 3:10
# Had to choose static K value
pamClusters <- lapply(possibleClusters, function (x) cluster::pam(distances, k = 3))
pamSils <- vapply(pamClusters, function (pamCluster) {
mean(cluster::silhouette(pamCluster)[, 3])
}, double(1))
bestPam <- which.max(pamSils)
pamSil <- pamSils[bestPam]
pamCluster <- pamClusters[[bestPam]]$cluster
hTree <- protoclust::protoclust(distances)
hClusters <- lapply(possibleClusters, function (k) cutree(hTree, k = 3))
hSils <- vapply(hClusters, function (hCluster) {
mean(cluster::silhouette(hCluster, distances)[, 3])
}, double(1))
bestH <- which.max(hSils)
hSil <- hSils[bestH]
hCluster <- hClusters[[bestH]]
plot(pamSils ~ possibleClusters,
xlab = 'Number of clusters', ylab = 'Silhouette coefficient',
ylim = range(c(pamSils, hSils)))
points(hSils ~ possibleClusters, pch = 2)
legend('topright', c('PAM', 'Hierarchical'), pch = 1:2)
# Silhouette coefficients of < 0.25 suggest that structure is not meaningful; > 0.5 denotes good evidence
# of clustering, and > 0.7 strong evidence (Kaufman & Rousseeuw, 1990). The evidence for the visually
# apparent clustering is not as strong as it first appears. Let’s explore our two-cluster hierarchical
# clustering solution anyway.
cluster <- hClusters[[2 - 1]]
#We can visualize the clustering solution as a tree:
class(hTree) <- 'hclust'
par(mar = c(0, 0, 0, 0))
plot(hTree, labels = FALSE, main = '')
points(seq_along(trees), rep(1, length(trees)), pch = 16,
col = spectrum[hTree$order])
#Another thing we may wish to do is to take the consensus of each cluster:
par(mfrow = c(1, 2), mar = rep(0.2, 4))
col1 <- spectrum[mean(treeNumbers[cluster == 1])]
col2 <- spectrum[mean(treeNumbers[cluster == 2])]
plot(consensus(trees[cluster == 1]), edge.color = col1, edge.width = 2, tip.color = col1)
plot(consensus(trees[cluster == 2]), edge.color = col2, edge.width = 2, tip.color = col2)
# Validating a projection
# Now let’s evaluate whether our plot of tree space is representative. First we want to know how many dimensions are necessary to adequately represent the true distances between trees. We hope for a trustworthiness × continuity score of > 0.9 for a usable projection, or > 0.95 for a good one.
library(TreeTools)
# ProjectionQuality doesn't work with regular TreeDist
#remotes::install_github("ms609/TreeDist")
txc <- vapply(1:12, function (k) {
newDist <- dist(projection[, seq_len(k)])
TreeTools::ProjectionQuality(distances, newDist, 10)['TxC']
}, 0)
plot(txc, xlab = 'Dimension')
abline(h = 0.9, lty = 2)
# To help establish visually what structures are more likely to be genuine, we might also choose to calculate a minimum spanning tree:
mstEnds <- MSTEdges(distances)
# Let’s plot the first five dimensions of our tree space, highlighting the convex hulls of our clusters:
plotSeq <- matrix(0, 5, 5)
plotSeq[upper.tri(plotSeq)] <- seq_len(5 * (5 - 1) / 2)
plotSeq <- t(plotSeq[-5, -1])
plotSeq[c(5, 10, 15)] <- 11:13
layout(plotSeq)
par(mar = rep(0.1, 4))
for (i in 2:4) for (j in seq_len(i - 1)) {
# Set up blank plot
plot(projection[, j], projection[, i], ann = FALSE, axes = FALSE, frame.plot = TRUE,
type = 'n', asp = 1, xlim = range(projection), ylim = range(projection))
# Plot MST
apply(mstEnds, 1, function (segment)
lines(projection[segment, j], projection[segment, i], col = "#bbbbbb", lty = 1))
# Add points
points(projection[, j], projection[, i], pch = 16, col = treeCols)
# Mark clusters
for (clI in unique(cluster)) {
inCluster <- cluster == clI
clusterX <- projection[inCluster, j]
clusterY <- projection[inCluster, i]
hull <- chull(clusterX, clusterY)
polygon(clusterX[hull], clusterY[hull], lty = 1, lwd = 2,
border = '#54de25bb')
}
}
# Annotate dimensions
plot(0, 0, type = 'n', ann = FALSE, axes = FALSE)
text(0, 0, 'Dimension 2')
plot(0, 0, type = 'n', ann = FALSE, axes = FALSE)
text(0, 0, 'Dimension 3')
plot(0, 0, type = 'n', ann = FALSE, axes = FALSE)
text(0, 0, 'Dimension 4')
#
##
###
#### Info on tree
###
##
#
# Quick summary of #nodes, tips, branch lengths, etc
summary(lyve_tree)
sum(lyve_tree$edge.length)
summary(cfsan_tree)
sum(cfsan_tree$edge.length)
summary(enterobase_tree)
sum(enterobase_tree$edge.length)
summary(ksnp_tree)
sum(ksnp_tree$edge.length)
#
### Calculate co-speciation (RF distance) between trees
#
## Need to:
# Make function to automate these pairwise comparisons within a vector of trees
# Save results in table
# Compare trees with cospeciation
# Test using Robinson-Foulds metric (RF)
cospeciation(ksnp_tree, cfsan_tree, distance = c("RF"), method=c("permutation"), nsim = 1000)
cospeciation(ksnp_tree, enterobase_tree, distance = c("RF"), method=c("permutation"), nsim = 1000)
cospeciation(lyve_tree,ksnp_tree, distance = c("RF"), method=c("permutation"), nsim = 1000)
cospeciation(lyve_tree,cfsan_tree, distance = c("RF"), method=c("permutation"), nsim = 1000)
cospeciation(lyve_tree,enterobase_tree, distance = c("RF"), method=c("permutation"), nsim = 1000)
cospeciation(cfsan_tree, enterobase_tree, distance = c("RF"), method=c("permutation"), nsim = 1000)
# Test using Subtree pruning and regrafting (SPR)
cospeciation(ksnp_tree, cfsan_tree, distance = c("SPR"), method=c("permutation"), nsim = 1000)
cospeciation(ksnp_tree, enterobase_tree, distance = c("SPR"), method=c("permutation"), nsim = 1000)
cospeciation(lyve_tree,ksnp_tree, distance = c("SPR"), method=c("permutation"), nsim = 1000)
cospeciation(lyve_tree,cfsan_tree, distance = c("SPR"), method=c("permutation"), nsim = 1000)
cospeciation(lyve_tree,enterobase_tree, distance = c("SPR"), method=c("permutation"), nsim = 1000)
cospeciation(cfsan_tree, enterobase_tree, distance = c("SPR"), method=c("permutation"), nsim = 1000)
# Example plot of cospeciation results
plot(cospeciation(ksnp_tree, enterobase_tree, distance = c("RF"),method=c("permutation"), nsim = 1000))
#
## Compare trees with all.equal.phylo
#
all.equal.phylo(lyve_tree,ksnp_tree)
all.equal.phylo(lyve_tree,cfsan_tree)
all.equal.phylo(lyve_tree,enterobase_tree)
all.equal.phylo(ksnp_tree, cfsan_tree)
all.equal.phylo(ksnp_tree, enterobase_tree)
all.equal.phylo(cfsan_tree, enterobase_tree)
#
## Plots -
#
comparePhylo(lyve_tree,ksnp_tree, plot=TRUE)
comparePhylo(lyve_tree,cfsan_tree, plot=TRUE)
comparePhylo(lyve_tree,enterobase_tree, plot=TRUE)
comparePhylo(ksnp_tree, cfsan_tree, plot=TRUE)
comparePhylo(ksnp_tree, enterobase_tree, plot=TRUE)
comparePhylo(cfsan_tree, enterobase_tree, plot=TRUE)
#
## Plots -line connecting nodes between trees
# http://phytools.org/mexico2018/ex/12/Plotting-methods.html
# Compare trees with all.equal.phylo
plot(cophylo(lyve_tree,ksnp_tree))
plot(cophylo(lyve_tree,cfsan_tree))
plot(cophylo(lyve_tree,enterobase_tree))
plot(cophylo(ksnp_tree, cfsan_tree))
plot(cophylo(ksnp_tree, enterobase_tree))
plot(cophylo(cfsan_tree, enterobase_tree))
# Side-by-side comparison with phylogenetic trees
obj <- cophylo(ksnp_tree, cfsan_tree, print= TRUE)
plot(obj,link.type="curved",link.lwd=3,link.lty="solid",
link.col="grey",fsize=0.8)
nodelabels.cophylo(which="left",frame="circle",cex=0.8)
nodelabels.cophylo(which="right",frame="circle",cex=0.8)
#
##
###
#### Combine rooted and cleaned trees
###
##
#
combined_rooted_trees <- c(ksnp_tree_rooted,enterobase_tree_rooted,cfsan_tree_rooted, lyve_tree_rooted)
combined_cleaned_trees <- c(ksnp_tree,enterobase_tree,cfsan_tree, lyve_tree)
names(combined_cleaned_trees) <- c("ksnp","enterobase","cfsan","lyveset")
densityTree(combined_rooted_trees,type="cladogram",nodes="intermediate")
densityTree(combined_cleaned_trees,type="cladogram",nodes="intermediate")
densityTree(combined_rooted_trees,use.edge.length=FALSE,type="phylogram",nodes="inner", alpha = 0.3)
# Load updated metadata file
#SRA_metadata <- read.csv("SRA_present.csv", header = FALSE, stringsAsFactors = FALSE)
# Calculate related tree distance
#relatedTreeDist(combined_trees, as.data.frame(SRA_metadata), checkTrees = TRUE)
#write.csv(lyve_tree$tip.label, "lyve_tree_nodes.csv")
#write.csv(ksnp_tree$tip.label, "ksnp3_nodes.csv")
# png(filename = "List_NY_lyveset_tree.png", res = 300,width = 800, height = 800)
# plotTree(lyve_tree, label.offset =1)
###
####
####### Treespace
####
###
# https://cran.r-project.org/web/packages/treespace/vignettes/introduction.html
library(treespace)
combined_treespace <- treespace(combined_rooted_trees, nf=3) # , return.tree.vectors = TRUE
#test <- as.treeshape(dataset1_tree_vector)
table.image(combined_treespace$D)
table.value(combined_treespace$D, nclass=5, method="color", symbol="circle", col=redpal(6))
plotGroves(combined_treespace$pco,lab.show=TRUE, lab.cex=1.5)
combined_treespace_groves <- findGroves(combined_treespace)
plotGrovesD3(combined_treespace_groves)
#aldous.test(combined_rooted_trees)
colless.test(combined_treespace_groves, alternative="greater")
likelihood.test(combined_treespace, alternative='greater')
#
##
### Test OTU grouping - not fully working yet.
##
# Enrique working on this
# Mutate to create new column with selected outbreak group
SRA_metadata <- as_tibble(SRA_metadata)
SRA_metadata <- SRA_metadata %>% mutate(Group = ifelse(SNP.cluster == "PDS000000366.382" , "Outbreak", "Other"))
outbreak_group <- SRA_metadata %>%
filter(SNP.cluster == "PDS000000366.382") %>%
select(Newick_label)
#Just the sample Ids
lyve_tree_w_meta <- groupOTU(lyve_tree ,outbreak_group, group_name = "Outbreak")
p <- ggtree(lyve_tree_w_meta, aes(color=Outbreak)) +
scale_color_manual(values = c("#efad29", "#63bbd4")) +
geom_nodepoint(color="black", size=0.1) +
geom_tiplab(size=2, color="black")
p
|
library(RODBC)
if (file.exists('test.xls'))
suppress_output <- file.remove('test.xls')
df <- data.frame(
a = c(1, 2, 3),
b = c('foo', 'bar', 'baz')
)
xls <- odbcConnectExcel('test.xls', readOnly = FALSE)
sqlSave(
xls,
df,
rownames = FALSE,
append = FALSE
)
odbcClose(xls)
| /packages/RODBC/connect.Excel.R | no_license | ReneNyffenegger/about-r | R | false | false | 296 | r | library(RODBC)
if (file.exists('test.xls'))
suppress_output <- file.remove('test.xls')
df <- data.frame(
a = c(1, 2, 3),
b = c('foo', 'bar', 'baz')
)
xls <- odbcConnectExcel('test.xls', readOnly = FALSE)
sqlSave(
xls,
df,
rownames = FALSE,
append = FALSE
)
odbcClose(xls)
|
# Exercise 1: Data Frame Practice
# Install devtools package: allows installations from GitHub
install.packages('devtools')
# Install "fueleconomy" package from GitHub
devtools::install_github("hadley/fueleconomy")
# Require/library the fueleconomy package
library(fueleconomy)
# You should have have access to the `vehicles` data.frame
View(vehicles)
# Create a data.frame of vehicles from 1997
cars.1997 <- vehicles[vehicles$year == 1997,]
# Use the `unique` function to verify that there is only 1 value in the `year` column of your new data.frame
unique(cars.1997$year)
# Create a data.frame of 2-Wheel Drive vehicles that get more than 20 miles/gallon in the city
two.wheel.20.mpg <- vehicles[vehicles$drive == '2-Wheel Drive' & vehicles$cty > 20,]
# Of those vehicles, what is the vehicle ID of the vehicle with the worst hwy mpg?
worst.hwy.mpg.id <- two.wheel.20.mpg$id[two.wheel.20.mpg$hwy == min(two.wheel.20.mpg$hwy)]
# Write a function that takes a `year` and a `make` as parameters, and returns
# The vehicle that gets the most hwy miles/gallon of vehicles of that make in that year
getMostMiles <- function(year, make) {
eligible.vehicles <- vehicles[vehicles$year == year & vehicles$make == make, ]
answer <- eligible.vehicles[eligible.vehicles$hwy == max(eligible.vehicles$hwy),]
return (answer)
}
# What was the most efficient honda model of 1995?
most.efficient.honda.1995 <- getMostMiles(1995, 'Honda')
| /exercise-1/exercise.R | permissive | michellewho/m11-dplyr | R | false | false | 1,439 | r | # Exercise 1: Data Frame Practice
# Install devtools package: allows installations from GitHub
install.packages('devtools')
# Install "fueleconomy" package from GitHub
devtools::install_github("hadley/fueleconomy")
# Require/library the fueleconomy package
library(fueleconomy)
# You should have have access to the `vehicles` data.frame
View(vehicles)
# Create a data.frame of vehicles from 1997
cars.1997 <- vehicles[vehicles$year == 1997,]
# Use the `unique` function to verify that there is only 1 value in the `year` column of your new data.frame
unique(cars.1997$year)
# Create a data.frame of 2-Wheel Drive vehicles that get more than 20 miles/gallon in the city
two.wheel.20.mpg <- vehicles[vehicles$drive == '2-Wheel Drive' & vehicles$cty > 20,]
# Of those vehicles, what is the vehicle ID of the vehicle with the worst hwy mpg?
worst.hwy.mpg.id <- two.wheel.20.mpg$id[two.wheel.20.mpg$hwy == min(two.wheel.20.mpg$hwy)]
# Write a function that takes a `year` and a `make` as parameters, and returns
# The vehicle that gets the most hwy miles/gallon of vehicles of that make in that year
getMostMiles <- function(year, make) {
eligible.vehicles <- vehicles[vehicles$year == year & vehicles$make == make, ]
answer <- eligible.vehicles[eligible.vehicles$hwy == max(eligible.vehicles$hwy),]
return (answer)
}
# What was the most efficient honda model of 1995?
most.efficient.honda.1995 <- getMostMiles(1995, 'Honda')
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/functions.R
\name{add_uhi_hist_data}
\alias{add_uhi_hist_data}
\title{Add UHI data from RasterLayer stack to sf data frame}
\usage{
add_uhi_hist_data(uhi_stack_list, sf_data)
}
\arguments{
\item{uhi_stack_list}{List, nested on time of year, time of day}
\item{sf_data}{sf points, data frame with Berlin tree locations and meta data}
}
\value{
A nested list containing RasterLayers with the following levels:
\enumerate{
\item{Time of year}{Summer / Winter}
\item{Time of day}{Night / Day}
}
Containing sf point objects (multiple copies of full data set - careful! \strong{needs improvement!})
}
\description{
Add UHI data from RasterLayer stack to sf data frame
}
| /man/add_uhi_hist_data.Rd | permissive | the-Hull/berlin.trees | R | false | true | 743 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/functions.R
\name{add_uhi_hist_data}
\alias{add_uhi_hist_data}
\title{Add UHI data from RasterLayer stack to sf data frame}
\usage{
add_uhi_hist_data(uhi_stack_list, sf_data)
}
\arguments{
\item{uhi_stack_list}{List, nested on time of year, time of day}
\item{sf_data}{sf points, data frame with Berlin tree locations and meta data}
}
\value{
A nested list containing RasterLayers with the following levels:
\enumerate{
\item{Time of year}{Summer / Winter}
\item{Time of day}{Night / Day}
}
Containing sf point objects (multiple copies of full data set - careful! \strong{needs improvement!})
}
\description{
Add UHI data from RasterLayer stack to sf data frame
}
|
localPath <- 'C:\\Users\\support\\Documents\\4 Exploratory Data Analysis\\Course Project 1\\'
setwd(localPath)
dsFilePath = "https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2Fhousehold_power_consumption.zip"
dsZipFile = "household_power_consumption.zip"
dsFile = "household_power_consumption.txt"
#
# Download File & Unzip
#
# download.file(dsFilePath,dsZipFile)
# unzip(dsZipFile,dsFile)
#
# load Data
#
dsHouseHold <- read.table(dsFile, dec = ".", sep = ";", na.strings= "?", header = FALSE, skip = 1,col.names = c("Date","Time","Global_active_power","Global_reactive_power","Voltage","Global_intensity","Sub_metering_1","Sub_metering_2","Sub_metering_3"))
dim(dsHouseHold)
dsHouseHold$fDate <- as.Date(dsHouseHold$Date, "%d/%m/%Y")
head(dsHouseHold)
dsHouseHold <- subset(dsHouseHold, dsHouseHold$fDate == "2007-02-01" | dsHouseHold$fDate == "2007-02-02")
dsHouseHold <- dsHouseHold[,1:9]
dim(dsHouseHold)
dsHouseHold$TimeStamp <- strptime(paste(dsHouseHold$Date, dsHouseHold$Time, sep="-"), "%d/%m/%Y-%H:%M:%S")
dim(dsHouseHold)
head(dsHouseHold)
#
# here is the graph
#
png(file = "plot2.png", bg = "transparent", width = 480, height = 480, units = "px", pointsize = 12)
par(mfrow = c(1,1))
plot(dsHouseHold$TimeStamp, dsHouseHold$Global_active_power,type = "l",xlab="", ylab="Global Active Power (kilowatts)")
dev.off()
| /plot2.R | no_license | moutonman/ExData_Plotting1 | R | false | false | 1,335 | r | localPath <- 'C:\\Users\\support\\Documents\\4 Exploratory Data Analysis\\Course Project 1\\'
setwd(localPath)
dsFilePath = "https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2Fhousehold_power_consumption.zip"
dsZipFile = "household_power_consumption.zip"
dsFile = "household_power_consumption.txt"
#
# Download File & Unzip
#
# download.file(dsFilePath,dsZipFile)
# unzip(dsZipFile,dsFile)
#
# load Data
#
dsHouseHold <- read.table(dsFile, dec = ".", sep = ";", na.strings= "?", header = FALSE, skip = 1,col.names = c("Date","Time","Global_active_power","Global_reactive_power","Voltage","Global_intensity","Sub_metering_1","Sub_metering_2","Sub_metering_3"))
dim(dsHouseHold)
dsHouseHold$fDate <- as.Date(dsHouseHold$Date, "%d/%m/%Y")
head(dsHouseHold)
dsHouseHold <- subset(dsHouseHold, dsHouseHold$fDate == "2007-02-01" | dsHouseHold$fDate == "2007-02-02")
dsHouseHold <- dsHouseHold[,1:9]
dim(dsHouseHold)
dsHouseHold$TimeStamp <- strptime(paste(dsHouseHold$Date, dsHouseHold$Time, sep="-"), "%d/%m/%Y-%H:%M:%S")
dim(dsHouseHold)
head(dsHouseHold)
#
# here is the graph
#
png(file = "plot2.png", bg = "transparent", width = 480, height = 480, units = "px", pointsize = 12)
par(mfrow = c(1,1))
plot(dsHouseHold$TimeStamp, dsHouseHold$Global_active_power,type = "l",xlab="", ylab="Global Active Power (kilowatts)")
dev.off()
|
\name{risk.attribution}
\alias{risk.attribution}
\title{Risk Attribution of a Portfolio}
\description{
Combined representation of the risk attributes MCTR, CCTR, CCTR percentage, Portfolio Volatility and individual Volatility of the stocks in a given portfolio for a given weight and time period.
}
\usage{
risk.attribution(tickers, weights = rep(1,length(tickers)),
start, end, data, CompanyList = NULL)
}
\arguments{
\item{tickers}{
A character vector of ticker names of companies in the portfolio.
}
\item{weights}{
A numeric vector of weights assigned to the stocks corresponding to the ticker names in \code{tickers}. The sum of the weights need not to be 1 or 100 (in percentage). By default, equal weights to all the stocks are assigned (i.e., by \code{rep(1, length(tickers))}).
}
\item{start}{
Start date in the format "yyyy-mm-dd".
}
\item{end}{
End date in the format "yyyy-mm-dd".
}
\item{data}{
A \code{zoo} object whose \code{rownames} are dates and \code{colnames} are ticker names of the companies. Values of the table corresponds to the daily returns of the stocks of corresponding ticker names.
}
\item{CompanyList}{
A dataframe containing all the Company names corresponding to the ticker names as its \code{rownames}. The input for this argument is optional.
}
}
\details{
For details of the risk attributes refer to the corresponding functions. See \code{\link{volatility}} for individual volatility of the stocks and \code{\link{portvol}} for portfolio volatility, MCTR & CCTR.\deqn{}
CCTR percentage for a stock in the portfolio is defined as the percentage of the portfolio volatility contributed by that stock for the given weight. i.e.,
\deqn{CCTR(\%) = \frac{CCTR}{\sigma}*100}{CCTR(\%) = CCTR/\sigma *100} where \eqn{\sigma} is the portfolio volatility.
}
\value{
Returns a dataframe with \code{rownames} as the ticker names as given in the input \code{tickers} with the last row corresponding to the portfolio values. The result contains the following columns:
\item{Company Name}{Optional. Available only if the dataframe with the company names corresponding to the ticker names as \code{rownames} is supplied as input in \code{risk.attribution} for the argument \code{CompanyList}.}
\item{Weight}{Standardized value of the weights assigned to the stocks in the portfolio. Value of this column corresponding to portfolio is the sum of the weights (i.e. 1).}
\item{MCTR}{Marginal Contribution to Total Risk (MCTR) in percentage. MCTR corresponding to the portfolio will be shown as \code{NA}, since it is meaningless.}
\item{CCTR}{Conditional Contribution to Total Risk (CCTR) in percentage. CCTR corresponding to the portfolio is the sum of the CCTR values, which is the portfolio volatility.}
\item{CCTR(\%)}{Percentage of the portfolio volatility contributed by the stock for the given weight. Clearly, CCTR percentage corresponding to the portfolio is 100.}
\item{Volatility}{Individual volatility of the stocks in percentage. Note that, the value of this column corresponding to the portfolio is not the sum of this column. It is the portfolio volatility.}
}
\note{
In the result or output (see example), both the values of the last row (Portfolio) corresponding to the columns CCTR and Volatility are same (Portfolio Volatility). It should also be noted that, Portfolio Volatility is the sum of CCTR values corresponding to all the stocks but not the sum of individual Volatility of the stocks.
}
\seealso{
\code{\link{volatility}},
\code{\link{portvol}},
\code{\link{mctr}},
\code{\link{cctr}},
\code{\link{zoo}}
}
\examples{
# load the data 'SnP500Returns'
data(SnP500Returns)
# consider the portfolio containing the stocks of the companies
# Apple, IBM, Intel, Microsoft
pf <- c("AAPL","IBM","INTC","MSFT")
# suppose the amount of investments in the above stocks are
# $10,000, $40,000, $20,000 & $30,000 respectively
wt <- c(10000,40000,20000,30000) # weights
# risk attribution for the portfolio 'pf' with weights 'wt'
# for the time period January 1, 2013 - January 31, 2013
risk.attribution(tickers = pf, weights = wt,
start = "2013-01-01", end = "2013-01-31",
data = SnP500Returns)
# to attach the company names corresponding to the ticker names
# load the dataset containing the company names
data(SnP500List)
risk.attribution(tickers = pf, weights = wt,
start = "2013-01-01", end = "2013-01-31",
data = SnP500Returns, CompanyList = SnP500List)
}
| /man/risk.attribution.Rd | no_license | PsaksuMeRap/PortRisk | R | false | false | 4,602 | rd | \name{risk.attribution}
\alias{risk.attribution}
\title{Risk Attribution of a Portfolio}
\description{
Combined representation of the risk attributes MCTR, CCTR, CCTR percentage, Portfolio Volatility and individual Volatility of the stocks in a given portfolio for a given weight and time period.
}
\usage{
risk.attribution(tickers, weights = rep(1,length(tickers)),
start, end, data, CompanyList = NULL)
}
\arguments{
\item{tickers}{
A character vector of ticker names of companies in the portfolio.
}
\item{weights}{
A numeric vector of weights assigned to the stocks corresponding to the ticker names in \code{tickers}. The sum of the weights need not to be 1 or 100 (in percentage). By default, equal weights to all the stocks are assigned (i.e., by \code{rep(1, length(tickers))}).
}
\item{start}{
Start date in the format "yyyy-mm-dd".
}
\item{end}{
End date in the format "yyyy-mm-dd".
}
\item{data}{
A \code{zoo} object whose \code{rownames} are dates and \code{colnames} are ticker names of the companies. Values of the table corresponds to the daily returns of the stocks of corresponding ticker names.
}
\item{CompanyList}{
A dataframe containing all the Company names corresponding to the ticker names as its \code{rownames}. The input for this argument is optional.
}
}
\details{
For details of the risk attributes refer to the corresponding functions. See \code{\link{volatility}} for individual volatility of the stocks and \code{\link{portvol}} for portfolio volatility, MCTR & CCTR.\deqn{}
CCTR percentage for a stock in the portfolio is defined as the percentage of the portfolio volatility contributed by that stock for the given weight. i.e.,
\deqn{CCTR(\%) = \frac{CCTR}{\sigma}*100}{CCTR(\%) = CCTR/\sigma *100} where \eqn{\sigma} is the portfolio volatility.
}
\value{
Returns a dataframe with \code{rownames} as the ticker names as given in the input \code{tickers} with the last row corresponding to the portfolio values. The result contains the following columns:
\item{Company Name}{Optional. Available only if the dataframe with the company names corresponding to the ticker names as \code{rownames} is supplied as input in \code{risk.attribution} for the argument \code{CompanyList}.}
\item{Weight}{Standardized value of the weights assigned to the stocks in the portfolio. Value of this column corresponding to portfolio is the sum of the weights (i.e. 1).}
\item{MCTR}{Marginal Contribution to Total Risk (MCTR) in percentage. MCTR corresponding to the portfolio will be shown as \code{NA}, since it is meaningless.}
\item{CCTR}{Conditional Contribution to Total Risk (CCTR) in percentage. CCTR corresponding to the portfolio is the sum of the CCTR values, which is the portfolio volatility.}
\item{CCTR(\%)}{Percentage of the portfolio volatility contributed by the stock for the given weight. Clearly, CCTR percentage corresponding to the portfolio is 100.}
\item{Volatility}{Individual volatility of the stocks in percentage. Note that, the value of this column corresponding to the portfolio is not the sum of this column. It is the portfolio volatility.}
}
\note{
In the result or output (see example), both the values of the last row (Portfolio) corresponding to the columns CCTR and Volatility are same (Portfolio Volatility). It should also be noted that, Portfolio Volatility is the sum of CCTR values corresponding to all the stocks but not the sum of individual Volatility of the stocks.
}
\seealso{
\code{\link{volatility}},
\code{\link{portvol}},
\code{\link{mctr}},
\code{\link{cctr}},
\code{\link{zoo}}
}
\examples{
# load the data 'SnP500Returns'
data(SnP500Returns)
# consider the portfolio containing the stocks of the companies
# Apple, IBM, Intel, Microsoft
pf <- c("AAPL","IBM","INTC","MSFT")
# suppose the amount of investments in the above stocks are
# $10,000, $40,000, $20,000 & $30,000 respectively
wt <- c(10000,40000,20000,30000) # weights
# risk attribution for the portfolio 'pf' with weights 'wt'
# for the time period January 1, 2013 - January 31, 2013
risk.attribution(tickers = pf, weights = wt,
start = "2013-01-01", end = "2013-01-31",
data = SnP500Returns)
# to attach the company names corresponding to the ticker names
# load the dataset containing the company names
data(SnP500List)
risk.attribution(tickers = pf, weights = wt,
start = "2013-01-01", end = "2013-01-31",
data = SnP500Returns, CompanyList = SnP500List)
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/tab.R
\name{suup}
\alias{suup}
\title{like -su- in stata, but ONLY for vectors}
\usage{
suup(x)
}
\arguments{
\item{x}{a numeric vector}
}
\value{
nothing (prints summary output)
}
\description{
like -su- in stata, but ONLY for vectors
}
| /man/suup.Rd | no_license | rdisalv2/dismisc | R | false | true | 316 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/tab.R
\name{suup}
\alias{suup}
\title{like -su- in stata, but ONLY for vectors}
\usage{
suup(x)
}
\arguments{
\item{x}{a numeric vector}
}
\value{
nothing (prints summary output)
}
\description{
like -su- in stata, but ONLY for vectors
}
|
# create Ethiopia basic vaccination coverage map using coverage data from DHS 2016
# load packages
library (rgdal)
library (data.table)
library (ggplot2)
library (broom)
library (dplyr)
# map tutorial
# https://www.r-graph-gallery.com/168-load-a-shape-file-into-r.html
# clear workspace
rm (list = ls())
# create Ethiopia basic vaccination coverage map using coverage data from DHS 2016
create_map <- function () {
# read Ethiopia admin level map data
my_spdf <- readOGR (dsn = "ETH_adm",
layer = "ETH_adm1",
verbose = FALSE
)
# 'fortify' the data to get a dataframe format required by ggplot2
spdf_fortified <- tidy (my_spdf, region = "NAME_1")
# coordinates and aspect ratio
coord <- coord_quickmap (xlim = range (spdf_fortified$long),
ylim = range(spdf_fortified$lat),
expand = F
)
asp <- coord$aspect (list (x.range = range (spdf_fortified$long),
y.range = range (spdf_fortified$lat)))
# read Ethiopia basic vaccination coverage data (source: DHS 2016)
coverage <- fread (file = "eth_coverage.csv")
# join coverage data to map data
shapefile.df <- right_join (spdf_fortified, coverage, by = "id")
setDT (shapefile.df)
# admin level-1 data (11 regions in Ethiopia)
idList <- my_spdf@data$NAME_1
# centre of each region
centroids.df <- as.data.frame (coordinates(my_spdf))
names (centroids.df) <- c ("Longitude", "Latitude")
pop.df <- (data.frame(coverage, centroids.df))
setDT (pop.df)
# minor update of region names, latitude, and longitude
pop.df [id == "Benshangul-Gumaz", "id"] <- "Benishangul"
pop.df [id == "Gambela Peoples", "id"] <- "Gambela"
pop.df [id == "Harari People", "id"] <- "Harari"
pop.df [id == "Southern Nations, Nationalities and Peoples", "id"] <- "SNNPR"
pop.df [id == "Oromia", "Longitude"] <- 40.4
pop.df [id == "Harari", "Latitude"] <- 9
pop.df [id == "Dire Dawa", "Latitude"] <- 10
pop.df [id == "Addis Ababa", "Latitude"] <- 9.25
pop.df [id == "Tigray", "Latitude"] <- 14
pop.df [id == "Addis Ababa", "Longitude"] <- 38.4
pop.df [id == "Benishangul", "Longitude"] <- 35.4
# create map
p <- ggplot() +
geom_polygon (data = shapefile.df,
aes (x = long, y = lat, group = group, fill = coverage),
colour = "gold") +
# labs (title = " Basic vaccination coverage in different regions of Ethiopia", size = 10) +
labs (title = " Full vaccination coverage among children aged 12-23 months in 9 regional states and 2 chartered cities of Ethiopia",
subtitle = " (1-dose BCG, 3-dose DTP3-HepB-Hib, 3-dose polio, 1-dose measles (MCV1), 3-dose PCV3, 2-dose rotavirus)" ) +
geom_polygon (data = shapefile.df [id == "Addis Ababa"], aes(x = long, y = lat, group = group, fill = coverage), colour = "gold") +
geom_polygon (data = shapefile.df [id == "Harari People"], aes(x = long, y = lat, group = group, fill = coverage), colour = "gold") +
geom_text (data = pop.df,
aes (x = Longitude, y = Latitude, label = id),
size = 4,
color = "gold") +
theme_void () +
theme (plot.title = element_text (size = 15)) +
theme (plot.subtitle = element_text (size = 14)) +
theme (legend.title = element_text(size = 14),
legend.text = element_text(size = 14)) +
scale_fill_continuous (name = "coverage (%)",
trans = "reverse",
guide = guide_colourbar(reverse = TRUE))
print (p)
# save map figure to file
ggsave (filename = "plot_Ethiopia_vaccination_coverage_DHS2016.jpg",
width = 10 * 1.11,
height = 9 * 1.15 * asp,
units = "in",
dpi = 600)
ggsave (filename = "plot_Ethiopia_vaccination_coverage_DHS2016.eps",
width = 10 * 1.11,
height = 9 * 1.15 * asp,
units = "in",
device = cairo_ps)
return (p)
} # end of function -- create_map
# create Ethiopia basic vaccination coverage map using coverage data from DHS 2016
create_map ()
| /ethiopia_map.R | no_license | kaja-a/vaccine_equity_ethiopia | R | false | false | 4,246 | r | # create Ethiopia basic vaccination coverage map using coverage data from DHS 2016
# load packages
library (rgdal)
library (data.table)
library (ggplot2)
library (broom)
library (dplyr)
# map tutorial
# https://www.r-graph-gallery.com/168-load-a-shape-file-into-r.html
# clear workspace
rm (list = ls())
# create Ethiopia basic vaccination coverage map using coverage data from DHS 2016
create_map <- function () {
# read Ethiopia admin level map data
my_spdf <- readOGR (dsn = "ETH_adm",
layer = "ETH_adm1",
verbose = FALSE
)
# 'fortify' the data to get a dataframe format required by ggplot2
spdf_fortified <- tidy (my_spdf, region = "NAME_1")
# coordinates and aspect ratio
coord <- coord_quickmap (xlim = range (spdf_fortified$long),
ylim = range(spdf_fortified$lat),
expand = F
)
asp <- coord$aspect (list (x.range = range (spdf_fortified$long),
y.range = range (spdf_fortified$lat)))
# read Ethiopia basic vaccination coverage data (source: DHS 2016)
coverage <- fread (file = "eth_coverage.csv")
# join coverage data to map data
shapefile.df <- right_join (spdf_fortified, coverage, by = "id")
setDT (shapefile.df)
# admin level-1 data (11 regions in Ethiopia)
idList <- my_spdf@data$NAME_1
# centre of each region
centroids.df <- as.data.frame (coordinates(my_spdf))
names (centroids.df) <- c ("Longitude", "Latitude")
pop.df <- (data.frame(coverage, centroids.df))
setDT (pop.df)
# minor update of region names, latitude, and longitude
pop.df [id == "Benshangul-Gumaz", "id"] <- "Benishangul"
pop.df [id == "Gambela Peoples", "id"] <- "Gambela"
pop.df [id == "Harari People", "id"] <- "Harari"
pop.df [id == "Southern Nations, Nationalities and Peoples", "id"] <- "SNNPR"
pop.df [id == "Oromia", "Longitude"] <- 40.4
pop.df [id == "Harari", "Latitude"] <- 9
pop.df [id == "Dire Dawa", "Latitude"] <- 10
pop.df [id == "Addis Ababa", "Latitude"] <- 9.25
pop.df [id == "Tigray", "Latitude"] <- 14
pop.df [id == "Addis Ababa", "Longitude"] <- 38.4
pop.df [id == "Benishangul", "Longitude"] <- 35.4
# create map
p <- ggplot() +
geom_polygon (data = shapefile.df,
aes (x = long, y = lat, group = group, fill = coverage),
colour = "gold") +
# labs (title = " Basic vaccination coverage in different regions of Ethiopia", size = 10) +
labs (title = " Full vaccination coverage among children aged 12-23 months in 9 regional states and 2 chartered cities of Ethiopia",
subtitle = " (1-dose BCG, 3-dose DTP3-HepB-Hib, 3-dose polio, 1-dose measles (MCV1), 3-dose PCV3, 2-dose rotavirus)" ) +
geom_polygon (data = shapefile.df [id == "Addis Ababa"], aes(x = long, y = lat, group = group, fill = coverage), colour = "gold") +
geom_polygon (data = shapefile.df [id == "Harari People"], aes(x = long, y = lat, group = group, fill = coverage), colour = "gold") +
geom_text (data = pop.df,
aes (x = Longitude, y = Latitude, label = id),
size = 4,
color = "gold") +
theme_void () +
theme (plot.title = element_text (size = 15)) +
theme (plot.subtitle = element_text (size = 14)) +
theme (legend.title = element_text(size = 14),
legend.text = element_text(size = 14)) +
scale_fill_continuous (name = "coverage (%)",
trans = "reverse",
guide = guide_colourbar(reverse = TRUE))
print (p)
# save map figure to file
ggsave (filename = "plot_Ethiopia_vaccination_coverage_DHS2016.jpg",
width = 10 * 1.11,
height = 9 * 1.15 * asp,
units = "in",
dpi = 600)
ggsave (filename = "plot_Ethiopia_vaccination_coverage_DHS2016.eps",
width = 10 * 1.11,
height = 9 * 1.15 * asp,
units = "in",
device = cairo_ps)
return (p)
} # end of function -- create_map
# create Ethiopia basic vaccination coverage map using coverage data from DHS 2016
create_map ()
|
\alias{atkStateTypeRegister}
\name{atkStateTypeRegister}
\title{atkStateTypeRegister}
\description{Register a new object state.}
\usage{atkStateTypeRegister(name)}
\arguments{\item{\code{name}}{[character] a character string describing the new state.}}
\value{[\code{\link{AtkStateType}}] an \code{numeric} value for the new state.}
\author{Derived by RGtkGen from GTK+ documentation}
\keyword{internal}
| /man/atkStateTypeRegister.Rd | no_license | cran/RGtk2.10 | R | false | false | 406 | rd | \alias{atkStateTypeRegister}
\name{atkStateTypeRegister}
\title{atkStateTypeRegister}
\description{Register a new object state.}
\usage{atkStateTypeRegister(name)}
\arguments{\item{\code{name}}{[character] a character string describing the new state.}}
\value{[\code{\link{AtkStateType}}] an \code{numeric} value for the new state.}
\author{Derived by RGtkGen from GTK+ documentation}
\keyword{internal}
|
# ==========================================================================
# 50 Things You Should Know About Data
#
# Unit 6-05 reshape tables
# ==========================================================================
# just knowledge - no coding ;-)
| /L-06/L-06-05_reshape-table.r | no_license | ursklahr/50-t-y-n-t-k-a-d | R | false | false | 267 | r | # ==========================================================================
# 50 Things You Should Know About Data
#
# Unit 6-05 reshape tables
# ==========================================================================
# just knowledge - no coding ;-)
|
library(dplyr)
#file download
filename <- "UCI HAR Dataset.zip"
# Checking if archieve already exists.
if (!file.exists(filename)){
fileURL <- "https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip"
download.file(fileURL, filename, method="curl")
}
# Make folder
if (!file.exists("UCI HAR Dataset")) {
unzip(filename)
}
inpath<-'./UCI HAR Dataset'
#1.Merges the training and the test sets to create one data set
#readin train data
xtrain<-read.table(file.path(inpath, 'train', 'X_train.txt'), header=FALSE)
ytrain<-read.table(file.path(inpath, 'train', 'y_train.txt'), header=FALSE)
strain<-read.table(file.path(inpath, 'train', 'subject_train.txt'), header=FALSE)
#readin test data
xtest<-read.table(file.path(inpath, 'test', 'X_test.txt'), header=FALSE)
ytest<-read.table(file.path(inpath, 'test', 'y_test.txt'), header=FALSE)
stest<-read.table(file.path(inpath, 'test', 'subject_test.txt'), header=FALSE)
#readin feature data
features<-read.table(file.path(inpath, 'features.txt'), header=FALSE)
#readin activity data
activity<-read.table(file.path(inpath, 'activity_labels.txt'), header=FALSE)
colnames(activity) <- c('activityid','activitylabel')
#merge train, test, feature and activity data
colnames(xtrain)=features[,2]
colnames(ytrain)='activityid'
ytrain1<-merge(ytrain, activity, by='activityid' )
colnames(strain)='subjectid'
colnames(xtest)=features[,2]
colnames(ytest)='activityid'
ytest1<-merge(ytest, activity, by='activityid' )
colnames(stest)='subjectid'
train<-cbind(strain, xtrain, ytrain1)
test<-cbind(stest, xtest, ytest1)
df<-rbind(train, test)
#2. Extracts only the measurements on the mean and standard deviation for each measurement.
extract<-df %>%
select(subjectid, activityid, contains('mean'), contains('std'))
#3. Uses descriptive activity names to name the activities in the data set.
data<-merge(extract, activity, by='activityid')
#4. Appropriately labels the data set with descriptive variable names.
names(data)<-gsub('Acc', 'Accelerometer', names(data))
names(data)<-gsub('BodyBody', 'Body', names(data))
names(data)<-gsub('^f', 'frequency', names(data))
names(data)<-gsub('Gyro', 'Gyroscope', names(data))
names(data)<-gsub('Mag', 'Magnitude', names(data))
names(data)<-gsub('^t', 'time', names(data))
#5. From the data set in step 4, creates a second, independent tidy data set
# with the average of each variable for each activity and each subject.
tidydataset<-data %>%
group_by(activitylabel, subjectid) %>%
summarise_all(mean, na.rm = TRUE)
write.table(tidydataset, 'tidydataset.txt', row.name=FALSE)
| /run_analysis.R | no_license | singrisone/Getting-and-cleaning-data-course-project- | R | false | false | 2,685 | r | library(dplyr)
#file download
filename <- "UCI HAR Dataset.zip"
# Checking if archieve already exists.
if (!file.exists(filename)){
fileURL <- "https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip"
download.file(fileURL, filename, method="curl")
}
# Make folder
if (!file.exists("UCI HAR Dataset")) {
unzip(filename)
}
inpath<-'./UCI HAR Dataset'
#1.Merges the training and the test sets to create one data set
#readin train data
xtrain<-read.table(file.path(inpath, 'train', 'X_train.txt'), header=FALSE)
ytrain<-read.table(file.path(inpath, 'train', 'y_train.txt'), header=FALSE)
strain<-read.table(file.path(inpath, 'train', 'subject_train.txt'), header=FALSE)
#readin test data
xtest<-read.table(file.path(inpath, 'test', 'X_test.txt'), header=FALSE)
ytest<-read.table(file.path(inpath, 'test', 'y_test.txt'), header=FALSE)
stest<-read.table(file.path(inpath, 'test', 'subject_test.txt'), header=FALSE)
#readin feature data
features<-read.table(file.path(inpath, 'features.txt'), header=FALSE)
#readin activity data
activity<-read.table(file.path(inpath, 'activity_labels.txt'), header=FALSE)
colnames(activity) <- c('activityid','activitylabel')
#merge train, test, feature and activity data
colnames(xtrain)=features[,2]
colnames(ytrain)='activityid'
ytrain1<-merge(ytrain, activity, by='activityid' )
colnames(strain)='subjectid'
colnames(xtest)=features[,2]
colnames(ytest)='activityid'
ytest1<-merge(ytest, activity, by='activityid' )
colnames(stest)='subjectid'
train<-cbind(strain, xtrain, ytrain1)
test<-cbind(stest, xtest, ytest1)
df<-rbind(train, test)
#2. Extracts only the measurements on the mean and standard deviation for each measurement.
extract<-df %>%
select(subjectid, activityid, contains('mean'), contains('std'))
#3. Uses descriptive activity names to name the activities in the data set.
data<-merge(extract, activity, by='activityid')
#4. Appropriately labels the data set with descriptive variable names.
names(data)<-gsub('Acc', 'Accelerometer', names(data))
names(data)<-gsub('BodyBody', 'Body', names(data))
names(data)<-gsub('^f', 'frequency', names(data))
names(data)<-gsub('Gyro', 'Gyroscope', names(data))
names(data)<-gsub('Mag', 'Magnitude', names(data))
names(data)<-gsub('^t', 'time', names(data))
#5. From the data set in step 4, creates a second, independent tidy data set
# with the average of each variable for each activity and each subject.
tidydataset<-data %>%
group_by(activitylabel, subjectid) %>%
summarise_all(mean, na.rm = TRUE)
write.table(tidydataset, 'tidydataset.txt', row.name=FALSE)
|
####################################################################
#
# removing duplicate/overlapping plumes from plume list
# -----------------------------------------------------------------
#
# Author: Kelsey Foster
#
# This script was written to eliminate the possibility of
# double counting emissions from methane plumes within in
# the same search radius (maximum fetch = 150m). Please see
# CA's Methane Super-emitters (Duren etal) supplementary
# material (SI S2.5 and S2.8) for additional information.
#
#####################################################################
# load necessary R packages----------------------------------------
library(rgdal)
library (sp) # gBuffer, SpatialPointsDataFrame, crs, and spTransform functions
library(rgeos) # gBuffer
library(raster)
library(testit)
# USER DEFINED VARIABLES---------------------------------------------
df = read.csv('/Users/eyam/M2AF/MSF_manual_data_workflow/scripts/1_plume_proximity_filtering/flux_ovest_input_data.csv', stringsAsFactors = FALSE) #plume list as csv
out.file = "/Users/eyam/M2AF/MSF_manual_data_workflow/scripts/1_plume_proximity_filtering/flux_overest_output_06102019.csv" #output file
max.over = .30 #max allowable overlap between plume search radii (should be decimal)
# FUNCTIONS--------------------------------------------------------
# load helper functions
source('/Users/eyam/M2AF/MSF_manual_data_workflow/scripts/1_plume_proximity_filtering/remove_duplicate_plumes_helpers.R')
### coordinate reference systems needed
geog.crs = CRS('+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0') #make shapefile lat/long
proj.crs = CRS("+proj=aea +lat_1=34 +lat_2=40.5 +lat_0=0 +lon_0=-120 +x_0=0 +y_0=-4000000
+ellps=GRS80 +datum=NAD83 +units=m +no_defs") #change to m for area calculation (CA teale albers)
# main function
flux_overest = function(df, out.file, max.over){
### concatenate nearest facility and line name to get unique ID to sort by
df$concat = paste0(df$Nearest.facility..best.estimate., df$Line.name)
### There are duplicate source IDs ==> add unique candidate ID letter to end of each source ID
df$Source.identifier = paste0(df$Source.identifier, substr(df$Candidate.ID, 19,20))
### split the df by concat
x = split(df, df$concat)
# for (plume in 1:length(x)){
for (plume in 1:25){
print('')
data = x[[plume]] # subset df by concat (nearest facility and flight line)
data[data =='#VALUE!'] = NA # change null value to NA
data = subset(data, data$X2.m.wind..E..kg.hr.>1) # get plumes with flux
data = subset(data, !duplicated(data$X2.m.wind..E..kg.hr.)) #remove identical fluxes
if (nrow(data)>1){ #check if data df has more than 1 row
print(plume)
shp = make.shapefile(data, geog.crs, proj.crs) # turn data df into a shapefile of points with 150m buffer
per.overlap = percent.overlap(shp, data) #calculate %overlap and output a df with source ID and % overlap
x[[plume]] = recursive.overlap(per.overlap, data, max.over) #remove all sources with > user defined amount of overlap
}else{x[[plume]] = data}
}
### clean df and write to csv
x = do.call('rbind', x) # recombine split df
x$Source.identifier = substr(x$Source.identifier, 1,6) # remove candidate ID that was appended to source ID
row.names(x) = NULL
print(paste0('Check ', out.file, " for output table" ))
write.csv(x, out.file, row.names=FALSE) # write df to a csv
}
flux_overest(df, out.file, max.over)
| /msf_flow/plume_processor/filter_plumes/remove_duplicate_plumes.R | permissive | dsmbgu8/srcfinder | R | false | false | 3,656 | r | ####################################################################
#
# removing duplicate/overlapping plumes from plume list
# -----------------------------------------------------------------
#
# Author: Kelsey Foster
#
# This script was written to eliminate the possibility of
# double counting emissions from methane plumes within in
# the same search radius (maximum fetch = 150m). Please see
# CA's Methane Super-emitters (Duren etal) supplementary
# material (SI S2.5 and S2.8) for additional information.
#
#####################################################################
# load necessary R packages----------------------------------------
library(rgdal)
library (sp) # gBuffer, SpatialPointsDataFrame, crs, and spTransform functions
library(rgeos) # gBuffer
library(raster)
library(testit)
# USER DEFINED VARIABLES---------------------------------------------
df = read.csv('/Users/eyam/M2AF/MSF_manual_data_workflow/scripts/1_plume_proximity_filtering/flux_ovest_input_data.csv', stringsAsFactors = FALSE) #plume list as csv
out.file = "/Users/eyam/M2AF/MSF_manual_data_workflow/scripts/1_plume_proximity_filtering/flux_overest_output_06102019.csv" #output file
max.over = .30 #max allowable overlap between plume search radii (should be decimal)
# FUNCTIONS--------------------------------------------------------
# load helper functions
source('/Users/eyam/M2AF/MSF_manual_data_workflow/scripts/1_plume_proximity_filtering/remove_duplicate_plumes_helpers.R')
### coordinate reference systems needed
geog.crs = CRS('+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0') #make shapefile lat/long
proj.crs = CRS("+proj=aea +lat_1=34 +lat_2=40.5 +lat_0=0 +lon_0=-120 +x_0=0 +y_0=-4000000
+ellps=GRS80 +datum=NAD83 +units=m +no_defs") #change to m for area calculation (CA teale albers)
# main function
flux_overest = function(df, out.file, max.over){
### concatenate nearest facility and line name to get unique ID to sort by
df$concat = paste0(df$Nearest.facility..best.estimate., df$Line.name)
### There are duplicate source IDs ==> add unique candidate ID letter to end of each source ID
df$Source.identifier = paste0(df$Source.identifier, substr(df$Candidate.ID, 19,20))
### split the df by concat
x = split(df, df$concat)
# for (plume in 1:length(x)){
for (plume in 1:25){
print('')
data = x[[plume]] # subset df by concat (nearest facility and flight line)
data[data =='#VALUE!'] = NA # change null value to NA
data = subset(data, data$X2.m.wind..E..kg.hr.>1) # get plumes with flux
data = subset(data, !duplicated(data$X2.m.wind..E..kg.hr.)) #remove identical fluxes
if (nrow(data)>1){ #check if data df has more than 1 row
print(plume)
shp = make.shapefile(data, geog.crs, proj.crs) # turn data df into a shapefile of points with 150m buffer
per.overlap = percent.overlap(shp, data) #calculate %overlap and output a df with source ID and % overlap
x[[plume]] = recursive.overlap(per.overlap, data, max.over) #remove all sources with > user defined amount of overlap
}else{x[[plume]] = data}
}
### clean df and write to csv
x = do.call('rbind', x) # recombine split df
x$Source.identifier = substr(x$Source.identifier, 1,6) # remove candidate ID that was appended to source ID
row.names(x) = NULL
print(paste0('Check ', out.file, " for output table" ))
write.csv(x, out.file, row.names=FALSE) # write df to a csv
}
flux_overest(df, out.file, max.over)
|
prsumglm <-
function(x, mtype){
year0 <- mtype$refyear
disp <- suppressWarnings(summary(x$model)$dispersion) # suppress warning about zero weights in binomial glm
dev <- x$model$deviance
rdf <- x$model$df.residual
if( is.na(disp) )
cat('Model Fit: not estimated, did you fit a single site?\n')
else
cat(sprintf("Model Fit: Dispersion parameter is %4.2f (resid. deviance %5.1f on %5.0f d.f.)\n", disp, dev, rdf))
fyear <- x$parms$years[1]
if ( year0 == fyear ) { # check if ref year is first year, so need to reverse comparison
fyear <- max(x$parms$years)
delta_ind <- x$parms$index[x$parms$years==fyear]
if (delta_ind < 1 ) {
delta_ind <- -100 * (1-delta_ind )
} else {
delta_ind <- 100 * (delta_ind-1)
}
} else {
delta_ind <- 1/x$parms$index[x$parms$years==fyear]
if (delta_ind < 1 ) {
delta_ind <- -100 * (1-delta_ind)
} else {
delta_ind <- 100 * (delta_ind-1)
}
}
if ( mtype$type == 'annual' ){
if( delta_ind > 1000 )
cat(sprintf("Change between %d and %d: >1000%%\n", fyear, year0))
else
cat(sprintf("Change between %d and %d: %4.1f%%\n", fyear, year0, delta_ind))
}
else if ( mtype$type == 'trend' )
cat(sprintf("Slope for past %d years is %4.2f +/- %4.2f (t=%4.3f, P=%4.3f) \n",
x$test$nyrs, x$test$slope, x$test$slope.se, x$test$tval, x$test$tsig))
else if ( mtype$type == 'constant' )
cat(sprintf("Last year is %3.1f%% that in previous % d years: Estimate=%4.2f +/- %4.2f (t=%4.3f, P=%4.3f) \n",
100*exp(x$test$slope), x$test$nyrs, x$test$slope, x$test$slope.se, x$test$tval, x$test$tsig))
cat('\n')
return(x$parms)
}
| /R/prsumglm.R | no_license | pboesu/cesr | R | false | false | 1,759 | r | prsumglm <-
function(x, mtype){
year0 <- mtype$refyear
disp <- suppressWarnings(summary(x$model)$dispersion) # suppress warning about zero weights in binomial glm
dev <- x$model$deviance
rdf <- x$model$df.residual
if( is.na(disp) )
cat('Model Fit: not estimated, did you fit a single site?\n')
else
cat(sprintf("Model Fit: Dispersion parameter is %4.2f (resid. deviance %5.1f on %5.0f d.f.)\n", disp, dev, rdf))
fyear <- x$parms$years[1]
if ( year0 == fyear ) { # check if ref year is first year, so need to reverse comparison
fyear <- max(x$parms$years)
delta_ind <- x$parms$index[x$parms$years==fyear]
if (delta_ind < 1 ) {
delta_ind <- -100 * (1-delta_ind )
} else {
delta_ind <- 100 * (delta_ind-1)
}
} else {
delta_ind <- 1/x$parms$index[x$parms$years==fyear]
if (delta_ind < 1 ) {
delta_ind <- -100 * (1-delta_ind)
} else {
delta_ind <- 100 * (delta_ind-1)
}
}
if ( mtype$type == 'annual' ){
if( delta_ind > 1000 )
cat(sprintf("Change between %d and %d: >1000%%\n", fyear, year0))
else
cat(sprintf("Change between %d and %d: %4.1f%%\n", fyear, year0, delta_ind))
}
else if ( mtype$type == 'trend' )
cat(sprintf("Slope for past %d years is %4.2f +/- %4.2f (t=%4.3f, P=%4.3f) \n",
x$test$nyrs, x$test$slope, x$test$slope.se, x$test$tval, x$test$tsig))
else if ( mtype$type == 'constant' )
cat(sprintf("Last year is %3.1f%% that in previous % d years: Estimate=%4.2f +/- %4.2f (t=%4.3f, P=%4.3f) \n",
100*exp(x$test$slope), x$test$nyrs, x$test$slope, x$test$slope.se, x$test$tval, x$test$tsig))
cat('\n')
return(x$parms)
}
|
rm(list = ls())
library(purrr)
library(repurrrsive)
#dealing with lists
#make a list
mylist<-list(a = "a", b = 2)
#use the $ operator to access parts of a list
mylist$a
mylist$b
#and the double square bracket [[datum]]
mylist[["a"]]
mylist[["b"]]
named_element<-"a"
mylist[[named_element]]
#the single square bracket [datum] For list inputs, always returns a list
mylist["a"]
#using str() to get at list structure and organization
#let's use the listviewer package to learn list exploration
install.packages("listviewer")
library(listviewer)
str(wesanderson)
str(got_chars)
View(got_chars)
#play with max.level
str(wesanderson, max.level = 0)
str(wesanderson, max.level = 1)
str(wesanderson, max.level = 2)
str(got_chars, max.level=0)
str(got_chars, max.level = 1)
str(got_chars, max.level = 2)
str(got_chars, max.level = 2, list.len = 2)
str(got_chars$url, list.len = 1)
str(got_chars[[1]])
str(got_chars[1], list.len = 3)
| /Script files/review_lists.R | no_license | erethizon/JSON_primer | R | false | false | 932 | r | rm(list = ls())
library(purrr)
library(repurrrsive)
#dealing with lists
#make a list
mylist<-list(a = "a", b = 2)
#use the $ operator to access parts of a list
mylist$a
mylist$b
#and the double square bracket [[datum]]
mylist[["a"]]
mylist[["b"]]
named_element<-"a"
mylist[[named_element]]
#the single square bracket [datum] For list inputs, always returns a list
mylist["a"]
#using str() to get at list structure and organization
#let's use the listviewer package to learn list exploration
install.packages("listviewer")
library(listviewer)
str(wesanderson)
str(got_chars)
View(got_chars)
#play with max.level
str(wesanderson, max.level = 0)
str(wesanderson, max.level = 1)
str(wesanderson, max.level = 2)
str(got_chars, max.level=0)
str(got_chars, max.level = 1)
str(got_chars, max.level = 2)
str(got_chars, max.level = 2, list.len = 2)
str(got_chars$url, list.len = 1)
str(got_chars[[1]])
str(got_chars[1], list.len = 3)
|
#! /usr/bin/env R
argsIn <- commandArgs(trailingOnly=TRUE)
source(argsIn[1])
NETWORK <- argsIn[2]
EXPRMAT <- argsIn[3]
METADATA <- argsIn[4]
TAG <- argsIn[5]
runMsViper(NETWORK, EXPRMAT, METADATA, TAG)
RDATAS <- grep("ens2ext",dir(pattern="RData"),invert=TRUE,value=T)
lapply(RDATAS, filterSigRes)
| /bin/run_viper.call.R | no_license | brucemoran/aracne-ap_viper | R | false | false | 300 | r | #! /usr/bin/env R
argsIn <- commandArgs(trailingOnly=TRUE)
source(argsIn[1])
NETWORK <- argsIn[2]
EXPRMAT <- argsIn[3]
METADATA <- argsIn[4]
TAG <- argsIn[5]
runMsViper(NETWORK, EXPRMAT, METADATA, TAG)
RDATAS <- grep("ens2ext",dir(pattern="RData"),invert=TRUE,value=T)
lapply(RDATAS, filterSigRes)
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/topN.R
\name{top_n_nested}
\alias{top_n_nested}
\title{Select top n rows by certain value}
\usage{
top_n_nested(df, n = 2, wt)
}
\arguments{
\item{df}{A nested dataframe}
\item{n}{Number of rows to return}
\item{wt}{The variable to use for ordering}
}
\description{
Select top n rows in each group, ordered by wt within a nested dataframe
}
| /man/top_n_nested.Rd | permissive | lenamax2355/homelocator | R | false | true | 421 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/topN.R
\name{top_n_nested}
\alias{top_n_nested}
\title{Select top n rows by certain value}
\usage{
top_n_nested(df, n = 2, wt)
}
\arguments{
\item{df}{A nested dataframe}
\item{n}{Number of rows to return}
\item{wt}{The variable to use for ordering}
}
\description{
Select top n rows in each group, ordered by wt within a nested dataframe
}
|
#' Add trips to trip set
#'
#' Creates a data frame of the same description as a trip set to append
#'
#' @param trip_ids ids for new trips
#' @param new_mode mode for new trips
#' @param distance distances to sample from
#' @param participant_id participant id for new trips
#' @param age age for participant
#' @param sex sex for participant
#' @param nTrips number of trips for participant
#' @param speed speed for new trips
#'
#' @return data frame of trips
#'
#' @export
add_trips <- function(trip_ids=0,new_mode='pedestrian',distance=1,participant_id=0,age=20,sex='male',nTrips=3,speed=4.8){
dist <- sample(distance,nTrips,replace=T)
return(data.frame(trip_id = trip_ids,
trip_mode = new_mode,
trip_distance = dist,
stage_mode = new_mode,
stage_distance = dist,
stage_duration = 60 * dist / speed,
participant_id = participant_id,
age = sample(age,1,replace=T),
sex = sample(sex,1,replace=T)))
}
| /R/add_trips.R | no_license | danielgils/ITHIM-R | R | false | false | 1,023 | r | #' Add trips to trip set
#'
#' Creates a data frame of the same description as a trip set to append
#'
#' @param trip_ids ids for new trips
#' @param new_mode mode for new trips
#' @param distance distances to sample from
#' @param participant_id participant id for new trips
#' @param age age for participant
#' @param sex sex for participant
#' @param nTrips number of trips for participant
#' @param speed speed for new trips
#'
#' @return data frame of trips
#'
#' @export
add_trips <- function(trip_ids=0,new_mode='pedestrian',distance=1,participant_id=0,age=20,sex='male',nTrips=3,speed=4.8){
dist <- sample(distance,nTrips,replace=T)
return(data.frame(trip_id = trip_ids,
trip_mode = new_mode,
trip_distance = dist,
stage_mode = new_mode,
stage_distance = dist,
stage_duration = 60 * dist / speed,
participant_id = participant_id,
age = sample(age,1,replace=T),
sex = sample(sex,1,replace=T)))
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{planes_centros}
\alias{planes_centros}
\title{Planes de la UPCT.}
\format{Un dataframe con 123 filas y 17columnas}
\source{
dwd_estudios.
}
\usage{
data(planes_centros)
}
\description{
Planes de la UPCT.
}
\keyword{datasets}
| /man/planes_centros.Rd | no_license | mkesslerct/opadar | R | false | true | 333 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{planes_centros}
\alias{planes_centros}
\title{Planes de la UPCT.}
\format{Un dataframe con 123 filas y 17columnas}
\source{
dwd_estudios.
}
\usage{
data(planes_centros)
}
\description{
Planes de la UPCT.
}
\keyword{datasets}
|
##' This function calculates element 271 (protein)
##'
##' @param element271Num The column corresponding to value of element
##' 271.
##' @param element271Symb The column corresponding to symbol of element
##' 271.
##' @param ratio271Num The column corresponding to ratio of element
##' 271.
##' @param element141Num The column corresponding to value of element
##' 141.
##' @param data The data
##' @export
##'
calculateEle271 = function(element271Num, element271Symb, ratio271Num,
element141Num, data){
setnames(data,
old = c(element271Num, element271Symb, ratio271Num,
element141Num),
new = c("element271Num", "element271Symb", "ratio271Num",
"element141Num"))
replaceIndex1 = with(data, which(replaceable(element271Symb)))
data[replaceIndex1,
`:=`(c("element271Num", "element271Symb"),
appendSymbol(ratio271Num *
computeRatio(element141Num, 100) , "C"))]
setnames(data,
new = c(element271Num, element271Symb, ratio271Num,
element141Num),
old = c("element271Num", "element271Symb", "ratio271Num",
"element141Num"))
replaceIndex1
}
| /faoswsAupus/R/calculateEle271.R | no_license | mkao006/sws_aupus | R | false | false | 1,234 | r | ##' This function calculates element 271 (protein)
##'
##' @param element271Num The column corresponding to value of element
##' 271.
##' @param element271Symb The column corresponding to symbol of element
##' 271.
##' @param ratio271Num The column corresponding to ratio of element
##' 271.
##' @param element141Num The column corresponding to value of element
##' 141.
##' @param data The data
##' @export
##'
calculateEle271 = function(element271Num, element271Symb, ratio271Num,
element141Num, data){
setnames(data,
old = c(element271Num, element271Symb, ratio271Num,
element141Num),
new = c("element271Num", "element271Symb", "ratio271Num",
"element141Num"))
replaceIndex1 = with(data, which(replaceable(element271Symb)))
data[replaceIndex1,
`:=`(c("element271Num", "element271Symb"),
appendSymbol(ratio271Num *
computeRatio(element141Num, 100) , "C"))]
setnames(data,
new = c(element271Num, element271Symb, ratio271Num,
element141Num),
old = c("element271Num", "element271Symb", "ratio271Num",
"element141Num"))
replaceIndex1
}
|
\name{comparisonsGraph}
\docType{methods}
\alias{comparisonsGraph}
\title{
Graph comparisons specified amongst groups
}
\description{
Generic function to create a Comparisons Graph based on a Comparisons
Table created in turn by the \pkg{cg}
package.
}
\usage{
comparisonsGraph(compstable, cgtheme=TRUE, device="single",
wraplength=20, cex.comps=0.7, \dots)
}
\arguments{
\item{compstable }{
A \code{comparisonsTable} object created by a
\code{\link{comparisonsTable}} method from the \pkg{cg} package.
\cr There is one class of objects that is currently available:
\cr\code{\link{cgOneFactorComparisonsTable}}, which is prepared by the
\cr\code{\link{comparisonsTable.cgOneFactorFit}} method.
}
\item{cgtheme }{
When set to the default \code{TRUE}, ensures a trellis device is active with
limited color scheme. Namely, \code{background},
\code{strip.shingle} and \code{strip.background} are each set to \code{"white"}.
}
\item{device }{
Can be one of three values:
\describe{
\item{\code{"single"}}{The default, which will put all graph panels on the same
device page.}
\item{\code{"multiple"}}{Relevant only when more than one panel of
graphs is possible. In
that case, a new graphics device is generated each newly
generated
single-paneled graph.}
\item{\code{"ask"}}{Relevant only when more than one panel of
graphs is possible. In
that case, each are portrayed as a single-paneled graph, with the
\code{ask=TRUE} argument specified in \code{\link{par}} so that
user input confirmation is needed before the graphs are
drawn.
}
}
}
\item{wraplength }{On the left hand vertical axis are each A vs. B comparison label
from the \code{compstable} object. An attempt at sensible formatting
when a newline is needed is made, but adjustment by this argument may
be needed. The default is \code{20} characters before wrapping to a newline.
}
\item{cex.comps }{Similar to \code{wraplength},
adjustment of this argument parameter can
be made to fit the comparison labels on the left hand vertical axis.
}
\item{\dots }{
Additional arguments, depending on the specific method written for
the \code{compstable} object. Currently, there is only one such specific method; see
\cr \code{\link{comparisonsGraph.cgOneFactorComparisonsTable}} for any additional
arguments that can be specified.
}
}
\value{
The main purpose is the side
effect of graphing to the current device. See the specific methods for
discussion of any return values.
}
\author{
Bill Pikounis [aut, cre, cph], John Oleynick [aut], Eva Ye [ctb]
}
\note{
Contact \email{cg@billpikounis.net} for bug reports, questions,
concerns, and comments.
}
\seealso{
\code{\link{comparisonsGraph.cgOneFactorComparisonsTable}}
}
\examples{
#### One Factor data
data(canine)
canine.data <- prepareCGOneFactorData(canine, format="groupcolumns",
analysisname="Canine",
endptname="Prostate Volume",
endptunits=expression(plain(cm)^3),
digits=1, logscale=TRUE, refgrp="CC")
canine.fit <- fit(canine.data)
canine.comps1 <- comparisonsTable(canine.fit, mcadjust=TRUE,
type="allgroupstocontrol", refgrp="CC")
comparisonsGraph(canine.comps1)
}
| /man/comparisonsGraphGeneric.Rd | no_license | cran/cg | R | false | false | 3,598 | rd | \name{comparisonsGraph}
\docType{methods}
\alias{comparisonsGraph}
\title{
Graph comparisons specified amongst groups
}
\description{
Generic function to create a Comparisons Graph based on a Comparisons
Table created in turn by the \pkg{cg}
package.
}
\usage{
comparisonsGraph(compstable, cgtheme=TRUE, device="single",
wraplength=20, cex.comps=0.7, \dots)
}
\arguments{
\item{compstable }{
A \code{comparisonsTable} object created by a
\code{\link{comparisonsTable}} method from the \pkg{cg} package.
\cr There is one class of objects that is currently available:
\cr\code{\link{cgOneFactorComparisonsTable}}, which is prepared by the
\cr\code{\link{comparisonsTable.cgOneFactorFit}} method.
}
\item{cgtheme }{
When set to the default \code{TRUE}, ensures a trellis device is active with
limited color scheme. Namely, \code{background},
\code{strip.shingle} and \code{strip.background} are each set to \code{"white"}.
}
\item{device }{
Can be one of three values:
\describe{
\item{\code{"single"}}{The default, which will put all graph panels on the same
device page.}
\item{\code{"multiple"}}{Relevant only when more than one panel of
graphs is possible. In
that case, a new graphics device is generated each newly
generated
single-paneled graph.}
\item{\code{"ask"}}{Relevant only when more than one panel of
graphs is possible. In
that case, each are portrayed as a single-paneled graph, with the
\code{ask=TRUE} argument specified in \code{\link{par}} so that
user input confirmation is needed before the graphs are
drawn.
}
}
}
\item{wraplength }{On the left hand vertical axis are each A vs. B comparison label
from the \code{compstable} object. An attempt at sensible formatting
when a newline is needed is made, but adjustment by this argument may
be needed. The default is \code{20} characters before wrapping to a newline.
}
\item{cex.comps }{Similar to \code{wraplength},
adjustment of this argument parameter can
be made to fit the comparison labels on the left hand vertical axis.
}
\item{\dots }{
Additional arguments, depending on the specific method written for
the \code{compstable} object. Currently, there is only one such specific method; see
\cr \code{\link{comparisonsGraph.cgOneFactorComparisonsTable}} for any additional
arguments that can be specified.
}
}
\value{
The main purpose is the side
effect of graphing to the current device. See the specific methods for
discussion of any return values.
}
\author{
Bill Pikounis [aut, cre, cph], John Oleynick [aut], Eva Ye [ctb]
}
\note{
Contact \email{cg@billpikounis.net} for bug reports, questions,
concerns, and comments.
}
\seealso{
\code{\link{comparisonsGraph.cgOneFactorComparisonsTable}}
}
\examples{
#### One Factor data
data(canine)
canine.data <- prepareCGOneFactorData(canine, format="groupcolumns",
analysisname="Canine",
endptname="Prostate Volume",
endptunits=expression(plain(cm)^3),
digits=1, logscale=TRUE, refgrp="CC")
canine.fit <- fit(canine.data)
canine.comps1 <- comparisonsTable(canine.fit, mcadjust=TRUE,
type="allgroupstocontrol", refgrp="CC")
comparisonsGraph(canine.comps1)
}
|
f <- unz(description="exdata_data_household_power_consumption.zip", filename="household_power_consumption.txt")
tab5rows<-read.table(f, sep = ';', header = TRUE, stringsAsFactors = F, nrows = 5)
classes <- sapply(tab5rows, class)
f <- unz(description="exdata_data_household_power_consumption.zip", filename="household_power_consumption.txt")
data <- read.table(f, sep = ';', header = TRUE, colClasses = classes, na.strings = "?", stringsAsFactors = F)
fdata <- data[data$Date=='1/2/2007' | data$Date=='2/2/2007',]
fdata$Datetime <- strptime(paste(fdata$Date, fdata$Time), "%d/%m/%Y %H:%M:%S")
attach(fdata)
png(filename = "plot2.png", width = 480, height = 480)
plot(Datetime, Global_active_power, type = "l", ylab="Global Active Power (kilowatts)", xlab="")
dev.off()
| /plot2.R | no_license | martinstorch/ExData_Plotting1 | R | false | false | 775 | r |
f <- unz(description="exdata_data_household_power_consumption.zip", filename="household_power_consumption.txt")
tab5rows<-read.table(f, sep = ';', header = TRUE, stringsAsFactors = F, nrows = 5)
classes <- sapply(tab5rows, class)
f <- unz(description="exdata_data_household_power_consumption.zip", filename="household_power_consumption.txt")
data <- read.table(f, sep = ';', header = TRUE, colClasses = classes, na.strings = "?", stringsAsFactors = F)
fdata <- data[data$Date=='1/2/2007' | data$Date=='2/2/2007',]
fdata$Datetime <- strptime(paste(fdata$Date, fdata$Time), "%d/%m/%Y %H:%M:%S")
attach(fdata)
png(filename = "plot2.png", width = 480, height = 480)
plot(Datetime, Global_active_power, type = "l", ylab="Global Active Power (kilowatts)", xlab="")
dev.off()
|
#' rKIN: A package for computating isotopic niche space
#'
#' The rKIN This package applies methods used to estimate animal homerange, but
#' instead of geospatial coordinates, we use isotopic coordinates. The estimation
#' methods include: 1) 2-dimensional bivariate normal kernel utilization density
#' estimator with multiple bandwidth estimation methods, 2) bivariate normal
#' ellipse estimator, and 3) minimum convex polygon estimator, all applied to
#' stable isotope data. Additionally, functions to
#' determine niche area, polygon overlap between groups and levels (confidence
#' contours) and plotting capabilities.
#'
#' @section rKIN functions:
#' The rKIN functions:
#' estKIN, estEllipse, estMCP, plot.kin, getArea, calcOverlap
#'
#' @docType package
#' @name rKIN
NULL
| /R/package-rKIN.R | no_license | cran/rKIN | R | false | false | 803 | r | #' rKIN: A package for computating isotopic niche space
#'
#' The rKIN This package applies methods used to estimate animal homerange, but
#' instead of geospatial coordinates, we use isotopic coordinates. The estimation
#' methods include: 1) 2-dimensional bivariate normal kernel utilization density
#' estimator with multiple bandwidth estimation methods, 2) bivariate normal
#' ellipse estimator, and 3) minimum convex polygon estimator, all applied to
#' stable isotope data. Additionally, functions to
#' determine niche area, polygon overlap between groups and levels (confidence
#' contours) and plotting capabilities.
#'
#' @section rKIN functions:
#' The rKIN functions:
#' estKIN, estEllipse, estMCP, plot.kin, getArea, calcOverlap
#'
#' @docType package
#' @name rKIN
NULL
|
# Copyright 2016 Province of British Columbia
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
#' The size of British Columbia
#'
#' Total area, Land area only, or Freshwater area only, in the units of your choosing.
#'
#' The sizes are from \href{http://www.statcan.gc.ca/tables-tableaux/sum-som/l01/cst01/phys01-eng.htm}{Statistics Canada}
#'
#' @param what Which part of BC? One of `'total'` (default), `'land'`, or `'freshwater'`.
#' @param units One of `'km2'` (square kilometres; default), `'m2'` (square metres),
#' `'ha'` (hectares), `'acres'`, or `'sq_mi'` (square miles)
#'
#' @return The area of B.C. in the desired units (numeric vector).
#' @export
#'
#' @examples
#' ## With no arguments, gives the total area in km^2:
#' bc_area()
#'
#' ## Get the area of the land only, in hectares:
#' bc_area("land", "ha")
bc_area <- function(what = "total", units = "km2") {
what = match.arg(what, c("total", "land", "freshwater"))
units = match.arg(units, c("km2", "m2", "ha", "acres", "sq_mi"))
val_km2 <- switch(what, total = 944735, land = 925186, freshwater = 19549)
ret <- switch(units, km2 = val_km2, m2 = km2_m2(val_km2), ha = km2_ha(val_km2),
acres = km2_acres(val_km2), sq_mi = km2_sq_mi(val_km2))
ret <- round(ret, digits = 0)
structure(ret, names = paste(what, units, sep = "_"))
}
km2_m2 <- function(x) {
x * 1e6
}
km2_ha <- function(x) {
x * 100
}
km2_acres <- function(x) {
x * 247.105
}
km2_sq_mi <- function(x) {
x * 0.386102
}
#' Transform a Spatial* object to BC Albers projection
#'
#' @param obj The Spatial* or sf object to transform
#'
#' @return the Spatial* or sf object in BC Albers projection
#' @export
#'
transform_bc_albers <- function(obj) {
UseMethod("transform_bc_albers")
}
#' @export
transform_bc_albers.Spatial <- function(obj) {
if (!inherits(obj, "Spatial")) {
stop("sp_obj must be a Spatial object", call. = FALSE)
}
if (!requireNamespace("rgdal", quietly = TRUE)) {
stop("Package rgdal could not be loaded", call. = FALSE)
}
sp::spTransform(obj, sp::CRS("+init=epsg:3005"))
}
#' @export
transform_bc_albers.sf <- function(obj) {
sf::st_transform(obj, 3005)
}
#' @export
transform_bc_albers.sfc <- transform_bc_albers.sf
#' Check and fix polygons that self-intersect, and sometimes can fix orphan holes
#'
#' For `sf` objects, uses `lwgeom::st_make_valid` if `lwgeom` is installed.
#' Otherwise, uses the common method of buffering by zero.
#'
#' `fix_self_intersect` has been removed and will no longer work. Use
#' `fix_geo_problems` instead
#'
#' @param obj The SpatialPolygons* or sf object to check/fix
#' @param tries The maximum number of attempts to repair the geometry.
#'
#' @return The SpatialPolygons* or sf object, repaired if necessary
#' @export
fix_geo_problems <- function(obj, tries = 5) {
UseMethod("fix_geo_problems")
}
#' @export
fix_geo_problems.Spatial <- function(obj, tries = 5) {
if (!requireNamespace("rgeos", quietly = TRUE)) {
stop("Package rgeos required but not available", call. = FALSE)
}
is_valid <- suppressWarnings(rgeos::gIsValid(obj))
if (is_valid) {
message("Geometry is valid")
return(obj)
}
## If not valid, repair. Try max tries times
i <- 1L
message("Problems found - Attempting to repair...")
while (i <= tries) {
message("Attempt ", i, " of ", tries)
obj <- rgeos::gBuffer(obj, byid = TRUE, width = 0)
is_valid <- suppressWarnings(rgeos::gIsValid(obj))
if (is_valid) {
message("Geometry is valid")
return(obj)
} else {
i <- i + 1
}
}
warning("Tried ", tries, " times but could not repair geometry")
obj
}
#' @export
fix_geo_problems.sf <- function(obj, tries = 5) {
## Check if the overall geomtry is valid, if it is, exit and return input
is_valid <- suppressWarnings(suppressMessages(sf::st_is_valid(obj)))
if (all(is_valid)) {
message("Geometry is valid")
return(obj)
}
message("Problems found - Attempting to repair...")
if (requireNamespace("lwgeom", quietly = TRUE)) {
return(lwgeom::st_make_valid(obj))
} else {
message("package lwgeom not available for the st_make_valid function, sf::st_buffer(dist = 0)")
i <- 1
while (i <= tries) { # Try three times
message("Attempt ", i, " of ", tries)
obj <- sf::st_buffer(obj, dist = 0)
is_valid <- suppressWarnings(suppressMessages(sf::st_is_valid(obj)))
if (all(is_valid)) {
message("Geometry is valid")
return(obj)
} else {
i <- i + 1
}
}
}
warning("tried ", tries, " times but could not repair all geometries")
obj
}
#' @export
fix_geo_problems.sfc <- fix_geo_problems.sf
#' Union a SpatialPolygons* object with itself to remove overlaps, while retaining attributes
#'
#' The IDs of source polygons are stored in a list-column called
#' `union_ids`, and original attributes (if present) are stored as nested
#' dataframes in a list-column called `union_df`
#'
#' @param x A `SpatialPolygons` or `SpatialPolygonsDataFrame` object
#'
#' @return A `SpatialPolygons` or `SpatialPolygonsDataFrame` object
#' @export
#'
#' @examples
#' if (require(sp)) {
#' p1 <- Polygon(cbind(c(2,4,4,1,2),c(2,3,5,4,2)))
#' p2 <- Polygon(cbind(c(5,4,3,2,5),c(2,3,3,2,2)))
#'
#' ps1 <- Polygons(list(p1), "s1")
#' ps2 <- Polygons(list(p2), "s2")
#'
#' spp <- SpatialPolygons(list(ps1,ps2), 1:2)
#'
#' df <- data.frame(a = c("A", "B"), b = c("foo", "bar"),
#' stringsAsFactors = FALSE)
#'
#' spdf <- SpatialPolygonsDataFrame(spp, df, match.ID = FALSE)
#'
#' plot(spdf, col = c(rgb(1, 0, 0,0.5), rgb(0, 0, 1,0.5)))
#'
#' unioned_spdf <- self_union(spdf)
#' unioned_sp <- self_union(spp)
#' }
self_union <- function(x) {
if (!inherits(x, "SpatialPolygons")) {
stop("x must be a SpatialPolygons or SpatialPolygonsDataFrame")
}
if (!requireNamespace("raster", quietly = TRUE)) {
stop("Package raster could not be loaded", call. = FALSE)
}
unioned <- raster_union(x)
unioned$union_ids <- get_unioned_ids(unioned)
export_cols <- c("union_count", "union_ids")
if (inherits(x, "SpatialPolygonsDataFrame")) {
unioned$union_df <- lapply(unioned$union_ids, function(y) x@data[y, ])
export_cols <- c(export_cols, "union_df")
}
names(unioned)[names(unioned) == "count"] <- "union_count"
unioned[, export_cols]
}
#' Modified raster::union method for a single SpatialPolygons(DataFrame)
#'
#' Modify raster::union to remove the expression:
#' if (!rgeos::gIntersects(x)) {
#' return(x)
#' }
#' As it throws an error:
#' Error in RGEOSBinPredFunc(spgeom1, spgeom2, byid, func) :
#' TopologyException: side location conflict
#'
#' @param x a single SpatialPolygons(DataFrame) object
#' @noRd
raster_union <- function(x) {
# First get the function (method)
f <- methods::getMethod("union", c("SpatialPolygons", "missing"))
# Find the offending block in the body, and replace it with NULL
the_prob <- which(grepl("!rgeos::gIntersects(x)", body(f), fixed = TRUE))
body(f)[[the_prob]] <- NULL
# Call the modified function with the input
f(x)
}
## For each new polygon in a SpatialPolygonsDataFrame that has been unioned with
## itself (raster::union(SPDF, missing)), get the original polygon ids that
## compose it
get_unioned_ids <- function(unioned_sp) {
id_cols <- grep("^ID\\.", names(unioned_sp@data))
unioned_sp_data <- as.matrix(unioned_sp@data[, id_cols])
colnames(unioned_sp_data) <- gsub("ID\\.", "", colnames(unioned_sp_data))
unioned_ids <- apply(unioned_sp_data, 1, function(i) {
as.numeric(colnames(unioned_sp_data)[i > 0])
})
names(unioned_ids) <- rownames(unioned_sp_data)
unioned_ids
}
#' Get or calculate the attribute of a list-column containing nested dataframes.
#'
#' For example, `self_union` produces a `SpatialPolygonsDataFrame`
#' that has a column called `union_df`, which contains a `data.frame`
#' for each polygon with the attributes from the constituent polygons.
#'
#' @param x the list-column in the (SpatialPolygons)DataFrame that contains nested data.frames
#' @param col the column in the nested data frames from which to retrieve/calculate attributes
#' @param fun function to determine the resulting single attribute from overlapping polygons
#' @param ... other parameters passed on to `fun`
#'
#' @return An atomic vector of the same length as x
#' @export
#'
#' @examples
#' if (require(sp)) {
#' p1 <- Polygon(cbind(c(2,4,4,1,2),c(2,3,5,4,2)))
#' p2 <- Polygon(cbind(c(5,4,3,2,5),c(2,3,3,2,2)))
#' ps1 <- Polygons(list(p1), "s1")
#' ps2 <- Polygons(list(p2), "s2")
#' spp <- SpatialPolygons(list(ps1,ps2), 1:2)
#' df <- data.frame(a = c(1, 2), b = c("foo", "bar"),
#' c = factor(c("high", "low"), ordered = TRUE,
#' levels = c("low", "high")),
#' stringsAsFactors = FALSE)
#' spdf <- SpatialPolygonsDataFrame(spp, df, match.ID = FALSE)
#' plot(spdf, col = c(rgb(1, 0, 0,0.5), rgb(0, 0, 1,0.5)))
#' unioned_spdf <- self_union(spdf)
#' get_poly_attribute(unioned_spdf$union_df, "a", sum)
#' get_poly_attribute(unioned_spdf$union_df, "c", max)
#' }
get_poly_attribute <- function(x, col, fun, ...) {
if (!inherits(x, "list")) stop("x must be a list, or list-column in a data frame")
if (!all(vapply(x, is.data.frame, logical(1)))) stop("x must be a list of data frames")
if (!col %in% names(x[[1]])) stop(col, " is not a column in the data frames in x")
if (!is.function(fun)) stop("fun must be a function")
test_data <- x[[1]][[col]]
return_type <- get_return_type(test_data)
is_fac <- FALSE
if (return_type == "factor") {
is_fac <- TRUE
lvls <- levels(test_data)
ordered <- is.ordered(test_data)
return_type <- "integer"
}
fun_value <- eval(call(return_type, 1))
ret <- vapply(x, function(y) {
fun(y[[col]], ...)
}, FUN.VALUE = fun_value)
if (is_fac) {
ret <- factor(lvls[ret], ordered = ordered, levels = lvls)
}
ret
}
get_return_type <- function(x) {
if (is.factor(x)) {
return_type <- "factor"
} else {
return_type <- typeof(x)
}
}
#' Combine Northern Rockies Regional Municipality with Regional Districts
#'
#' @inheritParams get_layer
#'
#' @return A layer where the Northern Rockies Regional Municipality has been
#' combined with the Regional Districts to form a full provincial coverage.
#' @export
combine_nr_rd <- function(class = c("sf", "sp")) {
class = match.arg(class)
rd <- get_layer("regional_districts", class = class)
mun <- get_layer("municipalities", class = class)
rbind(rd, mun[mun$ADMIN_AREA_ABBREVIATION == "NRRM",])
}
ask <- function(...) {
choices <- c("Yes", "No")
cat(paste0(..., collapse = ""))
utils::menu(choices) == which(choices == "Yes")
}
#' Biogeoclimatic Zone Colours
#'
#' Standard colours used to represent Biogeoclimatic Zone colours to be used in plotting.
#'
#' @return named vector of hexadecimal colour codes. Names are standard
#' abbreviations of Zone names.
#' @export
#'
#' @examples
#' \dontrun{
#' if (require("bcmaps.rdata") && #' require(sf) && require(ggplot2) &&
#' packageVersion("ggplot2") >= '2.2.1.9000') {
#' bec <- bec()
#' ggplot() +
#' geom_sf(data = bec[bec$ZONE %in% c("BG", "PP"),],
#' aes(fill = ZONE, col = ZONE)) +
#' scale_fill_manual(values = bec_colors()) +
#' scale_colour_manual(values = bec_colours())
#' }
#' }
bec_colours <- function() {
bec_colours <- c(BAFA = "#E5D8B1", SWB = "#A3D1AB", BWBS = "#ABE7FF",
ESSF = "#9E33D3", CMA = "#E5C7C7", SBS = "#2D8CBD",
MH = "#A599FF", CWH = "#208500", ICH = "#85A303",
IMA = "#B2B2B2", SBPS = "#36DEFC", MS = "#FF46A3",
IDF = "#FFCF00", BG = "#FF0000", PP = "#DE7D00",
CDF = "#FFFF00")
bec_colours[sort(names(bec_colours))]
}
#' @rdname bec_colours
#' @export
bec_colors <- bec_colours
| /R/utils.R | permissive | robsalasco/bcmaps | R | false | false | 12,375 | r | # Copyright 2016 Province of British Columbia
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and limitations under the License.
#' The size of British Columbia
#'
#' Total area, Land area only, or Freshwater area only, in the units of your choosing.
#'
#' The sizes are from \href{http://www.statcan.gc.ca/tables-tableaux/sum-som/l01/cst01/phys01-eng.htm}{Statistics Canada}
#'
#' @param what Which part of BC? One of `'total'` (default), `'land'`, or `'freshwater'`.
#' @param units One of `'km2'` (square kilometres; default), `'m2'` (square metres),
#' `'ha'` (hectares), `'acres'`, or `'sq_mi'` (square miles)
#'
#' @return The area of B.C. in the desired units (numeric vector).
#' @export
#'
#' @examples
#' ## With no arguments, gives the total area in km^2:
#' bc_area()
#'
#' ## Get the area of the land only, in hectares:
#' bc_area("land", "ha")
bc_area <- function(what = "total", units = "km2") {
what = match.arg(what, c("total", "land", "freshwater"))
units = match.arg(units, c("km2", "m2", "ha", "acres", "sq_mi"))
val_km2 <- switch(what, total = 944735, land = 925186, freshwater = 19549)
ret <- switch(units, km2 = val_km2, m2 = km2_m2(val_km2), ha = km2_ha(val_km2),
acres = km2_acres(val_km2), sq_mi = km2_sq_mi(val_km2))
ret <- round(ret, digits = 0)
structure(ret, names = paste(what, units, sep = "_"))
}
km2_m2 <- function(x) {
x * 1e6
}
km2_ha <- function(x) {
x * 100
}
km2_acres <- function(x) {
x * 247.105
}
km2_sq_mi <- function(x) {
x * 0.386102
}
#' Transform a Spatial* object to BC Albers projection
#'
#' @param obj The Spatial* or sf object to transform
#'
#' @return the Spatial* or sf object in BC Albers projection
#' @export
#'
transform_bc_albers <- function(obj) {
UseMethod("transform_bc_albers")
}
#' @export
transform_bc_albers.Spatial <- function(obj) {
if (!inherits(obj, "Spatial")) {
stop("sp_obj must be a Spatial object", call. = FALSE)
}
if (!requireNamespace("rgdal", quietly = TRUE)) {
stop("Package rgdal could not be loaded", call. = FALSE)
}
sp::spTransform(obj, sp::CRS("+init=epsg:3005"))
}
#' @export
transform_bc_albers.sf <- function(obj) {
sf::st_transform(obj, 3005)
}
#' @export
transform_bc_albers.sfc <- transform_bc_albers.sf
#' Check and fix polygons that self-intersect, and sometimes can fix orphan holes
#'
#' For `sf` objects, uses `lwgeom::st_make_valid` if `lwgeom` is installed.
#' Otherwise, uses the common method of buffering by zero.
#'
#' `fix_self_intersect` has been removed and will no longer work. Use
#' `fix_geo_problems` instead
#'
#' @param obj The SpatialPolygons* or sf object to check/fix
#' @param tries The maximum number of attempts to repair the geometry.
#'
#' @return The SpatialPolygons* or sf object, repaired if necessary
#' @export
fix_geo_problems <- function(obj, tries = 5) {
UseMethod("fix_geo_problems")
}
#' @export
fix_geo_problems.Spatial <- function(obj, tries = 5) {
if (!requireNamespace("rgeos", quietly = TRUE)) {
stop("Package rgeos required but not available", call. = FALSE)
}
is_valid <- suppressWarnings(rgeos::gIsValid(obj))
if (is_valid) {
message("Geometry is valid")
return(obj)
}
## If not valid, repair. Try max tries times
i <- 1L
message("Problems found - Attempting to repair...")
while (i <= tries) {
message("Attempt ", i, " of ", tries)
obj <- rgeos::gBuffer(obj, byid = TRUE, width = 0)
is_valid <- suppressWarnings(rgeos::gIsValid(obj))
if (is_valid) {
message("Geometry is valid")
return(obj)
} else {
i <- i + 1
}
}
warning("Tried ", tries, " times but could not repair geometry")
obj
}
#' @export
fix_geo_problems.sf <- function(obj, tries = 5) {
## Check if the overall geomtry is valid, if it is, exit and return input
is_valid <- suppressWarnings(suppressMessages(sf::st_is_valid(obj)))
if (all(is_valid)) {
message("Geometry is valid")
return(obj)
}
message("Problems found - Attempting to repair...")
if (requireNamespace("lwgeom", quietly = TRUE)) {
return(lwgeom::st_make_valid(obj))
} else {
message("package lwgeom not available for the st_make_valid function, sf::st_buffer(dist = 0)")
i <- 1
while (i <= tries) { # Try three times
message("Attempt ", i, " of ", tries)
obj <- sf::st_buffer(obj, dist = 0)
is_valid <- suppressWarnings(suppressMessages(sf::st_is_valid(obj)))
if (all(is_valid)) {
message("Geometry is valid")
return(obj)
} else {
i <- i + 1
}
}
}
warning("tried ", tries, " times but could not repair all geometries")
obj
}
#' @export
fix_geo_problems.sfc <- fix_geo_problems.sf
#' Union a SpatialPolygons* object with itself to remove overlaps, while retaining attributes
#'
#' The IDs of source polygons are stored in a list-column called
#' `union_ids`, and original attributes (if present) are stored as nested
#' dataframes in a list-column called `union_df`
#'
#' @param x A `SpatialPolygons` or `SpatialPolygonsDataFrame` object
#'
#' @return A `SpatialPolygons` or `SpatialPolygonsDataFrame` object
#' @export
#'
#' @examples
#' if (require(sp)) {
#' p1 <- Polygon(cbind(c(2,4,4,1,2),c(2,3,5,4,2)))
#' p2 <- Polygon(cbind(c(5,4,3,2,5),c(2,3,3,2,2)))
#'
#' ps1 <- Polygons(list(p1), "s1")
#' ps2 <- Polygons(list(p2), "s2")
#'
#' spp <- SpatialPolygons(list(ps1,ps2), 1:2)
#'
#' df <- data.frame(a = c("A", "B"), b = c("foo", "bar"),
#' stringsAsFactors = FALSE)
#'
#' spdf <- SpatialPolygonsDataFrame(spp, df, match.ID = FALSE)
#'
#' plot(spdf, col = c(rgb(1, 0, 0,0.5), rgb(0, 0, 1,0.5)))
#'
#' unioned_spdf <- self_union(spdf)
#' unioned_sp <- self_union(spp)
#' }
self_union <- function(x) {
if (!inherits(x, "SpatialPolygons")) {
stop("x must be a SpatialPolygons or SpatialPolygonsDataFrame")
}
if (!requireNamespace("raster", quietly = TRUE)) {
stop("Package raster could not be loaded", call. = FALSE)
}
unioned <- raster_union(x)
unioned$union_ids <- get_unioned_ids(unioned)
export_cols <- c("union_count", "union_ids")
if (inherits(x, "SpatialPolygonsDataFrame")) {
unioned$union_df <- lapply(unioned$union_ids, function(y) x@data[y, ])
export_cols <- c(export_cols, "union_df")
}
names(unioned)[names(unioned) == "count"] <- "union_count"
unioned[, export_cols]
}
#' Modified raster::union method for a single SpatialPolygons(DataFrame)
#'
#' Modify raster::union to remove the expression:
#' if (!rgeos::gIntersects(x)) {
#' return(x)
#' }
#' As it throws an error:
#' Error in RGEOSBinPredFunc(spgeom1, spgeom2, byid, func) :
#' TopologyException: side location conflict
#'
#' @param x a single SpatialPolygons(DataFrame) object
#' @noRd
raster_union <- function(x) {
# First get the function (method)
f <- methods::getMethod("union", c("SpatialPolygons", "missing"))
# Find the offending block in the body, and replace it with NULL
the_prob <- which(grepl("!rgeos::gIntersects(x)", body(f), fixed = TRUE))
body(f)[[the_prob]] <- NULL
# Call the modified function with the input
f(x)
}
## For each new polygon in a SpatialPolygonsDataFrame that has been unioned with
## itself (raster::union(SPDF, missing)), get the original polygon ids that
## compose it
get_unioned_ids <- function(unioned_sp) {
id_cols <- grep("^ID\\.", names(unioned_sp@data))
unioned_sp_data <- as.matrix(unioned_sp@data[, id_cols])
colnames(unioned_sp_data) <- gsub("ID\\.", "", colnames(unioned_sp_data))
unioned_ids <- apply(unioned_sp_data, 1, function(i) {
as.numeric(colnames(unioned_sp_data)[i > 0])
})
names(unioned_ids) <- rownames(unioned_sp_data)
unioned_ids
}
#' Get or calculate the attribute of a list-column containing nested dataframes.
#'
#' For example, `self_union` produces a `SpatialPolygonsDataFrame`
#' that has a column called `union_df`, which contains a `data.frame`
#' for each polygon with the attributes from the constituent polygons.
#'
#' @param x the list-column in the (SpatialPolygons)DataFrame that contains nested data.frames
#' @param col the column in the nested data frames from which to retrieve/calculate attributes
#' @param fun function to determine the resulting single attribute from overlapping polygons
#' @param ... other parameters passed on to `fun`
#'
#' @return An atomic vector of the same length as x
#' @export
#'
#' @examples
#' if (require(sp)) {
#' p1 <- Polygon(cbind(c(2,4,4,1,2),c(2,3,5,4,2)))
#' p2 <- Polygon(cbind(c(5,4,3,2,5),c(2,3,3,2,2)))
#' ps1 <- Polygons(list(p1), "s1")
#' ps2 <- Polygons(list(p2), "s2")
#' spp <- SpatialPolygons(list(ps1,ps2), 1:2)
#' df <- data.frame(a = c(1, 2), b = c("foo", "bar"),
#' c = factor(c("high", "low"), ordered = TRUE,
#' levels = c("low", "high")),
#' stringsAsFactors = FALSE)
#' spdf <- SpatialPolygonsDataFrame(spp, df, match.ID = FALSE)
#' plot(spdf, col = c(rgb(1, 0, 0,0.5), rgb(0, 0, 1,0.5)))
#' unioned_spdf <- self_union(spdf)
#' get_poly_attribute(unioned_spdf$union_df, "a", sum)
#' get_poly_attribute(unioned_spdf$union_df, "c", max)
#' }
get_poly_attribute <- function(x, col, fun, ...) {
if (!inherits(x, "list")) stop("x must be a list, or list-column in a data frame")
if (!all(vapply(x, is.data.frame, logical(1)))) stop("x must be a list of data frames")
if (!col %in% names(x[[1]])) stop(col, " is not a column in the data frames in x")
if (!is.function(fun)) stop("fun must be a function")
test_data <- x[[1]][[col]]
return_type <- get_return_type(test_data)
is_fac <- FALSE
if (return_type == "factor") {
is_fac <- TRUE
lvls <- levels(test_data)
ordered <- is.ordered(test_data)
return_type <- "integer"
}
fun_value <- eval(call(return_type, 1))
ret <- vapply(x, function(y) {
fun(y[[col]], ...)
}, FUN.VALUE = fun_value)
if (is_fac) {
ret <- factor(lvls[ret], ordered = ordered, levels = lvls)
}
ret
}
get_return_type <- function(x) {
if (is.factor(x)) {
return_type <- "factor"
} else {
return_type <- typeof(x)
}
}
#' Combine Northern Rockies Regional Municipality with Regional Districts
#'
#' @inheritParams get_layer
#'
#' @return A layer where the Northern Rockies Regional Municipality has been
#' combined with the Regional Districts to form a full provincial coverage.
#' @export
combine_nr_rd <- function(class = c("sf", "sp")) {
class = match.arg(class)
rd <- get_layer("regional_districts", class = class)
mun <- get_layer("municipalities", class = class)
rbind(rd, mun[mun$ADMIN_AREA_ABBREVIATION == "NRRM",])
}
ask <- function(...) {
choices <- c("Yes", "No")
cat(paste0(..., collapse = ""))
utils::menu(choices) == which(choices == "Yes")
}
#' Biogeoclimatic Zone Colours
#'
#' Standard colours used to represent Biogeoclimatic Zone colours to be used in plotting.
#'
#' @return named vector of hexadecimal colour codes. Names are standard
#' abbreviations of Zone names.
#' @export
#'
#' @examples
#' \dontrun{
#' if (require("bcmaps.rdata") && #' require(sf) && require(ggplot2) &&
#' packageVersion("ggplot2") >= '2.2.1.9000') {
#' bec <- bec()
#' ggplot() +
#' geom_sf(data = bec[bec$ZONE %in% c("BG", "PP"),],
#' aes(fill = ZONE, col = ZONE)) +
#' scale_fill_manual(values = bec_colors()) +
#' scale_colour_manual(values = bec_colours())
#' }
#' }
bec_colours <- function() {
bec_colours <- c(BAFA = "#E5D8B1", SWB = "#A3D1AB", BWBS = "#ABE7FF",
ESSF = "#9E33D3", CMA = "#E5C7C7", SBS = "#2D8CBD",
MH = "#A599FF", CWH = "#208500", ICH = "#85A303",
IMA = "#B2B2B2", SBPS = "#36DEFC", MS = "#FF46A3",
IDF = "#FFCF00", BG = "#FF0000", PP = "#DE7D00",
CDF = "#FFFF00")
bec_colours[sort(names(bec_colours))]
}
#' @rdname bec_colours
#' @export
bec_colors <- bec_colours
|
#setting up ---------------
list.of.packages <- c("Rcpp","dplyr","RPostgreSQL","sqldf","shiny","DT","httr","rpivotTable")
new.packages <- list.of.packages[!(list.of.packages %in% installed.packages()[,"Package"])]
if(length(new.packages)) install.packages(new.packages)
lapply(list.of.packages, require, character.only = TRUE)
options(sqldf.driver = "SQLite")
username = ''
password = ''
setwd('~')
setwd('..')
setwd(paste(getwd(),'/Genomics England Ltd/GE-Samples Team - Team Folder/Omics Tracker files',sep=''))
temp = list.files(pattern="*.csv")
for (i in 1:length(temp)) assign(substr(temp[i],1,nchar(temp[i])-4), read.csv(temp[i]))
# #download data from confluence------------
# confluence_get_download<- function(link) {
# read.csv(text=content(httr::GET(paste('https://cnfl.extge.co.uk/download/attachments/135701046',link,sep=''),authenticate(username, password)),'text'))
# }
#
# confluence_get_page<- function(link) {
# content(httr::GET(paste('https://cnfl.extge.co.uk/rest/api/content/',link,sep=''),authenticate(username, password)))
# }
#
# a<-confluence_get_page('135701046/child/attachment')
#
# for(x in 1:length(a$results)){
# assign(substr(a$results[[x]]$title,1,nchar(a$results[[x]]$title)-4), confluence_get_download(a$results[[x]]$`_links`$download))
# }
count_aliquots<-sqldf("select pid,omics_in_labkey.'Sample.Type', count(pid) as aliquots from omics_in_labkey group by pid,omics_in_labkey.'Sample.Type'")
d<-sqldf("select distinct case when used_samples.lsid is not null then 'Yes' else 'No' end as Used, omics_in_labkey.'clinic.ID' as 'Clinic ID',omics_in_labkey.PID
as 'Participant ID', omics_in_labkey.LSID as 'Laboratory Sample ID',
omics_in_labkey.'Sample.Type' as 'Sample Type', disease_group as 'Disease Type',disease_sub_group as 'Disease Sub-type', aliquots as Aliquots
from omics_in_labkey
left join used_samples on omics_in_labkey.lsid = used_samples.lsid
left join disease_type on participant_id = omics_in_labkey.pid
left join count_aliquots on count_aliquots.PID = omics_in_labkey.PID and count_aliquots.'Sample.Type' = omics_in_labkey.'Sample.Type'
where omics_in_labkey.pid is not null
order by omics_in_labkey.PID,omics_in_labkey.'Sample.Type', omics_in_labkey.LSID
")
d$`Sample Type`<-as.factor(d$`Sample Type`)
d$`Disease Type`<-as.factor(d$`Disease Type`)
d$`Disease Sub-type`<-as.factor(d$`Disease Sub-type`)
d$`Participant ID`<- as.character(d$`Participant ID`)
d$Used<- as.factor(d$Used)
d$`Clinic ID`<- as.factor(d$`Clinic ID`)
used_samples$LSID<- as.character(used_samples$LSID)
used_samples$UKB.Dispatch.Date<- as.Date(used_samples$UKB.Dispatch.Date, format = "%d/%m/%Y", origin="1900-01-01")
used_samples$Child.LSID<- as.character(used_samples$Child.LSID)
used_samples$Child.Volume<- as.numeric(used_samples$Child.Volume)
used_samples$Child.Concentration<- as.numeric(used_samples$Child.Concentration)
used_samples$RIN<- as.numeric(used_samples$RIN)
used_samples$Average.Fragment.Size<- as.numeric(used_samples$Average.Fragment.Size)
used_samples$OD.Ratio<- as.numeric(used_samples$OD.Ratio)
#ui and server-------
ui <- navbarPage(
title = 'Omics Tracker',
tabPanel('All omics samples', DT::dataTableOutput('tab1')
),
tabPanel('Used samples', DT::dataTableOutput('tab2')
),
tabPanel('Samples with issues', DT::dataTableOutput('tab3')
),
tabPanel('Leftover DNA used', DT::dataTableOutput('tab4')
),
tabPanel('Pivot table of all omics', rpivotTableOutput('tab5')
)
)
server <- function(input, output) {
tab<- function(tabData) {DT::renderDataTable(
datatable( tabData, filter = 'top', extensions = c('Buttons'),
options = list(pageLength = 25,
dom = 'Brtip',
autowidth = TRUE,
columnDefs = list(list(className = 'dt-center', width = '2000px', targets = "_all")),
buttons = c('colvis','csv'))))}
output$tab1 <- tab(d)
output$tab2 <- tab(used_samples)
output$tab3 <- tab(samples_with_issues)
output$tab4 <- tab(leftover_dna_used)
output$tab5 <- rpivotTable::renderRpivotTable({
rpivotTable(data = d)
})
}
# Create Shiny app ----
shinyApp(ui, server) | /omics_app.R | no_license | JonathanWarrin/omicsTracker | R | false | false | 4,294 | r | #setting up ---------------
list.of.packages <- c("Rcpp","dplyr","RPostgreSQL","sqldf","shiny","DT","httr","rpivotTable")
new.packages <- list.of.packages[!(list.of.packages %in% installed.packages()[,"Package"])]
if(length(new.packages)) install.packages(new.packages)
lapply(list.of.packages, require, character.only = TRUE)
options(sqldf.driver = "SQLite")
username = ''
password = ''
setwd('~')
setwd('..')
setwd(paste(getwd(),'/Genomics England Ltd/GE-Samples Team - Team Folder/Omics Tracker files',sep=''))
temp = list.files(pattern="*.csv")
for (i in 1:length(temp)) assign(substr(temp[i],1,nchar(temp[i])-4), read.csv(temp[i]))
# #download data from confluence------------
# confluence_get_download<- function(link) {
# read.csv(text=content(httr::GET(paste('https://cnfl.extge.co.uk/download/attachments/135701046',link,sep=''),authenticate(username, password)),'text'))
# }
#
# confluence_get_page<- function(link) {
# content(httr::GET(paste('https://cnfl.extge.co.uk/rest/api/content/',link,sep=''),authenticate(username, password)))
# }
#
# a<-confluence_get_page('135701046/child/attachment')
#
# for(x in 1:length(a$results)){
# assign(substr(a$results[[x]]$title,1,nchar(a$results[[x]]$title)-4), confluence_get_download(a$results[[x]]$`_links`$download))
# }
count_aliquots<-sqldf("select pid,omics_in_labkey.'Sample.Type', count(pid) as aliquots from omics_in_labkey group by pid,omics_in_labkey.'Sample.Type'")
d<-sqldf("select distinct case when used_samples.lsid is not null then 'Yes' else 'No' end as Used, omics_in_labkey.'clinic.ID' as 'Clinic ID',omics_in_labkey.PID
as 'Participant ID', omics_in_labkey.LSID as 'Laboratory Sample ID',
omics_in_labkey.'Sample.Type' as 'Sample Type', disease_group as 'Disease Type',disease_sub_group as 'Disease Sub-type', aliquots as Aliquots
from omics_in_labkey
left join used_samples on omics_in_labkey.lsid = used_samples.lsid
left join disease_type on participant_id = omics_in_labkey.pid
left join count_aliquots on count_aliquots.PID = omics_in_labkey.PID and count_aliquots.'Sample.Type' = omics_in_labkey.'Sample.Type'
where omics_in_labkey.pid is not null
order by omics_in_labkey.PID,omics_in_labkey.'Sample.Type', omics_in_labkey.LSID
")
d$`Sample Type`<-as.factor(d$`Sample Type`)
d$`Disease Type`<-as.factor(d$`Disease Type`)
d$`Disease Sub-type`<-as.factor(d$`Disease Sub-type`)
d$`Participant ID`<- as.character(d$`Participant ID`)
d$Used<- as.factor(d$Used)
d$`Clinic ID`<- as.factor(d$`Clinic ID`)
used_samples$LSID<- as.character(used_samples$LSID)
used_samples$UKB.Dispatch.Date<- as.Date(used_samples$UKB.Dispatch.Date, format = "%d/%m/%Y", origin="1900-01-01")
used_samples$Child.LSID<- as.character(used_samples$Child.LSID)
used_samples$Child.Volume<- as.numeric(used_samples$Child.Volume)
used_samples$Child.Concentration<- as.numeric(used_samples$Child.Concentration)
used_samples$RIN<- as.numeric(used_samples$RIN)
used_samples$Average.Fragment.Size<- as.numeric(used_samples$Average.Fragment.Size)
used_samples$OD.Ratio<- as.numeric(used_samples$OD.Ratio)
#ui and server-------
ui <- navbarPage(
title = 'Omics Tracker',
tabPanel('All omics samples', DT::dataTableOutput('tab1')
),
tabPanel('Used samples', DT::dataTableOutput('tab2')
),
tabPanel('Samples with issues', DT::dataTableOutput('tab3')
),
tabPanel('Leftover DNA used', DT::dataTableOutput('tab4')
),
tabPanel('Pivot table of all omics', rpivotTableOutput('tab5')
)
)
server <- function(input, output) {
tab<- function(tabData) {DT::renderDataTable(
datatable( tabData, filter = 'top', extensions = c('Buttons'),
options = list(pageLength = 25,
dom = 'Brtip',
autowidth = TRUE,
columnDefs = list(list(className = 'dt-center', width = '2000px', targets = "_all")),
buttons = c('colvis','csv'))))}
output$tab1 <- tab(d)
output$tab2 <- tab(used_samples)
output$tab3 <- tab(samples_with_issues)
output$tab4 <- tab(leftover_dna_used)
output$tab5 <- rpivotTable::renderRpivotTable({
rpivotTable(data = d)
})
}
# Create Shiny app ----
shinyApp(ui, server) |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/cloudkms_objects.R
\name{SetIamPolicyRequest}
\alias{SetIamPolicyRequest}
\title{SetIamPolicyRequest Object}
\usage{
SetIamPolicyRequest(policy = NULL, updateMask = NULL)
}
\arguments{
\item{policy}{REQUIRED: The complete policy to be applied to the `resource`}
\item{updateMask}{OPTIONAL: A FieldMask specifying which fields of the policy to modify}
}
\value{
SetIamPolicyRequest object
}
\description{
SetIamPolicyRequest Object
}
\details{
Autogenerated via \code{\link[googleAuthR]{gar_create_api_objects}}
Request message for `SetIamPolicy` method.
}
| /googlecloudkmsv1beta1.auto/man/SetIamPolicyRequest.Rd | permissive | GVersteeg/autoGoogleAPI | R | false | true | 635 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/cloudkms_objects.R
\name{SetIamPolicyRequest}
\alias{SetIamPolicyRequest}
\title{SetIamPolicyRequest Object}
\usage{
SetIamPolicyRequest(policy = NULL, updateMask = NULL)
}
\arguments{
\item{policy}{REQUIRED: The complete policy to be applied to the `resource`}
\item{updateMask}{OPTIONAL: A FieldMask specifying which fields of the policy to modify}
}
\value{
SetIamPolicyRequest object
}
\description{
SetIamPolicyRequest Object
}
\details{
Autogenerated via \code{\link[googleAuthR]{gar_create_api_objects}}
Request message for `SetIamPolicy` method.
}
|
#__________________________________________________________________________________________________________
# This script is used to populate the MESH_parameters_CLASS.ini/.tpl and MESH_parameters_hydrology.ini/.tpl (.tpl files for calibration with Ostrich) from a .csv file containing a list of all the parameter values.
# Requirements:
# - MESH_parameters_CLASS.txt and MESH_parameters_hydrology.txt files containing the parameter codes (ex._FCAN-NL1_)
# for all the parameters for the number of GRUs desired
# NOTE: the default .txt files included with this script assume 3 soil layers, MID=1, and 10 GRUs; delete/add soil layers or GRUs as necessary and specify the start date and time of the meteorological data.
# - ParamValues.csv file with the following columns; columns 1, 2, and 4 must remain, but the remaining columns are for refernece information only and can be modified by the user.
# 1) Parameter code (which corresponds to the code used in the .txt file)
# 2) Parameter Value - to be written to the text file
# 3) GRU Number
# 4) Calibration (containing TRUE or FALSE; only used if the "Calibration" variable below is set to "TRUE")
# 5) Description (which can be used to describe the parameter, or justification for the value selected)
#__________________________________________________________________________________________________________
##### SET VALUES IN THIS SECTION #####
# Modify the .txt files according to the number of GRUs used, initial values, and met. data start time, and update the ParamValues.csv file with the parameter values (initial or static)
#Set the working directory to the location of the files
#setwd('C:/myfolder')
# Specify the file name containing the parameter values
ParamFile <- 'ParamValues_Point_OBS.csv'
Calibration <- TRUE
# TRUE for calibrating the model to only replace non-calibrated parameter values (where column 3=FALSE) and also generate .tpl files
# FALSE to replace all parameters with the values and only generate .ini files
#__________________________________________________________________________________________________________
#Load libraries
library(tidyverse)
library(dplyr)
#Create a tibble of the parameter ranges
best_pars <- read_csv(ParamFile, col_names=TRUE, skip_empty_rows = TRUE) #header=c("Parameter", "Value", "GRU", "Calibrate"))
best_pars <- filter(best_pars, is.na(Parameter)==FALSE)
#Read the template files into R
class_pars <- readLines("MESH_parameters_CLASS.txt")
hydro_pars <- readLines("MESH_parameters_hydrology.txt")
class_pars_ini <- class_pars
hydro_pars_ini <- hydro_pars
i <- 1
while(i<=nrow(best_pars)) {
class_pars_ini <- gsub(best_pars[[i,1]], best_pars[[i,2]], class_pars_ini)
hydro_pars_ini <- gsub(best_pars[[i,1]], best_pars[[i,2]], hydro_pars_ini)
i <- i+1
}
writeLines(class_pars_ini, con="MESH_parameters_CLASS.ini")
writeLines(hydro_pars_ini, con="MESH_parameters_hydrology.ini")
if (Calibration==TRUE){
best_pars_cal <- filter(best_pars, Calibrate==FALSE) #Filter out calibrated parameters to leave the parameter codes in to be read by Ostrich in the written file
class_pars_tpl <- class_pars
hydro_pars_tpl <- hydro_pars
i <- 1
while(i<=nrow(best_pars_cal)) {
class_pars_tpl <- gsub(best_pars_cal[[i,1]], best_pars_cal[[i,2]], class_pars_tpl)
hydro_pars_tpl <- gsub(best_pars_cal[[i,1]], best_pars_cal[[i,2]], hydro_pars_tpl)
i <- i+1
}
writeLines(class_pars_tpl, con="MESH_parameters_CLASS.tpl")
writeLines(hydro_pars_tpl, con="MESH_parameters_hydrology.tpl")
}
| /Code/Find_Replace_in_Text_File.R | no_license | MESH-Model/MESH_Whitegull | R | false | false | 3,670 | r | #__________________________________________________________________________________________________________
# This script is used to populate the MESH_parameters_CLASS.ini/.tpl and MESH_parameters_hydrology.ini/.tpl (.tpl files for calibration with Ostrich) from a .csv file containing a list of all the parameter values.
# Requirements:
# - MESH_parameters_CLASS.txt and MESH_parameters_hydrology.txt files containing the parameter codes (ex._FCAN-NL1_)
# for all the parameters for the number of GRUs desired
# NOTE: the default .txt files included with this script assume 3 soil layers, MID=1, and 10 GRUs; delete/add soil layers or GRUs as necessary and specify the start date and time of the meteorological data.
# - ParamValues.csv file with the following columns; columns 1, 2, and 4 must remain, but the remaining columns are for refernece information only and can be modified by the user.
# 1) Parameter code (which corresponds to the code used in the .txt file)
# 2) Parameter Value - to be written to the text file
# 3) GRU Number
# 4) Calibration (containing TRUE or FALSE; only used if the "Calibration" variable below is set to "TRUE")
# 5) Description (which can be used to describe the parameter, or justification for the value selected)
#__________________________________________________________________________________________________________
##### SET VALUES IN THIS SECTION #####
# Modify the .txt files according to the number of GRUs used, initial values, and met. data start time, and update the ParamValues.csv file with the parameter values (initial or static)
#Set the working directory to the location of the files
#setwd('C:/myfolder')
# Specify the file name containing the parameter values
ParamFile <- 'ParamValues_Point_OBS.csv'
Calibration <- TRUE
# TRUE for calibrating the model to only replace non-calibrated parameter values (where column 3=FALSE) and also generate .tpl files
# FALSE to replace all parameters with the values and only generate .ini files
#__________________________________________________________________________________________________________
#Load libraries
library(tidyverse)
library(dplyr)
#Create a tibble of the parameter ranges
best_pars <- read_csv(ParamFile, col_names=TRUE, skip_empty_rows = TRUE) #header=c("Parameter", "Value", "GRU", "Calibrate"))
best_pars <- filter(best_pars, is.na(Parameter)==FALSE)
#Read the template files into R
class_pars <- readLines("MESH_parameters_CLASS.txt")
hydro_pars <- readLines("MESH_parameters_hydrology.txt")
class_pars_ini <- class_pars
hydro_pars_ini <- hydro_pars
i <- 1
while(i<=nrow(best_pars)) {
class_pars_ini <- gsub(best_pars[[i,1]], best_pars[[i,2]], class_pars_ini)
hydro_pars_ini <- gsub(best_pars[[i,1]], best_pars[[i,2]], hydro_pars_ini)
i <- i+1
}
writeLines(class_pars_ini, con="MESH_parameters_CLASS.ini")
writeLines(hydro_pars_ini, con="MESH_parameters_hydrology.ini")
if (Calibration==TRUE){
best_pars_cal <- filter(best_pars, Calibrate==FALSE) #Filter out calibrated parameters to leave the parameter codes in to be read by Ostrich in the written file
class_pars_tpl <- class_pars
hydro_pars_tpl <- hydro_pars
i <- 1
while(i<=nrow(best_pars_cal)) {
class_pars_tpl <- gsub(best_pars_cal[[i,1]], best_pars_cal[[i,2]], class_pars_tpl)
hydro_pars_tpl <- gsub(best_pars_cal[[i,1]], best_pars_cal[[i,2]], hydro_pars_tpl)
i <- i+1
}
writeLines(class_pars_tpl, con="MESH_parameters_CLASS.tpl")
writeLines(hydro_pars_tpl, con="MESH_parameters_hydrology.tpl")
}
|
#' @title Congruence Index c according to Brown & Gore (1994)
#' @keywords congruence
#' @export con_brown_c_holland
#' @description This function computes an index od congruence according to Brown & Gore (1994).
#' @details The function finds the congruence according to Brown & Gore (1994) between the three-letter Holland-codes given in argument a, which is the person code, and argument b, which is the environment code. The Index is (currently) only defined for three letters from the Holland code. The degree of congruence is output, according to its definition by Brown & Gore (1994), as a reciprocal value of a distance. This means, for example, that a value of '18' is the result for a perfect fit !
#' @param a a character vector with person Holland codes.
#' @param b a character vector with environment Holland codes.
#' @return a numeric with value for congruence.
#' @references Brown & Gore (1994). An Evaluation of interest congruence indices: Distribution Characteristics and Measurement Properties. \emph{Journal of Vocational Behaviour, 45}, 310-327.
#' @examples
#' con_brown_c_holland(a="RIA",b="SEC") # max. difference
#' con_brown_c_holland(a="RIA",b="RIA") # max. similarity
################################################################################
con_brown_c_holland<-function(a,b){
# kongruenzindex nach Brown & Gore (1994)
# zur Bestimmung der Übereinstimmung für vector a = Personencodes mit
# dem Vector b = Umweltcode diese Reihenfolge ist bei der Eingabe W I C H T I G !!!
# (vgl. Brown & Gore 1994. An Evaluation of interest congruence Indicees: Distribution Characteristics and Measurement Properties. Journal of Vocational Behaviour, 45, 310-327. )
#func. by: jhheine@googlemail.com
a <- toupper(unlist(strsplit(a,split = "",fixed = TRUE))) # um auch z.B. "RIA" eingeben zu können
b <- toupper(unlist(strsplit(b,split = "",fixed = TRUE))) # um auch z.B. "RIA" eingeben zu können
if(length(a) != length(b)) stop("a and b must have the same number of characters in Holland-code")
if(length(a) > 3 ) stop("a Brown-Index is only defined for three-letter Holland-code")
### Hilfsfunktion
einsbis6<-function(n){temp<- n%%6 ; if(temp==0){temp<-6} ;return(temp)}
### Ende Hilfsfunktion
ria<-c("R","I","A","S","E","C")
### eigentliche Brown index Function
brw<-function (a,b){
gesindex<-0
for (i in 1:3){
p<-a[i]; u<-b[i]
index<-0
if(p==u){index<-3}
wo<-which(ria==p)
if(u==ria[einsbis6(wo-1)] | u==ria[einsbis6(wo+1)]){ index<-2 }
if(u==ria[einsbis6(wo-2)] | u==ria[einsbis6(wo+2)]){ index<-1 }
gesindex<-gesindex+index*(4-i)
}
return(gesindex)
}
### Ende eigentliche Brown Funktion
erg<-brw(a = a, b = b)
return (erg)
}
| /R/con_brown_c_holland.R | no_license | cran/holland | R | false | false | 2,755 | r | #' @title Congruence Index c according to Brown & Gore (1994)
#' @keywords congruence
#' @export con_brown_c_holland
#' @description This function computes an index od congruence according to Brown & Gore (1994).
#' @details The function finds the congruence according to Brown & Gore (1994) between the three-letter Holland-codes given in argument a, which is the person code, and argument b, which is the environment code. The Index is (currently) only defined for three letters from the Holland code. The degree of congruence is output, according to its definition by Brown & Gore (1994), as a reciprocal value of a distance. This means, for example, that a value of '18' is the result for a perfect fit !
#' @param a a character vector with person Holland codes.
#' @param b a character vector with environment Holland codes.
#' @return a numeric with value for congruence.
#' @references Brown & Gore (1994). An Evaluation of interest congruence indices: Distribution Characteristics and Measurement Properties. \emph{Journal of Vocational Behaviour, 45}, 310-327.
#' @examples
#' con_brown_c_holland(a="RIA",b="SEC") # max. difference
#' con_brown_c_holland(a="RIA",b="RIA") # max. similarity
################################################################################
con_brown_c_holland<-function(a,b){
# kongruenzindex nach Brown & Gore (1994)
# zur Bestimmung der Übereinstimmung für vector a = Personencodes mit
# dem Vector b = Umweltcode diese Reihenfolge ist bei der Eingabe W I C H T I G !!!
# (vgl. Brown & Gore 1994. An Evaluation of interest congruence Indicees: Distribution Characteristics and Measurement Properties. Journal of Vocational Behaviour, 45, 310-327. )
#func. by: jhheine@googlemail.com
a <- toupper(unlist(strsplit(a,split = "",fixed = TRUE))) # um auch z.B. "RIA" eingeben zu können
b <- toupper(unlist(strsplit(b,split = "",fixed = TRUE))) # um auch z.B. "RIA" eingeben zu können
if(length(a) != length(b)) stop("a and b must have the same number of characters in Holland-code")
if(length(a) > 3 ) stop("a Brown-Index is only defined for three-letter Holland-code")
### Hilfsfunktion
einsbis6<-function(n){temp<- n%%6 ; if(temp==0){temp<-6} ;return(temp)}
### Ende Hilfsfunktion
ria<-c("R","I","A","S","E","C")
### eigentliche Brown index Function
brw<-function (a,b){
gesindex<-0
for (i in 1:3){
p<-a[i]; u<-b[i]
index<-0
if(p==u){index<-3}
wo<-which(ria==p)
if(u==ria[einsbis6(wo-1)] | u==ria[einsbis6(wo+1)]){ index<-2 }
if(u==ria[einsbis6(wo-2)] | u==ria[einsbis6(wo+2)]){ index<-1 }
gesindex<-gesindex+index*(4-i)
}
return(gesindex)
}
### Ende eigentliche Brown Funktion
erg<-brw(a = a, b = b)
return (erg)
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/cov4gappy.R
\name{cov4gappy}
\alias{cov4gappy}
\title{Covariance matrix calculation for gappy data}
\usage{
cov4gappy(F1, F2 = NULL)
}
\arguments{
\item{F1}{A data field.}
\item{F2}{An optional 2nd data field.}
}
\value{
A matrix with covariances between columns of \code{F1}.
If both \code{F1} and \code{F2} are provided, then the covariances
between columns of \code{F1} and the columns of \code{F2} are returned.
}
\description{
This function calculates a covoriance matrix for data that contain
missing values ('gappy data').
}
\details{
This function gives comparable results to \code{cov(F1, y=F2, use="pairwise.complete.obs")}
whereby each covariance value is divided by n number of shared values (as opposed
to n-1 in the case of \code{cov()}. Futhermore, the function will return a 0 (zero) in
cases where no shared values exist between columns; the advantage being that a
covariance matrix will still be calculated in cases of very gappy data, or when
spatial locations have accidentally been included without observations (i.e. land
in fields of aquatic-related parameters).
}
\examples{
# Create synthetic data
set.seed(1)
mat <- matrix(rnorm(500, sd=10), nrow=50, ncol=10)
matg <- mat
matg[sample(length(mat), 0.5*length(mat))] <- NaN # Makes 50\% missing values
matg # gappy matrix
# Calculate covariance matrix and compare to 'cov' function output
c1 <- cov4gappy(matg)
c2 <- cov(matg, use="pairwise.complete.obs")
plot(c1,c2, main="covariance comparison", xlab="cov4gappy", ylab="cov")
abline(0,1,col=8)
}
\keyword{EOF}
\keyword{PCA}
\keyword{covariance}
\keyword{gappy}
| /man/cov4gappy.Rd | no_license | ValentinLouis/sinkr | R | false | true | 1,678 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/cov4gappy.R
\name{cov4gappy}
\alias{cov4gappy}
\title{Covariance matrix calculation for gappy data}
\usage{
cov4gappy(F1, F2 = NULL)
}
\arguments{
\item{F1}{A data field.}
\item{F2}{An optional 2nd data field.}
}
\value{
A matrix with covariances between columns of \code{F1}.
If both \code{F1} and \code{F2} are provided, then the covariances
between columns of \code{F1} and the columns of \code{F2} are returned.
}
\description{
This function calculates a covoriance matrix for data that contain
missing values ('gappy data').
}
\details{
This function gives comparable results to \code{cov(F1, y=F2, use="pairwise.complete.obs")}
whereby each covariance value is divided by n number of shared values (as opposed
to n-1 in the case of \code{cov()}. Futhermore, the function will return a 0 (zero) in
cases where no shared values exist between columns; the advantage being that a
covariance matrix will still be calculated in cases of very gappy data, or when
spatial locations have accidentally been included without observations (i.e. land
in fields of aquatic-related parameters).
}
\examples{
# Create synthetic data
set.seed(1)
mat <- matrix(rnorm(500, sd=10), nrow=50, ncol=10)
matg <- mat
matg[sample(length(mat), 0.5*length(mat))] <- NaN # Makes 50\% missing values
matg # gappy matrix
# Calculate covariance matrix and compare to 'cov' function output
c1 <- cov4gappy(matg)
c2 <- cov(matg, use="pairwise.complete.obs")
plot(c1,c2, main="covariance comparison", xlab="cov4gappy", ylab="cov")
abline(0,1,col=8)
}
\keyword{EOF}
\keyword{PCA}
\keyword{covariance}
\keyword{gappy}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/get_inv_col.R
\name{get_inv_col}
\alias{get_inv_col}
\title{Color-codes dB values}
\usage{
get_inv_col(db)
}
\arguments{
\item{db}{A number of sensitivity threshold in dB}
}
\value{
either white or black color for a visual field location
}
\description{
\code{get_inv_col} returns the color of a visual field location given its dB value, (e.g., white < 15dB, black > 15dB)
}
\examples{
get_inv_col(25)
}
| /man/get_inv_col.Rd | no_license | cran/binovisualfields | R | false | true | 482 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/get_inv_col.R
\name{get_inv_col}
\alias{get_inv_col}
\title{Color-codes dB values}
\usage{
get_inv_col(db)
}
\arguments{
\item{db}{A number of sensitivity threshold in dB}
}
\value{
either white or black color for a visual field location
}
\description{
\code{get_inv_col} returns the color of a visual field location given its dB value, (e.g., white < 15dB, black > 15dB)
}
\examples{
get_inv_col(25)
}
|
testlist <- list(a = 9.98246016032383e-316, b = 0)
result <- do.call(BayesMRA::rmvn_arma_scalar,testlist)
str(result) | /BayesMRA/inst/testfiles/rmvn_arma_scalar/AFL_rmvn_arma_scalar/rmvn_arma_scalar_valgrind_files/1615926083-test.R | no_license | akhikolla/updatedatatype-list1 | R | false | false | 117 | r | testlist <- list(a = 9.98246016032383e-316, b = 0)
result <- do.call(BayesMRA::rmvn_arma_scalar,testlist)
str(result) |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/standard_igfet.R
\name{igfet_zscore2value}
\alias{igfet_zscore2value}
\alias{igfet_centile2value}
\title{Convert INTERGROWTH z-scores/centiles to fetal ultrasound measurements (generic)}
\usage{
igfet_zscore2value(gagedays, z = 0, var = c("hccm", "bpdcm", "ofdcm",
"accm", "flcm"))
igfet_centile2value(gagedays, p = 50, var = c("hccm", "bpdcm", "ofdcm",
"accm", "flcm"))
}
\arguments{
\item{gagedays}{gestational age in days}
\item{z}{z-score(s) to convert}
\item{var}{the name of the measurement to convert ("hccm", "bpdcm", "ofdcm", "accm", "flcm")}
\item{p}{centile(s) to convert (must be between 0 and 100)}
}
\description{
Convert INTERGROWTH z-scores/centiles to fetal ultrasound measurements (generic)
}
\examples{
# get value for median head circumference for child at 100 gestational days
igfet_centile2value(100, 50, var = "hccm")
}
\references{
International standards for fetal growth based on serial ultrasound measurements: the Fetal Growth Longitudinal Study of the INTERGROWTH-21st Project
Papageorghiou, Aris T et al.
The Lancet, Volume 384, Issue 9946, 869-879
}
| /man/igfet_zscore2value.Rd | permissive | hathawayj/growthstandards | R | false | true | 1,167 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/standard_igfet.R
\name{igfet_zscore2value}
\alias{igfet_zscore2value}
\alias{igfet_centile2value}
\title{Convert INTERGROWTH z-scores/centiles to fetal ultrasound measurements (generic)}
\usage{
igfet_zscore2value(gagedays, z = 0, var = c("hccm", "bpdcm", "ofdcm",
"accm", "flcm"))
igfet_centile2value(gagedays, p = 50, var = c("hccm", "bpdcm", "ofdcm",
"accm", "flcm"))
}
\arguments{
\item{gagedays}{gestational age in days}
\item{z}{z-score(s) to convert}
\item{var}{the name of the measurement to convert ("hccm", "bpdcm", "ofdcm", "accm", "flcm")}
\item{p}{centile(s) to convert (must be between 0 and 100)}
}
\description{
Convert INTERGROWTH z-scores/centiles to fetal ultrasound measurements (generic)
}
\examples{
# get value for median head circumference for child at 100 gestational days
igfet_centile2value(100, 50, var = "hccm")
}
\references{
International standards for fetal growth based on serial ultrasound measurements: the Fetal Growth Longitudinal Study of the INTERGROWTH-21st Project
Papageorghiou, Aris T et al.
The Lancet, Volume 384, Issue 9946, 869-879
}
|
mod.lt <- function(child.value, child.mort=4, e0.target=NULL, adult.mort=NULL, sex="female", alpha=0){
#data(MLTobs)
class <- hmd.DA(x=child.value, sex=sex, child.mort=child.mort, adult.mort=adult.mort)
class <- as.numeric(class$classification)
# If e0.target is provided (not null), then get alpha from that
# If e0.target is NULL (default), use alpha
a.out <- if (is.null(e0.target)) alpha else alpha.e0(pattern=class, e0.target=e0.target, sex=sex)
mx.out <- mortmod(pattern=class, alpha=a.out, sex=sex)
lt.out <- lt.mx(nmx=exp(mx.out), sex=sex)
return(structure(c(lt.out, list(alpha=a.out, sex=sex, family=class)), class='LifeTable'))
} | /LifeTables/R/mod.lt.R | no_license | ingted/R-Examples | R | false | false | 653 | r | mod.lt <- function(child.value, child.mort=4, e0.target=NULL, adult.mort=NULL, sex="female", alpha=0){
#data(MLTobs)
class <- hmd.DA(x=child.value, sex=sex, child.mort=child.mort, adult.mort=adult.mort)
class <- as.numeric(class$classification)
# If e0.target is provided (not null), then get alpha from that
# If e0.target is NULL (default), use alpha
a.out <- if (is.null(e0.target)) alpha else alpha.e0(pattern=class, e0.target=e0.target, sex=sex)
mx.out <- mortmod(pattern=class, alpha=a.out, sex=sex)
lt.out <- lt.mx(nmx=exp(mx.out), sex=sex)
return(structure(c(lt.out, list(alpha=a.out, sex=sex, family=class)), class='LifeTable'))
} |
#This will setup the directory for the assignent, It will create directory, download the files from the web
# and will unzip the files to be use for the analysis
setwd("F:/Shared/Drive/AnjaliS/Coursera/ExpData/Week4")
mainDir<-getwd()
subDir<-"Course4Assignment2"
if (file.exists(subDir)){
setwd(file.path(mainDir, subDir))
} else {
dir.create(file.path(mainDir, subDir))
setwd(file.path(mainDir, subDir))
}
#download the file and unzip into created folder
mDir<-paste(getwd(),"/Data_for_Peer_Assessment.zip",sep = "")
url<-"https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2FNEI_data.zip"
if (!file.exists(mDir)){
download.file(url, dest="Data_for_Peer_Assessment.zip", mode="wb")
}
unzip ("Data_for_Peer_Assessment.zip", exdir=getwd())
#End
library(ggplot2)
#read the data from the zip file
NEI <- readRDS("summarySCC_PM25.rds")
SCC <- readRDS("Source_Classification_Code.rds")
#Take the subset of coal realted data from scc and NEI
Coal<-SCC[grep("coal",SCC$EI.Sector,ignore.case = TRUE),]
Coal<-subset(NEI, SCC %in% Coal$SCC)
#Aggregate the data by year
TotalPM<-aggregate(Coal$Emissions,by=list(Coal$year),sum)
names(TotalPM)<-c("year","Emissions")
#Plotting
png("plot4.png", width=480, height=480)
g<-ggplot(TotalPM,aes(year,Emissions))
g<-g+geom_point() #marking points
g<-g+geom_line(stat = "identity")+ labs(y="Emissions from Coal related sources ",x="Year (1999,2002,2005,2008)")
#beautifying the plot
g<-g+theme_bw() +ggtitle("Coal combustion-related sources")+theme(
plot.title = element_text(color="red", size=12, face="bold.italic"),
axis.title.x = element_text(color="#993333", size=10, face="bold"),
axis.title.y = element_text(color="#993333", size=10, face="bold")
)
g
dev.off()
| /plot4.R | no_license | anzy9/ExData-Plotting2 | R | false | false | 1,774 | r | #This will setup the directory for the assignent, It will create directory, download the files from the web
# and will unzip the files to be use for the analysis
setwd("F:/Shared/Drive/AnjaliS/Coursera/ExpData/Week4")
mainDir<-getwd()
subDir<-"Course4Assignment2"
if (file.exists(subDir)){
setwd(file.path(mainDir, subDir))
} else {
dir.create(file.path(mainDir, subDir))
setwd(file.path(mainDir, subDir))
}
#download the file and unzip into created folder
mDir<-paste(getwd(),"/Data_for_Peer_Assessment.zip",sep = "")
url<-"https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2FNEI_data.zip"
if (!file.exists(mDir)){
download.file(url, dest="Data_for_Peer_Assessment.zip", mode="wb")
}
unzip ("Data_for_Peer_Assessment.zip", exdir=getwd())
#End
library(ggplot2)
#read the data from the zip file
NEI <- readRDS("summarySCC_PM25.rds")
SCC <- readRDS("Source_Classification_Code.rds")
#Take the subset of coal realted data from scc and NEI
Coal<-SCC[grep("coal",SCC$EI.Sector,ignore.case = TRUE),]
Coal<-subset(NEI, SCC %in% Coal$SCC)
#Aggregate the data by year
TotalPM<-aggregate(Coal$Emissions,by=list(Coal$year),sum)
names(TotalPM)<-c("year","Emissions")
#Plotting
png("plot4.png", width=480, height=480)
g<-ggplot(TotalPM,aes(year,Emissions))
g<-g+geom_point() #marking points
g<-g+geom_line(stat = "identity")+ labs(y="Emissions from Coal related sources ",x="Year (1999,2002,2005,2008)")
#beautifying the plot
g<-g+theme_bw() +ggtitle("Coal combustion-related sources")+theme(
plot.title = element_text(color="red", size=12, face="bold.italic"),
axis.title.x = element_text(color="#993333", size=10, face="bold"),
axis.title.y = element_text(color="#993333", size=10, face="bold")
)
g
dev.off()
|
# Deena M.A. Gendoo
# October 26, 2015
# Code to generate stacked barplot of MM2S predictions, per sample
# DISCLAIMER:
# MM2S package (and its code components) is provided "AS-IS" and without any warranty of any kind.
# In no event shall the University Health Network (UHN) or the authors be liable for any consequential damage of any kind,
# or any damages resulting from the use of this MM2S.
#################################################################################
#################################################################################
PredictionsDistributionBoxplot<-function(InputMatrix,pdf_output,pdfheight,pdfwidth)
{
colorscheme<-c("#fccde5","#ccebc5","#80b1d3","#ffffb3","#fb8072")
if(is.logical(pdf_output))
{
TrueCounts<-apply(InputMatrix$Predictions,2,function(x){sum(as.numeric(x)>0)})
if(pdf_output==TRUE)
{
if((is.numeric(pdfheight))&&(is.numeric(pdfwidth)))
{
pdf(file="MM2S_OverallPredictions_BoxplotDist.pdf",height=pdfheight,width=pdfwidth)
boxplot(InputMatrix$Predictions,las=2,ylab="Prediction Strength (%)",
names=paste(colnames(InputMatrix$Predictions), " (n=", TrueCounts,")",sep=""),col=colorscheme,cex.axis=0.5,
main=paste("Distribution of MB Subtype Calls Across ",nrow(InputMatrix$Predictions)," Samples",sep=""))
dev.off()
}
else
{message("PDF dimensions must be numeric")
stop()}
}
}
else {
message("TRUE or FALSE needed for PDF output")
stop()}
boxplot(InputMatrix$Predictions,las=2,ylab="Prediction Strength (%)",
names=paste(colnames(InputMatrix$Predictions), " (n=", TrueCounts,")",sep=""),col=colorscheme,cex.axis=0.5,
main=paste("Distribution of MB Subtype Calls Across ",nrow(InputMatrix$Predictions)," Samples",sep=""))
}
| /R/PredictionsDistributionBoxplot.R | no_license | bhklab/MM2S | R | false | false | 1,912 | r | # Deena M.A. Gendoo
# October 26, 2015
# Code to generate stacked barplot of MM2S predictions, per sample
# DISCLAIMER:
# MM2S package (and its code components) is provided "AS-IS" and without any warranty of any kind.
# In no event shall the University Health Network (UHN) or the authors be liable for any consequential damage of any kind,
# or any damages resulting from the use of this MM2S.
#################################################################################
#################################################################################
PredictionsDistributionBoxplot<-function(InputMatrix,pdf_output,pdfheight,pdfwidth)
{
colorscheme<-c("#fccde5","#ccebc5","#80b1d3","#ffffb3","#fb8072")
if(is.logical(pdf_output))
{
TrueCounts<-apply(InputMatrix$Predictions,2,function(x){sum(as.numeric(x)>0)})
if(pdf_output==TRUE)
{
if((is.numeric(pdfheight))&&(is.numeric(pdfwidth)))
{
pdf(file="MM2S_OverallPredictions_BoxplotDist.pdf",height=pdfheight,width=pdfwidth)
boxplot(InputMatrix$Predictions,las=2,ylab="Prediction Strength (%)",
names=paste(colnames(InputMatrix$Predictions), " (n=", TrueCounts,")",sep=""),col=colorscheme,cex.axis=0.5,
main=paste("Distribution of MB Subtype Calls Across ",nrow(InputMatrix$Predictions)," Samples",sep=""))
dev.off()
}
else
{message("PDF dimensions must be numeric")
stop()}
}
}
else {
message("TRUE or FALSE needed for PDF output")
stop()}
boxplot(InputMatrix$Predictions,las=2,ylab="Prediction Strength (%)",
names=paste(colnames(InputMatrix$Predictions), " (n=", TrueCounts,")",sep=""),col=colorscheme,cex.axis=0.5,
main=paste("Distribution of MB Subtype Calls Across ",nrow(InputMatrix$Predictions)," Samples",sep=""))
}
|
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/functioncollection_import.R
\name{ReadAquiferData}
\alias{ReadAquiferData}
\title{Read an 'AquiferData.txt' file}
\usage{
ReadAquiferData(filename = "AquiferData.txt", sep = "\\t")
}
\arguments{
\item{filename}{Path to and file name of the AquiferData file to import. Windows users: Note that
Paths are separated by '/', not '\\'.}
\item{sep}{Character string, field separator as in \code{\link{read.table}}.}
}
\value{
\code{ReadAquiferData} returns a data frame.
}
\description{
This is a convenience wrapper function to import a HYPE AquiferData file as data frame into R.
}
\details{
\code{ReadAquiferData} is a simple \code{\link{read.table}} wrapper, mainly added to provide a comparable
function to other RHYPE import functions. Will check for \code{NA} values in imported data and return a warning if any are found.
HYPE requires \code{NA}-free input in required 'AquiferData.txt' columns, but empty values are allowed in comment columns
which are not read.
}
\examples{
\dontrun{ReadAquiferData("../myhype/AquiferData.txt")}
}
| /man/ReadAquiferData.Rd | no_license | wstolte/RHYPE | R | false | false | 1,124 | rd | % Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/functioncollection_import.R
\name{ReadAquiferData}
\alias{ReadAquiferData}
\title{Read an 'AquiferData.txt' file}
\usage{
ReadAquiferData(filename = "AquiferData.txt", sep = "\\t")
}
\arguments{
\item{filename}{Path to and file name of the AquiferData file to import. Windows users: Note that
Paths are separated by '/', not '\\'.}
\item{sep}{Character string, field separator as in \code{\link{read.table}}.}
}
\value{
\code{ReadAquiferData} returns a data frame.
}
\description{
This is a convenience wrapper function to import a HYPE AquiferData file as data frame into R.
}
\details{
\code{ReadAquiferData} is a simple \code{\link{read.table}} wrapper, mainly added to provide a comparable
function to other RHYPE import functions. Will check for \code{NA} values in imported data and return a warning if any are found.
HYPE requires \code{NA}-free input in required 'AquiferData.txt' columns, but empty values are allowed in comment columns
which are not read.
}
\examples{
\dontrun{ReadAquiferData("../myhype/AquiferData.txt")}
}
|
#' Verify that a channel of an HDF5 RGChannelSet matches its in-core twin.
#'
#' FIXME: allow for extended channels
#'
#' @param ch the channel name ("Red" or "Green")
#' @param ram the in-core RGChannelSet
#' @param hdf5 the hdf5-backed RGChannelSet
#' @param rows how many rows to test? (default: ncol(ram))
#'
#' @return a logical of length one
#'
#' @seealso read.methdf5
#'
#' @export
verifyChannel <- function(ch=c("Red","Green"), ram, hdf5, rows=NULL) {
j <- seq(1, ncol(ram))
if (is.null(rows)) {
i <- j
} else {
i <- seq(1, rows)
}
ch <- match.arg(ch)
fn <- ifelse(ch == "Red", minfi::getRed, minfi::getGreen)
identical(as.matrix(fn(hdf5[i, j])), fn(ram[i, j]))
}
| /R/verifyChannel.R | no_license | trichelab/h5testR | R | false | false | 727 | r | #' Verify that a channel of an HDF5 RGChannelSet matches its in-core twin.
#'
#' FIXME: allow for extended channels
#'
#' @param ch the channel name ("Red" or "Green")
#' @param ram the in-core RGChannelSet
#' @param hdf5 the hdf5-backed RGChannelSet
#' @param rows how many rows to test? (default: ncol(ram))
#'
#' @return a logical of length one
#'
#' @seealso read.methdf5
#'
#' @export
verifyChannel <- function(ch=c("Red","Green"), ram, hdf5, rows=NULL) {
j <- seq(1, ncol(ram))
if (is.null(rows)) {
i <- j
} else {
i <- seq(1, rows)
}
ch <- match.arg(ch)
fn <- ifelse(ch == "Red", minfi::getRed, minfi::getGreen)
identical(as.matrix(fn(hdf5[i, j])), fn(ram[i, j]))
}
|
# Interpolate Class
#
# Author: rsheftel
###############################################################################
qf.interpolate <- function(searchValue, searchVector, resultVector){
# Returns the interpolated value from the resultVector that
# is interpolated based on the searchValue in the searchVector
# Both vectors must be sorted with values from min to max
# The extrapolation is done on a flat line constant from the edge value
# NA are handled by removing those columns from the search and result
#remove all columns with NA in search or result vector
goodCols <- !is.na(searchVector) & !is.na(resultVector)
searchVector <- searchVector[goodCols]
resultVector <- resultVector[goodCols]
#Find the high and low index
indexLow <- ifelse(searchValue <= searchVector[1],
1,
max(which(searchVector <= searchValue)))
indexHigh <- ifelse(searchValue >= searchVector[length(searchVector)],
length(searchVector),
min(which(searchVector >= searchValue)))
if(!is.na(indexLow) && !is.na(indexHigh)){
if(indexLow==indexHigh)
return(resultVector[indexLow])
else {
return(resultVector[indexLow] + (resultVector[indexHigh] - resultVector[indexLow]) *
((searchValue - searchVector[indexLow])/(searchVector[indexHigh] - searchVector[indexLow])))
}}
else
return(NA)
}
qf.interpolateVector <- function(searchValues, searchVector, resultVector){
sapply(searchValues, function(x) qf.interpolate(x,searchVector, resultVector))
}
qf.interpolateXY <- function(searchValueY, searchValueX, searchVectorY, searchVectorX, resultMatrix){
#Returns the interpolated value the the resultMatrix that
# is interpolated on the X and Y dimensions
# The Y vector are the rows and the X is the columns, in R standard it is [Y,X] for the matrix element
indexLowX <- ifelse(searchValueX <= searchVectorX[1],
1,
max(which(searchVectorX <= searchValueX)))
indexHighX <- ifelse(searchValueX >= searchVectorX[length(searchVectorX)],
length(searchVectorX),
min(which(searchVectorX >= searchValueX)))
indexLowY <- ifelse(searchValueY <= searchVectorY[1],
1,
max(which(searchVectorY <= searchValueY)))
indexHighY <- ifelse(searchValueY >= searchVectorY[length(searchVectorY)],
length(searchVectorY),
min(which(searchVectorY >= searchValueY)))
weightHighX <- ifelse(indexLowX == indexHighX, 1,
(searchValueX - searchVectorX[indexLowX]) / (searchVectorX[indexHighX] - searchVectorX[indexLowX]))
weightLowX <- (1 - weightHighX)
weightHighY <- ifelse(indexLowY == indexHighY, 1,
(searchValueY - searchVectorY[indexLowY]) / (searchVectorY[indexHighY] - searchVectorY[indexLowY]))
weightLowY <- (1 - weightHighY)
result <- (weightHighX * weightHighY) * resultMatrix[indexHighY, indexHighX]
result <- ((weightHighX * weightLowY) * resultMatrix[indexLowY, indexHighX]) + result
result <- ((weightLowX * weightHighY) * resultMatrix[indexHighY, indexLowX]) + result
result <- ((weightLowX * weightLowY) * resultMatrix[indexLowY, indexLowX]) + result
return(result)
}
| /R/src/QFMath/R/Interpolate.R | no_license | rsheftel/ratel | R | false | false | 3,094 | r | # Interpolate Class
#
# Author: rsheftel
###############################################################################
qf.interpolate <- function(searchValue, searchVector, resultVector){
# Returns the interpolated value from the resultVector that
# is interpolated based on the searchValue in the searchVector
# Both vectors must be sorted with values from min to max
# The extrapolation is done on a flat line constant from the edge value
# NA are handled by removing those columns from the search and result
#remove all columns with NA in search or result vector
goodCols <- !is.na(searchVector) & !is.na(resultVector)
searchVector <- searchVector[goodCols]
resultVector <- resultVector[goodCols]
#Find the high and low index
indexLow <- ifelse(searchValue <= searchVector[1],
1,
max(which(searchVector <= searchValue)))
indexHigh <- ifelse(searchValue >= searchVector[length(searchVector)],
length(searchVector),
min(which(searchVector >= searchValue)))
if(!is.na(indexLow) && !is.na(indexHigh)){
if(indexLow==indexHigh)
return(resultVector[indexLow])
else {
return(resultVector[indexLow] + (resultVector[indexHigh] - resultVector[indexLow]) *
((searchValue - searchVector[indexLow])/(searchVector[indexHigh] - searchVector[indexLow])))
}}
else
return(NA)
}
qf.interpolateVector <- function(searchValues, searchVector, resultVector){
sapply(searchValues, function(x) qf.interpolate(x,searchVector, resultVector))
}
qf.interpolateXY <- function(searchValueY, searchValueX, searchVectorY, searchVectorX, resultMatrix){
#Returns the interpolated value the the resultMatrix that
# is interpolated on the X and Y dimensions
# The Y vector are the rows and the X is the columns, in R standard it is [Y,X] for the matrix element
indexLowX <- ifelse(searchValueX <= searchVectorX[1],
1,
max(which(searchVectorX <= searchValueX)))
indexHighX <- ifelse(searchValueX >= searchVectorX[length(searchVectorX)],
length(searchVectorX),
min(which(searchVectorX >= searchValueX)))
indexLowY <- ifelse(searchValueY <= searchVectorY[1],
1,
max(which(searchVectorY <= searchValueY)))
indexHighY <- ifelse(searchValueY >= searchVectorY[length(searchVectorY)],
length(searchVectorY),
min(which(searchVectorY >= searchValueY)))
weightHighX <- ifelse(indexLowX == indexHighX, 1,
(searchValueX - searchVectorX[indexLowX]) / (searchVectorX[indexHighX] - searchVectorX[indexLowX]))
weightLowX <- (1 - weightHighX)
weightHighY <- ifelse(indexLowY == indexHighY, 1,
(searchValueY - searchVectorY[indexLowY]) / (searchVectorY[indexHighY] - searchVectorY[indexLowY]))
weightLowY <- (1 - weightHighY)
result <- (weightHighX * weightHighY) * resultMatrix[indexHighY, indexHighX]
result <- ((weightHighX * weightLowY) * resultMatrix[indexLowY, indexHighX]) + result
result <- ((weightLowX * weightHighY) * resultMatrix[indexHighY, indexLowX]) + result
result <- ((weightLowX * weightLowY) * resultMatrix[indexLowY, indexLowX]) + result
return(result)
}
|
#' Cold-Start Emissions factors for Light Duty Vehicles
#'
#' This function returns speed functions which depends on ambient temperature
#' average speed. The emission factors comes from the guidelines EMEP/EEA air pollutant
#' emission inventory guidebook
#' http://www.eea.europa.eu/themes/air/emep-eea-air-pollutant-emission-inventory-guidebook
#'
#' @param v Category vehicle: "LDV"
#' @param ta Ambient temperature. Monthly men can be used
#' @param cc Size of engine in cc: "<=1400", "1400_2000" or ">2000"
#' @param f Type of fuel: "G", "D" or "LPG"
#' @param eu Euro standard: "PRE", "I", "II", "III", "IV", "V", "VI" or "VIc"
#' @param p Pollutant: "CO", "FC", "NOx", "HC" or "PM"
#' @param k Multiplication factor
#' @param show.equation Option to see or not the equation parameters
#' @return an emission factor function which depends of the average speed V
#' and ambient temperature. g/km
#' @keywords cold emission factors
#' @export
#' @examples \dontrun{
#' # Do not run
#' V <- 0:150
#' ef1 <- ef_ldv_cold(ta = 15, cc = "<=1400", f ="G", eu = "I",
#' p = "CO")
#' ef1(10)
#' }
ef_ldv_cold <- function(v = "LDV", ta, cc, f, eu, p, k = 1, show.equation = FALSE){
ef_ldv <- sysdata[[5]]
df <- ef_ldv[ef_ldv$VEH == v &
ef_ldv$CC == cc &
ef_ldv$FUEL == f &
ef_ldv$EURO == eu &
ef_ldv$POLLUTANT == p, ]
lista <- list(a = df$a,
b = df$b,
c = df$c,
d = df$d,
e = df$e,
f = df$f,
g = df$g,
h = df$h,
i = df$i,
Equation = paste0("(",as.character(df$Y), ")", "*", k))
if (show.equation == TRUE) {
print(lista)
}
f1 <- function(V){
a <- df$a
b <- df$b
c <- df$c
d <- df$d
e <- df$e
f <- df$f
g <- df$g
h <- df$h
i <- df$i
V <- ifelse(V<df$MINV,df$MINV,ifelse(V>df$MAXV,df$MAXV,V))
eval(parse(text = paste0("(",as.character(df$Y), ")", "*", k)))
}
return(f1)
}
| /R/ef_ldv_cold.R | no_license | salvatirehbein/vein | R | false | false | 2,030 | r | #' Cold-Start Emissions factors for Light Duty Vehicles
#'
#' This function returns speed functions which depends on ambient temperature
#' average speed. The emission factors comes from the guidelines EMEP/EEA air pollutant
#' emission inventory guidebook
#' http://www.eea.europa.eu/themes/air/emep-eea-air-pollutant-emission-inventory-guidebook
#'
#' @param v Category vehicle: "LDV"
#' @param ta Ambient temperature. Monthly men can be used
#' @param cc Size of engine in cc: "<=1400", "1400_2000" or ">2000"
#' @param f Type of fuel: "G", "D" or "LPG"
#' @param eu Euro standard: "PRE", "I", "II", "III", "IV", "V", "VI" or "VIc"
#' @param p Pollutant: "CO", "FC", "NOx", "HC" or "PM"
#' @param k Multiplication factor
#' @param show.equation Option to see or not the equation parameters
#' @return an emission factor function which depends of the average speed V
#' and ambient temperature. g/km
#' @keywords cold emission factors
#' @export
#' @examples \dontrun{
#' # Do not run
#' V <- 0:150
#' ef1 <- ef_ldv_cold(ta = 15, cc = "<=1400", f ="G", eu = "I",
#' p = "CO")
#' ef1(10)
#' }
ef_ldv_cold <- function(v = "LDV", ta, cc, f, eu, p, k = 1, show.equation = FALSE){
ef_ldv <- sysdata[[5]]
df <- ef_ldv[ef_ldv$VEH == v &
ef_ldv$CC == cc &
ef_ldv$FUEL == f &
ef_ldv$EURO == eu &
ef_ldv$POLLUTANT == p, ]
lista <- list(a = df$a,
b = df$b,
c = df$c,
d = df$d,
e = df$e,
f = df$f,
g = df$g,
h = df$h,
i = df$i,
Equation = paste0("(",as.character(df$Y), ")", "*", k))
if (show.equation == TRUE) {
print(lista)
}
f1 <- function(V){
a <- df$a
b <- df$b
c <- df$c
d <- df$d
e <- df$e
f <- df$f
g <- df$g
h <- df$h
i <- df$i
V <- ifelse(V<df$MINV,df$MINV,ifelse(V>df$MAXV,df$MAXV,V))
eval(parse(text = paste0("(",as.character(df$Y), ")", "*", k)))
}
return(f1)
}
|
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/mrds-package.R
\docType{methods}
\name{mrds-opt}
\alias{mrds-opt}
\title{Tips on optimisation issues in \code{mrds} models}
\description{
Occasionally when fitting an `mrds` model one can run into optimisation issues. In general such problems can be quite complex so these "quick fixes" may not work. If you come up against problems that are not fixed by these tips, or you feel the results are dubious please go ahead and contact the package authors.
}
\section{Debug mode}{
One can obtain debug output at each stage of the optimisation using the \code{showit} option. This is set via \code{control}, so adding \code{control=list(showit=3)} gives the highest level of debug output (setting \code{showit} to 1 or 2 gives less output).
}
\section{Re-scaling covariates}{
Sometimes convergence issues in covariate (MCDS) models are caused by values of the covariate being very large, so a rescaling of that covariate is then necessary. Simply scaling by the standard deviation of the covariate can help (e.g. \code{dat$size.scaled <- dat$scale/sd(dat$scale)} for a covariate \code{size}, then including \code{size.scaled} in the model instead of \code{size}).
It is important to note that one needs to use the original covariate (size) when computing Horvitz-Thompson estimates of population size if the group size is used in that estimate. i.e. use the unscaled size in the numerator of the H-T estimator.
}
\section{Initial values}{
Initial (or starting) values can be set via the \code{initial} element of the \code{control} list. \code{initial} is a list itself with elements \code{scale}, \code{shape} and \code{adjustment}, corresponding to the associated parameters. If a model has covariates then the \code{scale} or \code{shape} elements will be vectors with parameter initial values in the same order as they are specific in the model formula (using \code{showit} is a good check they are in the correct order). Adjustment starting values are in order of the order of that term (cosine order 2 is before cosine order 3 terms).
One way of obtaining starting values is to fit a simpler model first (say with fewer covariates or adjustments) and then use the starting values from this simpler model for the corresponding parameters.
Another alternative to obtain starting values is to fit the model (or some submodel) using Distance for Windows. Note that Distance reports the scale parameter (or intercept in a covariate model) on the exponential sclale, so one must \code{log} this before supplying it to \code{ddf}.
}
\section{Bounds}{
One can change the upper and lower bounds for the parameters. These specify the largest and smallest values individual parameters can be. By placing these constraints on the parameters, it is possible to "temper" the optimisation problem, making fitting possible.
Again, one uses the \code{control} list, the elements \code{upperbounds} and \code{lowerbounds}. In this case, each of \code{upperbounds} and \code{lowerbounds} are vectors, which one can think of as each of the vectors \code{scale}, \code{shape} and \code{adjustment} from the "Initial values" section above, concatenated in that order. If one does not occur (e.g. no shape parameter) then it is simple omitted from the vector.
}
\author{
David L. Miller <dave@ninepointeightone.net>
}
| /mrds/man/mrds-opt.Rd | no_license | LHMarshall/mrds | R | false | false | 3,393 | rd | % Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/mrds-package.R
\docType{methods}
\name{mrds-opt}
\alias{mrds-opt}
\title{Tips on optimisation issues in \code{mrds} models}
\description{
Occasionally when fitting an `mrds` model one can run into optimisation issues. In general such problems can be quite complex so these "quick fixes" may not work. If you come up against problems that are not fixed by these tips, or you feel the results are dubious please go ahead and contact the package authors.
}
\section{Debug mode}{
One can obtain debug output at each stage of the optimisation using the \code{showit} option. This is set via \code{control}, so adding \code{control=list(showit=3)} gives the highest level of debug output (setting \code{showit} to 1 or 2 gives less output).
}
\section{Re-scaling covariates}{
Sometimes convergence issues in covariate (MCDS) models are caused by values of the covariate being very large, so a rescaling of that covariate is then necessary. Simply scaling by the standard deviation of the covariate can help (e.g. \code{dat$size.scaled <- dat$scale/sd(dat$scale)} for a covariate \code{size}, then including \code{size.scaled} in the model instead of \code{size}).
It is important to note that one needs to use the original covariate (size) when computing Horvitz-Thompson estimates of population size if the group size is used in that estimate. i.e. use the unscaled size in the numerator of the H-T estimator.
}
\section{Initial values}{
Initial (or starting) values can be set via the \code{initial} element of the \code{control} list. \code{initial} is a list itself with elements \code{scale}, \code{shape} and \code{adjustment}, corresponding to the associated parameters. If a model has covariates then the \code{scale} or \code{shape} elements will be vectors with parameter initial values in the same order as they are specific in the model formula (using \code{showit} is a good check they are in the correct order). Adjustment starting values are in order of the order of that term (cosine order 2 is before cosine order 3 terms).
One way of obtaining starting values is to fit a simpler model first (say with fewer covariates or adjustments) and then use the starting values from this simpler model for the corresponding parameters.
Another alternative to obtain starting values is to fit the model (or some submodel) using Distance for Windows. Note that Distance reports the scale parameter (or intercept in a covariate model) on the exponential sclale, so one must \code{log} this before supplying it to \code{ddf}.
}
\section{Bounds}{
One can change the upper and lower bounds for the parameters. These specify the largest and smallest values individual parameters can be. By placing these constraints on the parameters, it is possible to "temper" the optimisation problem, making fitting possible.
Again, one uses the \code{control} list, the elements \code{upperbounds} and \code{lowerbounds}. In this case, each of \code{upperbounds} and \code{lowerbounds} are vectors, which one can think of as each of the vectors \code{scale}, \code{shape} and \code{adjustment} from the "Initial values" section above, concatenated in that order. If one does not occur (e.g. no shape parameter) then it is simple omitted from the vector.
}
\author{
David L. Miller <dave@ninepointeightone.net>
}
|
context("Test basics")
test_that("SSModel works properly",{
tol<-1e-3
set.seed(123)
d<-data.frame(x=rnorm(100))
t12<-ts(cbind(t1=rnorm(100)+d$x,t2=rnorm(100)))
t12[sample(size=50,1:200)]<-NA
expect_warning(
model<-SSModel(t12~SSMcycle(period=10, type='common',Q=2)
+SSMcycle(period=10, type='distinct',P1=diag(c(1,1,2,2)),Q=diag(1:2))
+SSMtrend(2,type="common",Q=diag(c(1,0.5)))
+SSMtrend(2,type="distinct",Q=list(diag(0.1,2),diag(0.01,2)),
P1=diag(c(0.1,0.01,0.1,0.01)))
+SSMseasonal(period=4,type="common")
+SSMseasonal(period=4,type="distinct",Q=diag(c(2,3)),P1=diag(c(2,2,2,3,3,3)))
+SSMseasonal(period=5,type="common",sea.type="trig")
+SSMseasonal(period=5,type="distinct",sea.type="trig",Q=diag(c(0.1,0.2)),
P1=diag(rep(c(0.1,0.2),each=4)))
+SSMarima(ar=0.9,ma=0.2)+SSMregression(~-1+x,index=1,Q=1,data=d)
), NA)
expect_warning(print(model), NA)
expect_warning(logLik(model), NA)
expect_equal(logLik(model),-442.006705531500,tolerance=tol,check.attributes=FALSE)
expect_warning(out<-KFS(model,filtering=c("state","mean"),smoothing=c("state","mean","disturbance")), NA)
expect_equal(out$d,11)
expect_equal(out$j,1)
expect_warning(
model<-SSModel(t12~SSMcycle(period=10, type='common',Q=2, state_names = c("a", "b"))
+SSMcycle(period=10, type='distinct',P1=diag(c(1,1,2,2)),Q=diag(1:2), state_names = rep("c",4))
+SSMtrend(2,type="common",Q=diag(c(1,0.5)), state_names = 1:2)
+SSMtrend(2,type="distinct",Q=list(diag(0.1,2), diag(0.01,2), state_names = 1:4),
P1=diag(c(0.1,0.01,0.1,0.01)))
+SSMseasonal(period=4,type="common", state_names = 1:3)
+SSMseasonal(period=4,type="distinct",Q=diag(c(2,3)),P1=diag(c(2,2,2,3,3,3)),
state_names = 1:6)
+SSMseasonal(period=5,type="common",sea.type="trig", state_names = 1:4)
+SSMseasonal(period=5,type="distinct",sea.type="trig",Q=diag(c(0.1,0.2)),
P1=diag(rep(c(0.1,0.2),each=4)))
+SSMarima(ar=0.9,ma=0.2, state_names = 1:4) +
SSMregression(~-1+x,index=1,Q=1,data=d, state_names = 1)
), NA)
custom_model <- SSModel(1:10 ~ -1 +
SSMcustom(Z = 1, T = 1, R = 1, Q = 1, P1inf = 1), H = 1)
custom_model <- rename_states(custom_model, "level")
ll_model <- SSModel(1:10 ~ SSMtrend(1, Q = 1), H = 1)
test_these <- c("y", "Z", "H", "T", "R", "Q", "a1", "P1", "P1inf")
expect_identical(custom_model[test_these], ll_model[test_these])
})
| /data/genthat_extracted_code/KFAS/tests/testBasics.R | no_license | surayaaramli/typeRrh | R | false | false | 2,680 | r | context("Test basics")
test_that("SSModel works properly",{
tol<-1e-3
set.seed(123)
d<-data.frame(x=rnorm(100))
t12<-ts(cbind(t1=rnorm(100)+d$x,t2=rnorm(100)))
t12[sample(size=50,1:200)]<-NA
expect_warning(
model<-SSModel(t12~SSMcycle(period=10, type='common',Q=2)
+SSMcycle(period=10, type='distinct',P1=diag(c(1,1,2,2)),Q=diag(1:2))
+SSMtrend(2,type="common",Q=diag(c(1,0.5)))
+SSMtrend(2,type="distinct",Q=list(diag(0.1,2),diag(0.01,2)),
P1=diag(c(0.1,0.01,0.1,0.01)))
+SSMseasonal(period=4,type="common")
+SSMseasonal(period=4,type="distinct",Q=diag(c(2,3)),P1=diag(c(2,2,2,3,3,3)))
+SSMseasonal(period=5,type="common",sea.type="trig")
+SSMseasonal(period=5,type="distinct",sea.type="trig",Q=diag(c(0.1,0.2)),
P1=diag(rep(c(0.1,0.2),each=4)))
+SSMarima(ar=0.9,ma=0.2)+SSMregression(~-1+x,index=1,Q=1,data=d)
), NA)
expect_warning(print(model), NA)
expect_warning(logLik(model), NA)
expect_equal(logLik(model),-442.006705531500,tolerance=tol,check.attributes=FALSE)
expect_warning(out<-KFS(model,filtering=c("state","mean"),smoothing=c("state","mean","disturbance")), NA)
expect_equal(out$d,11)
expect_equal(out$j,1)
expect_warning(
model<-SSModel(t12~SSMcycle(period=10, type='common',Q=2, state_names = c("a", "b"))
+SSMcycle(period=10, type='distinct',P1=diag(c(1,1,2,2)),Q=diag(1:2), state_names = rep("c",4))
+SSMtrend(2,type="common",Q=diag(c(1,0.5)), state_names = 1:2)
+SSMtrend(2,type="distinct",Q=list(diag(0.1,2), diag(0.01,2), state_names = 1:4),
P1=diag(c(0.1,0.01,0.1,0.01)))
+SSMseasonal(period=4,type="common", state_names = 1:3)
+SSMseasonal(period=4,type="distinct",Q=diag(c(2,3)),P1=diag(c(2,2,2,3,3,3)),
state_names = 1:6)
+SSMseasonal(period=5,type="common",sea.type="trig", state_names = 1:4)
+SSMseasonal(period=5,type="distinct",sea.type="trig",Q=diag(c(0.1,0.2)),
P1=diag(rep(c(0.1,0.2),each=4)))
+SSMarima(ar=0.9,ma=0.2, state_names = 1:4) +
SSMregression(~-1+x,index=1,Q=1,data=d, state_names = 1)
), NA)
custom_model <- SSModel(1:10 ~ -1 +
SSMcustom(Z = 1, T = 1, R = 1, Q = 1, P1inf = 1), H = 1)
custom_model <- rename_states(custom_model, "level")
ll_model <- SSModel(1:10 ~ SSMtrend(1, Q = 1), H = 1)
test_these <- c("y", "Z", "H", "T", "R", "Q", "a1", "P1", "P1inf")
expect_identical(custom_model[test_these], ll_model[test_these])
})
|
\name{Georgia}
\alias{Georgia}
\alias{Gedu.df}
\docType{data}
\title{Georgia census data set (csv file)}
\description{
Census data from the county of Georgia, USA
}
\usage{data(Georgia)}
\format{
A data frame with 159 observations on the following 13 variables.
\describe{
\item{AreaKey}{An identification number for each county}
\item{Latitude}{The latitude of the county centroid}
\item{Longitud}{The longitude of the county centroid}
\item{TotPop90}{Population of the county in 1990}
\item{PctRural}{Percentage of the county population defined as rural}
\item{PctBach}{Percentage of the county population with a bachelors degree}
\item{PctEld}{Percentage of the county population aged 65 or over}
\item{PctFB}{Percentage of the county population born outside the US}
\item{PctPov}{Percentage of the county population living below the poverty line}
\item{PctBlack}{Percentage of the county population who are black}
\item{ID}{a numeric vector of IDs}
\item{X}{a numeric vector of x coordinates}
\item{Y}{a numeric vector of y coordinates}
}
}
\details{
This data set can also be found in GWR 3 and in spgwr.
}
\references{
Fotheringham S, Brunsdon, C, and Charlton, M (2002),
Geographically Weighted Regression: The Analysis of Spatially Varying Relationships, Chichester: Wiley.
}
\examples{
data(Georgia)
ls()
coords <- cbind(Gedu.df$X, Gedu.df$Y)
educ.spdf <- SpatialPointsDataFrame(coords, Gedu.df)
spplot(educ.spdf, names(educ.spdf)[4:10])
}
\keyword{data}
\concept{Georgia census}
| /00_pkg_src/GWmodel/man/Georgia.Rd | no_license | lbb220/GWmodel.Rcheck | R | false | false | 1,552 | rd | \name{Georgia}
\alias{Georgia}
\alias{Gedu.df}
\docType{data}
\title{Georgia census data set (csv file)}
\description{
Census data from the county of Georgia, USA
}
\usage{data(Georgia)}
\format{
A data frame with 159 observations on the following 13 variables.
\describe{
\item{AreaKey}{An identification number for each county}
\item{Latitude}{The latitude of the county centroid}
\item{Longitud}{The longitude of the county centroid}
\item{TotPop90}{Population of the county in 1990}
\item{PctRural}{Percentage of the county population defined as rural}
\item{PctBach}{Percentage of the county population with a bachelors degree}
\item{PctEld}{Percentage of the county population aged 65 or over}
\item{PctFB}{Percentage of the county population born outside the US}
\item{PctPov}{Percentage of the county population living below the poverty line}
\item{PctBlack}{Percentage of the county population who are black}
\item{ID}{a numeric vector of IDs}
\item{X}{a numeric vector of x coordinates}
\item{Y}{a numeric vector of y coordinates}
}
}
\details{
This data set can also be found in GWR 3 and in spgwr.
}
\references{
Fotheringham S, Brunsdon, C, and Charlton, M (2002),
Geographically Weighted Regression: The Analysis of Spatially Varying Relationships, Chichester: Wiley.
}
\examples{
data(Georgia)
ls()
coords <- cbind(Gedu.df$X, Gedu.df$Y)
educ.spdf <- SpatialPointsDataFrame(coords, Gedu.df)
spplot(educ.spdf, names(educ.spdf)[4:10])
}
\keyword{data}
\concept{Georgia census}
|
#
# This is the user-interface definition of a Shiny web application. You can
# run the application by clicking 'Run App' above.
#
# Find out more about building applications with Shiny here:
#
# http://shiny.rstudio.com/
# http://littleactuary.github.io/blog/Web-application-framework-with-Shiny/
# https://www.r-bloggers.com/deploying-desktop-apps-with-r/
# http://blog.analytixware.com/2014/03/packaging-your-shiny-app-as-windows.html
# https://stackedit.io/editor
# https://codeshare.io
# https://uasnap.shinyapps.io/akcan_climate/
library(shiny)
library(shinythemes)
library(shinyjs)
library(dygraphs)
library(plotly)
# Define UI for application that draws a histogram
shinyUI(
tagList(
navbarPage("tsGRNNimpute", #"VS-OMTR",
tabPanel("Imputation",
fluidPage(theme = shinytheme("cosmo"), #"flatly" #yeti
# Application title
#titlePanel("Upload Multiple Files -> Imputation NA's-> Downloads ZIP archive"),
# Sidebar with a slider input for number of bins
sidebarLayout(
sidebarPanel(
fileInput("file1", "Choose .CLI File", multiple = TRUE,
accept = c(
"text/csv",
"text/comma-separated-values,text/plain",
".csv", ".cli", ".CLI")
),
tags$hr(),
checkboxInput("viewUploadFormat", "View and select upload format", FALSE),
# list formats
conditionalPanel(
condition = "input.viewUploadFormat == true",
wellPanel(tags$b(""),
radioButtons("cliFormat", "Format upload .cli files:",
c("VS-Pascal" = "vso",
"VS-Fortran" = "vsf",
"meteo.ru/Aisori" = "aisori",
"meteo.ru/Aisori - TAB+Blank" = "aisoriTAB"
)
)
)
),
wellPanel(tags$b(""),
checkboxInput("debug", "View Debug Info", TRUE),
checkboxInput("globalVec", "View global prec.imp/temp.imp", FALSE),
checkboxInput("statsna", "Stats NA Distribution", FALSE)),
selectInput("replaceNAs", "Replace NAs by:",
choices = c("Select Replacement method",
"GRNN-ManualSigma",
"imputeTS",
"GRNN-CrossValidation",
"PSO-GRNN",
"missForest",
"Hmisc",
"MICE",
"Amelia",
"mi")),
conditionalPanel(
condition = "input.replaceNAs == 'GRNN-ManualSigma'",
wellPanel(tags$b(""),
sliderInput("sigmaPrecGRNN", label = "Changing Sigma - Prec",
min = 0.001,
max = 0.999, value = 0.01),
sliderInput("sigmaTempGRNN", label = "Changing Sigma - Temp",
min = 0.001,
max = 0.999, value = 0.01)
)
),
conditionalPanel( #http://www.stat.columbia.edu/~gelman/arm/missing.pdf
condition = "input.replaceNAs == 'imputeTS'",
wellPanel(tags$b(""),
radioButtons("imputeTSalgorithm", "imputeTS algorithm:",
c(
"Weighted Moving Average" = "ma",
"Kalman Smoothing and State Space Models" = "kalman",
"Last Observation Carried Forward" = "locf",
"Mean Value" = "mean",
"Random Sample" = "random",
"Seasonally Decomposed Missing Value Imputation" = "seadec",
"Seasonally Splitted Missing Value Imputation " = "seasplit"
)
)
)
),
selectInput("cliFormatWrite", "Format download .CLI ZIP archive:",
choices = c("VS-Pascal", "VS-Fortran", "VS-Shiny", ".RData"),
selected = "VS-Pascal"),
conditionalPanel(
condition = "input.cliFormatWrite == 'VS-Shiny' && input.cliFormat != 'aisori'",
numericInput('WMOStation', 'WMO Station', 9999,
min = 1, max = 9999)),
tags$hr(),
downloadButton('downloadData', 'Save Results to .ZIP archive', class = "butt"),
tags$hr(),
tags$div(tags$a(href="mailto:ilynva@gmail.com","Created by Iljin Victor, 2017"))#, style = "color:green")
),
# Show a plot of the generated distribution
mainPanel(
tabsetPanel(
tabPanel("Stats",
conditionalPanel(
condition = "input.debug == true",
verbatimTextOutput("strFileInput"),
verbatimTextOutput("summaryMergeFileInput")),
conditionalPanel(
condition = "input.statsna == true",
verbatimTextOutput("printStatsNA")
),
conditionalPanel(
condition = "input.globalVec == true",
verbatimTextOutput("printStatsGlobalVecImp"))
),
tabPanel("Table",
dataTableOutput("contents")),
tabPanel("Plot NAs",
tabsetPanel(
tabPanel("Distribution of NAs",
plotOutput("plotNADPrec"),
plotOutput("plotNADTemp"),
column(12,
dateRangeInput("dates_plotNAD", label = "Date Range")),
column(12,
sliderInput("num_plotNAD", label = "Number Range", min = 1,
max = 24745, value = c(40, 20000), width = '100%'))
),
tabPanel("Gapsize of NAs",
plotOutput("plotNAGPrec"),
plotOutput("plotNAGTemp")),
tabPanel("Distribution of NAs Bar",
plotOutput("plotNABPrec"),
plotOutput("plotNABTemp")),
tabPanel("Percentage of missing data",
dygraphOutput("dygraphPercentageNAs"))
)
),
tabPanel("ImputeNAs Temp", # plotNA.imputations
tabsetPanel(
tabPanel("DyGraphs",
dygraphOutput("dygraphTempNAs")
, tags$hr(),
verbatimTextOutput("dygraphTempNAsInfo")),
tabPanel("PlotLy",
plotlyOutput("plotlyTempNAs")
),
tabPanel("highcharter"),
tabPanel("RPlot",
plotOutput('tempNA'), tags$hr())
)
),
tabPanel("ImputeNAs Prec",
tabsetPanel(
tabPanel("DyGraphs",
dygraphOutput("dygraphPrecNAs")
, tags$hr(),
verbatimTextOutput("dygraphPrecNAsInfo")
),
tabPanel("PlotLy"),
tabPanel("highcharter")
)
)
)
)
)
)),
tabPanel("About",
tags$hr(),
verbatimTextOutput("About"), tags$hr())
) # NavBar
))
| /vs-omtr/ui.R | no_license | tmeits/fmvcd | R | false | false | 7,864 | r | #
# This is the user-interface definition of a Shiny web application. You can
# run the application by clicking 'Run App' above.
#
# Find out more about building applications with Shiny here:
#
# http://shiny.rstudio.com/
# http://littleactuary.github.io/blog/Web-application-framework-with-Shiny/
# https://www.r-bloggers.com/deploying-desktop-apps-with-r/
# http://blog.analytixware.com/2014/03/packaging-your-shiny-app-as-windows.html
# https://stackedit.io/editor
# https://codeshare.io
# https://uasnap.shinyapps.io/akcan_climate/
library(shiny)
library(shinythemes)
library(shinyjs)
library(dygraphs)
library(plotly)
# Define UI for application that draws a histogram
shinyUI(
tagList(
navbarPage("tsGRNNimpute", #"VS-OMTR",
tabPanel("Imputation",
fluidPage(theme = shinytheme("cosmo"), #"flatly" #yeti
# Application title
#titlePanel("Upload Multiple Files -> Imputation NA's-> Downloads ZIP archive"),
# Sidebar with a slider input for number of bins
sidebarLayout(
sidebarPanel(
fileInput("file1", "Choose .CLI File", multiple = TRUE,
accept = c(
"text/csv",
"text/comma-separated-values,text/plain",
".csv", ".cli", ".CLI")
),
tags$hr(),
checkboxInput("viewUploadFormat", "View and select upload format", FALSE),
# list formats
conditionalPanel(
condition = "input.viewUploadFormat == true",
wellPanel(tags$b(""),
radioButtons("cliFormat", "Format upload .cli files:",
c("VS-Pascal" = "vso",
"VS-Fortran" = "vsf",
"meteo.ru/Aisori" = "aisori",
"meteo.ru/Aisori - TAB+Blank" = "aisoriTAB"
)
)
)
),
wellPanel(tags$b(""),
checkboxInput("debug", "View Debug Info", TRUE),
checkboxInput("globalVec", "View global prec.imp/temp.imp", FALSE),
checkboxInput("statsna", "Stats NA Distribution", FALSE)),
selectInput("replaceNAs", "Replace NAs by:",
choices = c("Select Replacement method",
"GRNN-ManualSigma",
"imputeTS",
"GRNN-CrossValidation",
"PSO-GRNN",
"missForest",
"Hmisc",
"MICE",
"Amelia",
"mi")),
conditionalPanel(
condition = "input.replaceNAs == 'GRNN-ManualSigma'",
wellPanel(tags$b(""),
sliderInput("sigmaPrecGRNN", label = "Changing Sigma - Prec",
min = 0.001,
max = 0.999, value = 0.01),
sliderInput("sigmaTempGRNN", label = "Changing Sigma - Temp",
min = 0.001,
max = 0.999, value = 0.01)
)
),
conditionalPanel( #http://www.stat.columbia.edu/~gelman/arm/missing.pdf
condition = "input.replaceNAs == 'imputeTS'",
wellPanel(tags$b(""),
radioButtons("imputeTSalgorithm", "imputeTS algorithm:",
c(
"Weighted Moving Average" = "ma",
"Kalman Smoothing and State Space Models" = "kalman",
"Last Observation Carried Forward" = "locf",
"Mean Value" = "mean",
"Random Sample" = "random",
"Seasonally Decomposed Missing Value Imputation" = "seadec",
"Seasonally Splitted Missing Value Imputation " = "seasplit"
)
)
)
),
selectInput("cliFormatWrite", "Format download .CLI ZIP archive:",
choices = c("VS-Pascal", "VS-Fortran", "VS-Shiny", ".RData"),
selected = "VS-Pascal"),
conditionalPanel(
condition = "input.cliFormatWrite == 'VS-Shiny' && input.cliFormat != 'aisori'",
numericInput('WMOStation', 'WMO Station', 9999,
min = 1, max = 9999)),
tags$hr(),
downloadButton('downloadData', 'Save Results to .ZIP archive', class = "butt"),
tags$hr(),
tags$div(tags$a(href="mailto:ilynva@gmail.com","Created by Iljin Victor, 2017"))#, style = "color:green")
),
# Show a plot of the generated distribution
mainPanel(
tabsetPanel(
tabPanel("Stats",
conditionalPanel(
condition = "input.debug == true",
verbatimTextOutput("strFileInput"),
verbatimTextOutput("summaryMergeFileInput")),
conditionalPanel(
condition = "input.statsna == true",
verbatimTextOutput("printStatsNA")
),
conditionalPanel(
condition = "input.globalVec == true",
verbatimTextOutput("printStatsGlobalVecImp"))
),
tabPanel("Table",
dataTableOutput("contents")),
tabPanel("Plot NAs",
tabsetPanel(
tabPanel("Distribution of NAs",
plotOutput("plotNADPrec"),
plotOutput("plotNADTemp"),
column(12,
dateRangeInput("dates_plotNAD", label = "Date Range")),
column(12,
sliderInput("num_plotNAD", label = "Number Range", min = 1,
max = 24745, value = c(40, 20000), width = '100%'))
),
tabPanel("Gapsize of NAs",
plotOutput("plotNAGPrec"),
plotOutput("plotNAGTemp")),
tabPanel("Distribution of NAs Bar",
plotOutput("plotNABPrec"),
plotOutput("plotNABTemp")),
tabPanel("Percentage of missing data",
dygraphOutput("dygraphPercentageNAs"))
)
),
tabPanel("ImputeNAs Temp", # plotNA.imputations
tabsetPanel(
tabPanel("DyGraphs",
dygraphOutput("dygraphTempNAs")
, tags$hr(),
verbatimTextOutput("dygraphTempNAsInfo")),
tabPanel("PlotLy",
plotlyOutput("plotlyTempNAs")
),
tabPanel("highcharter"),
tabPanel("RPlot",
plotOutput('tempNA'), tags$hr())
)
),
tabPanel("ImputeNAs Prec",
tabsetPanel(
tabPanel("DyGraphs",
dygraphOutput("dygraphPrecNAs")
, tags$hr(),
verbatimTextOutput("dygraphPrecNAsInfo")
),
tabPanel("PlotLy"),
tabPanel("highcharter")
)
)
)
)
)
)),
tabPanel("About",
tags$hr(),
verbatimTextOutput("About"), tags$hr())
) # NavBar
))
|
# Test the auto-indent here. Highlight code and then press COMMAND+I or CTRL+I.
library(Quandl)
library(lubridate)
library(ggplot2)
library(dplyr)
library(stringr)
#-------------------------------------------------------------------------------
# The paste() function (in base R)
#-------------------------------------------------------------------------------
# The greatest of all R string/character commands: the paste() command. You can
# combine text and R variables.
PI <- paste("The life of", pi)
PI
# The next example is one of programming abstractly. i.e. you assign all
# specific values to variables, and then use the variable names throughout the
# rest of your code. That way if you want to change either the name or the
# number to order, you only need to change it once, instead of everywhere in
# your document.
name <- "BORT"
number_to_order <- 76
paste("We need to order", number_to_order, "more", name, "novelty license plates.")
# paste() is very useful when you want to automate the creation of strings. For
# example say you want to produce a daily chart of bitcoin prices with a title
# indicating the date range. It would get tiring to everyday update the title
# manually. With paste() you can do this automatically.
bitcoin <- Quandl("BAVERAGE/USD", start_date="2013-01-01") %>%
tbl_df() %>%
mutate(Date=ymd(Date))
min(bitcoin$Date)
max(bitcoin$Date)
title <- paste("Bitcoin prices from", min(bitcoin$Date), "to", max(bitcoin$Date))
title
ggplot(bitcoin, aes(x=Date, y=`24h Average`)) +
geom_line() +
xlab("Date") + ylab("24h Average Price") +
ggtitle(title) +
geom_smooth(n=100)
# There are two possible arguments. 'collapse' to collapse a single vector of
# character strings and 'sep' to determine what to separate the paste by. The
# default is sep=" ".
letters[1:10]
paste(letters[1:10], collapse="+")
paste(letters[1:10], 1:10, sep="&")
#-------------------------------------------------------------------------------
# EXERCISE
#-------------------------------------------------------------------------------
# Using the paste() command, write-out a command that will print out a message
# like this one, but by assigning values to the variables below: "Hello, my name
# is Albert Kim. I am from Montreal, Quebec". Now if you have people's data in
# a spreadsheet
city <-
province <-
last_name <-
first_name <-
#-------------------------------------------------------------------------------
# A list of other base R string commands
#-------------------------------------------------------------------------------
# as.character() converts objects into strings
as.character(76)
# The following play with upper case and lower case
tolower("HeLLO worlD")
toupper("HeLLO worlD")
# nchar() counts characters
nchar("HeLLO worlD")
# abbreviate() using some sort of algorithm: remove vowels, and include just
# enough consonants to be able to distinguish values? I'm not sure of the exact
# procedure. The only thing that matters is that you for each original word you
# get a unique abbreviation
colors()
colors() %>%
abbreviate()
# gsub for character substitution
text <- "I love to play apples to apples while eating apples. How do you like dem apples?"
gsub(pattern = "apple", replacement="orange", x=text)
#-------------------------------------------------------------------------------
# The stringr Package
#-------------------------------------------------------------------------------
# These ALL have great help file examples. Type ?str_ in the console and press
# TAB to see all the functions that are available to you!
# Useful basic manipulations:
str_c() # string concatenation. same as paste()
str_length() # number of characters. same as nchar()
str_sub() # extracts substrings
str_trim() # removes leading and trailing whitespace
str_pad() # pads a string
str_to_title() # convert string to title casing
str_count() # count number of matches in a string
str_sort() # sort a character vector
# Less useful basic manipulations
str_wrap() # wraps a string paragraph
str_dup() # duplicates characters
# Advanced string find, extract, replace, etc.
str_detect() # Detect the presence or absence of a pattern in a string
str_split() # Split up a string into a variable number of pieces
str_split_fixed() # Split up a string into a fixed number of pieces
str_extract_all() # Extract all pieces of a string that match a pattern
str_extract() # Extract only first piece of a string that matches a pattern
str_match_all() # Extract all matched groups from a string
str_match() # Extract only first matched group from a string
str_locate_all() # Locate the position of all occurences of a pattern in a string
str_locate() # Locate the position of the only first occurence of a pattern in a string
str_replace_all() # Replace all occurrences of a matched pattern in a string
str_replace() # Replace only first occurrence of a matched pattern in a string
#-------------------------------------------------------------------------------
# Examples of Non Self-Evident Commands
#-------------------------------------------------------------------------------
# Extract substrings:
hw <- "Hadley Wickham"
str_sub(hw, 1, 6)
str_sub(hw, 1, nchar(hw))
str_sub(hw, c(1, 8), c(6, 14))
# Padding strings. Especially useful for padding with 0's
nums <- c(1:15)
as.character(nums)
str_pad(nums, 2, pad ="0")
str_pad(nums, 3, pad ="0")
# HUGE one: detecting strings
text <- "Hello, my name is Simon and I like to do drawings."
str_detect(string=text, pattern="Simon")
text <- c("Simon", "Radhika", "Hyo-Kyung")
str_detect(string=text, pattern="Simon")
# Extracting strings. Notice the difference. The latter returns a "list" and
# not a vector
text <-c("Simon", "My name is Simon. Simon Favreau-Lessard", "Hyo-Kyung")
str_extract(string=text, pattern="Simon")
# Note here the output is a "list", not a vector
output <- str_extract_all(string=text, pattern="Simon")
output
output[[1]]
# Matching strings:
strings <- c(" 219 733 8965", "329-293-8753 ", "banana", "595 794 7569",
"387 287 6718", "apple", "233.398.9187 ", "482 952 3315",
"239 923 8115 and 842 566 4692", "Work: 579-499-7527", "$1000",
"Home: 543.355.3679")
str_match(string=strings, pattern="233")
str_match(string=strings, pattern="Work")
# Locate the exact position of a string: input is a single element
fruit <- "It's time to go bananas for bananas."
str_locate(string=fruit, pattern="na")
str_locate_all(string=fruit, pattern="na")
str_locate(string=fruit, pattern="time to go ba")
str_sub(string=fruit, start=6, end=18)
# Locate the exact position of a string: input is a vector
fruit <- c("apple", "banana", "pear", "pineapple")
str_locate(string=fruit, pattern="na")
# Again, the output here is a list
output <- str_locate_all(string=fruit, pattern="na")
output
output[[2]]
# String replacing. Same as gsub() earlier
text <- "I love to play apples to apples while eating apples. How do you like dem apples?"
gsub(pattern = "apple", replacement="orange", x=text)
# Note the difference
str_replace(string=text, pattern = "apple", replacement = "orange")
str_replace_all(string=text, pattern = "apple", replacement = "orange")
# Splitting
fruits <- "apples and oranges and pears and bananas"
str_split(string=fruits, pattern=" and ")
fruits <- c(
"apples and oranges and pears and bananas",
"pineapples and mangos and guavas"
)
str_split(string=fruits, pattern=" and ") | /Lec19 String Manipulation/Lec19.R | no_license | 2016-09-Middlebury-Data-Science/Topics | R | false | false | 7,659 | r | # Test the auto-indent here. Highlight code and then press COMMAND+I or CTRL+I.
library(Quandl)
library(lubridate)
library(ggplot2)
library(dplyr)
library(stringr)
#-------------------------------------------------------------------------------
# The paste() function (in base R)
#-------------------------------------------------------------------------------
# The greatest of all R string/character commands: the paste() command. You can
# combine text and R variables.
PI <- paste("The life of", pi)
PI
# The next example is one of programming abstractly. i.e. you assign all
# specific values to variables, and then use the variable names throughout the
# rest of your code. That way if you want to change either the name or the
# number to order, you only need to change it once, instead of everywhere in
# your document.
name <- "BORT"
number_to_order <- 76
paste("We need to order", number_to_order, "more", name, "novelty license plates.")
# paste() is very useful when you want to automate the creation of strings. For
# example say you want to produce a daily chart of bitcoin prices with a title
# indicating the date range. It would get tiring to everyday update the title
# manually. With paste() you can do this automatically.
bitcoin <- Quandl("BAVERAGE/USD", start_date="2013-01-01") %>%
tbl_df() %>%
mutate(Date=ymd(Date))
min(bitcoin$Date)
max(bitcoin$Date)
title <- paste("Bitcoin prices from", min(bitcoin$Date), "to", max(bitcoin$Date))
title
ggplot(bitcoin, aes(x=Date, y=`24h Average`)) +
geom_line() +
xlab("Date") + ylab("24h Average Price") +
ggtitle(title) +
geom_smooth(n=100)
# There are two possible arguments. 'collapse' to collapse a single vector of
# character strings and 'sep' to determine what to separate the paste by. The
# default is sep=" ".
letters[1:10]
paste(letters[1:10], collapse="+")
paste(letters[1:10], 1:10, sep="&")
#-------------------------------------------------------------------------------
# EXERCISE
#-------------------------------------------------------------------------------
# Using the paste() command, write-out a command that will print out a message
# like this one, but by assigning values to the variables below: "Hello, my name
# is Albert Kim. I am from Montreal, Quebec". Now if you have people's data in
# a spreadsheet
city <-
province <-
last_name <-
first_name <-
#-------------------------------------------------------------------------------
# A list of other base R string commands
#-------------------------------------------------------------------------------
# as.character() converts objects into strings
as.character(76)
# The following play with upper case and lower case
tolower("HeLLO worlD")
toupper("HeLLO worlD")
# nchar() counts characters
nchar("HeLLO worlD")
# abbreviate() using some sort of algorithm: remove vowels, and include just
# enough consonants to be able to distinguish values? I'm not sure of the exact
# procedure. The only thing that matters is that you for each original word you
# get a unique abbreviation
colors()
colors() %>%
abbreviate()
# gsub for character substitution
text <- "I love to play apples to apples while eating apples. How do you like dem apples?"
gsub(pattern = "apple", replacement="orange", x=text)
#-------------------------------------------------------------------------------
# The stringr Package
#-------------------------------------------------------------------------------
# These ALL have great help file examples. Type ?str_ in the console and press
# TAB to see all the functions that are available to you!
# Useful basic manipulations:
str_c() # string concatenation. same as paste()
str_length() # number of characters. same as nchar()
str_sub() # extracts substrings
str_trim() # removes leading and trailing whitespace
str_pad() # pads a string
str_to_title() # convert string to title casing
str_count() # count number of matches in a string
str_sort() # sort a character vector
# Less useful basic manipulations
str_wrap() # wraps a string paragraph
str_dup() # duplicates characters
# Advanced string find, extract, replace, etc.
str_detect() # Detect the presence or absence of a pattern in a string
str_split() # Split up a string into a variable number of pieces
str_split_fixed() # Split up a string into a fixed number of pieces
str_extract_all() # Extract all pieces of a string that match a pattern
str_extract() # Extract only first piece of a string that matches a pattern
str_match_all() # Extract all matched groups from a string
str_match() # Extract only first matched group from a string
str_locate_all() # Locate the position of all occurences of a pattern in a string
str_locate() # Locate the position of the only first occurence of a pattern in a string
str_replace_all() # Replace all occurrences of a matched pattern in a string
str_replace() # Replace only first occurrence of a matched pattern in a string
#-------------------------------------------------------------------------------
# Examples of Non Self-Evident Commands
#-------------------------------------------------------------------------------
# Extract substrings:
hw <- "Hadley Wickham"
str_sub(hw, 1, 6)
str_sub(hw, 1, nchar(hw))
str_sub(hw, c(1, 8), c(6, 14))
# Padding strings. Especially useful for padding with 0's
nums <- c(1:15)
as.character(nums)
str_pad(nums, 2, pad ="0")
str_pad(nums, 3, pad ="0")
# HUGE one: detecting strings
text <- "Hello, my name is Simon and I like to do drawings."
str_detect(string=text, pattern="Simon")
text <- c("Simon", "Radhika", "Hyo-Kyung")
str_detect(string=text, pattern="Simon")
# Extracting strings. Notice the difference. The latter returns a "list" and
# not a vector
text <-c("Simon", "My name is Simon. Simon Favreau-Lessard", "Hyo-Kyung")
str_extract(string=text, pattern="Simon")
# Note here the output is a "list", not a vector
output <- str_extract_all(string=text, pattern="Simon")
output
output[[1]]
# Matching strings:
strings <- c(" 219 733 8965", "329-293-8753 ", "banana", "595 794 7569",
"387 287 6718", "apple", "233.398.9187 ", "482 952 3315",
"239 923 8115 and 842 566 4692", "Work: 579-499-7527", "$1000",
"Home: 543.355.3679")
str_match(string=strings, pattern="233")
str_match(string=strings, pattern="Work")
# Locate the exact position of a string: input is a single element
fruit <- "It's time to go bananas for bananas."
str_locate(string=fruit, pattern="na")
str_locate_all(string=fruit, pattern="na")
str_locate(string=fruit, pattern="time to go ba")
str_sub(string=fruit, start=6, end=18)
# Locate the exact position of a string: input is a vector
fruit <- c("apple", "banana", "pear", "pineapple")
str_locate(string=fruit, pattern="na")
# Again, the output here is a list
output <- str_locate_all(string=fruit, pattern="na")
output
output[[2]]
# String replacing. Same as gsub() earlier
text <- "I love to play apples to apples while eating apples. How do you like dem apples?"
gsub(pattern = "apple", replacement="orange", x=text)
# Note the difference
str_replace(string=text, pattern = "apple", replacement = "orange")
str_replace_all(string=text, pattern = "apple", replacement = "orange")
# Splitting
fruits <- "apples and oranges and pears and bananas"
str_split(string=fruits, pattern=" and ")
fruits <- c(
"apples and oranges and pears and bananas",
"pineapples and mangos and guavas"
)
str_split(string=fruits, pattern=" and ") |
testlist <- list(m = NULL, repetitions = 0L, in_m = structure(c(2.31584307392677e+77, 9.53818252170339e+295, 1.22810536108077e+146, 4.12396251261199e-221, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(5L, 7L)))
result <- do.call(CNull:::communities_individual_based_sampling_alpha,testlist)
str(result) | /CNull/inst/testfiles/communities_individual_based_sampling_alpha/AFL_communities_individual_based_sampling_alpha/communities_individual_based_sampling_alpha_valgrind_files/1615775251-test.R | no_license | akhikolla/updatedatatype-list2 | R | false | false | 362 | r | testlist <- list(m = NULL, repetitions = 0L, in_m = structure(c(2.31584307392677e+77, 9.53818252170339e+295, 1.22810536108077e+146, 4.12396251261199e-221, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(5L, 7L)))
result <- do.call(CNull:::communities_individual_based_sampling_alpha,testlist)
str(result) |
library(ape)
testtree <- read.tree("3982_9.txt")
unrooted_tr <- unroot(testtree)
write.tree(unrooted_tr, file="3982_9_unrooted.txt") | /codeml_files/newick_trees_processed/3982_9/rinput.R | no_license | DaniBoo/cyanobacteria_project | R | false | false | 135 | r | library(ape)
testtree <- read.tree("3982_9.txt")
unrooted_tr <- unroot(testtree)
write.tree(unrooted_tr, file="3982_9_unrooted.txt") |
library(aimsir17)
library(ggplot2)
data <- observations %>%
filter(station=="MALIN HEAD") %>%
group_by(day,month) %>%
summarise(MeanDailyTemp=mean(temp))
ggplot(data) +
geom_histogram(aes(x=MeanDailyTemp),binwidth = 1)+
geom_vline(xintercept = median(data$MeanDailyTemp),colour="red")+
geom_vline(xintercept = mean(data$MeanDailyTemp),colour="blue")
ggplot(data) +
geom_histogram(aes(x=MeanDailyTemp),binwidth = 0.5)+
geom_vline(xintercept = median(data$MeanDailyTemp),colour="red")+
geom_vline(xintercept = mean(data$MeanDailyTemp),colour="blue")
data2 <- observations %>%
filter(station=="MALIN HEAD" |
station=="SherkinIsland") %>%
group_by(station,day,month) %>%
summarise(MeanDailyTemp=mean(temp))
ggplot(data2) +
geom_histogram(aes(x=MeanDailyTemp,fill=station),
binwidth = 1)+
facet_wrap(~station,ncol=1)
data3 <- observations %>%
filter(station=="MALIN HEAD" |
station=="SherkinIsland")
ggplot(data3,aes(x=temp,colour=station)) +
geom_freqpoly(binwidth=1)
data3 <- observations %>%
filter(station=="MALIN HEAD" |
station=="SherkinIsland")
ggplot(data3,aes(y=temp,x=station)) +
geom_boxplot()
ggplot(observations,aes(y=temp,x=station)) +
geom_boxplot()+
theme(axis.text.x=element_text(angle = 45))
| /code/09 EDA/Histogram.R | permissive | JimDuggan/CT1100 | R | false | false | 1,346 | r | library(aimsir17)
library(ggplot2)
data <- observations %>%
filter(station=="MALIN HEAD") %>%
group_by(day,month) %>%
summarise(MeanDailyTemp=mean(temp))
ggplot(data) +
geom_histogram(aes(x=MeanDailyTemp),binwidth = 1)+
geom_vline(xintercept = median(data$MeanDailyTemp),colour="red")+
geom_vline(xintercept = mean(data$MeanDailyTemp),colour="blue")
ggplot(data) +
geom_histogram(aes(x=MeanDailyTemp),binwidth = 0.5)+
geom_vline(xintercept = median(data$MeanDailyTemp),colour="red")+
geom_vline(xintercept = mean(data$MeanDailyTemp),colour="blue")
data2 <- observations %>%
filter(station=="MALIN HEAD" |
station=="SherkinIsland") %>%
group_by(station,day,month) %>%
summarise(MeanDailyTemp=mean(temp))
ggplot(data2) +
geom_histogram(aes(x=MeanDailyTemp,fill=station),
binwidth = 1)+
facet_wrap(~station,ncol=1)
data3 <- observations %>%
filter(station=="MALIN HEAD" |
station=="SherkinIsland")
ggplot(data3,aes(x=temp,colour=station)) +
geom_freqpoly(binwidth=1)
data3 <- observations %>%
filter(station=="MALIN HEAD" |
station=="SherkinIsland")
ggplot(data3,aes(y=temp,x=station)) +
geom_boxplot()
ggplot(observations,aes(y=temp,x=station)) +
geom_boxplot()+
theme(axis.text.x=element_text(angle = 45))
|
library(stats)
library(mice)
library(tidyverse)
library(factoextra)
# Part I
# Import data
raw <- read_csv('https://s3.amazonaws.com/notredame.analytics.data/mallcustomers.csv')
customer <- raw
# Change the data type for each column
customer <- customer %>%
mutate(CustomerID = as.integer(CustomerID),Gender = as.factor(Gender),
Age = as.integer(Age),Income = as.character(Income),SpendingScore = as.integer(SpendingScore))
# Part II
# Look at missing values
summary(customer)
# Do the imputation based on logistic regression
imputed_cust <- mice(customer,m=1,maxit=5,meth='logreg',seed=1234)
# First 10 suggested imputations of Gender
imputed_cust$imp$Gender[1:10,]
# Complete the customer dataset with the imputed values
complete_cust <- mice::complete(imputed_cust)
# Change the Income column to numbers
complete_cust$Income <- str_remove_all(complete_cust$Income,' USD|,')
complete_cust <- complete_cust %>%
mutate(Income = as.numeric(Income))
summary(complete_cust)
# Part III
# Normalize the data using z-score
selected_metrics <- complete_cust %>%
select(-CustomerID, -Gender, -Age)
sm_z <- scale(selected_metrics)
# Cluster sm_z based on selected metrics
set.seed(1234)
k_3 <- kmeans(sm_z, centers=3, nstart = 25)
k_3$size
k_3$centers
# Viz with fviz_cluster
fviz_cluster(k_3, data = sm_z)
# Assign cluster IDs to complete_cust
complete_cust$cluster <- k_3$cluster
# Use the elbow method to identify the optimal number of clusters
wcss <- vector()
n = 20
set.seed(1234)
for(k in 1:n) {
wcss[k] <- sum(kmeans(sm_z, k)$withinss)
}
wcss
# Visualize the values of WCSS as they relate to number of clusters
tibble(value = wcss) %>%
ggplot(mapping=aes(x=seq(1,length(wcss)), y=value)) +
geom_point()+
geom_line() +
labs(title = "The Elbow Method", y = "WCSS", x = "Number of Clusters (k)" ) +
theme_minimal()
# The optimal number of clusters should be 5 according to the elbow method.
# Part IV
# Use the optimal number of clusters and re-run the clustering
set.seed(1234)
k_5 <- kmeans(sm_z, centers=5, nstart = 25)
k_5$size
k_5$centers
fviz_cluster(k_5, data = sm_z)
# Looking at the above chart, we can assign the following labels to each cluster:
# Cluster 1: 'Poor spenders'
# Cluster 2: 'Neutral middle-class'
# Cluster 3: 'Rich savers'
# Cluster 4: 'Rich spenders'
# Cluster 5: 'Poor savers'
# Create a new column in complete_cust to show the cluster number
complete_cust$cluster <- k_5$cluster
# Examine age and gender for each cluster
cluster_age_gender <- complete_cust %>%
select(cluster,Age,Gender)
# Average age for each cluster
cluster_age_gender %>%
group_by(cluster) %>%
summarise(avg_age = mean(Age))
# Average age of the overall dataset
mean(complete_cust$Age)
# Discussion on age: we can see that the mean of age in each cluster is very different from that of the overall dataset.
# Gender distribution for each cluster with stacked barplot
cluster_gender <- cluster_age_gender %>%
group_by(cluster,Gender) %>%
tally() %>%
pivot_wider(names_from = Gender, values_from = n) %>%
as.data.frame() %>%
mutate(cluster = as.character(cluster))
complete_gender <- complete_cust %>%
group_by(Gender) %>%
tally() %>%
pivot_wider(names_from = Gender, values_from = n)
complete_gender['cluster'] <- 'Overall'
complete_gender <- complete_gender[c(3,1,2)]
gender_total <- rbind(complete_gender,cluster_gender)
gender_total <- gender_total %>%
transform(Femaleprob = Female/(Female+Male))
gender_total <- gender_total %>%
transform(Maleprob = Male/(Female+Male)) %>%
select(-Female,-Male)
names(gender_total) <- c('Cluster','Female','Male')
gender_total %>%
pivot_longer(-Cluster,names_to = 'Gender',values_to = 'Probability') %>%
ggplot(aes(fill=Gender, y=Probability, x=Cluster)) +
geom_bar(position="stack", stat="identity")+
geom_hline(yintercept=0.445, linetype="dashed", color = "blue")+
theme(axis.text.x = element_text(face = c('plain', 'plain', 'plain', 'plain', 'plain', 'bold')))
# Discussion on Gender Distribition: We can see from the above graph that the overall dataset has a pretty balanced gender distribution.
# Compared to the overall dataset, cluster 1 and 3 have the most deviation in gender distribution, where cluster 1 has significantly more females and cluster 3 more males.
# Cluster 5 also has more females but the extent is less extreme than cluster 1. Clusters 2 and 4 have similar gender distributions as in the overall dataset.
# Recommendations:
# 1. Cluster 4 would be the group to promote to because this group generally earns high income and spends frequently. This means that members in cluster 4 could be relatively price insensitive. The average age of this group is 32.7. They're likely young professionals who are financially independent. Acme can promote creative, popular, and high-quality products to this group.
# 2. Cluster 3 has high income but is conservative in spending. Acme should try to find out why these people don't spend at its stores. It could be that Acme's products don't appeal to these people. We can see that the average of of cluster 3 is 41.1. So these people can be heads of families. They might be more attracted to more family-focused products such as boats.
# 3. Cluster 5 has low income and doesn't like spending. To incentivize more spending from this group, Acme can consider strategies such as wholesale and 'everyday low price'.
| /Imputation_with_MICE_and_KMeansClustering.R | no_license | dtang9724/Machine-Learning | R | false | false | 5,562 | r | library(stats)
library(mice)
library(tidyverse)
library(factoextra)
# Part I
# Import data
raw <- read_csv('https://s3.amazonaws.com/notredame.analytics.data/mallcustomers.csv')
customer <- raw
# Change the data type for each column
customer <- customer %>%
mutate(CustomerID = as.integer(CustomerID),Gender = as.factor(Gender),
Age = as.integer(Age),Income = as.character(Income),SpendingScore = as.integer(SpendingScore))
# Part II
# Look at missing values
summary(customer)
# Do the imputation based on logistic regression
imputed_cust <- mice(customer,m=1,maxit=5,meth='logreg',seed=1234)
# First 10 suggested imputations of Gender
imputed_cust$imp$Gender[1:10,]
# Complete the customer dataset with the imputed values
complete_cust <- mice::complete(imputed_cust)
# Change the Income column to numbers
complete_cust$Income <- str_remove_all(complete_cust$Income,' USD|,')
complete_cust <- complete_cust %>%
mutate(Income = as.numeric(Income))
summary(complete_cust)
# Part III
# Normalize the data using z-score
selected_metrics <- complete_cust %>%
select(-CustomerID, -Gender, -Age)
sm_z <- scale(selected_metrics)
# Cluster sm_z based on selected metrics
set.seed(1234)
k_3 <- kmeans(sm_z, centers=3, nstart = 25)
k_3$size
k_3$centers
# Viz with fviz_cluster
fviz_cluster(k_3, data = sm_z)
# Assign cluster IDs to complete_cust
complete_cust$cluster <- k_3$cluster
# Use the elbow method to identify the optimal number of clusters
wcss <- vector()
n = 20
set.seed(1234)
for(k in 1:n) {
wcss[k] <- sum(kmeans(sm_z, k)$withinss)
}
wcss
# Visualize the values of WCSS as they relate to number of clusters
tibble(value = wcss) %>%
ggplot(mapping=aes(x=seq(1,length(wcss)), y=value)) +
geom_point()+
geom_line() +
labs(title = "The Elbow Method", y = "WCSS", x = "Number of Clusters (k)" ) +
theme_minimal()
# The optimal number of clusters should be 5 according to the elbow method.
# Part IV
# Use the optimal number of clusters and re-run the clustering
set.seed(1234)
k_5 <- kmeans(sm_z, centers=5, nstart = 25)
k_5$size
k_5$centers
fviz_cluster(k_5, data = sm_z)
# Looking at the above chart, we can assign the following labels to each cluster:
# Cluster 1: 'Poor spenders'
# Cluster 2: 'Neutral middle-class'
# Cluster 3: 'Rich savers'
# Cluster 4: 'Rich spenders'
# Cluster 5: 'Poor savers'
# Create a new column in complete_cust to show the cluster number
complete_cust$cluster <- k_5$cluster
# Examine age and gender for each cluster
cluster_age_gender <- complete_cust %>%
select(cluster,Age,Gender)
# Average age for each cluster
cluster_age_gender %>%
group_by(cluster) %>%
summarise(avg_age = mean(Age))
# Average age of the overall dataset
mean(complete_cust$Age)
# Discussion on age: we can see that the mean of age in each cluster is very different from that of the overall dataset.
# Gender distribution for each cluster with stacked barplot
cluster_gender <- cluster_age_gender %>%
group_by(cluster,Gender) %>%
tally() %>%
pivot_wider(names_from = Gender, values_from = n) %>%
as.data.frame() %>%
mutate(cluster = as.character(cluster))
complete_gender <- complete_cust %>%
group_by(Gender) %>%
tally() %>%
pivot_wider(names_from = Gender, values_from = n)
complete_gender['cluster'] <- 'Overall'
complete_gender <- complete_gender[c(3,1,2)]
gender_total <- rbind(complete_gender,cluster_gender)
gender_total <- gender_total %>%
transform(Femaleprob = Female/(Female+Male))
gender_total <- gender_total %>%
transform(Maleprob = Male/(Female+Male)) %>%
select(-Female,-Male)
names(gender_total) <- c('Cluster','Female','Male')
gender_total %>%
pivot_longer(-Cluster,names_to = 'Gender',values_to = 'Probability') %>%
ggplot(aes(fill=Gender, y=Probability, x=Cluster)) +
geom_bar(position="stack", stat="identity")+
geom_hline(yintercept=0.445, linetype="dashed", color = "blue")+
theme(axis.text.x = element_text(face = c('plain', 'plain', 'plain', 'plain', 'plain', 'bold')))
# Discussion on Gender Distribition: We can see from the above graph that the overall dataset has a pretty balanced gender distribution.
# Compared to the overall dataset, cluster 1 and 3 have the most deviation in gender distribution, where cluster 1 has significantly more females and cluster 3 more males.
# Cluster 5 also has more females but the extent is less extreme than cluster 1. Clusters 2 and 4 have similar gender distributions as in the overall dataset.
# Recommendations:
# 1. Cluster 4 would be the group to promote to because this group generally earns high income and spends frequently. This means that members in cluster 4 could be relatively price insensitive. The average age of this group is 32.7. They're likely young professionals who are financially independent. Acme can promote creative, popular, and high-quality products to this group.
# 2. Cluster 3 has high income but is conservative in spending. Acme should try to find out why these people don't spend at its stores. It could be that Acme's products don't appeal to these people. We can see that the average of of cluster 3 is 41.1. So these people can be heads of families. They might be more attracted to more family-focused products such as boats.
# 3. Cluster 5 has low income and doesn't like spending. To incentivize more spending from this group, Acme can consider strategies such as wholesale and 'everyday low price'.
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/theme_blog.R
\name{scale_color_blog}
\alias{scale_color_blog}
\title{color scale that works well with blog}
\usage{
scale_color_blog()
}
\description{
color scale that works well with blog
}
| /man/scale_color_blog.Rd | no_license | davan690/brotools | R | false | true | 269 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/theme_blog.R
\name{scale_color_blog}
\alias{scale_color_blog}
\title{color scale that works well with blog}
\usage{
scale_color_blog()
}
\description{
color scale that works well with blog
}
|
#
# environment-change.R, 18 Nov 15
#
# Data extracted from:
# We have it easy, but do we have it right?
# Todd Mytkowicz and Amer Diwan and Matthias Hauswirth and Peter F. Sweeney
#
# Example from:
# Evidence-based Software Engineering: based on the publicly available data
# Derek M. Jones
#
# TAG benchmark_performance performance_variation environment-variables
source("ESEUR_config.r")
library("gplots")
# A package containing a plotCI function that may be loaded during build
unloadNamespace("Rcapture")
ngs=read.csv(paste0(ESEUR_dir, "benchmark/ngs08.csv.xz"), as.is=TRUE)
plotCI(ngs$x, ngs$y, li=ngs$y.conf.min, ui=ngs$y.conf.max,
col="red", barcol="gray", gap=0.5,
xlab="Characters added to the environment",
ylab="Percentage performance difference")
| /benchmark/environment-change.R | no_license | shanechin/ESEUR-code-data | R | false | false | 772 | r | #
# environment-change.R, 18 Nov 15
#
# Data extracted from:
# We have it easy, but do we have it right?
# Todd Mytkowicz and Amer Diwan and Matthias Hauswirth and Peter F. Sweeney
#
# Example from:
# Evidence-based Software Engineering: based on the publicly available data
# Derek M. Jones
#
# TAG benchmark_performance performance_variation environment-variables
source("ESEUR_config.r")
library("gplots")
# A package containing a plotCI function that may be loaded during build
unloadNamespace("Rcapture")
ngs=read.csv(paste0(ESEUR_dir, "benchmark/ngs08.csv.xz"), as.is=TRUE)
plotCI(ngs$x, ngs$y, li=ngs$y.conf.min, ui=ngs$y.conf.max,
col="red", barcol="gray", gap=0.5,
xlab="Characters added to the environment",
ylab="Percentage performance difference")
|
number * 2
number <- 5 + 2
number * 2
a * 2 # error. as 'a' is not specified
times <- c(60, 35, 40, 33, 15, 20, 10) # This is an in-line comment
(times <- c(60, 35, 40, 33, 15, 20, 10)) # assign and print
times_hour <- times / 60
mean(times)
sqrt(times)
range(times) # minimum and maximum
times < 30
times == 20
times != 20
times > 20 & times < 50
times > 20 | times < 50
which(times < 30)
any(times < 30) # is there any true?
all(times < 30) # are all true?
sum(times < 30) # count number of trues. as TRUE equals 1, FALSE equals 0.
# subsetting:
times[3]
times[-3] # all except 3rd
times[c(2, 4)]
times[c(4, 2)] # order matters
times[1:5]
times[times < 30]
# cap entries
times[times > 50] <- 50 # AWESOME!!!
times
# NA:
length(times) # returned 7
times[8] <- NA
times
mean(times) # returned NA
?mean
mean(times, na.rm = TRUE)
mean(times, 0, TRUE)
mean(na.rm = TRUE, x = times)
mtcars
str(mtcars)
names(mtcars)
mtcars$mpg
| /weeks_1_and_2/cm002-r-exploration.r | no_license | stannam/STAT545-participation | R | false | false | 933 | r | number * 2
number <- 5 + 2
number * 2
a * 2 # error. as 'a' is not specified
times <- c(60, 35, 40, 33, 15, 20, 10) # This is an in-line comment
(times <- c(60, 35, 40, 33, 15, 20, 10)) # assign and print
times_hour <- times / 60
mean(times)
sqrt(times)
range(times) # minimum and maximum
times < 30
times == 20
times != 20
times > 20 & times < 50
times > 20 | times < 50
which(times < 30)
any(times < 30) # is there any true?
all(times < 30) # are all true?
sum(times < 30) # count number of trues. as TRUE equals 1, FALSE equals 0.
# subsetting:
times[3]
times[-3] # all except 3rd
times[c(2, 4)]
times[c(4, 2)] # order matters
times[1:5]
times[times < 30]
# cap entries
times[times > 50] <- 50 # AWESOME!!!
times
# NA:
length(times) # returned 7
times[8] <- NA
times
mean(times) # returned NA
?mean
mean(times, na.rm = TRUE)
mean(times, 0, TRUE)
mean(na.rm = TRUE, x = times)
mtcars
str(mtcars)
names(mtcars)
mtcars$mpg
|
context("Parallel Co-ordinate plots")
test_that("test input object", {
expect_error(ExpParcoord(mtcars, Stsize = 20,
Nvar = c("mpg", "disp", "wt", "gear")))
})
test_that("test output object", {
plotlst <- ExpParcoord(mtcars, Nvar = c("mpg", "disp", "wt", "gear"))
expect_is(plotlst, "ggplot")
})
| /tests/testthat/test_exppcp.R | no_license | cran/SmartEDA | R | false | false | 342 | r | context("Parallel Co-ordinate plots")
test_that("test input object", {
expect_error(ExpParcoord(mtcars, Stsize = 20,
Nvar = c("mpg", "disp", "wt", "gear")))
})
test_that("test output object", {
plotlst <- ExpParcoord(mtcars, Nvar = c("mpg", "disp", "wt", "gear"))
expect_is(plotlst, "ggplot")
})
|
# SQE API
#
# No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator)
#
# The version of the OpenAPI document: v1
#
# Generated by: https://openapi-generator.tech
#' @docType class
#' @title SimpleImageDTO
#' @description SimpleImageDTO Class
#' @format An \code{R6Class} generator object
#' @field id integer
#'
#' @field url character
#'
#' @field lightingType \link{Lighting}
#'
#' @field lightingDirection \link{Direction}
#'
#' @field waveLength list( character )
#'
#' @field type character
#'
#' @field side \link{SideDesignation}
#'
#' @field ppi integer
#'
#' @field master character
#'
#' @field catalogNumber integer
#'
#'
#' @importFrom R6 R6Class
#' @importFrom jsonlite fromJSON toJSON
#' @export
SimpleImageDTO <- R6::R6Class(
'SimpleImageDTO',
public = list(
`id` = NULL,
`url` = NULL,
`lightingType` = NULL,
`lightingDirection` = NULL,
`waveLength` = NULL,
`type` = NULL,
`side` = NULL,
`ppi` = NULL,
`master` = NULL,
`catalogNumber` = NULL,
initialize = function(`id`, `url`, `lightingType`, `lightingDirection`, `waveLength`, `type`, `side`, `ppi`, `master`, `catalogNumber`, ...){
local.optional.var <- list(...)
if (!missing(`id`)) {
stopifnot(is.numeric(`id`), length(`id`) == 1)
self$`id` <- `id`
}
if (!missing(`url`)) {
stopifnot(is.character(`url`), length(`url`) == 1)
self$`url` <- `url`
}
if (!missing(`lightingType`)) {
stopifnot(R6::is.R6(`lightingType`))
self$`lightingType` <- `lightingType`
}
if (!missing(`lightingDirection`)) {
stopifnot(R6::is.R6(`lightingDirection`))
self$`lightingDirection` <- `lightingDirection`
}
if (!missing(`waveLength`)) {
stopifnot(is.vector(`waveLength`), length(`waveLength`) != 0)
sapply(`waveLength`, function(x) stopifnot(is.character(x)))
self$`waveLength` <- `waveLength`
}
if (!missing(`type`)) {
stopifnot(is.character(`type`), length(`type`) == 1)
self$`type` <- `type`
}
if (!missing(`side`)) {
stopifnot(R6::is.R6(`side`))
self$`side` <- `side`
}
if (!missing(`ppi`)) {
stopifnot(is.numeric(`ppi`), length(`ppi`) == 1)
self$`ppi` <- `ppi`
}
if (!missing(`master`)) {
self$`master` <- `master`
}
if (!missing(`catalogNumber`)) {
stopifnot(is.numeric(`catalogNumber`), length(`catalogNumber`) == 1)
self$`catalogNumber` <- `catalogNumber`
}
},
toJSON = function() {
SimpleImageDTOObject <- list()
if (!is.null(self$`id`)) {
SimpleImageDTOObject[['id']] <-
self$`id`
}
if (!is.null(self$`url`)) {
SimpleImageDTOObject[['url']] <-
self$`url`
}
if (!is.null(self$`lightingType`)) {
SimpleImageDTOObject[['lightingType']] <-
self$`lightingType`$toJSON()
}
if (!is.null(self$`lightingDirection`)) {
SimpleImageDTOObject[['lightingDirection']] <-
self$`lightingDirection`$toJSON()
}
if (!is.null(self$`waveLength`)) {
SimpleImageDTOObject[['waveLength']] <-
self$`waveLength`
}
if (!is.null(self$`type`)) {
SimpleImageDTOObject[['type']] <-
self$`type`
}
if (!is.null(self$`side`)) {
SimpleImageDTOObject[['side']] <-
self$`side`$toJSON()
}
if (!is.null(self$`ppi`)) {
SimpleImageDTOObject[['ppi']] <-
self$`ppi`
}
if (!is.null(self$`master`)) {
SimpleImageDTOObject[['master']] <-
self$`master`
}
if (!is.null(self$`catalogNumber`)) {
SimpleImageDTOObject[['catalogNumber']] <-
self$`catalogNumber`
}
SimpleImageDTOObject
},
fromJSON = function(SimpleImageDTOJson) {
SimpleImageDTOObject <- jsonlite::fromJSON(SimpleImageDTOJson)
if (!is.null(SimpleImageDTOObject$`id`)) {
self$`id` <- SimpleImageDTOObject$`id`
}
if (!is.null(SimpleImageDTOObject$`url`)) {
self$`url` <- SimpleImageDTOObject$`url`
}
if (!is.null(SimpleImageDTOObject$`lightingType`)) {
lightingTypeObject <- Lighting$new()
lightingTypeObject$fromJSON(jsonlite::toJSON(SimpleImageDTOObject$lightingType, auto_unbox = TRUE, digits = NA))
self$`lightingType` <- lightingTypeObject
}
if (!is.null(SimpleImageDTOObject$`lightingDirection`)) {
lightingDirectionObject <- Direction$new()
lightingDirectionObject$fromJSON(jsonlite::toJSON(SimpleImageDTOObject$lightingDirection, auto_unbox = TRUE, digits = NA))
self$`lightingDirection` <- lightingDirectionObject
}
if (!is.null(SimpleImageDTOObject$`waveLength`)) {
self$`waveLength` <- ApiClient$new()$deserializeObj(SimpleImageDTOObject$`waveLength`, "array[character]", loadNamespace("qumranicaApiConnector"))
}
if (!is.null(SimpleImageDTOObject$`type`)) {
self$`type` <- SimpleImageDTOObject$`type`
}
if (!is.null(SimpleImageDTOObject$`side`)) {
sideObject <- SideDesignation$new()
sideObject$fromJSON(jsonlite::toJSON(SimpleImageDTOObject$side, auto_unbox = TRUE, digits = NA))
self$`side` <- sideObject
}
if (!is.null(SimpleImageDTOObject$`ppi`)) {
self$`ppi` <- SimpleImageDTOObject$`ppi`
}
if (!is.null(SimpleImageDTOObject$`master`)) {
self$`master` <- SimpleImageDTOObject$`master`
}
if (!is.null(SimpleImageDTOObject$`catalogNumber`)) {
self$`catalogNumber` <- SimpleImageDTOObject$`catalogNumber`
}
},
toJSONString = function() {
jsoncontent <- c(
if (!is.null(self$`id`)) {
sprintf(
'"id":
%d
',
self$`id`
)},
if (!is.null(self$`url`)) {
sprintf(
'"url":
"%s"
',
self$`url`
)},
if (!is.null(self$`lightingType`)) {
sprintf(
'"lightingType":
%s
',
jsonlite::toJSON(self$`lightingType`$toJSON(), auto_unbox=TRUE, digits = NA)
)},
if (!is.null(self$`lightingDirection`)) {
sprintf(
'"lightingDirection":
%s
',
jsonlite::toJSON(self$`lightingDirection`$toJSON(), auto_unbox=TRUE, digits = NA)
)},
if (!is.null(self$`waveLength`)) {
sprintf(
'"waveLength":
[%s]
',
paste(unlist(lapply(self$`waveLength`, function(x) paste0('"', x, '"'))), collapse=",")
)},
if (!is.null(self$`type`)) {
sprintf(
'"type":
"%s"
',
self$`type`
)},
if (!is.null(self$`side`)) {
sprintf(
'"side":
%s
',
jsonlite::toJSON(self$`side`$toJSON(), auto_unbox=TRUE, digits = NA)
)},
if (!is.null(self$`ppi`)) {
sprintf(
'"ppi":
%d
',
self$`ppi`
)},
if (!is.null(self$`master`)) {
sprintf(
'"master":
"%s"
',
self$`master`
)},
if (!is.null(self$`catalogNumber`)) {
sprintf(
'"catalogNumber":
%d
',
self$`catalogNumber`
)}
)
jsoncontent <- paste(jsoncontent, collapse = ",")
paste('{', jsoncontent, '}', sep = "")
},
fromJSONString = function(SimpleImageDTOJson) {
SimpleImageDTOObject <- jsonlite::fromJSON(SimpleImageDTOJson)
self$`id` <- SimpleImageDTOObject$`id`
self$`url` <- SimpleImageDTOObject$`url`
self$`lightingType` <- Lighting$new()$fromJSON(jsonlite::toJSON(SimpleImageDTOObject$lightingType, auto_unbox = TRUE, digits = NA))
self$`lightingDirection` <- Direction$new()$fromJSON(jsonlite::toJSON(SimpleImageDTOObject$lightingDirection, auto_unbox = TRUE, digits = NA))
self$`waveLength` <- ApiClient$new()$deserializeObj(SimpleImageDTOObject$`waveLength`, "array[character]", loadNamespace("qumranicaApiConnector"))
self$`type` <- SimpleImageDTOObject$`type`
self$`side` <- SideDesignation$new()$fromJSON(jsonlite::toJSON(SimpleImageDTOObject$side, auto_unbox = TRUE, digits = NA))
self$`ppi` <- SimpleImageDTOObject$`ppi`
self$`master` <- SimpleImageDTOObject$`master`
self$`catalogNumber` <- SimpleImageDTOObject$`catalogNumber`
self
}
)
)
| /libs/r/R/simple_image_dto.R | permissive | Scripta-Qumranica-Electronica/SQE_API_Connectors | R | false | false | 8,644 | r | # SQE API
#
# No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator)
#
# The version of the OpenAPI document: v1
#
# Generated by: https://openapi-generator.tech
#' @docType class
#' @title SimpleImageDTO
#' @description SimpleImageDTO Class
#' @format An \code{R6Class} generator object
#' @field id integer
#'
#' @field url character
#'
#' @field lightingType \link{Lighting}
#'
#' @field lightingDirection \link{Direction}
#'
#' @field waveLength list( character )
#'
#' @field type character
#'
#' @field side \link{SideDesignation}
#'
#' @field ppi integer
#'
#' @field master character
#'
#' @field catalogNumber integer
#'
#'
#' @importFrom R6 R6Class
#' @importFrom jsonlite fromJSON toJSON
#' @export
SimpleImageDTO <- R6::R6Class(
'SimpleImageDTO',
public = list(
`id` = NULL,
`url` = NULL,
`lightingType` = NULL,
`lightingDirection` = NULL,
`waveLength` = NULL,
`type` = NULL,
`side` = NULL,
`ppi` = NULL,
`master` = NULL,
`catalogNumber` = NULL,
initialize = function(`id`, `url`, `lightingType`, `lightingDirection`, `waveLength`, `type`, `side`, `ppi`, `master`, `catalogNumber`, ...){
local.optional.var <- list(...)
if (!missing(`id`)) {
stopifnot(is.numeric(`id`), length(`id`) == 1)
self$`id` <- `id`
}
if (!missing(`url`)) {
stopifnot(is.character(`url`), length(`url`) == 1)
self$`url` <- `url`
}
if (!missing(`lightingType`)) {
stopifnot(R6::is.R6(`lightingType`))
self$`lightingType` <- `lightingType`
}
if (!missing(`lightingDirection`)) {
stopifnot(R6::is.R6(`lightingDirection`))
self$`lightingDirection` <- `lightingDirection`
}
if (!missing(`waveLength`)) {
stopifnot(is.vector(`waveLength`), length(`waveLength`) != 0)
sapply(`waveLength`, function(x) stopifnot(is.character(x)))
self$`waveLength` <- `waveLength`
}
if (!missing(`type`)) {
stopifnot(is.character(`type`), length(`type`) == 1)
self$`type` <- `type`
}
if (!missing(`side`)) {
stopifnot(R6::is.R6(`side`))
self$`side` <- `side`
}
if (!missing(`ppi`)) {
stopifnot(is.numeric(`ppi`), length(`ppi`) == 1)
self$`ppi` <- `ppi`
}
if (!missing(`master`)) {
self$`master` <- `master`
}
if (!missing(`catalogNumber`)) {
stopifnot(is.numeric(`catalogNumber`), length(`catalogNumber`) == 1)
self$`catalogNumber` <- `catalogNumber`
}
},
toJSON = function() {
SimpleImageDTOObject <- list()
if (!is.null(self$`id`)) {
SimpleImageDTOObject[['id']] <-
self$`id`
}
if (!is.null(self$`url`)) {
SimpleImageDTOObject[['url']] <-
self$`url`
}
if (!is.null(self$`lightingType`)) {
SimpleImageDTOObject[['lightingType']] <-
self$`lightingType`$toJSON()
}
if (!is.null(self$`lightingDirection`)) {
SimpleImageDTOObject[['lightingDirection']] <-
self$`lightingDirection`$toJSON()
}
if (!is.null(self$`waveLength`)) {
SimpleImageDTOObject[['waveLength']] <-
self$`waveLength`
}
if (!is.null(self$`type`)) {
SimpleImageDTOObject[['type']] <-
self$`type`
}
if (!is.null(self$`side`)) {
SimpleImageDTOObject[['side']] <-
self$`side`$toJSON()
}
if (!is.null(self$`ppi`)) {
SimpleImageDTOObject[['ppi']] <-
self$`ppi`
}
if (!is.null(self$`master`)) {
SimpleImageDTOObject[['master']] <-
self$`master`
}
if (!is.null(self$`catalogNumber`)) {
SimpleImageDTOObject[['catalogNumber']] <-
self$`catalogNumber`
}
SimpleImageDTOObject
},
fromJSON = function(SimpleImageDTOJson) {
SimpleImageDTOObject <- jsonlite::fromJSON(SimpleImageDTOJson)
if (!is.null(SimpleImageDTOObject$`id`)) {
self$`id` <- SimpleImageDTOObject$`id`
}
if (!is.null(SimpleImageDTOObject$`url`)) {
self$`url` <- SimpleImageDTOObject$`url`
}
if (!is.null(SimpleImageDTOObject$`lightingType`)) {
lightingTypeObject <- Lighting$new()
lightingTypeObject$fromJSON(jsonlite::toJSON(SimpleImageDTOObject$lightingType, auto_unbox = TRUE, digits = NA))
self$`lightingType` <- lightingTypeObject
}
if (!is.null(SimpleImageDTOObject$`lightingDirection`)) {
lightingDirectionObject <- Direction$new()
lightingDirectionObject$fromJSON(jsonlite::toJSON(SimpleImageDTOObject$lightingDirection, auto_unbox = TRUE, digits = NA))
self$`lightingDirection` <- lightingDirectionObject
}
if (!is.null(SimpleImageDTOObject$`waveLength`)) {
self$`waveLength` <- ApiClient$new()$deserializeObj(SimpleImageDTOObject$`waveLength`, "array[character]", loadNamespace("qumranicaApiConnector"))
}
if (!is.null(SimpleImageDTOObject$`type`)) {
self$`type` <- SimpleImageDTOObject$`type`
}
if (!is.null(SimpleImageDTOObject$`side`)) {
sideObject <- SideDesignation$new()
sideObject$fromJSON(jsonlite::toJSON(SimpleImageDTOObject$side, auto_unbox = TRUE, digits = NA))
self$`side` <- sideObject
}
if (!is.null(SimpleImageDTOObject$`ppi`)) {
self$`ppi` <- SimpleImageDTOObject$`ppi`
}
if (!is.null(SimpleImageDTOObject$`master`)) {
self$`master` <- SimpleImageDTOObject$`master`
}
if (!is.null(SimpleImageDTOObject$`catalogNumber`)) {
self$`catalogNumber` <- SimpleImageDTOObject$`catalogNumber`
}
},
toJSONString = function() {
jsoncontent <- c(
if (!is.null(self$`id`)) {
sprintf(
'"id":
%d
',
self$`id`
)},
if (!is.null(self$`url`)) {
sprintf(
'"url":
"%s"
',
self$`url`
)},
if (!is.null(self$`lightingType`)) {
sprintf(
'"lightingType":
%s
',
jsonlite::toJSON(self$`lightingType`$toJSON(), auto_unbox=TRUE, digits = NA)
)},
if (!is.null(self$`lightingDirection`)) {
sprintf(
'"lightingDirection":
%s
',
jsonlite::toJSON(self$`lightingDirection`$toJSON(), auto_unbox=TRUE, digits = NA)
)},
if (!is.null(self$`waveLength`)) {
sprintf(
'"waveLength":
[%s]
',
paste(unlist(lapply(self$`waveLength`, function(x) paste0('"', x, '"'))), collapse=",")
)},
if (!is.null(self$`type`)) {
sprintf(
'"type":
"%s"
',
self$`type`
)},
if (!is.null(self$`side`)) {
sprintf(
'"side":
%s
',
jsonlite::toJSON(self$`side`$toJSON(), auto_unbox=TRUE, digits = NA)
)},
if (!is.null(self$`ppi`)) {
sprintf(
'"ppi":
%d
',
self$`ppi`
)},
if (!is.null(self$`master`)) {
sprintf(
'"master":
"%s"
',
self$`master`
)},
if (!is.null(self$`catalogNumber`)) {
sprintf(
'"catalogNumber":
%d
',
self$`catalogNumber`
)}
)
jsoncontent <- paste(jsoncontent, collapse = ",")
paste('{', jsoncontent, '}', sep = "")
},
fromJSONString = function(SimpleImageDTOJson) {
SimpleImageDTOObject <- jsonlite::fromJSON(SimpleImageDTOJson)
self$`id` <- SimpleImageDTOObject$`id`
self$`url` <- SimpleImageDTOObject$`url`
self$`lightingType` <- Lighting$new()$fromJSON(jsonlite::toJSON(SimpleImageDTOObject$lightingType, auto_unbox = TRUE, digits = NA))
self$`lightingDirection` <- Direction$new()$fromJSON(jsonlite::toJSON(SimpleImageDTOObject$lightingDirection, auto_unbox = TRUE, digits = NA))
self$`waveLength` <- ApiClient$new()$deserializeObj(SimpleImageDTOObject$`waveLength`, "array[character]", loadNamespace("qumranicaApiConnector"))
self$`type` <- SimpleImageDTOObject$`type`
self$`side` <- SideDesignation$new()$fromJSON(jsonlite::toJSON(SimpleImageDTOObject$side, auto_unbox = TRUE, digits = NA))
self$`ppi` <- SimpleImageDTOObject$`ppi`
self$`master` <- SimpleImageDTOObject$`master`
self$`catalogNumber` <- SimpleImageDTOObject$`catalogNumber`
self
}
)
)
|
graphConvMC_diff <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=0.5) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=0.5)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=0.5)+
xlab("") + ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_diff4 <- function(df,df2,df3,df4, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
df4$individual <- as.factor(df4$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="black",linetype = 2,size=1)+
xlab("") + ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_diffz <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 1,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 1,size=1)+
xlab("") +scale_x_log10()+ ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=20))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_diffw <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 1,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 1,size=1)+
xlab("") +scale_x_log10()+ ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=20))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_se1 <- function(df,df2, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+
xlab("") + ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_se2 <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+
xlab("") + ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sec <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype=1,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype=1,size=1)+
xlab("") + scale_x_log10()+ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sed <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype=1,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype=1,size=1)+
xlab("") + scale_x_log10()+ ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sec4 <- function(df,df2,df3,df4, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
df4$individual <- as.factor(df4$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="black",linetype = 2,size=1)+
xlab("") + scale_x_log10()+ylab(expression(paste(z,"1"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sed4 <- function(df,df2,df3,df4, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
df4$individual <- as.factor(df4$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="black",linetype = 2,size=1)+
xlab("") + scale_x_log10()+ ylab(expression(paste(omega,"1"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
seplot <- function(df,colname, title=NULL, ylim=NULL, legend=TRUE)
{
G <- (ncol(df)-2)/3
df$algo <- as.factor(df$algo)
df$individual <- as.factor(df$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
graf <- ggplot(df)+geom_line(aes(iterations,value,by=value,colour = df$algo,linetype=df$individual),show.legend = legend,size=1)+guides(linetype=FALSE,size=FALSE)+labs(colour='batch size (in %)') +
xlab("passes")+ ylab(colname) + theme_bw() + theme(
panel.background = element_rect(colour = "grey", size=1),legend.position = c(0.8, 0.6)) + guides(color = guide_legend(override.aes = list(size=5)))+
theme(legend.text=element_text(size=20),legend.title=element_text(size=20))+ theme(panel.border = element_blank() ,axis.text.x = element_text(color="black",
size=20, angle=0),
axis.text.y = element_text(color="black",
size=20, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", size=20))# + theme(aspect.ratio=1)
grid.arrange(graf)
# do.call("grid.arrange", c(graf, ncol=1, top=title))
}
seplot2 <- function(df,colname, title=NULL, ylim=NULL, legend=TRUE)
{
G <- (ncol(df)-2)/3
df$algo <- as.factor(df$algo)
df$method <- as.factor(df$method)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
graf <- ggplot(df)+geom_line(aes(iterations,value,by=value,colour = df$algo,linetype=df$method),show.legend = legend,size=1)+guides(linetype=FALSE,size=FALSE)+labs(colour='batch size (in %)') +
xlab("passes")+ ylab(colname) + theme_bw() + theme(
panel.background = element_rect(colour = "grey", size=1),legend.position = c(0.8, 0.6)) + guides(color = guide_legend(override.aes = list(size=5)))+
theme(legend.text=element_text(size=20),legend.title=element_text(size=20))+ theme(panel.border = element_blank() ,axis.text.x = element_text(color="black",
size=20, angle=0),
axis.text.y = element_text(color="black",
size=20, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", size=20))# + theme(aspect.ratio=1)
grid.arrange(graf)
# do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_new <- function(df, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)])) +
xlab("iteration") + ylab(names(df[j]))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=ncol(df)-2, top=title))
}
graphConvMC_twokernels <- function(df,df2, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df))))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)])) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="blue")+
xlab("iteration")+ ylab(names(df[j])) + theme_bw()
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_3 <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df))))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)])) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="blue")+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="red")+
xlab("iteration")+ ylab(names(df[j])) + theme_bw()
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_5 <- function(df,df2,df3,df4,df5, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df))))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)])) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="blue")+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="red")+
geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="yellow")+geom_line(aes_string(df5[,1],df5[,j],by=df5[,ncol(df5)]),colour="pink")+
xlab("iteration")+ ylab(names(df[j])) + theme_bw()
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_6 <- function(df,df2,df3,df4,df5,df6, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df))))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)])) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="blue")+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="red")+
geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="yellow")+geom_line(aes_string(df5[,1],df5[,j],by=df5[,ncol(df5)]),colour="pink")+
geom_line(aes_string(df6[,1],df6[,j],by=df6[,ncol(df6)]),colour="green")+
xlab("iteration")+ ylab(names(df[j])) + theme_bw()
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_diff <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9,10,11,12,13,14)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+
xlab("") + ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_diff4 <- function(df,df2,df3,df4, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
df4$individual <- as.factor(df4$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="black",linetype = 2,size=1)+
xlab("") + ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_diffz <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="black",linetype= "solid",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="black",linetype="longdash",size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="black",linetype="dotted",size=1)+
xlab("") +scale_x_log10()+ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=30, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=30, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_diffw <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="black",linetype= "solid",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="black",linetype="longdash",size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="black",linetype="dotted",size=1)+
xlab("") +scale_x_log10()+ ylab(expression(paste(omega,"2"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=30, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=30, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_se1 <- function(df,df2, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+
xlab("") + ylab(expression(paste(lambda))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_se2 <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+
xlab("") + ylab("") + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sec <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype=1,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype=1,size=1)+
xlab("") + scale_x_log10()+ylab(expression(paste(beta,"2"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=30, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=30, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sed <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype=1,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype=1,size=1)+
xlab("") + scale_x_log10()+ ylab(expression(paste(omega,"2"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=30, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=30, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sec4 <- function(df,df2,df3,df4, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
df4$individual <- as.factor(df4$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="black",linetype = 2,size=1)+
xlab("") + scale_x_log10()+ylab(expression(paste(beta,"1"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sed4 <- function(df,df2,df3,df4, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
df4$individual <- as.factor(df4$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="black",linetype = 2,size=1)+
xlab("") + scale_x_log10()+ ylab(expression(paste(omega,"1"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sec_icml <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="black",linetype= "solid",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="black",linetype="longdash",size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="black",linetype="dotted",size=1)+
xlab("") + scale_x_log10()+ylab(expression(paste(beta,"2"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=30, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=30, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sed_icml <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="black",linetype= "solid",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="black",linetype="longdash",size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="black",linetype="dotted",size=1)+
xlab("") + scale_x_log10()+ ylab(expression(paste(omega,"2"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=30, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=30, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_se <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="black",linetype= "solid",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="black",linetype="longdash",size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="black",linetype="dotted",size=1)+
xlab("") + scale_x_log10()+ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=15, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=15, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
plot_run <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$algo <- as.factor(df$algo)
df2$algo <- as.factor(df2$algo)
df3$algo <- as.factor(df3$algo)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="black",linetype= "solid",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="black",linetype="dashed",size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="black",linetype="dotted",size=1)+
xlab("")+ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=15, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=15, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
| /ttsem/3)PkModel/R/plots.R | no_license | BelhalK/PapersCode | R | false | false | 38,592 | r | graphConvMC_diff <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=0.5) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=0.5)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=0.5)+
xlab("") + ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_diff4 <- function(df,df2,df3,df4, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
df4$individual <- as.factor(df4$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="black",linetype = 2,size=1)+
xlab("") + ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_diffz <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 1,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 1,size=1)+
xlab("") +scale_x_log10()+ ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=20))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_diffw <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 1,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 1,size=1)+
xlab("") +scale_x_log10()+ ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=20))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_se1 <- function(df,df2, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+
xlab("") + ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_se2 <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+
xlab("") + ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sec <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype=1,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype=1,size=1)+
xlab("") + scale_x_log10()+ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sed <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype=1,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype=1,size=1)+
xlab("") + scale_x_log10()+ ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sec4 <- function(df,df2,df3,df4, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
df4$individual <- as.factor(df4$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="black",linetype = 2,size=1)+
xlab("") + scale_x_log10()+ylab(expression(paste(z,"1"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sed4 <- function(df,df2,df3,df4, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
df4$individual <- as.factor(df4$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="black",linetype = 2,size=1)+
xlab("") + scale_x_log10()+ ylab(expression(paste(omega,"1"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
seplot <- function(df,colname, title=NULL, ylim=NULL, legend=TRUE)
{
G <- (ncol(df)-2)/3
df$algo <- as.factor(df$algo)
df$individual <- as.factor(df$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
graf <- ggplot(df)+geom_line(aes(iterations,value,by=value,colour = df$algo,linetype=df$individual),show.legend = legend,size=1)+guides(linetype=FALSE,size=FALSE)+labs(colour='batch size (in %)') +
xlab("passes")+ ylab(colname) + theme_bw() + theme(
panel.background = element_rect(colour = "grey", size=1),legend.position = c(0.8, 0.6)) + guides(color = guide_legend(override.aes = list(size=5)))+
theme(legend.text=element_text(size=20),legend.title=element_text(size=20))+ theme(panel.border = element_blank() ,axis.text.x = element_text(color="black",
size=20, angle=0),
axis.text.y = element_text(color="black",
size=20, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", size=20))# + theme(aspect.ratio=1)
grid.arrange(graf)
# do.call("grid.arrange", c(graf, ncol=1, top=title))
}
seplot2 <- function(df,colname, title=NULL, ylim=NULL, legend=TRUE)
{
G <- (ncol(df)-2)/3
df$algo <- as.factor(df$algo)
df$method <- as.factor(df$method)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
graf <- ggplot(df)+geom_line(aes(iterations,value,by=value,colour = df$algo,linetype=df$method),show.legend = legend,size=1)+guides(linetype=FALSE,size=FALSE)+labs(colour='batch size (in %)') +
xlab("passes")+ ylab(colname) + theme_bw() + theme(
panel.background = element_rect(colour = "grey", size=1),legend.position = c(0.8, 0.6)) + guides(color = guide_legend(override.aes = list(size=5)))+
theme(legend.text=element_text(size=20),legend.title=element_text(size=20))+ theme(panel.border = element_blank() ,axis.text.x = element_text(color="black",
size=20, angle=0),
axis.text.y = element_text(color="black",
size=20, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", size=20))# + theme(aspect.ratio=1)
grid.arrange(graf)
# do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_new <- function(df, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)])) +
xlab("iteration") + ylab(names(df[j]))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=ncol(df)-2, top=title))
}
graphConvMC_twokernels <- function(df,df2, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df))))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)])) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="blue")+
xlab("iteration")+ ylab(names(df[j])) + theme_bw()
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_3 <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df))))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)])) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="blue")+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="red")+
xlab("iteration")+ ylab(names(df[j])) + theme_bw()
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_5 <- function(df,df2,df3,df4,df5, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df))))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)])) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="blue")+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="red")+
geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="yellow")+geom_line(aes_string(df5[,1],df5[,j],by=df5[,ncol(df5)]),colour="pink")+
xlab("iteration")+ ylab(names(df[j])) + theme_bw()
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_6 <- function(df,df2,df3,df4,df5,df6, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df))))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)])) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="blue")+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="red")+
geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="yellow")+geom_line(aes_string(df5[,1],df5[,j],by=df5[,ncol(df5)]),colour="pink")+
geom_line(aes_string(df6[,1],df6[,j],by=df6[,ncol(df6)]),colour="green")+
xlab("iteration")+ ylab(names(df[j])) + theme_bw()
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_diff <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9,10,11,12,13,14)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+
xlab("") + ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_diff4 <- function(df,df2,df3,df4, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
df4$individual <- as.factor(df4$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="black",linetype = 2,size=1)+
xlab("") + ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=3, top=title))
}
graphConvMC_diffz <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="black",linetype= "solid",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="black",linetype="longdash",size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="black",linetype="dotted",size=1)+
xlab("") +scale_x_log10()+ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=30, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=30, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_diffw <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="black",linetype= "solid",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="black",linetype="longdash",size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="black",linetype="dotted",size=1)+
xlab("") +scale_x_log10()+ ylab(expression(paste(omega,"2"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=30, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=30, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_se1 <- function(df,df2, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+
xlab("") + ylab(expression(paste(lambda))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_se2 <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+
xlab("") + ylab("") + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sec <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype=1,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype=1,size=1)+
xlab("") + scale_x_log10()+ylab(expression(paste(beta,"2"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=30, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=30, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sed <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype=1,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype=1,size=1)+
xlab("") + scale_x_log10()+ ylab(expression(paste(omega,"2"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=30, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=30, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sec4 <- function(df,df2,df3,df4, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
df4$individual <- as.factor(df4$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="black",linetype = 2,size=1)+
xlab("") + scale_x_log10()+ylab(expression(paste(beta,"1"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sed4 <- function(df,df2,df3,df4, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
df4$individual <- as.factor(df4$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="blue",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="red",linetype = 2,size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="green",linetype = 2,size=1)+geom_line(aes_string(df4[,1],df4[,j],by=df4[,ncol(df4)]),colour="black",linetype = 2,size=1)+
xlab("") + scale_x_log10()+ ylab(expression(paste(omega,"1"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=10, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=10, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sec_icml <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="black",linetype= "solid",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="black",linetype="longdash",size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="black",linetype="dotted",size=1)+
xlab("") + scale_x_log10()+ylab(expression(paste(beta,"2"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=30, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=30, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_sed_icml <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="black",linetype= "solid",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="black",linetype="longdash",size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="black",linetype="dotted",size=1)+
xlab("") + scale_x_log10()+ ylab(expression(paste(omega,"2"))) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=30, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=30, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=30))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
graphConvMC_se <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$individual <- as.factor(df$individual)
df2$individual <- as.factor(df2$individual)
df3$individual <- as.factor(df3$individual)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="black",linetype= "solid",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="black",linetype="longdash",size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="black",linetype="dotted",size=1)+
xlab("") + scale_x_log10()+ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=15, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=15, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
plot_run <- function(df,df2,df3, title=NULL, ylim=NULL)
{
G <- (ncol(df)-2)/3
df$algo <- as.factor(df$algo)
df2$algo <- as.factor(df2$algo)
df3$algo <- as.factor(df3$algo)
ylim <-rep(ylim,each=2)
graf <- vector("list", ncol(df)-2)
o <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for (j in (2:(ncol(df)-1)))
{
grafj <- ggplot(df)+geom_line(aes_string(df[,1],df[,j],by=df[,ncol(df)]),colour="black",linetype= "solid",size=1) +geom_line(aes_string(df2[,1],df2[,j],by=df2[,ncol(df2)]),colour="black",linetype="dashed",size=1)+geom_line(aes_string(df3[,1],df3[,j],by=df3[,ncol(df3)]),colour="black",linetype="dotted",size=1)+
xlab("")+ylab(names(df[j])) + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),
panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"),axis.text.x = element_text(face="bold", color="black",
size=15, angle=0),
axis.text.y = element_text(face="bold", color="black",
size=15, angle=0))+theme(axis.title = element_text(family = "Trebuchet MS", color="black", face="bold", size=15))
if (!is.null(ylim))
grafj <- grafj + ylim(ylim[j-1]*c(-1,1))
graf[[o[j]]] <- grafj
}
do.call("grid.arrange", c(graf, ncol=1, top=title))
}
|
## The following two functions work together to compute the inverse of an invertible matrix and store it in the cache environment so it does not have to be computed for more than once.
## The first function, `makeCacheMatrix` creates a special "matrix", which is really a list containing a function to do the below:
## 1. set the value of the matrix
## 2. get the value of the matrix
## 3. set the value of the inverse of the matrix
## 4. get the value of the inverse of the matrix
makeCacheMatrix <- function(x = matrix()) {
# initialize the inverse to NULL
inv <- NULL
# create the matrix and set the inverse to NULL in the working environment
set <- function(y) {
x <<- y
inv <<- NULL
}
# get the value of the matrix
get <- function() x
# Assign the value of solve to inv
setinverse <- function(inverse) inv <<- inverse
# get the value of inv, which is the inverse of the matrix
getinverse <- function() inv
# return a list of the functions that are created in this working environment
list(set = set, get = get,
setinverse = setinverse,
getinverse = getinverse)
}
## The following function computes the inverse of the special "matrix" created with the above function. However, it first checks to see if the inverse has already been computed. If so, it `get`s the inverse from the cache and skips the computation. Otherwise, it computes the inverse of the matrix and sets the value of the inverse in the cache via the `setinverse` function.
cacheSolve <- function(x, ...) {
# get the inv from the cache environment
inv <- x$getinverse()
# if inv already exsits in the cache environment, return the value of inv from the cache environment
if(!is.null(inv)) {
message("getting cached data")
return(inv)
}
# create the matrix
data <- x$get()
# compute the inverse of the matrix
inv <- solve(data, ...)
# assign the computed inverse of the matrix to inv in the cache environment
x$setinverse(inv)
# return the inverse of the matrix 'x'
inv
}
| /cachematrix.R | no_license | Wyldstyle-MS1987/ProgrammingAssignment2 | R | false | false | 2,286 | r | ## The following two functions work together to compute the inverse of an invertible matrix and store it in the cache environment so it does not have to be computed for more than once.
## The first function, `makeCacheMatrix` creates a special "matrix", which is really a list containing a function to do the below:
## 1. set the value of the matrix
## 2. get the value of the matrix
## 3. set the value of the inverse of the matrix
## 4. get the value of the inverse of the matrix
makeCacheMatrix <- function(x = matrix()) {
# initialize the inverse to NULL
inv <- NULL
# create the matrix and set the inverse to NULL in the working environment
set <- function(y) {
x <<- y
inv <<- NULL
}
# get the value of the matrix
get <- function() x
# Assign the value of solve to inv
setinverse <- function(inverse) inv <<- inverse
# get the value of inv, which is the inverse of the matrix
getinverse <- function() inv
# return a list of the functions that are created in this working environment
list(set = set, get = get,
setinverse = setinverse,
getinverse = getinverse)
}
## The following function computes the inverse of the special "matrix" created with the above function. However, it first checks to see if the inverse has already been computed. If so, it `get`s the inverse from the cache and skips the computation. Otherwise, it computes the inverse of the matrix and sets the value of the inverse in the cache via the `setinverse` function.
cacheSolve <- function(x, ...) {
# get the inv from the cache environment
inv <- x$getinverse()
# if inv already exsits in the cache environment, return the value of inv from the cache environment
if(!is.null(inv)) {
message("getting cached data")
return(inv)
}
# create the matrix
data <- x$get()
# compute the inverse of the matrix
inv <- solve(data, ...)
# assign the computed inverse of the matrix to inv in the cache environment
x$setinverse(inv)
# return the inverse of the matrix 'x'
inv
}
|
#####################################################################################################################################
################################## AVIAN N~FC REGRESSION MODELS COMPARISON #########################################################
#####################################################################################################################################
# PURPOSE: compare AVIA responses (N.point ~ FC) at different buffer sizes around forest points (800m, 3000m )
# AUTHORS: Tyler Huntington, Elizabeth Nichols, Larissa Boesing, Jean Paul Metzger
# set working directory
setwd("/Users/tyler/Dropbox/Interface/Dados/Non-Geospatial/Winners_Losers_Traits/Analysis/Data")
# load base beetle dataframe
a.df <- readRDS("a.df.rds")
# load libraries
library(xlsx)
library(reshape)
library(plyr)
library(reshape2)
library(dplyr)
library (rgdal)
library (rgeos)
library (spdep)
library (plyr)
library(ggplot2)
library (caper)
library (ape)
library (MuMIn)
library (lme4)
################################### READ IN & ORGANIZE DATA
# subset a.df for species that were captured 3 or more times
a.df <- subset ( a.df , a.df$N.tot > 3 )
# subset a.df for species that were sighted at three distinct points
a.df <- subset ( a.df , a.df$Pts.total > 3 )
# subset out species with unclear data trends
a.df <- subset (a.df, a.df$Species!= "Ramphas.toco")
a.df <- subset (a.df, a.df$Species!= "Phae.eury")
a.df <- subset (a.df, a.df$Species!= "Phae.pret")
a.df <- subset (a.df, a.df$Species!= "Myiornis.aur")
a.df <- subset (a.df, a.df$Species!= "Malac.stri")
a.df$Species <- factor(a.df$Species)
a.df$Species <- droplevels ( a.df$Species )
# create df with unique entry for each beetle species and associated attributes
a.sp.df <- ddply ( a.df, .(Species , Nest , Biogeo , BiogeoPlas , BM ,
Fecund, DietPlas, Diet, Habitat, HabPlas, LS.total, Pts.total),
summarize, N.tot = mean(N.tot))
### Run N ~ FC multivariate regressions for each species using FC vals at each buffer size as IVs
# initialize blank df for top spatial models per species
a_spatial_mod.df <- data.frame("Species" = factor(),
"Top_Spatial_Mod" = factor())
# initialize model counter vars
b1000_count_a_tm <- 0
b800_count_a_tm <- 0
b3000_count_a_tm <- 0
b400_count_a_tm <- 0
b200_count_a_tm <- 0
# iterate over species
for (i in a.sp.df$Species) {
# initialize species row to be inserted in top model df
a_species_row <- NULL
# create df with entries as all captures of a particular species
a.par.df <- subset (a.df , a.df$Species == i)
# regress N ~ FC for particular species at multiple spatial scales
a.lm.par <- glmer (a.par.df$N ~ a.par.df$perFC_200
+ a.par.df$perFC_400
+ a.par.df$perFC_800
+ a.par.df$perFC_1000
+ a.par.df$perFC_3000
+ (1|a.par.df$Landscape), family = poisson)
options(na.action = "na.fail")
a.par.spatial.mod.comp <- dredge(a.lm.par, beta = c("none", "sd", "partial.sd"), evaluate = TRUE,
m.lim = c(1, 1), rank = "AICc", fixed = NULL)
a.par.top.model <- get.models(a.par.spatial.mod.comp, subset = delta == 0)
top_model_code <- rownames (summary(a.par.top.model))
# KEY FOR MODEL NUMBERS IN DREDGE TABLE
# Model 2 = 1000m buffer
# Model 17 = 800m buffer
# Model 5 = 3000m buffer
# Model 9 = 400m buffer
# Model 3 = 200m buffer
# use model code to assign top model for particular species
if (top_model_code == "2") {
top_mod <- 1000
b1000_count_a_tm <- b1000_count_a_tm + 1
} else if (top_model_code == "17") {
top_mod <- 800
b800_count_a_tm <- b800_count_a_tm + 1
} else if (top_model_code == "5") {
top_mod <- 3000
b3000_count_a_tm <- b3000_count_a_tm + 1
} else if (top_model_code == "9") {
top_mod <- 400
b400_count_a_tm <- b400_count_a_tm + 1
} else if (top_model_code == "3") {
top_mod <- 200
b200_count_a_tm <- b200_count_a_tm + 1
}
a_species_row <- data.frame(factor(i), factor(top_mod))
a_spatial_mod.df <- rbind (a_spatial_mod.df, a_species_row)
print(a_spatial_mod.df)
}
# rectify column names of a_spatial_mod.df
colnames(a_spatial_mod.df) <- c("Species", "Top_Spatial_Mod")
################################################ ANALYZE RESULTS OF SPATIAL SCALE COMPS
# plot distribution of top spatial models across beetle species
par(mar = c(5, 5, 5, 5) + 0.1)
counts = c(b200_count_a_tm, b400_count_a_tm, b800_count_a_tm, b1000_count_a_tm, b3000_count_a_tm)
labels = c("200", "400", "800", "1000", "3000")
barplot <- barplot(counts, space = 0.25, names.arg = labels, ylim = c(0 ,50),
main = "Birds: Top Model Approach\n for Determining Spatial Scale",
xlab = "Buffer Radius (m)",
ylab = "# Species w/ Model in Candidate Set")
text(barplot, counts, labels = counts, pos = 3)
table(a_spatial_mod.df$Top_Spatial_Mod)
| /Rcode/deprecated_code_archive/WLT_working_analysis/a_regressions/a_beta_spatial_compare_top_mod.R | no_license | tylerhuntington222/BENS-Winners-Losers-Traits | R | false | false | 5,088 | r | #####################################################################################################################################
################################## AVIAN N~FC REGRESSION MODELS COMPARISON #########################################################
#####################################################################################################################################
# PURPOSE: compare AVIA responses (N.point ~ FC) at different buffer sizes around forest points (800m, 3000m )
# AUTHORS: Tyler Huntington, Elizabeth Nichols, Larissa Boesing, Jean Paul Metzger
# set working directory
setwd("/Users/tyler/Dropbox/Interface/Dados/Non-Geospatial/Winners_Losers_Traits/Analysis/Data")
# load base beetle dataframe
a.df <- readRDS("a.df.rds")
# load libraries
library(xlsx)
library(reshape)
library(plyr)
library(reshape2)
library(dplyr)
library (rgdal)
library (rgeos)
library (spdep)
library (plyr)
library(ggplot2)
library (caper)
library (ape)
library (MuMIn)
library (lme4)
################################### READ IN & ORGANIZE DATA
# subset a.df for species that were captured 3 or more times
a.df <- subset ( a.df , a.df$N.tot > 3 )
# subset a.df for species that were sighted at three distinct points
a.df <- subset ( a.df , a.df$Pts.total > 3 )
# subset out species with unclear data trends
a.df <- subset (a.df, a.df$Species!= "Ramphas.toco")
a.df <- subset (a.df, a.df$Species!= "Phae.eury")
a.df <- subset (a.df, a.df$Species!= "Phae.pret")
a.df <- subset (a.df, a.df$Species!= "Myiornis.aur")
a.df <- subset (a.df, a.df$Species!= "Malac.stri")
a.df$Species <- factor(a.df$Species)
a.df$Species <- droplevels ( a.df$Species )
# create df with unique entry for each beetle species and associated attributes
a.sp.df <- ddply ( a.df, .(Species , Nest , Biogeo , BiogeoPlas , BM ,
Fecund, DietPlas, Diet, Habitat, HabPlas, LS.total, Pts.total),
summarize, N.tot = mean(N.tot))
### Run N ~ FC multivariate regressions for each species using FC vals at each buffer size as IVs
# initialize blank df for top spatial models per species
a_spatial_mod.df <- data.frame("Species" = factor(),
"Top_Spatial_Mod" = factor())
# initialize model counter vars
b1000_count_a_tm <- 0
b800_count_a_tm <- 0
b3000_count_a_tm <- 0
b400_count_a_tm <- 0
b200_count_a_tm <- 0
# iterate over species
for (i in a.sp.df$Species) {
# initialize species row to be inserted in top model df
a_species_row <- NULL
# create df with entries as all captures of a particular species
a.par.df <- subset (a.df , a.df$Species == i)
# regress N ~ FC for particular species at multiple spatial scales
a.lm.par <- glmer (a.par.df$N ~ a.par.df$perFC_200
+ a.par.df$perFC_400
+ a.par.df$perFC_800
+ a.par.df$perFC_1000
+ a.par.df$perFC_3000
+ (1|a.par.df$Landscape), family = poisson)
options(na.action = "na.fail")
a.par.spatial.mod.comp <- dredge(a.lm.par, beta = c("none", "sd", "partial.sd"), evaluate = TRUE,
m.lim = c(1, 1), rank = "AICc", fixed = NULL)
a.par.top.model <- get.models(a.par.spatial.mod.comp, subset = delta == 0)
top_model_code <- rownames (summary(a.par.top.model))
# KEY FOR MODEL NUMBERS IN DREDGE TABLE
# Model 2 = 1000m buffer
# Model 17 = 800m buffer
# Model 5 = 3000m buffer
# Model 9 = 400m buffer
# Model 3 = 200m buffer
# use model code to assign top model for particular species
if (top_model_code == "2") {
top_mod <- 1000
b1000_count_a_tm <- b1000_count_a_tm + 1
} else if (top_model_code == "17") {
top_mod <- 800
b800_count_a_tm <- b800_count_a_tm + 1
} else if (top_model_code == "5") {
top_mod <- 3000
b3000_count_a_tm <- b3000_count_a_tm + 1
} else if (top_model_code == "9") {
top_mod <- 400
b400_count_a_tm <- b400_count_a_tm + 1
} else if (top_model_code == "3") {
top_mod <- 200
b200_count_a_tm <- b200_count_a_tm + 1
}
a_species_row <- data.frame(factor(i), factor(top_mod))
a_spatial_mod.df <- rbind (a_spatial_mod.df, a_species_row)
print(a_spatial_mod.df)
}
# rectify column names of a_spatial_mod.df
colnames(a_spatial_mod.df) <- c("Species", "Top_Spatial_Mod")
################################################ ANALYZE RESULTS OF SPATIAL SCALE COMPS
# plot distribution of top spatial models across beetle species
par(mar = c(5, 5, 5, 5) + 0.1)
counts = c(b200_count_a_tm, b400_count_a_tm, b800_count_a_tm, b1000_count_a_tm, b3000_count_a_tm)
labels = c("200", "400", "800", "1000", "3000")
barplot <- barplot(counts, space = 0.25, names.arg = labels, ylim = c(0 ,50),
main = "Birds: Top Model Approach\n for Determining Spatial Scale",
xlab = "Buffer Radius (m)",
ylab = "# Species w/ Model in Candidate Set")
text(barplot, counts, labels = counts, pos = 3)
table(a_spatial_mod.df$Top_Spatial_Mod)
|
binomial.CARlocalised <- function(formula, data=NULL, G, trials, W, burnin, n.sample, thin=1, prior.mean.beta=NULL, prior.var.beta=NULL, prior.delta=NULL, prior.tau2=NULL, verbose=TRUE)
{
#### Check on the verbose option
if(is.null(verbose)) verbose=TRUE
if(!is.logical(verbose)) stop("the verbose option is not logical.", call.=FALSE)
if(verbose)
{
cat("Setting up the model\n")
a<-proc.time()
}else{}
##############################################
#### Format the arguments and check for errors
##############################################
#### Overall formula object
frame <- try(suppressWarnings(model.frame(formula, data=data, na.action=na.pass)), silent=TRUE)
if(class(frame)=="try-error") stop("the formula inputted contains an error, e.g the variables may be different lengths or the data object has not been specified.", call.=FALSE)
#### Format and check the neighbourhood matrix W
if(!is.matrix(W)) stop("W is not a matrix.", call.=FALSE)
K <- nrow(W)
if(ncol(W)!= K) stop("W has the wrong number of columns.", call.=FALSE)
if(sum(is.na(W))>0) stop("W has missing 'NA' values.", call.=FALSE)
if(!is.numeric(W)) stop("W has non-numeric values.", call.=FALSE)
if(min(W)<0) stop("W has negative elements.", call.=FALSE)
if(sum(W!=t(W))>0) stop("W is not symmetric.", call.=FALSE)
if(min(apply(W, 1, sum))==0) stop("W has some areas with no neighbours (one of the row sums equals zero).", call.=FALSE)
#### Response variable
Y <- model.response(frame)
N.all <- length(Y)
N <- N.all / K
if(floor(N.all/K)!=ceiling(N.all/K)) stop("The number of spatial areas is not a multiple of the number of data points.", call.=FALSE)
if(sum(is.na(Y))>0) stop("the response has missing 'NA' values.", call.=FALSE)
if(!is.numeric(Y)) stop("the response variable has non-numeric values.", call.=FALSE)
int.check <- N.all-sum(ceiling(Y)==floor(Y))
if(int.check > 0) stop("the respons variable has non-integer values.", call.=FALSE)
if(min(Y)<0) stop("the response variable has negative values.", call.=FALSE)
failures <- trials - Y
Y.mat <- matrix(Y, nrow=K, ncol=N, byrow=FALSE)
failures.mat <- matrix(failures, nrow=K, ncol=N, byrow=FALSE)
which.miss <- as.numeric(!is.na(Y))
which.miss.mat <- matrix(which.miss, nrow=K, ncol=N, byrow=FALSE)
#### Offset variable
## Create the offset
offset <- try(model.offset(frame), silent=TRUE)
if(class(offset)=="try-error") stop("the offset is not numeric.", call.=FALSE)
if(is.null(offset)) offset <- rep(0,N.all)
if(sum(is.na(offset))>0) stop("the offset has missing 'NA' values.", call.=FALSE)
if(!is.numeric(offset)) stop("the offset variable has non-numeric values.", call.=FALSE)
offset.mat <- matrix(offset, nrow=K, ncol=N, byrow=FALSE)
#### Design matrix
## Create the matrix
X <- try(suppressWarnings(model.matrix(object=attr(frame, "terms"), data=frame)), silent=TRUE)
if(class(X)=="try-error") stop("the covariate matrix contains inappropriate values.", call.=FALSE)
if(sum(is.na(X))>0) stop("the covariate matrix contains missing 'NA' values.", call.=FALSE)
ptemp <- ncol(X)
if(ptemp==1)
{
X <- NULL
regression.vec <- rep(0, N.all)
regression.mat <- matrix(regression.vec, nrow=K, ncol=N, byrow=FALSE)
p <- 0
}else
{
## Check for linearly related columns
cor.X <- suppressWarnings(cor(X))
diag(cor.X) <- 0
if(max(cor.X, na.rm=TRUE)==1) stop("the covariate matrix has two exactly linearly related columns.", call.=FALSE)
if(min(cor.X, na.rm=TRUE)==-1) stop("the covariate matrix has two exactly linearly related columns.", call.=FALSE)
if(sort(apply(X, 2, sd))[2]==0) stop("the covariate matrix has two intercept terms.", call.=FALSE)
## Remove the intercept term
int.which <- which(apply(X,2,sd)==0)
colnames.X <- colnames(X)
X <- as.matrix(X[ ,-int.which])
colnames(X) <- colnames.X[-int.which]
p <- ncol(X)
## Standardise X
X.standardised <- X
X.sd <- apply(X, 2, sd)
X.mean <- apply(X, 2, mean)
X.indicator <- rep(NA, p) # To determine which parameter estimates to transform back
for(j in 1:p)
{
if(length(table(X[ ,j]))>2)
{
X.indicator[j] <- 1
X.standardised[ ,j] <- (X[ ,j] - mean(X[ ,j])) / sd(X[ ,j])
}else
{
X.indicator[j] <- 0
}
}
## Compute a starting value for beta
dat <- cbind(Y, failures)
mod.glm <- glm(dat~X.standardised-1, offset=offset, family="quasibinomial")
beta.mean <- mod.glm$coefficients
beta.sd <- sqrt(diag(summary(mod.glm)$cov.scaled))
beta <- rnorm(n=length(beta.mean), mean=beta.mean, sd=beta.sd)
regression.vec <- X.standardised %*% beta
regression.mat <- matrix(regression.vec, nrow=K, ncol=N, byrow=FALSE)
}
#### Format and check the number of clusters G
if(length(G)!=1) stop("G is the wrong length.", call.=FALSE)
if(!is.numeric(G)) stop("G is not numeric.", call.=FALSE)
if(G<=1) stop("G is less than 2.", call.=FALSE)
if(G!=round(G)) stop("G is not an integer.", call.=FALSE)
if(floor(G/2)==ceiling(G/2))
{
Gstar <- G/2
}else
{
Gstar <- (G+1)/2
}
#### Format and check the MCMC quantities
if(is.null(burnin)) stop("the burnin argument is missing", call.=FALSE)
if(is.null(n.sample)) stop("the n.sample argument is missing", call.=FALSE)
if(!is.numeric(burnin)) stop("burn-in is not a number", call.=FALSE)
if(!is.numeric(n.sample)) stop("n.sample is not a number", call.=FALSE)
if(!is.numeric(thin)) stop("thin is not a number", call.=FALSE)
if(n.sample <= 0) stop("n.sample is less than or equal to zero.", call.=FALSE)
if(burnin < 0) stop("burn-in is less than zero.", call.=FALSE)
if(thin <= 0) stop("thin is less than or equal to zero.", call.=FALSE)
if(n.sample <= burnin) stop("Burn-in is greater than n.sample.", call.=FALSE)
if(burnin!=round(burnin)) stop("burnin is not an integer.", call.=FALSE)
if(n.sample!=round(n.sample)) stop("n.sample is not an integer.", call.=FALSE)
if(thin!=round(thin)) stop("thin is not an integer.", call.=FALSE)
#### Check and specify the priors
if(!is.null(X))
{
if(is.null(prior.mean.beta)) prior.mean.beta <- rep(0, p)
if(length(prior.mean.beta)!=p) stop("the vector of prior means for beta is the wrong length.", call.=FALSE)
if(!is.numeric(prior.mean.beta)) stop("the vector of prior means for beta is not numeric.", call.=FALSE)
if(sum(is.na(prior.mean.beta))!=0) stop("the vector of prior means for beta has missing values.", call.=FALSE)
if(is.null(prior.var.beta)) prior.var.beta <- rep(1000, p)
if(length(prior.var.beta)!=p) stop("the vector of prior variances for beta is the wrong length.", call.=FALSE)
if(!is.numeric(prior.var.beta)) stop("the vector of prior variances for beta is not numeric.", call.=FALSE)
if(sum(is.na(prior.var.beta))!=0) stop("the vector of prior variances for beta has missing values.", call.=FALSE)
if(min(prior.var.beta) <=0) stop("the vector of prior variances has elements less than zero", call.=FALSE)
}else
{}
if(is.null(prior.delta)) prior.delta <- 10
if(length(prior.delta)!=1) stop("the prior value for delta is the wrong length.", call.=FALSE)
if(!is.numeric(prior.delta)) stop("the prior value for delta is not numeric.", call.=FALSE)
if(sum(is.na(prior.delta))!=0) stop("the prior value for delta has missing values.", call.=FALSE)
if(prior.delta<=0) stop("the prior value for delta is not positive.", call.=FALSE)
if(is.null(prior.tau2)) prior.tau2 <- c(0.001, 0.001)
if(length(prior.tau2)!=2) stop("the prior value for tau2 is the wrong length.", call.=FALSE)
if(!is.numeric(prior.tau2)) stop("the prior value for tau2 is not numeric.", call.=FALSE)
if(sum(is.na(prior.tau2))!=0) stop("the prior value for tau2 has missing values.", call.=FALSE)
#### Specify the initial parameter values
theta.hat <- Y / trials
theta.hat[theta.hat==0] <- 0.01
theta.hat[theta.hat==1] <- 0.99
res.temp <- log(theta.hat / (1 - theta.hat)) - regression.vec - offset
res.sd <- sd(res.temp, na.rm=TRUE)/5
phi.mat <- matrix(rnorm(n=N.all, mean=0, sd = res.sd), nrow=K, byrow=FALSE)
tau2 <- var(phi)/10
gamma <- runif(1)
Z <- sample(1:G, size=N.all, replace=TRUE)
Z.mat <- matrix(Z, nrow=K, ncol=N, byrow=FALSE)
lambda <- sort(runif(G, min=min(res.temp), max=max(res.temp)))
lambda.mat <- matrix(rep(lambda, N), nrow=N, byrow=TRUE)
delta <- runif(1,1, min(2, prior.delta))
mu <- matrix(lambda[Z], nrow=K, ncol=N, byrow=FALSE)
## Compute the blocking structure for beta
if(!is.null(X))
{
blocksize.beta <- 5
if(blocksize.beta >= p)
{
n.beta.block <- 1
beta.beg <- 1
beta.fin <- p
}else
{
n.standard <- 1 + floor((p-blocksize.beta) / blocksize.beta)
remainder <- p - n.standard * blocksize.beta
if(remainder==0)
{
beta.beg <- c(1,seq((blocksize.beta+1), p, blocksize.beta))
beta.fin <- seq(blocksize.beta, p, blocksize.beta)
n.beta.block <- length(beta.beg)
}else
{
beta.beg <- c(1, seq((blocksize.beta+1), p, blocksize.beta))
beta.fin <- c(seq((blocksize.beta), p, blocksize.beta), p)
n.beta.block <- length(beta.beg)
}
}
}else{}
#### Set up matrices to store samples
n.keep <- floor((n.sample - burnin)/thin)
samples.Z <- array(NA, c(n.keep, N.all))
samples.lambda <- array(NA, c(n.keep, G))
samples.delta <- array(NA, c(n.keep, 1))
samples.tau2 <- array(NA, c(n.keep, 1))
samples.gamma <- array(NA, c(n.keep, 1))
samples.phi <- array(NA, c(n.keep, N.all))
samples.fitted <- array(NA, c(n.keep, N.all))
samples.deviance <- array(NA, c(n.keep, 1))
samples.like <- array(NA, c(n.keep, N.all))
if(!is.null(X))
{
samples.beta <- array(NA, c(n.keep, p))
accept.all <- rep(0,8)
proposal.corr.beta <- solve(t(X.standardised) %*% X.standardised)
chol.proposal.corr.beta <- chol(proposal.corr.beta)
proposal.sd.beta <- 0.01
}else
{
accept.all <- rep(0,6)
}
accept <- accept.all
proposal.sd.lambda <- 0.1
proposal.sd.delta <- 0.1
proposal.sd.phi <- 0.1
Y.extend <- matrix(rep(Y, G), byrow=F, ncol=G)
delta.update <- matrix(rep(1:G, N.all-K), ncol=G, byrow=T)
tau2.posterior.shape <- prior.tau2[1] + N * (K-1) /2
#### Spatial quantities
## Create the triplet object
W.triplet <- c(NA, NA, NA)
for(i in 1:K)
{
for(j in 1:K)
{
if(W[i,j]>0)
{
W.triplet <- rbind(W.triplet, c(i,j, W[i,j]))
}else{}
}
}
W.triplet <- W.triplet[-1, ]
W.n.triplet <- nrow(W.triplet)
W.triplet.sum <- tapply(W.triplet[ ,3], W.triplet[ ,1], sum)
W.neighbours <- tapply(W.triplet[ ,3], W.triplet[ ,1], length)
## Create the start and finish points for W updating
W.begfin <- array(NA, c(K, 2))
temp <- 1
for(i in 1:K)
{
W.begfin[i, ] <- c(temp, (temp + W.neighbours[i]-1))
temp <- temp + W.neighbours[i]
}
###########################
#### Run the Bayesian model
###########################
## Start timer
if(verbose)
{
cat("Generating", n.sample, "samples\n", sep = " ")
progressBar <- txtProgressBar(style = 3)
percentage.points<-round((1:100/100)*n.sample)
}else
{
percentage.points<-round((1:100/100)*n.sample)
}
for(j in 1:n.sample)
{
####################
## Sample from beta
####################
if(!is.null(X))
{
proposal <- beta + (sqrt(proposal.sd.beta)* t(chol.proposal.corr.beta)) %*% rnorm(p)
proposal.beta <- beta
offset.temp <- offset + as.numeric(mu) + as.numeric(phi.mat)
for(r in 1:n.beta.block)
{
proposal.beta[beta.beg[r]:beta.fin[r]] <- proposal[beta.beg[r]:beta.fin[r]]
prob <- binomialbetaupdate(X.standardised, N.all, p, beta, proposal.beta, offset.temp, Y, failures, prior.mean.beta, prior.var.beta, which.miss)
if(prob > runif(1))
{
beta[beta.beg[r]:beta.fin[r]] <- proposal.beta[beta.beg[r]:beta.fin[r]]
accept[7] <- accept[7] + 1
}else
{
proposal.beta[beta.beg[r]:beta.fin[r]] <- beta[beta.beg[r]:beta.fin[r]]
}
}
accept[8] <- accept[8] + n.beta.block
regression.vec <- X.standardised %*% beta
regression.mat <- matrix(regression.vec, nrow=K, ncol=N, byrow=FALSE)
}else{}
#######################
#### Sample from lambda
#######################
#### Propose a new value
proposal.extend <- c(-100, lambda, 100)
for(r in 1:G)
{
proposal.extend[(r+1)] <- rtrunc(n=1, spec="norm", a=proposal.extend[r], b=proposal.extend[(r+2)], mean=proposal.extend[(r+1)], sd=proposal.sd.lambda)
}
proposal <- proposal.extend[-c(1, (G+2))]
#### Compute the data likelihood
lp.current <- lambda[Z] + offset + as.numeric(regression.mat) + as.numeric(phi.mat)
lp.proposal <- proposal[Z] + offset + as.numeric(regression.mat) + as.numeric(phi.mat)
p.current <- exp(lp.current) / (1 + exp(lp.current))
p.proposal <- exp(lp.proposal) / (1 + exp(lp.proposal))
like.current <- Y * log(p.current) + failures * log(1 - p.current)
like.proposal <- Y * log(p.proposal) + failures * log(1 - p.proposal)
prob <- exp(sum(like.proposal - like.current))
if(prob > runif(1))
{
lambda <- proposal
lambda.mat <- matrix(rep(lambda, N), nrow=N, byrow=TRUE)
mu <- matrix(lambda[Z], nrow=K, ncol=N, byrow=FALSE)
accept[1] <- accept[1] + 1
}else
{
}
accept[2] <- accept[2] + 1
##################
#### Sample from Z
##################
prior.offset <- rep(NA, G)
for(r in 1:G)
{
prior.offset[r] <- log(sum(exp(-delta * ((1:G - r)^2 + (1:G - Gstar)^2))))
}
mu.offset <- offset.mat + regression.mat + phi.mat
test <- Zupdatesqbin(Z=Z.mat, Offset=mu.offset, Y=Y.mat, delta=delta, lambda=lambda, nsites=K, ntime=N, G=G, SS=1:G, prioroffset=prior.offset, Gstar=Gstar, failures=failures.mat)
Z.mat <- test
Z <- as.numeric(Z.mat)
mu <- matrix(lambda[Z], nrow=K, ncol=N, byrow=FALSE)
######################
#### Sample from delta
######################
proposal.delta <- rtrunc(n=1, spec="norm", a=1, b=prior.delta, mean=delta, sd=proposal.sd.delta)
sum.delta1 <- sum((Z - Gstar)^2)
sum.delta2 <- sum((Z.mat[ ,-1] - Z.mat[ ,-N])^2)
current.fc1 <- -delta * (sum.delta1 + sum.delta2) - K * log(sum(exp(-delta * (1:G - Gstar)^2)))
proposal.fc1 <- -proposal.delta * (sum.delta1 + sum.delta2) - K * log(sum(exp(-proposal.delta * (1:G - Gstar)^2)))
Z.temp <- matrix(rep(as.numeric(Z.mat[ ,-N]),G), ncol=G, byrow=FALSE)
Z.temp2 <- (delta.update - Z.temp)^2 + (delta.update - Gstar)^2
current.fc <- current.fc1 - sum(log(apply(exp(-delta * Z.temp2),1,sum)))
proposal.fc <- proposal.fc1 - sum(log(apply(exp(-proposal.delta * Z.temp2),1,sum)))
prob <- exp(proposal.fc - current.fc)
if(prob > runif(1))
{
delta <- proposal.delta
accept[3] <- accept[3] + 1
}else
{
}
accept[4] <- accept[4] + 1
####################
#### Sample from phi
####################
phi.offset <- mu + offset.mat + regression.mat
temp1 <- binomialarcarupdate(W.triplet, W.begfin, W.triplet.sum, K, N, phi.mat, tau2, gamma, 1, Y.mat, failures.mat, proposal.sd.phi, phi.offset, W.triplet.sum, which.miss.mat)
phi.temp <- temp1[[1]]
phi <- as.numeric(phi.temp)
for(i in 1:G)
{
phi[which(Z==i)] <- phi[which(Z==i)] - mean(phi[which(Z==i)])
}
phi.mat <- matrix(phi, nrow=K, ncol=N, byrow=FALSE)
accept[5] <- accept[5] + temp1[[2]]
accept[6] <- accept[6] + K*N
####################
## Sample from gamma
####################
temp2 <- gammaquadformcompute(W.triplet, W.triplet.sum, W.n.triplet, K, N, phi.mat, 1)
mean.gamma <- temp2[[1]] / temp2[[2]]
sd.gamma <- sqrt(tau2 / temp2[[2]])
gamma <- rtrunc(n=1, spec="norm", a=0, b=1, mean=mean.gamma, sd=sd.gamma)
####################
## Samples from tau2
####################
temp3 <- tauquadformcompute(W.triplet, W.triplet.sum, W.n.triplet, K, N, phi.mat, 1, gamma)
tau2.posterior.scale <- temp3 + prior.tau2[2]
tau2 <- 1 / rgamma(1, tau2.posterior.shape, scale=(1/tau2.posterior.scale))
#########################
## Calculate the deviance
#########################
lp <- as.numeric(mu + offset.mat + regression.mat + phi.mat)
prob <- exp(lp) / (1+exp(lp))
fitted <- trials * prob
deviance.all <- dbinom(x=Y, size=trials, prob=prob, log=TRUE)
like <- exp(deviance.all)
deviance <- -2 * sum(deviance.all)
###################
## Save the results
###################
if(j > burnin & (j-burnin)%%thin==0)
{
ele <- (j - burnin) / thin
samples.delta[ele, ] <- delta
samples.lambda[ele, ] <- lambda
samples.Z[ele, ] <- Z
samples.phi[ele, ] <- as.numeric(phi.mat)
samples.tau2[ele, ] <- tau2
samples.gamma[ele, ] <- gamma
samples.deviance[ele, ] <- deviance
samples.fitted[ele, ] <- fitted
samples.like[ele, ] <- like
if(!is.null(X)) samples.beta[ele, ] <- beta
}else
{
}
########################################
## Self tune the acceptance probabilties
########################################
k <- j/100
if(ceiling(k)==floor(k))
{
#### Determine the acceptance probabilities
accept.lambda <- 100 * accept[1] / accept[2]
accept.delta <- 100 * accept[3] / accept[4]
accept.phi <- 100 * accept[5] / accept[6]
if(!is.null(X))
{
accept.beta <- 100 * accept[7] / accept[8]
if(accept.beta > 40)
{
proposal.sd.beta <- proposal.sd.beta + 0.1 * proposal.sd.beta
}else if(accept.beta < 20)
{
proposal.sd.beta <- proposal.sd.beta - 0.1 * proposal.sd.beta
}else
{
}
accept.all <- accept.all + accept
accept <- rep(0,8)
}else
{
accept.all <- accept.all + accept
accept <- rep(0,6)
}
#### lambda tuning parameter
if(accept.lambda > 40)
{
proposal.sd.lambda <- min(proposal.sd.lambda + 0.1 * proposal.sd.lambda, 10)
}else if(accept.lambda < 20)
{
proposal.sd.lambda <- proposal.sd.lambda - 0.1 * proposal.sd.lambda
}else
{
}
#### delta tuning parameter
if(accept.delta > 50)
{
proposal.sd.delta <- min(proposal.sd.delta + 0.1 * proposal.sd.delta, 10)
}else if(accept.delta < 40)
{
proposal.sd.delta <- proposal.sd.delta - 0.1 * proposal.sd.delta
}else
{
}
#### phi tuning parameter
if(accept.phi > 50)
{
proposal.sd.phi <- proposal.sd.phi + 0.1 * proposal.sd.phi
}else if(accept.phi < 40)
{
proposal.sd.phi <- proposal.sd.phi - 0.1 * proposal.sd.phi
}else
{
}
}else
{
}
################################
## print progress to the console
################################
if(j %in% percentage.points & verbose)
{
setTxtProgressBar(progressBar, j/n.sample)
}
}
# end timer
if(verbose)
{
cat("\nSummarising results")
close(progressBar)
}else
{}
###################################
#### Summarise and save the results
###################################
## Compute the acceptance rates
accept.lambda <- 100 * accept.all[1] / accept.all[2]
accept.delta <- 100 * accept.all[3] / accept.all[4]
accept.phi <- 100 * accept.all[5] / accept.all[6]
accept.gamma <- 100
if(!is.null(X))
{
accept.beta <- 100 * accept.all[7] / accept.all[8]
accept.final <- c(accept.beta, accept.lambda, accept.delta, accept.phi, accept.gamma)
names(accept.final) <- c("beta", "lambda", "delta", "phi", "rho.T")
}else
{
accept.final <- c(accept.lambda, accept.delta, accept.phi, accept.gamma)
names(accept.final) <- c("lambda", "delta", "phi", "rho.T")
}
## DIC
median.Z <- round(apply(samples.Z,2,median), 0)
median.lambda <- apply(samples.lambda, 2, median)
median.mu <- matrix(median.lambda[median.Z], nrow=K, ncol=N, byrow=FALSE)
if(!is.null(X))
{
median.beta <- apply(samples.beta,2,median)
regression.mat <- matrix(X.standardised %*% median.beta, nrow=K, ncol=N, byrow=FALSE)
}else
{
}
median.phi <- matrix(apply(samples.phi, 2, median), nrow=K, byrow=FALSE)
lp.median <- as.numeric(median.mu + offset.mat + median.phi + regression.mat)
median.prob <- exp(lp.median) / (1 + exp(lp.median))
fitted.median <- trials * median.prob
deviance.fitted <- -2 * sum(dbinom(x=Y, size=trials, prob=median.prob, log=TRUE))
p.d <- median(samples.deviance) - deviance.fitted
DIC <- 2 * median(samples.deviance) - deviance.fitted
#### Watanabe-Akaike Information Criterion (WAIC)
LPPD <- sum(log(apply(samples.like,2,mean)), na.rm=TRUE)
p.w <- sum(apply(log(samples.like),2,var), na.rm=TRUE)
WAIC <- -2 * (LPPD - p.w)
## Compute the LMPL
CPO <- rep(NA, N.all)
for(j in 1:N.all)
{
CPO[j] <- 1/median((1 / dbinom(x=Y[j], size=trials[j], prob=(samples.fitted[ ,j] / trials[j]))))
}
LMPL <- sum(log(CPO))
## Create the Fitted values
fitted.values <- apply(samples.fitted, 2, median)
residuals <- as.numeric(Y) - fitted.values
#### transform the parameters back to the origianl covariate scale.
if(!is.null(X))
{
samples.beta.orig <- samples.beta
number.cts <- sum(X.indicator==1)
if(number.cts>0)
{
for(r in 1:p)
{
if(X.indicator[r]==1)
{
samples.beta.orig[ ,r] <- samples.beta[ ,r] / X.sd[r]
}else
{
}
}
}else
{
}
}else
{}
#### Create a summary object
summary.hyper <- array(NA, c(3, 7))
summary.hyper[1,1:3] <- quantile(samples.delta, c(0.5, 0.025, 0.975))
summary.hyper[2,1:3] <- quantile(samples.tau2, c(0.5, 0.025, 0.975))
summary.hyper[3,1:3] <- quantile(samples.gamma, c(0.5, 0.025, 0.975))
rownames(summary.hyper) <- c("delta", "tau2", "rho.T")
summary.hyper[1, 4:7] <- c(n.keep, accept.delta, effectiveSize(mcmc(samples.delta)), geweke.diag(mcmc(samples.delta))$z)
summary.hyper[2, 4:7] <- c(n.keep, 100, effectiveSize(mcmc(samples.tau2)), geweke.diag(mcmc(samples.tau2))$z)
summary.hyper[3, 4:7] <- c(n.keep, 100, effectiveSize(mcmc(samples.gamma)), geweke.diag(mcmc(samples.gamma))$z)
summary.lambda <- array(NA, c(G,1))
summary.lambda <- t(apply(samples.lambda, 2, quantile, c(0.5, 0.025, 0.975)))
summary.lambda <- cbind(summary.lambda, rep(n.keep, G), rep(accept.lambda, G), effectiveSize(mcmc(samples.lambda)), geweke.diag(mcmc(samples.lambda))$z)
summary.lambda <- matrix(summary.lambda, ncol=7)
rownames(summary.lambda) <- paste("lambda", 1:G, sep="")
if(!is.null(X))
{
samples.beta.orig <- mcmc(samples.beta.orig)
summary.beta <- t(apply(samples.beta.orig, 2, quantile, c(0.5, 0.025, 0.975)))
summary.beta <- cbind(summary.beta, rep(n.keep, p), rep(accept.beta,p), effectiveSize(samples.beta.orig), geweke.diag(samples.beta.orig)$z)
rownames(summary.beta) <- colnames(X)
colnames(summary.beta) <- c("Median", "2.5%", "97.5%", "n.sample", "% accept", "n.effective", "Geweke.diag")
summary.results <- rbind(summary.beta, summary.lambda, summary.hyper)
}else
{
summary.results <- rbind(summary.lambda, summary.hyper)
}
summary.results[ , 1:3] <- round(summary.results[ , 1:3], 4)
summary.results[ , 4:7] <- round(summary.results[ , 4:7], 1)
colnames(summary.results) <- c("Median", "2.5%", "97.5%", "n.sample", "% accept", "n.effective", "Geweke.diag")
## Compile and return the results
modelfit <- c(DIC, p.d, WAIC, p.w, LMPL)
names(modelfit) <- c("DIC", "p.d", "WAIC", "p.w", "LMPL")
if(is.null(X)) samples.beta.orig = NA
samples <- list(beta=mcmc(samples.beta.orig), lambda=mcmc(samples.lambda), Z=mcmc(samples.Z), delta=mcmc(samples.delta), phi = mcmc(samples.phi), tau2=mcmc(samples.tau2), rho.T=mcmc(samples.gamma), fitted=mcmc(samples.fitted), deviance=mcmc(samples.deviance))
model.string <- c("Likelihood model - Binomial (logit link function)", "\nLatent structure model - Localised autoregressive CAR model\n")
results <- list(summary.results=summary.results, samples=samples, fitted.values=fitted.values, residuals=residuals, modelfit=modelfit, accept=accept.final, localised.structure=median.Z, formula=formula, model=model.string, X=X)
class(results) <- "carbayesST"
if(verbose)
{
b<-proc.time()
cat(" finished in ", round(b[3]-a[3], 1), "seconds")
}else
{}
return(results)
}
| /CARBayesST/R/binomial.CARlocalised.R | no_license | ingted/R-Examples | R | false | false | 28,107 | r | binomial.CARlocalised <- function(formula, data=NULL, G, trials, W, burnin, n.sample, thin=1, prior.mean.beta=NULL, prior.var.beta=NULL, prior.delta=NULL, prior.tau2=NULL, verbose=TRUE)
{
#### Check on the verbose option
if(is.null(verbose)) verbose=TRUE
if(!is.logical(verbose)) stop("the verbose option is not logical.", call.=FALSE)
if(verbose)
{
cat("Setting up the model\n")
a<-proc.time()
}else{}
##############################################
#### Format the arguments and check for errors
##############################################
#### Overall formula object
frame <- try(suppressWarnings(model.frame(formula, data=data, na.action=na.pass)), silent=TRUE)
if(class(frame)=="try-error") stop("the formula inputted contains an error, e.g the variables may be different lengths or the data object has not been specified.", call.=FALSE)
#### Format and check the neighbourhood matrix W
if(!is.matrix(W)) stop("W is not a matrix.", call.=FALSE)
K <- nrow(W)
if(ncol(W)!= K) stop("W has the wrong number of columns.", call.=FALSE)
if(sum(is.na(W))>0) stop("W has missing 'NA' values.", call.=FALSE)
if(!is.numeric(W)) stop("W has non-numeric values.", call.=FALSE)
if(min(W)<0) stop("W has negative elements.", call.=FALSE)
if(sum(W!=t(W))>0) stop("W is not symmetric.", call.=FALSE)
if(min(apply(W, 1, sum))==0) stop("W has some areas with no neighbours (one of the row sums equals zero).", call.=FALSE)
#### Response variable
Y <- model.response(frame)
N.all <- length(Y)
N <- N.all / K
if(floor(N.all/K)!=ceiling(N.all/K)) stop("The number of spatial areas is not a multiple of the number of data points.", call.=FALSE)
if(sum(is.na(Y))>0) stop("the response has missing 'NA' values.", call.=FALSE)
if(!is.numeric(Y)) stop("the response variable has non-numeric values.", call.=FALSE)
int.check <- N.all-sum(ceiling(Y)==floor(Y))
if(int.check > 0) stop("the respons variable has non-integer values.", call.=FALSE)
if(min(Y)<0) stop("the response variable has negative values.", call.=FALSE)
failures <- trials - Y
Y.mat <- matrix(Y, nrow=K, ncol=N, byrow=FALSE)
failures.mat <- matrix(failures, nrow=K, ncol=N, byrow=FALSE)
which.miss <- as.numeric(!is.na(Y))
which.miss.mat <- matrix(which.miss, nrow=K, ncol=N, byrow=FALSE)
#### Offset variable
## Create the offset
offset <- try(model.offset(frame), silent=TRUE)
if(class(offset)=="try-error") stop("the offset is not numeric.", call.=FALSE)
if(is.null(offset)) offset <- rep(0,N.all)
if(sum(is.na(offset))>0) stop("the offset has missing 'NA' values.", call.=FALSE)
if(!is.numeric(offset)) stop("the offset variable has non-numeric values.", call.=FALSE)
offset.mat <- matrix(offset, nrow=K, ncol=N, byrow=FALSE)
#### Design matrix
## Create the matrix
X <- try(suppressWarnings(model.matrix(object=attr(frame, "terms"), data=frame)), silent=TRUE)
if(class(X)=="try-error") stop("the covariate matrix contains inappropriate values.", call.=FALSE)
if(sum(is.na(X))>0) stop("the covariate matrix contains missing 'NA' values.", call.=FALSE)
ptemp <- ncol(X)
if(ptemp==1)
{
X <- NULL
regression.vec <- rep(0, N.all)
regression.mat <- matrix(regression.vec, nrow=K, ncol=N, byrow=FALSE)
p <- 0
}else
{
## Check for linearly related columns
cor.X <- suppressWarnings(cor(X))
diag(cor.X) <- 0
if(max(cor.X, na.rm=TRUE)==1) stop("the covariate matrix has two exactly linearly related columns.", call.=FALSE)
if(min(cor.X, na.rm=TRUE)==-1) stop("the covariate matrix has two exactly linearly related columns.", call.=FALSE)
if(sort(apply(X, 2, sd))[2]==0) stop("the covariate matrix has two intercept terms.", call.=FALSE)
## Remove the intercept term
int.which <- which(apply(X,2,sd)==0)
colnames.X <- colnames(X)
X <- as.matrix(X[ ,-int.which])
colnames(X) <- colnames.X[-int.which]
p <- ncol(X)
## Standardise X
X.standardised <- X
X.sd <- apply(X, 2, sd)
X.mean <- apply(X, 2, mean)
X.indicator <- rep(NA, p) # To determine which parameter estimates to transform back
for(j in 1:p)
{
if(length(table(X[ ,j]))>2)
{
X.indicator[j] <- 1
X.standardised[ ,j] <- (X[ ,j] - mean(X[ ,j])) / sd(X[ ,j])
}else
{
X.indicator[j] <- 0
}
}
## Compute a starting value for beta
dat <- cbind(Y, failures)
mod.glm <- glm(dat~X.standardised-1, offset=offset, family="quasibinomial")
beta.mean <- mod.glm$coefficients
beta.sd <- sqrt(diag(summary(mod.glm)$cov.scaled))
beta <- rnorm(n=length(beta.mean), mean=beta.mean, sd=beta.sd)
regression.vec <- X.standardised %*% beta
regression.mat <- matrix(regression.vec, nrow=K, ncol=N, byrow=FALSE)
}
#### Format and check the number of clusters G
if(length(G)!=1) stop("G is the wrong length.", call.=FALSE)
if(!is.numeric(G)) stop("G is not numeric.", call.=FALSE)
if(G<=1) stop("G is less than 2.", call.=FALSE)
if(G!=round(G)) stop("G is not an integer.", call.=FALSE)
if(floor(G/2)==ceiling(G/2))
{
Gstar <- G/2
}else
{
Gstar <- (G+1)/2
}
#### Format and check the MCMC quantities
if(is.null(burnin)) stop("the burnin argument is missing", call.=FALSE)
if(is.null(n.sample)) stop("the n.sample argument is missing", call.=FALSE)
if(!is.numeric(burnin)) stop("burn-in is not a number", call.=FALSE)
if(!is.numeric(n.sample)) stop("n.sample is not a number", call.=FALSE)
if(!is.numeric(thin)) stop("thin is not a number", call.=FALSE)
if(n.sample <= 0) stop("n.sample is less than or equal to zero.", call.=FALSE)
if(burnin < 0) stop("burn-in is less than zero.", call.=FALSE)
if(thin <= 0) stop("thin is less than or equal to zero.", call.=FALSE)
if(n.sample <= burnin) stop("Burn-in is greater than n.sample.", call.=FALSE)
if(burnin!=round(burnin)) stop("burnin is not an integer.", call.=FALSE)
if(n.sample!=round(n.sample)) stop("n.sample is not an integer.", call.=FALSE)
if(thin!=round(thin)) stop("thin is not an integer.", call.=FALSE)
#### Check and specify the priors
if(!is.null(X))
{
if(is.null(prior.mean.beta)) prior.mean.beta <- rep(0, p)
if(length(prior.mean.beta)!=p) stop("the vector of prior means for beta is the wrong length.", call.=FALSE)
if(!is.numeric(prior.mean.beta)) stop("the vector of prior means for beta is not numeric.", call.=FALSE)
if(sum(is.na(prior.mean.beta))!=0) stop("the vector of prior means for beta has missing values.", call.=FALSE)
if(is.null(prior.var.beta)) prior.var.beta <- rep(1000, p)
if(length(prior.var.beta)!=p) stop("the vector of prior variances for beta is the wrong length.", call.=FALSE)
if(!is.numeric(prior.var.beta)) stop("the vector of prior variances for beta is not numeric.", call.=FALSE)
if(sum(is.na(prior.var.beta))!=0) stop("the vector of prior variances for beta has missing values.", call.=FALSE)
if(min(prior.var.beta) <=0) stop("the vector of prior variances has elements less than zero", call.=FALSE)
}else
{}
if(is.null(prior.delta)) prior.delta <- 10
if(length(prior.delta)!=1) stop("the prior value for delta is the wrong length.", call.=FALSE)
if(!is.numeric(prior.delta)) stop("the prior value for delta is not numeric.", call.=FALSE)
if(sum(is.na(prior.delta))!=0) stop("the prior value for delta has missing values.", call.=FALSE)
if(prior.delta<=0) stop("the prior value for delta is not positive.", call.=FALSE)
if(is.null(prior.tau2)) prior.tau2 <- c(0.001, 0.001)
if(length(prior.tau2)!=2) stop("the prior value for tau2 is the wrong length.", call.=FALSE)
if(!is.numeric(prior.tau2)) stop("the prior value for tau2 is not numeric.", call.=FALSE)
if(sum(is.na(prior.tau2))!=0) stop("the prior value for tau2 has missing values.", call.=FALSE)
#### Specify the initial parameter values
theta.hat <- Y / trials
theta.hat[theta.hat==0] <- 0.01
theta.hat[theta.hat==1] <- 0.99
res.temp <- log(theta.hat / (1 - theta.hat)) - regression.vec - offset
res.sd <- sd(res.temp, na.rm=TRUE)/5
phi.mat <- matrix(rnorm(n=N.all, mean=0, sd = res.sd), nrow=K, byrow=FALSE)
tau2 <- var(phi)/10
gamma <- runif(1)
Z <- sample(1:G, size=N.all, replace=TRUE)
Z.mat <- matrix(Z, nrow=K, ncol=N, byrow=FALSE)
lambda <- sort(runif(G, min=min(res.temp), max=max(res.temp)))
lambda.mat <- matrix(rep(lambda, N), nrow=N, byrow=TRUE)
delta <- runif(1,1, min(2, prior.delta))
mu <- matrix(lambda[Z], nrow=K, ncol=N, byrow=FALSE)
## Compute the blocking structure for beta
if(!is.null(X))
{
blocksize.beta <- 5
if(blocksize.beta >= p)
{
n.beta.block <- 1
beta.beg <- 1
beta.fin <- p
}else
{
n.standard <- 1 + floor((p-blocksize.beta) / blocksize.beta)
remainder <- p - n.standard * blocksize.beta
if(remainder==0)
{
beta.beg <- c(1,seq((blocksize.beta+1), p, blocksize.beta))
beta.fin <- seq(blocksize.beta, p, blocksize.beta)
n.beta.block <- length(beta.beg)
}else
{
beta.beg <- c(1, seq((blocksize.beta+1), p, blocksize.beta))
beta.fin <- c(seq((blocksize.beta), p, blocksize.beta), p)
n.beta.block <- length(beta.beg)
}
}
}else{}
#### Set up matrices to store samples
n.keep <- floor((n.sample - burnin)/thin)
samples.Z <- array(NA, c(n.keep, N.all))
samples.lambda <- array(NA, c(n.keep, G))
samples.delta <- array(NA, c(n.keep, 1))
samples.tau2 <- array(NA, c(n.keep, 1))
samples.gamma <- array(NA, c(n.keep, 1))
samples.phi <- array(NA, c(n.keep, N.all))
samples.fitted <- array(NA, c(n.keep, N.all))
samples.deviance <- array(NA, c(n.keep, 1))
samples.like <- array(NA, c(n.keep, N.all))
if(!is.null(X))
{
samples.beta <- array(NA, c(n.keep, p))
accept.all <- rep(0,8)
proposal.corr.beta <- solve(t(X.standardised) %*% X.standardised)
chol.proposal.corr.beta <- chol(proposal.corr.beta)
proposal.sd.beta <- 0.01
}else
{
accept.all <- rep(0,6)
}
accept <- accept.all
proposal.sd.lambda <- 0.1
proposal.sd.delta <- 0.1
proposal.sd.phi <- 0.1
Y.extend <- matrix(rep(Y, G), byrow=F, ncol=G)
delta.update <- matrix(rep(1:G, N.all-K), ncol=G, byrow=T)
tau2.posterior.shape <- prior.tau2[1] + N * (K-1) /2
#### Spatial quantities
## Create the triplet object
W.triplet <- c(NA, NA, NA)
for(i in 1:K)
{
for(j in 1:K)
{
if(W[i,j]>0)
{
W.triplet <- rbind(W.triplet, c(i,j, W[i,j]))
}else{}
}
}
W.triplet <- W.triplet[-1, ]
W.n.triplet <- nrow(W.triplet)
W.triplet.sum <- tapply(W.triplet[ ,3], W.triplet[ ,1], sum)
W.neighbours <- tapply(W.triplet[ ,3], W.triplet[ ,1], length)
## Create the start and finish points for W updating
W.begfin <- array(NA, c(K, 2))
temp <- 1
for(i in 1:K)
{
W.begfin[i, ] <- c(temp, (temp + W.neighbours[i]-1))
temp <- temp + W.neighbours[i]
}
###########################
#### Run the Bayesian model
###########################
## Start timer
if(verbose)
{
cat("Generating", n.sample, "samples\n", sep = " ")
progressBar <- txtProgressBar(style = 3)
percentage.points<-round((1:100/100)*n.sample)
}else
{
percentage.points<-round((1:100/100)*n.sample)
}
for(j in 1:n.sample)
{
####################
## Sample from beta
####################
if(!is.null(X))
{
proposal <- beta + (sqrt(proposal.sd.beta)* t(chol.proposal.corr.beta)) %*% rnorm(p)
proposal.beta <- beta
offset.temp <- offset + as.numeric(mu) + as.numeric(phi.mat)
for(r in 1:n.beta.block)
{
proposal.beta[beta.beg[r]:beta.fin[r]] <- proposal[beta.beg[r]:beta.fin[r]]
prob <- binomialbetaupdate(X.standardised, N.all, p, beta, proposal.beta, offset.temp, Y, failures, prior.mean.beta, prior.var.beta, which.miss)
if(prob > runif(1))
{
beta[beta.beg[r]:beta.fin[r]] <- proposal.beta[beta.beg[r]:beta.fin[r]]
accept[7] <- accept[7] + 1
}else
{
proposal.beta[beta.beg[r]:beta.fin[r]] <- beta[beta.beg[r]:beta.fin[r]]
}
}
accept[8] <- accept[8] + n.beta.block
regression.vec <- X.standardised %*% beta
regression.mat <- matrix(regression.vec, nrow=K, ncol=N, byrow=FALSE)
}else{}
#######################
#### Sample from lambda
#######################
#### Propose a new value
proposal.extend <- c(-100, lambda, 100)
for(r in 1:G)
{
proposal.extend[(r+1)] <- rtrunc(n=1, spec="norm", a=proposal.extend[r], b=proposal.extend[(r+2)], mean=proposal.extend[(r+1)], sd=proposal.sd.lambda)
}
proposal <- proposal.extend[-c(1, (G+2))]
#### Compute the data likelihood
lp.current <- lambda[Z] + offset + as.numeric(regression.mat) + as.numeric(phi.mat)
lp.proposal <- proposal[Z] + offset + as.numeric(regression.mat) + as.numeric(phi.mat)
p.current <- exp(lp.current) / (1 + exp(lp.current))
p.proposal <- exp(lp.proposal) / (1 + exp(lp.proposal))
like.current <- Y * log(p.current) + failures * log(1 - p.current)
like.proposal <- Y * log(p.proposal) + failures * log(1 - p.proposal)
prob <- exp(sum(like.proposal - like.current))
if(prob > runif(1))
{
lambda <- proposal
lambda.mat <- matrix(rep(lambda, N), nrow=N, byrow=TRUE)
mu <- matrix(lambda[Z], nrow=K, ncol=N, byrow=FALSE)
accept[1] <- accept[1] + 1
}else
{
}
accept[2] <- accept[2] + 1
##################
#### Sample from Z
##################
prior.offset <- rep(NA, G)
for(r in 1:G)
{
prior.offset[r] <- log(sum(exp(-delta * ((1:G - r)^2 + (1:G - Gstar)^2))))
}
mu.offset <- offset.mat + regression.mat + phi.mat
test <- Zupdatesqbin(Z=Z.mat, Offset=mu.offset, Y=Y.mat, delta=delta, lambda=lambda, nsites=K, ntime=N, G=G, SS=1:G, prioroffset=prior.offset, Gstar=Gstar, failures=failures.mat)
Z.mat <- test
Z <- as.numeric(Z.mat)
mu <- matrix(lambda[Z], nrow=K, ncol=N, byrow=FALSE)
######################
#### Sample from delta
######################
proposal.delta <- rtrunc(n=1, spec="norm", a=1, b=prior.delta, mean=delta, sd=proposal.sd.delta)
sum.delta1 <- sum((Z - Gstar)^2)
sum.delta2 <- sum((Z.mat[ ,-1] - Z.mat[ ,-N])^2)
current.fc1 <- -delta * (sum.delta1 + sum.delta2) - K * log(sum(exp(-delta * (1:G - Gstar)^2)))
proposal.fc1 <- -proposal.delta * (sum.delta1 + sum.delta2) - K * log(sum(exp(-proposal.delta * (1:G - Gstar)^2)))
Z.temp <- matrix(rep(as.numeric(Z.mat[ ,-N]),G), ncol=G, byrow=FALSE)
Z.temp2 <- (delta.update - Z.temp)^2 + (delta.update - Gstar)^2
current.fc <- current.fc1 - sum(log(apply(exp(-delta * Z.temp2),1,sum)))
proposal.fc <- proposal.fc1 - sum(log(apply(exp(-proposal.delta * Z.temp2),1,sum)))
prob <- exp(proposal.fc - current.fc)
if(prob > runif(1))
{
delta <- proposal.delta
accept[3] <- accept[3] + 1
}else
{
}
accept[4] <- accept[4] + 1
####################
#### Sample from phi
####################
phi.offset <- mu + offset.mat + regression.mat
temp1 <- binomialarcarupdate(W.triplet, W.begfin, W.triplet.sum, K, N, phi.mat, tau2, gamma, 1, Y.mat, failures.mat, proposal.sd.phi, phi.offset, W.triplet.sum, which.miss.mat)
phi.temp <- temp1[[1]]
phi <- as.numeric(phi.temp)
for(i in 1:G)
{
phi[which(Z==i)] <- phi[which(Z==i)] - mean(phi[which(Z==i)])
}
phi.mat <- matrix(phi, nrow=K, ncol=N, byrow=FALSE)
accept[5] <- accept[5] + temp1[[2]]
accept[6] <- accept[6] + K*N
####################
## Sample from gamma
####################
temp2 <- gammaquadformcompute(W.triplet, W.triplet.sum, W.n.triplet, K, N, phi.mat, 1)
mean.gamma <- temp2[[1]] / temp2[[2]]
sd.gamma <- sqrt(tau2 / temp2[[2]])
gamma <- rtrunc(n=1, spec="norm", a=0, b=1, mean=mean.gamma, sd=sd.gamma)
####################
## Samples from tau2
####################
temp3 <- tauquadformcompute(W.triplet, W.triplet.sum, W.n.triplet, K, N, phi.mat, 1, gamma)
tau2.posterior.scale <- temp3 + prior.tau2[2]
tau2 <- 1 / rgamma(1, tau2.posterior.shape, scale=(1/tau2.posterior.scale))
#########################
## Calculate the deviance
#########################
lp <- as.numeric(mu + offset.mat + regression.mat + phi.mat)
prob <- exp(lp) / (1+exp(lp))
fitted <- trials * prob
deviance.all <- dbinom(x=Y, size=trials, prob=prob, log=TRUE)
like <- exp(deviance.all)
deviance <- -2 * sum(deviance.all)
###################
## Save the results
###################
if(j > burnin & (j-burnin)%%thin==0)
{
ele <- (j - burnin) / thin
samples.delta[ele, ] <- delta
samples.lambda[ele, ] <- lambda
samples.Z[ele, ] <- Z
samples.phi[ele, ] <- as.numeric(phi.mat)
samples.tau2[ele, ] <- tau2
samples.gamma[ele, ] <- gamma
samples.deviance[ele, ] <- deviance
samples.fitted[ele, ] <- fitted
samples.like[ele, ] <- like
if(!is.null(X)) samples.beta[ele, ] <- beta
}else
{
}
########################################
## Self tune the acceptance probabilties
########################################
k <- j/100
if(ceiling(k)==floor(k))
{
#### Determine the acceptance probabilities
accept.lambda <- 100 * accept[1] / accept[2]
accept.delta <- 100 * accept[3] / accept[4]
accept.phi <- 100 * accept[5] / accept[6]
if(!is.null(X))
{
accept.beta <- 100 * accept[7] / accept[8]
if(accept.beta > 40)
{
proposal.sd.beta <- proposal.sd.beta + 0.1 * proposal.sd.beta
}else if(accept.beta < 20)
{
proposal.sd.beta <- proposal.sd.beta - 0.1 * proposal.sd.beta
}else
{
}
accept.all <- accept.all + accept
accept <- rep(0,8)
}else
{
accept.all <- accept.all + accept
accept <- rep(0,6)
}
#### lambda tuning parameter
if(accept.lambda > 40)
{
proposal.sd.lambda <- min(proposal.sd.lambda + 0.1 * proposal.sd.lambda, 10)
}else if(accept.lambda < 20)
{
proposal.sd.lambda <- proposal.sd.lambda - 0.1 * proposal.sd.lambda
}else
{
}
#### delta tuning parameter
if(accept.delta > 50)
{
proposal.sd.delta <- min(proposal.sd.delta + 0.1 * proposal.sd.delta, 10)
}else if(accept.delta < 40)
{
proposal.sd.delta <- proposal.sd.delta - 0.1 * proposal.sd.delta
}else
{
}
#### phi tuning parameter
if(accept.phi > 50)
{
proposal.sd.phi <- proposal.sd.phi + 0.1 * proposal.sd.phi
}else if(accept.phi < 40)
{
proposal.sd.phi <- proposal.sd.phi - 0.1 * proposal.sd.phi
}else
{
}
}else
{
}
################################
## print progress to the console
################################
if(j %in% percentage.points & verbose)
{
setTxtProgressBar(progressBar, j/n.sample)
}
}
# end timer
if(verbose)
{
cat("\nSummarising results")
close(progressBar)
}else
{}
###################################
#### Summarise and save the results
###################################
## Compute the acceptance rates
accept.lambda <- 100 * accept.all[1] / accept.all[2]
accept.delta <- 100 * accept.all[3] / accept.all[4]
accept.phi <- 100 * accept.all[5] / accept.all[6]
accept.gamma <- 100
if(!is.null(X))
{
accept.beta <- 100 * accept.all[7] / accept.all[8]
accept.final <- c(accept.beta, accept.lambda, accept.delta, accept.phi, accept.gamma)
names(accept.final) <- c("beta", "lambda", "delta", "phi", "rho.T")
}else
{
accept.final <- c(accept.lambda, accept.delta, accept.phi, accept.gamma)
names(accept.final) <- c("lambda", "delta", "phi", "rho.T")
}
## DIC
median.Z <- round(apply(samples.Z,2,median), 0)
median.lambda <- apply(samples.lambda, 2, median)
median.mu <- matrix(median.lambda[median.Z], nrow=K, ncol=N, byrow=FALSE)
if(!is.null(X))
{
median.beta <- apply(samples.beta,2,median)
regression.mat <- matrix(X.standardised %*% median.beta, nrow=K, ncol=N, byrow=FALSE)
}else
{
}
median.phi <- matrix(apply(samples.phi, 2, median), nrow=K, byrow=FALSE)
lp.median <- as.numeric(median.mu + offset.mat + median.phi + regression.mat)
median.prob <- exp(lp.median) / (1 + exp(lp.median))
fitted.median <- trials * median.prob
deviance.fitted <- -2 * sum(dbinom(x=Y, size=trials, prob=median.prob, log=TRUE))
p.d <- median(samples.deviance) - deviance.fitted
DIC <- 2 * median(samples.deviance) - deviance.fitted
#### Watanabe-Akaike Information Criterion (WAIC)
LPPD <- sum(log(apply(samples.like,2,mean)), na.rm=TRUE)
p.w <- sum(apply(log(samples.like),2,var), na.rm=TRUE)
WAIC <- -2 * (LPPD - p.w)
## Compute the LMPL
CPO <- rep(NA, N.all)
for(j in 1:N.all)
{
CPO[j] <- 1/median((1 / dbinom(x=Y[j], size=trials[j], prob=(samples.fitted[ ,j] / trials[j]))))
}
LMPL <- sum(log(CPO))
## Create the Fitted values
fitted.values <- apply(samples.fitted, 2, median)
residuals <- as.numeric(Y) - fitted.values
#### transform the parameters back to the origianl covariate scale.
if(!is.null(X))
{
samples.beta.orig <- samples.beta
number.cts <- sum(X.indicator==1)
if(number.cts>0)
{
for(r in 1:p)
{
if(X.indicator[r]==1)
{
samples.beta.orig[ ,r] <- samples.beta[ ,r] / X.sd[r]
}else
{
}
}
}else
{
}
}else
{}
#### Create a summary object
summary.hyper <- array(NA, c(3, 7))
summary.hyper[1,1:3] <- quantile(samples.delta, c(0.5, 0.025, 0.975))
summary.hyper[2,1:3] <- quantile(samples.tau2, c(0.5, 0.025, 0.975))
summary.hyper[3,1:3] <- quantile(samples.gamma, c(0.5, 0.025, 0.975))
rownames(summary.hyper) <- c("delta", "tau2", "rho.T")
summary.hyper[1, 4:7] <- c(n.keep, accept.delta, effectiveSize(mcmc(samples.delta)), geweke.diag(mcmc(samples.delta))$z)
summary.hyper[2, 4:7] <- c(n.keep, 100, effectiveSize(mcmc(samples.tau2)), geweke.diag(mcmc(samples.tau2))$z)
summary.hyper[3, 4:7] <- c(n.keep, 100, effectiveSize(mcmc(samples.gamma)), geweke.diag(mcmc(samples.gamma))$z)
summary.lambda <- array(NA, c(G,1))
summary.lambda <- t(apply(samples.lambda, 2, quantile, c(0.5, 0.025, 0.975)))
summary.lambda <- cbind(summary.lambda, rep(n.keep, G), rep(accept.lambda, G), effectiveSize(mcmc(samples.lambda)), geweke.diag(mcmc(samples.lambda))$z)
summary.lambda <- matrix(summary.lambda, ncol=7)
rownames(summary.lambda) <- paste("lambda", 1:G, sep="")
if(!is.null(X))
{
samples.beta.orig <- mcmc(samples.beta.orig)
summary.beta <- t(apply(samples.beta.orig, 2, quantile, c(0.5, 0.025, 0.975)))
summary.beta <- cbind(summary.beta, rep(n.keep, p), rep(accept.beta,p), effectiveSize(samples.beta.orig), geweke.diag(samples.beta.orig)$z)
rownames(summary.beta) <- colnames(X)
colnames(summary.beta) <- c("Median", "2.5%", "97.5%", "n.sample", "% accept", "n.effective", "Geweke.diag")
summary.results <- rbind(summary.beta, summary.lambda, summary.hyper)
}else
{
summary.results <- rbind(summary.lambda, summary.hyper)
}
summary.results[ , 1:3] <- round(summary.results[ , 1:3], 4)
summary.results[ , 4:7] <- round(summary.results[ , 4:7], 1)
colnames(summary.results) <- c("Median", "2.5%", "97.5%", "n.sample", "% accept", "n.effective", "Geweke.diag")
## Compile and return the results
modelfit <- c(DIC, p.d, WAIC, p.w, LMPL)
names(modelfit) <- c("DIC", "p.d", "WAIC", "p.w", "LMPL")
if(is.null(X)) samples.beta.orig = NA
samples <- list(beta=mcmc(samples.beta.orig), lambda=mcmc(samples.lambda), Z=mcmc(samples.Z), delta=mcmc(samples.delta), phi = mcmc(samples.phi), tau2=mcmc(samples.tau2), rho.T=mcmc(samples.gamma), fitted=mcmc(samples.fitted), deviance=mcmc(samples.deviance))
model.string <- c("Likelihood model - Binomial (logit link function)", "\nLatent structure model - Localised autoregressive CAR model\n")
results <- list(summary.results=summary.results, samples=samples, fitted.values=fitted.values, residuals=residuals, modelfit=modelfit, accept=accept.final, localised.structure=median.Z, formula=formula, model=model.string, X=X)
class(results) <- "carbayesST"
if(verbose)
{
b<-proc.time()
cat(" finished in ", round(b[3]-a[3], 1), "seconds")
}else
{}
return(results)
}
|
library(MPkn)
### Name: radekW
### Title: The Numbers of Rows of the Output Matrix
### Aliases: radekW
### Keywords: radekW
### ** Examples
radekW(n = c(3, 5, 8, 9, 11), k = c(1, 0, 1, 0, 0))
| /data/genthat_extracted_code/MPkn/examples/radekW.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 201 | r | library(MPkn)
### Name: radekW
### Title: The Numbers of Rows of the Output Matrix
### Aliases: radekW
### Keywords: radekW
### ** Examples
radekW(n = c(3, 5, 8, 9, 11), k = c(1, 0, 1, 0, 0))
|
library(shiny)
library(DT)
source("scripts/theme.R")
shinyUI(fluidPage(
theme = theme,
navbarPage(
"Montgomery Pets",
tabPanel(
"Welcome",
includeMarkdown("welcome.Rmd")
),
tabPanel(
"Overview",
uiOutput("overviewTitle"),
includeMarkdown("overview.Rmd"),
uiOutput("overviewSlider"),
plotOutput("overviewPlot")
),
tabPanel(
"Inspector",
uiOutput("inspectorTitle"),
includeMarkdown("inspector.Rmd"),
fluidRow(
column(9, uiOutput("inspectorSelected")),
column(3, actionButton("random", "Inspect random pet"))
),
fluidRow(
column(4, uiOutput("inspectorImage")),
column(4, plotOutput("inspectorMap")),
column(4, plotOutput("inspectorPlot"))
),
DTOutput("inspectorTable"),
),
tabPanel(
"Explorer",
uiOutput("explorerTitle"),
includeMarkdown("explorer.Rmd"),
# First plot and description
fluidRow(
column(5, plotOutput("explorerPlot1")),
column(4, uiOutput("explorerPlot1Text"))
),
# Second plot and controls
fluidRow(
column(
4,
sidebarPanel(width = 10,
radioButtons(
"explorerSpecies",
label = "View up to 6 top breeds for species:",
choices = c("All", "Cats", "Dogs", "Birds and Others"),
selected = "All"
)
)
),
column(5, plotOutput("explorerPlot2"))
)
),
tabPanel(
"Takeaways",
includeMarkdown("takeaways.Rmd"),
plotOutput("takeawaysPlot")
)
)
))
| /ui.R | permissive | xyx0826/adoptable-pets-viz | R | false | false | 2,051 | r | library(shiny)
library(DT)
source("scripts/theme.R")
shinyUI(fluidPage(
theme = theme,
navbarPage(
"Montgomery Pets",
tabPanel(
"Welcome",
includeMarkdown("welcome.Rmd")
),
tabPanel(
"Overview",
uiOutput("overviewTitle"),
includeMarkdown("overview.Rmd"),
uiOutput("overviewSlider"),
plotOutput("overviewPlot")
),
tabPanel(
"Inspector",
uiOutput("inspectorTitle"),
includeMarkdown("inspector.Rmd"),
fluidRow(
column(9, uiOutput("inspectorSelected")),
column(3, actionButton("random", "Inspect random pet"))
),
fluidRow(
column(4, uiOutput("inspectorImage")),
column(4, plotOutput("inspectorMap")),
column(4, plotOutput("inspectorPlot"))
),
DTOutput("inspectorTable"),
),
tabPanel(
"Explorer",
uiOutput("explorerTitle"),
includeMarkdown("explorer.Rmd"),
# First plot and description
fluidRow(
column(5, plotOutput("explorerPlot1")),
column(4, uiOutput("explorerPlot1Text"))
),
# Second plot and controls
fluidRow(
column(
4,
sidebarPanel(width = 10,
radioButtons(
"explorerSpecies",
label = "View up to 6 top breeds for species:",
choices = c("All", "Cats", "Dogs", "Birds and Others"),
selected = "All"
)
)
),
column(5, plotOutput("explorerPlot2"))
)
),
tabPanel(
"Takeaways",
includeMarkdown("takeaways.Rmd"),
plotOutput("takeawaysPlot")
)
)
))
|
source("common.R")
png("./plot1.png", width = 480, height = 480, units = "px")
hist(data$Global_active_power, col = "red", xlab = "Global Active Power (kilowatts)", main = "Global Active Power" )
dev.off()
| /plot1.R | no_license | brett-shwom/ExData_Plotting1 | R | false | false | 206 | r | source("common.R")
png("./plot1.png", width = 480, height = 480, units = "px")
hist(data$Global_active_power, col = "red", xlab = "Global Active Power (kilowatts)", main = "Global Active Power" )
dev.off()
|
# Replication Archive for:
# Kirkland, Patricia A. and Alexander Coppock
# Candidate Choice without Party Labels: New Insights from Conjoint Survey Experiments
# Forthcoming at Political Behavior
# Helper Functions
gen_entry <- function(est, se, p){
entry <- paste0(format_num(est, digits = 2), " (", format_num(se, 2), ")")
if(p < 0.05) {
entry <- paste0(entry, "*")
}
return(entry)
}
gen_entry_vec <- Vectorize(gen_entry)
se_mean <- function(x){
x_nona <- x[!is.na(x)]
n <- length(x_nona)
return(sd(x_nona)/(sqrt(n)))
}
format_num <- function(x, digits=3){
x <- as.numeric(x)
return(paste0(sprintf(paste0("%.", digits, "f"), x)))
}
make_attributes <- function(df){
df <- within(df,{
attribute = factor(attribute,
levels = c("Male", "Base category: Female",
"65", "55", "45",
"Base category: 35",
"Asian",
"Black",
"Hispanic",
"Base category: White",
"Attorney",
"Business Executive",
"Small Business Owner",
"Police Officer",
"Electrician",
"Stay-at-Home Dad/Mom",
"Base category: Educator",
"Representative in Congress",
"Mayor",
"State Legislator",
"City Council Member",
"School Board President",
"Base category: No Political Experience",
"Republican",
"Democrat",
"Base category: Independent"))
})
return(df)
}
make_coef_group <- function(df){
df <- within(df,{
coef_group <- rep(NA, nrow(df))
coef_group[grepl(pattern="Gender", x = rowname)] <- "Gender"
coef_group[grepl(pattern="Age", x = rowname)] <- "Age"
coef_group[grepl(pattern="Race", x = rowname)] <- "Race"
coef_group[grepl(pattern="Job", x = rowname)] <- "Job Experience"
coef_group[grepl(pattern="Political", x = rowname)] <- "Pol. Experience"
coef_group[grepl(pattern="Party", x = rowname)] <- "Party"
coef_group <- factor(coef_group, levels=c("Party", "Pol. Experience", "Job Experience",
"Race", "Age", "Gender"))
})
return(df)
}
prep_for_gg <- function(fit_cl, WP=FALSE){
df <- data.frame(ests =fit_cl[,1], ses = fit_cl[,2]) %>%
add_rownames()
df <- filter(df, grepl(pattern = "Gender|Age|Race|Job|Political|Party", x = df$rowname))
df <- within(df,{
uis <- ests + 1.96*ses
lis <- ests - 1.96*ses
attribute <- sub(pattern = "Gender|Age|Race|Job|Political|Party", replacement = "", x = df$rowname)
attribute <- factor(attribute, levels = c("Male", "Base category: Female",
"65", "55", "45",
"Base category: 35",
"Asian",
"Black",
"Hispanic",
"Base category: White",
"Attorney",
"Business Executive",
"Small Business Owner",
"Police Officer",
"Electrician",
"Stay-at-Home Dad/Mom",
"Base category: Educator",
"Representative in Congress",
"Mayor",
"State Legislator",
"City Council Member",
"School Board President",
"Base category: No Political Experience"))
if(WP){
attribute <- sub(pattern = "Gender|Age|Race|Job|Political|Party", replacement = "", x = df$rowname)
attribute <- factor(attribute, levels = c("Male", "Base category: Female",
"65", "55", "45",
"Base category: 35",
"Asian",
"Black",
"Hispanic",
"Base category: White",
"Attorney",
"Business Executive",
"Small Business Owner",
"Police Officer",
"Electrician",
"Stay-at-Home Dad/Mom",
"Base category: Educator",
"Representative in Congress",
"Mayor",
"State Legislator",
"City Council Member",
"School Board President",
"Base category: No Political Experience",
"Republican",
"Democrat",
"Base category: Independent"))
}
})
return(df)
}
cl <- function(dat,fm, cluster){
require(sandwich, quietly = TRUE)
require(lmtest, quietly = TRUE)
M <- length(unique(cluster))
N <- length(cluster)
K <- fm$rank
dfc <- (M/(M-1))*((N-1)/(N-K))
uj <- apply(estfun(fm),2, function(x) tapply(x, cluster, sum));
vcovCL <- dfc*sandwich(fm, meat=crossprod(uj)/N)
coeftest(fm, vcovCL) }
cl_vcov <- function(dat,fm, cluster){
require(sandwich, quietly = TRUE)
require(lmtest, quietly = TRUE)
M <- length(unique(cluster))
N <- length(cluster)
K <- fm$rank
dfc <- (M/(M-1))*((N-1)/(N-K))
uj <- apply(estfun(fm),2, function(x) tapply(x, cluster, sum));
vcovCL <- dfc*sandwich(fm, meat=crossprod(uj)/N)
return(vcovCL)}
| /references/kirkland_2017/mayors_source.R | permissive | timothyb0912/checking-zoo | R | false | false | 7,696 | r | # Replication Archive for:
# Kirkland, Patricia A. and Alexander Coppock
# Candidate Choice without Party Labels: New Insights from Conjoint Survey Experiments
# Forthcoming at Political Behavior
# Helper Functions
gen_entry <- function(est, se, p){
entry <- paste0(format_num(est, digits = 2), " (", format_num(se, 2), ")")
if(p < 0.05) {
entry <- paste0(entry, "*")
}
return(entry)
}
gen_entry_vec <- Vectorize(gen_entry)
se_mean <- function(x){
x_nona <- x[!is.na(x)]
n <- length(x_nona)
return(sd(x_nona)/(sqrt(n)))
}
format_num <- function(x, digits=3){
x <- as.numeric(x)
return(paste0(sprintf(paste0("%.", digits, "f"), x)))
}
make_attributes <- function(df){
df <- within(df,{
attribute = factor(attribute,
levels = c("Male", "Base category: Female",
"65", "55", "45",
"Base category: 35",
"Asian",
"Black",
"Hispanic",
"Base category: White",
"Attorney",
"Business Executive",
"Small Business Owner",
"Police Officer",
"Electrician",
"Stay-at-Home Dad/Mom",
"Base category: Educator",
"Representative in Congress",
"Mayor",
"State Legislator",
"City Council Member",
"School Board President",
"Base category: No Political Experience",
"Republican",
"Democrat",
"Base category: Independent"))
})
return(df)
}
make_coef_group <- function(df){
df <- within(df,{
coef_group <- rep(NA, nrow(df))
coef_group[grepl(pattern="Gender", x = rowname)] <- "Gender"
coef_group[grepl(pattern="Age", x = rowname)] <- "Age"
coef_group[grepl(pattern="Race", x = rowname)] <- "Race"
coef_group[grepl(pattern="Job", x = rowname)] <- "Job Experience"
coef_group[grepl(pattern="Political", x = rowname)] <- "Pol. Experience"
coef_group[grepl(pattern="Party", x = rowname)] <- "Party"
coef_group <- factor(coef_group, levels=c("Party", "Pol. Experience", "Job Experience",
"Race", "Age", "Gender"))
})
return(df)
}
prep_for_gg <- function(fit_cl, WP=FALSE){
df <- data.frame(ests =fit_cl[,1], ses = fit_cl[,2]) %>%
add_rownames()
df <- filter(df, grepl(pattern = "Gender|Age|Race|Job|Political|Party", x = df$rowname))
df <- within(df,{
uis <- ests + 1.96*ses
lis <- ests - 1.96*ses
attribute <- sub(pattern = "Gender|Age|Race|Job|Political|Party", replacement = "", x = df$rowname)
attribute <- factor(attribute, levels = c("Male", "Base category: Female",
"65", "55", "45",
"Base category: 35",
"Asian",
"Black",
"Hispanic",
"Base category: White",
"Attorney",
"Business Executive",
"Small Business Owner",
"Police Officer",
"Electrician",
"Stay-at-Home Dad/Mom",
"Base category: Educator",
"Representative in Congress",
"Mayor",
"State Legislator",
"City Council Member",
"School Board President",
"Base category: No Political Experience"))
if(WP){
attribute <- sub(pattern = "Gender|Age|Race|Job|Political|Party", replacement = "", x = df$rowname)
attribute <- factor(attribute, levels = c("Male", "Base category: Female",
"65", "55", "45",
"Base category: 35",
"Asian",
"Black",
"Hispanic",
"Base category: White",
"Attorney",
"Business Executive",
"Small Business Owner",
"Police Officer",
"Electrician",
"Stay-at-Home Dad/Mom",
"Base category: Educator",
"Representative in Congress",
"Mayor",
"State Legislator",
"City Council Member",
"School Board President",
"Base category: No Political Experience",
"Republican",
"Democrat",
"Base category: Independent"))
}
})
return(df)
}
cl <- function(dat,fm, cluster){
require(sandwich, quietly = TRUE)
require(lmtest, quietly = TRUE)
M <- length(unique(cluster))
N <- length(cluster)
K <- fm$rank
dfc <- (M/(M-1))*((N-1)/(N-K))
uj <- apply(estfun(fm),2, function(x) tapply(x, cluster, sum));
vcovCL <- dfc*sandwich(fm, meat=crossprod(uj)/N)
coeftest(fm, vcovCL) }
cl_vcov <- function(dat,fm, cluster){
require(sandwich, quietly = TRUE)
require(lmtest, quietly = TRUE)
M <- length(unique(cluster))
N <- length(cluster)
K <- fm$rank
dfc <- (M/(M-1))*((N-1)/(N-K))
uj <- apply(estfun(fm),2, function(x) tapply(x, cluster, sum));
vcovCL <- dfc*sandwich(fm, meat=crossprod(uj)/N)
return(vcovCL)}
|
### Making sure directory is set correctly
setwd(here::here())
## Loading functions
source('Simulation Functions.R')
##### Running simulations
### Preparing parallelization
pack.name <- c('MASS', 'simesHotelling', 'Matrix', 'highmean', 'highD2pop')
cl = makeCluster(parallel::detectCores() - 1) ## Detect cores
registerDoParallel(cl)
## Default values
iter.num <- 3
false.hypo <- c(0.01, 0.05, 0.15)
obs <- c(20, 50, 100)
type <- 'Unif'
alpha <- 0.05
setwd(here::here())
false.hypo <- c(0)
source('Parametric Tests/Type I error/Simulation Script Parametric - Alpha.R')
setwd(here::here())
false.hypo <- c(0.01, 0.05, 0.15)
source('Parametric Tests/Power/Simulation Script Parametric - Power.R')
setwd(here::here())
false.hypo <- c(0)
source('Non-equal Covaraince/Type I error/Simulation Script Nonequal - Alpha.R')
setwd(here::here())
false.hypo <- c(0.01, 0.05, 0.15)
source('Non-equal Covaraince/Power/Simulation Script Nonequal - Power.R')
#### Plotting results
### Making sure directory is set correctly
setwd(here::here())
## Loading functions
source('Simulation Functions.R')
### Plotting parametric power results
setwd(here::here())
source('Parametric Tests/Parametric - plotting and princeton.R')
### Plotting parametric power results
setwd(here::here())
source('Parametric Tests/Parametric - plotting and princeton.R')
| /Run simulations.R | no_license | tfrostig/Simulations-Simes-Hotelling | R | false | false | 1,384 | r | ### Making sure directory is set correctly
setwd(here::here())
## Loading functions
source('Simulation Functions.R')
##### Running simulations
### Preparing parallelization
pack.name <- c('MASS', 'simesHotelling', 'Matrix', 'highmean', 'highD2pop')
cl = makeCluster(parallel::detectCores() - 1) ## Detect cores
registerDoParallel(cl)
## Default values
iter.num <- 3
false.hypo <- c(0.01, 0.05, 0.15)
obs <- c(20, 50, 100)
type <- 'Unif'
alpha <- 0.05
setwd(here::here())
false.hypo <- c(0)
source('Parametric Tests/Type I error/Simulation Script Parametric - Alpha.R')
setwd(here::here())
false.hypo <- c(0.01, 0.05, 0.15)
source('Parametric Tests/Power/Simulation Script Parametric - Power.R')
setwd(here::here())
false.hypo <- c(0)
source('Non-equal Covaraince/Type I error/Simulation Script Nonequal - Alpha.R')
setwd(here::here())
false.hypo <- c(0.01, 0.05, 0.15)
source('Non-equal Covaraince/Power/Simulation Script Nonequal - Power.R')
#### Plotting results
### Making sure directory is set correctly
setwd(here::here())
## Loading functions
source('Simulation Functions.R')
### Plotting parametric power results
setwd(here::here())
source('Parametric Tests/Parametric - plotting and princeton.R')
### Plotting parametric power results
setwd(here::here())
source('Parametric Tests/Parametric - plotting and princeton.R')
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/abm-acor.R
\name{acor.lthgaussian}
\alias{acor.lthgaussian}
\title{Select the lth gaussian function}
\usage{
acor.lthgaussian(W)
}
\arguments{
\item{W}{The vector of weights}
}
\value{
The index of lht gaussian function
}
\description{
Given a weight vector calculate the probabilities of selecting
the lth gaussian function and return the index of lht gaussian selected with
probability p
}
\references{
[1] Socha, K., & Dorigo, M. (2008). Ant colony optimization for continuous domains.
European Journal of Operational Research, 185(3), 1155-1173.
http://doi.org/10.1016/j.ejor.2006.06.046
}
| /man/acor.lthgaussian.Rd | permissive | antonio-pgarcia/evoper | R | false | true | 672 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/abm-acor.R
\name{acor.lthgaussian}
\alias{acor.lthgaussian}
\title{Select the lth gaussian function}
\usage{
acor.lthgaussian(W)
}
\arguments{
\item{W}{The vector of weights}
}
\value{
The index of lht gaussian function
}
\description{
Given a weight vector calculate the probabilities of selecting
the lth gaussian function and return the index of lht gaussian selected with
probability p
}
\references{
[1] Socha, K., & Dorigo, M. (2008). Ant colony optimization for continuous domains.
European Journal of Operational Research, 185(3), 1155-1173.
http://doi.org/10.1016/j.ejor.2006.06.046
}
|
#Data Preparation:
#_________________________________________________________________________
#1. remove na. (done)
#2. impute na : (done)
#3. finding proportion of NAs (done sum(is.na(x))/nrow(object) with vapply or map FROM purrr)
#4. Diagraming using Amelia package (done in missmap from Amelia)
#5. feature plot (done with featurePlot function it consist of argument x= predictor variables, y=response variables, plot= "the type of plot which we want")
#6. VIF
#7. Transforming variables when needed. (done in preProcessing function in caret PACKAGE)
#8. Deciding on when to drop variable with many NA (not required)
#9. NA when variable is categorical (done with missMDA PACKAGE estim_ncpMCA and MIMCA)
#10. String extraction using rebus and stringr (done practice to be done)
#11. Efficient use of various stringr functions
#12. Use of gsub for substituting
#13. Use of gather, spread, unite and separate for restructuring the data.
#14. Model that dont need much data preparation. (Check out the machine learning model)
#15. dplyr functionality specific functions
#16. use of map in dplyr
#17. date manipulation with lubridate
#18. How to work on dummy variables is it needed to know
#19. for loop in mutate
#20. use of sqldf in r and comparison of it with. (done)
#21. Imputation of the NA values for 1. Categorical 2. for numeric (done)
#22. Outlier treatment
#23. Scaling and Weighting in R (done with preProcess function in caret where we choose the requisite method)
#24. Oversampling, undersampling or both (using ovun function from ROSE PACKAGE)
#25. factor plot
#Commonly used Packages:
library(dplyr) #select filter mutate group_by summarise
library(sqldf) #Running SQL Queries on R (SQL Codes equivalent to writing dplyr stuff can also be made)
library(stringr)#Contains a lot of string manipulating functions
library(rebus) #This makes writing regular expressions very easy this is used with stringr package
library(Amelia) #This package contain missmap through which we can view where the missing values are
#its also used to imputation of the missing values
library(readr) #Importing data from csv and txt into r as tibble dataframe
library(readxl) #importing excel file
library(data.table) #fread
library(tidyr) #Unite Separate Gather spread nest unnest to change the table orientation and uniting and separating columns.
library(broom) #tidy-> used to get overall model parameters in a table format,augment-> used to get data point specific
#parameters like fit, leverage , cook distance
library(purrr) #Functional programming can be written, map map2 , map_if, safely, invoke_map
library(lubridate) #different date format to ISO 8601 format
library(ggplot2) #Plotting graphs
library(ggforce) #Use to develop paginated graphs when using facets(When using facets in cases where the number of
#levels are more we can fix the number of rows, columns and number of pages by it)
library(corrplot)#We can draw correlation plot which is basically a heatmap where darker shade represent higher
#correlation and lighter represent lower order correlation
library(caret) #When the output variables are numeric we can use it to draw feature plot which is nothing but factor plot
#preProcess
library(scales) #used with ggplot to provide breaks in the axis
#1. Finding the number of NAs individually
#_________________________________________________________________
a<-c(rnorm(100,mean=200,sd=32),NA,NA,NA,NA,NA,NA) #creating random numbers in r
b<-c(runif(100,min=20,max=300),NA,NA,NA,NA,NA,NA)
a<-as.data.frame(a)
b<-as.data.frame(b)
c<-cbind(a,b)
sum(is.na(c$a)) #Finding the number of NAs
sum(is.na(c$b))/nrow(c) #Finding the proportion of NAs
#If I have 2 data object and I want to find the na in both of them then how to do it ?????
#2. Finding the number of NAs over the columns
#___________________________________________________________________
map(c,function(x)sum(is.na(x))) #Its anonymous function
map(c,~sum(is.na(.)))
#apply : apply(X, MARGIN( 1 for row, 2 for column), FUN, ...)
#sapply : sapply(X, FUN, ..., simplify = TRUE, USE.NAMES = TRUE)
sapply(c,function(x)sum(is.na(x))) #sapply is a user-friendly version and wrapper of lapply by default returning a vector, matrix
apply(c,2,function(x)sum(is.na(x)))
##3 Finding proportion of NAs
map(c,function(x)sum(is.na(x))/nrow(c)*100)
#4. Diagraming using Amelia package
#____________________________________________________________________
#Seeing the graphic of the missing values in R using missmap function in Amelia package in R
missmap(c)
#5. Missing value imputation in R :mice for numeric and missMDA for categorical
#_____________________________________________________________________
#Missing data in the training data set can reduce the power/fit of the model or can lead to a biased
#model because we have not analysed the behavior and the relationship with other variables correctly
#It can lead to wrong prediction or classification.
#Why my data has missing values?
#1. Data extraction: Error at this stage are typically easy to find and corrected.
#2. Data Collection: a. Missing completely at random (probability of missing variable is same for all observation)
# b. Missing at random (missing ratio varies for different level and inout variables)
# c. Missing that depends on unobserved predictors (eg. if in medical particular diagnostic causes discomfort then there is higher chance of drop out from the study)
# d. Missing that depend on missing values itself (eg. People with higher or lower income are likely to provide non-response to their earning)
library(mlbench)
data("PimaIndiansDiabetes")
missmap(PimaIndiansDiabetes) #0 missing values
setwd("C:\\Users\\fz1775\\Desktop\\MACHINE LEARNING\\Testing ML\\Practice Dataset")
bank_data<-read_csv("Bank_data_with_missing.csv")
missmap(bank_data) #Very traces are missing
map(bank_data,function(x)sum(is.na(x))) #number of missing data points
map(bank_data,function(x)sum(is.na(x))/nrow(bank_data)*100) #proportion of missing data points
summary(bank_data)
#For Categorical data
#Based on the problem at hand , we can try to do one of the following
#1. Mode is one of the option which can be used
#2. Missing values can be treated as a separate category by itself, We can create another category
# for missing values and use them as a different level
#3. If the number of missing values are lesser compared to the number of samples and also number of
# samples are high, we can also choose to remove those rows in our analysis
#4. We can run model to predict the missing values using the other variables as input
#R provides MICE(Multiple imputation by chained equation) package and Amelia package for handling
#missing values
#for MICE follow the steps below
#1. Change variable(with missing values) into factors by as.factor()
#2. Create a data set of all the known variable and the missing values
#3. Read about the complete() command from the MICE package and apply to the new data set.
#How to treat the missing values in R?
#1. Deletion
#2. Mean Median Mode
#3. Prediction Model
#4. KNN Imputation
#Imputation of Numerical data:
library(mice) # Multivariate Imputation via chained equations
md.pattern(bank_data) #This can also be used togather with missmap
bank<-mice(bank_data,m=5,maxit=50,meth='pmm',seed=500)
#m -> refers to number of imputed datasets (Number of multiple imputations. The default is m=5.)
#m -> A scalar giving the number of iterations. The default is 5.
#meth="pmm" refers to the imputation method (Predictive mean matching)
summary(bank)
completedata<-complete(bank,1)
map(completedata,function(x)sum(is.na(x))) #We can see the numeric variables are no more empty
#What is MICE?
#Missing data are a common problem in psychiatric research. Multivariate imputation
#by chained equations (MICE), sometimes called “fully conditional specification” or
#“sequential regression multiple imputation”
#While complete case analysis may be easy to implement it relies upon stronger missing data assumptions than multiple imputation and it can result in biased estimates and a reduction in power
# Single imputation procedures, such as mean imputation, are an improvement but do not account for the uncertainty in the imputations; once the imputation is completed, analyses proceed as if the imputed values were the known, true values rather than imputed. This will lead to overly precise results and the potential for incorrect conclusions.
#Maximum likelihood methods are sometimes a viable approach for dealing with missing data (Graham, 2009); however, these methods are primarily available only for certain types of models
#mice package in R:
# The mice package implements a method to deal with missing data. The package creates multiple imputations
# (replacement values) for multivariate missing data. The method is based on Fully Conditional Specification,
# where each incomplete variable is imputed by a separate model. The MICE algorithm can impute mixes of
# continuous, binary, unordered categorical and ordered categorical data. In addition, MICE can impute
# continuous two-level data, and maintain consistency between imputations by means of passive imputation.
# Many diagnostic plots are implemented to inspect the quality of the imputations.
#Multiple imputation has a number of advantages over these other missing data approaches. Multiple imputation involves filling in the missing values multiple times, creating multiple “complete” datasets. Described in detail by Schafer and Graham (2002), the missing values are imputed based on the observed values for a given individual and the relations observed in the data for other participants, assuming the observed variables are included in the imputation model.
#Because multiple imputation involves creating multiple predictions for each missing value, the analyses of multiply imputed data take into account the uncertainty in the imputations and yield accurate standard errors. On a simple level, if there is not much information in the observed data (used in the imputation model) regarding the missing values, the imputations will be very variable, leading to high standard errors in the analyses.
library(missMDA)
#What is missMDA?
#The missMDA package quickly generates several imputed datasets with quantitative variables
#and/or categorical variables. It is based on the
#1. dimentionality reduction method such as PCA for continuous variables or
#2. multiple correspondance analysis for categorical variables.
#Compared to the Amelia and mice, it better handles cases where the number of variables
#is larger than the number of units and cases where regularization is needed. For categorical
#variables, it is particularly interesting with many variables and many levels but also
#with rare level
#Partition the data to categorical only
bank_data_cat<-bank_data%>%select(2:5,7:9,11,16:17)%>%map(.,as.factor)
nb<-estim_ncpMCA(bank_data_cat,ncp.max=5) #Time consuming, nb=4 (Better to convert the data to factor)
# Takes almost 2 hours
res<-MIMCA(bank_data_cat,ncp=4,nboot=1) #MIMCA performs multiple imputations for categorical data using Multiple Correspondence Analysis.
#nboot: the number of imputed datasets it should be 1
a1<-as.data.frame(res$res.MI) #we can get the imputed data by this step
#We can finally merge the numeric and the categorical togather.
?estim_ncpMCA
?MIMCA
#Numeric Data: mice function then completedata function on the output of mice function.
# The principle followed with Multiple Imputation with chained equations.
#Categorical Data: select only the categorical data then estim_ncpMCA(categorical_object,ncp.max=5)
# MCA Stands for multiple correspondance analysis then use MIMCA(bank_data_cat,ncp=4,nboot=10)
#6. Preprocessing of data in R using caret
#_____________________________________________________________________
#library(caret)
preProcessValues_scale<-preProcess(bank_data_num,method="scale") #scale means (x- mean(X)/sd of X)
#Check out which variable to scale and center
preProcessValues_center<-preProcess(bank_data_num,method="center") # center means X-mean(X)
#There are a number of preprocessing methods available in R
#1. BoxCox
#2. YeoJohnson
#3. expoTrans
#4. center
#5. scale
#6. range
#7. pca
#8. ica
#9. corr
#7. Feature Plot in R : This the factor plot
#___________________________________________________________________________-
# Here we first split the data to the response variable (Y) and the predictor variable (X)
# split input and output
y <- bank_data[,14]
x <- bank_data[,c(1:13,15:17)]
# scatterplot matrix / feature plot is same as factor analysis plot
library(ellipse)
featurePlot(x=x, y=y, plot="ellipse") #ellipse package required
# box and whisker plots for each attribute
featurePlot(x=x, y=y, plot="box")
# density plots for each attribute
featurePlot(x=x, y=y, plot="density")
# pairs plot for each attribute
featurePlot(x=x, y=y, plot="pairs") #just like ellipse plot with ellipse not present, more like scatter plot
#8. Correlation Matrix and correlation plot in R.
#__________________________________________________________________
correlationMatrix <- cor(bank_data[,c(1,6,10,12:15)]) #Taking only the numeric variables
# summarize the correlation matrix
print(correlationMatrix)
#Plotting the corrplot to remove trhe redundant factors
library(corrplot)
corrplot(correlationMatrix,method="color")
#9. String Manipulation in R using stringr package
#_________________________________________________________
# rebus provides START and END shortcuts to specify regular expressions
# that match the start and end of the string. These are also known as anchors
library(rebus)
#gsub("Patten which we now want to keep","old pattern",Variable_of_interest)
## Join Multiple Strings Into A Single String.+++++++++++++++++++++++
#str_c() str_c(..., sep = "", collapse = NULL)
a2<-"x"
b2<-c("y",NA)
str_c(a2,b2,sep="~")
#Missing values are contagious so convert NA to "NA" by
str_replace_na(b2) #and then use str_c which essentially means string concatenate
#str_detect(variable,pattern=) Result: TRUE FALSE IMPORTANT
#str_subset (variable, pattern=) Result: Only those variable having match with the pattern IMPORTANT
#str_count (variable,pattern=) Result: 0 1 IMPORTANT
#str_split(variable,pattern=) Result: split the variable into two parts
#str_replace (variable,pattern="khk", replacement="vjjb") IMPORTANT
#Like the dplyr the rebus package also help to write text pattern using pipe
# %R%
# START %R% ANY_CHAR %R% one_or_more(DGT)
# %R% END
# optional()
# zero_or_more()
# one_or_more()
# repeated()
# DGT
# WRD
# SPC
# DOLLAR %R% DGT %R% optional(DGT) %R% DOT %R% dgt(2)
#10. sqldf PACKAGE functionalities in R for data manipulation
#______________________________________________________________________
bank_sql1<-sqldf("SELECT * from bank_data")
bank_sql1
CP_State_Lookup<-read_csv("CP_State_Lookup.csv")
CP_Scenario_Lookup<-read_csv("CP_Scenario_Lookup.csv")
CP_Product_Lookup<-read_csv("CP_Product_Lookup.csv")
CP_Industry_Lookup<-read_csv("CP_Industry_Lookup.csv")
CP_Executive_Lookup<-read_csv("CP_Executive_Lookup.csv")
CP_Date_Lookup<-read_csv("CP_Date_Lookup.csv")
CP_Customer_Lookup<-read_csv("CP_Customer_Lookup.csv")
CP_BU_Lookup<-read_csv("CP_BU_Lookup.csv")
CP_RevenueTaxData_Fact<-read_csv("CP_Revenue Tax Data_Fact.csv")
#INNER JOIN: USED TO COMPARE MULTIPLE TABLES, REPORT MATCHING DATA.(Matching data with respect to the variable in ON)
CP_BU_Lookup_1_9<-CP_BU_Lookup%>%filter(`BU Key`>=1&`BU Key`<=9)
CP_Innerjoin<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact INNER JOIN CP_BU_Lookup_1_9 ON CP_RevenueTaxData_Fact.`BU Key`=CP_BU_Lookup_1_9.`BU Key`")
#OUTER JOINS : USED TO COMPARE MULTIPLE TABLES, REPORT MATCHING & MISSING DATA
# LEFT OUTER JOIN : All Left Table Data + Matching Right Table Data.
# Non match Right table data is reported as null.
CP_LeftOuter<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact LEFT JOIN CP_BU_Lookup_1_9 ON CP_RevenueTaxData_Fact.`BU Key`=CP_BU_Lookup_1_9.`BU Key`")
# FULL OUTER JOIN : Combined output of LEFT OUTER JOIN + RIGHT OUTER JOIN
#CROSS JOIN => all type of combination
CP_CrossJoin<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact CROSS JOIN CP_BU_Lookup_1_9")
#GROUP BY: by using group by alone we get the last data only
CP_Groupby_alone<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact GROUP BY `BU Key`")
#GROUP BY SUM
CP_Groupby_sum_Revenue<-sqldf("SELECT `BU Key`, SUM(Revenue) FROM CP_RevenueTaxData_Fact GROUP BY `BU Key`")
#USING GROUP BY TO GET ONLY UNIQUE VALUES
CP_Groupby_Unique_Values<-sqldf("SELECT `BU Key` FROM CP_RevenueTaxData_Fact GROUP BY `BU Key`")
#RULE : WHENEVER WE USE GROUP BY THEN COLUMNS USED IN SELECT SHOULD ALSO BE IN GROUP BY
CP_Groupby_Multiple_variable<-sqldf("SELECT `BU Key`, `Customer Key`, SUM(Revenue)
from CP_RevenueTaxData_Fact
GROUP BY `BU Key`, `Customer Key`")
#WHEN WE WANT TO USE FILTER ON AGGREGATE OF COLUMN IN GROUP BY WE USE HAVING
CP_Groupby_Multiple_variable_having<-sqldf("SELECT `BU Key`, `Customer Key`, SUM(Revenue)
from CP_RevenueTaxData_Fact
GROUP BY `BU Key`, `Customer Key`
HAVING SUM(Revenue)<3336145.89")
#UNION MEANS The UNION command combines the result set of two or more SELECT statements (only distinct values)
#The UNION ALL command combines the result set of two or more SELECT statements (allows duplicate values).
CP_RevenueTaxData_Fact_BU1_9<-CP_RevenueTaxData_Fact%>%filter(`BU Key`>=1 & `BU Key` <10)
CP_RevenueTaxData_Fact_BU5_end<-CP_RevenueTaxData_Fact%>%filter(`BU Key`>=5)
#UNION
CP_Revenue_Union<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact_BU1_9 UNION SELECT * FROM CP_RevenueTaxData_Fact_BU5_end")
#UNION ALL
CP_Revenue_Unionall<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact_BU1_9 UNION ALL SELECT * FROM CP_RevenueTaxData_Fact_BU5_end")
#WHERE FOR NON AGGREGATE COLUMN 1st + GROUP BY for AGGREGATED COLUMN 2nd +HAVING
CP_Revenue_where_groupby<-sqldf("SELECT `BU Key`, `Customer Key`, SUM(Revenue)
from CP_RevenueTaxData_Fact
WHERE `BU KEY` BETWEEN 1 AND 10
GROUP BY `BU Key`, `Customer Key`
HAVING SUM(Revenue)<3336145.89")
#SUBQUERIES IN SQLDF (SEMI JOIN)
CP_Revenue_Subq1<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact
WHERE `BU KEY` IN (SELECT `BU KEY` FROM CP_BU_Lookup_1_9)")
#SUBQUERIES IN SQLDF (ANTI JOIN)
CP_Revenue_Subq12<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact
WHERE `BU KEY` NOT IN (SELECT `BU KEY` FROM CP_BU_Lookup_1_9)")
#MORE COMPLICATED SUBQUERIES IN SQLDF
CP_Revenue_Subq123<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact
WHERE `BU KEY` IN (SELECT `BU KEY` FROM CP_BU_Lookup_1_9 WHERE Executive_id < 5)")
#cASE ( mindful of `Variable` and 'Text')
CP_Revenue_Case<-sqldf("SELECT `Customer Key`,
CASE
WHEN `Customer Key` < 5000 THEN 'CK < 5000'
WHEN `Customer Key` BETWEEN 5000 AND 10000 THEN 'CK BETWEEN 5000 AND 10000'
WHEN `Customer Key` > 10000 THEN 'CK >10000'
ELSE 'CK not in list'
END Ransingh
FROM CP_RevenueTaxData_Fact")
# REQ 1 : HOW TO REPORT ALL COURSES & RESPECTIVE STUDENTS IN EACH COURSE ?
# SELECT * FROM COURSES
# INNER JOIN
# tblStudents
# ON tblStudents.StdCourse_ID = COURSES.COURSE_ID
#
#
# REQ 2 : HOW TO REPORT ALL COURSES WITH AND WITHOUT STUDENTS ?
# SELECT * FROM COURSES
# LEFT OUTER JOIN
# tblStudents
# ON COURSES.COURSE_ID = tblStudents.StdCourse_ID
#
# REQ 3 : HOW TO REPORT ALL COURSES WITH AND WITHOUT STUDENTS ?
# SELECT * FROM tblStudents
# RIGHT OUTER JOIN
# COURSES
# ON
# COURSES.COURSE_ID = tblStudents.StdCourse_ID
#
#
# -- REQ 4 : HOW TO REPORT LIST OF ALL COURSES WITHOUT STUDENTS?
# SELECT * FROM COURSES
# LEFT OUTER JOIN
# tblStudents
# ON
# COURSES.COURSE_ID = tblStudents.StdCourse_ID
# WHERE
# tblStudents.StdCourse_ID IS NULL
#
# -- REQ 5 : HOW TO REPORT LIST OF ALL COURSES AND STUDENTS?
# SELECT * FROM COURSES CROSS JOIN tblStudents
# SELECT * FROM COURSES CROSS APPLY tblStudents
#
# -- REQ 6 : HOW TO TUNE QUERIES WITH JOINS [FOR BIG TABLES] ?
# SELECT * FROM COURSES
# INNER MERGE JOIN
# tblStudents
# ON COURSES.COURSE_ID = tblStudents.StdCourse_ID
#
# -- REQ 7 : HOW TO TUNE QUERIES WITH JOINS [FOR SMALL TABLES] ?
# SELECT * FROM COURSES
# LEFT OUTER LOOP JOIN
# tblStudents
# ON COURSES.COURSE_ID = tblStudents.StdCourse_ID
#
# -- REQ 8 : HOW TO TUNE QUERIES WITH JOINS [FOR HEAP TABLES] ?
# SELECT * FROM COURSES
# FULL OUTER LOOP JOIN
# tblStudents
# ON
# COURSES.COURSE_ID = tblStudents.StdCourse_ID
# -- QUERY 1: HOW TO REPORT LIST OF ALL POPULATION DETAILS?
# SELECT * FROM tblPopulation
#
# -- QUERY 2: HOW TO REPORT LIST OF ALL COUNTRY NAMES?
# SELECT Country FROM tblPopulation
#
# -- QUERY 3: HOW TO REPORT LIST OF ALL UNIQUE COUNTRIES DETAILS?
# SELECT Country FROM tblPopulation
# GROUP BY Country
#
# -- QUERY 4: HOW TO REPORT TOTAL POPULATION DETAILS?
# SELECT sum(Population) AS TOTAL_POP FROM tblPopulation
#
# -- QUERY 5: HOW TO REPORT COUNTRY WISE TOTAL POPULATION DETAILS?
# SELECT COUNTRY, sum(Population) AS TOTAL_POP FROM tblPopulation
# GROUP BY COUNTRY
#
# -- RULE : WHENEVER WE USE GROUP BY THEN COLUMNS USED IN SELECT SHOULD ALSO BE IN GROUP BY
# SELECT COUNTRY, STATE, sum(Population) AS TOTAL_POP FROM tblPopulation
# GROUP BY COUNTRY
#
# -- RULE : WHENEVER WE USE GROUP BY THEN COLUMNS USED IN SELECT SHOULD ALSO BE INCLUDED IN GROUP BY
#
# -- QUERY 6: HOW TO REPORT COUNTRY WISE, STATE WISE TOTAL POPULATION?
# SELECT COUNTRY, STATE, sum(Population) AS TOTAL_POP FROM tblPopulation
# GROUP BY COUNTRY, STATE
#
# -- QUERY 7: HOW TO REPORT COUNTRY WISE, STATE WISE, CITY WISE TOTALS?
# SELECT COUNTRY, STATE, CITY, sum(Population) AS TOTAL_POP FROM tblPopulation
# GROUP BY COUNTRY, STATE, CITY
#
# -- QUERY 8: HOW TO APPLY CONDITIONS ON GROUP BY DATA ?
# SELECT COUNTRY, STATE, CITY, sum(Population) AS TOTAL_POP FROM tblPopulation
# GROUP BY COUNTRY , STATE, CITY
# HAVING sum(Population) > 15
#
# -- QUERY 9: HOW TO APPLY CONDITIONS BEFORE AND AFTER GROUP BY ?
# SELECT COUNTRY, STATE, sum(Population) AS TOTAL_POP FROM tblPopulation
# WHERE COUNTRY = 'COUNTRY1' -- USED TO SPECIFY CONDITIONS ON NON-AGGREGATE VALUES
# GROUP BY COUNTRY , STATE
# HAVING sum(Population) > 5 -- USED TO SPECIFY CONDITIONS ON AGGREGATE VALUES
#
#
# -- QUERY 10: HOW TO REPORT TOTAL POPULATION USING ROLLUP ?
# SELECT COUNTRY, SUM(Population) AS TOTAL_POPULATION FROM tblPopulation
# GROUP BY COUNTRY
#
# SELECT COUNTRY, SUM(Population) AS TOTAL_POPULATION FROM tblPopulation
# GROUP BY ROLLUP(COUNTRY)
#
#NOT POSSIBLE IN R
# SELECT COUNTRY, SUM(Population) AS TOTAL_POPULATION, GROUPING(COUNTRY) FROM tblPopulation
# GROUP BY ROLLUP(COUNTRY)
#
#
# SELECT
# COUNTRY,
# SUM(Population) AS TOTAL_POPULATION, GROUPING(COUNTRY) FROM tblPopulation
# GROUP BY ROLLUP(COUNTRY)
# HAVING GROUPING(COUNTRY) = 0
# UNION ALL
# SELECT
# ISNULL(COUNTRY, 'GRAND TOTAL') AS COUNTRY,
# SUM(Population) AS TOTAL_POPULATION, GROUPING(COUNTRY) FROM tblPopulation
# GROUP BY ROLLUP(COUNTRY)
# HAVING GROUPING(COUNTRY) = 1
#
#
# SELECT
# COUNTRY,
# SUM(Population) AS TOTAL_POPULATION, GROUPING(COUNTRY) FROM tblPopulation
# GROUP BY ROLLUP(COUNTRY)
# HAVING GROUPING(COUNTRY) = 0
# UNION ALL
# SELECT
# COALESCE(COUNTRY, 'GRAND TOTAL') AS COUNTRY,
# SUM(Population) AS TOTAL_POPULATION, GROUPING(COUNTRY) FROM tblPopulation
# GROUP BY ROLLUP(COUNTRY)
# HAVING GROUPING(COUNTRY) = 1
#
# -- IS NULL ISNULL
#
#
#
# SELECT COUNTRY, STATE, SUM(Population) AS TOTAL_POPULATION FROM tblPopulation
# GROUP BY ROLLUP(COUNTRY,STATE) -- 11 ROWS
# -- COUNTRY WISE TOTAL + COUNTRY WISE STATE WISE TOTAL
#
#
# SELECT COUNTRY, STATE, SUM(Population) AS TOTAL_POPULATION FROM tblPopulation
# GROUP BY CUBE(COUNTRY,STATE) -- 13 ROWS
# -- COUNTRY WISE TOTAL + COUNTRY WISE STATE WISE TOTAL
# -- STATE WISE TOTAL
#
#
# SELECT * FROM CUSTOMERS_DATA
# SELECT * FROM PRODUCTS_DATA
# SELECT * FROM TIME_DATA
# SELECT * FROM SALES_DATA
#
#
# -- QUERY #1: HOW TO REPORT PRODUCT WISE TOTAL SALES?
# SELECT *
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
#
#
# -- QUERY #2
# SELECT EnglishProductName, SalesAmount
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
#
#
# -- QUERY #3
# SELECT EnglishProductName, SUM(SalesAmount)
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# GROUP BY EnglishProductName
#
#
# -- QUERY #4
# SELECT EnglishProductName, SUM(SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# GROUP BY EnglishProductName
#
#
#
# -- QUERY #5 : HOW TO REPORT PRODUCT WISE TOTAL SALES ABOVE 1000 USD?
# SELECT EnglishProductName, SUM(SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# GROUP BY EnglishProductName
# HAVING SUM(SalesAmount) > 1000
#
#
# -- QUERY #6 : HOW TO REPORT PRODUCT WISE TOTAL SALES AND TOTAL TAX ABOVE 1000 USD?
# SELECT EnglishProductName,
# SUM(SalesAmount) AS TOTAL_SALES, SUM(TAXAMT) AS TOTAL_TAX
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# GROUP BY EnglishProductName
# HAVING
# SUM(SalesAmount) > 1000 AND SUM(TAXAMT) > 1000
#
#
#
#
#
# -- QUERY #7 : HOW TO REPORT PRODUCT WISE TOTAL SALES AND TOTAL TAX ABOVE 1000 USD?
# SELECT EnglishProductName, SUM(SalesAmount) AS TOTAL_SALES, SUM(TAXAMT) AS TOTAL_TAX
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# GROUP BY EnglishProductName
# HAVING
# SUM(SalesAmount) > 1000 AND SUM(TAXAMT) > 1000
# ORDER BY TOTAL_SALES DESC
#
#
# -- QUERY #8 : HOW TO REPORT PRODUCT WISE TOTAL SALES AND TOTAL TAX ABOVE 1000 USD?
# SELECT EnglishProductName, SUM(SalesAmount) AS TOTAL_SALES, SUM(TAXAMT) AS TOTAL_TAX
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# GROUP BY EnglishProductName
# HAVING
# SUM(SalesAmount) > 1000 AND SUM(TAXAMT) > 1000
# ORDER BY 2 DESC -- ORDERING THE DATA BY USING COLUMN CARDINAL POSITION.
#
#
#
#
#
#
#
# -- QUERY 9: WRITE A QUERY TO REPORT SUM OF SALES AND TAX FOR PRODUCTS WITH MAXIMUM DEALER PRICE ?
# SELECT EnglishProductName, SUM(SalesAmount) AS TOTAL_SALES, SUM(TAXAMT) AS TOTAL_TAX
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# WHERE -- FOR CONDITIONS ON NON-AGGREGATE COLUMNS
# PRODUCTS_DATA.DealerPrice
# IN ( SELECT MAX(DealerPrice) FROM PRODUCTS_DATA)
# GROUP BY EnglishProductName
#
#
#
#
# -- QUERY 10: HOW TO REPORT SUM OF SALES FOR PRODUCTS WITH MAXIMUM DEALER PRICE BUT NOT FOR MINIMAL LIST PRICE ?
# -- NESTED SUB QUERY
# SELECT EnglishProductName, SUM(SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# WHERE
# PRODUCTS_DATA.DealerPrice
# IN ( SELECT MAX(DealerPrice) FROM PRODUCTS_DATA
# WHERE LISTPRICE
# NOT IN ( SELECT MIN(LISTPRICE) FROM PRODUCTS_DATA ) )
# GROUP BY EnglishProductName
#
#
#
# -- EXAMPLES TO JOIN MORE THAN TWO TABLES:
# SELECT * FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
#
#
#
# SELECT * FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# INNER JOIN
# TIME_DATA
# ON
# SALES_DATA.ORDERDATEKEY = TIME_DATA.TIMEKEY
#
#
# -- Q1: HOW TO REPORT YEAR WISE TOTAL SALES?
# -- Q2: HOW TO REPORT YEAR WISE, QUARTER WISE TOTAL SALES AND TOTAL TAX?
# -- Q3: HOW TO REPORT YEAR WISE, QUARTER WISE, MONTH WISE TOTAL SALES AND TOTAL TAX?
# -- Q4: HOW TO REPORT YEAR WISE, QUARTER WISE TOTAL SALES AND TOTAL TAX FOR JUNE MONTH ?
# -- Q5: HOW TO REPORT CLASS WISE, COLOR WISE PRODUCTS FOR EACH YEAR BASED ON ASC ORDER OF SALES?
# -- Q6: HOW TO REPORT TOTAL SALES FOR SUCH PRODUCTS WITH MAXIMUM NUMBER OF SALES?
# -- Q7: HOW TO REPORT TOTAL SALES FOR SUCH PRODUCTS EXCEPT WITH MINIMUM NUMBER OF SALES?
# -- Q8: HOW TO COMBINE THE RESULTS FROM ABOVE TWO QUERIES.
# -- Q9: HOW TO ADDRESS POSSIBLE BLOCKING ISSUES FROM ABOVE TWO QUERIES?
# -- Q10: HOW TO REPORT YEAR WISE, CUSTOMER WISE, PRODUCT WISE TOTAL SALES AND TOTAL TAX ABOVE 1000 USD?
#
#
#
# -- Q1: HOW TO REPORT YEAR WISE TOTAL SALES?
# SELECT T.CalendarYear, SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN TIME_DATA AS T
# ON
# S.OrderDateKey = T.TimeKey
# GROUP BY T.CalendarYear
#
#
# -- Q2: HOW TO REPORT YEAR WISE, QUARTER WISE TOTAL SALES AND TOTAL TAX?
# SELECT T.CalendarYear, T.CalendarQuarter,
# SUM(S.SalesAmount) AS TOTAL_SALES, SUM(S.TAXAMT) AS TOTAL_TAX
# FROM SALES_DATA AS S
# INNER JOIN TIME_DATA AS T
# ON
# S.OrderDateKey = T.TimeKey
# GROUP BY T.CalendarYear, T.CalendarQuarter
#
#
#
# -- Q3: HOW TO REPORT YEAR WISE, QUARTER WISE, MONTH WISE TOTAL SALES AND TOTAL TAX?
# SELECT T.CalendarYear, T.CalendarQuarter, T.EnglishMonthName,
# SUM(S.SalesAmount) AS TOTAL_SALES, SUM(S.TAXAMT) AS TOTAL_TAX
# FROM SALES_DATA AS S
# INNER JOIN TIME_DATA AS T
# ON
# S.OrderDateKey = T.TimeKey
# GROUP BY T.CalendarYear, T.CalendarQuarter, T.EnglishMonthName
#
# -- Q4: HOW TO REPORT YEAR WISE, QUARTER WISE TOTAL SALES AND TOTAL TAX FOR JUNE MONTH ?
# SELECT T.CalendarYear, T.CalendarQuarter,
# SUM(S.SalesAmount) AS TOTAL_SALES, SUM(S.TAXAMT) AS TOTAL_TAX
# FROM SALES_DATA AS S
# INNER JOIN TIME_DATA AS T
# ON
# S.OrderDateKey = T.TimeKey
# WHERE T.EnglishMonthName = 'JUNE'
# GROUP BY T.CalendarYear, T.CalendarQuarter
#
#
# -- Q5: HOW TO REPORT CLASS WISE, COLOR WISE PRODUCTS FOR EACH YEAR BASED ON ASC ORDER OF SALES?
# SELECT
# P.Class, P.Color, T.CalendarYear,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN TIME_DATA AS T
# ON
# S.OrderDateKey = T.TimeKey
# INNER JOIN PRODUCTS_DATA AS P
# ON
# P.ProductKey = S.ProductKey
# GROUP BY P.Class, P.Color, T.CalendarYear
#
#
# -- Q6: HOW TO REPORT TOTAL SALES FOR SUCH PRODUCTS WITH MAXIMUM NUMBER OF SALES?
#
# -- STEP 1: IDENTIFY THE PRODUCTS THAT HAVE MAX SALE VALUE:
# SELECT
# P.EnglishProductName,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN PRODUCTS_DATA AS P
# ON
# P.ProductKey = S.ProductKey
# GROUP BY P.EnglishProductName
#
# CREATE VIEW VW_SALE_PROIDUCTS
# AS
# SELECT
# P.EnglishProductName,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN PRODUCTS_DATA AS P
# ON
# P.ProductKey = S.ProductKey
# GROUP BY P.EnglishProductName
#
#
# SELECT EnglishProductName FROM VW_SALE_PROIDUCTS
# WHERE TOTAL_SALES = (SELECT MAX(TOTAL_SALES) FROM VW_SALE_PROIDUCTS)
#
# -- STEP 2:
# SELECT
# P.EnglishProductName, P.Color, P.Class,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN PRODUCTS_DATA AS P
# ON
# P.ProductKey = S.ProductKey
# WHERE
# P.EnglishProductName IN (
# SELECT EnglishProductName FROM VW_SALE_PROIDUCTS
# WHERE TOTAL_SALES = (SELECT MAX(TOTAL_SALES) FROM VW_SALE_PROIDUCTS)
# )
# GROUP BY P.EnglishProductName, P.Color, P.Class
#
#
#
#
# -- Q7: HOW TO REPORT TOTAL SALES FOR SUCH PRODUCTS EXCEPT WITH MINIMUM NUMBER OF SALES?
# SELECT
# P.EnglishProductName, P.Color, P.Class,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN PRODUCTS_DATA AS P
# ON
# P.ProductKey = S.ProductKey
# WHERE
# P.EnglishProductName NOT IN (
# SELECT EnglishProductName FROM VW_SALE_PROIDUCTS
# WHERE TOTAL_SALES = (SELECT MIN(TOTAL_SALES) FROM VW_SALE_PROIDUCTS)
# )
# GROUP BY P.EnglishProductName, P.Color, P.Class
#
#
#
# -- Q8: HOW TO COMBINE THE RESULTS FROM ABOVE TWO QUERIES ?
# SELECT
# P.EnglishProductName, P.Color, P.Class,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN PRODUCTS_DATA AS P
# ON
# P.ProductKey = S.ProductKey
# WHERE
# P.EnglishProductName NOT IN (
# SELECT EnglishProductName FROM VW_SALE_PROIDUCTS
# WHERE TOTAL_SALES = (SELECT MIN(TOTAL_SALES) FROM VW_SALE_PROIDUCTS)
# )
# OR
# P.EnglishProductName IN (
# SELECT EnglishProductName FROM VW_SALE_PROIDUCTS
# WHERE TOTAL_SALES = (SELECT MAX(TOTAL_SALES) FROM VW_SALE_PROIDUCTS)
# )
# GROUP BY P.EnglishProductName, P.Color, P.Class
#
#
#
# -- Q9: HOW TO ADDRESS POSSIBLE BLOCKING ISSUES FROM ABOVE TWO QUERIES?
# SELECT
# P.EnglishProductName, P.Color, P.Class,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S (READPAST)
# INNER JOIN PRODUCTS_DATA AS P (READPAST)
# ON
# P.ProductKey = S.ProductKey
# WHERE
# P.EnglishProductName NOT IN (
# SELECT EnglishProductName FROM VW_SALE_PROIDUCTS
# WHERE TOTAL_SALES = (SELECT MIN(TOTAL_SALES) FROM VW_SALE_PROIDUCTS)
# )
# OR
# P.EnglishProductName IN (
# SELECT EnglishProductName FROM VW_SALE_PROIDUCTS
# WHERE TOTAL_SALES = (SELECT MAX(TOTAL_SALES) FROM VW_SALE_PROIDUCTS)
# )
# GROUP BY P.EnglishProductName, P.Color, P.Class
#
#
#
# -- Q10: HOW TO REPORT YEAR WISE, CUSTOMER WISE, PRODUCT WISE TOTAL SALES AND TOTAL TAX ABOVE 1000 USD?
# SELECT
# T.CalendarYear, C.FirstName + ' ' + C.LastName AS FULLNAME, P.EnglishProductName,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN TIME_DATA AS T
# ON
# S.OrderDateKey = T.TimeKey
# INNER JOIN CUSTOMERS_DATA AS C
# ON
# C.CustomerKey = S.CustomerKey
# INNER JOIN PRODUCTS_DATA AS P
# ON
# P.ProductKey = S.ProductKey
# GROUP BY T.CalendarYear, C.FirstName + ' ' + C.LastName, P.EnglishProductName
# HAVING SUM(S.SalesAmount) > 1000
#
#
# /*
# NORMAL FORMS : A MECHANISM TO IDENTIFY THE TABLES, RELATIONS AND DATA TYPES.
# ENSURE PROPER DIVSION OF BUSINESS DATA INTO MULTIPLE TABLES.
#
# 1 NF : FIRST NORMAL FORM. EVERY COLUMN SHOULD BE ATOMIC. MEANS, STORES SINGLE VALUE.
#
# 2 NF : SECOND NORMAL FORM. EVERY TABLE SHOULD BE IN FIRST NORMAL FORM
# EVERY TABLE SHOULD BE HAVING A CANDIDATE KEY. USED FOR FUNCTIONAL DEPENDANCY.
#
# 3 NF : THIRD NORMAL FORM. EVERY TABLE SHOULD BE IN SECOND NORMAL FORM
# EVERY TABLE SHOULD BE HAVING A FOREIGN KEY. USED FOR MULTI-VALUED DEPENDANCY.
#
# BCNF NF : BOYCE-CODD NORMAL FORM. EVERY TABLE SHOULD BE IN THIRD NORMAL FORM
# EVERY TABLE SHOULD BE HAVING MORE THAN ONE FOREIGN KEY. USED FOR MULTI-VALUED DEPENDANCY.
# AND MANY TO ONE RELATION.
#
# 4 NF : FOURTH NORMAL FORM. EVERY TABLE SHOULD BE IN THIRD NORMAL FORM
# AND ATLEAST ONE SELF REFERENCE. MEANS A TABLE REFERENCING ITSELF. */
# SELECT * FROM tblPopulation WHERE COUNTRY = 'COUNTRY1'
#
# CREATE VIEW VW_COUNTRY1
# AS
# SELECT * FROM tblPopulation WHERE COUNTRY = 'COUNTRY1'
#
# SELECT * FROM VW_COUNTRY1
#
# -- MAIN PURPOSE OF VIEWS : TO STORE QUERIES FOR EASY END USER ACCESS.
#
# -- WHENEVER WE CREATE A DATABASE, SET OF PREDEFINED VIEWS ARE AUTO CREATED [SYSTEM VIEWS]
# -- HOW TO REPORT LIST OF DATABASES IN A SERVER?
# SELECT * FROM SYS.DATABASES
#
# -- HOW TO REPORT LIST OF TABLES IN A DATABASE?
# SELECT * FROM SYS.TABLES -- REPORTS TABLES IN THE CURRENT DATABASE [2P]
# SELECT * FROM UNIVERSITY_DATABASE.SYS.TABLES -- REPORTS TABLES IN THE SPECIFIED DATABASE [3P]
#
# -- HOW TO REPORT LIST OF PRIMARY KEYS, FOREIGN KEYS, CHECK CONSTRAINTS, ETC IN A DATABASE?
# SELECT * FROM SYS.OBJECTS
#
# -- HOW TO REPORT LIST OF COLUMNS FOR ALL TABLES & VIEWS & FUNCTIONS IN THE CURRENT DATABASE?
# SELECT * FROM INFORMATION_SCHEMA.COLUMNS
#
#
# CREATE FUNCTION fn_ReportDetails ( @country varchar(30) ) -- @country is a PARAMETER. INPUT VALUE
# RETURNS table
# AS
# RETURN
# (
# SELECT * FROM tblPopulation WHERE COUNTRY = @country -- PARAMETERIZED QUERY
# )
#
# SELECT * FROM fn_ReportDetails('COUNTRY1')
# SELECT * FROM fn_ReportDetails('COUNTRY2')
#
# -- MAIN PURPOSE OF FUNCTIONS : COMPUTATIONS (CALCULATIONS), DYNAMIC REPORTING
#
# CREATE PROCEDURE usp_ReportDetails ( @country varchar(30) ) -- @country is a PARAMETER. INPUT VALUE
# AS
# SELECT * FROM tblPopulation WHERE COUNTRY = @country -- PARAMETERIZED QUERY
#
# EXECUTE usp_ReportDetails 'COUNTRY1'
# EXEC usp_ReportDetails 'COUNTRY2'
#
# -- MAIN PURPOSE OF STORED PROCEDURES (SPROCs) : PROGRAMMING, QUERY TUNING [QUERIES CAN EXECUTE BETTER]
#
# -- ADVANTAGE OF STORED PROCEDURES OVER FUNCTIONS: SPs ARE PRE-COMPILED AND STORED FOR READY EXECUTIONS.
# FUNCTIONS NEED TO GET COMPILED EVERY TIME WE EXECUTE.
# COMPILATION : CONVERT FROM HIGH LEVEL SQL TO MACHINE CODE.
#
# -- ADVANTAGE OF FUNCTIONS OVER STORED PROCEDURES: FUNCTIONS ARE EXECUTED WITH REGULAR SELECT STATEMENT
# HENCE FLEXIBLE FOR DATA ACCESS, REPORTING, CALCULATIONS.
#
#
# -- SYSTEM STORED PROCEDURES:
# EXEC SP_HELPDB 'OBJECT_OVERVIEW' -- REPORTS DETAILS OF THE GIVEN DATABASE INCLUDING SIZE & FILES
# EXEC SP_HELP 'tblPopulation' -- REPORTS TABLE DEFINITION
# EXEC SP_HELPTEXT 'usp_ReportDetails' -- REPORTS VIEW, FUNCTION, PROCEDURE DEFINITION
# EXEC SP_DEPENDS 'tblPopulation' -- REPORTS THE OBJECT DEPENDANCIES ON THE TABLE
# EXEC SP_RENAME 'tblPopulation', 'tblPopulation_NEW' -- TO RENAME A PROCEDURE
# EXEC SP_RECOMPILE 'usp_ReportDetails' -- RECOMPILES THE STORED PROCEDURE NEXT TIME WE EXECUTE IT
# -- RECOMPILATION REQUIRED IF UNDERLYING TABLE STRUCUTRE CHANGES.
#
#
#
# SELECT @@VERSION
# SELECT @@SERVERNAME | /All about data processing in R.R | no_license | Ransinghsatyajitray/All-about-data-preparation-in-R | R | false | false | 37,961 | r | #Data Preparation:
#_________________________________________________________________________
#1. remove na. (done)
#2. impute na : (done)
#3. finding proportion of NAs (done sum(is.na(x))/nrow(object) with vapply or map FROM purrr)
#4. Diagraming using Amelia package (done in missmap from Amelia)
#5. feature plot (done with featurePlot function it consist of argument x= predictor variables, y=response variables, plot= "the type of plot which we want")
#6. VIF
#7. Transforming variables when needed. (done in preProcessing function in caret PACKAGE)
#8. Deciding on when to drop variable with many NA (not required)
#9. NA when variable is categorical (done with missMDA PACKAGE estim_ncpMCA and MIMCA)
#10. String extraction using rebus and stringr (done practice to be done)
#11. Efficient use of various stringr functions
#12. Use of gsub for substituting
#13. Use of gather, spread, unite and separate for restructuring the data.
#14. Model that dont need much data preparation. (Check out the machine learning model)
#15. dplyr functionality specific functions
#16. use of map in dplyr
#17. date manipulation with lubridate
#18. How to work on dummy variables is it needed to know
#19. for loop in mutate
#20. use of sqldf in r and comparison of it with. (done)
#21. Imputation of the NA values for 1. Categorical 2. for numeric (done)
#22. Outlier treatment
#23. Scaling and Weighting in R (done with preProcess function in caret where we choose the requisite method)
#24. Oversampling, undersampling or both (using ovun function from ROSE PACKAGE)
#25. factor plot
#Commonly used Packages:
library(dplyr) #select filter mutate group_by summarise
library(sqldf) #Running SQL Queries on R (SQL Codes equivalent to writing dplyr stuff can also be made)
library(stringr)#Contains a lot of string manipulating functions
library(rebus) #This makes writing regular expressions very easy this is used with stringr package
library(Amelia) #This package contain missmap through which we can view where the missing values are
#its also used to imputation of the missing values
library(readr) #Importing data from csv and txt into r as tibble dataframe
library(readxl) #importing excel file
library(data.table) #fread
library(tidyr) #Unite Separate Gather spread nest unnest to change the table orientation and uniting and separating columns.
library(broom) #tidy-> used to get overall model parameters in a table format,augment-> used to get data point specific
#parameters like fit, leverage , cook distance
library(purrr) #Functional programming can be written, map map2 , map_if, safely, invoke_map
library(lubridate) #different date format to ISO 8601 format
library(ggplot2) #Plotting graphs
library(ggforce) #Use to develop paginated graphs when using facets(When using facets in cases where the number of
#levels are more we can fix the number of rows, columns and number of pages by it)
library(corrplot)#We can draw correlation plot which is basically a heatmap where darker shade represent higher
#correlation and lighter represent lower order correlation
library(caret) #When the output variables are numeric we can use it to draw feature plot which is nothing but factor plot
#preProcess
library(scales) #used with ggplot to provide breaks in the axis
#1. Finding the number of NAs individually
#_________________________________________________________________
a<-c(rnorm(100,mean=200,sd=32),NA,NA,NA,NA,NA,NA) #creating random numbers in r
b<-c(runif(100,min=20,max=300),NA,NA,NA,NA,NA,NA)
a<-as.data.frame(a)
b<-as.data.frame(b)
c<-cbind(a,b)
sum(is.na(c$a)) #Finding the number of NAs
sum(is.na(c$b))/nrow(c) #Finding the proportion of NAs
#If I have 2 data object and I want to find the na in both of them then how to do it ?????
#2. Finding the number of NAs over the columns
#___________________________________________________________________
map(c,function(x)sum(is.na(x))) #Its anonymous function
map(c,~sum(is.na(.)))
#apply : apply(X, MARGIN( 1 for row, 2 for column), FUN, ...)
#sapply : sapply(X, FUN, ..., simplify = TRUE, USE.NAMES = TRUE)
sapply(c,function(x)sum(is.na(x))) #sapply is a user-friendly version and wrapper of lapply by default returning a vector, matrix
apply(c,2,function(x)sum(is.na(x)))
##3 Finding proportion of NAs
map(c,function(x)sum(is.na(x))/nrow(c)*100)
#4. Diagraming using Amelia package
#____________________________________________________________________
#Seeing the graphic of the missing values in R using missmap function in Amelia package in R
missmap(c)
#5. Missing value imputation in R :mice for numeric and missMDA for categorical
#_____________________________________________________________________
#Missing data in the training data set can reduce the power/fit of the model or can lead to a biased
#model because we have not analysed the behavior and the relationship with other variables correctly
#It can lead to wrong prediction or classification.
#Why my data has missing values?
#1. Data extraction: Error at this stage are typically easy to find and corrected.
#2. Data Collection: a. Missing completely at random (probability of missing variable is same for all observation)
# b. Missing at random (missing ratio varies for different level and inout variables)
# c. Missing that depends on unobserved predictors (eg. if in medical particular diagnostic causes discomfort then there is higher chance of drop out from the study)
# d. Missing that depend on missing values itself (eg. People with higher or lower income are likely to provide non-response to their earning)
library(mlbench)
data("PimaIndiansDiabetes")
missmap(PimaIndiansDiabetes) #0 missing values
setwd("C:\\Users\\fz1775\\Desktop\\MACHINE LEARNING\\Testing ML\\Practice Dataset")
bank_data<-read_csv("Bank_data_with_missing.csv")
missmap(bank_data) #Very traces are missing
map(bank_data,function(x)sum(is.na(x))) #number of missing data points
map(bank_data,function(x)sum(is.na(x))/nrow(bank_data)*100) #proportion of missing data points
summary(bank_data)
#For Categorical data
#Based on the problem at hand , we can try to do one of the following
#1. Mode is one of the option which can be used
#2. Missing values can be treated as a separate category by itself, We can create another category
# for missing values and use them as a different level
#3. If the number of missing values are lesser compared to the number of samples and also number of
# samples are high, we can also choose to remove those rows in our analysis
#4. We can run model to predict the missing values using the other variables as input
#R provides MICE(Multiple imputation by chained equation) package and Amelia package for handling
#missing values
#for MICE follow the steps below
#1. Change variable(with missing values) into factors by as.factor()
#2. Create a data set of all the known variable and the missing values
#3. Read about the complete() command from the MICE package and apply to the new data set.
#How to treat the missing values in R?
#1. Deletion
#2. Mean Median Mode
#3. Prediction Model
#4. KNN Imputation
#Imputation of Numerical data:
library(mice) # Multivariate Imputation via chained equations
md.pattern(bank_data) #This can also be used togather with missmap
bank<-mice(bank_data,m=5,maxit=50,meth='pmm',seed=500)
#m -> refers to number of imputed datasets (Number of multiple imputations. The default is m=5.)
#m -> A scalar giving the number of iterations. The default is 5.
#meth="pmm" refers to the imputation method (Predictive mean matching)
summary(bank)
completedata<-complete(bank,1)
map(completedata,function(x)sum(is.na(x))) #We can see the numeric variables are no more empty
#What is MICE?
#Missing data are a common problem in psychiatric research. Multivariate imputation
#by chained equations (MICE), sometimes called “fully conditional specification” or
#“sequential regression multiple imputation”
#While complete case analysis may be easy to implement it relies upon stronger missing data assumptions than multiple imputation and it can result in biased estimates and a reduction in power
# Single imputation procedures, such as mean imputation, are an improvement but do not account for the uncertainty in the imputations; once the imputation is completed, analyses proceed as if the imputed values were the known, true values rather than imputed. This will lead to overly precise results and the potential for incorrect conclusions.
#Maximum likelihood methods are sometimes a viable approach for dealing with missing data (Graham, 2009); however, these methods are primarily available only for certain types of models
#mice package in R:
# The mice package implements a method to deal with missing data. The package creates multiple imputations
# (replacement values) for multivariate missing data. The method is based on Fully Conditional Specification,
# where each incomplete variable is imputed by a separate model. The MICE algorithm can impute mixes of
# continuous, binary, unordered categorical and ordered categorical data. In addition, MICE can impute
# continuous two-level data, and maintain consistency between imputations by means of passive imputation.
# Many diagnostic plots are implemented to inspect the quality of the imputations.
#Multiple imputation has a number of advantages over these other missing data approaches. Multiple imputation involves filling in the missing values multiple times, creating multiple “complete” datasets. Described in detail by Schafer and Graham (2002), the missing values are imputed based on the observed values for a given individual and the relations observed in the data for other participants, assuming the observed variables are included in the imputation model.
#Because multiple imputation involves creating multiple predictions for each missing value, the analyses of multiply imputed data take into account the uncertainty in the imputations and yield accurate standard errors. On a simple level, if there is not much information in the observed data (used in the imputation model) regarding the missing values, the imputations will be very variable, leading to high standard errors in the analyses.
library(missMDA)
#What is missMDA?
#The missMDA package quickly generates several imputed datasets with quantitative variables
#and/or categorical variables. It is based on the
#1. dimentionality reduction method such as PCA for continuous variables or
#2. multiple correspondance analysis for categorical variables.
#Compared to the Amelia and mice, it better handles cases where the number of variables
#is larger than the number of units and cases where regularization is needed. For categorical
#variables, it is particularly interesting with many variables and many levels but also
#with rare level
#Partition the data to categorical only
bank_data_cat<-bank_data%>%select(2:5,7:9,11,16:17)%>%map(.,as.factor)
nb<-estim_ncpMCA(bank_data_cat,ncp.max=5) #Time consuming, nb=4 (Better to convert the data to factor)
# Takes almost 2 hours
res<-MIMCA(bank_data_cat,ncp=4,nboot=1) #MIMCA performs multiple imputations for categorical data using Multiple Correspondence Analysis.
#nboot: the number of imputed datasets it should be 1
a1<-as.data.frame(res$res.MI) #we can get the imputed data by this step
#We can finally merge the numeric and the categorical togather.
?estim_ncpMCA
?MIMCA
#Numeric Data: mice function then completedata function on the output of mice function.
# The principle followed with Multiple Imputation with chained equations.
#Categorical Data: select only the categorical data then estim_ncpMCA(categorical_object,ncp.max=5)
# MCA Stands for multiple correspondance analysis then use MIMCA(bank_data_cat,ncp=4,nboot=10)
#6. Preprocessing of data in R using caret
#_____________________________________________________________________
#library(caret)
preProcessValues_scale<-preProcess(bank_data_num,method="scale") #scale means (x- mean(X)/sd of X)
#Check out which variable to scale and center
preProcessValues_center<-preProcess(bank_data_num,method="center") # center means X-mean(X)
#There are a number of preprocessing methods available in R
#1. BoxCox
#2. YeoJohnson
#3. expoTrans
#4. center
#5. scale
#6. range
#7. pca
#8. ica
#9. corr
#7. Feature Plot in R : This the factor plot
#___________________________________________________________________________-
# Here we first split the data to the response variable (Y) and the predictor variable (X)
# split input and output
y <- bank_data[,14]
x <- bank_data[,c(1:13,15:17)]
# scatterplot matrix / feature plot is same as factor analysis plot
library(ellipse)
featurePlot(x=x, y=y, plot="ellipse") #ellipse package required
# box and whisker plots for each attribute
featurePlot(x=x, y=y, plot="box")
# density plots for each attribute
featurePlot(x=x, y=y, plot="density")
# pairs plot for each attribute
featurePlot(x=x, y=y, plot="pairs") #just like ellipse plot with ellipse not present, more like scatter plot
#8. Correlation Matrix and correlation plot in R.
#__________________________________________________________________
correlationMatrix <- cor(bank_data[,c(1,6,10,12:15)]) #Taking only the numeric variables
# summarize the correlation matrix
print(correlationMatrix)
#Plotting the corrplot to remove trhe redundant factors
library(corrplot)
corrplot(correlationMatrix,method="color")
#9. String Manipulation in R using stringr package
#_________________________________________________________
# rebus provides START and END shortcuts to specify regular expressions
# that match the start and end of the string. These are also known as anchors
library(rebus)
#gsub("Patten which we now want to keep","old pattern",Variable_of_interest)
## Join Multiple Strings Into A Single String.+++++++++++++++++++++++
#str_c() str_c(..., sep = "", collapse = NULL)
a2<-"x"
b2<-c("y",NA)
str_c(a2,b2,sep="~")
#Missing values are contagious so convert NA to "NA" by
str_replace_na(b2) #and then use str_c which essentially means string concatenate
#str_detect(variable,pattern=) Result: TRUE FALSE IMPORTANT
#str_subset (variable, pattern=) Result: Only those variable having match with the pattern IMPORTANT
#str_count (variable,pattern=) Result: 0 1 IMPORTANT
#str_split(variable,pattern=) Result: split the variable into two parts
#str_replace (variable,pattern="khk", replacement="vjjb") IMPORTANT
#Like the dplyr the rebus package also help to write text pattern using pipe
# %R%
# START %R% ANY_CHAR %R% one_or_more(DGT)
# %R% END
# optional()
# zero_or_more()
# one_or_more()
# repeated()
# DGT
# WRD
# SPC
# DOLLAR %R% DGT %R% optional(DGT) %R% DOT %R% dgt(2)
#10. sqldf PACKAGE functionalities in R for data manipulation
#______________________________________________________________________
bank_sql1<-sqldf("SELECT * from bank_data")
bank_sql1
CP_State_Lookup<-read_csv("CP_State_Lookup.csv")
CP_Scenario_Lookup<-read_csv("CP_Scenario_Lookup.csv")
CP_Product_Lookup<-read_csv("CP_Product_Lookup.csv")
CP_Industry_Lookup<-read_csv("CP_Industry_Lookup.csv")
CP_Executive_Lookup<-read_csv("CP_Executive_Lookup.csv")
CP_Date_Lookup<-read_csv("CP_Date_Lookup.csv")
CP_Customer_Lookup<-read_csv("CP_Customer_Lookup.csv")
CP_BU_Lookup<-read_csv("CP_BU_Lookup.csv")
CP_RevenueTaxData_Fact<-read_csv("CP_Revenue Tax Data_Fact.csv")
#INNER JOIN: USED TO COMPARE MULTIPLE TABLES, REPORT MATCHING DATA.(Matching data with respect to the variable in ON)
CP_BU_Lookup_1_9<-CP_BU_Lookup%>%filter(`BU Key`>=1&`BU Key`<=9)
CP_Innerjoin<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact INNER JOIN CP_BU_Lookup_1_9 ON CP_RevenueTaxData_Fact.`BU Key`=CP_BU_Lookup_1_9.`BU Key`")
#OUTER JOINS : USED TO COMPARE MULTIPLE TABLES, REPORT MATCHING & MISSING DATA
# LEFT OUTER JOIN : All Left Table Data + Matching Right Table Data.
# Non match Right table data is reported as null.
CP_LeftOuter<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact LEFT JOIN CP_BU_Lookup_1_9 ON CP_RevenueTaxData_Fact.`BU Key`=CP_BU_Lookup_1_9.`BU Key`")
# FULL OUTER JOIN : Combined output of LEFT OUTER JOIN + RIGHT OUTER JOIN
#CROSS JOIN => all type of combination
CP_CrossJoin<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact CROSS JOIN CP_BU_Lookup_1_9")
#GROUP BY: by using group by alone we get the last data only
CP_Groupby_alone<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact GROUP BY `BU Key`")
#GROUP BY SUM
CP_Groupby_sum_Revenue<-sqldf("SELECT `BU Key`, SUM(Revenue) FROM CP_RevenueTaxData_Fact GROUP BY `BU Key`")
#USING GROUP BY TO GET ONLY UNIQUE VALUES
CP_Groupby_Unique_Values<-sqldf("SELECT `BU Key` FROM CP_RevenueTaxData_Fact GROUP BY `BU Key`")
#RULE : WHENEVER WE USE GROUP BY THEN COLUMNS USED IN SELECT SHOULD ALSO BE IN GROUP BY
CP_Groupby_Multiple_variable<-sqldf("SELECT `BU Key`, `Customer Key`, SUM(Revenue)
from CP_RevenueTaxData_Fact
GROUP BY `BU Key`, `Customer Key`")
#WHEN WE WANT TO USE FILTER ON AGGREGATE OF COLUMN IN GROUP BY WE USE HAVING
CP_Groupby_Multiple_variable_having<-sqldf("SELECT `BU Key`, `Customer Key`, SUM(Revenue)
from CP_RevenueTaxData_Fact
GROUP BY `BU Key`, `Customer Key`
HAVING SUM(Revenue)<3336145.89")
#UNION MEANS The UNION command combines the result set of two or more SELECT statements (only distinct values)
#The UNION ALL command combines the result set of two or more SELECT statements (allows duplicate values).
CP_RevenueTaxData_Fact_BU1_9<-CP_RevenueTaxData_Fact%>%filter(`BU Key`>=1 & `BU Key` <10)
CP_RevenueTaxData_Fact_BU5_end<-CP_RevenueTaxData_Fact%>%filter(`BU Key`>=5)
#UNION
CP_Revenue_Union<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact_BU1_9 UNION SELECT * FROM CP_RevenueTaxData_Fact_BU5_end")
#UNION ALL
CP_Revenue_Unionall<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact_BU1_9 UNION ALL SELECT * FROM CP_RevenueTaxData_Fact_BU5_end")
#WHERE FOR NON AGGREGATE COLUMN 1st + GROUP BY for AGGREGATED COLUMN 2nd +HAVING
CP_Revenue_where_groupby<-sqldf("SELECT `BU Key`, `Customer Key`, SUM(Revenue)
from CP_RevenueTaxData_Fact
WHERE `BU KEY` BETWEEN 1 AND 10
GROUP BY `BU Key`, `Customer Key`
HAVING SUM(Revenue)<3336145.89")
#SUBQUERIES IN SQLDF (SEMI JOIN)
CP_Revenue_Subq1<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact
WHERE `BU KEY` IN (SELECT `BU KEY` FROM CP_BU_Lookup_1_9)")
#SUBQUERIES IN SQLDF (ANTI JOIN)
CP_Revenue_Subq12<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact
WHERE `BU KEY` NOT IN (SELECT `BU KEY` FROM CP_BU_Lookup_1_9)")
#MORE COMPLICATED SUBQUERIES IN SQLDF
CP_Revenue_Subq123<-sqldf("SELECT * FROM CP_RevenueTaxData_Fact
WHERE `BU KEY` IN (SELECT `BU KEY` FROM CP_BU_Lookup_1_9 WHERE Executive_id < 5)")
#cASE ( mindful of `Variable` and 'Text')
CP_Revenue_Case<-sqldf("SELECT `Customer Key`,
CASE
WHEN `Customer Key` < 5000 THEN 'CK < 5000'
WHEN `Customer Key` BETWEEN 5000 AND 10000 THEN 'CK BETWEEN 5000 AND 10000'
WHEN `Customer Key` > 10000 THEN 'CK >10000'
ELSE 'CK not in list'
END Ransingh
FROM CP_RevenueTaxData_Fact")
# REQ 1 : HOW TO REPORT ALL COURSES & RESPECTIVE STUDENTS IN EACH COURSE ?
# SELECT * FROM COURSES
# INNER JOIN
# tblStudents
# ON tblStudents.StdCourse_ID = COURSES.COURSE_ID
#
#
# REQ 2 : HOW TO REPORT ALL COURSES WITH AND WITHOUT STUDENTS ?
# SELECT * FROM COURSES
# LEFT OUTER JOIN
# tblStudents
# ON COURSES.COURSE_ID = tblStudents.StdCourse_ID
#
# REQ 3 : HOW TO REPORT ALL COURSES WITH AND WITHOUT STUDENTS ?
# SELECT * FROM tblStudents
# RIGHT OUTER JOIN
# COURSES
# ON
# COURSES.COURSE_ID = tblStudents.StdCourse_ID
#
#
# -- REQ 4 : HOW TO REPORT LIST OF ALL COURSES WITHOUT STUDENTS?
# SELECT * FROM COURSES
# LEFT OUTER JOIN
# tblStudents
# ON
# COURSES.COURSE_ID = tblStudents.StdCourse_ID
# WHERE
# tblStudents.StdCourse_ID IS NULL
#
# -- REQ 5 : HOW TO REPORT LIST OF ALL COURSES AND STUDENTS?
# SELECT * FROM COURSES CROSS JOIN tblStudents
# SELECT * FROM COURSES CROSS APPLY tblStudents
#
# -- REQ 6 : HOW TO TUNE QUERIES WITH JOINS [FOR BIG TABLES] ?
# SELECT * FROM COURSES
# INNER MERGE JOIN
# tblStudents
# ON COURSES.COURSE_ID = tblStudents.StdCourse_ID
#
# -- REQ 7 : HOW TO TUNE QUERIES WITH JOINS [FOR SMALL TABLES] ?
# SELECT * FROM COURSES
# LEFT OUTER LOOP JOIN
# tblStudents
# ON COURSES.COURSE_ID = tblStudents.StdCourse_ID
#
# -- REQ 8 : HOW TO TUNE QUERIES WITH JOINS [FOR HEAP TABLES] ?
# SELECT * FROM COURSES
# FULL OUTER LOOP JOIN
# tblStudents
# ON
# COURSES.COURSE_ID = tblStudents.StdCourse_ID
# -- QUERY 1: HOW TO REPORT LIST OF ALL POPULATION DETAILS?
# SELECT * FROM tblPopulation
#
# -- QUERY 2: HOW TO REPORT LIST OF ALL COUNTRY NAMES?
# SELECT Country FROM tblPopulation
#
# -- QUERY 3: HOW TO REPORT LIST OF ALL UNIQUE COUNTRIES DETAILS?
# SELECT Country FROM tblPopulation
# GROUP BY Country
#
# -- QUERY 4: HOW TO REPORT TOTAL POPULATION DETAILS?
# SELECT sum(Population) AS TOTAL_POP FROM tblPopulation
#
# -- QUERY 5: HOW TO REPORT COUNTRY WISE TOTAL POPULATION DETAILS?
# SELECT COUNTRY, sum(Population) AS TOTAL_POP FROM tblPopulation
# GROUP BY COUNTRY
#
# -- RULE : WHENEVER WE USE GROUP BY THEN COLUMNS USED IN SELECT SHOULD ALSO BE IN GROUP BY
# SELECT COUNTRY, STATE, sum(Population) AS TOTAL_POP FROM tblPopulation
# GROUP BY COUNTRY
#
# -- RULE : WHENEVER WE USE GROUP BY THEN COLUMNS USED IN SELECT SHOULD ALSO BE INCLUDED IN GROUP BY
#
# -- QUERY 6: HOW TO REPORT COUNTRY WISE, STATE WISE TOTAL POPULATION?
# SELECT COUNTRY, STATE, sum(Population) AS TOTAL_POP FROM tblPopulation
# GROUP BY COUNTRY, STATE
#
# -- QUERY 7: HOW TO REPORT COUNTRY WISE, STATE WISE, CITY WISE TOTALS?
# SELECT COUNTRY, STATE, CITY, sum(Population) AS TOTAL_POP FROM tblPopulation
# GROUP BY COUNTRY, STATE, CITY
#
# -- QUERY 8: HOW TO APPLY CONDITIONS ON GROUP BY DATA ?
# SELECT COUNTRY, STATE, CITY, sum(Population) AS TOTAL_POP FROM tblPopulation
# GROUP BY COUNTRY , STATE, CITY
# HAVING sum(Population) > 15
#
# -- QUERY 9: HOW TO APPLY CONDITIONS BEFORE AND AFTER GROUP BY ?
# SELECT COUNTRY, STATE, sum(Population) AS TOTAL_POP FROM tblPopulation
# WHERE COUNTRY = 'COUNTRY1' -- USED TO SPECIFY CONDITIONS ON NON-AGGREGATE VALUES
# GROUP BY COUNTRY , STATE
# HAVING sum(Population) > 5 -- USED TO SPECIFY CONDITIONS ON AGGREGATE VALUES
#
#
# -- QUERY 10: HOW TO REPORT TOTAL POPULATION USING ROLLUP ?
# SELECT COUNTRY, SUM(Population) AS TOTAL_POPULATION FROM tblPopulation
# GROUP BY COUNTRY
#
# SELECT COUNTRY, SUM(Population) AS TOTAL_POPULATION FROM tblPopulation
# GROUP BY ROLLUP(COUNTRY)
#
#NOT POSSIBLE IN R
# SELECT COUNTRY, SUM(Population) AS TOTAL_POPULATION, GROUPING(COUNTRY) FROM tblPopulation
# GROUP BY ROLLUP(COUNTRY)
#
#
# SELECT
# COUNTRY,
# SUM(Population) AS TOTAL_POPULATION, GROUPING(COUNTRY) FROM tblPopulation
# GROUP BY ROLLUP(COUNTRY)
# HAVING GROUPING(COUNTRY) = 0
# UNION ALL
# SELECT
# ISNULL(COUNTRY, 'GRAND TOTAL') AS COUNTRY,
# SUM(Population) AS TOTAL_POPULATION, GROUPING(COUNTRY) FROM tblPopulation
# GROUP BY ROLLUP(COUNTRY)
# HAVING GROUPING(COUNTRY) = 1
#
#
# SELECT
# COUNTRY,
# SUM(Population) AS TOTAL_POPULATION, GROUPING(COUNTRY) FROM tblPopulation
# GROUP BY ROLLUP(COUNTRY)
# HAVING GROUPING(COUNTRY) = 0
# UNION ALL
# SELECT
# COALESCE(COUNTRY, 'GRAND TOTAL') AS COUNTRY,
# SUM(Population) AS TOTAL_POPULATION, GROUPING(COUNTRY) FROM tblPopulation
# GROUP BY ROLLUP(COUNTRY)
# HAVING GROUPING(COUNTRY) = 1
#
# -- IS NULL ISNULL
#
#
#
# SELECT COUNTRY, STATE, SUM(Population) AS TOTAL_POPULATION FROM tblPopulation
# GROUP BY ROLLUP(COUNTRY,STATE) -- 11 ROWS
# -- COUNTRY WISE TOTAL + COUNTRY WISE STATE WISE TOTAL
#
#
# SELECT COUNTRY, STATE, SUM(Population) AS TOTAL_POPULATION FROM tblPopulation
# GROUP BY CUBE(COUNTRY,STATE) -- 13 ROWS
# -- COUNTRY WISE TOTAL + COUNTRY WISE STATE WISE TOTAL
# -- STATE WISE TOTAL
#
#
# SELECT * FROM CUSTOMERS_DATA
# SELECT * FROM PRODUCTS_DATA
# SELECT * FROM TIME_DATA
# SELECT * FROM SALES_DATA
#
#
# -- QUERY #1: HOW TO REPORT PRODUCT WISE TOTAL SALES?
# SELECT *
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
#
#
# -- QUERY #2
# SELECT EnglishProductName, SalesAmount
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
#
#
# -- QUERY #3
# SELECT EnglishProductName, SUM(SalesAmount)
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# GROUP BY EnglishProductName
#
#
# -- QUERY #4
# SELECT EnglishProductName, SUM(SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# GROUP BY EnglishProductName
#
#
#
# -- QUERY #5 : HOW TO REPORT PRODUCT WISE TOTAL SALES ABOVE 1000 USD?
# SELECT EnglishProductName, SUM(SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# GROUP BY EnglishProductName
# HAVING SUM(SalesAmount) > 1000
#
#
# -- QUERY #6 : HOW TO REPORT PRODUCT WISE TOTAL SALES AND TOTAL TAX ABOVE 1000 USD?
# SELECT EnglishProductName,
# SUM(SalesAmount) AS TOTAL_SALES, SUM(TAXAMT) AS TOTAL_TAX
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# GROUP BY EnglishProductName
# HAVING
# SUM(SalesAmount) > 1000 AND SUM(TAXAMT) > 1000
#
#
#
#
#
# -- QUERY #7 : HOW TO REPORT PRODUCT WISE TOTAL SALES AND TOTAL TAX ABOVE 1000 USD?
# SELECT EnglishProductName, SUM(SalesAmount) AS TOTAL_SALES, SUM(TAXAMT) AS TOTAL_TAX
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# GROUP BY EnglishProductName
# HAVING
# SUM(SalesAmount) > 1000 AND SUM(TAXAMT) > 1000
# ORDER BY TOTAL_SALES DESC
#
#
# -- QUERY #8 : HOW TO REPORT PRODUCT WISE TOTAL SALES AND TOTAL TAX ABOVE 1000 USD?
# SELECT EnglishProductName, SUM(SalesAmount) AS TOTAL_SALES, SUM(TAXAMT) AS TOTAL_TAX
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# GROUP BY EnglishProductName
# HAVING
# SUM(SalesAmount) > 1000 AND SUM(TAXAMT) > 1000
# ORDER BY 2 DESC -- ORDERING THE DATA BY USING COLUMN CARDINAL POSITION.
#
#
#
#
#
#
#
# -- QUERY 9: WRITE A QUERY TO REPORT SUM OF SALES AND TAX FOR PRODUCTS WITH MAXIMUM DEALER PRICE ?
# SELECT EnglishProductName, SUM(SalesAmount) AS TOTAL_SALES, SUM(TAXAMT) AS TOTAL_TAX
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# WHERE -- FOR CONDITIONS ON NON-AGGREGATE COLUMNS
# PRODUCTS_DATA.DealerPrice
# IN ( SELECT MAX(DealerPrice) FROM PRODUCTS_DATA)
# GROUP BY EnglishProductName
#
#
#
#
# -- QUERY 10: HOW TO REPORT SUM OF SALES FOR PRODUCTS WITH MAXIMUM DEALER PRICE BUT NOT FOR MINIMAL LIST PRICE ?
# -- NESTED SUB QUERY
# SELECT EnglishProductName, SUM(SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# WHERE
# PRODUCTS_DATA.DealerPrice
# IN ( SELECT MAX(DealerPrice) FROM PRODUCTS_DATA
# WHERE LISTPRICE
# NOT IN ( SELECT MIN(LISTPRICE) FROM PRODUCTS_DATA ) )
# GROUP BY EnglishProductName
#
#
#
# -- EXAMPLES TO JOIN MORE THAN TWO TABLES:
# SELECT * FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
#
#
#
# SELECT * FROM SALES_DATA
# INNER JOIN
# PRODUCTS_DATA
# ON
# SALES_DATA.ProductKey = PRODUCTS_DATA.ProductKey
# INNER JOIN
# TIME_DATA
# ON
# SALES_DATA.ORDERDATEKEY = TIME_DATA.TIMEKEY
#
#
# -- Q1: HOW TO REPORT YEAR WISE TOTAL SALES?
# -- Q2: HOW TO REPORT YEAR WISE, QUARTER WISE TOTAL SALES AND TOTAL TAX?
# -- Q3: HOW TO REPORT YEAR WISE, QUARTER WISE, MONTH WISE TOTAL SALES AND TOTAL TAX?
# -- Q4: HOW TO REPORT YEAR WISE, QUARTER WISE TOTAL SALES AND TOTAL TAX FOR JUNE MONTH ?
# -- Q5: HOW TO REPORT CLASS WISE, COLOR WISE PRODUCTS FOR EACH YEAR BASED ON ASC ORDER OF SALES?
# -- Q6: HOW TO REPORT TOTAL SALES FOR SUCH PRODUCTS WITH MAXIMUM NUMBER OF SALES?
# -- Q7: HOW TO REPORT TOTAL SALES FOR SUCH PRODUCTS EXCEPT WITH MINIMUM NUMBER OF SALES?
# -- Q8: HOW TO COMBINE THE RESULTS FROM ABOVE TWO QUERIES.
# -- Q9: HOW TO ADDRESS POSSIBLE BLOCKING ISSUES FROM ABOVE TWO QUERIES?
# -- Q10: HOW TO REPORT YEAR WISE, CUSTOMER WISE, PRODUCT WISE TOTAL SALES AND TOTAL TAX ABOVE 1000 USD?
#
#
#
# -- Q1: HOW TO REPORT YEAR WISE TOTAL SALES?
# SELECT T.CalendarYear, SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN TIME_DATA AS T
# ON
# S.OrderDateKey = T.TimeKey
# GROUP BY T.CalendarYear
#
#
# -- Q2: HOW TO REPORT YEAR WISE, QUARTER WISE TOTAL SALES AND TOTAL TAX?
# SELECT T.CalendarYear, T.CalendarQuarter,
# SUM(S.SalesAmount) AS TOTAL_SALES, SUM(S.TAXAMT) AS TOTAL_TAX
# FROM SALES_DATA AS S
# INNER JOIN TIME_DATA AS T
# ON
# S.OrderDateKey = T.TimeKey
# GROUP BY T.CalendarYear, T.CalendarQuarter
#
#
#
# -- Q3: HOW TO REPORT YEAR WISE, QUARTER WISE, MONTH WISE TOTAL SALES AND TOTAL TAX?
# SELECT T.CalendarYear, T.CalendarQuarter, T.EnglishMonthName,
# SUM(S.SalesAmount) AS TOTAL_SALES, SUM(S.TAXAMT) AS TOTAL_TAX
# FROM SALES_DATA AS S
# INNER JOIN TIME_DATA AS T
# ON
# S.OrderDateKey = T.TimeKey
# GROUP BY T.CalendarYear, T.CalendarQuarter, T.EnglishMonthName
#
# -- Q4: HOW TO REPORT YEAR WISE, QUARTER WISE TOTAL SALES AND TOTAL TAX FOR JUNE MONTH ?
# SELECT T.CalendarYear, T.CalendarQuarter,
# SUM(S.SalesAmount) AS TOTAL_SALES, SUM(S.TAXAMT) AS TOTAL_TAX
# FROM SALES_DATA AS S
# INNER JOIN TIME_DATA AS T
# ON
# S.OrderDateKey = T.TimeKey
# WHERE T.EnglishMonthName = 'JUNE'
# GROUP BY T.CalendarYear, T.CalendarQuarter
#
#
# -- Q5: HOW TO REPORT CLASS WISE, COLOR WISE PRODUCTS FOR EACH YEAR BASED ON ASC ORDER OF SALES?
# SELECT
# P.Class, P.Color, T.CalendarYear,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN TIME_DATA AS T
# ON
# S.OrderDateKey = T.TimeKey
# INNER JOIN PRODUCTS_DATA AS P
# ON
# P.ProductKey = S.ProductKey
# GROUP BY P.Class, P.Color, T.CalendarYear
#
#
# -- Q6: HOW TO REPORT TOTAL SALES FOR SUCH PRODUCTS WITH MAXIMUM NUMBER OF SALES?
#
# -- STEP 1: IDENTIFY THE PRODUCTS THAT HAVE MAX SALE VALUE:
# SELECT
# P.EnglishProductName,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN PRODUCTS_DATA AS P
# ON
# P.ProductKey = S.ProductKey
# GROUP BY P.EnglishProductName
#
# CREATE VIEW VW_SALE_PROIDUCTS
# AS
# SELECT
# P.EnglishProductName,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN PRODUCTS_DATA AS P
# ON
# P.ProductKey = S.ProductKey
# GROUP BY P.EnglishProductName
#
#
# SELECT EnglishProductName FROM VW_SALE_PROIDUCTS
# WHERE TOTAL_SALES = (SELECT MAX(TOTAL_SALES) FROM VW_SALE_PROIDUCTS)
#
# -- STEP 2:
# SELECT
# P.EnglishProductName, P.Color, P.Class,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN PRODUCTS_DATA AS P
# ON
# P.ProductKey = S.ProductKey
# WHERE
# P.EnglishProductName IN (
# SELECT EnglishProductName FROM VW_SALE_PROIDUCTS
# WHERE TOTAL_SALES = (SELECT MAX(TOTAL_SALES) FROM VW_SALE_PROIDUCTS)
# )
# GROUP BY P.EnglishProductName, P.Color, P.Class
#
#
#
#
# -- Q7: HOW TO REPORT TOTAL SALES FOR SUCH PRODUCTS EXCEPT WITH MINIMUM NUMBER OF SALES?
# SELECT
# P.EnglishProductName, P.Color, P.Class,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN PRODUCTS_DATA AS P
# ON
# P.ProductKey = S.ProductKey
# WHERE
# P.EnglishProductName NOT IN (
# SELECT EnglishProductName FROM VW_SALE_PROIDUCTS
# WHERE TOTAL_SALES = (SELECT MIN(TOTAL_SALES) FROM VW_SALE_PROIDUCTS)
# )
# GROUP BY P.EnglishProductName, P.Color, P.Class
#
#
#
# -- Q8: HOW TO COMBINE THE RESULTS FROM ABOVE TWO QUERIES ?
# SELECT
# P.EnglishProductName, P.Color, P.Class,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN PRODUCTS_DATA AS P
# ON
# P.ProductKey = S.ProductKey
# WHERE
# P.EnglishProductName NOT IN (
# SELECT EnglishProductName FROM VW_SALE_PROIDUCTS
# WHERE TOTAL_SALES = (SELECT MIN(TOTAL_SALES) FROM VW_SALE_PROIDUCTS)
# )
# OR
# P.EnglishProductName IN (
# SELECT EnglishProductName FROM VW_SALE_PROIDUCTS
# WHERE TOTAL_SALES = (SELECT MAX(TOTAL_SALES) FROM VW_SALE_PROIDUCTS)
# )
# GROUP BY P.EnglishProductName, P.Color, P.Class
#
#
#
# -- Q9: HOW TO ADDRESS POSSIBLE BLOCKING ISSUES FROM ABOVE TWO QUERIES?
# SELECT
# P.EnglishProductName, P.Color, P.Class,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S (READPAST)
# INNER JOIN PRODUCTS_DATA AS P (READPAST)
# ON
# P.ProductKey = S.ProductKey
# WHERE
# P.EnglishProductName NOT IN (
# SELECT EnglishProductName FROM VW_SALE_PROIDUCTS
# WHERE TOTAL_SALES = (SELECT MIN(TOTAL_SALES) FROM VW_SALE_PROIDUCTS)
# )
# OR
# P.EnglishProductName IN (
# SELECT EnglishProductName FROM VW_SALE_PROIDUCTS
# WHERE TOTAL_SALES = (SELECT MAX(TOTAL_SALES) FROM VW_SALE_PROIDUCTS)
# )
# GROUP BY P.EnglishProductName, P.Color, P.Class
#
#
#
# -- Q10: HOW TO REPORT YEAR WISE, CUSTOMER WISE, PRODUCT WISE TOTAL SALES AND TOTAL TAX ABOVE 1000 USD?
# SELECT
# T.CalendarYear, C.FirstName + ' ' + C.LastName AS FULLNAME, P.EnglishProductName,
# SUM(S.SalesAmount) AS TOTAL_SALES
# FROM SALES_DATA AS S
# INNER JOIN TIME_DATA AS T
# ON
# S.OrderDateKey = T.TimeKey
# INNER JOIN CUSTOMERS_DATA AS C
# ON
# C.CustomerKey = S.CustomerKey
# INNER JOIN PRODUCTS_DATA AS P
# ON
# P.ProductKey = S.ProductKey
# GROUP BY T.CalendarYear, C.FirstName + ' ' + C.LastName, P.EnglishProductName
# HAVING SUM(S.SalesAmount) > 1000
#
#
# /*
# NORMAL FORMS : A MECHANISM TO IDENTIFY THE TABLES, RELATIONS AND DATA TYPES.
# ENSURE PROPER DIVSION OF BUSINESS DATA INTO MULTIPLE TABLES.
#
# 1 NF : FIRST NORMAL FORM. EVERY COLUMN SHOULD BE ATOMIC. MEANS, STORES SINGLE VALUE.
#
# 2 NF : SECOND NORMAL FORM. EVERY TABLE SHOULD BE IN FIRST NORMAL FORM
# EVERY TABLE SHOULD BE HAVING A CANDIDATE KEY. USED FOR FUNCTIONAL DEPENDANCY.
#
# 3 NF : THIRD NORMAL FORM. EVERY TABLE SHOULD BE IN SECOND NORMAL FORM
# EVERY TABLE SHOULD BE HAVING A FOREIGN KEY. USED FOR MULTI-VALUED DEPENDANCY.
#
# BCNF NF : BOYCE-CODD NORMAL FORM. EVERY TABLE SHOULD BE IN THIRD NORMAL FORM
# EVERY TABLE SHOULD BE HAVING MORE THAN ONE FOREIGN KEY. USED FOR MULTI-VALUED DEPENDANCY.
# AND MANY TO ONE RELATION.
#
# 4 NF : FOURTH NORMAL FORM. EVERY TABLE SHOULD BE IN THIRD NORMAL FORM
# AND ATLEAST ONE SELF REFERENCE. MEANS A TABLE REFERENCING ITSELF. */
# SELECT * FROM tblPopulation WHERE COUNTRY = 'COUNTRY1'
#
# CREATE VIEW VW_COUNTRY1
# AS
# SELECT * FROM tblPopulation WHERE COUNTRY = 'COUNTRY1'
#
# SELECT * FROM VW_COUNTRY1
#
# -- MAIN PURPOSE OF VIEWS : TO STORE QUERIES FOR EASY END USER ACCESS.
#
# -- WHENEVER WE CREATE A DATABASE, SET OF PREDEFINED VIEWS ARE AUTO CREATED [SYSTEM VIEWS]
# -- HOW TO REPORT LIST OF DATABASES IN A SERVER?
# SELECT * FROM SYS.DATABASES
#
# -- HOW TO REPORT LIST OF TABLES IN A DATABASE?
# SELECT * FROM SYS.TABLES -- REPORTS TABLES IN THE CURRENT DATABASE [2P]
# SELECT * FROM UNIVERSITY_DATABASE.SYS.TABLES -- REPORTS TABLES IN THE SPECIFIED DATABASE [3P]
#
# -- HOW TO REPORT LIST OF PRIMARY KEYS, FOREIGN KEYS, CHECK CONSTRAINTS, ETC IN A DATABASE?
# SELECT * FROM SYS.OBJECTS
#
# -- HOW TO REPORT LIST OF COLUMNS FOR ALL TABLES & VIEWS & FUNCTIONS IN THE CURRENT DATABASE?
# SELECT * FROM INFORMATION_SCHEMA.COLUMNS
#
#
# CREATE FUNCTION fn_ReportDetails ( @country varchar(30) ) -- @country is a PARAMETER. INPUT VALUE
# RETURNS table
# AS
# RETURN
# (
# SELECT * FROM tblPopulation WHERE COUNTRY = @country -- PARAMETERIZED QUERY
# )
#
# SELECT * FROM fn_ReportDetails('COUNTRY1')
# SELECT * FROM fn_ReportDetails('COUNTRY2')
#
# -- MAIN PURPOSE OF FUNCTIONS : COMPUTATIONS (CALCULATIONS), DYNAMIC REPORTING
#
# CREATE PROCEDURE usp_ReportDetails ( @country varchar(30) ) -- @country is a PARAMETER. INPUT VALUE
# AS
# SELECT * FROM tblPopulation WHERE COUNTRY = @country -- PARAMETERIZED QUERY
#
# EXECUTE usp_ReportDetails 'COUNTRY1'
# EXEC usp_ReportDetails 'COUNTRY2'
#
# -- MAIN PURPOSE OF STORED PROCEDURES (SPROCs) : PROGRAMMING, QUERY TUNING [QUERIES CAN EXECUTE BETTER]
#
# -- ADVANTAGE OF STORED PROCEDURES OVER FUNCTIONS: SPs ARE PRE-COMPILED AND STORED FOR READY EXECUTIONS.
# FUNCTIONS NEED TO GET COMPILED EVERY TIME WE EXECUTE.
# COMPILATION : CONVERT FROM HIGH LEVEL SQL TO MACHINE CODE.
#
# -- ADVANTAGE OF FUNCTIONS OVER STORED PROCEDURES: FUNCTIONS ARE EXECUTED WITH REGULAR SELECT STATEMENT
# HENCE FLEXIBLE FOR DATA ACCESS, REPORTING, CALCULATIONS.
#
#
# -- SYSTEM STORED PROCEDURES:
# EXEC SP_HELPDB 'OBJECT_OVERVIEW' -- REPORTS DETAILS OF THE GIVEN DATABASE INCLUDING SIZE & FILES
# EXEC SP_HELP 'tblPopulation' -- REPORTS TABLE DEFINITION
# EXEC SP_HELPTEXT 'usp_ReportDetails' -- REPORTS VIEW, FUNCTION, PROCEDURE DEFINITION
# EXEC SP_DEPENDS 'tblPopulation' -- REPORTS THE OBJECT DEPENDANCIES ON THE TABLE
# EXEC SP_RENAME 'tblPopulation', 'tblPopulation_NEW' -- TO RENAME A PROCEDURE
# EXEC SP_RECOMPILE 'usp_ReportDetails' -- RECOMPILES THE STORED PROCEDURE NEXT TIME WE EXECUTE IT
# -- RECOMPILATION REQUIRED IF UNDERLYING TABLE STRUCUTRE CHANGES.
#
#
#
# SELECT @@VERSION
# SELECT @@SERVERNAME |
\name{videotrackR-package}
\alias{videotrackR-package}
\alias{videotrackR}
\docType{package}
\title{
What the package does (short line)
}
\description{
More about what it does (maybe more than one line)
~~ A concise (1-5 lines) description of the package ~~
}
\details{
\tabular{ll}{
Package: \tab videotrackR\cr
Type: \tab Package\cr
Version: \tab 1.0\cr
Date: \tab 2014-08-06\cr
License: \tab What Licence is it under ?\cr
}
~~ An overview of how to use the package, including the most important functions ~~
}
\author{
Who wrote it
Maintainer: Who to complain to <yourfault@somewhere.net>
}
\references{
~~ Literature or other references for background information ~~
}
~~ Optionally other standard keywords, one per line, from file KEYWORDS in the R documentation directory ~~
\keyword{ package }
\seealso{
~~ Optional links to other man pages, e.g. ~~
~~ \code{\link[<pkg>:<pkg>-package]{<pkg>}} ~~
}
\examples{
%% ~~ simple examples of the most important functions ~~
}
| /man/videotrackR-package.Rd | no_license | swarm-lab/videotrackR | R | false | false | 977 | rd | \name{videotrackR-package}
\alias{videotrackR-package}
\alias{videotrackR}
\docType{package}
\title{
What the package does (short line)
}
\description{
More about what it does (maybe more than one line)
~~ A concise (1-5 lines) description of the package ~~
}
\details{
\tabular{ll}{
Package: \tab videotrackR\cr
Type: \tab Package\cr
Version: \tab 1.0\cr
Date: \tab 2014-08-06\cr
License: \tab What Licence is it under ?\cr
}
~~ An overview of how to use the package, including the most important functions ~~
}
\author{
Who wrote it
Maintainer: Who to complain to <yourfault@somewhere.net>
}
\references{
~~ Literature or other references for background information ~~
}
~~ Optionally other standard keywords, one per line, from file KEYWORDS in the R documentation directory ~~
\keyword{ package }
\seealso{
~~ Optional links to other man pages, e.g. ~~
~~ \code{\link[<pkg>:<pkg>-package]{<pkg>}} ~~
}
\examples{
%% ~~ simple examples of the most important functions ~~
}
|
# Template for running and plotting a very simple agent-based model in R
# Original: Professor Bear Braumoeller, Department of Political Science, Ohio State
# Tweaked by Jon Green, Department of Political Science, Ohio State
# This code creates a 20x20 grid of 0s and 1s, which represent values of some
# variable held by agents in those cells. It then chooses two adjacent cells,
# the first at random and the second at random from among the first cell's
# neighbors, and applies a simple rule -- the first cell takes on the value
# of the second. It iterates this cell selection and rule application 1,000
# times, displays the result, and tracks the fraction of 1s in the matrix
# over time.
# This is not meant to represent a meaningful social process. It's just meant
# to be a template for students and colleagues to use to create more interesting
# agent-based models.
library(spam)
#This function sets the matrix
abm.matrix <- function(dimension = 20){
mat <- matrix(sample(c(0,1), dimension*dimension, replace=TRUE), nrow=dimension, ncol=dimension)
return(mat)
}
#This function sets the thing to track (in this case, the ratio of black to white cells)
bin.ratio <- function(mat){
bin.ratio <- sum(mat)/(nrow(mat)*ncol(mat))
return(bin.ratio)
}
#This function picks the first cell
select.cell <- function(mat){
fc.row <- round(runif(1)*(nrow(mat))+0.5)
fc.col <- round(runif(1)*(ncol(mat))+0.5)
return(c(fc.row, fc.col))
}
#This function picks a neighboring cell, wrapping around if it winds up out of bounds
select.neighbor <- function(first.cell, mat){
#Match cells
sc.row <- first.cell[1]
sc.col <- first.cell[2]
#Move to neighboring row/column
while((sc.row == first.cell[1]) & (sc.col == first.cell[2])){
sc.row <- first.cell[1] + sample(c(-1, 0, 1), 1)
sc.col <- first.cell[2] + sample(c(-1, 0, 1), 1)
}
#If out of bounds, wraparound
sc.row[sc.row==0] <- nrow(mat)
sc.row[sc.row==nrow(mat)+1] <- 1
sc.col[sc.col==0] <- ncol(mat)
sc.col[sc.col==ncol(mat)+1] <- 1
return(c(sc.row, sc.col))
}
#This function makes the first cell match the second cell
copy.values <- function(mat, first.cell, second.cell){
mat[first.cell[1],first.cell[2]] <- mat[second.cell[1],second.cell[2]]
return(mat)
}
#This function updates the thing to track
add.to.tracked.series <- function(br, mat){
br <- c(br, bin.ratio(mat))
return(br)
}
#This function displays results
display.result <- function(mat, thing.to.track, iteration){
if(iteration==1){
image(t(mat), col=c("white", "black"), axes=FALSE)
par1 <- c(list(mfg=c(1,1,1,2)), par(pars))
plot(-100, -100, xlim=c(1,1000), ylim=c(0,1),
ylab="Fraction of black squares",
xlab="Iteration",
main = "Fraction of Black Cells",
type="n", cex.axis=0.8)
rect(par("usr")[1], par("usr")[3], par("usr")[2], par("usr")[4], col="#E6E6E6")
abline(h=c(0,0.25,0.5,0.75,1), col="white", lwd=0.5)
abline(v=c(0,200,400,600,800,1000), col="white", lwd=0.5)
par2 <- c(list(mfg=c(1,2,1,2)), par(pars))
} else {
par(par1)
image(t(mat), col=c("white", "black"), axes=FALSE)
par(par2)
segments(iteration-1, ratio[iteration-1], iteration, ratio[iteration], col="black", lwd=1)
}
}
bin.matrix <- abm.matrix(dimension = 15)
ratio <- bin.ratio(bin.matrix)
par(mfrow=c(1,2))
pars <- c('plt','usr')
for(iteration in 1:1000){
first.cell <- select.cell(mat = bin.matrix)
second.cell <- select.neighbor(first.cell = first.cell, mat = bin.matrix)
bin.matrix <- copy.values(mat = bin.matrix, first.cell = first.cell, second.cell = second.cell)
ratio <- add.to.tracked.series(br = ratio, mat = bin.matrix)
display.result(mat = bin.matrix, thing.to.track = bin.ratios, iteration)
}
| /ABM_simple_template.R | no_license | jgreen4919/ABM | R | false | false | 3,763 | r | # Template for running and plotting a very simple agent-based model in R
# Original: Professor Bear Braumoeller, Department of Political Science, Ohio State
# Tweaked by Jon Green, Department of Political Science, Ohio State
# This code creates a 20x20 grid of 0s and 1s, which represent values of some
# variable held by agents in those cells. It then chooses two adjacent cells,
# the first at random and the second at random from among the first cell's
# neighbors, and applies a simple rule -- the first cell takes on the value
# of the second. It iterates this cell selection and rule application 1,000
# times, displays the result, and tracks the fraction of 1s in the matrix
# over time.
# This is not meant to represent a meaningful social process. It's just meant
# to be a template for students and colleagues to use to create more interesting
# agent-based models.
library(spam)
#This function sets the matrix
abm.matrix <- function(dimension = 20){
mat <- matrix(sample(c(0,1), dimension*dimension, replace=TRUE), nrow=dimension, ncol=dimension)
return(mat)
}
#This function sets the thing to track (in this case, the ratio of black to white cells)
bin.ratio <- function(mat){
bin.ratio <- sum(mat)/(nrow(mat)*ncol(mat))
return(bin.ratio)
}
#This function picks the first cell
select.cell <- function(mat){
fc.row <- round(runif(1)*(nrow(mat))+0.5)
fc.col <- round(runif(1)*(ncol(mat))+0.5)
return(c(fc.row, fc.col))
}
#This function picks a neighboring cell, wrapping around if it winds up out of bounds
select.neighbor <- function(first.cell, mat){
#Match cells
sc.row <- first.cell[1]
sc.col <- first.cell[2]
#Move to neighboring row/column
while((sc.row == first.cell[1]) & (sc.col == first.cell[2])){
sc.row <- first.cell[1] + sample(c(-1, 0, 1), 1)
sc.col <- first.cell[2] + sample(c(-1, 0, 1), 1)
}
#If out of bounds, wraparound
sc.row[sc.row==0] <- nrow(mat)
sc.row[sc.row==nrow(mat)+1] <- 1
sc.col[sc.col==0] <- ncol(mat)
sc.col[sc.col==ncol(mat)+1] <- 1
return(c(sc.row, sc.col))
}
#This function makes the first cell match the second cell
copy.values <- function(mat, first.cell, second.cell){
mat[first.cell[1],first.cell[2]] <- mat[second.cell[1],second.cell[2]]
return(mat)
}
#This function updates the thing to track
add.to.tracked.series <- function(br, mat){
br <- c(br, bin.ratio(mat))
return(br)
}
#This function displays results
display.result <- function(mat, thing.to.track, iteration){
if(iteration==1){
image(t(mat), col=c("white", "black"), axes=FALSE)
par1 <- c(list(mfg=c(1,1,1,2)), par(pars))
plot(-100, -100, xlim=c(1,1000), ylim=c(0,1),
ylab="Fraction of black squares",
xlab="Iteration",
main = "Fraction of Black Cells",
type="n", cex.axis=0.8)
rect(par("usr")[1], par("usr")[3], par("usr")[2], par("usr")[4], col="#E6E6E6")
abline(h=c(0,0.25,0.5,0.75,1), col="white", lwd=0.5)
abline(v=c(0,200,400,600,800,1000), col="white", lwd=0.5)
par2 <- c(list(mfg=c(1,2,1,2)), par(pars))
} else {
par(par1)
image(t(mat), col=c("white", "black"), axes=FALSE)
par(par2)
segments(iteration-1, ratio[iteration-1], iteration, ratio[iteration], col="black", lwd=1)
}
}
bin.matrix <- abm.matrix(dimension = 15)
ratio <- bin.ratio(bin.matrix)
par(mfrow=c(1,2))
pars <- c('plt','usr')
for(iteration in 1:1000){
first.cell <- select.cell(mat = bin.matrix)
second.cell <- select.neighbor(first.cell = first.cell, mat = bin.matrix)
bin.matrix <- copy.values(mat = bin.matrix, first.cell = first.cell, second.cell = second.cell)
ratio <- add.to.tracked.series(br = ratio, mat = bin.matrix)
display.result(mat = bin.matrix, thing.to.track = bin.ratios, iteration)
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/get_os.R
\name{get_os}
\alias{get_os}
\title{Get operating system}
\usage{
get_os()
}
\value{
A string with the operating system.
}
\description{
Get operating system
}
| /man/get_os.Rd | permissive | SC-COSMO/sccosmomcma | R | false | true | 247 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/get_os.R
\name{get_os}
\alias{get_os}
\title{Get operating system}
\usage{
get_os()
}
\value{
A string with the operating system.
}
\description{
Get operating system
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/relabel_colms.R
\name{relabel_colms}
\alias{relabel_colms}
\title{Relabel columns to match the selection table format}
\usage{
relabel_colms(X, extra.cols.name = NULL, extra.cols.new.name = NULL,
khz.to.hz = FALSE, hz.to.khz = FALSE, waveform = FALSE)
}
\arguments{
\item{X}{Data frame imported from Raven.}
\item{extra.cols.name}{Character vector with the names of additional columns to be relabeled. Default is \code{NULL}.
'extra.cols.new.name' must be also provided.}
\item{extra.cols.new.name}{Character vector with the new names for the additional columns to be relabeled.
Default is \code{NULL}. 'extra.cols.name' must be also provided.}
\item{khz.to.hz}{Logical. Controls if frequency variables ('top.freq' and 'bottom.freq') should be converted from kHz
(the unit used by other bioacoustic analysis R packages like \code{\link{warbleR}}) to Hz (the unit used by Raven).
Default is \code{FALSE}.}
\item{hz.to.khz}{Logical. Controls if frequency variables ('top.freq' and 'bottom.freq') should be converted from Hz
(the unit used by other bioacoustic analysis R packages like Raven) to kHz (the unit used by \code{\link{warbleR}}).
Default is \code{FALSE}. Ignored if 'kHz.to.hz' is \code{TRUE}.}
\item{waveform}{Logical to control if 'waveform' related data should be included (this data is typically duplicated in 'spectrogram' data). Default is \code{FALSE} (not to include it).}
}
\value{
The function returns the input data frame with new column names for time and frequency 'coordinates' and sound files and selections.
}
\description{
\code{ relabel_colms} relabels columns to match the selection table format (as in the R package \code{\link{warbleR}})
}
\details{
This function relabels columns to match the selection table format to match then ones used by other bioacoustic analysis R packages like \code{\link{warbleR}}.
}
\examples{
# Load data
data(selection_files)
#save 'Raven' selection tables in the temporary directory
writeLines(selection_files[[5]], con = names(selection_files)[5])
\donttest{
#'# import data to R
rvn.dat <- imp_raven(all.data = TRUE)
names(rvn.dat)
# Select data for a single sound file
rvn.dat2 <- relabel_colms(rvn.dat)
names(rvn.dat2)
# plus 1 additional column
rvn.dat2 <- relabel_colms(rvn.dat, extra.cols.name = "selec.file", "Raven selection file")
names(rvn.dat2)
# plus 2 additional column
rvn.dat2 <- relabel_colms(rvn.dat, extra.cols.name = c("selec.file", "View"),
c("Raven selection file", "Raven view"))
names(rvn.dat2)
}
}
\seealso{
\code{\link{imp_raven}}; \code{\link{exp_raven}}
}
\author{
Marcelo Araya-Salas (\email{araya-salas@cornell.edu})
}
| /man/relabel_colms.Rd | no_license | DanWoodrich/Rraven | R | false | true | 2,720 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/relabel_colms.R
\name{relabel_colms}
\alias{relabel_colms}
\title{Relabel columns to match the selection table format}
\usage{
relabel_colms(X, extra.cols.name = NULL, extra.cols.new.name = NULL,
khz.to.hz = FALSE, hz.to.khz = FALSE, waveform = FALSE)
}
\arguments{
\item{X}{Data frame imported from Raven.}
\item{extra.cols.name}{Character vector with the names of additional columns to be relabeled. Default is \code{NULL}.
'extra.cols.new.name' must be also provided.}
\item{extra.cols.new.name}{Character vector with the new names for the additional columns to be relabeled.
Default is \code{NULL}. 'extra.cols.name' must be also provided.}
\item{khz.to.hz}{Logical. Controls if frequency variables ('top.freq' and 'bottom.freq') should be converted from kHz
(the unit used by other bioacoustic analysis R packages like \code{\link{warbleR}}) to Hz (the unit used by Raven).
Default is \code{FALSE}.}
\item{hz.to.khz}{Logical. Controls if frequency variables ('top.freq' and 'bottom.freq') should be converted from Hz
(the unit used by other bioacoustic analysis R packages like Raven) to kHz (the unit used by \code{\link{warbleR}}).
Default is \code{FALSE}. Ignored if 'kHz.to.hz' is \code{TRUE}.}
\item{waveform}{Logical to control if 'waveform' related data should be included (this data is typically duplicated in 'spectrogram' data). Default is \code{FALSE} (not to include it).}
}
\value{
The function returns the input data frame with new column names for time and frequency 'coordinates' and sound files and selections.
}
\description{
\code{ relabel_colms} relabels columns to match the selection table format (as in the R package \code{\link{warbleR}})
}
\details{
This function relabels columns to match the selection table format to match then ones used by other bioacoustic analysis R packages like \code{\link{warbleR}}.
}
\examples{
# Load data
data(selection_files)
#save 'Raven' selection tables in the temporary directory
writeLines(selection_files[[5]], con = names(selection_files)[5])
\donttest{
#'# import data to R
rvn.dat <- imp_raven(all.data = TRUE)
names(rvn.dat)
# Select data for a single sound file
rvn.dat2 <- relabel_colms(rvn.dat)
names(rvn.dat2)
# plus 1 additional column
rvn.dat2 <- relabel_colms(rvn.dat, extra.cols.name = "selec.file", "Raven selection file")
names(rvn.dat2)
# plus 2 additional column
rvn.dat2 <- relabel_colms(rvn.dat, extra.cols.name = c("selec.file", "View"),
c("Raven selection file", "Raven view"))
names(rvn.dat2)
}
}
\seealso{
\code{\link{imp_raven}}; \code{\link{exp_raven}}
}
\author{
Marcelo Araya-Salas (\email{araya-salas@cornell.edu})
}
|
# HPD 신뢰구간
# grid method
HPDgrid = function(prob, level=0.95){
prob.sort=sort(prob, decreasing=T)
M=min(which(cumsum(prob.sort)>=level))
height = prob.sort[M]
HPD.index=which(prob>=height)
HPD.level=sum(prob[HPD.index])
res=list(index=HPD.index, level=HPD.level)
return(res)
}
N=1001; level=0.95
theta=seq(-3,3,length=N)
prob=exp(-0.5/0.25*(theta-0.3)^2) # likelihood
prob=prob/sum(prob)
HPD=HPDgrid(prob,level)
HPDgrid.hat = c(min(theta[HPD$index]), max(theta[HPD$index]))
plot(theta, prob,type="l")
abline(v=HPDgrid.hat, lty=2, col="blue")
HPD$level
# sample method
HPDsample = function(th, level=.95){
N = length(th)
theta.sort = sort(th)
M = ceiling(N*level)
nCI = N-M+1
CI.width = theta.sort[1:nCI+M-1]-theta.sort[1:nCI]
HPD.index = which.min(CI.width)
HPD = c(theta.sort[HPD.index], theta.sort[HPD.index+M-1])
return(HPD)
}
N = 10000; level=0.95
theta = rnorm(N, 0.3, 0.5) #posterior
HPDsample.hat = HPDsample(theta,level)
th = seq(-3,3,length=N)
post = dnorm(th, 0.3,0.5)
plot(th, post, type="l")
abline(v=HPDsample.hat, lty=2)
| /HPD.R | no_license | dohye/R | R | false | false | 1,081 | r | # HPD 신뢰구간
# grid method
HPDgrid = function(prob, level=0.95){
prob.sort=sort(prob, decreasing=T)
M=min(which(cumsum(prob.sort)>=level))
height = prob.sort[M]
HPD.index=which(prob>=height)
HPD.level=sum(prob[HPD.index])
res=list(index=HPD.index, level=HPD.level)
return(res)
}
N=1001; level=0.95
theta=seq(-3,3,length=N)
prob=exp(-0.5/0.25*(theta-0.3)^2) # likelihood
prob=prob/sum(prob)
HPD=HPDgrid(prob,level)
HPDgrid.hat = c(min(theta[HPD$index]), max(theta[HPD$index]))
plot(theta, prob,type="l")
abline(v=HPDgrid.hat, lty=2, col="blue")
HPD$level
# sample method
HPDsample = function(th, level=.95){
N = length(th)
theta.sort = sort(th)
M = ceiling(N*level)
nCI = N-M+1
CI.width = theta.sort[1:nCI+M-1]-theta.sort[1:nCI]
HPD.index = which.min(CI.width)
HPD = c(theta.sort[HPD.index], theta.sort[HPD.index+M-1])
return(HPD)
}
N = 10000; level=0.95
theta = rnorm(N, 0.3, 0.5) #posterior
HPDsample.hat = HPDsample(theta,level)
th = seq(-3,3,length=N)
post = dnorm(th, 0.3,0.5)
plot(th, post, type="l")
abline(v=HPDsample.hat, lty=2)
|
##' @export
writevol <- function(x,filename="test",ANALYZE=FALSE,flip=TRUE,gzipped=FALSE,template,...) {
if (missing(template)) {
template <- system.file("brains/con_0007.hdr.gz",package="neurocdf")
template <- gsub(".hdr","",template)
}
res <- oro.nifti::readNIfTI(template)
## oro.nifti::writeNIfTI(res,filename="con_0007",gzipped=TRUE,onefile=TRUE)
## require(Rniftilib)
## ff <- gsub(".hdr","",system.file("brains/con_0007.hdr",package="neurocdf"))
## L <- nifti.image.read(ff)
## nifti.set.filenames(L, filename, check=0, set_byte_order=1)
## L[] <- x
## nifti.image.write(L)
## return(NULL)
## require(AnalyzeFMRI)
## L <- f.read.nifti.header(system.file("brains/single_subj_T1.nii",package="neurocdf"))
## L$scl.slope <- NULL
## L$scl.inter <- NULL
## L$datatype <- 16
## L$filename <- NULL
## f.write.nifti(x[],filename,size="float",L=L)
## return(NULL)
hdr <- list()
keep <- c(
## "vox_offset",
"scl_slope","scl_inter",
## "intent_code","qform_code","sform_code",
"quatern_b","quatern_c","quatern_d",
"intent_p1","intent_p2","intent_p3",
"srow_x","srow_y","srow_z",
"qoffset_x","qoffset_y","qoffset_z",
"xyzt_units",
"pixdim")
for (i in keep) {
new <- list(slot(res,i)); names(new) <- i
hdr <- c(hdr, new)
}
hdr$datatype <- 16
hdr$dim <- dim(x)
##if (flip) hdr$srow_x <- -hdr$srow_x
if (ANALYZE) {
out <- do.call(oro.nifti::anlz, c(list(img=x[]),hdr))
oro.nifti::writeANALYZE(out,filename=filename,gzipped=gzipped,...)
} else {
out <- do.call(oro.nifti::nifti, c(list(img=x[]),hdr))
oro.nifti::writeNIfTI(out,filename=filename,gzipped=gzipped,onefile=FALSE,...)
}
}
| /R/writevol.R | no_license | kkholst/neurocdf | R | false | false | 1,827 | r | ##' @export
writevol <- function(x,filename="test",ANALYZE=FALSE,flip=TRUE,gzipped=FALSE,template,...) {
if (missing(template)) {
template <- system.file("brains/con_0007.hdr.gz",package="neurocdf")
template <- gsub(".hdr","",template)
}
res <- oro.nifti::readNIfTI(template)
## oro.nifti::writeNIfTI(res,filename="con_0007",gzipped=TRUE,onefile=TRUE)
## require(Rniftilib)
## ff <- gsub(".hdr","",system.file("brains/con_0007.hdr",package="neurocdf"))
## L <- nifti.image.read(ff)
## nifti.set.filenames(L, filename, check=0, set_byte_order=1)
## L[] <- x
## nifti.image.write(L)
## return(NULL)
## require(AnalyzeFMRI)
## L <- f.read.nifti.header(system.file("brains/single_subj_T1.nii",package="neurocdf"))
## L$scl.slope <- NULL
## L$scl.inter <- NULL
## L$datatype <- 16
## L$filename <- NULL
## f.write.nifti(x[],filename,size="float",L=L)
## return(NULL)
hdr <- list()
keep <- c(
## "vox_offset",
"scl_slope","scl_inter",
## "intent_code","qform_code","sform_code",
"quatern_b","quatern_c","quatern_d",
"intent_p1","intent_p2","intent_p3",
"srow_x","srow_y","srow_z",
"qoffset_x","qoffset_y","qoffset_z",
"xyzt_units",
"pixdim")
for (i in keep) {
new <- list(slot(res,i)); names(new) <- i
hdr <- c(hdr, new)
}
hdr$datatype <- 16
hdr$dim <- dim(x)
##if (flip) hdr$srow_x <- -hdr$srow_x
if (ANALYZE) {
out <- do.call(oro.nifti::anlz, c(list(img=x[]),hdr))
oro.nifti::writeANALYZE(out,filename=filename,gzipped=gzipped,...)
} else {
out <- do.call(oro.nifti::nifti, c(list(img=x[]),hdr))
oro.nifti::writeNIfTI(out,filename=filename,gzipped=gzipped,onefile=FALSE,...)
}
}
|
\name{pisa.mean.pv}
\alias{pisa.mean.pv}
\alias{pisa2015.mean.pv}
\title{
Calculates mean achievement score
}
\description{
pisa.mean.pv uses five plausible values to calculate the mean achievement score and its standard error
Use the pisa2015.mean.pv() for data from PISA 2015 study.
}
\usage{
pisa.mean.pv(pvlabel, by, data, export = FALSE, name = "output",
folder = getwd())
pisa2015.mean.pv(pvlabel, by, data, export = FALSE, name = "output",
folder = getwd())
}
\arguments{
\item{pvlabel}{
The label corresponding to the achievement variable, for example, "READ", for overall reading performance.
}
\item{by}{
The label for the grouping variable, usually the countries (i.e., by="IDCNTRYL"), but could be any other categorical variable.
}
\item{data}{
An R object, normally a data frame, containing the data from PIRLS.
}
\item{export}{
A logical value. If TRUE, the output is exported to a file in comma-separated value format (.csv) that can be opened from LibreOffice or Excel.
}
\item{name}{
The name of the exported file.
}
\item{folder}{
The folder where the exported file is located.
}
}
\value{
pisa.mean.pv returns a data frame with the mean values and standard errors.
}
\seealso{
timss.mean.pv, pirls.mean.pv, piaac.mean.pv
}
\examples{
\dontrun{
# Table I.2.3a, p. 305 International Report 2012
pisa.mean.pv(pvlabel = "MATH", by = "IDCNTRYL", data = pisa)
pisa.mean.pv(pvlabel = "MATH", by = c("IDCNTRYL", "ST04Q01"), data = pisa)
# Table III.2.1a, p. 232, International Report 2012
pisa.mean.pv(pvlabel="MATH", by=c("IDCNTRYL", "ST08Q01"), data=pisa)
# Figure I.2.16 p. 56 International Report 2009
pisa.mean.pv(pvlabel = "READ", by = "IDCNTRYL", data = pisa)
# PISA 2015
pisa2015.mean.pv(pvlabel = "READ", by = "CNT", data = stud2015)
}
}
| /man/pisa.mean.pv.Rd | no_license | dickli/intsvy2 | R | false | false | 1,794 | rd | \name{pisa.mean.pv}
\alias{pisa.mean.pv}
\alias{pisa2015.mean.pv}
\title{
Calculates mean achievement score
}
\description{
pisa.mean.pv uses five plausible values to calculate the mean achievement score and its standard error
Use the pisa2015.mean.pv() for data from PISA 2015 study.
}
\usage{
pisa.mean.pv(pvlabel, by, data, export = FALSE, name = "output",
folder = getwd())
pisa2015.mean.pv(pvlabel, by, data, export = FALSE, name = "output",
folder = getwd())
}
\arguments{
\item{pvlabel}{
The label corresponding to the achievement variable, for example, "READ", for overall reading performance.
}
\item{by}{
The label for the grouping variable, usually the countries (i.e., by="IDCNTRYL"), but could be any other categorical variable.
}
\item{data}{
An R object, normally a data frame, containing the data from PIRLS.
}
\item{export}{
A logical value. If TRUE, the output is exported to a file in comma-separated value format (.csv) that can be opened from LibreOffice or Excel.
}
\item{name}{
The name of the exported file.
}
\item{folder}{
The folder where the exported file is located.
}
}
\value{
pisa.mean.pv returns a data frame with the mean values and standard errors.
}
\seealso{
timss.mean.pv, pirls.mean.pv, piaac.mean.pv
}
\examples{
\dontrun{
# Table I.2.3a, p. 305 International Report 2012
pisa.mean.pv(pvlabel = "MATH", by = "IDCNTRYL", data = pisa)
pisa.mean.pv(pvlabel = "MATH", by = c("IDCNTRYL", "ST04Q01"), data = pisa)
# Table III.2.1a, p. 232, International Report 2012
pisa.mean.pv(pvlabel="MATH", by=c("IDCNTRYL", "ST08Q01"), data=pisa)
# Figure I.2.16 p. 56 International Report 2009
pisa.mean.pv(pvlabel = "READ", by = "IDCNTRYL", data = pisa)
# PISA 2015
pisa2015.mean.pv(pvlabel = "READ", by = "CNT", data = stud2015)
}
}
|
# Damon Polioudakis
# 2016-03-09
# Calculate gene length and GC content for exon union of each gene in list
# If GC content and exon union have already been calculated for reference,
# can skip to last section and load
################################################################################
rm(list = ls())
sessionInfo()
source("http://bioconductor.org/biocLite.R")
# biocLite("Genominator")
library(Repitools)
library(BSgenome)
# Run if BSgenome.Hsapiens.UCSC.hg19 is not installed:
# biocLite("BSgenome.Hsapiens.UCSC.hg19")
library(BSgenome.Hsapiens.UCSC.hg19)
library(GenomicRanges)
# Run if Genominator is not installed:
# biocLite("Genominator")
library(Genominator)
library(biomaRt)
library(ggplot2)
options(stringsAsFactors = FALSE)
## Load data and assign variables
exDatDF <- read.csv("../data/htseq/Exprs_HTSCexon.csv")
# Get Gencode 18 gtf file - this was cleaned by selecting only the columns
# containing the word "exon" and with the relevant information - chrnum,
# feature type, strand, start, end, and ensg and ense IDs separated by a semicolon
gtfInfoDF <- read.table("../../source/gencode.v19.annotation.gtf", sep = "\t")
################################################################################
### Format and Filter
# Keep only the exon level features
keep <- gtfInfoDF[ , 3]=="exon"
gtfInfoDF <- gtfInfoDF[keep, ]
# Split the semicolon separated information
geneExonInfo <- unlist(strsplit(gtfInfoDF[ , 9], "[;]"))
# Finding which has gene_id for ensembl ID
genCol<- which(regexpr("gene_id ", geneExonInfo) > 0)
getSeq <- geneExonInfo[genCol]
ENSGID <- substr(getSeq, 9, 100)
length(unique(ENSGID)) #57912
transCol <- which(regexpr("transcript_id ", geneExonInfo) > 0)
tranSeq <- geneExonInfo[transCol]
ENSEID <- substr(tranSeq, 16, 100)
length(unique(ENSEID)) #196612
gtfInfoDF <- cbind(gtfInfoDF[ , c(1:8)], ENSGID, ENSEID)
geneCol <- which(regexpr("gene_name ", geneExonInfo) > 0)
geneSeq <- geneExonInfo[geneCol]
GENEID <- substr(geneSeq, 12, 100)
length(unique(GENEID)) #55763
gtfInfoDF <- cbind(gtfInfoDF[ , c(1:8)], ENSGID, ENSEID)
# 6 and 8 columns are blank - remove
gtfInfoDF <- gtfInfoDF[ , -c(6, 8)]
######## From Viveks script - I think this incorrectly only keeps 1 exon per gene
# ## Keep only one copy of each ENSEID - the gtf file records one copy for each transcript id
# keep <- match(unique(ENSEID),ENSEID)
# gtfInfo1dF <- gtfInfoDF[keep,]
# ##gtfInfoDF[,1] <- substr(gtfInfoDF[,1],4,10) ## 672406 exons is exactly what biomaRt contains
gtfInfo1dF <- gtfInfoDF
################################################################################
### Recode things for the Genominator package
# Using as.factor to coerce chromosome names can really botch things up.. beware!
# So go ahead and convert MT, X, and Y to numbers throughout, unless necessary
# for other purposes
chrnums <- gtfInfo1dF[ ,1]
chrnums[chrnums=="MT"] <- "25"
chrnums[chrnums=="X"] <- "23"
chrnums[chrnums=="Y"] <- "24"
# If there are no chr to remove this does not work
## removing Non-annotated(NT) chromosomes
# rmChR.col1 <- which(regexpr("HG", chrnums) > 0)
# rmChR.col2 <- which(regexpr("GL", chrnums) > 0)
# rmChR.col3 <- which(regexpr("HS", chrnums) > 0)
# rmChR.col <- c(rmChR.col1, rmChR.col2, rmChR.col3)
# gtfInfo1dF <- gtfInfo1dF[-rmChR.col, ]
# chrnums <- chrnums[-rmChR.col]
gtfInfo1dF[ ,1] <- chrnums ## Check here
gtfInfoDF <- gtfInfo1dF
strinfo <- gtfInfoDF[ ,6]
strinfo[strinfo=="+"] <- 1L
strinfo[strinfo=="-"] <- -1L
gtfInfoDF[ ,6] <- strinfo
# chr integer, strand integer (-1L,0L,1L), start integer, end integer, ensg
# and transcript id
geneDat1 <- gtfInfoDF[ ,c(1 ,6 ,4 ,5 ,7 ,8)]
geneDat1 <- data.frame(as.numeric(chrnums)
, as.numeric(geneDat1[ ,2])
, as.numeric(geneDat1[ ,3])
, as.numeric(geneDat1[ ,4])
, geneDat1[ ,5]
, geneDat1[ ,6])
names(geneDat1) <- c("chr","strand","start","end","ensembl_gene_id","ensembl_exon_id")
geneDatX <- geneDat1[order(geneDat1[ ,1], geneDat1[ ,3]), ]
# Remove NAs from ERCC chromosomes
geneDatX <- geneDatX[complete.cases(geneDatX), ]
# Have genominator check if this is a valid data object
validAnnotation(geneDatX)
# Should take a few minutes !!!!
geneDatX <- makeGeneRepresentation(annoData = geneDat1
, type = "Ugene"
, gene.id = "ensembl_gene_id"
, transcript.id = "ensembl_exon_id"
, verbose = TRUE)
save(geneDatX, file = "../../source/Genominator_Union_Exon_Models_ENSEMBLhg19.rda")
load(file = "../../source/Genominator_Union_Exon_Models_ENSEMBLhg19.rda")
################################################################################
### Now use the genominator output to calculate GC content - Use mac laptop
geneDat2 <- cbind(geneDatX, geneDatX[ , 3] - geneDatX[ , 2])
geneDat2 <- geneDat2[order(geneDat2[ , 5]) , ]
# Change formatting again
chrNums <- geneDat2[ ,"chr"]
chrNums[chrNums=="25"] <- "M" ## important as UCSC codes MT as M
chrNums[chrNums=="23"] <- "X"
chrNums[chrNums=="24"] <- "Y"
stInfo <- geneDat2[ ,"strand"]
stInfo[stInfo==c(-1)] <- "-"
stInfo[stInfo==c(1)] <- "+"
# Calculate GC content using the union exon ranges determined by
# selecting "exons" from the gtf file above
# Convert to a genomic ranges object
gcQuery <- GRanges(paste("chr", chrNums, sep = "")
, IRanges(geneDat2[ , 2], geneDat2[ , 3]), strand = stInfo)
gcContent <- gcContentCalc(x = gcQuery, organism = Hsapiens)
# Take a length weighted average of GC content percentages to get the GC content
# for the union gene model
head(geneDat2)
geneDat2 <- cbind(geneDat2, gcContent)
geneDat2 <- cbind(geneDat2, gcContent * geneDat2[ , 6])
unionGenes <- by(geneDat2[ , 6], as.factor(geneDat2[ , 5]), sum)
unionGC <- by(geneDat2[ , 8], as.factor(geneDat2[ , 5]), sum)
geneDat3 <- cbind(unionGenes, unionGC / unionGenes)
colnames(geneDat3) <- c("UnionExonLength","UnionGCcontent")
ENSEMBLhg19.70UnionAnno <- geneDat3
## Save for further usage
save(ENSEMBLhg19.70UnionAnno, file="../../source/ENSEMBLhg19_Exon_Union_Anno.rda")
load("../../source/ENSEMBLhg19_Exon_Union_Anno.rda")
################################################################################
### Check Lengths and GC content to longest isoform for each gene id from biomart
# Pull Length and % GC content from Biomart, includes UTRs and CDS
ensemblMart <- useMart("ENSEMBL_MART_ENSEMBL", host = "www.ensembl.org")
ensemblMart <- useDataset("hsapiens_gene_ensembl", mart = ensemblMart)
martDF <- getBM(attributes = c("ensembl_gene_id", "ensembl_transcript_id"
, "transcript_length", "percentage_gc_content")
, mart = ensemblMart)
# Select longest isoform for each gene id
sel <- ave(martDF$transcript_length, martDF$ensembl_gene_id
, FUN = max) == martDF$transcript_length
martMaxDF <- martDF[sel, ]
# Merge biomart dataframe with union exon dataframe
df <- ENSEMBLhg19.70UnionAnno
row.names(df) <- gsub("\\..*", "", row.names(df))
df <- merge(df, martMaxDF, by.x = "row.names", by.y = "ensembl_gene_id")
## Plot biomart longest isoform length and gc content vs union exon
# Length
ggplot(df, aes(x = UnionExonLength, y = df$transcript_length)) +
geom_point(alpha = 0.25, shape = 1) +
xlab("Union Exon Gene Length") +
ylab("Biomart Longest Isoform Gene Length") +
ggtitle(paste0("Calc_Union_Exon_Length_and_GC.R"
,"\nCompare Union Exon Gene Length to Biomart Longest Isoform"
,"\nPearson:", round(cor(df$UnionExonLength
, df$transcript_length
, method = "pearson"), 2)
,"\nSpearman:", round(cor(df$UnionExonLength
, df$transcript_length
, method = "spearman"), 2)))
ggsave("../analysis/Calc_Union_Exon_Length_and_GC_Compare_Length_To_Biomart.pdf")
# GC
ggplot(df, aes(x = UnionGCcontent, y = df$percentage_gc_content/100)) +
geom_point(alpha = 0.25, shape = 1) +
xlab("Union Exon Gene GC Content") +
ylab("Biomart Longest Isoform GC Content") +
ggtitle(paste0("Calc_Union_Exon_Length_and_GC.R"
,"\nCompare Union Exon Gene GC Content to Biomart Longest Isoform"
,"\nPearson:", round(cor(df$UnionGCcontent
, df$percentage_gc_content/100
, method = "pearson"
, use = 'pairwise.complete.obs'), 2)
,"\nSpearman:", round(cor(df$UnionGCcontent
, df$percentage_gc_content/100
, method = "spearman"
, use = 'pairwise.complete.obs'), 2)))
ggsave("../analysis/Calc_Union_Exon_Length_and_GC_Compare_GC_To_Biomart.pdf")
################################################################################
### Calculate average length and gc for each sample
unionGenes <- data.frame(Length = ENSEMBLhg19.70UnionAnno[ ,1])
exLenDF <- merge(x = exDatDF, y = unionGenes, by.x = "X", by.y = "row.names" )
avgLength <- apply(exLenDF, 2
, function(counts) sum(as.numeric(counts) * exLenDF["Length"]) /
sum(as.numeric(counts)))
avgLength <- tail(head(avgLength, -1), -1)
avgGCdF <- merge(x = exDatDF, y = ENSEMBLhg19.70UnionAnno, by.x = "X", by.y = "row.names" )
avgGCdF <- avgGCdF[complete.cases(avgGCdF), ]
avgGC <- apply(avgGCdF, 2
, function(counts) sum(as.numeric(counts) * avgGCdF["UnionGCcontent"]) /
sum(as.numeric(counts)))
avgGC <- tail(head(avgGC, -2), -1)
save(avgLength, avgGC, file = "../analysis/tables/Avg_Gene_Length_and_GC.rda")
| /Calc_Union_Exon_Length_and_GC.R | no_license | dpolioudakis/single_cell_kriegstein_2015 | R | false | false | 9,935 | r | # Damon Polioudakis
# 2016-03-09
# Calculate gene length and GC content for exon union of each gene in list
# If GC content and exon union have already been calculated for reference,
# can skip to last section and load
################################################################################
rm(list = ls())
sessionInfo()
source("http://bioconductor.org/biocLite.R")
# biocLite("Genominator")
library(Repitools)
library(BSgenome)
# Run if BSgenome.Hsapiens.UCSC.hg19 is not installed:
# biocLite("BSgenome.Hsapiens.UCSC.hg19")
library(BSgenome.Hsapiens.UCSC.hg19)
library(GenomicRanges)
# Run if Genominator is not installed:
# biocLite("Genominator")
library(Genominator)
library(biomaRt)
library(ggplot2)
options(stringsAsFactors = FALSE)
## Load data and assign variables
exDatDF <- read.csv("../data/htseq/Exprs_HTSCexon.csv")
# Get Gencode 18 gtf file - this was cleaned by selecting only the columns
# containing the word "exon" and with the relevant information - chrnum,
# feature type, strand, start, end, and ensg and ense IDs separated by a semicolon
gtfInfoDF <- read.table("../../source/gencode.v19.annotation.gtf", sep = "\t")
################################################################################
### Format and Filter
# Keep only the exon level features
keep <- gtfInfoDF[ , 3]=="exon"
gtfInfoDF <- gtfInfoDF[keep, ]
# Split the semicolon separated information
geneExonInfo <- unlist(strsplit(gtfInfoDF[ , 9], "[;]"))
# Finding which has gene_id for ensembl ID
genCol<- which(regexpr("gene_id ", geneExonInfo) > 0)
getSeq <- geneExonInfo[genCol]
ENSGID <- substr(getSeq, 9, 100)
length(unique(ENSGID)) #57912
transCol <- which(regexpr("transcript_id ", geneExonInfo) > 0)
tranSeq <- geneExonInfo[transCol]
ENSEID <- substr(tranSeq, 16, 100)
length(unique(ENSEID)) #196612
gtfInfoDF <- cbind(gtfInfoDF[ , c(1:8)], ENSGID, ENSEID)
geneCol <- which(regexpr("gene_name ", geneExonInfo) > 0)
geneSeq <- geneExonInfo[geneCol]
GENEID <- substr(geneSeq, 12, 100)
length(unique(GENEID)) #55763
gtfInfoDF <- cbind(gtfInfoDF[ , c(1:8)], ENSGID, ENSEID)
# 6 and 8 columns are blank - remove
gtfInfoDF <- gtfInfoDF[ , -c(6, 8)]
######## From Viveks script - I think this incorrectly only keeps 1 exon per gene
# ## Keep only one copy of each ENSEID - the gtf file records one copy for each transcript id
# keep <- match(unique(ENSEID),ENSEID)
# gtfInfo1dF <- gtfInfoDF[keep,]
# ##gtfInfoDF[,1] <- substr(gtfInfoDF[,1],4,10) ## 672406 exons is exactly what biomaRt contains
gtfInfo1dF <- gtfInfoDF
################################################################################
### Recode things for the Genominator package
# Using as.factor to coerce chromosome names can really botch things up.. beware!
# So go ahead and convert MT, X, and Y to numbers throughout, unless necessary
# for other purposes
chrnums <- gtfInfo1dF[ ,1]
chrnums[chrnums=="MT"] <- "25"
chrnums[chrnums=="X"] <- "23"
chrnums[chrnums=="Y"] <- "24"
# If there are no chr to remove this does not work
## removing Non-annotated(NT) chromosomes
# rmChR.col1 <- which(regexpr("HG", chrnums) > 0)
# rmChR.col2 <- which(regexpr("GL", chrnums) > 0)
# rmChR.col3 <- which(regexpr("HS", chrnums) > 0)
# rmChR.col <- c(rmChR.col1, rmChR.col2, rmChR.col3)
# gtfInfo1dF <- gtfInfo1dF[-rmChR.col, ]
# chrnums <- chrnums[-rmChR.col]
gtfInfo1dF[ ,1] <- chrnums ## Check here
gtfInfoDF <- gtfInfo1dF
strinfo <- gtfInfoDF[ ,6]
strinfo[strinfo=="+"] <- 1L
strinfo[strinfo=="-"] <- -1L
gtfInfoDF[ ,6] <- strinfo
# chr integer, strand integer (-1L,0L,1L), start integer, end integer, ensg
# and transcript id
geneDat1 <- gtfInfoDF[ ,c(1 ,6 ,4 ,5 ,7 ,8)]
geneDat1 <- data.frame(as.numeric(chrnums)
, as.numeric(geneDat1[ ,2])
, as.numeric(geneDat1[ ,3])
, as.numeric(geneDat1[ ,4])
, geneDat1[ ,5]
, geneDat1[ ,6])
names(geneDat1) <- c("chr","strand","start","end","ensembl_gene_id","ensembl_exon_id")
geneDatX <- geneDat1[order(geneDat1[ ,1], geneDat1[ ,3]), ]
# Remove NAs from ERCC chromosomes
geneDatX <- geneDatX[complete.cases(geneDatX), ]
# Have genominator check if this is a valid data object
validAnnotation(geneDatX)
# Should take a few minutes !!!!
geneDatX <- makeGeneRepresentation(annoData = geneDat1
, type = "Ugene"
, gene.id = "ensembl_gene_id"
, transcript.id = "ensembl_exon_id"
, verbose = TRUE)
save(geneDatX, file = "../../source/Genominator_Union_Exon_Models_ENSEMBLhg19.rda")
load(file = "../../source/Genominator_Union_Exon_Models_ENSEMBLhg19.rda")
################################################################################
### Now use the genominator output to calculate GC content - Use mac laptop
geneDat2 <- cbind(geneDatX, geneDatX[ , 3] - geneDatX[ , 2])
geneDat2 <- geneDat2[order(geneDat2[ , 5]) , ]
# Change formatting again
chrNums <- geneDat2[ ,"chr"]
chrNums[chrNums=="25"] <- "M" ## important as UCSC codes MT as M
chrNums[chrNums=="23"] <- "X"
chrNums[chrNums=="24"] <- "Y"
stInfo <- geneDat2[ ,"strand"]
stInfo[stInfo==c(-1)] <- "-"
stInfo[stInfo==c(1)] <- "+"
# Calculate GC content using the union exon ranges determined by
# selecting "exons" from the gtf file above
# Convert to a genomic ranges object
gcQuery <- GRanges(paste("chr", chrNums, sep = "")
, IRanges(geneDat2[ , 2], geneDat2[ , 3]), strand = stInfo)
gcContent <- gcContentCalc(x = gcQuery, organism = Hsapiens)
# Take a length weighted average of GC content percentages to get the GC content
# for the union gene model
head(geneDat2)
geneDat2 <- cbind(geneDat2, gcContent)
geneDat2 <- cbind(geneDat2, gcContent * geneDat2[ , 6])
unionGenes <- by(geneDat2[ , 6], as.factor(geneDat2[ , 5]), sum)
unionGC <- by(geneDat2[ , 8], as.factor(geneDat2[ , 5]), sum)
geneDat3 <- cbind(unionGenes, unionGC / unionGenes)
colnames(geneDat3) <- c("UnionExonLength","UnionGCcontent")
ENSEMBLhg19.70UnionAnno <- geneDat3
## Save for further usage
save(ENSEMBLhg19.70UnionAnno, file="../../source/ENSEMBLhg19_Exon_Union_Anno.rda")
load("../../source/ENSEMBLhg19_Exon_Union_Anno.rda")
################################################################################
### Check Lengths and GC content to longest isoform for each gene id from biomart
# Pull Length and % GC content from Biomart, includes UTRs and CDS
ensemblMart <- useMart("ENSEMBL_MART_ENSEMBL", host = "www.ensembl.org")
ensemblMart <- useDataset("hsapiens_gene_ensembl", mart = ensemblMart)
martDF <- getBM(attributes = c("ensembl_gene_id", "ensembl_transcript_id"
, "transcript_length", "percentage_gc_content")
, mart = ensemblMart)
# Select longest isoform for each gene id
sel <- ave(martDF$transcript_length, martDF$ensembl_gene_id
, FUN = max) == martDF$transcript_length
martMaxDF <- martDF[sel, ]
# Merge biomart dataframe with union exon dataframe
df <- ENSEMBLhg19.70UnionAnno
row.names(df) <- gsub("\\..*", "", row.names(df))
df <- merge(df, martMaxDF, by.x = "row.names", by.y = "ensembl_gene_id")
## Plot biomart longest isoform length and gc content vs union exon
# Length
ggplot(df, aes(x = UnionExonLength, y = df$transcript_length)) +
geom_point(alpha = 0.25, shape = 1) +
xlab("Union Exon Gene Length") +
ylab("Biomart Longest Isoform Gene Length") +
ggtitle(paste0("Calc_Union_Exon_Length_and_GC.R"
,"\nCompare Union Exon Gene Length to Biomart Longest Isoform"
,"\nPearson:", round(cor(df$UnionExonLength
, df$transcript_length
, method = "pearson"), 2)
,"\nSpearman:", round(cor(df$UnionExonLength
, df$transcript_length
, method = "spearman"), 2)))
ggsave("../analysis/Calc_Union_Exon_Length_and_GC_Compare_Length_To_Biomart.pdf")
# GC
ggplot(df, aes(x = UnionGCcontent, y = df$percentage_gc_content/100)) +
geom_point(alpha = 0.25, shape = 1) +
xlab("Union Exon Gene GC Content") +
ylab("Biomart Longest Isoform GC Content") +
ggtitle(paste0("Calc_Union_Exon_Length_and_GC.R"
,"\nCompare Union Exon Gene GC Content to Biomart Longest Isoform"
,"\nPearson:", round(cor(df$UnionGCcontent
, df$percentage_gc_content/100
, method = "pearson"
, use = 'pairwise.complete.obs'), 2)
,"\nSpearman:", round(cor(df$UnionGCcontent
, df$percentage_gc_content/100
, method = "spearman"
, use = 'pairwise.complete.obs'), 2)))
ggsave("../analysis/Calc_Union_Exon_Length_and_GC_Compare_GC_To_Biomart.pdf")
################################################################################
### Calculate average length and gc for each sample
unionGenes <- data.frame(Length = ENSEMBLhg19.70UnionAnno[ ,1])
exLenDF <- merge(x = exDatDF, y = unionGenes, by.x = "X", by.y = "row.names" )
avgLength <- apply(exLenDF, 2
, function(counts) sum(as.numeric(counts) * exLenDF["Length"]) /
sum(as.numeric(counts)))
avgLength <- tail(head(avgLength, -1), -1)
avgGCdF <- merge(x = exDatDF, y = ENSEMBLhg19.70UnionAnno, by.x = "X", by.y = "row.names" )
avgGCdF <- avgGCdF[complete.cases(avgGCdF), ]
avgGC <- apply(avgGCdF, 2
, function(counts) sum(as.numeric(counts) * avgGCdF["UnionGCcontent"]) /
sum(as.numeric(counts)))
avgGC <- tail(head(avgGC, -2), -1)
save(avgLength, avgGC, file = "../analysis/tables/Avg_Gene_Length_and_GC.rda")
|
#' Get daytime VPD for all files in a directory.
#'
#' Wrapper function to derive daily daytime VPD from half-hourly data
#' for all site-scale data files in a given directory (argument \code{dir}).
#' filtering hours where the shortwave incoming radiation (SW_IN_F) is
#' zero and aggregating taking the mean across remaining hours per day.
#'
#' @param dir A character string specifying the directory in which to look
#' for site-specific half-hourly data files.
#'
#' @return A list of outputs of the function \link{get_vpd_day_fluxnet2015_byfile}.
#' @export
#'
#' @examples
#' \dontrun{
#' df <- get_vpd_day_fluxnet2015("./")
#' }
#'
get_vpd_day_fluxnet2015 <- function(dir){
# loop over all HH files in the directory 'dir'
out <- purrr::map( as.list(list.files(dir, pattern = "HH")),
~get_vpd_day_fluxnet2015_byfile(paste0(dir, .)))
return(out)
}
#' Get daytime VPD
#'
#' Derive daily daytime VPD (vapour pressure deficit) from half-hourly
#' data filtering hours where the shortwave incoming radiation (SW_IN_F)
#' is greater than zero and aggregating taking the mean across remaining
#' hours per day.
#'
#' @param filename_hh A character string specifying the file name containing
#' site-specific half-hourly data.
#' @param write A logical specifying whether daiily daytime VPD should be
#' written to a file.
#'
#' @return A data frame (tibble) containing daily daytime VPD.
#' @export
#'
#' @examples
#' \dontrun{
#' df <- get_vpd_day_fluxnet2015_byfile(
#' "./FLX_BE-Vie_FLUXNET2015_FULLSET_HH_1996-2014_1-3.csv"
#' )
#' }
#'
#'
get_vpd_day_fluxnet2015_byfile <- function(filename_hh, write=FALSE){
# CRAN compliance, define variables
TIMESTAMP_START <- TIMESTAMP_END <- date_start <- date_day <- TA_F <-
TA_F_MDS <- TA_F_QC <- TA_F_MDS_QC <- TA_ERA <- VPD_F_MDS <-
SW_IN_F <- VPD_F <- VPD_F_QC <- VPD_F_MDS_QC <- VPD_ERA <- NULL
filename_dd_vpd <- filename_hh %>%
stringr::str_replace("HH", "DD") %>%
stringr::str_replace(".csv", "_VPD_DAY.csv")
if (file.exists(filename_dd_vpd)){
# Daytime VPD file is already available, reading from file
# print(paste("Reading daytime VPD from:", filename_dd_vpd))
message(paste("Reading file with calculated daytime VPD:", filename_dd_vpd))
df <- readr::read_csv(filename_dd_vpd)
} else {
# Get daytime VPD from half-hourly data
# read half-hourly data
if (!file.exists(filename_hh)){
stop(paste("Half-hourly file does not exist:", filename_hh))
}
df <- readr::read_csv(filename_hh) %>%
dplyr::mutate( date_start = lubridate::ymd_hm( TIMESTAMP_START ),
date_end = lubridate::ymd_hm( TIMESTAMP_END ) ) %>%
dplyr::mutate( date = date_start ) %>%
# retain only daytime data = when incoming shortwave radiation is positive
dplyr::filter(SW_IN_F > 0) %>%
# take mean over daytime values
dplyr::mutate(date_day = lubridate::as_date(date_start)) %>%
dplyr::group_by(date_day) %>%
dplyr::summarise(VPD_F_DAY = mean(VPD_F, na.rm=TRUE),
VPD_F_DAY_QC = sum(is.element(VPD_F_QC, c(0,1)))/n(),
VPD_F_DAY_MDS = mean(VPD_F_MDS, na.rm=TRUE),
VPD_F_DAY_MDS_QC = sum(is.element(VPD_F_MDS_QC, c(0,1)))/n(),
VPD_DAY_ERA = mean(VPD_ERA, na.rm=TRUE) ) %>%
dplyr::rename(date = date_day)
# write to csv file
if (write){
message(paste("Writing file with daytime VPD as:", filename_dd_vpd))
readr::write_csv(df, path = filename_dd_vpd)
}
}
return(df)
}
| /R/get_vpd_day_fluxnet2015.R | no_license | geco-bern/ingestr | R | false | false | 3,643 | r | #' Get daytime VPD for all files in a directory.
#'
#' Wrapper function to derive daily daytime VPD from half-hourly data
#' for all site-scale data files in a given directory (argument \code{dir}).
#' filtering hours where the shortwave incoming radiation (SW_IN_F) is
#' zero and aggregating taking the mean across remaining hours per day.
#'
#' @param dir A character string specifying the directory in which to look
#' for site-specific half-hourly data files.
#'
#' @return A list of outputs of the function \link{get_vpd_day_fluxnet2015_byfile}.
#' @export
#'
#' @examples
#' \dontrun{
#' df <- get_vpd_day_fluxnet2015("./")
#' }
#'
get_vpd_day_fluxnet2015 <- function(dir){
# loop over all HH files in the directory 'dir'
out <- purrr::map( as.list(list.files(dir, pattern = "HH")),
~get_vpd_day_fluxnet2015_byfile(paste0(dir, .)))
return(out)
}
#' Get daytime VPD
#'
#' Derive daily daytime VPD (vapour pressure deficit) from half-hourly
#' data filtering hours where the shortwave incoming radiation (SW_IN_F)
#' is greater than zero and aggregating taking the mean across remaining
#' hours per day.
#'
#' @param filename_hh A character string specifying the file name containing
#' site-specific half-hourly data.
#' @param write A logical specifying whether daiily daytime VPD should be
#' written to a file.
#'
#' @return A data frame (tibble) containing daily daytime VPD.
#' @export
#'
#' @examples
#' \dontrun{
#' df <- get_vpd_day_fluxnet2015_byfile(
#' "./FLX_BE-Vie_FLUXNET2015_FULLSET_HH_1996-2014_1-3.csv"
#' )
#' }
#'
#'
get_vpd_day_fluxnet2015_byfile <- function(filename_hh, write=FALSE){
# CRAN compliance, define variables
TIMESTAMP_START <- TIMESTAMP_END <- date_start <- date_day <- TA_F <-
TA_F_MDS <- TA_F_QC <- TA_F_MDS_QC <- TA_ERA <- VPD_F_MDS <-
SW_IN_F <- VPD_F <- VPD_F_QC <- VPD_F_MDS_QC <- VPD_ERA <- NULL
filename_dd_vpd <- filename_hh %>%
stringr::str_replace("HH", "DD") %>%
stringr::str_replace(".csv", "_VPD_DAY.csv")
if (file.exists(filename_dd_vpd)){
# Daytime VPD file is already available, reading from file
# print(paste("Reading daytime VPD from:", filename_dd_vpd))
message(paste("Reading file with calculated daytime VPD:", filename_dd_vpd))
df <- readr::read_csv(filename_dd_vpd)
} else {
# Get daytime VPD from half-hourly data
# read half-hourly data
if (!file.exists(filename_hh)){
stop(paste("Half-hourly file does not exist:", filename_hh))
}
df <- readr::read_csv(filename_hh) %>%
dplyr::mutate( date_start = lubridate::ymd_hm( TIMESTAMP_START ),
date_end = lubridate::ymd_hm( TIMESTAMP_END ) ) %>%
dplyr::mutate( date = date_start ) %>%
# retain only daytime data = when incoming shortwave radiation is positive
dplyr::filter(SW_IN_F > 0) %>%
# take mean over daytime values
dplyr::mutate(date_day = lubridate::as_date(date_start)) %>%
dplyr::group_by(date_day) %>%
dplyr::summarise(VPD_F_DAY = mean(VPD_F, na.rm=TRUE),
VPD_F_DAY_QC = sum(is.element(VPD_F_QC, c(0,1)))/n(),
VPD_F_DAY_MDS = mean(VPD_F_MDS, na.rm=TRUE),
VPD_F_DAY_MDS_QC = sum(is.element(VPD_F_MDS_QC, c(0,1)))/n(),
VPD_DAY_ERA = mean(VPD_ERA, na.rm=TRUE) ) %>%
dplyr::rename(date = date_day)
# write to csv file
if (write){
message(paste("Writing file with daytime VPD as:", filename_dd_vpd))
readr::write_csv(df, path = filename_dd_vpd)
}
}
return(df)
}
|
library(tidyverse)
library(stringr)
library(lubridate)
library(rnaturalearth)
coast = ne_coastline() %>% fortify %>% as_data_frame
#' Read a variable from greb model output
#'
#' @param file filename
#' @param tstamps time stamps of the outputs
#' @param varname variable name
#' @param ivar variable index if there are several variables
#' @param nvar number of variables in output file
#' @param nlon number of longitudes
#' @param nlat number of latitudes
#' @param nbyte number of bytes per datum
#' @return a tidy data_frame with columns time, lon, lat, and the requested variable
#'
#' @details
#' 2d fields are saved as arrays in row-major order, so we use
#' `expand.grid(lon, lat)` instead of `(lat, lon)`.
#'
#' In the default greb output, 50 years of monthly mean fields for 5 variables
#' (Tsurf, Tair, Tocn, qatm, albedo) are written in order [year, month,
#' variable, lon, lat]. Each saved number uses 4 bytes of memory. So output
#' fields are written in chunks of size (4*n_grid) bytes. For example, albedo is
#' the 5th variable, so the first albedo field (for year 1, month 1) starts at
#' position (n_grid * 4), the second albedo field (year 1, month 2) starts at
#' (n_grid * 4 + 5 * n_grid * 4), and so on. So we jump 4 * 4 * n_grid bytes
#' using seek(), and then read 4 * n_grid bytes using readBin(). The data is
#' iteratively appended to a list which avoids making deep copies, as opposed to
#' appending to a vector.
#
read_greb = function(file, tstamps,
varname=str_c('variable_',ivar),
ivar=1L, nvar=5L,
nlon=96, nlat=48, nbyte=4) {
# sanity checks
stopifnot(file.exists(file))
stopifnot(file.size(file) == nlon * nlat * nvar * nbyte * length(tstamps))
stopifnot(length(ivar) == length(varname))
stopifnot(all(ivar <= nvar))
# latitude/longitude grid (longitude varies fastest)
ngrid = nlon * nlat
dlon = 360 / nlon
dlat = 180 / nlat
lon = seq(dlon/2, 360-dlon/2, len=nlon)
lat = seq(-90+dlat/2, 90-dlat/2, len=nlat)
lonlat = expand.grid(lon=lon, lat=lat) %>% as_data_frame
ntime = length(tstamps)
# initialise data frame with time, lon, lat
out_df = bind_cols(
data_frame(time = tstamps) %>% slice(rep(1:ntime, each=ngrid)),
lonlat %>% slice(rep(1:ngrid, ntime))
) %>% wrap_lon(how='-180_180')
# read requested data from file
con = file(file, open='rb')
for (jj in seq_along(ivar)) {
ivar_ = ivar[jj]
out = list()
nout = 0
for (ii in seq_len(ntime)) {
seek(con, where = nbyte * ngrid * ((ii-1) * nvar + (ivar_ - 1)))
out[nout + 1:ngrid] = readBin(con=con, what=numeric(), n=ngrid, size=nbyte)
nout = nout + ngrid
}
out = list(unlist(out)) %>% setNames(varname[jj]) %>% as_data_frame
out_df = bind_cols(out_df, out)
}
close(con)
return(out_df)
}
#' Transform longitudes between ranges [0,360] and [-180, 180]
#'
#' @param df data frame with column `lon` or `long`
#' @param how character to which range to transform ('0_360' or '-180_180')
#' @return the original data frame with transformed longitude column
wrap_lon = function(df, how=c('0_360', '-180_180')) {
how = match.arg(how)
is_long = FALSE
if ('long' %in% names(df)) {
names(df) = names(df) %>% str_replace('^long$', 'lon')
is_long = TRUE
}
if (how == '0_360') {
df = df %>% mutate(lon = ifelse(lon < 0, lon + 360, lon))
}
if (how == '-180_180') {
df = df %>% mutate(lon = ifelse(lon > 180, lon - 360, lon))
}
if (is_long) {
names(df) = names(df) %>% str_replace('^lon$', 'long')
}
return(df)
}
| /R/functions.R | no_license | pslota/greb-climate-model | R | false | false | 3,646 | r | library(tidyverse)
library(stringr)
library(lubridate)
library(rnaturalearth)
coast = ne_coastline() %>% fortify %>% as_data_frame
#' Read a variable from greb model output
#'
#' @param file filename
#' @param tstamps time stamps of the outputs
#' @param varname variable name
#' @param ivar variable index if there are several variables
#' @param nvar number of variables in output file
#' @param nlon number of longitudes
#' @param nlat number of latitudes
#' @param nbyte number of bytes per datum
#' @return a tidy data_frame with columns time, lon, lat, and the requested variable
#'
#' @details
#' 2d fields are saved as arrays in row-major order, so we use
#' `expand.grid(lon, lat)` instead of `(lat, lon)`.
#'
#' In the default greb output, 50 years of monthly mean fields for 5 variables
#' (Tsurf, Tair, Tocn, qatm, albedo) are written in order [year, month,
#' variable, lon, lat]. Each saved number uses 4 bytes of memory. So output
#' fields are written in chunks of size (4*n_grid) bytes. For example, albedo is
#' the 5th variable, so the first albedo field (for year 1, month 1) starts at
#' position (n_grid * 4), the second albedo field (year 1, month 2) starts at
#' (n_grid * 4 + 5 * n_grid * 4), and so on. So we jump 4 * 4 * n_grid bytes
#' using seek(), and then read 4 * n_grid bytes using readBin(). The data is
#' iteratively appended to a list which avoids making deep copies, as opposed to
#' appending to a vector.
#
read_greb = function(file, tstamps,
varname=str_c('variable_',ivar),
ivar=1L, nvar=5L,
nlon=96, nlat=48, nbyte=4) {
# sanity checks
stopifnot(file.exists(file))
stopifnot(file.size(file) == nlon * nlat * nvar * nbyte * length(tstamps))
stopifnot(length(ivar) == length(varname))
stopifnot(all(ivar <= nvar))
# latitude/longitude grid (longitude varies fastest)
ngrid = nlon * nlat
dlon = 360 / nlon
dlat = 180 / nlat
lon = seq(dlon/2, 360-dlon/2, len=nlon)
lat = seq(-90+dlat/2, 90-dlat/2, len=nlat)
lonlat = expand.grid(lon=lon, lat=lat) %>% as_data_frame
ntime = length(tstamps)
# initialise data frame with time, lon, lat
out_df = bind_cols(
data_frame(time = tstamps) %>% slice(rep(1:ntime, each=ngrid)),
lonlat %>% slice(rep(1:ngrid, ntime))
) %>% wrap_lon(how='-180_180')
# read requested data from file
con = file(file, open='rb')
for (jj in seq_along(ivar)) {
ivar_ = ivar[jj]
out = list()
nout = 0
for (ii in seq_len(ntime)) {
seek(con, where = nbyte * ngrid * ((ii-1) * nvar + (ivar_ - 1)))
out[nout + 1:ngrid] = readBin(con=con, what=numeric(), n=ngrid, size=nbyte)
nout = nout + ngrid
}
out = list(unlist(out)) %>% setNames(varname[jj]) %>% as_data_frame
out_df = bind_cols(out_df, out)
}
close(con)
return(out_df)
}
#' Transform longitudes between ranges [0,360] and [-180, 180]
#'
#' @param df data frame with column `lon` or `long`
#' @param how character to which range to transform ('0_360' or '-180_180')
#' @return the original data frame with transformed longitude column
wrap_lon = function(df, how=c('0_360', '-180_180')) {
how = match.arg(how)
is_long = FALSE
if ('long' %in% names(df)) {
names(df) = names(df) %>% str_replace('^long$', 'lon')
is_long = TRUE
}
if (how == '0_360') {
df = df %>% mutate(lon = ifelse(lon < 0, lon + 360, lon))
}
if (how == '-180_180') {
df = df %>% mutate(lon = ifelse(lon > 180, lon - 360, lon))
}
if (is_long) {
names(df) = names(df) %>% str_replace('^lon$', 'long')
}
return(df)
}
|
# conversion to class 'cross'
# return F2 intercross
gpData2cross <- function(gpData,...){
# check for class
if(class(gpData)!="gpData") stop("object '",substitute(gpData),"' not of class 'gpData'")
# check on geno and map
if(is.null(gpData$geno) | is.null(gpData$map)) stop("'geno' and 'map' needed in",substitute(gpData))
if(dim(gpData$pheno)[3] > 1) stop("You can only use unreplicated values for cross!")
else{
# use codeGeno if not yet done
if(!gpData$info$codeGeno) stop("Use function codeGeno before gpData2cross!")
# only use individuals with genotypes and phenotypes
genoPheno <- gpData$covar$id[gpData$covar$genotyped & gpData$covar$phenotyped]
# read information from gpData
geno <- data.frame(gpData$geno[rownames(gpData$geno) %in% genoPheno,])
phenoDim <- dim(gpData$pheno)
phenoNames <- dimnames(gpData$pheno)
phenoDim[1] <- sum(dimnames(gpData$pheno)[[1]] %in% genoPheno)
phenoNames[[1]] <- dimnames(gpData$pheno)[[1]][dimnames(gpData$pheno)[[1]] %in% genoPheno]
pheno <- gpData$pheno[dimnames(gpData$pheno)[[1]] %in% genoPheno, ,]
pheno <- array(pheno, dim=phenoDim)
dimnames(pheno) <- phenoNames
pheno <- apply(pheno, 2, rbind) # possible because of unreplicated data!!!
rownames(pheno) <- phenoNames[[1]]
if(dim(gpData$pheno)[3]>1) pheno$repl <- rep(1:dim(gpData$pheno)[3], each=dim(gpData$pheno)[1])
if(!is.null(gpData$phenoCovars)) pheno <- cbind(pheno, data.frame(apply(gpData$phenoCovars[dimnames(gpData$phenoCovars)[[1]] %in% genoPheno, ,], 2, rbind)))
map <- gpData$map
n <- nrow(geno)
pheno <- as.data.frame(pheno)
}
# split markers (+pos) and genotypes on chromosomes
genoList <- split(cbind(rownames(map),map$pos,t(geno)),map$chr)
# result is a list
# function to bring each list element in right format
addData <- function(x){
ret <- list()
Nm <- length(x)/(n+2)
# elements of x:
# 1:Nm: marker names
# (Nm+1):(2*Nm): marker positions
# rest: genotypes as vector
# add 1 to genotypes
# coding for F2 intercross: AA=1, AB=2, BB=3
ret[["data"]] <- matrix(as.numeric(x[-(1:(2*Nm))])+1,nrow=n,ncol=Nm,byrow=TRUE,dimnames=list(NULL,x[1:Nm]))
ret[["map"]] <- as.numeric(x[(Nm+1):(2*Nm)])
names(ret[["map"]]) <- x[1:Nm]
# this may have to be changed
class(ret) <- "A"
ret
}
#
# apply function to each list element
genoList <- lapply(genoList,addData)
# create object 'cross'
cross <- list(geno=genoList,pheno=pheno)
class(cross) <- c("f2","cross")
cross
}
| /R/gpData2cross.r | no_license | cran/synbreed | R | false | false | 2,913 | r | # conversion to class 'cross'
# return F2 intercross
gpData2cross <- function(gpData,...){
# check for class
if(class(gpData)!="gpData") stop("object '",substitute(gpData),"' not of class 'gpData'")
# check on geno and map
if(is.null(gpData$geno) | is.null(gpData$map)) stop("'geno' and 'map' needed in",substitute(gpData))
if(dim(gpData$pheno)[3] > 1) stop("You can only use unreplicated values for cross!")
else{
# use codeGeno if not yet done
if(!gpData$info$codeGeno) stop("Use function codeGeno before gpData2cross!")
# only use individuals with genotypes and phenotypes
genoPheno <- gpData$covar$id[gpData$covar$genotyped & gpData$covar$phenotyped]
# read information from gpData
geno <- data.frame(gpData$geno[rownames(gpData$geno) %in% genoPheno,])
phenoDim <- dim(gpData$pheno)
phenoNames <- dimnames(gpData$pheno)
phenoDim[1] <- sum(dimnames(gpData$pheno)[[1]] %in% genoPheno)
phenoNames[[1]] <- dimnames(gpData$pheno)[[1]][dimnames(gpData$pheno)[[1]] %in% genoPheno]
pheno <- gpData$pheno[dimnames(gpData$pheno)[[1]] %in% genoPheno, ,]
pheno <- array(pheno, dim=phenoDim)
dimnames(pheno) <- phenoNames
pheno <- apply(pheno, 2, rbind) # possible because of unreplicated data!!!
rownames(pheno) <- phenoNames[[1]]
if(dim(gpData$pheno)[3]>1) pheno$repl <- rep(1:dim(gpData$pheno)[3], each=dim(gpData$pheno)[1])
if(!is.null(gpData$phenoCovars)) pheno <- cbind(pheno, data.frame(apply(gpData$phenoCovars[dimnames(gpData$phenoCovars)[[1]] %in% genoPheno, ,], 2, rbind)))
map <- gpData$map
n <- nrow(geno)
pheno <- as.data.frame(pheno)
}
# split markers (+pos) and genotypes on chromosomes
genoList <- split(cbind(rownames(map),map$pos,t(geno)),map$chr)
# result is a list
# function to bring each list element in right format
addData <- function(x){
ret <- list()
Nm <- length(x)/(n+2)
# elements of x:
# 1:Nm: marker names
# (Nm+1):(2*Nm): marker positions
# rest: genotypes as vector
# add 1 to genotypes
# coding for F2 intercross: AA=1, AB=2, BB=3
ret[["data"]] <- matrix(as.numeric(x[-(1:(2*Nm))])+1,nrow=n,ncol=Nm,byrow=TRUE,dimnames=list(NULL,x[1:Nm]))
ret[["map"]] <- as.numeric(x[(Nm+1):(2*Nm)])
names(ret[["map"]]) <- x[1:Nm]
# this may have to be changed
class(ret) <- "A"
ret
}
#
# apply function to each list element
genoList <- lapply(genoList,addData)
# create object 'cross'
cross <- list(geno=genoList,pheno=pheno)
class(cross) <- c("f2","cross")
cross
}
|
library(shiny)
library(gapminder)
# Define Server
shinyServer(function(input, output) {
output$MiHistograma <- renderPlot({
lf <- gapminder$lifeExp
intervalos <- seq(min(lf), max(lf), length.out = input$intervalos + 1)
hist(lf, breaks = intervalos, col = 'darkorchid3', border = 'white',
xlab = 'Esperanza de vida al nacer (Años)', ylab = 'Frecuencia', main = 'Mi primer Histograma')
})
}) | /Meetup4-TallerShiny-master/Histograma/server.R | no_license | mecomontes/R-Programming-for-Data-Science | R | false | false | 416 | r | library(shiny)
library(gapminder)
# Define Server
shinyServer(function(input, output) {
output$MiHistograma <- renderPlot({
lf <- gapminder$lifeExp
intervalos <- seq(min(lf), max(lf), length.out = input$intervalos + 1)
hist(lf, breaks = intervalos, col = 'darkorchid3', border = 'white',
xlab = 'Esperanza de vida al nacer (Años)', ylab = 'Frecuencia', main = 'Mi primer Histograma')
})
}) |
#' Agreement of extrapolative areas of MOP layers
#'
#' @description kuenm_mopagree calculates raster layers that represent the agreement of strict
#' extrapolative areas among two or more climate models of an emission scenario in a
#' given time period. Various emission scenarios and time periods can be processed.
#'
#' @param mop.dir (character) name of the folder in which MOP results are (e.g., the output
#' folder after using the \code{\link{kuenm_mmop}}) function.
#' @param in.format (character) format of model raster files. Options are "ascii", "GTiff", and "EHdr" = bil.
#' @param out.format (character) format of layers to be written in \code{out.dir}. Options are "ascii", "GTiff",
#' and "EHdr" = bil. Default = "GTiff".
#' @param current (character) if exist, pattern to look for when defining which is the scenario of current
#' projection to be excluded from calculations. If not defined, no current projection is assumed.
#' @param time.periods (character or numeric) pattern to be searched when identifying MOP layers for
#' distinct time projections. If not defined it is assumed that only one time period was considered.
#' @param emi.scenarios (character) pattern to be searched for identifying distinct emission
#' scenarios (e.g., RCP). If not defined it is asumed that only one emission scenario was used.
#' @param out.dir (character) name of the output directory to be created in which subdirectories
#' containing raster layers of strict extrapolative areas agreement will be written. Default = "MOP_agremment".
#'
#' @return Folders named as the set or sets of variables used to perform the MOP, containing raster layers in format
#' \code{out.format} that represent agreement of strict strapolative areas for each emission scenario
#' in a each time period. Folders will be written inside \code{out.dir}.
#'
#' @details
#' Users must be specific when defining the patterns that the function will search for. This patterns
#' must be part of the mop layer names so the function can locate each file without problems.
#' This function uses this system of work to avoid high demands of RAM while perfomring these analyses.
#'
#' @export
#'
#' @examples
#' # MOP layers must be already created before using this function.
#'
#' # Arguments
#' mop_dir <- "MOP_results"
#' format <- "GTiff"
#' curr <- "current"
#' time_periods <- 2050
#' emi_scenarios <- c("RCP4.5", "RCP8.5")
#' out_dir <- "MOP_agremment"
#'
#' kuenm_mopagree(mop.dir = mop_dir, in.format = format, out.format = format,
#' current = curr, time.periods = time_periods,
#' emi.scenarios = emi_scenarios, out.dir = out_dir)
kuenm_mopagree <- function(mop.dir, in.format, out.format = "GTiff", current,
time.periods, emi.scenarios, out.dir = "MOP_agremment") {
# testing for potential errors and preparing data
cat("Preparing data for starting analyses, please wait...\n")
if (!dir.exists(mop.dir)) {
stop(paste(mop.dir, "does not exist in the working directory, check folder name",
"\nor its existence."))
}
if (length(list.dirs(mop.dir, recursive = FALSE)) == 0) {
stop(paste(mop.dir, "does not contain any subdirectory named as sets of projection variables;",
"\neach subdirectory inside", mop.dir, "must containg at least one mop raster layer."))
}
if (missing(current)) {
cat("Argument current is not defined, no current projection will be assumed.\n")
}
if (missing(time.periods)) {
cat("Argument time.periods is not defined, only one time projection will be assumed.\n")
}
if (missing(emi.scenarios)) {
cat("Argument emi.scenarios is not defined, only one emission scenario will be assumed.\n")
}
# defining formats
if (in.format == "ascii") {
format <- ".asc$"
}
if (in.format == "GTiff") {
format <- ".tif$"
}
if (in.format == "EHdr") {
format <- ".bil$"
}
if (out.format == "ascii") {
format1 <- ".asc"
}
if (out.format == "GTiff") {
format1 <- ".tif"
}
if (out.format == "EHdr") {
format1 <- ".bil"
}
# Reading mop names
nstas <- list.files(mop.dir, pattern = paste0("MOP.*", format),
full.names = TRUE, recursive = TRUE)
mopn <- list.files(mop.dir, pattern = paste0("MOP.*", format), recursive = TRUE)
mopin <- unique(gsub("%.*", "%", mopn))
mopin <- unique(gsub("^.*/", "", mopin))
# Folder for all outputs
dir.create(out.dir)
# Folders for sets
sets <- dir(mop.dir)
sets <- sets[sets != "Result_description (kuenm_mmop).txt"]
set_dirs <- paste0(out.dir, "/", sets)
for (i in 1:length(sets)) {
# Separating by sets
ecl <- paste0(".*/", sets[i], "/.*")
ecla <- gregexpr(ecl, nstas)
eclam <- regmatches(nstas, ecla)
setses <- unlist(eclam)
# Folders per each set
dir.create(set_dirs[i])
# Copying current if exists
if (!missing(current)) {
cu <- paste0(".*", current, ".*")
cur <- gregexpr(cu, setses)
curr <- regmatches(setses, cur)
curre <- unlist(curr)
to_cur <- paste0(set_dirs[i], "/", mopin, "_",
gsub(paste0(".*", current), current, curre))
file.copy(from = curre, to = to_cur)
}
# Time periods
if (missing(time.periods)) {
time.periods <- ""
timep <- 1
septi <- ""
}else {
timep <- time.periods
septi <- "_"
}
for (j in 1:length(time.periods)) {
# Separating by times if exist
tp <- paste0(".*", time.periods[j], ".*")
tpe <- gregexpr(tp, setses)
tper <- regmatches(setses, tpe)
tperi <- unlist(tper)
## Separating by scenarios if exist
if (missing(emi.scenarios)) {
emi.scenarios <- ""
sepem <- ""
} else {
sepem <- "_"
}
## If exist and more than one, separate by emission scenarios
for (k in 1:length(emi.scenarios)) {
### Separating by scenarios if exist
es <- paste0(".*", emi.scenarios[k], ".*")
esc <- gregexpr(es, tperi)
esce <- regmatches(tperi, esc)
escen <- unlist(esce)
### Claculations
a <- raster::stack(escen) # stack
b <- raster::values(a) # matrix
a <- a[[1]] # raster layer from stack
dims <- dim(b) # dimensions of matrix
b <- c(b) # matrix to vector
#### Reclassify
b[!is.na(b)] <- ifelse(na.omit(b) == 0, 1, 0)
b <- matrix(b, dims) # vector to matrix again
#### New layer with model agreement
a[] <- apply(b, 1, sum)
### Writing files
mopnams <- paste(set_dirs[i], paste0(mopin, septi, time.periods[j],
sepem, emi.scenarios[k], "_agreement", format1), sep = "/")
raster::writeRaster(a, filename = mopnams, format = out.format)
cat(paste("\t\t", k, "of", length(emi.scenarios), "emission scenarios\n"))
}
cat(paste("\t", j, "of", length(time.periods), "time periods\n"))
}
cat(paste(i, "of", length(sets), "sets\n"))
}
# preparing description table
vals <- sort(na.omit(unique(a[])))
mopag <- paste0("Strict extrapolation in ", vals[-1], " GCMs")
descriptions <- c("No strict extrapolation", mopag)
res_table <- data.frame(Raster_value = vals, Description = descriptions)
# writting desciption table
result_description(process = "kuenm_mopagree", result.table = res_table, out.dir = out.dir)
cat(paste("\nCheck your working directory:", getwd(), sep = "\t"))
}
| /R/kuenm_mopagree.R | no_license | helixcn/kuenm | R | false | false | 7,531 | r | #' Agreement of extrapolative areas of MOP layers
#'
#' @description kuenm_mopagree calculates raster layers that represent the agreement of strict
#' extrapolative areas among two or more climate models of an emission scenario in a
#' given time period. Various emission scenarios and time periods can be processed.
#'
#' @param mop.dir (character) name of the folder in which MOP results are (e.g., the output
#' folder after using the \code{\link{kuenm_mmop}}) function.
#' @param in.format (character) format of model raster files. Options are "ascii", "GTiff", and "EHdr" = bil.
#' @param out.format (character) format of layers to be written in \code{out.dir}. Options are "ascii", "GTiff",
#' and "EHdr" = bil. Default = "GTiff".
#' @param current (character) if exist, pattern to look for when defining which is the scenario of current
#' projection to be excluded from calculations. If not defined, no current projection is assumed.
#' @param time.periods (character or numeric) pattern to be searched when identifying MOP layers for
#' distinct time projections. If not defined it is assumed that only one time period was considered.
#' @param emi.scenarios (character) pattern to be searched for identifying distinct emission
#' scenarios (e.g., RCP). If not defined it is asumed that only one emission scenario was used.
#' @param out.dir (character) name of the output directory to be created in which subdirectories
#' containing raster layers of strict extrapolative areas agreement will be written. Default = "MOP_agremment".
#'
#' @return Folders named as the set or sets of variables used to perform the MOP, containing raster layers in format
#' \code{out.format} that represent agreement of strict strapolative areas for each emission scenario
#' in a each time period. Folders will be written inside \code{out.dir}.
#'
#' @details
#' Users must be specific when defining the patterns that the function will search for. This patterns
#' must be part of the mop layer names so the function can locate each file without problems.
#' This function uses this system of work to avoid high demands of RAM while perfomring these analyses.
#'
#' @export
#'
#' @examples
#' # MOP layers must be already created before using this function.
#'
#' # Arguments
#' mop_dir <- "MOP_results"
#' format <- "GTiff"
#' curr <- "current"
#' time_periods <- 2050
#' emi_scenarios <- c("RCP4.5", "RCP8.5")
#' out_dir <- "MOP_agremment"
#'
#' kuenm_mopagree(mop.dir = mop_dir, in.format = format, out.format = format,
#' current = curr, time.periods = time_periods,
#' emi.scenarios = emi_scenarios, out.dir = out_dir)
kuenm_mopagree <- function(mop.dir, in.format, out.format = "GTiff", current,
time.periods, emi.scenarios, out.dir = "MOP_agremment") {
# testing for potential errors and preparing data
cat("Preparing data for starting analyses, please wait...\n")
if (!dir.exists(mop.dir)) {
stop(paste(mop.dir, "does not exist in the working directory, check folder name",
"\nor its existence."))
}
if (length(list.dirs(mop.dir, recursive = FALSE)) == 0) {
stop(paste(mop.dir, "does not contain any subdirectory named as sets of projection variables;",
"\neach subdirectory inside", mop.dir, "must containg at least one mop raster layer."))
}
if (missing(current)) {
cat("Argument current is not defined, no current projection will be assumed.\n")
}
if (missing(time.periods)) {
cat("Argument time.periods is not defined, only one time projection will be assumed.\n")
}
if (missing(emi.scenarios)) {
cat("Argument emi.scenarios is not defined, only one emission scenario will be assumed.\n")
}
# defining formats
if (in.format == "ascii") {
format <- ".asc$"
}
if (in.format == "GTiff") {
format <- ".tif$"
}
if (in.format == "EHdr") {
format <- ".bil$"
}
if (out.format == "ascii") {
format1 <- ".asc"
}
if (out.format == "GTiff") {
format1 <- ".tif"
}
if (out.format == "EHdr") {
format1 <- ".bil"
}
# Reading mop names
nstas <- list.files(mop.dir, pattern = paste0("MOP.*", format),
full.names = TRUE, recursive = TRUE)
mopn <- list.files(mop.dir, pattern = paste0("MOP.*", format), recursive = TRUE)
mopin <- unique(gsub("%.*", "%", mopn))
mopin <- unique(gsub("^.*/", "", mopin))
# Folder for all outputs
dir.create(out.dir)
# Folders for sets
sets <- dir(mop.dir)
sets <- sets[sets != "Result_description (kuenm_mmop).txt"]
set_dirs <- paste0(out.dir, "/", sets)
for (i in 1:length(sets)) {
# Separating by sets
ecl <- paste0(".*/", sets[i], "/.*")
ecla <- gregexpr(ecl, nstas)
eclam <- regmatches(nstas, ecla)
setses <- unlist(eclam)
# Folders per each set
dir.create(set_dirs[i])
# Copying current if exists
if (!missing(current)) {
cu <- paste0(".*", current, ".*")
cur <- gregexpr(cu, setses)
curr <- regmatches(setses, cur)
curre <- unlist(curr)
to_cur <- paste0(set_dirs[i], "/", mopin, "_",
gsub(paste0(".*", current), current, curre))
file.copy(from = curre, to = to_cur)
}
# Time periods
if (missing(time.periods)) {
time.periods <- ""
timep <- 1
septi <- ""
}else {
timep <- time.periods
septi <- "_"
}
for (j in 1:length(time.periods)) {
# Separating by times if exist
tp <- paste0(".*", time.periods[j], ".*")
tpe <- gregexpr(tp, setses)
tper <- regmatches(setses, tpe)
tperi <- unlist(tper)
## Separating by scenarios if exist
if (missing(emi.scenarios)) {
emi.scenarios <- ""
sepem <- ""
} else {
sepem <- "_"
}
## If exist and more than one, separate by emission scenarios
for (k in 1:length(emi.scenarios)) {
### Separating by scenarios if exist
es <- paste0(".*", emi.scenarios[k], ".*")
esc <- gregexpr(es, tperi)
esce <- regmatches(tperi, esc)
escen <- unlist(esce)
### Claculations
a <- raster::stack(escen) # stack
b <- raster::values(a) # matrix
a <- a[[1]] # raster layer from stack
dims <- dim(b) # dimensions of matrix
b <- c(b) # matrix to vector
#### Reclassify
b[!is.na(b)] <- ifelse(na.omit(b) == 0, 1, 0)
b <- matrix(b, dims) # vector to matrix again
#### New layer with model agreement
a[] <- apply(b, 1, sum)
### Writing files
mopnams <- paste(set_dirs[i], paste0(mopin, septi, time.periods[j],
sepem, emi.scenarios[k], "_agreement", format1), sep = "/")
raster::writeRaster(a, filename = mopnams, format = out.format)
cat(paste("\t\t", k, "of", length(emi.scenarios), "emission scenarios\n"))
}
cat(paste("\t", j, "of", length(time.periods), "time periods\n"))
}
cat(paste(i, "of", length(sets), "sets\n"))
}
# preparing description table
vals <- sort(na.omit(unique(a[])))
mopag <- paste0("Strict extrapolation in ", vals[-1], " GCMs")
descriptions <- c("No strict extrapolation", mopag)
res_table <- data.frame(Raster_value = vals, Description = descriptions)
# writting desciption table
result_description(process = "kuenm_mopagree", result.table = res_table, out.dir = out.dir)
cat(paste("\nCheck your working directory:", getwd(), sep = "\t"))
}
|
dat <- readLines("Day4_1_data.txt")
#number of passports:
passnum <- sum(dat == "")+2
passports <- c()
a <- 1
for (i in 1:length(dat)){
if (dat[i] == ""){
a <- a+1
}
passports[a] <- paste(passports[a], dat[i])
}
#Cleanup
passports <- gsub("NA ", "", passports)
passports <- trimws(passports, "both", whitespace = "[ \t\r\n]")
fields <- c("byr", "iyr", "eyr", "hgt", "hcl", "ecl", "pid")
test1 <- grepl(c("byr"), passports)
test2 <- grepl(c("iyr"), passports)
test3 <- grepl(c("eyr"), passports)
test4 <- grepl(c("hgt"), passports)
test5 <- grepl(c("hcl"), passports)
test6 <- grepl(c("ecl"), passports)
test7 <- grepl(c("pid"), passports)
tests <- test1+test2+test3+test4+test5+test6+test7
results <- which(tests == 7)
counter <- 0
## exract byr
a <- strsplit(passports[1], "byr")[[1]][2]
b <- strsplit(a, " ")[[1]][1]
c <- as.numeric(gsub(":", "", b))
if (c >= 1920 && c <= 2002){
counter <- counter+1
}
#iyr
a <- strsplit(passports[1], "iyr")[[1]][2]
b <- strsplit(a, " ")[[1]][1]
c <- as.numeric(gsub(":", "", b))
if (c >= 2010 && c <= 2020){
counter <- counter+1
}
#eyr
a <- strsplit(passports[1], "eyr")[[1]][2]
b <- strsplit(a, " ")[[1]][1]
c <- as.numeric(gsub(":", "", b))
if (c >= 2020 && c <= 2030){
counter <- counter+1
}
#hgt
a <- strsplit(passports[1], "hgt")[[1]][2]
b <- strsplit(a, " ")[[1]][1]
#c <- as.numeric(gsub(":", "", b))
if (grepl("cm", b)){
c <- as.numeric(gsub(":|cm", "", b))
if (c >= 150 && c <= 193){
counter <- counter+1
} else if (grepl("in", b)) {
c <- gsub(":|in", "", b)
}
if (c >= 59 && c <= 76){
counter <- counter+1
}
}
#hcl
a <- strsplit(passports[1], "hcl")[[1]][2]
b <- strsplit(a, " ")[[1]][1]
c <- gsub(":|#", "", b)
if ((grepl("[A-z0-9]{6}", c)) & (nchar(c) == 6)){
counter <- counter+1
}
# ecl
a <- strsplit(passports[1], "ecl")[[1]][2]
b <- strsplit(a, " ")[[1]][1]
c <- gsub(":", "", b)
if (grepl("amb|blu|brn|gry|grn|hzl|oth", c)){
counter <- counter+1
}
# pid
a <- strsplit(passports[1], "pid")[[1]][2]
b <- strsplit(a, " ")[[1]][1]
c <- gsub(":", "", b)
if (grepl("[0-9]{9}", c) & nchar(c) == 9){
counter <- counter+1
}
| /AdventOfCode2020/Day4_sol2_copy.R | no_license | th-of/Misc | R | false | false | 2,247 | r | dat <- readLines("Day4_1_data.txt")
#number of passports:
passnum <- sum(dat == "")+2
passports <- c()
a <- 1
for (i in 1:length(dat)){
if (dat[i] == ""){
a <- a+1
}
passports[a] <- paste(passports[a], dat[i])
}
#Cleanup
passports <- gsub("NA ", "", passports)
passports <- trimws(passports, "both", whitespace = "[ \t\r\n]")
fields <- c("byr", "iyr", "eyr", "hgt", "hcl", "ecl", "pid")
test1 <- grepl(c("byr"), passports)
test2 <- grepl(c("iyr"), passports)
test3 <- grepl(c("eyr"), passports)
test4 <- grepl(c("hgt"), passports)
test5 <- grepl(c("hcl"), passports)
test6 <- grepl(c("ecl"), passports)
test7 <- grepl(c("pid"), passports)
tests <- test1+test2+test3+test4+test5+test6+test7
results <- which(tests == 7)
counter <- 0
## exract byr
a <- strsplit(passports[1], "byr")[[1]][2]
b <- strsplit(a, " ")[[1]][1]
c <- as.numeric(gsub(":", "", b))
if (c >= 1920 && c <= 2002){
counter <- counter+1
}
#iyr
a <- strsplit(passports[1], "iyr")[[1]][2]
b <- strsplit(a, " ")[[1]][1]
c <- as.numeric(gsub(":", "", b))
if (c >= 2010 && c <= 2020){
counter <- counter+1
}
#eyr
a <- strsplit(passports[1], "eyr")[[1]][2]
b <- strsplit(a, " ")[[1]][1]
c <- as.numeric(gsub(":", "", b))
if (c >= 2020 && c <= 2030){
counter <- counter+1
}
#hgt
a <- strsplit(passports[1], "hgt")[[1]][2]
b <- strsplit(a, " ")[[1]][1]
#c <- as.numeric(gsub(":", "", b))
if (grepl("cm", b)){
c <- as.numeric(gsub(":|cm", "", b))
if (c >= 150 && c <= 193){
counter <- counter+1
} else if (grepl("in", b)) {
c <- gsub(":|in", "", b)
}
if (c >= 59 && c <= 76){
counter <- counter+1
}
}
#hcl
a <- strsplit(passports[1], "hcl")[[1]][2]
b <- strsplit(a, " ")[[1]][1]
c <- gsub(":|#", "", b)
if ((grepl("[A-z0-9]{6}", c)) & (nchar(c) == 6)){
counter <- counter+1
}
# ecl
a <- strsplit(passports[1], "ecl")[[1]][2]
b <- strsplit(a, " ")[[1]][1]
c <- gsub(":", "", b)
if (grepl("amb|blu|brn|gry|grn|hzl|oth", c)){
counter <- counter+1
}
# pid
a <- strsplit(passports[1], "pid")[[1]][2]
b <- strsplit(a, " ")[[1]][1]
c <- gsub(":", "", b)
if (grepl("[0-9]{9}", c) & nchar(c) == 9){
counter <- counter+1
}
|
# Author: Yewshen Lim (y.lim20@imperial.ac.uk)
# Script: R_conditionals.R
# Created: Oct 2020
#
# Script illustrates the use of conditionals
rm(list = ls())
# Checks if an integer is even
is.even <- function(n=2) {
if (n %% 2 == 0) {
return(paste(n, "is even!"))
}
return(paste(n, "is odd!"))
}
is.even(6)
# Checks if a number is a power of 2
is.power2 <- function(n=2) {
if (log2(n) %% 1 == 0) {
return(paste(n, "is a power of 2!"))
}
return(paste(n, "is not a power of 2!"))
}
is.power2(4)
# Checks if a number is prime
is.prime <- function(n) {
if (n == 0) {
return(paste(n, "is a zero!"))
}
if (n == 1) {
return(paste(n, "is just a unit!"))
}
ints <- 2:(n - 1)
if (all(n %% ints != 0)) {
return(paste(n, "is a prime!"))
}
return(paste(n, "is a composite!"))
}
is.prime(3) | /Week3/Code/R_conditionals.R | no_license | EcoFiendly/CMEECourseWork | R | false | false | 878 | r | # Author: Yewshen Lim (y.lim20@imperial.ac.uk)
# Script: R_conditionals.R
# Created: Oct 2020
#
# Script illustrates the use of conditionals
rm(list = ls())
# Checks if an integer is even
is.even <- function(n=2) {
if (n %% 2 == 0) {
return(paste(n, "is even!"))
}
return(paste(n, "is odd!"))
}
is.even(6)
# Checks if a number is a power of 2
is.power2 <- function(n=2) {
if (log2(n) %% 1 == 0) {
return(paste(n, "is a power of 2!"))
}
return(paste(n, "is not a power of 2!"))
}
is.power2(4)
# Checks if a number is prime
is.prime <- function(n) {
if (n == 0) {
return(paste(n, "is a zero!"))
}
if (n == 1) {
return(paste(n, "is just a unit!"))
}
ints <- 2:(n - 1)
if (all(n %% ints != 0)) {
return(paste(n, "is a prime!"))
}
return(paste(n, "is a composite!"))
}
is.prime(3) |
library(vegan)
data_OTU<-read.csv(file.choose(),row.names = 1,header = TRUE,check.names = FALSE) #first column=otu name
#Calculate relative abundance
relative_abundance=function(d){
dta_sum=apply(d,2,function(x){x/sum(x)})
}
abu<-relative_abundance(d = data_OTU)
abu<-as.data.frame(abu)
write.csv(abu,file='vOTU_abu.csv')
#calculate alpha-diversity indexes
alpha <- function(x, tree = NULL, base = exp(1)) {
Richness <- rowSums(x > 0)
Shannon <- diversity(x, index = 'shannon', base = base)
Simpson <- diversity(x, index = 'simpson') #Gini-Simpson 指数
result <- data.frame(Richness, Shannon, Simpson)
result
}
abu_alpha<-t(data_OTU) #first column=treatment (CK_7,CK_14,CK_24...)
alpha_all <- alpha(abu_alpha, base = exp(1))
alpha_all
write.csv(alpha_all,file='vOTU_alpha2.csv')
| /alpha diverity/R script for alpha analysis.R | no_license | superahura/Virome_of_agricultural_soils | R | false | false | 826 | r | library(vegan)
data_OTU<-read.csv(file.choose(),row.names = 1,header = TRUE,check.names = FALSE) #first column=otu name
#Calculate relative abundance
relative_abundance=function(d){
dta_sum=apply(d,2,function(x){x/sum(x)})
}
abu<-relative_abundance(d = data_OTU)
abu<-as.data.frame(abu)
write.csv(abu,file='vOTU_abu.csv')
#calculate alpha-diversity indexes
alpha <- function(x, tree = NULL, base = exp(1)) {
Richness <- rowSums(x > 0)
Shannon <- diversity(x, index = 'shannon', base = base)
Simpson <- diversity(x, index = 'simpson') #Gini-Simpson 指数
result <- data.frame(Richness, Shannon, Simpson)
result
}
abu_alpha<-t(data_OTU) #first column=treatment (CK_7,CK_14,CK_24...)
alpha_all <- alpha(abu_alpha, base = exp(1))
alpha_all
write.csv(alpha_all,file='vOTU_alpha2.csv')
|
#Lista
x <- list(1, "a", TRUE, 3+2i, 5L, 1:50)
x
#Todos los elementos de una lista mantienen la clase que originalmente tenian al nombrarlos en la lista
#Matrices
#Las matrices son vectores con un atributo llamado dimension, este atributo es un vector en si mismo compuesto de dos elementos
m <- matrix(nrow = 2, ncol=3) #Esta es una matriz desordenada
m <- matrix(NA,2,3)
m
dim(m)
attributes(m)
#Como llenar una matriz
m <- matrix(data = 1:6, nrow=2, ncol=3) #Matrix ordenada
m <- matrix(1:6,2,3)
m
#Estas son formas equivalentes de hacerlo pra ambas matrices
#La manera automatica de llenarse fue columna por columna
#Si yo quisiera que s ellenara fila por fila seria:
m <- matrix(data = 1:6, nrow=2, ncol=3, byrow= TRUE)
m <- matrix(1:6,2,3,T)
m
#Una manera alternativa de crear una matriz es desde un vector y modificar su dimension
m <- 1:10
m
dim(m) <- c(2,5)
m
#Otra forma de crear una matriz es uniendo diferentes vectores
x <- 1:3
y <- 10:12
#CBind va a unir columnas y RBind une filas
cbind(x,y)
rbind(x,y)
#Factores
#Factores sirve para variables que no son numericas
x <- factor(c("Si", "Si", "No", "No", "Si"))
x
#Factores por orden alfabetico
x <- factor(c("Azul", "Verde", "Verde", "Azul", "Rojo"))
x
#table te dice cuantas veces aparece cada una de las cosas
table(x)
unclass(x)
#Factores con orden definido
x <- factor(c("Azul", "Verde", "Verde", "Azul", "Rojo"),levels = c("Rojo","Amarillo","Verde","Naranja"))
x
unclass(x)
#Valores faltantes
x <- c(1,2,NA,10,3)
is.na(x) #Detecta los valores faltantes
is.nan(x) #Valor no numerico que no es faltante
y <- c(1,2,NaN,10,3)
is.na(y) #Detecta los valores faltantes
is.nan(y) #Valor no numerico que no es faltante
#Data Frames puede tener elementos de diferentes clases y se utiliza para matrices.
#foo y bar son variables pueden ser las que sean.
na <- data.frame(foo = 1:4, bar = c(T, T, F, F))
na
nrow(na) #Cuenta el numero de filas
ncol(na) #Cuenta el numero de columnas
mo <- 1:3
names(mo)
names(mo) <- c("foo", "bar", "norf") #Almacena el nombre de los elementos
mo
names(mo)
nu <- list(a=1,b=2,c=3)
nu
ma <- matrix(1:4, nrow=2, ncol=2)
ma
dimnames(ma) <- list(c("a", "b"), c("C", "d"))
ma
| /Listas clase 2602.R | no_license | Yahumara/Programacion_Actuarial_III | R | false | false | 2,202 | r | #Lista
x <- list(1, "a", TRUE, 3+2i, 5L, 1:50)
x
#Todos los elementos de una lista mantienen la clase que originalmente tenian al nombrarlos en la lista
#Matrices
#Las matrices son vectores con un atributo llamado dimension, este atributo es un vector en si mismo compuesto de dos elementos
m <- matrix(nrow = 2, ncol=3) #Esta es una matriz desordenada
m <- matrix(NA,2,3)
m
dim(m)
attributes(m)
#Como llenar una matriz
m <- matrix(data = 1:6, nrow=2, ncol=3) #Matrix ordenada
m <- matrix(1:6,2,3)
m
#Estas son formas equivalentes de hacerlo pra ambas matrices
#La manera automatica de llenarse fue columna por columna
#Si yo quisiera que s ellenara fila por fila seria:
m <- matrix(data = 1:6, nrow=2, ncol=3, byrow= TRUE)
m <- matrix(1:6,2,3,T)
m
#Una manera alternativa de crear una matriz es desde un vector y modificar su dimension
m <- 1:10
m
dim(m) <- c(2,5)
m
#Otra forma de crear una matriz es uniendo diferentes vectores
x <- 1:3
y <- 10:12
#CBind va a unir columnas y RBind une filas
cbind(x,y)
rbind(x,y)
#Factores
#Factores sirve para variables que no son numericas
x <- factor(c("Si", "Si", "No", "No", "Si"))
x
#Factores por orden alfabetico
x <- factor(c("Azul", "Verde", "Verde", "Azul", "Rojo"))
x
#table te dice cuantas veces aparece cada una de las cosas
table(x)
unclass(x)
#Factores con orden definido
x <- factor(c("Azul", "Verde", "Verde", "Azul", "Rojo"),levels = c("Rojo","Amarillo","Verde","Naranja"))
x
unclass(x)
#Valores faltantes
x <- c(1,2,NA,10,3)
is.na(x) #Detecta los valores faltantes
is.nan(x) #Valor no numerico que no es faltante
y <- c(1,2,NaN,10,3)
is.na(y) #Detecta los valores faltantes
is.nan(y) #Valor no numerico que no es faltante
#Data Frames puede tener elementos de diferentes clases y se utiliza para matrices.
#foo y bar son variables pueden ser las que sean.
na <- data.frame(foo = 1:4, bar = c(T, T, F, F))
na
nrow(na) #Cuenta el numero de filas
ncol(na) #Cuenta el numero de columnas
mo <- 1:3
names(mo)
names(mo) <- c("foo", "bar", "norf") #Almacena el nombre de los elementos
mo
names(mo)
nu <- list(a=1,b=2,c=3)
nu
ma <- matrix(1:4, nrow=2, ncol=2)
ma
dimnames(ma) <- list(c("a", "b"), c("C", "d"))
ma
|
library(dplyr)
library(highcharter)
library(shiny)
server <- function(input, output) {
dta <- reactive({
val <- rnorm(input$num, input$mean, input$sd)
num <- seq(1:input$num)
dta <- dplyr::bind_cols(num = num, val = val)
})
# Define the output
output$noise_plot <- renderPlot({
# Create the plot.
plot(dta(), col = input$col)
})
output$noise_table <- renderDataTable({
# Return the table data.
dta()
})
output$point_color <- renderText({
# Return text
if (input$col == "red") {
"Red Points"
} else {
"Blue Points"
}
})
}
ui <- fluidPage(
title = "Random Numbers",
fluidRow(
column(width = 2,
h3("Controls"),
radioButtons("col", h3("Point Color"),
choices = list("Red" = "red", "Blue" = "blue"),
selected = "blue"),
sliderInput("num", "Number of Values", min = 10, max = 500, value = 200),
p("Use the slider input to choose the number of plotted random values."),
numericInput("mean", h3("Mean"), value = 0),
numericInput("sd", h3("Standard Deviation"), value = 1)
),
column(width = 5,
h3("Plot"),
plotOutput("noise_plot"),
h3("Point Color"),
textOutput("point_color")
),
column(width = 5,
h3("Table"),
dataTableOutput("noise_table")
)
)
)
shinyApp(ui = ui, server = server)
| /Solutions/4-ex-311.R | no_license | stenzei/ShinyWorkshop | R | false | false | 1,476 | r | library(dplyr)
library(highcharter)
library(shiny)
server <- function(input, output) {
dta <- reactive({
val <- rnorm(input$num, input$mean, input$sd)
num <- seq(1:input$num)
dta <- dplyr::bind_cols(num = num, val = val)
})
# Define the output
output$noise_plot <- renderPlot({
# Create the plot.
plot(dta(), col = input$col)
})
output$noise_table <- renderDataTable({
# Return the table data.
dta()
})
output$point_color <- renderText({
# Return text
if (input$col == "red") {
"Red Points"
} else {
"Blue Points"
}
})
}
ui <- fluidPage(
title = "Random Numbers",
fluidRow(
column(width = 2,
h3("Controls"),
radioButtons("col", h3("Point Color"),
choices = list("Red" = "red", "Blue" = "blue"),
selected = "blue"),
sliderInput("num", "Number of Values", min = 10, max = 500, value = 200),
p("Use the slider input to choose the number of plotted random values."),
numericInput("mean", h3("Mean"), value = 0),
numericInput("sd", h3("Standard Deviation"), value = 1)
),
column(width = 5,
h3("Plot"),
plotOutput("noise_plot"),
h3("Point Color"),
textOutput("point_color")
),
column(width = 5,
h3("Table"),
dataTableOutput("noise_table")
)
)
)
shinyApp(ui = ui, server = server)
|
library(gridExtra)
library(ggplot2)
source("/Users/elena/Google Drive/ASU/mesoudi_model/random_ind_learner/debug/learners_window_3.R")
source("~/Google Drive/ASU/mesoudi_model/random_ind_learner/debug/random_learner_3_v2.R")
n<-10
s<-10
n<-5
s<-5
sdev <- 1000000000
r <- matrix(0, ncol=n, nrow=s)
#assign payoffs
for (i in 1:s){
r[i,] <- rexp(n, rate=1)
r[i,] <- round(2*(r[i,]^2))
}
l1 <- one_learner(r,sdev)
l2 <- two_learner(r,sdev)
l3 <- three_learner(r,sdev)
# source("/Users/elena/Google Drive/ASU/mesoudi_model/learning_algorithms/learners_window_pay.R")
# two dependency learners do fine | /random_ind_learner/debug/debug_random.R | no_license | elenamiu/cc-model | R | false | false | 605 | r | library(gridExtra)
library(ggplot2)
source("/Users/elena/Google Drive/ASU/mesoudi_model/random_ind_learner/debug/learners_window_3.R")
source("~/Google Drive/ASU/mesoudi_model/random_ind_learner/debug/random_learner_3_v2.R")
n<-10
s<-10
n<-5
s<-5
sdev <- 1000000000
r <- matrix(0, ncol=n, nrow=s)
#assign payoffs
for (i in 1:s){
r[i,] <- rexp(n, rate=1)
r[i,] <- round(2*(r[i,]^2))
}
l1 <- one_learner(r,sdev)
l2 <- two_learner(r,sdev)
l3 <- three_learner(r,sdev)
# source("/Users/elena/Google Drive/ASU/mesoudi_model/learning_algorithms/learners_window_pay.R")
# two dependency learners do fine |
#' Account for invariance of configurations.
#'
#'This function accounts for the fact that configurations in the latent space are invariant to rotations, reflections and translations.
#'
#' @param z An n x d matrix of latent locations in the d dimensional space for each of n nodes.
#' @param zMAP The \emph{maximum a posteriori} configuration of latent locations used as the template to which all sampled configurations are mapped.
#'
#' @return The transformed version of the input configuration z that best matches zMAP.
#' @details Procrustean rotations, reflections and translations (note: NOT dilations) are employed to best match z to zMAP.
#' @seealso \code{\link{MEclustnet}}
#' @references Isobel Claire Gormley and Thomas Brendan Murphy. (2010) A Mixture of Experts Latent Position Cluster Model for Social Network Data. Statistical Methodology, 7 (3), pp.385-405.
#' @importFrom vegan procrustes
invariant <-
function(z, zMAP)
{
procrustes(zMAP, z, scale = FALSE)$Yrot
}
| /R/invariant.R | no_license | cran/MEclustnet | R | false | false | 985 | r | #' Account for invariance of configurations.
#'
#'This function accounts for the fact that configurations in the latent space are invariant to rotations, reflections and translations.
#'
#' @param z An n x d matrix of latent locations in the d dimensional space for each of n nodes.
#' @param zMAP The \emph{maximum a posteriori} configuration of latent locations used as the template to which all sampled configurations are mapped.
#'
#' @return The transformed version of the input configuration z that best matches zMAP.
#' @details Procrustean rotations, reflections and translations (note: NOT dilations) are employed to best match z to zMAP.
#' @seealso \code{\link{MEclustnet}}
#' @references Isobel Claire Gormley and Thomas Brendan Murphy. (2010) A Mixture of Experts Latent Position Cluster Model for Social Network Data. Statistical Methodology, 7 (3), pp.385-405.
#' @importFrom vegan procrustes
invariant <-
function(z, zMAP)
{
procrustes(zMAP, z, scale = FALSE)$Yrot
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Rrelperm-package.R
\name{kr2p_gl}
\alias{kr2p_gl}
\title{Generate a matrix of two-phase relative permeability data for the gas-liquid system using the modified Brooks-Corey model}
\usage{
kr2p_gl(SWCON, SOIRG, SORG, SGCON, SGCRIT, KRGCL, KROGCG, NG, NOG, NP)
}
\arguments{
\item{SWCON}{connate water saturation, fraction}
\item{SOIRG}{irreducible oil saturation, fraction}
\item{SORG}{residual oil saturation, fraction}
\item{SGCON}{connate gas saturation, fraction}
\item{SGCRIT}{critical gas saturation, fraction}
\item{KRGCL}{gas relative permeability at connate liquid}
\item{KROGCG}{oil relative permeability at connate gas}
\item{NG}{exponent term for calculating krg}
\item{NOG}{exponent term for calculating krog}
\item{NP}{number of saturation points in the table, the maximum acceptable value is 501}
}
\value{
A matrix with gas saturation, liquid saturation, gas relative permeability, and oil relative permeability values, respectively
}
\description{
The 'kr2p_gl()' creates a table of two-phase gas and liquid relative permeability data for gas and liquid saturation values between zero and one.
}
\examples{
rel_perm_gl <- kr2p_gl(0.15, 0.1, 0.1, 0.05, 0.05, 0.3, 1, 4, 2.25, 101)
}
\references{
\insertRef{Brooks1964}{Rrelperm}
}
| /Rrelperm/man/kr2p_gl.Rd | no_license | akhikolla/InformationHouse | R | false | true | 1,334 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Rrelperm-package.R
\name{kr2p_gl}
\alias{kr2p_gl}
\title{Generate a matrix of two-phase relative permeability data for the gas-liquid system using the modified Brooks-Corey model}
\usage{
kr2p_gl(SWCON, SOIRG, SORG, SGCON, SGCRIT, KRGCL, KROGCG, NG, NOG, NP)
}
\arguments{
\item{SWCON}{connate water saturation, fraction}
\item{SOIRG}{irreducible oil saturation, fraction}
\item{SORG}{residual oil saturation, fraction}
\item{SGCON}{connate gas saturation, fraction}
\item{SGCRIT}{critical gas saturation, fraction}
\item{KRGCL}{gas relative permeability at connate liquid}
\item{KROGCG}{oil relative permeability at connate gas}
\item{NG}{exponent term for calculating krg}
\item{NOG}{exponent term for calculating krog}
\item{NP}{number of saturation points in the table, the maximum acceptable value is 501}
}
\value{
A matrix with gas saturation, liquid saturation, gas relative permeability, and oil relative permeability values, respectively
}
\description{
The 'kr2p_gl()' creates a table of two-phase gas and liquid relative permeability data for gas and liquid saturation values between zero and one.
}
\examples{
rel_perm_gl <- kr2p_gl(0.15, 0.1, 0.1, 0.05, 0.05, 0.3, 1, 4, 2.25, 101)
}
\references{
\insertRef{Brooks1964}{Rrelperm}
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.