# Please build your own test file from test-Template.R and place it in the tests folder.
# Please specify the packages needed to run the sim function in the test files.
# To test all the test files in the tests folder:
test_dir("/Users/stevec/Dropbox/Courses/7043H16/Lab/scfmModules/scfmSpread/tests")
# Alternatively, you can use test_file() to test an individual test file, e.g.:
test_file("/Users/stevec/Dropbox/Courses/7043H16/Lab/scfmModules/scfmSpread/tests/testthat/test-template.R")
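A test file of the kind the comments describe might look like the sketch below. This is a hedged illustration only: the contents of test-Template.R and the module's actual sim objects are not shown in this repo excerpt, so the file name and the expectation are placeholders, and the testthat package is assumed to be installed.

```r
# Hypothetical test file, e.g. tests/testthat/test-spread.R
library(testthat)

test_that("required packages for the sim function are available", {
  # Placeholder expectation; replace with checks on the module's sim() output.
  expect_true(requireNamespace("testthat", quietly = TRUE))
})
```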
--- /modules/scfmSpread/tests/unitTests.R | tati-micheletti/SpaDESinAction | R | no_license | 484 bytes ---
\name{Run_permutation}
\alias{Run_permutation}
\title{Derive importance scores for M permuted data sets.}
\usage{
Run_permutation(X, W, ntree, mtry,genes.name,M)
}
\arguments{
\item{X}{\code{(n x p)} matrix containing expression levels for \code{n} samples and \code{p} genes.}
\item{W}{\code{(p x p)} matrix containing iRafNet sampling scores. Element \code{(i,j)} contains the score for the regulatory relationship \code{(i -> j)}. Scores must be non-negative. A larger sampling score corresponds to a higher likelihood of gene \code{i} regulating gene \code{j}. Columns and rows of \code{W} must be in the same order as the columns of \code{X}. Sampling scores \code{W} are computed from one source of prior data, such as protein-protein interactions or gene expression from knock-out experiments.}
\item{ntree}{Numeric value: number of trees.}
\item{mtry}{Numeric value: number of predictors to be sampled at each node.}
\item{genes.name}{Vector of gene names. The order needs to match the columns of \code{X}.}
\item{M}{Integer: total number of permutations.}
}
\value{
A matrix with \code{I} rows and \code{M} columns with \code{I} being the total number of regulations and \code{M} the number of permutations. Element \code{(i,j)} corresponds to the importance score of interaction \code{i} for permuted data \code{j}.
}
\description{
This function computes importance scores for \code{M} permuted data sets. Sample labels of target genes are randomly permuted and iRafNet is run on each permuted data set. The resulting importance scores can be used to derive an estimate of the FDR.
}
\examples{
# --- Generate data sets
n <- 20                               # sample size
p <- 5                                # number of genes
genes.name <- paste("G", seq(1, p), sep = "")  # gene names
M <- 5                                # number of permutations
data <- matrix(rnorm(p * n), n, p)    # generate expression matrix
W <- abs(matrix(rnorm(p * p), p, p))  # generate scores for regulatory relationships
# --- Standardize variables to mean 0 and variance 1
data <- apply(data, 2, function(x) (x - mean(x)) / sd(x))
# --- Run iRafNet and obtain importance scores of regulatory relationships
out.iRafNet <- iRafNet(data, W, mtry = round(sqrt(p - 1)), ntree = 1000, genes.name)
# --- Run iRafNet for M permuted data sets
out.perm <- Run_permutation(data, W, mtry = round(sqrt(p - 1)), ntree = 1000, genes.name, M)
}
\references{
Petralia, F., Wang, P., Yang, J., Tu, Z. (2015) Integrative random forest for gene regulatory network inference, \emph{Bioinformatics}, \bold{31}, i197-i205.
Petralia, F., Song, W.M., Tu, Z. and Wang, P. (2016). New method for joint network analysis reveals common and different coexpression patterns among genes and proteins in breast cancer. \emph{Journal of proteome research}, \bold{15}(3), pp.743-754.
A. Liaw and M. Wiener (2002). Classification and Regression by randomForest. \emph{R News} \bold{2}, 18--22.
}
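The description notes that permutation importance scores can be used to estimate an FDR. A minimal sketch of that idea, on made-up numbers (this is illustrative only, not the package's own FDR routine): at a threshold t, the FDR is roughly the fraction of permuted scores exceeding t divided by the fraction of observed scores exceeding t.

```r
# Illustrative only: FDR estimate from permutation importance scores.
obs  <- c(0.9, 0.5, 0.2)                         # observed scores for 3 interactions
perm <- matrix(c(0.1, 0.6, 0.2,
                 0.3, 0.05, 0.4), nrow = 3)      # scores from 2 permuted data sets
fdr_at <- function(t) mean(perm >= t) / mean(obs >= t)  # expected false / observed positives
fdr_at(0.5)
# 0.25
```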
--- /man/Run_permutation.Rd | cran/iRafNet | R | no_license | 2,893 bytes ---
#' @title Delete a person
#' @description Delete a person on Pipedrive.
#' @param id ID of the person.
#' @param api_token To validate your requests, you'll need your api_token - this means that our system will need to know who you are and be able to connect all actions you do with your chosen Pipedrive account. Keep in mind that a user has a different api_token for each company. See the following link for more information: <https://pipedrive.readme.io/docs/how-to-find-the-api-token?utm_source=api_reference>
#' @param company_domain How to get the company domain: <https://pipedrive.readme.io/docs/how-to-get-the-company-domain>
#' @param return_type By default, an object (list) with all information about the request is returned; set to 'boolean' to return TRUE on success and FALSE on error.
#' @return Customizable return; the default is an object (list).
#' @export
#' @examples \donttest{
#' persons.delete(id='e.g.',api_token='token',company_domain='exp')
#' }
persons.delete <- function(id, api_token = NULL, company_domain = 'api', return_type = c('complete', 'boolean')) {
  api_token <- check_api_token_(api_token)
  url <- 'https://{company_domain}.pipedrive.com/v1/persons/{id}?'
  url <- sub('{company_domain}', company_domain, url, fixed = TRUE)
  url <- paste0(url, 'api_token={api_token}')
  url <- sub('{api_token}', api_token, url, fixed = TRUE)
  url <- sub('{id}', id, url, fixed = TRUE)
  r <- httr::DELETE(url)
  if (return_type[1] == 'boolean') {
    if (r$status_code %in% c(200, 201)) { return(TRUE) } else { return(FALSE) }
  } else { return(r) }
}
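The function builds its URL by substituting `{placeholder}` tokens with `sub(..., fixed = TRUE)`. That pattern generalizes to a small helper; the sketch below is a hypothetical standalone illustration (the `fill_url` name is not part of the package).

```r
# Hypothetical helper: fill {name} placeholders in a URL template.
fill_url <- function(template, values) {
  for (k in names(values)) {
    # fixed = TRUE so "{" and "}" are matched literally, not as regex
    template <- sub(paste0("{", k, "}"), values[[k]], template, fixed = TRUE)
  }
  template
}

fill_url("https://{company_domain}.pipedrive.com/v1/persons/{id}?",
         list(company_domain = "acme", id = "42"))
# "https://acme.pipedrive.com/v1/persons/42?"
```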
--- /R/persons.delete.R | cran/Rpipedrive | R | no_license | 1,559 bytes ---
#' Get the environment variable `HEADLESS_CHROME`
#'
#' @md
#' @note This only returns the environment variable `HEADLESS_CHROME`.
#' @export
#' @examples
#' get_chrome_env()
get_chrome_env <- function() {
  Sys.getenv("HEADLESS_CHROME")
}

#' Set the environment variable `HEADLESS_CHROME`
#'
#' @md
#' @note This only sets the environment variable `HEADLESS_CHROME`.
#' @param env path to the Chrome executable to store in the environment variable `HEADLESS_CHROME`
#' @export
#' @examples
#' set_chrome_env("C:/Program Files/Google/Chrome/Application/chrome.exe")
set_chrome_env <- function(env = Sys.getenv("HEADLESS_CHROME")) {
  Sys.setenv(HEADLESS_CHROME = env)
}
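Both functions are thin wrappers over base R's environment-variable accessors, so a quick round-trip shows the behavior. The sketch uses the base calls directly to stay self-contained; the path is a made-up example.

```r
# Round-trip through the HEADLESS_CHROME environment variable.
Sys.setenv(HEADLESS_CHROME = "/usr/bin/chromium")  # hypothetical path
Sys.getenv("HEADLESS_CHROME")
# "/usr/bin/chromium"
```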
--- /R/env.R | markwsac/decapitated | R | no_license | 628 bytes ---
# Figure S2: exclusivity surface plots for different initial connectivity and resource share type -------
## Load data on disproportionate share type: this scenario calculates individual payoffs in which the group-level resource share is determined by the size of the groups, considering that larger groups outcompete smaller ones. There are simulations for three initial network connectivity values: Tprob=0.2, Tprob=0.5, Tprob=0.8
load(paste(getwd(), "/data/1_exclusivity_surface.RData", sep=""))
source("setup.R")
# Preparing the data for surface plot
output <- output[which(!is.na(output[,4])),]
output <- as.data.frame(output)
output$ID <- 1:nrow(output)
# Bins for the plot
r.bins <- seq(min(output$r),max(output$r),1)
N.bins <- seq(min(output$N),max(output$N),1)
# figure labels
figlabel <- c("(A)", "(B)", "(C)")
par(mfcol=c(3,3), mar=c(1,1,1,1))
for(s in 1:3){
# Data to plot: only cases when resource.size<= half of population
input <- output[which(output$tprob==tprob[s] & output$r<=(output$N/2)),]
# Fit surface
surf <- locfit(Exclusivity~lp(N,r,nn=0.05,scale=F, h=0.1,deg=1), data=input)
# Z-axis
plotcol="black"
zmax <- 1
zmin <- 0
z <- matrix(predict(surf,newdata=expand.grid(r=r.bins,N=N.bins),type="response"),nrow=length(r.bins),ncol=length(N.bins),byrow=FALSE)
N.mat <- matrix(rep(N.bins,each=length(r.bins)),ncol=length(N.bins),nrow=length(r.bins))
r.mat <- matrix(rep(r.bins,each=length(N.bins)),ncol=length(N.bins),nrow=length(r.bins),byrow=TRUE)
z[which(r.mat > (N.mat/2))] <- NA
z[which(z > 1)] <- 1
z[which(z < 0)] <- 0
minz <- min(z,na.rm=T)
nrz <- nrow(z)
ncz <- ncol(z)
# Create colors
nbcol <- 100
jet.colors <- blue2green2red(nbcol)
jet.colors2 <- add.alpha(jet.colors,alpha=0.6)
# Compute the z-value at the facet centres
zfacet <- (z[-1, -1] + z[-1, -ncz] + z[-nrz, -1] + z[-nrz, -ncz])/4
# Recode facet z-values into color indices
facetcol <- cut(zfacet,breaks=seq(zmin,zmax,(zmax-zmin)/(nbcol)),labels=c(1:nbcol))
zcol <- cut(z,breaks=seq(zmin,zmax,(zmax-zmin)/(nbcol)),labels=c(1:nbcol))
# Plot surface with transparent backpoints
res <- persp(r.bins, N.bins, matrix(NA,nrow=length(r.bins),ncol=length(N.bins)), theta=120, phi=30, shade=0.2, ticktype="detailed", expand=0.8, col=jet.colors[facetcol], ylab="",xlab="",zlab="",zlim=c(zmin,zmax),xlim=range(r.bins),ylim=range(N.bins),nticks=5, border="black",lwd=0.1, cex.lab=1.2,ltheta = 235, lphi = 75, main="")
input2 <- input
input2$r <- input2$r + rnorm(nrow(input),mean=0,sd=0.2)
input2$N <- input2$N + rnorm(nrow(input),mean=0,sd=0.2)
input2$pred <- predict(surf,newdata=input[,c(2,1)], type="response")
input3 <- input2[which((input2$Exclusivity - input2$pred) < 0),]
for (i in 1:nrow(input3)) {
lines(trans3d(c(input3$r[i],input3$r[i]),c(input3$N[i],input3$N[i]),c(input3$Exclusivity[i],input3$pred[i]),res), pch=20,col="darkgrey")
}
points(trans3d(input3$r,input3$N,input3$Exclusivity,res), pch=20)
par(new=TRUE)
# add transparent surface
res <- persp(r.bins, N.bins, z, theta=120, phi=30, shade=0.2, ticktype="detailed", expand=0.8, col=jet.colors2[facetcol], ylab="",xlab="",zlab="Exclusivity",zlim=c(zmin,zmax),nticks=5, border="black",lwd=0.1, cex.lab=1.2,ltheta = 235, lphi = 75)
input3 <- input2[which((input2$Exclusivity - input2$pred) >= 0),]
for (i in 1:nrow(input3)) {
lines(trans3d(c(input3$r[i],input3$r[i]),c(input3$N[i],input3$N[i]),c(input3$Exclusivity[i],input3$pred[i]),res), pch=20,col="grey")
}
points(trans3d(input3$r,input3$N,input3$Exclusivity,res), pch=20)
#text(trans3d(45,5,1.2,res), paste(figlabel[s]," Tprob =", tprob[s]))
text(trans3d(45,5,1.2,res), paste(figlabel[s]))
}
### Load data on equal share type: this scenario considers no disproportionate individual payoffs, i.e. there is equal allocation of resource to all individuals irrespective of their group size. There are simulations for three initial network connectivity values: Tprob=0.2, Tprob=0.5, Tprob=0.8
rm(list=ls())
load(paste(getwd(), "/data/1_exclusivity_surface_equalresourceshare.RData", sep=""))
source("setup.R")
# Preparing the data for surface plot
output <- output[which(!is.na(output[,4])),]
output <- as.data.frame(output)
output$ID <- 1:nrow(output)
# Bins for the plot
r.bins <- seq(min(output$r),max(output$r),1)
N.bins <- seq(min(output$N),max(output$N),1)
# figure labels
figlabel <- c("(D)", "(E)", "(F)")
for(s in 1:3){
# Data to plot: only cases when resource.size<= half of population
input <- output[which(output$tprob==tprob[s] & output$r<=(output$N/2)),]
# Fit surface
surf <- locfit(Exclusivity~lp(N,r,nn=0.05,scale=F, h=0.1,deg=1), data=input)
# Z-axis
plotcol="black"
zmax <- 1
zmin <- 0
z <- matrix(predict(surf,newdata=expand.grid(r=r.bins,N=N.bins),type="response"),nrow=length(r.bins),ncol=length(N.bins),byrow=FALSE)
N.mat <- matrix(rep(N.bins,each=length(r.bins)),ncol=length(N.bins),nrow=length(r.bins))
r.mat <- matrix(rep(r.bins,each=length(N.bins)),ncol=length(N.bins),nrow=length(r.bins),byrow=TRUE)
z[which(r.mat > (N.mat/2))] <- NA
z[which(z > 1)] <- 1
z[which(z < 0)] <- 0
minz <- min(z,na.rm=T)
nrz <- nrow(z)
ncz <- ncol(z)
# Create colors
nbcol <- 100
jet.colors <- blue2green2red(nbcol)
jet.colors2 <- add.alpha(jet.colors,alpha=0.6)
# Compute the z-value at the facet centres
zfacet <- (z[-1, -1] + z[-1, -ncz] + z[-nrz, -1] + z[-nrz, -ncz])/4
# Recode facet z-values into color indices
facetcol <- cut(zfacet,breaks=seq(zmin,zmax,(zmax-zmin)/(nbcol)),labels=c(1:nbcol))
zcol <- cut(z,breaks=seq(zmin,zmax,(zmax-zmin)/(nbcol)),labels=c(1:nbcol))
# Plot surface with transparent backpoints
res <- persp(r.bins, N.bins, matrix(NA,nrow=length(r.bins),ncol=length(N.bins)), theta=120, phi=30, shade=0.2, ticktype="detailed", expand=0.8, col=jet.colors[facetcol], ylab="",xlab="",zlab="",zlim=c(zmin,zmax),xlim=range(r.bins),ylim=range(N.bins),nticks=5, border="black",lwd=0.1, cex.lab=1.2,ltheta = 235, lphi = 75, main="")
input2 <- input
input2$r <- input2$r + rnorm(nrow(input),mean=0,sd=0.2)
input2$N <- input2$N + rnorm(nrow(input),mean=0,sd=0.2)
input2$pred <- predict(surf,newdata=input[,c(2,1)], type="response")
input3 <- input2[which((input2$Exclusivity - input2$pred) < 0),]
for (i in 1:nrow(input3)) {
lines(trans3d(c(input3$r[i],input3$r[i]),c(input3$N[i],input3$N[i]),c(input3$Exclusivity[i],input3$pred[i]),res), pch=20,col="darkgrey")
}
points(trans3d(input3$r,input3$N,input3$Exclusivity,res), pch=20)
par(new=TRUE)
# add transparent surface
res <- persp(r.bins, N.bins, z, theta=120, phi=30, shade=0.2, ticktype="detailed", expand=0.8, col=jet.colors2[facetcol], ylab="",xlab="",zlab="",zlim=c(zmin,zmax),nticks=5, border="black",lwd=0.1, cex.lab=1.2,ltheta = 235, lphi = 75)
input3 <- input2[which((input2$Exclusivity - input2$pred) >= 0),]
for (i in 1:nrow(input3)) {
lines(trans3d(c(input3$r[i],input3$r[i]),c(input3$N[i],input3$N[i]),c(input3$Exclusivity[i],input3$pred[i]),res), pch=20,col="grey")
}
points(trans3d(input3$r,input3$N,input3$Exclusivity,res), pch=20)
#text(trans3d(45,5,1.2,res), paste(figlabel[s]," Tprob =", tprob[s]))
text(trans3d(45,5,1.2,res), paste(figlabel[s]))
}
#####
### When running the model for longer (n.reps=500) under equal allocation of resources, the same pattern as for disproportionate allocation of resources emerges ###
## The commented code below creates a restricted parameter space and runs the simulation with the edited model1() and simulation() functions
## Load packages and functions
# source("setup.R")
#
## simplified parameter space, with fewer replicates
# N = c( 20, 30, 40, 50, 70, 100, 140, 170, 200)
# r = c(5, 9, 13, 17, 25, 31, 41 ,51)
# tprob = c(0.2, 0.5, 0.8)
# replic = 100
# payoff.type = "equal"
#
## parameter space
# variables <- expand.grid(N,r,tprob,payoff.type)
# colnames(variables) <- c("N","r","tprob","type")
# variables <- variables[which(variables$r<=(variables$N/2)),] # only when resource.size <= half of population
# variables <- variables[rep(seq_len(nrow(variables)), each=replic),]
# variables <- split(variables, seq(nrow(variables)))
#
### Loading edited model1 with coop.total after 250 t step
# model1 <- function(N, resource.size, n.reps, tprob, type="size-based", output="model"){
#
# # Setting initial directed network: start with a random set of edges; no loops
# coop <- rgraph(N,tprob=tprob); diag(coop) <- NA
# # Network with average number of foraging association across runs
# coop.total <- matrix(0,nrow=N,ncol=N)
# # Mean individual payoffs
# mean.payoff <- rep(NA,n.reps)
# # All formed groups
# groups.total <- list()
# # Exclusivity
# group.exclusivity <- rep(NA,n.reps)
#
# # Model run
# for (zz in 1:n.reps) {
# coop.NOW <- matrix(0,nrow=N,ncol=N)
#
# # extract pairs of cooperators (reciprocal links) out of the network
# ABind <- cooperators(coop,N)
# # form groups based on chain rule (A->B, B->C, then A->C)
# groups <- identify.groups(ABind)
# # calculate per capita payoff based on given resource patch size
# if (length(groups) >= 1) {
# payoffs <- calculate.payoffs(groups,resource.size,type) # calculate the group-level resource share based on group size
# # individual payoffs
# inds.payoffs <- rep(0,N)
# for (i in 1:length(groups)) {
# inds.payoffs[groups[[i]]] <- payoffs[i]
# a <- expand.grid(groups[[i]],groups[[i]])
# a <- a[which(a[,1] != a[,2]),]
# coop.NOW[as.matrix(a)] <- 1
# if(zz>=250){
# coop.total[as.matrix(a)] <- coop.total[as.matrix(a)]+1
# }else{
# coop.total[as.matrix(a)]
# }
# }
# }
#
# #update network
# coop <- update.coop.network(coop,inds.payoffs)
#
# if(output != "exclusivity"){
# # Mean nonzero individual payoff
# mean.payoff[zz] <- mean(inds.payoffs[which(inds.payoffs!=0)])
# # Number and size of groups for each run
# groups.total[[zz]] <- groups
# # Exclusivity: proportion of times individuals were part of the foraging group for each run
# group.exclusivity[zz] <- exclusivity(initial.size=N, runs=zz, patch=resource.size, all.ties=coop.total)
# }
# }
#
# # OUTPUTS
# if(output != "exclusivity"){
# group.number <- groups.number(all.groups=groups.total, runs=n.reps)
# group.size <- average.group.size(groups.total)
# if(output == "model"){
# result.model <- list(coop.total, mean.payoff, group.number, group.size, group.exclusivity, coop.NOW, groups)
# names(result.model) <- c("coop.total", "mean.payoffs", "n.groups", "mean.group.size", "exclusivity", "coop.NOW", "groups")
# return(result.model)
# } else { # output == 'sensitivity'
# result.model <- cbind(1:n.reps, rep(N,n.reps), rep(resource.size,n.reps), rep(tprob,n.reps), group.exclusivity, mean.payoff, group.number, group.size)
# colnames(result.model) <- c("time.step", "N","r","tprob","Exclusivity", "Mean.payoffs","n.groups", "Mean.group.size")
# return(result.model)
# }
# } else { # output == "exclusivity"
# group.exclusivity <- exclusivity(initial.size=N, runs=(n.reps-N), patch=resource.size, all.ties=coop.total)
# result.model <- c(N, resource.size, tprob, group.exclusivity)
# return(result.model)
# }
# }
#
### Loading simulation with n.reps=500
# simulation <- function(inputs, model, output){
# cat("running parameters:", as.numeric(inputs),"\n")
#
# # assign input parameters
# N <- as.numeric(inputs[1])
# resource.size <- as.numeric(inputs[2])
# tprob <- as.numeric(inputs[3])
# type <- as.character(inputs[,4])
# n.reps <- 500
#
# # # when resource patch size is <= population size
# # if (resource.size <= N) {
# if(model=="model1") {
# result <- model1(N, resource.size, n.reps, tprob, type, output)
# } else {
# n.reps <- 1000
# result <- model2(N, resource.size, n.reps, tprob, type, output)}
# #}
# return(result)
# }
#
#
### Running the simulation
# ptm <- proc.time()
# output <- do.call('rbind',lapply(variables, simulation, model="model1", output="exclusivity"))
# colnames(output) <- c("N","r","tprob","Exclusivity")
# cat(paste("simulation time:", round(((proc.time() - ptm)[3])/60, digits=2), "min"))
#
#####
# Instead: just load data on model1 exclusivity run for longer (t=500), Tprob=0.2
rm(list=ls())
load(paste(getwd(), "/data/1_exclusivity_t250-tprob02-equal.RData", sep=""))
source("setup.R")
# figure labels
figlabel <- c("(G)", "(H)", "(I)")
# Plot surface
output <- output[which(!is.na(output[,4])),]
output <- as.data.frame(output)
output$ID <- 1:nrow(output)
r.bins <- seq(min(output$r),max(output$r),1)
N.bins <- seq(min(output$N),max(output$N),1)
s=1
{
# Data to plot: only cases when resource.size<= half of population
input <- output[which(output$tprob==tprob[s] & output$r<=(output$N/2)),]
# Fit surface
surf <- locfit(Exclusivity~lp(N,r,nn=0.09,scale=F, h=0.1,deg=1), data=input)
# Z-axis
plotcol="black"
zmax <- 1
zmin <- 0
z <- matrix(predict(surf,newdata=expand.grid(r=r.bins,N=N.bins),type="response"),nrow=length(r.bins),ncol=length(N.bins),byrow=FALSE)
N.mat <- matrix(rep(N.bins,each=length(r.bins)),ncol=length(N.bins),nrow=length(r.bins))
r.mat <- matrix(rep(r.bins,each=length(N.bins)),ncol=length(N.bins),nrow=length(r.bins),byrow=TRUE)
z[which(r.mat > (N.mat/2))] <- NA
z[which(z > 1)] <- 1
z[which(z < 0)] <- 0
minz <- min(z,na.rm=T)
nrz <- nrow(z)
ncz <- ncol(z)
# Create colors
nbcol <- 100
jet.colors <- blue2green2red(nbcol)
jet.colors2 <- add.alpha(jet.colors,alpha=0.6)
# Compute the z-value at the facet centres
zfacet <- (z[-1, -1] + z[-1, -ncz] + z[-nrz, -1] + z[-nrz, -ncz])/4
# Recode facet z-values into color indices
facetcol <- cut(zfacet,breaks=seq(zmin,zmax,(zmax-zmin)/(nbcol)),labels=c(1:nbcol))
zcol <- cut(z,breaks=seq(zmin,zmax,(zmax-zmin)/(nbcol)),labels=c(1:nbcol))
res <- persp(r.bins, N.bins, matrix(NA,nrow=length(r.bins),ncol=length(N.bins)), theta=120, phi=30, shade=0.2, ticktype="detailed", expand=0.8, col=jet.colors[facetcol], ylab="",xlab="",zlab="",zlim=c(zmin,zmax),xlim=range(r.bins),ylim=range(N.bins),nticks=5, border="black",lwd=0.1, cex.lab=1.2,ltheta = 235, lphi = 75, main="")
input2 <- input
input2$r <- input2$r + rnorm(nrow(input),mean=0,sd=0.2)
input2$N <- input2$N + rnorm(nrow(input),mean=0,sd=0.2)
input2$pred <- predict(surf,newdata=input[,c(2,1)], type="response")
input3 <- input2[which((input2$Exclusivity - input2$pred) < 0),]
for (i in 1:nrow(input3)) {
lines(trans3d(c(input3$r[i],input3$r[i]),c(input3$N[i],input3$N[i]),c(input3$Exclusivity[i],input3$pred[i]),res), pch=20,col="darkgrey")
}
points(trans3d(input3$r,input3$N,input3$Exclusivity,res), pch=20)
par(new=TRUE)
res <- persp(r.bins, N.bins, z, theta=120, phi=30, shade=0.2, ticktype="detailed", expand=0.8, col=jet.colors2[facetcol], ylab="",xlab="",zlab="",zlim=c(zmin,zmax),nticks=5, border="black",lwd=0.1, cex.lab=1.2,ltheta = 235, lphi = 75)
input3 <- input2[which((input2$Exclusivity - input2$pred) >= 0),]
for (i in 1:nrow(input3)) {
lines(trans3d(c(input3$r[i],input3$r[i]),c(input3$N[i],input3$N[i]),c(input3$Exclusivity[i],input3$pred[i]),res), pch=20,col="grey")
}
points(trans3d(input3$r,input3$N,input3$Exclusivity,res), pch=20)
text(trans3d(45,5,1.2,res), paste(figlabel[s]))
}
# Load data on model1 exclusivity run for longer (t=500), Tprob=0.5
rm(list=ls())
load(paste(getwd(), "/data/1_exclusivity_t250-tprob05-equal.RData", sep=""))
source("setup.R")
figlabel <- c("(G)", "(H)", "(I)")
# Plot surface
output <- output[which(!is.na(output[,4])),]
output <- as.data.frame(output)
output$ID <- 1:nrow(output)
r.bins <- seq(min(output$r),max(output$r),1)
N.bins <- seq(min(output$N),max(output$N),1)
s=1
{
# Data to plot: only cases when resource.size<= half of population
input <- output[which(output$tprob==tprob[s] & output$r<=(output$N/2)),]
# Fit surface
surf <- locfit(Exclusivity~lp(N,r,nn=0.09,scale=F, h=0.1,deg=1), data=input)
# Z-axis
plotcol="black"
zmax <- 1
zmin <- 0
z <- matrix(predict(surf,newdata=expand.grid(r=r.bins,N=N.bins),type="response"),nrow=length(r.bins),ncol=length(N.bins),byrow=FALSE)
N.mat <- matrix(rep(N.bins,each=length(r.bins)),ncol=length(N.bins),nrow=length(r.bins))
r.mat <- matrix(rep(r.bins,each=length(N.bins)),ncol=length(N.bins),nrow=length(r.bins),byrow=TRUE)
z[which(r.mat > (N.mat/2))] <- NA
z[which(z > 1)] <- 1
z[which(z < 0)] <- 0
minz <- min(z,na.rm=T)
nrz <- nrow(z)
ncz <- ncol(z)
# Create colors
nbcol <- 100
jet.colors <- blue2green2red(nbcol)
jet.colors2 <- add.alpha(jet.colors,alpha=0.6)
# Compute the z-value at the facet centres
zfacet <- (z[-1, -1] + z[-1, -ncz] + z[-nrz, -1] + z[-nrz, -ncz])/4
# Recode facet z-values into color indices
facetcol <- cut(zfacet,breaks=seq(zmin,zmax,(zmax-zmin)/(nbcol)),labels=c(1:nbcol))
zcol <- cut(z,breaks=seq(zmin,zmax,(zmax-zmin)/(nbcol)),labels=c(1:nbcol))
res <- persp(r.bins, N.bins, matrix(NA,nrow=length(r.bins),ncol=length(N.bins)), theta=120, phi=30, shade=0.2, ticktype="detailed", expand=0.8, col=jet.colors[facetcol], ylab="",xlab="",zlab="",zlim=c(zmin,zmax),xlim=range(r.bins),ylim=range(N.bins),nticks=5, border="black",lwd=0.1, cex.lab=1.2,ltheta = 235, lphi = 75, main="")
input2 <- input
input2$r <- input2$r + rnorm(nrow(input),mean=0,sd=0.2)
input2$N <- input2$N + rnorm(nrow(input),mean=0,sd=0.2)
input2$pred <- predict(surf,newdata=input[,c(2,1)], type="response")
input3 <- input2[which((input2$Exclusivity - input2$pred) < 0),]
for (i in 1:nrow(input3)) {
lines(trans3d(c(input3$r[i],input3$r[i]),c(input3$N[i],input3$N[i]),c(input3$Exclusivity[i],input3$pred[i]),res), pch=20,col="darkgrey")
}
points(trans3d(input3$r,input3$N,input3$Exclusivity,res), pch=20)
par(new=TRUE)
res <- persp(r.bins, N.bins, z, theta=120, phi=30, shade=0.2, ticktype="detailed", expand=0.8, col=jet.colors2[facetcol], ylab="",xlab="",zlab="",zlim=c(zmin,zmax),nticks=5, border="black",lwd=0.1, cex.lab=1.2,ltheta = 235, lphi = 75)
input3 <- input2[which((input2$Exclusivity - input2$pred) >= 0),]
for (i in 1:nrow(input3)) {
lines(trans3d(c(input3$r[i],input3$r[i]),c(input3$N[i],input3$N[i]),c(input3$Exclusivity[i],input3$pred[i]),res), pch=20,col="grey")
}
points(trans3d(input3$r,input3$N,input3$Exclusivity,res), pch=20)
text(trans3d(45,5,1.2,res), paste(figlabel[2]))
}
# Load data on model1 exclusivity run for longer (t=500), Tprob=0.8
rm(list=ls())
load(paste(getwd(), "/data/1_exclusivity_t250-tprob08-equal.RData", sep=""))
source("setup.R")
figlabel <- c("(G)", "(H)", "(I)")
# Plot surface
output <- output[which(!is.na(output[,4])),]
output <- as.data.frame(output)
output$ID <- 1:nrow(output)
r.bins <- seq(min(output$r),max(output$r),1)
N.bins <- seq(min(output$N),max(output$N),1)
s=1
{
# Data to plot: only cases when resource.size<= half of population
input <- output[which(output$tprob==tprob[s] & output$r<=(output$N/2)),]
# Fit surface
surf <- locfit(Exclusivity~lp(N,r,nn=0.09,scale=F, h=0.1,deg=1), data=input)
# Z-axis
plotcol="black"
zmax <- 1
zmin <- 0
z <- matrix(predict(surf,newdata=expand.grid(r=r.bins,N=N.bins),type="response"),nrow=length(r.bins),ncol=length(N.bins),byrow=FALSE)
N.mat <- matrix(rep(N.bins,each=length(r.bins)),ncol=length(N.bins),nrow=length(r.bins))
r.mat <- matrix(rep(r.bins,each=length(N.bins)),ncol=length(N.bins),nrow=length(r.bins),byrow=TRUE)
z[which(r.mat > (N.mat/2))] <- NA
z[which(z > 1)] <- 1
z[which(z < 0)] <- 0
minz <- min(z,na.rm=T)
nrz <- nrow(z)
ncz <- ncol(z)
# Create colors
nbcol <- 100
jet.colors <- blue2green2red(nbcol)
jet.colors2 <- add.alpha(jet.colors,alpha=0.6)
# Compute the z-value at the facet centres
zfacet <- (z[-1, -1] + z[-1, -ncz] + z[-nrz, -1] + z[-nrz, -ncz])/4
# Recode facet z-values into color indices
facetcol <- cut(zfacet,breaks=seq(zmin,zmax,(zmax-zmin)/(nbcol)),labels=c(1:nbcol))
zcol <- cut(z,breaks=seq(zmin,zmax,(zmax-zmin)/(nbcol)),labels=c(1:nbcol))
res <- persp(r.bins, N.bins, matrix(NA,nrow=length(r.bins),ncol=length(N.bins)), theta=120, phi=30, shade=0.2, ticktype="detailed", expand=0.8, col=jet.colors[facetcol], ylab="",xlab="",zlab="",zlim=c(zmin,zmax),xlim=range(r.bins),ylim=range(N.bins),nticks=5, border="black",lwd=0.1, cex.lab=1.2,ltheta = 235, lphi = 75, main="")
input2 <- input
input2$r <- input2$r + rnorm(nrow(input),mean=0,sd=0.2)
input2$N <- input2$N + rnorm(nrow(input),mean=0,sd=0.2)
input2$pred <- predict(surf,newdata=input[,c(2,1)], type="response")
input3 <- input2[which((input2$Exclusivity - input2$pred) < 0),]
for (i in 1:nrow(input3)) {
lines(trans3d(c(input3$r[i],input3$r[i]),c(input3$N[i],input3$N[i]),c(input3$Exclusivity[i],input3$pred[i]),res), pch=20,col="darkgrey")
}
points(trans3d(input3$r,input3$N,input3$Exclusivity,res), pch=20)
par(new=TRUE)
res <- persp(r.bins, N.bins, z, theta=120, phi=30, shade=0.2, ticktype="detailed", expand=0.8, col=jet.colors2[facetcol], ylab="Population size",xlab="Resource patch size",zlab="",zlim=c(zmin,zmax),nticks=5, border="black",lwd=0.1, cex.lab=1.2,ltheta = 235, lphi = 75)
input3 <- input2[which((input2$Exclusivity - input2$pred) >= 0),]
for (i in 1:nrow(input3)) {
lines(trans3d(c(input3$r[i],input3$r[i]),c(input3$N[i],input3$N[i]),c(input3$Exclusivity[i],input3$pred[i]),res), pch=20,col="grey")
}
points(trans3d(input3$r,input3$N,input3$Exclusivity,res), pch=20)
text(trans3d(45,5,1.2,res), paste(figlabel[3]))
}
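The facet-colouring step repeated in every panel above can be isolated into a short self-contained sketch. The toy surface and colour count below are illustrative, not taken from the simulation data:

```r
# persp() colours whole facets, so each facet needs a single z-value:
# average the four corner values, then cut() the averages into colour-bin indices.
z <- outer(seq(0, 1, length.out = 5), seq(0, 1, length.out = 5), "+") / 2  # toy surface in [0, 1]
nrz <- nrow(z); ncz <- ncol(z)
zfacet <- (z[-1, -1] + z[-1, -ncz] + z[-nrz, -1] + z[-nrz, -ncz]) / 4
nbcol <- 100
facetcol <- cut(zfacet, breaks = seq(0, 1, length.out = nbcol + 1),
                labels = FALSE, include.lowest = TRUE)
# persp(z = z, col = heat.colors(nbcol)[facetcol])  # one colour per facet
```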
|
/Figure_codes/FigureS2.R
|
no_license
|
sabrinatucci/Cantor-Farine-Repro-Project
|
R
| false
| false
| 22,088
|
r
|
# Yige Wu @WashU Apr 2020
## running on local
## for plotting average expression of known pathogenic pathway genes for each tumor subcluster (manually grouped)
## mTOR signalling pathway (Reactome set "165159 mTOR signalling")
# set up libraries and output directory -----------------------------------
## set working directory
baseD = "~/Box/"
setwd(baseD)
source("./Ding_Lab/Projects_Current/RCC/ccRCC_snRNA/ccRCC_snRNA_analysis/ccRCC_snRNA_shared.R")
source("./Ding_Lab/Projects_Current/RCC/ccRCC_snRNA/ccRCC_snRNA_analysis/plotting.R")
## set run id
version_tmp <- 1
run_id <- paste0(format(Sys.Date(), "%Y%m%d") , ".v", version_tmp)
## set output directory
dir_out <- paste0(makeOutDir(), run_id, "/")
dir.create(dir_out)
# input dependencies ------------------------------------------------------
## input scaled average expression
scaled_avgexp_df <- fread(input = "./Ding_Lab/Projects_Current/RCC/ccRCC_snRNA/Resources/Analysis_Results/recluster/recluster_cell_groups_in_individual_samples/recluster_nephron_epithelium/other/scale_averageexpression/20200407.v1/averageexpression_tumor_cells_by_manual_subcluster.scaled.20200407.v1.tsv", data.table = F)
## input reactome pathway
load(file = "./Ding_Lab/Projects_Current/RCC/ccRCC_snRNA/Resources/Gene_Lists/2015-08-01_Gene_Set.RData")
pathway_genes <- REACT[["165159 mTOR signalling"]]
## input id meta data
id_metadata_df <- fread(input = "./Ding_Lab/Projects_Current/RCC/ccRCC_snRNA/Resources/Analysis_Results/sample_info/make_meta_data/20191105.v1/meta_data.20191105.v1.tsv", data.table = F)
## input cnv fraction by chr
frac_cnv_wide_df <- fread(input = "./Ding_Lab/Projects_Current/RCC/ccRCC_snRNA/Resources/Analysis_Results/copy_number/summarize_cnv_fraction/estimate_fraction_of_tumorcells_with_expectedcnv_perchrregion_per_manualsubcluster_using_cnvgenes/20200407.v1/fraction_of_tumorcells.expectedCNA.by_chr_region.by_manual_subcluster.20200407.v1.tsv", data.table = F)
rownames(frac_cnv_wide_df) <- frac_cnv_wide_df$manual_cluster_name
# make data matrix for heatmap body ---------------------------------------
scaled_avgexp_df <- scaled_avgexp_df[rowSums(!is.na(scaled_avgexp_df)) > 1,]
## format the column names to only aliquot id
data_col_names <- colnames(scaled_avgexp_df)[-1]
data_col_names.changed <- str_split_fixed(string = data_col_names, pattern = "\\.", n = 2)[,2]
## rename the data frame
colnames(scaled_avgexp_df) <- c("gene", data_col_names.changed)
## filter out unwanted manual clusters
data_col_names.keep <- data_col_names.changed[!grepl(pattern = "MCNA", x = data_col_names.changed)]
## reformat data frame to matrix
plot_data_df <- scaled_avgexp_df
plot_data_mat <- as.matrix(plot_data_df[,data_col_names.keep])
plot_data_mat %>% head()
## add row names
rownames(plot_data_mat) <- plot_data_df$gene
plot_data_mat %>% head()
### get aliquot ids and case ids
tumorsubcluster_ids <- colnames(plot_data_mat)
aliquot_ids <- str_split_fixed(string = tumorsubcluster_ids, pattern = "_", n = 2)[,1]
case_ids <- mapvalues(x = aliquot_ids, from = id_metadata_df$Aliquot.snRNA, to = as.vector(id_metadata_df$Case))
# make top column annotation --------------------------------------------------
## make annotation data frame with copy number profile first
top_col_anno_df_wide <- frac_cnv_wide_df
top_col_anno_df <- top_col_anno_df_wide[,-1]
rownames(top_col_anno_df) <- top_col_anno_df_wide$manual_cluster_name
top_col_anno_df <- top_col_anno_df[colnames(plot_data_mat),]
top_col_anno_df[is.na(top_col_anno_df)] <- 0
### make top column annotation object
top_col_anno = HeatmapAnnotation(Fraction_Cells_With_14q_Loss = anno_barplot(x = top_col_anno_df$`14q`,
gp = gpar(fill = 1, col = 1)),
show_legend = T)
# make bottom column annotation -------------------------------------------
bottom_col_anno = HeatmapAnnotation(foo = anno_text(case_ids,
location = 0.5, just = "center",
gp = gpar(fill = uniq_case_colors[case_ids], col = "white", border = "black"),
width = max_text_width(case_ids)*1.2))
# plot pathway genes ------------------------------------------------------
## get the subset of genes
plot_genes_tmp <- REACT[["165159 mTOR signalling"]]
# plot_genes_tmp <- c(plot_genes_tmp, as.vector(ccrcc_cna_genes_df$gene_symbol[ccrcc_cna_genes_df$chr_region == "3p"]))
plot_genes_tmp <- intersect(plot_genes_tmp, plot_data_df$gene)
## get the subset of plot data
plot_data_mat_tmp <- plot_data_mat[plot_genes_tmp,]
## make function for colors
heatmapbody_color_fun <- colorRamp2(c(quantile(plot_data_mat_tmp, 0.1, na.rm=T),
quantile(plot_data_mat_tmp, 0.5, na.rm=T),
quantile(plot_data_mat_tmp, 0.9, na.rm=T)),
c("blue", "white", "red"))
p <- Heatmap(matrix = plot_data_mat_tmp,
col = heatmapbody_color_fun,
bottom_annotation = bottom_col_anno,
top_annotation = top_col_anno,
show_heatmap_legend = T)
p
## save heatmap
png(filename = paste0(dir_out, "avg_exp.mtor_pathway.by_tumor_manualsubcluster.heatmap.", run_id, ".png"),
width = 4500, height = 1500, res = 150)
### combine heatmap and heatmap legend
draw(object = p,
annotation_legend_side = "right")
dev.off()
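A minimal sketch of the quantile-anchored diverging scale used for the heatmap body above; anchoring the ramp at the 10th/50th/90th percentiles (rather than min/median/max) keeps a few extreme values from compressing the rest of the scale. The random data are illustrative; `colorRamp2()` comes from the circlize package, which the sourced setup scripts are assumed to attach:

```r
library(circlize)
set.seed(1)
x <- rnorm(1000)
# diverging blue-white-red ramp anchored at the 10th, 50th and 90th percentiles
col_fun <- colorRamp2(quantile(x, c(0.1, 0.5, 0.9)), c("blue", "white", "red"))
col_fun(quantile(x, 0.5))  # the median maps to (near) white
```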
|
/tumor_subcluster/plotting/heatmap/heatmap_tumor_manualsubcluster_mtorpathway.R
|
no_license
|
ding-lab/ccRCC_snRNA_analysis
|
R
| false
| false
| 5,458
|
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/PASWR-package.R
\docType{data}
\name{Formula1}
\alias{Formula1}
\title{Pit Stop Times}
\format{
A data frame with 10 observations on the following 3 variables:
\describe{
\item{Race}{number corresponding to a race site}
\item{Team1}{pit stop times for team one}
\item{Team2}{pit stop times for team two}
}
}
\source{
Ugarte, M. D., Militino, A. F., and Arnholt, A. T. (2008)
\emph{Probability and Statistics with R}. Chapman & Hall/CRC.
}
\description{
Pit stop times for two teams at 10 randomly selected Formula 1 races
}
\examples{
with(data = Formula1,
boxplot(Team1, Team2))
}
\keyword{datasets}
| /man/Formula1.Rd | no_license | alanarnholt/PASWR | R | is_vendor: false | is_generated: true | 685 bytes | rd |
#' @title Create input indicator(s)
#'
#' @description The function creates the input indicators from the inputs and
#' the outputs.
#' @param data_table A symmetric input-output table, a use table,
#' a margins or tax table retrieved by the \code{\link{iotable_get}}
#' function.
#' @param input_row The name of input(s) for which you want to create the
#' indicator(s). Must be present in the \code{data_table}.
#' @param households If the households column should be added,
#' defaults to \code{FALSE}.
#' @param digits Rounding digits, if omitted, no rounding takes place.
#' @param indicator_names The names of the new indicators. Defaults to \code{NULL},
#'    in which case the names in the key column of \code{input_matrix} are used to
#'    create the indicator names.
#' @return A tibble (data frame) containing the \code{input_matrix} divided by the \code{output_vector}
#' with a key column for products or industries.
#' @importFrom dplyr mutate across
#' @family indicator functions
#' @examples
#' input_indicator_create( data_table = iotable_get(),
#' input_row = c("gva", "compensation_employees"),
#' digits = 4,
#' indicator_names = c("GVA indicator", "Income indicator"))
#' @export
input_indicator_create <- function ( data_table,
input_row = c('gva_bp','net_tax_production'),
digits = NULL,
households = FALSE,
indicator_names = NULL) {
data_table <- data_table %>%
mutate(across(where(is.factor), as.character))
cm <- coefficient_matrix_create( data_table = data_table,
households = households )
  key_column <- tolower(as.character(unlist(cm[, 1])))
  inputs_present <- which(key_column %in% tolower(input_row))
if ( length(inputs_present) == 0 ) {
stop ( "The inputs were not found")
} else if ( length(inputs_present) < length(input_row)) {
not_found <- chars_collapse(input_row [! input_row %in% key_column[inputs_present]])
input_msg <- chars_collapse(input_row)
warning ( glue::glue("In input_indicator_create(data_table, input_row = {input_msg}) the rows {not_found} were not found in the data_table."))
}
input_matrix <- cm[inputs_present, ]
final_names <- NULL
  if (!is.null(indicator_names)) { # adding custom names, if provided
if ( length(indicator_names) == nrow ( input_matrix) ) {
final_names <- indicator_names
} else {
      warning('The number of new indicator names differs from the number of indicators; default names are used.')
}
}
if ( is.null(final_names)) { #creating default names
final_names <- paste0(as.character(unlist(input_matrix[,1])), "_indicator")
}
input_matrix[,1] <- final_names
if ( !is.null(digits)) matrix_round (input_matrix, digits) else input_matrix
}
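A package-free sketch of what the indicator computation amounts to: each input row is divided element-wise by the industry output, which is exactly what a row of the coefficient matrix contains. All names and numbers below are hypothetical toy data, not iotables structures:

```r
# Toy input-indicator computation (base R only, hypothetical numbers):
# divide each input row by the total output of each industry (column).
inputs <- rbind(gva = c(30, 45), net_tax = c(5, 10))
output <- c(100, 150)                 # total output per industry
indicator <- sweep(inputs, 2, output, "/")
indicator["gva", 1]                   # 30 / 100 = 0.3
```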
| /R/input_indicator_create.R | permissive | cran/iotables | R | is_vendor: false | is_generated: false | 3,104 bytes | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/wiki_graph.R
\docType{data}
\name{wiki_data}
\alias{wiki_data}
\title{Dataset creation for wiki graph.}
\format{A data frame with 18 rows and 3 variables:
\describe{
\item{v1}{first vertex of the edge}
\item{v2}{second vertex of the edge}
\item{w}{weight of the edge}
}}
\source{
\url{https://en.wikipedia.org/wiki/Graph}
}
\usage{
wiki_data
}
\description{
A dataset containing the edges of the graph (from v1 to v2) with the weight
of each edge (w)
}
\keyword{datasets}
| /man/wiki_data.Rd | permissive | MuhammadFaizanKhalid/euclidspackage | R | is_vendor: false | is_generated: true | 530 bytes | rd |
##--------------------------------##
## Small ODE example: EnvZ/OmpR ##
##--------------------------------##
rm(list = ls())
library(episode)
library(ggplot2); library(reshape)
library(igraph)
source("../ggplot_theme.R")
## Define system ##
x0 <- c('(EnvZ-P)OmpR' = 4,
'EnvZ(OmpR-P)' = 5,
'EnvZ-P' = 6,
'EnvZ' = 7,
'OmpR' = 8,
'OmpR-P' = 9)
set.seed(27 + 09)
x0 <- c('(EnvZ-P)OmpR' = 0,
'EnvZ(OmpR-P)' = 0,
'EnvZ-P' = 0,
'EnvZ' = 0,
'OmpR' = 0,
'OmpR-P' = 0)
x0[] <- runif(6, min = 5, max = 10)
A <- matrix(c(0, 0, 1, 0, 1, 0,
1, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0,
0, 0, 0, 1, 0, 1,
0, 1, 0, 0, 0, 0,
0, 1, 0, 0, 0, 0,
0, 0, 0, 1, 0, 0,
0, 0, 1, 0, 0, 0),
byrow = TRUE, ncol = 6)
B <- matrix(c(1, 0, 0, 0, 0, 0,
0, 0, 1, 0, 1, 0,
0, 0, 0, 1, 0, 1,
0, 1, 0, 0, 0, 0,
0, 0, 0, 1, 0, 1,
0, 0, 0, 1, 1, 0,
0, 0, 1, 0, 0, 0,
0, 0, 0, 1, 0, 0),
byrow = TRUE, ncol = 6)
k <- c('k1' = 0,
'k-1' = 0,
'kt' = 0,
'k2' = 0,
'k-2' = 0,
'kp' = 0,
'kk' = 0,
'k-k' = 0)
k[] <- rnorm(8, 3)
colnames(B) <- colnames(A) <- names(x0)
rownames(B) <- rownames(A) <- names(k)
m <- mak(A = A, B = B)
ti <- seq(0, 1, by = 0.01)
sc <- cbind(1, 1, c(rep(0, 3), rep(1, 5)), c(rep(1, 3), rep(0, 3), rep(1, 2)))
trajs <- numsolve(m, time = rep(ti, 4),
x0 = cbind(x0, 1.5 * sample(x0), x0, x0),
param = k * sc)
trajs <- lapply(split(trajs, c(0, cumsum(diff(trajs[, 1]) < 0))),
matrix, ncol = ncol(trajs))
trajs <- lapply(trajs, function(tt) {colnames(tt) <- c("Time", names(x0)); tt})
# original system, perturbed system, intervened 1, intervened 2
# ## trajectory
# ggplot(melt(data.frame(trajs[[2]]), id.vars = "Time"), aes(x = Time, y = value, color = variable)) +
# geom_line()
library(Matrix)
## true network
netw <- field(m, x = x0, param = k, differentials = TRUE)$f_dx != 0
diag(netw) <- FALSE
netw
m
gg <- graph_from_adjacency_matrix(t(netw))
plot(gg, layout = layout_(gg, in_circle()), vertex.shape = rep("circle", 6),
vertex.color = "skyblue", edge.arrow.size=.5)
## larger frame model: x -> y and x + y -> z and z -> x + y
d <- ncol(A)
A <- B <- matrix(0, ncol = d, nrow = 0)
for (x in seq_len(d)) {
for (y in setdiff(seq_len(d), x)) {
A_ <- matrix(0, nrow = 1 + d - 2, ncol = d)
B_ <- matrix(0, nrow = 1 + d - 2, ncol = d)
A_[1, x] <- 1
B_[1, y] <- 1
s <- 1
for (z in setdiff(seq_len(d), c(x, y))) {
      s <- s + 1
if (x > y) {
A_[s, c(x, y)] <- 1
B_[s, z] <- 1
} else {
B_[s, c(x, y)] <- 1
A_[s, z] <- 1
}
}
A <- rbind(A, A_)
B <- rbind(B, B_)
}
}
## Sample data and plot data
data <- trajs
data <- lapply(data, function(dat) {
dat[, -1] <- dat[, -1] + matrix(rnorm(length(dat[, -1]), sd = 1), nrow = nrow(dat))
dat[0:30 * 3 + 1, ]
})
## plot data
ii <- 0
data_ <- lapply(data, function(dat) {
ii <<- ii + 1
cbind(dat, Type = ii)
})
dat <- data.frame(do.call(rbind, data_))
names(dat) <- c("Time", names(x0), "Type")
dat$Type <- factor(dat$Type)
levels(dat$Type) <- c("Original System", "Perturbed System", "(EnvZ-P)OmpR Inhibited", "EnvZ(OmpR-P) Inhibited")
gp <- ggplot(melt(dat, id.vars = c("Time", "Type")),
aes(x = Time, y = value, color = variable)) +
geom_point() + geom_line() + ylab("Abundance") +
scale_color_discrete("") + facet_wrap(~Type) + xlab("Time [min]") +
theme_bw() +
theme(axis.text.x = element_text(size = 10),
axis.text.y = element_text(size = 10),
axis.title = element_text(size = 17, face = "bold"),
strip.text.x = element_text(size = 15),
strip.text.y = element_text(size = 15),
legend.text = element_text(size = 12),
legend.title = element_text(size = 13),
plot.title = element_text(hjust = 0.5))
pdf(paste0("../figures/EnvZ_data.pdf"), height = 6, width = 9)
print(gp)
dev.off()
## Recovery
sc_full <- cbind(1, 1,
!apply(cbind(A[, c(1)], (B - A)[, c(1)]) != 0, 1, any),
!apply(cbind(A[, c(2)], (B - A)[, c(2)]) != 0, 1, any))
## Recovery via AIM ##
m_full <- mak(A, B)
m_full
x_smooth <- lapply(data, function(dat) {
cbind(Time = dat[, 1], apply(dat[, -1], 2, function(y) {
ll <- loess(y ~ x, data.frame(y = y, x = dat[, 1]),
family = "symmetric")
predict(ll, newdata = data.frame(x = dat[, 1]))
}))
})
## Perturbed
a_per <- aim(m_full,
op = opt(do.call(rbind, data[c(1, 2)])),
x = do.call(rbind, x_smooth[c(1, 2)]))#, adapts = NULL)
## Intervened
a_int <- aim(mak(A, B,
r = reg(contexts = sc_full[, c(3, 4)])),
op = opt(do.call(rbind, data[c(3, 4)])),
x = do.call(rbind, x_smooth[c(3, 4)]))#, adapts = NULL)
## Both
a_bot <- aim(mak(A, B,
r = reg(contexts = sc_full)),
op = opt(do.call(rbind, data)),
x = do.call(rbind, x_smooth))
aas <- list(per = a_per, int = a_int, bot = a_bot)
rods <- lapply(aas, rodeo)
# No refitting, since only one smoother
netw_guess <- lapply(aas, function(a) {
sel <- which.max(Matrix::colSums(a$params$rate!=0) >= 8)
netw_guess <- field(m_full, x = x0, param = a$params[[1]][, sel],
differentials = TRUE)$f_dx != 0
diag(netw_guess) <- FALSE
netw_guess
})
## igraph it
gg_s <- lapply(netw_guess, function(ntw) {
gg_guess <- graph_from_adjacency_matrix(t(ntw))
gg_u <- union(gg, gg_guess)
E(gg_guess)
E(gg_u)$weight <- as.numeric((attr(E(gg_u), "vnames") %in% attr(E(gg_guess), "vnames")) &
(attr(E(gg_u), "vnames") %in% attr(E(gg), "vnames"))) -
as.numeric((attr(E(gg_u), "vnames") %in% attr(E(gg_guess), "vnames")) &
!(attr(E(gg_u), "vnames") %in% attr(E(gg), "vnames")))
E(gg_u)$color <- rep('gray', length(E(gg_u)))
E(gg_u)$color[E(gg_u)$weight == 1] <- 'green'
E(gg_u)$color[E(gg_u)$weight == 0] <- 'gray'
E(gg_u)$color[E(gg_u)$weight == -1] <- 'red'
gg_u
})
lay <- layout_on_grid(gg)
igraph_options(vertex.size = 65, vertex.label.cex = .95, edge.arrow.size = 1)
lay[, 2] <- c(.25, 0, .25, .75, 1, .75)
pdf(paste0("../figures/EnvZ_est_per.pdf"), height = 7, width = 10)
plot(gg_s$per, layout = lay,
vertex.shape = rep("crectangle", 6),
vertex.color = "white")
dev.off()
pdf(paste0("../figures/EnvZ_est_int.pdf"), height = 7, width = 10)
plot(gg_s$int, layout = lay,
vertex.shape = rep("crectangle", 6),
vertex.color = "white")
dev.off()
pdf(paste0("../figures/EnvZ_est_both.pdf"), height = 7, width = 10)
plot(gg_s$bot, layout = lay,
vertex.shape = rep("crectangle", 6),
vertex.color = "white")
dev.off()
pdf(paste0("../figures/EnvZ_truth.pdf"), height = 7, width = 10)
plot(gg, layout = lay,
vertex.shape = rep("crectangle", 6),
vertex.color = "white")
dev.off()
# figure of ranking #
xs <- sapply(seq(.25, 1.25, length.out = 5), function(sp) {
lapply(data, function(dat) {
cbind(Time = dat[, 1], apply(dat[, -1], 2, function(y) {
ll <- loess(y ~ x, data.frame(y = y, x = dat[, 1]), span = sp,
family = "symmetric")
predict(ll, newdata = data.frame(x = dat[, 1]))
}))
})
}, simplify = FALSE)
as_both <- lapply(xs, function(x_smooth) {
a <- aim(mak(A, B,
r = reg(contexts = sc_full)),
op = opt(do.call(rbind, data)),
x = do.call(rbind, x_smooth))
rod <- rodeo(a)
rod
})
ntws <- lapply(as_both, function(rod) {
apply(rod$params$rate, 2, function(k) {
netw_guess <- field(m_full, x = x0, param = k,
differentials = TRUE)$f_dx != 0
diag(netw_guess) <- FALSE
netw_guess
})
})
## all lambda sequences are equivalent (up to a scale)
common_lambda <- as_both[[3]]$op$lambda
ss <- 0
rank <- lapply(as_both[-1], function(rod) {
ss <<- ss + 1
cbind(loss = rod$loss, df = apply(rod$params$rate != 0, 2, sum), coor = seq_along(rod$loss), smooth = ss, lambda = common_lambda)[1:18, ]
})
rank <- do.call(rbind, rank)
choice <- sapply(sort(unique(rank[, "df"])), function(df) {
red <- rank[rank[, "df"] == df, , drop = FALSE]
red[which.min(red[, "loss"]), c("coor", "smooth")]
})
dfr <- data.frame(rank)
dfr$opt <- apply(rank[, c("coor", "smooth")], 1, function(x) {
any(apply(choice == x, 2, all))
})
head(dfr)
dfr$label <- rep("", nrow(dfr))
dfr$label[dfr$opt] <- dfr$df[dfr$opt]
cbPalette <- c("#E69F00", "#56B4E9", "#009E73", "#CC79A7")
dfr$smooth <- factor(dfr$smooth)
dfr1 <- dfr
dfr1$smooth <- factor(dfr1$smooth, levels = rev(levels(dfr1$smooth)))
g1 <- ggplot(dfr1, aes(x = log(lambda), y = smooth, fill = smooth, alpha = df, color = opt)) +
geom_tile(width = 0.365, height = 0.9, size = 1) +
ggplot_theme + ylab("Smoother") + xlab(expression(log(lambda))) +
geom_text(aes(label = label), alpha = 1) +
scale_fill_manual(values = cbPalette, guide = FALSE) +
theme(legend.position = "right") +
scale_alpha_continuous(name = "Number of\nparameters") +
scale_color_manual(values = c("white", "black"), guide = FALSE)
g1
dfr1$smooth <- factor(dfr1$smooth, levels = rev(levels(dfr1$smooth)))
g2 <- ggplot(dfr1, aes(x = df, y = loss, color = smooth)) +
geom_point(aes(shape = opt), size = 2, stroke = 2) + ggplot_theme +
theme(legend.position = "right") +
xlab("Number of parameters") + ylab("Loss") +
scale_shape_manual(values = c(4, 1), guide = FALSE) +
scale_color_manual(values = rev(cbPalette), name = "Smoother ")
g2
library(gridExtra)
gg <- grid.arrange(g1, g2)
pdf(paste0("../figures/Alg_visual.pdf"), height = 7, width = 12)
grid.arrange(g1, g2)
dev.off()
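The `x_smooth`/`xs` steps above fit a robust loess per coordinate before running AIM. A stand-alone version of that smoothing step on a toy signal (base R only; the signal and noise level here are made up):

```r
# Per-coordinate robust loess smoothing, as applied column-wise above.
set.seed(2709)
tt <- seq(0, 1, length.out = 31)
y  <- sin(2 * pi * tt) + rnorm(length(tt), sd = 0.1)
fit  <- loess(y ~ x, data.frame(y = y, x = tt), family = "symmetric")
yhat <- predict(fit, newdata = data.frame(x = tt))
```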
| /SimStudies/SimF_EnvZOmpR/main.R | no_license | nielsrhansen/SLODE | R | is_vendor: false | is_generated: false | 9,948 bytes | r |
AUTH_SCOPES = c('https://www.googleapis.com/auth/cloud-platform')
GCS_PATH_SEPARATOR = '/'
#' Access datasets from Google Cloud Storage
#'
#' Helper functions for loading datasets from Google Cloud Storage (GCS). For
#' tabular data, it provides functions that can be used in concert with
#' \link[megautils]{import}.
#'
#' \describe{
#' \item{`gcs_data()`}{loads and parses data from GCS.}
#' \item{`gcs_table()`}{lazy table reference for using GCS objects with
#' \link{import}.}
#' \item{`gcs_auth()`}{thin wrapper over \link{gargle} and \link{googleAuthR}
#' for facilitating user-based authentication.}
#' }
#'
#' @param bucket a valid GCS bucket name.
#'
#' @param path a path pointing to a dataset within the GCS bucket.
#'
#' @param loader a post-processing function (e.g. \link{read_csv}) which
#' transforms the downloaded file into usable data. When used with
#' \link[megautils]{import}, `loader` must return an object with S3 class
#' `data.frame` (e.g. a \link{tibble}).
#'
#' @param name an optional object name used by `gcs_table()`; when `NULL`, it
#'   is derived from the file name in `path`.
#'
#' @examples
#'
#' \dontrun{
#' # The typical use case is importing datasets into notebooks locally.
#' gcs_auth(email = 'user@example.com')
#'
#' # Places the contents of cities.csv into global object "cities", and
#' # stores the data into local cache for future use.
#' import(
#' gcs_table(
#' bucket = 'somebucket.example.com',
#' path = '/datasets/cities.csv',
#' loader = read_csv
#' )
#' )
#' }
#'
#'
#' @export
gcs_data <- function(bucket, path, loader) {
tryCatch({
target <- file.path(tempdir(), gcs_basename(path))
if(!googleCloudStorageR::gcs_get_object(
object_name = path,
bucket = bucket,
saveToDisk = target)) {
stop(g('Failed to download {path} from bucket {bucket}.'))
}
loader(target)
}, finally = {
file.remove(target)
})
}
#' @rdname gcs_data
#' @export
gcs_table <- function(bucket, path, loader, name = NULL) {
if (is.null(name)) {
name <- gsub(pattern = '\\.[^\\.]+$', replacement = '', x = gcs_basename(path))
}
obj <- list(
bucket = bucket,
path = path,
loader = loader,
name = name
)
class(obj) <- c('gcs_table', class(obj))
obj
}
#' @rdname gcs_data
#' @export
materialize.gcs_table <- function(reference) {
do.call(gcs_data, reference[1:(length(reference) - 1)])
}
#' @rdname gcs_data
#' @export
gcs_auth <- function(...) {
token <- gargle::token_fetch(scopes = AUTH_SCOPES, ...)
googleAuthR::gar_auth(scopes = AUTH_SCOPES, token)
}
gcs_basename <- function(path) {
parts <- stringr::str_split(path, GCS_PATH_SEPARATOR, simplify = TRUE)
parts[length(parts)]
}
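An offline sketch of the default-name derivation inside `gcs_table()`: take the last path component (as `gcs_basename()` does), then strip the final extension. The example path is hypothetical:

```r
# How gcs_table() derives a default object name from a GCS path:
path  <- "datasets/cities.csv"
parts <- strsplit(path, "/", fixed = TRUE)[[1]]
base  <- parts[length(parts)]           # "cities.csv"
name  <- gsub("\\.[^\\.]+$", "", base)  # "cities"
```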
| /R/gcs.R | permissive | gmega/megautils | R | is_vendor: false | is_generated: false | 2,699 bytes | r |
options(servr.daemon = interactive(),
blogdown.YAML.empty = TRUE,
blogdown.author = 'Alexander C. Hungerford',
blogdown.ext = '.Rmd',
blogdown.subdir = 'post')
| /.Rprofile | permissive | achungerford/rsite | R | is_vendor: false | is_generated: false | 176 bytes | rprofile |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/gdfpd_get_inflation_data.R
\name{gdfpd.get.inflation.data}
\alias{gdfpd.get.inflation.data}
\title{Downloads and read inflation data from github}
\usage{
gdfpd.get.inflation.data(inflation.index, do.cache)
}
\arguments{
\item{inflation.index}{Sets the inflation index to use for finding inflation-adjusted values of all reports. Possible values: 'dollar' (default) or 'IPCA', the main Brazilian inflation index.
When using 'IPCA', the base date is set as the last date found in the DFP dataset.}
\item{do.cache}{Logical controlling whether to use a cache system. Default = TRUE}
}
\value{
A dataframe with inflation data
}
\description{
Inflation data is available at git repo 'msperlin/GetITRData_auxiliary'
}
\examples{
\dontrun{ # keep cran check fast
df.inflation <- gdfpd.get.inflation.data('IPCA')
str(df.inflation)
}
}
| /man/gdfpd.get.inflation.data.Rd | no_license | msperlin/GetDFPData | R | is_vendor: false | is_generated: true | 920 bytes | rd |
c DCNF-Autarky [version 0.0.1].
c Copyright (c) 2018-2019 Swansea University.
c
c Input Clause Count: 2133
c Performing E1-Autarky iteration.
c Remaining clauses count after E-Reduction: 2133
c
c Input Parameter (command line, file):
c input filename QBFLIB/Jordan-Kaiser/reduction-finding-full-set-params-k1c3n4/eequery_query64_1344n.qdimacs
c output filename /tmp/dcnfAutarky.dimacs
c autarky level 1
c conformity level 0
c encoding type 2
c no.of var 971
c no.of clauses 2133
c no.of taut cls 0
c
c Output Parameters:
c remaining no.of clauses 2133
c
c QBFLIB/Jordan-Kaiser/reduction-finding-full-set-params-k1c3n4/eequery_query64_1344n.qdimacs 971 2133 E1 [] 0 99 869 2133 NONE
|
/code/dcnf-ankit-optimized/Results/QBFLIB-2018/E1/Experiments/Jordan-Kaiser/reduction-finding-full-set-params-k1c3n4/eequery_query64_1344n/eequery_query64_1344n.R
|
no_license
|
arey0pushpa/dcnf-autarky
|
R
| false
| false
| 710
|
r
|
c DCNF-Autarky [version 0.0.1].
c Copyright (c) 2018-2019 Swansea University.
c
c Input Clause Count: 2133
c Performing E1-Autarky iteration.
c Remaining clauses count after E-Reduction: 2133
c
c Input Parameter (command line, file):
c input filename QBFLIB/Jordan-Kaiser/reduction-finding-full-set-params-k1c3n4/eequery_query64_1344n.qdimacs
c output filename /tmp/dcnfAutarky.dimacs
c autarky level 1
c conformity level 0
c encoding type 2
c no.of var 971
c no.of clauses 2133
c no.of taut cls 0
c
c Output Parameters:
c remaining no.of clauses 2133
c
c QBFLIB/Jordan-Kaiser/reduction-finding-full-set-params-k1c3n4/eequery_query64_1344n.qdimacs 971 2133 E1 [] 0 99 869 2133 NONE
|
library(tidyverse)
# ** calculate the mean, sd, cov and se **
data_15P_cal_HE_outlier_replaced <- read_csv("data/tidydata/data_15P_cal_HE_outlier_replaced.csv")
variation <- data_15P_cal_HE_outlier_replaced %>%
group_by(Sample, Time) %>%
summarise(Mean_HE = mean(HE, na.rm = TRUE),
Sd_HE = sd(HE, na.rm = TRUE),
Cov = Sd_HE / Mean_HE * 100,
Se_HE = Sd_HE/sqrt(3)) # sqrt: radical sign
# save the dataset
write_csv(variation, "analysis/variation_15P.csv")
# combine the variation with the "data_15P_cal_HE_outlier_replaced" dataset
data_15P_cal_var <- left_join(data_15P_cal_HE_outlier_replaced, variation)
# save the joined dataset
write_csv(data_15P_cal_var, "analysis/data_15P_cal_var.csv")
# ** check the high variation samples (Cov > 10) **
variation %>%
filter(Cov > 10) %>% # choose just the Cov above 10
filter(!(Time %in% c(0, 20))) %>% # remove the first two time points
group_by(Sample)
# 280 out of 2034 observations have a Cov > 10
variation %>%
filter(Cov > 10) %>%
filter(Time %in% c(1440, 1800)) # check how many Cov above 10 are at 1440 or 1800min
# just 15 observations here
|
/scripts/calculation/calculation_var_15P.R
|
no_license
|
Yuzi-00/starch-degradation
|
R
| false
| false
| 1,186
|
r
|
library(tidyverse)
# ** calculate the mean, sd, cov and se **
data_15P_cal_HE_outlier_replaced <- read_csv("data/tidydata/data_15P_cal_HE_outlier_replaced.csv")
variation <- data_15P_cal_HE_outlier_replaced %>%
group_by(Sample, Time) %>%
summarise(Mean_HE = mean(HE, na.rm = TRUE),
Sd_HE = sd(HE, na.rm = TRUE),
Cov = Sd_HE / Mean_HE * 100,
Se_HE = Sd_HE/sqrt(3)) # sqrt: radical sign
# save the dataset
write_csv(variation, "analysis/variation_15P.csv")
# combine the variation with the "data_15P_cal_HE_outlier_replaced" dataset
data_15P_cal_var <- left_join(data_15P_cal_HE_outlier_replaced, variation)
# save the joined dataset
write_csv(data_15P_cal_var, "analysis/data_15P_cal_var.csv")
# ** check the high variation samples (Cov > 10) **
variation %>%
filter(Cov > 10) %>% # choose just the Cov above 10
filter(!(Time %in% c(0, 20))) %>% # remove the first two time points
group_by(Sample)
# 280 out of 2034 observations have a Cov > 10
variation %>%
filter(Cov > 10) %>%
filter(Time %in% c(1440, 1800)) # check how many Cov above 10 are at 1440 or 1800min
# just 15 observations here
|
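A minimal standalone sketch of the per-group summary the script above computes; the toy data frame and its values are invented for illustration (the real script reads data_15P_cal_HE_outlier_replaced.csv), and `n()` is used in place of the hardcoded `sqrt(3)`:

```r
library(dplyr)

# Hypothetical toy data: 2 samples x 2 time points x 3 replicates each
toy <- data.frame(
  Sample = rep(c("A", "B"), each = 6),
  Time   = rep(c(0, 20), times = 6),
  HE     = c(10, 11, 9, 20, 21, 19, 30, 29, 31, 40, 42, 38)
)

toy %>%
  group_by(Sample, Time) %>%
  summarise(Mean_HE = mean(HE, na.rm = TRUE),
            Sd_HE   = sd(HE, na.rm = TRUE),
            Cov     = Sd_HE / Mean_HE * 100,  # coefficient of variation, %
            Se_HE   = Sd_HE / sqrt(n()),      # SE from the actual replicate count
            .groups = "drop")
```

Using `n()` rather than `sqrt(3)` generalizes the standard error to any replicate count, at the cost of differing from the original when groups contain missing values.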
# Read in the data from the text file...
data <- read.table("household_power_consumption.txt", sep=";", na.strings="?", header=TRUE)
#Subset the data desired for Feb 1st and 2nd, 2007
feb <- data[(data$Date==c("1/2/2007")| data$Date==c("2/2/2007")),]
hist(feb$Global_active_power, col="red", xlab="Global Active Power (kilowatts)",main=paste("Global Active Power"))
#copy to a png file...
dev.copy(png, file="plot1.png", height=480, width=480)
dev.off()
|
/plot1.R
|
no_license
|
susanst/ExData_Plotting1
|
R
| false
| false
| 459
|
r
|
# Read in the data from the text file...
data <- read.table("household_power_consumption.txt", sep=";", na.strings="?", header=TRUE)
#Subset the data desired for Feb 1st and 2nd, 2007
feb <- data[(data$Date==c("1/2/2007")| data$Date==c("2/2/2007")),]
hist(feb$Global_active_power, col="red", xlab="Global Active Power (kilowatts)",main=paste("Global Active Power"))
#copy to a png file...
dev.copy(png, file="plot1.png", height=480, width=480)
dev.off()
|
608f72c3c5f27301083afd596371276d trivial_query25_1344n.qdimacs 885 4022
|
/code/dcnf-ankit-optimized/Results/QBFLIB-2018/E1/Database/Jordan-Kaiser/reduction-finding-full-set-params-k1c3n4/trivial_query25_1344n/trivial_query25_1344n.R
|
no_license
|
arey0pushpa/dcnf-autarky
|
R
| false
| false
| 71
|
r
|
608f72c3c5f27301083afd596371276d trivial_query25_1344n.qdimacs 885 4022
|
plot4 <- function()
{
epc_table <- read.table("household_power_consumption.txt", sep=";", header=TRUE, stringsAsFactors = FALSE)
epc_table[,1] <- as.Date(epc_table[,1],format="%d/%m/%Y")
epc_table_subset <- with(epc_table, epc_table[(Date >= "2007-02-01" & Date <= "2007-02-02"), ])
epc_table_subset["datetime"] <- NA
epc_table_subset$datetime <- strptime(paste(epc_table_subset$Date, epc_table_subset$Time), "%Y-%m-%d %H:%M:%S", tz = "EST5EDT")
png(file="plot4.png", width=480, height=480)
par(mfrow=c(2,2))
plot(epc_table_subset$datetime, epc_table_subset$Global_active_power, type="l", xlab="", ylab="Global Active Power (kilowatts)")
plot(epc_table_subset$datetime, epc_table_subset$Voltage, type="l", xlab="datetime", ylab="Voltage")
plot(epc_table_subset$datetime, epc_table_subset$Sub_metering_1, type="l", col="black", xlab="", ylab="Energy sub metering")
lines(epc_table_subset$datetime, epc_table_subset$Sub_metering_2, type="l", col="red")
lines(epc_table_subset$datetime, epc_table_subset$Sub_metering_3, type="l", col="blue")
legend("topright", legend=c("Sub_metering_1","Sub_metering_2","Sub_metering_3"), cex=0.7, lty=1, col=c("black","red","blue"))
plot(epc_table_subset$datetime, epc_table_subset$Global_reactive_power, type="l", xlab="datetime", ylab="Global_reactive_power")
dev.off()
}
|
/plot4.R
|
no_license
|
kazkibergetic/ExData_Plotting1
|
R
| false
| false
| 1,350
|
r
|
plot4 <- function()
{
epc_table <- read.table("household_power_consumption.txt", sep=";", header=TRUE, stringsAsFactors = FALSE)
epc_table[,1] <- as.Date(epc_table[,1],format="%d/%m/%Y")
epc_table_subset <- with(epc_table, epc_table[(Date >= "2007-02-01" & Date <= "2007-02-02"), ])
epc_table_subset["datetime"] <- NA
epc_table_subset$datetime <- strptime(paste(epc_table_subset$Date, epc_table_subset$Time), "%Y-%m-%d %H:%M:%S", tz = "EST5EDT")
png(file="plot4.png", width=480, height=480)
par(mfrow=c(2,2))
plot(epc_table_subset$datetime, epc_table_subset$Global_active_power, type="l", xlab="", ylab="Global Active Power (kilowatts)")
plot(epc_table_subset$datetime, epc_table_subset$Voltage, type="l", xlab="datetime", ylab="Voltage")
plot(epc_table_subset$datetime, epc_table_subset$Sub_metering_1, type="l", col="black", xlab="", ylab="Energy sub metering")
lines(epc_table_subset$datetime, epc_table_subset$Sub_metering_2, type="l", col="red")
lines(epc_table_subset$datetime, epc_table_subset$Sub_metering_3, type="l", col="blue")
legend("topright", legend=c("Sub_metering_1","Sub_metering_2","Sub_metering_3"), cex=0.7, lty=1, col=c("black","red","blue"))
plot(epc_table_subset$datetime, epc_table_subset$Global_reactive_power, type="l", xlab="datetime", ylab="Global_reactive_power")
dev.off()
}
|
#Data has been download into working directory first
#Load the household data into R
household_data <- read.csv("household_power_consumption.txt", header =TRUE, sep = ";")
#subset of the data required for the graphs
graph_data <- subset(household_data, household_data$Date=="1/2/2007" | household_data$Date =="2/2/2007")
#convert date and time field in date and time format
graph_data$Date <- as.Date(graph_data$Date, format ="%d/%m/%Y")
graph_data$Time <- strptime(graph_data$Time, format ="%H:%M:%S")
#create first plot
png("plot1.png", width=480, height=480)
hist(as.numeric(as.character(graph_data$Global_active_power)), xlab = "Global Active Power (kilowatts)", ylab ="Frequency", col = "red",
main = "Global Active Power")
dev.off()
|
/Plot1.R
|
no_license
|
Nativim/ExData_Plotting1
|
R
| false
| false
| 781
|
r
|
#Data has been download into working directory first
#Load the household data into R
household_data <- read.csv("household_power_consumption.txt", header =TRUE, sep = ";")
#subset of the data required for the graphs
graph_data <- subset(household_data, household_data$Date=="1/2/2007" | household_data$Date =="2/2/2007")
#convert date and time field in date and time format
graph_data$Date <- as.Date(graph_data$Date, format ="%d/%m/%Y")
graph_data$Time <- strptime(graph_data$Time, format ="%H:%M:%S")
#create first plot
png("plot1.png", width=480, height=480)
hist(as.numeric(as.character(graph_data$Global_active_power)), xlab = "Global Active Power (kilowatts)", ylab ="Frequency", col = "red",
main = "Global Active Power")
dev.off()
|
library(shiny)
phasefit<-function(CumulativeTime, Response){
CosTime<-cos(2*pi*CumulativeTime / 24) # Create two new predictor variables, CosTime and SinTime
SinTime<-sin(2*pi*CumulativeTime / 24)
lm1<-lm(Response ~ CosTime + SinTime)
summary(lm1) # Response can be modelled as a linear regression
lm.coef<-coef(lm1) # Extract regression coefficients
names(lm.coef)<-NULL # Remove names from the coefficients
mesor<-lm.coef[1] # mean level = average value
A<-lm.coef[2]
B<-lm.coef[3]
amplitude<-sqrt(A^2+B^2) # amplitude = peak - average, average - nadir
acrophase<-atan2(-B, A) # angle at which peak occurs
if(-1*acrophase>0) acrotime<- (-1*acrophase*24/(2*pi))
# if negative # then subtract from 24 to obtain acrophase in 0-24 range
if(-1*acrophase<=0) acrotime<-24-1*acrophase*24/(2*pi)
r.squared<-summary(lm1)$adj.r.squared
pval<-anova(lm1)$`Pr(>F)`[1]
phasevalues<-c(mesor, amplitude, acrotime, r.squared, pval)
names(phasevalues)<-c("mesor","amplitude","acrotime","r2","pval")
newtime<-seq(min(CumulativeTime), max(CumulativeTime),1)
newtime<-seq(0,max(CumulativeTime),0.1)
sintime<-sin(2*pi*newtime / 24)
costime<-cos(2*pi*newtime / 24)
fitresponse<-mesor+A*costime+B*sintime
preds <- predict(lm1, interval = 'confidence', type="response")
fits<-fitted(lm1)
results<-list(mesor=mesor, amplitude=amplitude, acrotime=acrotime,
acrophase=acrophase, r.squared=r.squared,
pval=pval, newtime=newtime, fitresponse=fitresponse, fits=fits)
return(results)
}
shinyServer(function(input, output, session){
# Session is required to make the selectInput update after a file is uploaded
# input$file1 will be NULL initially. After the user selects and uploads a
# file, it will be a data frame with 'name', 'size', 'type', and 'datapath'
# columns. The 'datapath' column will contain the local filenames where the
# data can be found.
url <- a("Sample File", href="https://raw.githubusercontent.com/gtatters/CosinorFit/master/AutorhythmSample.csv")
#diagurl<-a("Help with Diagnostics", href="https://data.library.virginia.edu/diagnostic-plots/")
output$tab <- renderUI({
tagList("", url)
})
# output$diag <- renderUI({
# tagList("", diagurl)
# })
contentsrea <- reactive({
inFile <- input$file1
if (is.null(inFile))
return(NULL)
read.csv(inFile$datapath, header=input$header)
})
output$contents <- renderTable({
contentsrea()
})
observe({
updateSelectInput(session, "xcol", choices = names(contentsrea()))
})
observe({
updateSelectInput(session, "ycol", choices = names(contentsrea()))
})
selectedData <- reactive({
inFile <- input$file1
if (is.null(inFile))
return(NULL)
d<-read.csv(inFile$datapath, header=input$header)
d[, c(input$xcol, input$ycol)]
})
output$mainPlot <- renderPlot({
inFile <- input$file1
if (is.null(inFile))
return(NULL)
d<-read.csv(inFile$datapath, header=input$header)
d<-d[, c(input$xcol, input$ycol)]
CumulativeTime <- d[,1]
Hour <- CumulativeTime - floor(CumulativeTime/24)*24
Response <- d[,2]
CosTime<-cos(2*pi*CumulativeTime / 24) # Create two new predictor variables, CosTime and SinTime
SinTime<-sin(2*pi*CumulativeTime / 24)
lm1<-lm(Response ~ CosTime + SinTime)
summary(lm1) # Response can be modelled as a linear regression
lm.coef<-coef(lm1) # Extract regression coefficients
names(lm.coef)<-NULL # Remove names from the coefficients
mesor<-lm.coef[1] # mean level = average value
A<-lm.coef[2]
B<-lm.coef[3]
amplitude<-sqrt(A^2+B^2) # amplitude = peak - average, average - nadir
acrophase<-atan2(-B, A) # angle at which peak occurs
if(-1*acrophase>0) acrotime<- (-1*acrophase*24/(2*pi))
# if negative # then subtract from 24 to obtain acrophase in 0-24 range
if(-1*acrophase<=0) acrotime<-24-1*acrophase*24/(2*pi)
r.squared<-summary(lm1)$adj.r.squared
pval<-anova(lm1)$`Pr(>F)`[1]
phasevalues<-c(mesor, amplitude, acrotime, r.squared, pval)
names(phasevalues)<-c("mesor","amplitude","acrotime","r2","pval")
newtime<-seq(min(CumulativeTime), max(CumulativeTime),1)
newtime<-seq(0,max(CumulativeTime),0.1)
sintime<-sin(2*pi*newtime / 24)
costime<-cos(2*pi*newtime / 24)
fitresponse<-mesor+A*costime+B*sintime
preds <- predict(lm1, interval = 'confidence', type="response")
fits<-fitted(lm1)
ind<-order(Hour)
plot(Hour[ind], Response[ind], xaxp=c(0,24,4), pch=20,
xlab=input$xcol, ylab=input$ycol)
#lines(Hour[ind], Response[ind])
lines(newtime, fitresponse, lwd=3, col="grey")
abline(mesor, 0, lwd=2, lty=2, col="black")
abline(mesor+amplitude, 0, lwd=1, lty=2, col="red")
abline(mesor-amplitude, 0, lwd=1, lty=2, col="blue")
rect(acrotime, min(Response), acrotime, max(Response), lwd=3, lty=2)
})
output$residPlot <- renderPlot({
inFile <- input$file1
if (is.null(inFile))
return(NULL)
d<-read.csv(inFile$datapath, header=input$header)
d<-d[, c(input$xcol, input$ycol)]
CumulativeTime <- d[,1]
Hour <- CumulativeTime - floor(CumulativeTime/24)*24
Response <- d[,2]
CosTime<-cos(2*pi*CumulativeTime / 24) # Create two new predictor variables, CosTime and SinTime
SinTime<-sin(2*pi*CumulativeTime / 24)
lm1<-lm(Response ~ CosTime + SinTime)
oldpar<-par(no.readonly = TRUE)
par(mfrow=c(2,2))
plot(lm1)
par(oldpar)
})
output$modelSummary <-renderPrint({
inFile <- input$file1
if (is.null(inFile))
return(NULL)
d<-read.csv(inFile$datapath, header=input$header)
d<-d[, c(input$xcol, input$ycol)]
CumulativeTime <- d[,1]
Hour <- CumulativeTime - floor(CumulativeTime/24)*24
Response <- d[,2]
CosTime<-cos(2*pi*CumulativeTime / 24) # Create two new predictor variables, CosTime and SinTime
SinTime<-sin(2*pi*CumulativeTime / 24)
lm1<-lm(Response ~ CosTime + SinTime)
summary(lm1)
})
output$equationtable <-renderTable({
inFile <- input$file1
if (is.null(inFile))
return(NULL)
d<-read.csv(inFile$datapath, header=input$header)
d<-d[, c(input$xcol, input$ycol)]
CumulativeTime <- d[,1]
Hour <- CumulativeTime - floor(CumulativeTime/24)*24
Response <- d[,2]
CosTime<-cos(2*pi*CumulativeTime / 24) # Create two new predictor variables, CosTime and SinTime
SinTime<-sin(2*pi*CumulativeTime / 24)
lm1<-lm(Response ~ CosTime + SinTime)
summary(lm1) # Response can be modelled as a linear regression
lm.coef<-coef(lm1) # Extract regression coefficients
names(lm.coef)<-NULL # Remove names from the coefficients
mesor<-lm.coef[1] # mean level = average value
A<-lm.coef[2]
B<-lm.coef[3]
amplitude<-sqrt(A^2+B^2) # amplitude = peak - average, average - nadir
acrophase<-atan2(-B, A) # angle at which peak occurs
if(-1*acrophase>0) acrotime<- (-1*acrophase*24/(2*pi))
# if negative # then subtract from 24 to obtain acrophase in 0-24 range
if(-1*acrophase<=0) acrotime<-24-1*acrophase*24/(2*pi)
r.squared<-summary(lm1)$adj.r.squared
pval<-anova(lm1)$`Pr(>F)`[1]
digs<-input$digs
phasevalues<-data.frame(format(mesor, digits=digs),
format(amplitude, digits=digs),
format(acrotime, digits=digs),
format(r.squared, digits=digs),
format(pval, digits=digs))
colnames(phasevalues)<-c("Mesor","Amplitude","Acrophase","R.Squared","P")
phasevalues
})
})
|
/server.R
|
no_license
|
gtatters/CosinorFit
|
R
| false
| false
| 7,971
|
r
|
library(shiny)
phasefit<-function(CumulativeTime, Response){
CosTime<-cos(2*pi*CumulativeTime / 24) # Create two new predictor variables, CosTime and SinTime
SinTime<-sin(2*pi*CumulativeTime / 24)
lm1<-lm(Response ~ CosTime + SinTime)
summary(lm1) # Response can be modelled as a linear regression
lm.coef<-coef(lm1) # Extract regression coefficients
names(lm.coef)<-NULL # Remove names from the coefficients
mesor<-lm.coef[1] # mean level = average value
A<-lm.coef[2]
B<-lm.coef[3]
amplitude<-sqrt(A^2+B^2) # amplitude = peak - average, average - nadir
acrophase<-atan2(-B, A) # angle at which peak occurs
if(-1*acrophase>0) acrotime<- (-1*acrophase*24/(2*pi))
# if negative # then subtract from 24 to obtain acrophase in 0-24 range
if(-1*acrophase<=0) acrotime<-24-1*acrophase*24/(2*pi)
r.squared<-summary(lm1)$adj.r.squared
pval<-anova(lm1)$`Pr(>F)`[1]
phasevalues<-c(mesor, amplitude, acrotime, r.squared, pval)
names(phasevalues)<-c("mesor","amplitude","acrotime","r2","pval")
newtime<-seq(min(CumulativeTime), max(CumulativeTime),1)
newtime<-seq(0,max(CumulativeTime),0.1)
sintime<-sin(2*pi*newtime / 24)
costime<-cos(2*pi*newtime / 24)
fitresponse<-mesor+A*costime+B*sintime
preds <- predict(lm1, interval = 'confidence', type="response")
fits<-fitted(lm1)
results<-list(mesor=mesor, amplitude=amplitude, acrotime=acrotime,
acrophase=acrophase, r.squared=r.squared,
pval=pval, newtime=newtime, fitresponse=fitresponse, fits=fits)
return(results)
}
shinyServer(function(input, output, session){
# Session is required to make the selectInput update after a file is uploaded
# input$file1 will be NULL initially. After the user selects and uploads a
# file, it will be a data frame with 'name', 'size', 'type', and 'datapath'
# columns. The 'datapath' column will contain the local filenames where the
# data can be found.
url <- a("Sample File", href="https://raw.githubusercontent.com/gtatters/CosinorFit/master/AutorhythmSample.csv")
#diagurl<-a("Help with Diagnostics", href="https://data.library.virginia.edu/diagnostic-plots/")
output$tab <- renderUI({
tagList("", url)
})
# output$diag <- renderUI({
# tagList("", diagurl)
# })
contentsrea <- reactive({
inFile <- input$file1
if (is.null(inFile))
return(NULL)
read.csv(inFile$datapath, header=input$header)
})
output$contents <- renderTable({
contentsrea()
})
observe({
updateSelectInput(session, "xcol", choices = names(contentsrea()))
})
observe({
updateSelectInput(session, "ycol", choices = names(contentsrea()))
})
selectedData <- reactive({
inFile <- input$file1
if (is.null(inFile))
return(NULL)
d<-read.csv(inFile$datapath, header=input$header)
d[, c(input$xcol, input$ycol)]
})
output$mainPlot <- renderPlot({
inFile <- input$file1
if (is.null(inFile))
return(NULL)
d<-read.csv(inFile$datapath, header=input$header)
d<-d[, c(input$xcol, input$ycol)]
CumulativeTime <- d[,1]
Hour <- CumulativeTime - floor(CumulativeTime/24)*24
Response <- d[,2]
CosTime<-cos(2*pi*CumulativeTime / 24) # Create two new predictor variables, CosTime and SinTime
SinTime<-sin(2*pi*CumulativeTime / 24)
lm1<-lm(Response ~ CosTime + SinTime)
summary(lm1) # Response can be modelled as a linear regression
lm.coef<-coef(lm1) # Extract regression coefficients
names(lm.coef)<-NULL # Remove names from the coefficients
mesor<-lm.coef[1] # mean level = average value
A<-lm.coef[2]
B<-lm.coef[3]
amplitude<-sqrt(A^2+B^2) # amplitude = peak - average, average - nadir
acrophase<-atan2(-B, A) # angle at which peak occurs
if(-1*acrophase>0) acrotime<- (-1*acrophase*24/(2*pi))
# if negative # then subtract from 24 to obtain acrophase in 0-24 range
if(-1*acrophase<=0) acrotime<-24-1*acrophase*24/(2*pi)
r.squared<-summary(lm1)$adj.r.squared
pval<-anova(lm1)$`Pr(>F)`[1]
phasevalues<-c(mesor, amplitude, acrotime, r.squared, pval)
names(phasevalues)<-c("mesor","amplitude","acrotime","r2","pval")
newtime<-seq(min(CumulativeTime), max(CumulativeTime),1)
newtime<-seq(0,max(CumulativeTime),0.1)
sintime<-sin(2*pi*newtime / 24)
costime<-cos(2*pi*newtime / 24)
fitresponse<-mesor+A*costime+B*sintime
preds <- predict(lm1, interval = 'confidence', type="response")
fits<-fitted(lm1)
ind<-order(Hour)
plot(Hour[ind], Response[ind], xaxp=c(0,24,4), pch=20,
xlab=input$xcol, ylab=input$ycol)
#lines(Hour[ind], Response[ind])
lines(newtime, fitresponse, lwd=3, col="grey")
abline(mesor, 0, lwd=2, lty=2, col="black")
abline(mesor+amplitude, 0, lwd=1, lty=2, col="red")
abline(mesor-amplitude, 0, lwd=1, lty=2, col="blue")
rect(acrotime, min(Response), acrotime, max(Response), lwd=3, lty=2)
})
output$residPlot <- renderPlot({
inFile <- input$file1
if (is.null(inFile))
return(NULL)
d<-read.csv(inFile$datapath, header=input$header)
d<-d[, c(input$xcol, input$ycol)]
CumulativeTime <- d[,1]
Hour <- CumulativeTime - floor(CumulativeTime/24)*24
Response <- d[,2]
CosTime<-cos(2*pi*CumulativeTime / 24) # Create two new predictor variables, CosTime and SinTime
SinTime<-sin(2*pi*CumulativeTime / 24)
lm1<-lm(Response ~ CosTime + SinTime)
oldpar<-par(no.readonly = TRUE)
par(mfrow=c(2,2))
plot(lm1)
par(oldpar)
})
output$modelSummary <-renderPrint({
inFile <- input$file1
if (is.null(inFile))
return(NULL)
d<-read.csv(inFile$datapath, header=input$header)
d<-d[, c(input$xcol, input$ycol)]
CumulativeTime <- d[,1]
Hour <- CumulativeTime - floor(CumulativeTime/24)*24
Response <- d[,2]
CosTime<-cos(2*pi*CumulativeTime / 24) # Create two new predictor variables, CosTime and SinTime
SinTime<-sin(2*pi*CumulativeTime / 24)
lm1<-lm(Response ~ CosTime + SinTime)
summary(lm1)
})
output$equationtable <-renderTable({
inFile <- input$file1
if (is.null(inFile))
return(NULL)
d<-read.csv(inFile$datapath, header=input$header)
d<-d[, c(input$xcol, input$ycol)]
CumulativeTime <- d[,1]
Hour <- CumulativeTime - floor(CumulativeTime/24)*24
Response <- d[,2]
CosTime<-cos(2*pi*CumulativeTime / 24) # Create two new predictor variables, CosTime and SinTime
SinTime<-sin(2*pi*CumulativeTime / 24)
lm1<-lm(Response ~ CosTime + SinTime)
summary(lm1) # Response can be modelled as a linear regression
lm.coef<-coef(lm1) # Extract regression coefficients
names(lm.coef)<-NULL # Remove names from the coefficients
mesor<-lm.coef[1] # mean level = average value
A<-lm.coef[2]
B<-lm.coef[3]
amplitude<-sqrt(A^2+B^2) # amplitude = peak - average, average - nadir
acrophase<-atan2(-B, A) # angle at which peak occurs
if(-1*acrophase>0) acrotime<- (-1*acrophase*24/(2*pi))
# if negative # then subtract from 24 to obtain acrophase in 0-24 range
if(-1*acrophase<=0) acrotime<-24-1*acrophase*24/(2*pi)
r.squared<-summary(lm1)$adj.r.squared
pval<-anova(lm1)$`Pr(>F)`[1]
digs<-input$digs
phasevalues<-data.frame(format(mesor, digits=digs),
format(amplitude, digits=digs),
format(acrotime, digits=digs),
format(r.squared, digits=digs),
format(pval, digits=digs))
colnames(phasevalues)<-c("Mesor","Amplitude","Acrophase","R.Squared","P")
phasevalues
})
})
|
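The cosinor fit in server.R recovers the mesor, amplitude and acrophase of a 24 h rhythm by linear regression on cosine and sine predictors. A minimal sketch of the same calculation on simulated data (the seed, noise level and 8 h peak time are invented for illustration):

```r
set.seed(1)
t <- seq(0, 72, by = 0.5)   # 3 days of observations, in hours
y <- 5 + 2 * cos(2 * pi * (t - 8) / 24) + rnorm(length(t), sd = 0.2)

CosTime <- cos(2 * pi * t / 24)
SinTime <- sin(2 * pi * t / 24)
fit <- lm(y ~ CosTime + SinTime)

A <- coef(fit)[["CosTime"]]
B <- coef(fit)[["SinTime"]]
mesor     <- coef(fit)[[1]]                              # mean level, ~5
amplitude <- sqrt(A^2 + B^2)                             # peak minus mean, ~2
acrophase <- atan2(-B, A)                                # phase angle, radians
acrotime  <- ((-acrophase) %% (2 * pi)) * 24 / (2 * pi)  # peak time, ~8 h
```

The modulo form maps the phase angle into the 0-24 h range in one step, avoiding the two-branch `if` used in the app.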
####################################################################################################
#' Function to order contigs within a single linkage group using a greedy algorithm
#' Attempts to order contigs within a linkage group.
#' @useDynLib contiBAIT
#' @import Rcpp TSP
#
#' @param linkageGroupReadTable dataframe of strand calls (product of combineZeroDists or preprocessStrandTable)
#' @param randomAttempts number of times to repeat the greedy algorithm with a random restart
#' @param nProcesses number of processes to attempt ordering in parallel
#' @param verbose whether to print verbose messages
#' @return list of two members: 1) contig names in order, 2) the original data.frame entered into function correctly ordered
####################################################################################################
orderContigsGreedy <- function(linkageGroupReadTable, randomAttempts=75,nProcesses = 1, verbose=TRUE)
{
factorizedLinkageGroupReadTable <- linkageGroupReadTable
for (i in seq_len(ncol(linkageGroupReadTable))) {
linkageGroupReadTable[,i] <- as.numeric(as.character( linkageGroupReadTable[,i]))
}
linkageGroupReadTable[is.na(linkageGroupReadTable)] <- 0
if(nrow(linkageGroupReadTable) > 1)
{
order_contigs <- function(linkageGroupReadTable,randomAttempts,verbose,factorizedLinkageGroupReadTable)
{
best_order <- .Call('orderContigsGreedy', as.matrix(linkageGroupReadTable))
best_table <- linkageGroupReadTable
for (i in seq_len(randomAttempts)) {
#temp_order <- list(order = 1:length(linkageGroup),score = 0)
temp_table <- as.matrix(linkageGroupReadTable[sample(nrow(linkageGroupReadTable)),])
temp_order <- .Call('orderContigsGreedy', temp_table)
if ( temp_order$score < best_order$score){
if(verbose){message(' -> Found better ordering!')}
best_order <- temp_order
best_table <- temp_table
}
}
linkageGroupReadTable <- factorizedLinkageGroupReadTable[row.names(best_table)[best_order$order],]
return(c(list(orderVector=row.names(best_table)[best_order$order], orderedMatrix=linkageGroupReadTable),best_order$score))
}
if (nProcesses > 1)
{
cl <- makeCluster(getOption("cl.cores",nProcesses))
order_contigs_cluster <- function(linkageGroupReadTable,randomAttempts,verbose,factorizedLinkageGroupReadTable){
library(contiBAIT)
order_contigs(linkageGroupReadTable,randomAttempts,verbose,factorizedLinkageGroupReadTable)
}
res <- clusterCall(cl,order_contigs_cluster,linkageGroupReadTable,randomAttempts,verbose,factorizedLinkageGroupReadTable)
stopCluster(cl)
best_order <- res[[1]]
for (i in res[-1]){
if (i[[3]] < best_order[[3]])
{
best_order <- i
}
}
return(best_order[-3])
} else
{
return(order_contigs(linkageGroupReadTable,randomAttempts,verbose,factorizedLinkageGroupReadTable)[-3])
}
}else
{
linkageGroupReadTable <- factorizedLinkageGroupReadTable
return(list(orderVector=row.names(linkageGroupReadTable), orderedMatrix=linkageGroupReadTable))
}
}
|
/R/orderContigsGreedy.R
|
permissive
|
oneillkza/ContiBAIT
|
R
| false
| false
| 3,228
|
r
|
####################################################################################################
#' Function to order contigs within a single linkage group using a greedy algorithm
#' Attempts to order contigs within a linkage group.
#' @useDynLib contiBAIT
#' @import Rcpp TSP
#
#' @param linkageGroupReadTable dataframe of strand calls (product of combineZeroDists or preprocessStrandTable)
#' @param randomAttempts number of times to repeat the greedy algorithm with a random restart
#' @param nProcesses number of processes to attempt ordering in parallel
#' @param verbose whether to print verbose messages
#' @return list of two members: 1) contig names in order, 2) the original data.frame entered into function correctly ordered
####################################################################################################
orderContigsGreedy <- function(linkageGroupReadTable, randomAttempts=75,nProcesses = 1, verbose=TRUE)
{
factorizedLinkageGroupReadTable <- linkageGroupReadTable
for (i in seq_len(ncol(linkageGroupReadTable))) {
linkageGroupReadTable[,i] <- as.numeric(as.character( linkageGroupReadTable[,i]))
}
linkageGroupReadTable[is.na(linkageGroupReadTable)] <- 0
if(nrow(linkageGroupReadTable) > 1)
{
order_contigs <- function(linkageGroupReadTable,randomAttempts,verbose,factorizedLinkageGroupReadTable)
{
best_order <- .Call('orderContigsGreedy', as.matrix(linkageGroupReadTable))
best_table <- linkageGroupReadTable
for (i in seq_len(randomAttempts)) {
#temp_order <- list(order = 1:length(linkageGroup),score = 0)
temp_table <- as.matrix(linkageGroupReadTable[sample(nrow(linkageGroupReadTable)),])
temp_order <- .Call('orderContigsGreedy', temp_table)
if ( temp_order$score < best_order$score){
if(verbose){message(' -> Found better ordering!')}
best_order <- temp_order
best_table <- temp_table
}
}
linkageGroupReadTable <- factorizedLinkageGroupReadTable[row.names(best_table)[best_order$order],]
return(c(list(orderVector=row.names(best_table)[best_order$order], orderedMatrix=linkageGroupReadTable),best_order$score))
}
if (nProcesses > 1)
{
cl <- makeCluster(getOption("cl.cores",nProcesses))
order_contigs_cluster <- function(linkageGroupReadTable,randomAttempts,verbose,factorizedLinkageGroupReadTable){
library(contiBAIT)
order_contigs(linkageGroupReadTable,randomAttempts,verbose,factorizedLinkageGroupReadTable)
}
res <- clusterCall(cl,order_contigs_cluster,linkageGroupReadTable,randomAttempts,verbose,factorizedLinkageGroupReadTable)
stopCluster(cl)
best_order <- res[[1]]
for (i in res[-1]){
if (i[[3]] < best_order[[3]])
{
best_order <- i
}
}
return(best_order[-3])
} else
{
return(order_contigs(linkageGroupReadTable,randomAttempts,verbose,factorizedLinkageGroupReadTable)[-3])
}
}else
{
linkageGroupReadTable <- factorizedLinkageGroupReadTable
return(list(orderVector=row.names(linkageGroupReadTable), orderedMatrix=linkageGroupReadTable))
}
}
|
library(readr)
library(data.table)
daten_moocall <- read_delim("Rohdaten/Rohdaten_10-Sep-2018_Faersen.csv",
";", escape_double = FALSE, trim_ws = TRUE)
print_tables <- function(confusion_table) {
table = epitools::epitable(c(sum(confusion_table$RP),sum(confusion_table$FP), sum(confusion_table$FN), sum(confusion_table$RN)))
test.table <- epiR::epi.tests(table)
print(test.table)
matrix <- matrix(c(sum(confusion_table$RP), sum(confusion_table$FN), sum(confusion_table$FP), sum(confusion_table$RN)), nrow=2)
print(caret::confusionMatrix(as.table(matrix)))
print(paste("Youden-Score:", mean(confusion_table$Sensitivitaet, na.rm = T) + mean(confusion_table$Spezifitaet, na.rm = T) -1))
}
setDT(daten_moocall)
daten_moocall <- daten_moocall[`Gekalbt (ja/nein)` == "ja"]
daten_moocall[, Zeit_bis_Umstallung := difftime(Umstallzeit, Alarmzeit, units = "hours")]
daten_moocall[, Sensorzeit := difftime(Umstallzeit, SonT, units = "hours")]
daten_moocall[, Studienzeit := max(Sensorzeit, na.rm = T), by="ID"]
daten_moocall[is.na(Zeit_bis_Umstallung), Zeit_bis_Umstallung := 99999]
daten_moocall <- daten_moocall[Zeit_bis_Umstallung >=0]
daten_moocall <- daten_moocall[!ID %in% c(1899, 3909, 2641, 1304)]
uniqueN(daten_moocall$ID)
uniqueN(daten_moocall[`Alarm (ja/nein)` == "ja"]$ID)/uniqueN(daten_moocall$ID)
#c(1,2,3,4,6)
for (i in 1:12) {
print(paste("Zeitfenster:", i, "Sensitivität:", uniqueN(daten_moocall[Zeit_bis_Umstallung <= i]$ID)/uniqueN(daten_moocall$ID)))
}
#### HA2 only
for (i in 1:12) {
print(paste("Zeitfenster:", i, "Sensitivität:", uniqueN(daten_moocall[Zeit_bis_Umstallung <= i & `Alarmtyp (HA1/HA2)` == "HA2h"]$ID)/uniqueN(daten_moocall$ID)))
}
#### HA1 only
for (i in 1:12) {
print(paste("Zeitfenster:", i, "Sensitivität:", uniqueN(daten_moocall[Zeit_bis_Umstallung <= i & `Alarmtyp (HA1/HA2)` == "HA1h"]$ID)/uniqueN(daten_moocall$ID)))
}
#`Alarmtyp (HA1/HA2)` == "HA2h"
daten_moocall <- daten_moocall[order(Alarmzeit)]
daten_moocall[order(Alarmzeit), Abstand_vorheriger_Alarm := difftime(Alarmzeit, shift(Alarmzeit, n = 1L, type = "lag"), units = "hour"), by = "ID"]
daten_moocall[`Alarmtyp (HA1/HA2)` == "HA2h", Abstand_HA2_davor := difftime(Alarmzeit, shift(Alarmzeit, n = 1L, type = "lag"), units = "hour"), by = "ID"]
table(daten_moocall$Abstand_vorheriger_Alarm)
table(daten_moocall$Abstand_HA2_davor)
daten_moocall[`Alarmtyp (HA1/HA2)` == "HA2h" & Abstand_vorheriger_Alarm>2]
# PPV
for (i in 1:12) {
Anzahl_Kuehe_mit_korrektem_Alarm = uniqueN(daten_moocall[Zeit_bis_Umstallung <= i]$ID)
Anzahl_Fehlalarme = nrow(daten_moocall[Zeit_bis_Umstallung > i][`Alarm (ja/nein)` == "ja"])
print(paste("Zeitfenster:", i, "PPV:", Anzahl_Kuehe_mit_korrektem_Alarm/(Anzahl_Fehlalarme + Anzahl_Kuehe_mit_korrektem_Alarm)))
}
for (i in 1:12) {
Anzahl_Kuehe_mit_korrektem_Alarm = uniqueN(daten_moocall[Zeit_bis_Umstallung <= i & `Alarmtyp (HA1/HA2)` == "HA2h"]$ID)
Anzahl_Fehlalarme = nrow(daten_moocall[Zeit_bis_Umstallung > i][`Alarmtyp (HA1/HA2)` == "HA2h"])
print(paste("Zeitfenster:", i, "PPV:", Anzahl_Kuehe_mit_korrektem_Alarm/(Anzahl_Fehlalarme + Anzahl_Kuehe_mit_korrektem_Alarm)))
}
daten_moocall[Zeit_bis_Umstallung > 0][`Alarm (ja/nein)` == "ja"][,.N, by="ID"][order(N)]
daten_moocall[Zeit_bis_Umstallung > 0][`Alarmtyp (HA1/HA2)` == "HA2h"][,.N, by="ID"][order(N)]
for (i in 3) {
daten_moocall[,Zeitintervall := as.integer(floor(Zeit_bis_Umstallung/i))]
zeitintervalle = daten_moocall[, .(HA2 = ifelse("HA2h" %in% `Alarmtyp (HA1/HA2)`, 1,0),
HA1 = ifelse("HA1h" %in% `Alarmtyp (HA1/HA2)`, 1,0),
Studienintervalle = as.integer(ceiling(max(Studienzeit)/i)),
Fehler = sum(as.integer(2 == `Event Score`), na.rm = T) + sum(as.integer(3 == `Event Score`), na.rm = T) + sum(as.integer(4 == `Event Score`), na.rm = T)
) ,by=c("ID","Zeitintervall")]
#nrow(zeitintervalle[Zeitintervall>0][HA1 == 1 | HA2 == 1])
#zeitintervalle[Zeitintervall<10000 & Zeitintervall<Studienintervalle]
confusion_table <- zeitintervalle[, .(RN=max(Studienintervalle)-(.N-1), #-as.integer(sum(Fehler))
FN = as.numeric(min(Zeitintervall)!=0),
RP = sum(as.numeric(Zeitintervall==0)),
FP = sum(as.numeric(Zeitintervall<33000 & Zeitintervall>0))
) , by="ID"]
table = epitools::epitable(c(sum(confusion_table$RP),sum(confusion_table$FP), sum(confusion_table$FN), sum(confusion_table$RN)))
print(paste("Zeitintervall", i))
print(epiR::epi.tests(table))
}
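The loop above buckets each cow's hours-to-calving-pen into fixed windows via floor division. A minimal base-R sketch of that bucketing, on made-up values (not the study data):

```r
# Interval bucketing as used above: floor(hours / window width).
# Toy values, assumed for illustration only.
i <- 3                                # window width in hours
stunden <- c(0.5, 2.9, 3.0, 7.2, 11)  # hours between alarm and moving to the calving pen
zeitintervall <- as.integer(floor(stunden / i))
zeitintervall                         # 0 0 1 2 3; interval 0 = alarm within the window
```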
confusion_table[, `:=` (
Sensitivitaet = RP/(RP+FN),
Spezifitaet = RN/(RN+FP),
PPV = RP/(RP+FP),
NPV = RN/(RN+FN)
)]
confusion_table[Spezifitaet<0, Spezifitaet:= NA]
confusion_table[NPV<0, NPV:= NA]
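The per-cow metrics above follow the standard confusion-matrix definitions. A hedged sketch on invented counts (not the study data):

```r
# Standard definitions used above, on made-up counts:
RP <- 8; FN <- 2; RN <- 90; FP <- 10
sensitivitaet <- RP / (RP + FN)                  # 0.8
spezifitaet   <- RN / (RN + FP)                  # 0.9
youden        <- sensitivitaet + spezifitaet - 1 # 0.7 (Youden index)
```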
#setDT(X2018_09_14_Rohdaten_korr)
#kuh_faerse <- X2018_09_14_Rohdaten_korr[, .(Kuh = unique(`Kuh/Faerse`)), by=ID]
a = c("", "_ha1", "_ha2")
b = c(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,24)
out = ""
sink("2018-11-02_r_output_faersen.txt")
for (Stunde in b) {
for (Alarmtyp in a) {
#file_name = paste0("3. Beratung/2018-10-01_python/confusion_table_", Stunde, "h", Alarmtyp, ".csv")
file_name = paste0("./2018-10-01_python_faersen/confusion_table_", Stunde, "h", Alarmtyp, ".csv")
print(file_name)
confusion_table <- fread(file = file_name)
print(paste0("Zeitraum: ", Stunde, " Alarm: ", Alarmtyp))
out = c(out,print_tables(confusion_table = confusion_table))
}
}
sink()
print("bontastisch")
#confusion_table <- merge(confusion_table, kuh_faerse, by="ID", all.x = T)
#confusion_table <- confusion_table[Kuh=="Faerse"]
#mean(confusion_table$Sensitivitaet, na.rm = T)
#mean(confusion_table$Spezifitaet, na.rm = T)
#mean(confusion_table$PPV, na.rm = T)
#mean(confusion_table$NPV, na.rm = T)
#(mean(confusion_table$Sensitivitaet, na.rm = T) + mean(confusion_table$Spezifitaet, na.rm = T))/2
#print_tables <- function(confusion_table) {
# table = epitools::epitable(c(sum(confusion_table$RP),sum(confusion_table$FP), sum(confusion_table$FN), sum(confusion_table$RN)))
# test.table <- epiR::epi.tests(table)
# print(test.table)
# matrix <- matrix(c(sum(confusion_table$RP), sum(confusion_table$FN), sum(confusion_table$FP), sum(confusion_table$RN)), nrow=2)
# print(caret::confusionMatrix(as.table(matrix)))
# print(paste("Youden-Score:", mean(confusion_table$Sensitivitaet, na.rm = T) + mean(confusion_table$Spezifitaet, na.rm = T) -1))
#}
#with(confusion_table) # incomplete with() call (missing expression), commented out
hist(confusion_table$Sensitivitaet)
hist(confusion_table$Spezifitaet)
hist(confusion_table$PPV)
hist(confusion_table$NPV)
#table(confusion_table[,RP==FN])
#uniqueN(zeitintervalle$ID)
#### Sensitivität
print("Sensitivität")
nrow(zeitintervalle[Zeitintervall==0])/uniqueN(zeitintervalle$ID)
#### PPV
print("PPV")
nrow(zeitintervalle[Zeitintervall==0])/(nrow(zeitintervalle[Zeitintervall==0])+nrow(zeitintervalle[Zeitintervall>0][HA1 == 1 | HA2 == 1]))
#### Spezifität
print("Spezifiät")
sum(confusion_table$RN)/(sum(confusion_table$RN)+sum(confusion_table$FN))
#### NPV
print("NPV")
sum(confusion_table$RN)/(sum(confusion_table$RN) + (uniqueN(zeitintervalle$ID) - nrow(zeitintervalle[Zeitintervall==0])))
fwrite(daten_moocall, file = "2018-09-10_daten_moocall.csv")
fwrite(confusion_table, file = "2018-09-10_confusion_table.csv")
fwrite(zeitintervalle, file = "2018-09-10_zeitintervalle_3h.csv")
|
/alex/MooCall/Auswertung_Faersen.R
|
no_license
|
whllnd/studie
|
R
| false
| false
| 7,622
|
r
|
\name{ READ }
\alias{ READ }
\docType{data}
\title{ Rectum adenocarcinoma }
\description{
A document describing the TCGA cancer code
}
\details{
\preformatted{
> experiments( READ )
ExperimentList class object of length 14:
[1] READ_CNASeq-20160128: RaggedExperiment with 56380 rows and 70 columns
[2] READ_CNASNP-20160128: RaggedExperiment with 156806 rows and 316 columns
[3] READ_CNVSNP-20160128: RaggedExperiment with 35765 rows and 316 columns
[4] READ_GISTIC_AllByGene-20160128: SummarizedExperiment with 24776 rows and 165 columns
[5] READ_GISTIC_Peaks-20160128: RangedSummarizedExperiment with 55 rows and 165 columns
[6] READ_GISTIC_ThresholdedByGene-20160128: SummarizedExperiment with 24776 rows and 165 columns
[7] READ_miRNASeqGene-20160128: SummarizedExperiment with 705 rows and 76 columns
[8] READ_mRNAArray-20160128: SummarizedExperiment with 17814 rows and 72 columns
[9] READ_Mutation-20160128: RaggedExperiment with 22075 rows and 69 columns
[10] READ_RNASeq2GeneNorm-20160128: SummarizedExperiment with 20501 rows and 72 columns
[11] READ_RNASeqGene-20160128: SummarizedExperiment with 20502 rows and 72 columns
[12] READ_RPPAArray-20160128: SummarizedExperiment with 208 rows and 131 columns
[13] READ_Methylation_methyl27-20160128: SummarizedExperiment with 27578 rows and 73 columns
[14] READ_Methylation_methyl450-20160128: SummarizedExperiment with 485577 rows and 106 columns
> rownames( READ )
CharacterList of length 14
[["READ_CNASeq-20160128"]] character(0)
[["READ_CNASNP-20160128"]] character(0)
[["READ_CNVSNP-20160128"]] character(0)
[["READ_GISTIC_AllByGene-20160128"]] ACAP3 ... WASIR1|ENSG00000185203.7
[["READ_GISTIC_Peaks-20160128"]] chr1:3814904-31841618 ...
[["READ_GISTIC_ThresholdedByGene-20160128"]] ACAP3 ...
[["READ_miRNASeqGene-20160128"]] hsa-let-7a-1 hsa-let-7a-2 ... hsa-mir-99b
[["READ_mRNAArray-20160128"]] ELMO2 CREB3L1 RPS11 PNMA1 ... SNRPD2 AQP7 CTSC
[["READ_Mutation-20160128"]] character(0)
[["READ_RNASeq2GeneNorm-20160128"]] A1BG A1CF A2BP1 ... ZZZ3 psiTPTE22 tAKR
...
<4 more elements>
> colnames( READ )
CharacterList of length 14
[["READ_CNASeq-20160128"]] TCGA-AF-2691-01A-01D-1167-02 ...
[["READ_CNASNP-20160128"]] TCGA-AF-2687-01A-02D-1732-01 ...
[["READ_CNVSNP-20160128"]] TCGA-AF-2687-01A-02D-1732-01 ...
[["READ_GISTIC_AllByGene-20160128"]] TCGA-AF-2687-01A-02D-1732-01 ...
[["READ_GISTIC_Peaks-20160128"]] TCGA-AF-2687-01A-02D-1732-01 ...
[["READ_GISTIC_ThresholdedByGene-20160128"]] TCGA-AF-2687-01A-02D-1732-01 ...
[["READ_miRNASeqGene-20160128"]] TCGA-AF-2687-01A-02T-1735-13 ...
[["READ_mRNAArray-20160128"]] TCGA-AF-2689-11A-01R-1758-07 ...
[["READ_Mutation-20160128"]] TCGA-AF-2689-01A-01W-0831-10 ... TCGA-AG-A036-01
[["READ_RNASeq2GeneNorm-20160128"]] TCGA-AF-2691-01A-01R-0821-07 ...
...
<4 more elements>
Sizes of each ExperimentList element:
assay size.Mb
1 READ_CNASeq-20160128 1.5 Mb
2 READ_CNASNP-20160128 4.3 Mb
3 READ_CNVSNP-20160128 1.1 Mb
4 READ_GISTIC_AllByGene-20160128 4.9 Mb
5 READ_GISTIC_Peaks-20160128 0.1 Mb
6 READ_GISTIC_ThresholdedByGene-20160128 4.9 Mb
7 READ_miRNASeqGene-20160128 0.1 Mb
8 READ_mRNAArray-20160128 1.1 Mb
9 READ_Mutation-20160128 9.6 Mb
10 READ_RNASeq2GeneNorm-20160128 1.3 Mb
11 READ_RNASeqGene-20160128 1.3 Mb
12 READ_RPPAArray-20160128 0 Mb
13 READ_Methylation_methyl27-20160128 4.9 Mb
14 READ_Methylation_methyl450-20160128 75 Mb
---------------------------
Overall survival time-to-event summary (in years):
---------------------------
Call: survfit(formula = survival::Surv(colDat$days_to_death/365, colDat$vital_status) ~
-1)
142 observations deleted due to missingness
n events median 0.95LCL 0.95UCL
27.00 27.00 2.00 1.44 3.25
---------------------------
Available sample meta-data:
---------------------------
years_to_birth:
Min. 1st Qu. Median Mean 3rd Qu. Max.
31.00 57.00 66.00 64.37 72.00 90.00
vital_status:
0 1
141 28
days_to_death:
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
59.0 347.5 730.0 786.1 1193.0 1741.0 142
days_to_last_followup:
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
0.0 366.0 625.0 779.5 1096.0 3932.0 28
tumor_tissue_site:
rectum NA's
166 3
pathology_M_stage:
m0 m1 m1a mx NA's
128 22 2 14 3
gender:
female male
77 92
date_of_initial_pathologic_diagnosis:
Min. 1st Qu. Median Mean 3rd Qu. Max.
1999 2007 2009 2008 2010 2012
days_to_last_known_alive:
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
31.0 292.2 863.0 1420.1 2214.5 3667.0 161
radiation_therapy:
no yes NA's
114 22 33
histological_type:
rectal adenocarcinoma rectal mucinous adenocarcinoma
150 13
NA's
6
tumor_stage:
stage iia NA's
1 168
residual_tumor:
r0 r1 r2 rx NA's
126 2 12 5 24
number_of_lymph_nodes:
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
0.000 0.000 0.000 2.692 3.000 31.000 10
ethnicity:
hispanic or latino not hispanic or latino NA's
1 84 84
Including an additional 2242 columns
}}
\seealso{\link{READ-v2.0.0}}
\keyword{datasets}
|
/man/READ.Rd
|
no_license
|
shawnspei/curatedTCGAData
|
R
| false
| false
| 5,654
|
rd
|
# Function makeCacheMatrix and cacheSolve are used in combination.
# makeCacheMatrix creates a cache object for a matrix and its inverse matrix.
# Function cacheSolve takes a makeCacheMatrix object and computes the inverse of the cached matrix and store the result as a cache in the makeCacheMatrix object.
# If there is already a cache for the inverse matrix function cacheSolve only returns the cache.
# Example:
# mx <- matrix(11:14,2,2)
# cmx <- makeCacheMatrix(mx)
# inv <- cacheSolve(cmx)
# Function makeCacheMatrix creates a list consisting of the following four functions:
# set : set the cached value of a square matrix
# get : get the cached square matrix
# setInverse : set the cached value of the inverse of the cached matrix
# getInverse : get the cached value of the inverse of the cached matrix
#
# These functions have access to mx and inv in the enclosing environment.
#
makeCacheMatrix <- function(mx = matrix()) {
inv <- NULL
set <- function(mtx) {
mx <<- mtx
inv <<- NULL
}
get <- function() mx
setInverse <- function(inverse) inv <<- inverse
getInverse <- function() inv
list(set = set,
get = get,
setInverse = setInverse,
getInverse = getInverse)
}
# Function cacheSolve calculates the inverse matrix of the cached matrix
# within the object created by the function makeCacheMatrix.
# The first time it is called it computes the inverse of the matrix by solve()
# and place the resulting inverse matrix in the makeCacheMatrix object.
# Subsequent calls to this function looks up the cached value quickly
# for the inverse of the matrix without the need to compute the inverse again.
#
cacheSolve <- function(cmx, ...) {
inv <- cmx$getInverse()
if (!is.null(inv)) {
return(inv)
}
mx <- cmx$get()
inv <- solve(mx, ...)
cmx$setInverse(inv)
# return a matrix that is the inverse of mx
inv
}
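A short usage sketch of the two functions above (toy matrix, not part of the original assignment file):

```r
# First call computes the inverse via solve() and caches it;
# later calls return the cached copy without recomputing.
mx <- matrix(c(2, 0, 0, 2), 2, 2)
cmx <- makeCacheMatrix(mx)
inv1 <- cacheSolve(cmx)          # computed and cached
inv2 <- cacheSolve(cmx)          # served from the cache
identical(inv1, inv2)            # TRUE
all.equal(mx %*% inv1, diag(2))  # TRUE: mx %*% inverse is the identity
```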
|
/cachematrix.R
|
no_license
|
eov/ProgrammingAssignment2
|
R
| false
| false
| 1,944
|
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/model-feature-selection.R
\name{featsel_stepforward}
\alias{featsel_stepforward}
\title{Feature selection via stepwise forward}
\usage{
featsel_stepforward(model, ...)
}
\arguments{
\item{model}{model}
\item{...}{Additional arguments for stats::step}
}
\description{
Feature selection via stepwise forward
}
\examples{
data("credit_woe")
m <- glm(bad ~ ., family = binomial, data = credit_woe)
featsel_stepforward(m)
}
|
/man/featsel_stepforward.Rd
|
permissive
|
jbkunst/risk3r
|
R
| false
| true
| 502
|
rd
|
\name{CAAIlluminatedFraction_VenusMagnitudeAA}
\alias{CAAIlluminatedFraction_VenusMagnitudeAA}
\title{
CAAIlluminatedFraction_VenusMagnitudeAA
}
\description{
CAAIlluminatedFraction_VenusMagnitudeAA
}
\usage{
CAAIlluminatedFraction_VenusMagnitudeAA(r, Delta, i)
}
\arguments{
\item{r}{
r The planet's distance to the Sun in astronomical units.
}
\item{Delta}{
Delta The planet's distance from the Earth in astronomical units.
}
\item{i}{
i The planet's phase angle in degrees.
}
}
\details{
}
\value{
The magnitude of the planet.
}
\references{
Meeus, J. H. (1991). Astronomical algorithms. Willmann-Bell, Incorporated.
}
\author{
C++ code by PJ Naughter, imported to R by Jinlong Zhang
}
\note{
}
\seealso{
}
\examples{
CAAIlluminatedFraction_VenusMagnitudeAA(r = 0.75, Delta = 0.43, i = 120)
}
\keyword{ Venus }
|
/man/CAAIlluminatedFraction_VenusMagnitudeAA.Rd
|
no_license
|
helixcn/skycalc
|
R
| false
| false
| 843
|
rd
|
#' Custom save to .csv function
#'
#' This function saves data.tables or data.frames as .csv in the root working directory or a specified subfolder.
#' Additionally the current date is automatically included in the file name.
#'
#' @param file The data.table or data.frame to be saved.
#'
#' @param file_name Character string that specifies the name the saved file should have
#' The date of creation and the .csv ending are added automatically
#'
#' @param subfolder A character string without "/" giving the subfolder the file shall be saved in.
#'
#' @param create_subfolder Given a subfolder, setting this to TRUE will create a new directory with the name given in subfolder and will stop if set to FALSE
#'
#' @param sep the field separator string. Values within each row of x are separated by this string.
#'
#' @param quote a logical value (TRUE or FALSE) or a numeric vector. If TRUE, any character or factor columns will be surrounded by double quotes. If a numeric vector, its elements are taken as the indices of columns to quote. In both cases, row and column names are quoted if they are written. If FALSE, nothing is quoted.
#'
#' @export
save_csv_carl <- function(file = NA, file_name = NA, subfolder = NA, create_subfolder = F, sep = ",", quote = F) {
  # check whether the file can be coerced to a data.table
if(!all(is.na(file)) | data.table::is.data.table(file) == F){
# try to coerce file into data.table
try(data.table::setDT(file), silent = T)
# if file is still not a data.table, stop
if (!data.table::is.data.table(file)) {
stop("Please specifiy a file that can be converted into a data.table to save as file!")
}
}
# check whether file name was given
if (is.na(file_name) | !is.character(file_name)) {
stop("Please specifiy a character string with a name for the file_name!")
}
# check if subfolder is a character string
if (!is.character(subfolder) & !is.na(subfolder)) {
stop("Please specifiy a character string with a name of an existing subfolder to save the file in!")
}
# check whether subfolder is given
if (is.na(subfolder)) {
save_designation <- here::here()
} else {
# remove any / in case I forgot that I don't need them
    subfolder <- gsub(x = subfolder, pattern = "/", replacement = "")
save_designation <- here::here(subfolder)
}
# check if given directory exists
if (!dir.exists(save_designation) & create_subfolder == F) {
stop("Directory or subfolder does not exist. Set create_subfolder == T to create a subdirectory of your working directory")
# create subdirectory when create_subfolder == T
} else if (!dir.exists(save_designation) & create_subfolder == T) {
dir.create(save_designation)
}
# set the complete file_name
complete_file_name <- paste0(format(Sys.time(), '%y%m%d'), "_", file_name, ".csv")
# save in wd
if (here::here() == save_designation) {
# print message
message("File will be saved as: ", here::here(complete_file_name))
# save file as .csv
data.table::fwrite(x = file,
file = here::here(complete_file_name),
sep = sep,
row.names = F,
quote = quote)
# in case of subfolder
  } else if (here::here() != save_designation & !is.na(subfolder)) {
    # print message
    message("File will be saved as: ", here::here(subfolder, complete_file_name))
    # save file as .csv
    data.table::fwrite(x = file,
                       file = here::here(subfolder, complete_file_name),
                       sep = sep,
                       row.names = F,
                       quote = quote)
  }
}
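A sketch of the date-stamped file name the function builds (example name assumed for illustration):

```r
# save_csv_carl() prepends the creation date (yymmdd) and appends ".csv":
file_name <- "my_results"
complete_file_name <- paste0(format(Sys.time(), "%y%m%d"), "_", file_name, ".csv")
grepl("^[0-9]{6}_my_results\\.csv$", complete_file_name)  # TRUE
```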
|
/R/save_csv_carl.R
|
no_license
|
cfbeuchel/CarlHelpR
|
R
| false
| false
| 3,635
|
r
|
library(data.table)
library(jsonlite)
library(optparse)
# make an options parsing list
option_list = list(
make_option(c("-u", "--user"), type="character", default=NULL,
help="user_id", metavar="character")
);
opt_parser = OptionParser(option_list=option_list);
opt = parse_args(opt_parser);
# get the arguments that were passed in
user_id<-opt$user
# function that will submit a given string and return the % matched
submitCode256<-function(user_id,x){
url <- paste0("https://dsdemo.vmhost.psu.edu/api/nlp/CodeBreak_256?user_id=",user_id,"&x=",x)
read_json(url)[[1]]
}
# function that will make a random string in the form that can be submitted to the API
rand_code<-function(){
set<-c(1,0)
rand_vec<-sample(set,256,replace=T)
rand_string<-paste0(rand_vec,collapse="")
rand_string
}
# function that will invert a given string
flip_code<-function(code){
char_string<-strsplit(code,"")
num_vec<-as.numeric(char_string[[1]])
rev_logic_vec<-!as.logical(num_vec)
rev_num_vec<-rev_logic_vec*1
rev_string<-paste0(rev_num_vec,collapse="")
rev_string
}
# function that will take a bunch of strings and return the most common value in each position
string_vote<-function(binary_strings){
char_string<-strsplit(binary_strings,"")
num_vecs<-lapply(char_string,as.numeric)
code_tab<-data.table(do.call(rbind,num_vecs))
maj_vote<-round(colMeans(code_tab))
vote_string<-paste0(maj_vote,collapse="")
vote_string
}
# submits a random string then inverts it if it isn't above average
getAboveAvg<-function(user_id){
code<-rand_code()
eval<-submitCode256(user_id,code)[[1]]
  # make sure it wasn't the password (this will never happen)
if (is.character(eval)){
eval<-1
}
# invert if below avg
if (eval<0.5){
code<-flip_code(code)
eval<-1-eval
}
DT<-data.table(eval,code)
DT
}
########
##main##
########
wacky_boost<-function(user_id){
DT<-NULL
eval<-0
i=0
while(!is.character(eval)){
    # make a bunch of above-average strings then ensemble them
i<-i+1
out<-getAboveAvg(user_id)
DT<-rbind(DT,out)
ensamble_code<-string_vote(c(DT$code))
eval<-submitCode256(user_id,ensamble_code)[[1]]
    message(paste0("ensembled strings: ",i," fraction correct: ",eval))
}
ensamble_code<-string_vote(c(DT$code))
out<-submitCode256(user_id,ensamble_code)[[1]]
return(out)
}
wacky_boost(user_id)
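The bit-string helpers in the script above are easiest to sanity-check on short inputs. The sketch below restates `flip_code` and `string_vote` (copied from the script, with the `data.table` step dropped since `colMeans` works directly on a matrix) and shows their behavior on tiny examples:

```r
# Restated from the script above so the check is self-contained.
flip_code <- function(code) {
  # split into single characters, invert each bit, re-join
  bits <- as.numeric(strsplit(code, "")[[1]])
  paste0((!as.logical(bits)) * 1, collapse = "")
}
string_vote <- function(binary_strings) {
  # stack the strings row-wise and take the per-position majority
  bits <- lapply(strsplit(binary_strings, ""), as.numeric)
  maj  <- round(colMeans(do.call(rbind, bits)))
  paste0(maj, collapse = "")
}

flip_code("1010")                      # "0101"
string_vote(c("110", "100", "101"))    # "100"
```

Note that a tied position (column mean exactly 0.5) goes through R's `round()`, which rounds half to even, so it resolves to 0; odd numbers of ensembled strings avoid that ambiguity.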
|
/Lectures/9.Script_writing/Scripting_2_wacky_boost.R
|
no_license
|
sharonsunpeng/PSU_Stat_184
|
R
| false
| false
| 2,403
|
r
|
output$pageStub <- renderUI(
fluidPage(useShinyjs(),theme = shinytheme('superhero'),
fluidRow( column( 7, offset = 1, h2("WHO IS MOST LIKELY TO LIVE? ")
)
),
fluidRow ( id = "greyBox", align = "center",imageOutput("grey", click = "grey_click")
),
shinyjs::hidden(
div( id = "ruler",
fluidRow( align = "center", imageOutput("brienne")
),
                 fluidRow ( align = "center",style = 'margin-top: -5%', h3("Brienne of Tarth!")))),
fluidRow( column( 7, offset = 1, h2("WHO IS MOST LIKELY TO DIE?"))),
fluidRow ( id = "greyBox2", align = "center", imageOutput("grey2", click = "grey_click2")),
shinyjs::hidden(
div( id = "dead",
fluidRow( align = "center", imageOutput("sansa")),
                  fluidRow ( align = "center",style = 'margin-top: -5%', h3("Sansa Stark :(")))),
fluidRow(column(3, offset = 1,h2( "Most Likely to Die:")), column(3,offset = 1 ,h2( "50/50 Chance:")), column(3,offset =1,h2( "Likely to Live:"))),
fluidRow( style = 'margin-bottom: 10%', column( 3,offset = 1, h4 ( "1. Sansa Stark"), h4("2. Gilly"), h4("3. Arya Stark"), h4("4. Tormund Giantsbane"), h4("5. Cersei Lannister"), h4("6. Gendry")),
column(3, offset = 1, h4("1. Bran Stark"), h4("2. Samwell Tarly"), h4("3. Tyrion Lannister"), h4("4. Lord Varys"), h4("5. Jon Snow"), h4("6. Sir Davos")),
column ( 3, offset = 1,h4("1. Brienne of Tarth"), h4("2. Jamie Lannister"),h4("3. Jorah Mormont"), h4( "4. Theon Greyjoy"), h4("5. Daenerys Targaryen"), h4("6. Sandor 'The Hound' Clegane"))) ,
              fluidRow( column( 7, offset = 1, tags$em("My model is only based on what I call 'census' data, i.e. hometown, occupation, number of children, etc.")))
))
output$grey = renderImage({
list(src = "click.png", width = 600,
height = 300)},
deleteFile = FALSE)
output$grey2 = renderImage({
list(src = "click.png", width = 600,
height = 300)},
deleteFile = FALSE)
output$brienne = renderImage({
list(src = "brienne.jpg", width = 600,
height = 300)},
deleteFile = FALSE)
output$sansa = renderImage({
list(src = "sansa2.jpg", width = 600,
height = 300)},
deleteFile = FALSE)
observeEvent(input$grey_click, {
shinyjs::show("ruler")
shinyjs::hide("greyBox")
})
observeEvent(input$grey_click2, {
shinyjs::show("dead")
shinyjs::hide("greyBox2")
})
|
/GameofThrones/predictions.R
|
no_license
|
zembrodta/GOT
|
R
| false
| false
| 2,715
|
r
|
rm(list=ls())
library(ggplot2)
library(plyr)
library(dplyr)
library(moments)
home_wd = "/Users/caraebrook/Documents/R/R_repositories/COVID-Ct-Madagascar/Mada-Ct-Distribute/"
main_wd = paste0(home_wd, "/fig-plots/")
setwd(main_wd)
#Ct and distribution fits
mada.df.tot <- read.csv("mada-ct-cross-gp.csv", header = TRUE, stringsAsFactors = F)
mada.df = mada.df.tot
mada.df = subset(mada.df.tot, keep_t==1)
head(mada.df)
mada.df$region <- factor(mada.df$region, levels=c("Atsinanana", "Analamanga", "National"))
mada.df$week_date <- as.Date(mada.df$week_date)
mada.df <- arrange(mada.df,region, t)
colz = c("Analamanga" = "mediumseagreen", "Atsinanana"="tomato", "National"="cornflowerblue")
pa <- ggplot(mada.df) +
geom_violin(aes(week_date,ct, group=t, fill=region), scale="width",
draw_quantiles=c(0.025,0.5,0.975), show.legend = F) +
geom_jitter(aes(x=week_date,y=ct),size=0.1,width=1,height=0) +
facet_grid(region~., switch = "y") + theme_bw() + ylab("weekly Ct distribution") +
theme(axis.title.x = element_blank(), panel.grid = element_blank(),
strip.background = element_rect(fill="white"), plot.margin = unit(c(.1,0,.6,.1), "cm")) +
coord_cartesian(xlim=c(as.Date("2020-03-15"), as.Date("2020-10-01")), ylim=c(40,05)) + scale_y_reverse()
pa
#and plot the fit distributions by the two methods
dist.gp1 <- read.csv(file = "/Users/caraebrook/Documents/R/R_repositories/COVID-Ct-Madagascar/Mada-Ct-Distribute/viro-gp-clust-final/out-dat/dist-df-gp-National.csv")
dist.gp2 <- read.csv(file = "/Users/caraebrook/Documents/R/R_repositories/COVID-Ct-Madagascar/Mada-Ct-Distribute/viro-gp-clust-final/out-dat/dist-df-gp-Analamanga.csv")
dist.gp3 <- read.csv(file = "/Users/caraebrook/Documents/R/R_repositories/COVID-Ct-Madagascar/Mada-Ct-Distribute/viro-gp-clust-final/out-dat/dist-df-gp-Atsinanana.csv")
dist.gp <- rbind(dist.gp1, dist.gp2, dist.gp3)
head(dist.gp)
names(dist.gp)[names(dist.gp)=="district"] <- "region"
#and the seir fits
dist.seir1 <- read.csv(file = "/Users/caraebrook/Documents/R/R_repositories/COVID-Ct-Madagascar/Mada-Ct-Distribute/viro-seir-clust-final/out-dat/distribution-fits-seir-National.csv")
dist.seir2 <- read.csv(file = "/Users/caraebrook/Documents/R/R_repositories/COVID-Ct-Madagascar/Mada-Ct-Distribute/viro-seir-clust-final/out-dat/distribution-fits-seir-Analamanga.csv")
dist.seir3 <- read.csv(file = "/Users/caraebrook/Documents/R/R_repositories/COVID-Ct-Madagascar/Mada-Ct-Distribute/viro-seir-clust-final/out-dat/distribution-fits-seir-Atsinanana.csv")
dist.seir <- rbind(dist.seir1, dist.seir2, dist.seir3)
head(dist.seir)
dist.gp <- dplyr::select(dist.gp, names(dist.seir))
dist.df <- rbind(dist.seir, dist.gp)
head(dist.df)
dist.df <- dplyr::select(dist.df, ct, t, lower_expec,median_expec,upper_expec,region,date,fit)
dist.df$date <- as.Date(dist.df$date)
#and plot with data
head(mada.df)
region_name = "National"
join.df <- dplyr::select(mada.df, region, ct,week_date, t)
names(join.df)[names(join.df)=="week_date"] <- "date"
head(join.df)
merge.df <- dplyr::select(dist.df, t, fit, region, lower_expec, median_expec, upper_expec)
date.key <- ddply(mada.df, .(t), summarise, date = unique(week_date))
dist.df <- dplyr::select(dist.df, -(date))
dist.df <- merge(dist.df, date.key, by="t", all.x = T)
dist.df$date_name <- paste0(lubridate::month(dist.df$date, label=T), "-", lubridate::day(dist.df$date))
join.df$date_name <- paste0(lubridate::month(join.df$date, label=T), "-", lubridate::day(join.df$date))
dist.df <- arrange(dist.df, date)
dist.df$date_name<- factor(dist.df$date_name, levels=unique(dist.df$date_name))
join.df$date_name<- factor(join.df$date_name, levels=unique(join.df$date_name))
join.df$plot_date <- dist.df$plot_date <- 0
join.df$plot_date[join.df$date==as.Date("2020-05-04") |
#join.df$date==as.Date("2020-06-08") |
#join.df$date==as.Date("2020-06-15")|
join.df$date==as.Date("2020-07-06")|
#join.df$date==as.Date("2020-07-13")|
#join.df$date==as.Date("2020-07-27")|
join.df$date==as.Date("2020-08-17")] <- 1#|
#join.df$date==as.Date("2020-08-24")|
#join.df$date==as.Date("2020-09-14")] <- 1
dist.df$plot_date[dist.df$date==as.Date("2020-05-04") |
#dist.df$date==as.Date("2020-06-08") |
#dist.df$date==as.Date("2020-06-15")|
dist.df$date==as.Date("2020-07-06")|
#dist.df$date==as.Date("2020-07-13")|
#dist.df$date==as.Date("2020-07-27")|
dist.df$date==as.Date("2020-08-17") ] <- 1#|
#dist.df$date==as.Date("2020-08-24")|
#dist.df$date==as.Date("2020-09-14")] <- 1
dist.df$fit[dist.df$fit=="seir"] <- "Ct-SEIR"
dist.df$fit[dist.df$fit=="gp"] <- "Ct-GP"
colz=c("Ct-GP"="firebrick", "Ct-SEIR" ="purple")
p.nat <- ggplot(data = subset(join.df, region==region_name & plot_date==1)) + #
geom_histogram(aes(ct), fill="cornflowerblue") + facet_wrap(~date_name,ncol=1) + scale_color_manual(values = colz) + scale_fill_manual(values = colz) +
geom_line(data = subset(dist.df, region==region_name & plot_date==1), aes(x=ct, y=median_expec, color=fit)) +#& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
geom_ribbon(data = subset(dist.df, region==region_name & plot_date==1), aes(x=ct, ymin=lower_expec, ymax=upper_expec, fill=fit), alpha=.5) + #& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
theme_bw() + theme(panel.grid = element_blank(), legend.title = element_blank(), plot.margin = unit(c(.1,.1,.2,.5), "cm"),
legend.position = c(.18,.24), strip.background = element_rect(fill="white"))+ xlab("Ct") +
ylab("weekly count (National)") + scale_y_continuous(position="right")
p.nat
#and load in the skew plots
#load Ct data
#mada.df.tot <- read.csv(paste0(home_wd,"viro-gp-clust/data/mada-ct-cross-gp.csv"), header = TRUE, stringsAsFactors = F)
#mada.df.tot = subset(mada.df.tot, keep_t==1)
#head(mada.df.tot)
mada.skew <- ddply(mada.df.tot, .(region, week_date, t), summarise, median_ct = median(ct), skew_ct = skewness(ct) )
names(mada.skew)[names(mada.skew)=="week_date"] <- "date"
head(mada.skew)
#and join with Rt
dat <- read.csv(file="epinow2-estimates.csv", stringsAsFactors = F, header = T)
dat.Rt.IPM = subset(dat, variable=="growth_rate" & fit=="EpiNow2-IPM-dat")
dat.Rt.reported = subset(dat, variable=="R" & fit=="EpiNow2-Reported")
dat.Rt.IPM$median_ct <- dat.Rt.IPM$skew_ct <- dat.Rt.reported$median_ct <- dat.Rt.reported$skew_ct <- NA
for(i in 1:length(mada.skew$region)){
dat.Rt.IPM$median_ct[dat.Rt.IPM$region==mada.skew$region[i] & dat.Rt.IPM$date==mada.skew$date[i]] <- mada.skew$median_ct[i]
dat.Rt.IPM$skew_ct[dat.Rt.IPM$region==mada.skew$region[i] & dat.Rt.IPM$date==mada.skew$date[i]] <- mada.skew$skew_ct[i]
dat.Rt.reported$median_ct[dat.Rt.reported$region==mada.skew$region[i] & dat.Rt.reported$date==mada.skew$date[i]] <- mada.skew$median_ct[i]
dat.Rt.reported$skew_ct[dat.Rt.reported$region==mada.skew$region[i] & dat.Rt.reported$date==mada.skew$date[i]] <- mada.skew$skew_ct[i]
}
#and plot
dat.Rt <- dat.Rt.IPM # rbind(dat.Rt.IPM, dat.Rt.reported)
dat.Rt$region <- factor(dat.Rt$region, levels=c("Atsinanana", "Analamanga", "National"))
pskew <- ggplot(data=dat.Rt) +
geom_point(aes(x=skew_ct, y=median_ct,
color=median), size=3) +
scale_y_reverse()+
scale_color_gradient(low="purple", high="goldenrod", name="growth rate")+
facet_grid(region~.) + theme_bw() +
theme(panel.grid = element_blank(), legend.position = c(.13,.2),
legend.text = element_text(size=7),legend.title = element_text(size=7),
strip.background = element_blank(), plot.margin = unit(c(.1,.1,.1,.7), "cm"),
strip.text = element_blank())+# element_rect(fill="white")) +
xlab("skewness of Ct distribution") +
ylab("median of Ct distribution") + scale_y_continuous(position="right") +
coord_cartesian(ylim=c(40,10), xlim=c(-2,.8))
pskew
pleft=cowplot::plot_grid(pa,pskew, ncol = 2, rel_widths = c(1,.7), labels = c("A.", "B."), hjust = c(0,-.3) )
pleft
Fig3 <- cowplot::plot_grid(pleft,p.nat, ncol=2, nrow = 1, labels=c("", "C."), rel_widths = c(1,.4), hjust = c(-.5,.1))
ggsave(file = "Fig3.png",
plot = Fig3,
units="mm",
width=100,
height=60,
scale=3,
dpi=200)
#and the others as supplement
region_name="National"
p.nat2 <- ggplot(data = subset(join.df, region==region_name & date>as.Date("2020-05-01") & date < as.Date("2020-09-20"))) + #
geom_histogram(aes(ct), fill="cornflowerblue") + facet_wrap(~date_name,ncol=5) + scale_color_manual(values = colz) + scale_fill_manual(values = colz) +
geom_line(data = subset(dist.df, region==region_name& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")), aes(x=ct, y=median_expec, color=fit)) +#& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
geom_ribbon(data = subset(dist.df, region==region_name& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")), aes(x=ct, ymin=lower_expec, ymax=upper_expec, fill=fit), alpha=.5) + #& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
theme_bw() + theme(panel.grid = element_blank(), legend.title = element_blank(), plot.margin = unit(c(.1,.1,.1,.1), "cm"),
legend.position = c(.93,.15), strip.background = element_rect(fill="white"))+ xlab("Ct") +
ylab("weekly count (National)")
p.nat2
ggsave(file = "FigS9.png",
plot = p.nat2,
units="mm",
width=70,
height=60,
scale=3,
dpi=200)
region_name="Analamanga"
p.anala <- ggplot(data = subset(join.df, region==region_name & date>as.Date("2020-05-01") & date < as.Date("2020-09-20"))) + #
geom_histogram(aes(ct), fill="mediumseagreen") + facet_wrap(~date_name,ncol=5) + scale_color_manual(values = colz) + scale_fill_manual(values = colz) +
geom_line(data = subset(dist.df, region==region_name& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")), aes(x=ct, y=median_expec, color=fit)) +#& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
geom_ribbon(data = subset(dist.df, region==region_name& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")), aes(x=ct, ymin=lower_expec, ymax=upper_expec, fill=fit), alpha=.5) + #& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
theme_bw() + theme(panel.grid = element_blank(), legend.title = element_blank(), plot.margin = unit(c(.1,.1,.1,.1), "cm"),
legend.position = c(.93,.15), strip.background = element_rect(fill="white"))+ xlab("Ct") +
ylab("weekly count (Analamanga)")
p.anala
ggsave(file = "FigS8.png",
plot = p.anala,
units="mm",
width=70,
height=60,
scale=3,
dpi=200)
region_name="Atsinanana"
p.atsin <- ggplot(data = subset(join.df, region==region_name & date>as.Date("2020-05-01") & date < as.Date("2020-09-20"))) + #
geom_histogram(aes(ct), fill="tomato") + facet_wrap(~date_name,ncol=5) + scale_color_manual(values = colz) + scale_fill_manual(values = colz) +
geom_line(data = subset(dist.df, region==region_name& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")), aes(x=ct, y=median_expec, color=fit)) +#& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
geom_ribbon(data = subset(dist.df, region==region_name& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")), aes(x=ct, ymin=lower_expec, ymax=upper_expec, fill=fit), alpha=.5) + #& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
theme_bw() + theme(panel.grid = element_blank(), legend.title = element_blank(), plot.margin = unit(c(.1,.1,.1,.1), "cm"),
legend.position = c(.93,.15), strip.background = element_rect(fill="white"))+ xlab("Ct") +
ylab("weekly count (Atsinanana)")
p.atsin
ggsave(file = "FigS7.png",
plot = p.atsin,
units="mm",
width=70,
height=60,
scale=3,
dpi=200)
|
/fig-plots/Fig3-S7-S8-S9.R
|
no_license
|
carabrook/Mada-Ct-Distribute
|
R
| false
| false
| 12,197
|
r
|
rm(list=ls())
library(ggplot2)
library(plyr)
library(dplyr)
library(moments)
home_wd = "/Users/caraebrook/Documents/R/R_repositories/COVID-Ct-Madagascar/Mada-Ct-Distribute/"
main_wd = paste0(home_wd, "/fig-plots/")
setwd(main_wd)
#Ct and distribution fits
mada.df.tot <- read.csv("mada-ct-cross-gp.csv", header = TRUE, stringsAsFactors = F)
mada.df = mada.df.tot
mada.df = subset(mada.df.tot, keep_t==1)
head(mada.df)
mada.df$region <- factor(mada.df$region, levels=c("Atsinanana", "Analamanga", "National"))
mada.df$week_date <- as.Date(mada.df$week_date)
mada.df <- arrange(mada.df,region, t)
colz = c("Analamanga" = "mediumseagreen", "Atsinanana"="tomato", "National"="cornflowerblue")
pa <- ggplot(mada.df) +
geom_violin(aes(week_date,ct, group=t, fill=region), scale="width",
draw_quantiles=c(0.025,0.5,0.975), show.legend = F) +
geom_jitter(aes(x=week_date,y=ct),size=0.1,width=1,height=0) +
facet_grid(region~., switch = "y") + theme_bw() + ylab("weekly Ct distribution") +
theme(axis.title.x = element_blank(), panel.grid = element_blank(),
strip.background = element_rect(fill="white"), plot.margin = unit(c(.1,0,.6,.1), "cm")) +
coord_cartesian(xlim=c(as.Date("2020-03-15"), as.Date("2020-10-01")), ylim=c(40,05)) + scale_y_reverse()
pa
#and plot the fit distributions by the two methods
dist.gp1 <- read.csv(file = "/Users/caraebrook/Documents/R/R_repositories/COVID-Ct-Madagascar/Mada-Ct-Distribute/viro-gp-clust-final/out-dat/dist-df-gp-National.csv")
dist.gp2 <- read.csv(file = "/Users/caraebrook/Documents/R/R_repositories/COVID-Ct-Madagascar/Mada-Ct-Distribute/viro-gp-clust-final/out-dat/dist-df-gp-Analamanga.csv")
dist.gp3 <- read.csv(file = "/Users/caraebrook/Documents/R/R_repositories/COVID-Ct-Madagascar/Mada-Ct-Distribute/viro-gp-clust-final/out-dat/dist-df-gp-Atsinanana.csv")
dist.gp <- rbind(dist.gp1, dist.gp2, dist.gp3)
head(dist.gp)
names(dist.gp)[names(dist.gp)=="district"] <- "region"
#and the seir fits
dist.seir1 <- read.csv(file = "/Users/caraebrook/Documents/R/R_repositories/COVID-Ct-Madagascar/Mada-Ct-Distribute/viro-seir-clust-final/out-dat/distribution-fits-seir-National.csv")
dist.seir2 <- read.csv(file = "/Users/caraebrook/Documents/R/R_repositories/COVID-Ct-Madagascar/Mada-Ct-Distribute/viro-seir-clust-final/out-dat/distribution-fits-seir-Analamanga.csv")
dist.seir3 <- read.csv(file = "/Users/caraebrook/Documents/R/R_repositories/COVID-Ct-Madagascar/Mada-Ct-Distribute/viro-seir-clust-final/out-dat/distribution-fits-seir-Atsinanana.csv")
dist.seir <- rbind(dist.seir1, dist.seir2, dist.seir3)
head(dist.seir)
dist.gp <- dplyr::select(dist.gp, names(dist.seir))
dist.df <- rbind(dist.seir, dist.gp)
head(dist.df)
dist.df <- dplyr::select(dist.df, ct, t, lower_expec,median_expec,upper_expec,region,date,fit)
dist.df$date <- as.Date(dist.df$date)
#and plot with data
head(mada.df)
region_name = "National"
join.df <- dplyr::select(mada.df, region, ct,week_date, t)
names(join.df)[names(join.df)=="week_date"] <- "date"
head(join.df)
merge.df <- dplyr::select(dist.df, t, fit, region, lower_expec, median_expec, upper_expec)
date.key <- ddply(mada.df, .(t), summarise, date = unique(week_date))
dist.df <- dplyr::select(dist.df, -(date))
dist.df <- merge(dist.df, date.key, by="t", all.x = T)
dist.df$date_name <- paste0(lubridate::month(dist.df$date, label=T), "-", lubridate::day(dist.df$date))
join.df$date_name <- paste0(lubridate::month(join.df$date, label=T), "-", lubridate::day(join.df$date))
dist.df <- arrange(dist.df, date)
dist.df$date_name<- factor(dist.df$date_name, levels=unique(dist.df$date_name))
join.df$date_name<- factor(join.df$date_name, levels=unique(join.df$date_name))
join.df$plot_date <- dist.df$plot_date <- 0
join.df$plot_date[join.df$date==as.Date("2020-05-04") |
#join.df$date==as.Date("2020-06-08") |
#join.df$date==as.Date("2020-06-15")|
join.df$date==as.Date("2020-07-06")|
#join.df$date==as.Date("2020-07-13")|
#join.df$date==as.Date("2020-07-27")|
join.df$date==as.Date("2020-08-17")] <- 1#|
#join.df$date==as.Date("2020-08-24")|
#join.df$date==as.Date("2020-09-14")] <- 1
dist.df$plot_date[dist.df$date==as.Date("2020-05-04") |
#dist.df$date==as.Date("2020-06-08") |
#dist.df$date==as.Date("2020-06-15")|
dist.df$date==as.Date("2020-07-06")|
#dist.df$date==as.Date("2020-07-13")|
#dist.df$date==as.Date("2020-07-27")|
dist.df$date==as.Date("2020-08-17") ] <- 1#|
#dist.df$date==as.Date("2020-08-24")|
#dist.df$date==as.Date("2020-09-14")] <- 1
dist.df$fit[dist.df$fit=="seir"] <- "Ct-SEIR"
dist.df$fit[dist.df$fit=="gp"] <- "Ct-GP"
colz=c("Ct-GP"="firebrick", "Ct-SEIR" ="purple")
p.nat <- ggplot(data = subset(join.df, region==region_name & plot_date==1)) + #
geom_histogram(aes(ct), fill="cornflowerblue") + facet_wrap(~date_name,ncol=1) + scale_color_manual(values = colz) + scale_fill_manual(values = colz) +
geom_line(data = subset(dist.df, region==region_name & plot_date==1), aes(x=ct, y=median_expec, color=fit)) +#& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
geom_ribbon(data = subset(dist.df, region==region_name & plot_date==1), aes(x=ct, ymin=lower_expec, ymax=upper_expec, fill=fit), alpha=.5) + #& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
theme_bw() + theme(panel.grid = element_blank(), legend.title = element_blank(), plot.margin = unit(c(.1,.1,.2,.5), "cm"),
legend.position = c(.18,.24), strip.background = element_rect(fill="white"))+ xlab("Ct") +
ylab("weekly count (National)") + scale_y_continuous(position="right")
p.nat
#and load in the skew plots
#load Ct data
#mada.df.tot <- read.csv(paste0(home_wd,"viro-gp-clust/data/mada-ct-cross-gp.csv"), header = TRUE, stringsAsFactors = F)
#mada.df.tot = subset(mada.df.tot, keep_t==1)
#head(mada.df.tot)
mada.skew <- ddply(mada.df.tot, .(region, week_date, t), summarise, median_ct = median(ct), skew_ct = skewness(ct) )
names(mada.skew)[names(mada.skew)=="week_date"] <- "date"
head(mada.skew)
#and join with Rt
dat <- read.csv(file="epinow2-estimates.csv", stringsAsFactors = F, header = T)
dat.Rt.IPM = subset(dat, variable=="growth_rate" & fit=="EpiNow2-IPM-dat")
dat.Rt.reported = subset(dat, variable=="R" & fit=="EpiNow2-Reported")
dat.Rt.IPM$median_ct <- dat.Rt.IPM$skew_ct <- dat.Rt.reported$median_ct <- dat.Rt.reported$skew_ct <- NA
for(i in 1:length(mada.skew$region)){
dat.Rt.IPM$median_ct[dat.Rt.IPM$region==mada.skew$region[i] & dat.Rt.IPM$date==mada.skew$date[i]] <- mada.skew$median_ct[i]
dat.Rt.IPM$skew_ct[dat.Rt.IPM$region==mada.skew$region[i] & dat.Rt.IPM$date==mada.skew$date[i]] <- mada.skew$skew_ct[i]
dat.Rt.reported$median_ct[dat.Rt.reported$region==mada.skew$region[i] & dat.Rt.reported$date==mada.skew$date[i]] <- mada.skew$median_ct[i]
dat.Rt.reported$skew_ct[dat.Rt.reported$region==mada.skew$region[i] & dat.Rt.reported$date==mada.skew$date[i]] <- mada.skew$skew_ct[i]
}
#and plot
dat.Rt <-dat.Rt.IPM# rbind(dat.Rt.IPM, dat.Rt.reported)
dat.Rt$region <- factor(dat.Rt$region, levels=c("Atsinanana", "Analamanga", "National"))
pskew <- ggplot(data=dat.Rt) +
geom_point(aes(x=skew_ct, y=median_ct,
color=median), size=3) +
scale_y_reverse()+
scale_color_gradient(low="purple", high="goldenrod", name="growth rate")+
facet_grid(region~.) + theme_bw() +
theme(panel.grid = element_blank(), legend.position = c(.13,.2),
legend.text = element_text(size=7),legend.title = element_text(size=7),
strip.background = element_blank(), plot.margin = unit(c(.1,.1,.1,.7), "cm"),
strip.text = element_blank())+# element_rect(fill="white")) +
xlab("skewness of Ct distribution") +
ylab("median of Ct distribution") + scale_y_continuous(position="right") +
coord_cartesian(ylim=c(40,10), xlim=c(-2,.8))
pskew
pleft=cowplot::plot_grid(pa,pskew, ncol = 2, rel_widths = c(1,.7), labels = c("A.", "B."), hjust = c(0,-.3) )
pleft
Fig3 <- cowplot::plot_grid(pleft,p.nat, ncol=2, nrow = 1, labels=c("", "C."), rel_widths = c(1,.4), hjust = c(-.5,.1))
ggsave(file = "Fig3.png",
plot = Fig3,
units="mm",
width=100,
height=60,
scale=3,
dpi=200)
#and the others as supplement
#and the others as supplement
region_name="National"
p.nat2 <- ggplot(data = subset(join.df, region==region_name & date>as.Date("2020-05-01") & date < as.Date("2020-09-20"))) + #
geom_histogram(aes(ct), fill="cornflowerblue") + facet_wrap(~date_name,ncol=5) + scale_color_manual(values = colz) + scale_fill_manual(values = colz) +
geom_line(data = subset(dist.df, region==region_name& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")), aes(x=ct, y=median_expec, color=fit)) +#& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
geom_ribbon(data = subset(dist.df, region==region_name& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")), aes(x=ct, ymin=lower_expec, ymax=upper_expec, fill=fit), alpha=.5) + #& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
theme_bw() + theme(panel.grid = element_blank(), legend.title = element_blank(), plot.margin = unit(c(.1,.1,.1,.1), "cm"),
legend.position = c(.93,.15), strip.background = element_rect(fill="white"))+ xlab("Ct") +
ylab("weekly count (National)")
p.nat2
ggsave(file = "FigS9.png",
plot = p.nat2,
units="mm",
width=70,
height=60,
scale=3,
dpi=200)
region_name="Analamanga"
p.anala <- ggplot(data = subset(join.df, region==region_name & date>as.Date("2020-05-01") & date < as.Date("2020-09-20"))) + #
geom_histogram(aes(ct), fill="mediumseagreen") + facet_wrap(~date_name,ncol=5) + scale_color_manual(values = colz) + scale_fill_manual(values = colz) +
geom_line(data = subset(dist.df, region==region_name& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")), aes(x=ct, y=median_expec, color=fit)) +#& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
geom_ribbon(data = subset(dist.df, region==region_name& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")), aes(x=ct, ymin=lower_expec, ymax=upper_expec, fill=fit), alpha=.5) + #& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
theme_bw() + theme(panel.grid = element_blank(), legend.title = element_blank(), plot.margin = unit(c(.1,.1,.1,.1), "cm"),
legend.position = c(.93,.15), strip.background = element_rect(fill="white"))+ xlab("Ct") +
ylab("weekly count (Analamanga)")
p.anala
ggsave(file = "FigS8.png",
plot = p.anala,
units="mm",
width=70,
height=60,
scale=3,
dpi=200)
region_name="Atsinanana"
p.atsin <- ggplot(data = subset(join.df, region==region_name & date>as.Date("2020-05-01") & date < as.Date("2020-09-20"))) + #
geom_histogram(aes(ct), fill="tomato") + facet_wrap(~date_name,ncol=5) + scale_color_manual(values = colz) + scale_fill_manual(values = colz) +
geom_line(data = subset(dist.df, region==region_name& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")), aes(x=ct, y=median_expec, color=fit)) +#& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
geom_ribbon(data = subset(dist.df, region==region_name& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")), aes(x=ct, ymin=lower_expec, ymax=upper_expec, fill=fit), alpha=.5) + #& date>as.Date("2020-05-01") & date < as.Date("2020-09-20")
theme_bw() + theme(panel.grid = element_blank(), legend.title = element_blank(), plot.margin = unit(c(.1,.1,.1,.1), "cm"),
legend.position = c(.93,.15), strip.background = element_rect(fill="white"))+ xlab("Ct") +
ylab("weekly count (Atsinanana)")
p.atsin
ggsave(file = "FigS7.png",
plot = p.atsin,
units="mm",
width=70,
height=60,
scale=3,
dpi=200)
|
# Code used to create charts for 1.2 Grammar of Graphics
# Not for distribution to students
library(tidyverse) # Makes tidyverse accessible to this script
baseball <- read_csv("/Users/mchapple/Desktop/baseball.csv")
baseball <- baseball %>%
gather(year, wins, -Team) %>%
rename(team=Team)
baseball$year <- as.integer(baseball$year)
baseball$team <- as.factor(baseball$team)
ggplot(data=baseball, mapping=aes(x=year, y=wins)) +
geom_point()
ggplot(data=baseball, mapping=aes(x=year, y=wins)) +
geom_point(mapping=aes(shape=team))
ggplot(data=baseball, mapping=aes(x=year, y=wins)) +
geom_point(mapping=aes(color=team))
ggplot(data=baseball, mapping=aes(x=year, y=wins)) +
geom_line(mapping=aes(color=team))
baseball %>%
filter(team=='New York Mets' | team=='New York Yankees') %>%
ggplot(mapping=aes(x=year, y=wins)) +
geom_line(mapping=aes(color=team))
baseball %>%
filter(team=='New York Mets' | team=='New York Yankees') %>%
ggplot(mapping=aes(x=year, y=wins)) +
geom_col(mapping=aes(fill=team), position="dodge") +
coord_cartesian(ylim=c(60,105))
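A hedged side note (not in the original course script): `gather()` is superseded in newer tidyr releases by `pivot_longer()`. An equivalent of the reshaping step above, using a toy wide table that assumes the same Team/one-column-per-year layout:

```r
library(tidyverse)

# toy stand-in for the wide baseball data (columns: Team, 2018, 2019)
baseball_wide <- tibble(
  Team = c("New York Mets", "New York Yankees"),
  `2018` = c(77L, 100L),
  `2019` = c(86L, 103L)
)

# pivot_longer() replaces gather(year, wins, -Team) %>% rename(team = Team)
baseball_long <- baseball_wide %>%
  pivot_longer(-Team, names_to = "year", values_to = "wins") %>%
  rename(team = Team) %>%
  mutate(year = as.integer(year), team = as.factor(team))
```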
|
/R/ggplot2 - LinkedIn Learning/1_2_examples.r
|
no_license
|
robbyjeffries1/DataSciencePortfolio
|
R
| false
| false
| 1,085
|
r
|
|
library(ssvd)
### Name: ssvd
### Title: Sparse SVD
### Aliases: ssvd
### Keywords: sparse SVD iterative thresholding
### ** Examples
ssvd(matrix(rnorm(2^15),2^7,2^8), method = "method")
|
/data/genthat_extracted_code/ssvd/examples/ssvd.Rd.R
|
no_license
|
surayaaramli/typeRrh
|
R
| false
| false
| 193
|
r
|
|
# VideoData
rm(list=ls())
setwd("~/Jasmine uni/Imperial/Winter project/")
library(igraph)
library(dplyr)
library(plyr)
##HB=hidden badge, VB=visible badge
#Read in RFID-fitted cage data from every aviary:
total_interaction<- read.csv("~/Jasmine uni/Imperial/Winter project/total_interaction2.csv")
data<-total_interaction
str(data)
badge<-read.csv("Badge1.csv")
##this is each individual's badge data
Result_Dataset <- data.frame(NULL)
RData<-data.frame(NULL)
##every time you run line 14, the table empties, so only run when practicing
#Create a for loop to calculate measures of centrality (degree, closeness, betweenness)
#for each occasion (j), each aviary (i) and every individual. el creates a matrix that
#extracts the two columns of individuals. graph.data.frame pairs individuals from the
#same row to show they interact. directed=FALSE means neither individual initiated the
#interaction. NewDat is a data frame containing the measures of centrality calculated
#for each individual; its row names are copied into the ID column and the extra first
#column is then dropped. rbind adds this new data frame (NewDat) to the bottom of the
#results table.
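As a minimal illustration of the igraph centrality calls used in the loop below (a toy graph, not the aviary data):

```r
library(igraph)

# tiny undirected graph: A-B, B-C, A-C, C-D
g <- graph_from_data_frame(
  data.frame(from = c("A", "B", "A", "C"),
             to   = c("B", "C", "C", "D")),
  directed = FALSE)

degree(g)       # number of edges per node; D has 1, C has 3
betweenness(g)  # C sits on every path to D, so it scores highest
closeness(g)    # based on the inverse of total distance to all other nodes
```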
library(lme4)
kk<-seq(1:100)
for (k in kk){
Result_Dataset <- data.frame(NULL)
for (j in unique(data$occasion)){
a <- subset(data, occasion == j)
av <- unique(a$aviary)
occ <- j
for (i in av){
b<-data.frame(NULL)
b <- subset(a, a$aviary == i)
len<-length(b$id1)
Posid1<-unique(b$id1)
Posid2<-unique(b$id2)
rid1<-sample(Posid1, size=len, replace=TRUE)
rid2<-sample(Posid2, size=len, replace=TRUE)
df<-data.frame(rid1,rid2)
el<-as.matrix(df)
# the above is the permutation
net <- graph.data.frame(el, directed=FALSE)
E(net)$weight <- 1
net2 <- simplify(net, edge.attr.comb = list(weight="sum"))
nG <- simplify(net2, edge.attr.comb = list(weight="sum"))
NewDat <- NULL
NewDat <- as.data.frame(degree(nG))
NewDat$ID <- row.names(NewDat)
NewDat$ocassion <- paste(j, sep = " ")
NewDat$Av <- paste(i, sep="")
NewDat$Degree <- degree(nG)
NewDat$Strength <- strength(nG)
NewDat$Closeness <- closeness(nG)
NewDat$Betweenness <- betweenness(nG)
NewDat$Hb<-badge$ahb[match(NewDat$ID,badge$ï..Transponder)]
# the next two are variables on network - that means they generate the same value for each entry for that network.
NewDat$Rdensity<-c(rep(edge_density(nG),length(strength(nG))))
NewDat$RNVertices<-c(rep(gorder(nG),length(strength(nG))))
NewDat <- NewDat[,-1]
#Saving each file under a different name.
assign(paste("Event",i, "Dataset", sep = ""), NewDat )
#Compiling all the data.
Result_Dataset <- rbind(Result_Dataset,NewDat)
}
}
##subset so only have males and occasion one as I am not interested in females or sampling occasion 2
Maleshb<-subset(Result_Dataset, Result_Dataset$ocassion!="2")
Males2hb<-subset(Maleshb, Maleshb$Hb!="NA")
##now to do the cor.test with new males data.frame, for badge vs sociality. This will be included in the loop
RslopeD<-cor.test(Males2hb$Hb, Males2hb$Degree, alternative=c("greater"), method=c("pearson"))
rD<-data.frame(NULL)
rD<-as.numeric(RslopeD$estimate)
RslopeC<-cor.test(Males2hb$Hb, Males2hb$Closeness, alternative=c("greater"), method=c("pearson"))
rC<-data.frame(NULL)
rC<-as.numeric(RslopeC$estimate)
RslopeB<-cor.test(Males2hb$Hb, Males2hb$Betweenness, alternative=c("greater"), method=c("pearson"))
rB<-data.frame(NULL)
rB<-as.numeric(RslopeB$estimate)
##fills new table with permutation correlation data, (RSlope) for betweenness, closeness and
##degree
TempR <- NULL
TempR <- as.data.frame(rD[[1]])
TempR$RslopeD<-rD
TempR$RslopeB<-rB
TempR$RslopeC<-rC
TempR<-TempR[,-1]
RData <-rbind(RData, TempR)
}
write.csv(RData, "SlopeSimulationHB.csv")
##here this is for HB. change code to VB when applicable
##run through whole code to get 1000 permutations
#get quantile of badge data (at 5% as my data has negative correlation)
degreeh<-quantile(RData$RslopeD, probs = 0.05)
degreeh
#so the slope at 5% =-0.1360786 for the random permutations of degree and HB
#sort permutation data into ranked order
Ah<-sort(RData$RslopeD)
Ah
##My actual observed slope for degree = -0.1318404, which just comes outside the
#5% quantile (degreeh)
##When comparing to the ranked data the observed slope fits in at rank 55, so p =0.055/0.06
#and not significant
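The rank-based p-values described in these comments can be computed directly: the empirical p-value is the proportion of permuted slopes at least as extreme as the observed one. A sketch with made-up numbers (not the real permutation output):

```r
set.seed(1)
perm_slopes <- rnorm(1000, mean = 0, sd = 0.1)  # stand-in for 1000 permuted correlations
obs_slope <- -0.13                              # stand-in observed correlation

# one-sided test for a negative association:
# proportion of permuted slopes <= the observed slope
p_emp <- mean(perm_slopes <= obs_slope)
```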
#For hidden badge
closenessh<-quantile(RData$RslopeC, probs = 0.05)
closenessh
#so the slope at 5% =-0.1276615 for the random permutations of closeness and badge
hh<-sort(RData$RslopeC)
hh
##My actual observed slope for CLOSENESS = -0.0510324 which is inside the
#5% quantile number (closenessh)
##When comparing to the ranked data the observed slope fits in at rank 210, so p =0.21
#and not significant
#For hidden badge
betweennessh<-quantile(RData$RslopeB, probs = 0.05)
betweennessh
#so the slope at 5% =-0.2401781 for the random permutations of closeness and badge
bb<-sort(RData$RslopeB)
bb
##My actual observed slope for betweenness = -0.1624953 which is inside the
#5% quantile number (betweennessh)
##When comparing to the ranked data the observed slope fits in at rank 163, so p =0.163
#and not significant
##for visible badge
degree<-quantile(RData$RslopeD, probs = 0.05)
degree
#so the slope at 5% = -0.038601 for the random permutations of degree and badge
A<-sort(RData$RslopeD)
A
##My actual observed slope for degree = -0.08744, which comes outside the
#5% quantile number
##When ranking the permutation data, the observed slope comes in at 17.
#(from lowest to highest)
##therefore, less than 1 in 50 chance that this happened by chance p<0.02
betweenness<-quantile(RData$RslopeB, probs = 0.05)
betweenness
##so slope at 5% for the random permutations of betweenness and badge= -0.227935
B<-sort(RData$RslopeB)
B
#My actual observed slope for betweenness =-0.0906, which is greater than the 5% quantile
#number of the random permutations and is therefore inside the range of data - not significant
##When ranking this permutation data, the observed slope comes in at 294.
#294 out of 1000 = 0.294 chance of this coming up. So this is greater than 0.05
##not significant p=0.294
closeness<-quantile(RData$RslopeC, probs= 0.05)
closeness
##so slope at 5% for the random permutations of badge and closeness is -0.1844579
C<-sort(RData$RslopeC)
C
##My actual observed slope for closeness is 0.00906, which is greater than the 5% quantile
##number of random permutations and is therefore inside the range of data - not significant
##When ranking the permutation data, the observed slope comes in at 410. Therefore, the p
#value is 410/1000 = p=0.410
par(mfrow = c(1, 3))
#Plot permutation data
##plot of random permutation of badge vs Degree (1000)
plot(density(RData$RslopeD), xlim=c(-0.5,0.5), main="", xlab="Degree")
#plot the slope of my actual data
lines(x=c(-0.1318404,-0.1318404), y=c(0,7), col="red")
#plot mean of random permutation slope
#lines(x=c(0.1196,0.1196), y=c(0,7), col="blue", lty=2)
##plot of random permutation for badge vs closeness (1000)
plot(density(RData$RslopeC), xlim=c(-0.5,0.5), main="", xlab="Closeness")
#slope of observed data
lines(x=c(-0.0510324,-0.0510324), y=c(0,7), col="red")
##mean of RsclopeC
#lines(x=c(0.0228,0.0228), y=c(0,70), col="blue", lty=2)
##plot the random permutations for badge vs betweenness (1000)
plot(density(RData$RslopeB), xlim=c(-0.5,0.5), main="", xlab="Betweenness")
#slope of data
lines(x=c(-0.1624953,-0.1624953), y=c(0,7), col="red")
#mean of RslopeB data
#lines(x=c(-0.01968,-0.01968), y=c(0,7), col="blue", lty=2)
#lines(x=c(0.60,0.60), y=c(0,7.9), col="red", lty=2)
|
/SNASimulations.R
|
no_license
|
j-somerville/MResEECWinter2018
|
R
| false
| false
| 8,019
|
r
|
|
#' Fast flight phase for the cube method modified
#'
#' @description
#'
#' implementation modified from the package sampling.
#'
#' @param X matrix of auxiliary variables.
#' @param pik vector of inclusion probabilities.
#' @param order order to rearrange the data. Default 1
#' @param comment bool, if comment should be written.
#'
#' @return a vector of updated inclusion probabilities (\code{pikstar}).
#' @export
#'
#' @examples
#' \dontrun{
#'
#' # Matrix of balancing variables
#' X=cbind(c(1,1,1,1,1,1,1,1,1),c(1,2,3,4,5,6,7,8,9))
#' # Vector of inclusion probabilities.
#' # The sample size is 3.
#' pik=c(1/3,1/3,1/3,1/3,1/3,1/3,1/3,1/3,1/3)
#' # pikstar is almost a balanced sample and is ready for the landing phase
#' pikstar=fastflightcube(X,pik,order=1,comment=TRUE)
#' round(pikstar,9)
#'
#'
#'
#'
#'
#' rm(list = ls())
#' set.seed(1)
#' eps <- 1e-13
#' library(Matrix)
#' N <- 300
#' Pik <- matrix(c(sampling::inclusionprobabilities(runif(N),70),
#' sampling::inclusionprobabilities(runif(N),50),
#' sampling::inclusionprobabilities(runif(N),30)),ncol = 3)
#' X <- PM(Pik)$PM
#' pik <- PM(Pik)$P
#' dim(X)
#' order = 2
#' EPS = 1e-11
#'
#' system.time(test1 <- fastflightcubeSPOT(X,pik,order = 2))
#' system.time(test2 <- fastflightcubeSPOT(X,pik,order = 1))
#' system.time(test3 <- sampling::fastflightcube(X,pik, order = 2))
#' system.time(test4 <- BalancedSampling::flightphase(pik,X))
#'
#'
#' }
fastflightcubeSPOT <- function (X, pik, order = 2, comment = TRUE)
{
EPS = 1e-11
"reduc" <- function(X) {
EPS = 1e-10
N = dim(X)[1]
Re = svd(X)
array(Re$u[, (Re$d > EPS)], c(N, sum(as.integer(Re$d >
EPS))))
}
########################### START ALGO
N = length(pik)
p = round(length(X)/length(pik))
X <- array(X, c(N, p))
if (order == 1){
o <- sample(N, N)
}else {
if (order == 2){
o <- seq(1, N, 1)
}else {
o <- order(pik, decreasing = TRUE)
}
}
liste <- o[(pik[o] > EPS & pik[o] < (1 - EPS))]
if (comment == TRUE) {
cat("\nBEGINNING OF THE FLIGHT PHASE\n")
cat("The matrix of balanced variable has", p, " variables and ",
N, " units\n")
cat("The size of the inclusion probability vector is ",
length(pik), "\n")
cat("The sum of the inclusion probability vector is ",
sum(pik), "\n")
cat("The inclusion probability vector has ", length(liste),
" non-integer elements\n")
}
pikbon <- pik[liste]
Nbon = length(pikbon)
Xbon <- array(X[liste, ], c(Nbon, p))
pikstar <- pik
flag = 0
# X <- Xbon
# pik <- pikbon
# begin algorithm (general case where N > p) at the end of this phase you have
# at most p values that are not equal to 0 or 1.
if (Nbon > p) {
if (comment == TRUE)
cat("Step 1 ")
system.time(pikstarbon <- algofastflightcubeSPOT(Xbon, pikbon,redux = TRUE))
pikstar[liste] = pikstarbon
flag = 1
}
  # re-update the liste and extract the elements not equal to 0 or 1
liste <- o[(pikstar[o] > EPS & pikstar[o] < (1 - EPS))]
pikbon <- pikstar[liste]
Nbon = length(pikbon)
Xbon <- array(X[liste, ], c(Nbon, p))
pbon = dim(Xbon)[2]
  # if there are still values not equal to 0 or 1, reduce the matrix and loop until Nbon <= pbon
if (Nbon > 0) {
Xbon = reduc(Xbon)
pbon = dim(Xbon)[2]
}
k = 2
while (Nbon > pbon & Nbon > 0) {
if (comment == TRUE)
cat("Step ", k, ", ")
k = k + 1
pikstarbon <- algofastflightcubeSPOT(Xbon/pik[liste] * pikbon,
pikbon,redux = FALSE)
pikstar[liste] = pikstarbon
liste <- o[(pikstar[o] > EPS & pikstar[o] < (1 - EPS))]
pikbon <- pikstar[liste]
Nbon = length(pikbon)
Xbon <- array(X[liste, ], c(Nbon, p))
if (Nbon > 0) {
Xbon = reduc(Xbon)
pbon = dim(Xbon)[2]
}
flag = 1
}
if (comment == TRUE)
if (flag == 0)
cat("NO FLIGHT PHASE")
if (comment == TRUE)
cat("\n")
pikstar
}
|
/R/fastflightcubeSPOT.R
|
no_license
|
RJauslin/SamplingC
|
R
| false
| false
| 3,933
|
r
|
|
library(magrittr)
library(purrr)
library(dplyr)
library(ggplot2)
semilla = 800
# dataset downloaded from
# https://www.kaggle.com/aljarah/xAPI-Edu-Data
# notes for the class ------------------------------------------------------
# read the docs on how trees and RF split numeric and categorical variables
# read ISLR on how trees and RF split variables
# read about variable importance
# check what the OOB prediction error is in ranger
# build an example of how map_* and map2 work
# build an example of how tibbles work with lists
# see how to easily get the sign of RF feature importance for the class
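Following the note above, a minimal sketch of `map_*`, `map2`, and list-columns in a tibble (toy data, unrelated to the LMS dataset):

```r
library(purrr)
library(dplyr)

# map_dbl(): apply a function to each element, return a numeric vector
lens <- map_dbl(list(a = 1:3, b = 1:5), length)   # c(a = 3, b = 5)

# map2_dbl(): iterate over two vectors in parallel
sums <- map2_dbl(1:3, 4:6, ~ .x + .y)             # c(5, 7, 9)

# tibbles can hold lists in a column, e.g. one dataset/model per row
tb <- tibble(g = c("a", "b"), data = list(1:3, 1:5)) %>%
  mutate(n = map_dbl(data, length))
```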
# read --------------------------------------------------------------------
f = "data/raw/lms/xAPI-Edu-Data.csv"
base = read.csv(f, stringsAsFactors=F) %>% janitor::clean_names()
dict = read.delim("resources/variables_lms.txt", sep="|", stringsAsFactors=F)
# clean -------------------------------------------------------------------
dat = base %>%
mutate(
target = factor(ifelse(class %in% "L", 1, 0), levels=1:0)
) %>%
select(-class) %>%
select(-c(placeof_birth, national_i_ty))
# exploratory analysis -----------------------------------------------------
skimr::skim(dat)
num_vars = dat %>% select_if(is.numeric) %>% names()
cat_vars = dat %>% select_if(function(x) !is.numeric(x)) %>% names()
dat_num = dat %>% select(all_of(num_vars), target)
dat_cat = dat %>% select(all_of(cat_vars))
GGally::ggpairs(dat_num, aes(color=target))
# exercise: find how many levels each categorical variable has
# cat_levels = dat_cat %>% map_dbl(function(x) length(unique(x)))
plot_bar = function(var) {
ggplot(dat_cat) +
facet_wrap(as.formula(paste("~", var)), scales="free") +
geom_bar(aes(x=target, fill=target)) +
labs(title=var) +
NULL
}
plots = list()
for (v in names(dat_cat)) plots[[v]] = plot_bar(v)
# decision trees -----------------------------------------------------------
library(rpart)
library(rpart.plot)
# fit function
fit_tree = function(data, cp=0.01, maxdepth=30, minsplit=20) {
rpart(target ~ ., data=data, method="class", model=T
, cp=cp, maxdepth=maxdepth, minsplit=minsplit)
}
mod = fit_tree(dat)
mod2 = fit_tree(dat, cp=0, minsplit=1)
# plot
rpart.plot(mod, cex=0.5)
rpart.plot(mod, extra=104)
rpart.plot(mod, extra=2)
rpart.plot(mod2)
# rules
rpart.rules(mod, cover=T) %>% View()
# predict
predict_tree = function(model, newdata) {
predict(model, newdata=newdata, type="prob")[,1]
}
pred_3 = predict_tree(mod, newdata=dat)
table(pred_3, dat$target)
# performance -------------------------------------------------------------
# performance
library(yardstick)
metrica_auc = function(target, prob_pred) {
tab = data.frame(y=factor(target), prob=prob_pred)
roc_auc(tab, truth=y, prob)$.estimate
}
metrica_auc(dat$target, pred_3)
# data split --------------------------------------------------------------
# ideally: train-test split, with CV inside train
# we only do CV because of the small sample size
# library(rsample)
# set.seed(semilla)
# tt_split = dat %>% initial_split(prop=0.8)
# dat_train = tt_split %>% training()
# dat_test = tt_split %>% testing()
library(rsample)
set.seed(semilla)
cv_split = vfold_cv(dat, v=5)
# (analysis and assessment sets)
# random forest -----------------------------------------------------------
library(recipes)
receta_rf = function(dataset) {
recipe(target ~ ., data = dataset) %>%
step_other(all_nominal(), -all_outcomes(), threshold=0.05)
}
library(ranger)
# fit
fit_rf = function(data, mtry=4, minsize=1, trees=500) {
ranger(target ~ ., data=data, mtry=mtry, min.node.size=minsize, num.trees=trees
, probability=T, importance="permutation")
}
set.seed(semilla)
mod_rf = fit_rf(data=dat, mtry=4, minsize=50)
# predict
predict_rf = function(model, newdata) {
predict(model, data=newdata)$predictions[,1]
}
pred_rf = predict_rf(mod_rf, dat)
# train and predict for one CV fold
train_apply_rf = function(fold_split, receta, mtry=4, minsize=1, trees=500) {
# get analysis data
dat_an = fold_split %>% analysis()
# train receta sobre analysis data
receta_trained = dat_an %>% receta %>% prep(retain=T)
# get analysis preprocesado
dat_an_prep = juice(receta_trained)
# get assessment data
dat_as = fold_split %>% assessment()
# dat_as_baked = dat_as %>% receta() %>% prep() %>% bake(newdata=dat_as)
dat_as_baked = receta_trained %>% bake(new_data=dat_as)
  # fit the model
mod = fit_rf(dat_an_prep, mtry, minsize, trees)
# predict
out = tibble(
"id" = fold_split$id$id
,"obs" = dat_as$target
,"pred" = predict_rf(mod, newdata=dat_as_baked)
)
return(out)
}
train_apply_rf(cv_split$splits[[1]], receta=receta_rf)
kcv_rf = function(cv_splits, receta, mtry=4, minsize=1, trees=500) {
  map_df(
    cv_splits$splits,
    function(s)
      train_apply_rf(fold_split=s, receta, mtry=mtry, minsize=minsize, trees=trees)
  )
}
library(yardstick)
kcv_auc_rf = function(tab_cv_pred) {
tab_cv_pred %>%
group_by(id) %>%
roc_auc(obs, pred) %>%
select(id, .estimate) %>%
rename(auc = .estimate)
}
tab_cv_rf = kcv_rf(cv_split, receta_rf)
kcv_auc_rf(tab_cv_rf)
# hyperparameter tuning
set.seed(semilla)
grilla = expand.grid(
mtry = 2:(ncol(dat)-1)
,minsize = seq(10,100,10)
) %>%
slice(sample(nrow(.),50)) %>%
as_tibble()
kcv = grilla %>%
mutate(
pred_cv = map2(
.x=mtry, .y=minsize
,function(x,y) kcv_rf(cv_splits=cv_split, receta=receta_rf
,mtry=x, minsize=y, trees=100)
)
,auc_cv = map(pred_cv, kcv_auc_rf)
)
resultados = kcv %>%
tidyr::unnest(auc_cv)
# performance per fold
resultados %>%
group_by(id) %>%
summarise(auc_media = mean(auc))
# performance per hyperparameter combination
resultados_sum = resultados %>%
group_by(mtry, minsize) %>%
summarise(
m_auc = mean(auc)
,se = sd(auc)
,se1 = m_auc-se
) %>%
ungroup() %>%
arrange(-m_auc)
# one-standard-error rule: among the combinations within 1 SE of the best
# mean AUC, keep the most regularized one (largest minsize)
auc_min_se1 = resultados_sum %>%
  filter(m_auc == max(m_auc)) %>% pull(se1)
resultados_sum %>%
  filter(m_auc >= auc_min_se1) %>%
  filter(minsize == max(minsize))
mtry_opt = 2
minsize_opt = 100
# final model
# fit the model on all the data with the optimal hyperparameters
dat_prep = dat %>% receta_rf() %>% prep(retain=T) %>% juice()
set.seed(semilla)
mod_rf = fit_rf(dat_prep, mtry=mtry_opt, minsize=minsize_opt, trees=100)
treeInfo(mod_rf, tree = 1)
# feature importance
varimp = tibble(
variable = names(mod_rf$variable.importance)
,importance = mod_rf$variable.importance
) %>% arrange(-importance)
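# rough sketch (not a built-in ranger feature): approximate the *sign* of each
# numeric feature's association with the predicted probability of target == 1
# via its correlation with the model's predictions
num_feats = dat_prep %>% select_if(is.numeric)
signo_imp = map_dbl(num_feats, function(x) sign(cor(x, predict_rf(mod_rf, dat_prep))))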
# feature importance plot
# ggplot(varimp, aes(x=reorder(variable,importance), y=importance, fill=importance))+
# geom_bar(stat="identity", position="dodge") +
# coord_flip() +
# guides(fill=F)
# exercise:
# tune rpart's hyperparameters with CV
# exercise:
# include "step_other" in the hyperparameter search
# exercise:
# find the hyperparameters that optimize a fictitious profit function
# (including the predicted-probability cutoff as a hyperparameter!)
# KEEP IN MIND:
# the classification is the same whether ranger produces it directly
# or we take every tree's classification and assign each observation
# its predicted majority class
# the probability returned by ranger is not the same as the majority-class
# proportion returned by ranger (but it is close)
# library(ranger)
# fit_rf = function(data, mtry=4, minsize=1, trees=500) {
# ranger(target ~ ., data=data, mtry=mtry, min.node.size=minsize, num.trees=trees
# , probability=T)
# }
# fit_rf2 = function(data, mtry=4, minsize=1, trees=500) {
# ranger(target ~ ., data=data, mtry=mtry, min.node.size=minsize, num.trees=trees
# , probability=F)
# }
# set.seed(semilla)
# gg = fit_rf(data=dat, mtry=4, minsize=50)
# set.seed(semilla)
# hh = fit_rf2(data=dat, mtry=4, minsize=50)
# aa = predict(gg, data=dat)
# bb = predict(hh, data=dat, predict.all=F)
# cc = predict(hh, data=dat, predict.all=T)
# aaa = aa$predictions[,1]
# bbb = bb$predictions
# rr = ifelse(cc$predictions == 2, 0, 1)
# ccc = apply(rr, 1, mean) %>% {ifelse(.>0.5, 1, 0)}
# table(bbb, ccc)
|
/clases_full/05-arboles_full.r
|
no_license
|
ftvalentini/curso-CepalR2019
|
R
| false
| false
| 8,162
|
r
|
library(magrittr)
library(purrr)
library(dplyr)
library(ggplot2)
semilla = 800
# dataset downloaded from
# https://www.kaggle.com/aljarah/xAPI-Edu-Data
# notes for the class ------------------------------------------------------
# read the docs on how trees and RF split numeric vs categorical variables
# read ISLR on how trees and RF split variables
# read about variable importance
# see what the OOB prediction error is in ranger
# build an example of how map_* and map2 work
# build an example of how tibbles with list-columns work
# see how to easily get the sign of each feature's RF importance w.r.t. the class
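# minimal purrr examples (referenced in the notes above):
# map_*() applies a function to each element and returns a typed vector;
# map2_*() iterates over two inputs in parallel
map_dbl(1:3, function(x) x^2)            # 1 4 9
map2_dbl(1:3, 4:6, function(x, y) x + y) # 5 7 9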
# read --------------------------------------------------------------------
f = "data/raw/lms/xAPI-Edu-Data.csv"
base = read.csv(f, stringsAsFactors=F) %>% janitor::clean_names()
dict = read.delim("resources/variables_lms.txt", sep="|", stringsAsFactors=F)
# clean -------------------------------------------------------------------
dat = base %>%
mutate(
target = factor(ifelse(class %in% "L", 1, 0), levels=1:0)
) %>%
select(-class) %>%
select(-c(placeof_birth, national_i_ty))
# exploratory --------------------------------------------------------------
skimr::skim(dat)
num_vars = dat %>% select_if(is.numeric) %>% names()
cat_vars = dat %>% select_if(function(x) !is.numeric(x)) %>% names()
dat_num = dat %>% select(num_vars, target)
dat_cat = dat %>% select(cat_vars)
GGally::ggpairs(dat_num, aes(color=target))
# exercise: find how many levels each categorical variable has
# cat_levels = dat_cat %>% map_dbl(function(x) length(unique(x)))
plot_bar = function(var) {
ggplot(dat_cat) +
facet_wrap(as.formula(paste("~", var)), scales="free") +
geom_bar(aes(x=target, fill=target)) +
labs(title=var) +
NULL
}
plots = list()
for (v in names(dat_cat)) plots[[v]] = plot_bar(v)
# decision trees -----------------------------------------------------------
library(rpart)
library(rpart.plot)
# fit function
fit_tree = function(data, cp=0.01, maxdepth=30, minsplit=20) {
rpart(target ~ ., data=data, method="class", model=T
, cp=cp, maxdepth=maxdepth, minsplit=minsplit)
}
mod = fit_tree(dat)
mod2 = fit_tree(dat, cp=0, minsplit=1)
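# illustrative sketch: inspect the complexity table of the fully grown tree
# to pick a pruning cp (cross-validated error by cp)
printcp(mod2)
# mod2_pruned = prune(mod2, cp = 0.01)  # cp value here is just a placeholder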
# plot
rpart.plot(mod, cex=0.5)
rpart.plot(mod, extra=104)
rpart.plot(mod, extra=2)
rpart.plot(mod2)
# rules
rpart.rules(mod, cover=T) %>% View()
# predict
predict_tree = function(model, newdata) {
predict(model, newdata=newdata, type="prob")[,1]
}
pred_3 = predict_tree(mod, newdata=dat)
table(pred_3, dat$target)
|
#' @title Declare a target.
#' @export
#' @description A target is a single step of computation in a pipeline.
#' It runs an R command and returns a value.
#' This value gets treated as an R object that can be used
#' by the commands of targets downstream. Targets that
#' are already up to date are skipped. See the user manual
#' for more details.
#' @return A target object. Users should not modify these directly,
#' just feed them to [list()] in your `_targets.R` file.
#' @param name Symbol, name of the target.
#' @param command R code to run the target.
#' @param pattern Language to define branching for a target.
#' For example, in a pipeline with numeric vector targets `x` and `y`,
#' `tar_target(z, x + y, pattern = map(x, y))` implicitly defines
#' branches of `z` that each compute `x[1] + y[1]`, `x[2] + y[2]`,
#' and so on. See the user manual for details.
#' @param tidy_eval Logical, whether to enable tidy evaluation
#' when interpreting `command` and `pattern`. If `TRUE`, you can use the
#' "bang-bang" operator `!!` to programmatically insert
#' the values of global objects.
#' @param packages Character vector of packages to load right before
#' the target builds. Use `tar_option_set()` to set packages
#' globally for all subsequent targets you define.
#' @param library Character vector of library paths to try
#' when loading `packages`.
#' @param format Optional storage format for the target's return value.
#' With the exception of `format = "file"`, each target
#' gets a file in `_targets/objects`, and each format is a different
#' way to save and load this file.
#' Possible formats:
#' * `"rds"`: Default, uses `saveRDS()` and `readRDS()`. Should work for
#' most objects, but slow.
#' * `"qs"`: Uses `qs::qsave()` and `qs::qread()`. Should work for
#' most objects, much faster than `"rds"`. Optionally set the
#' preset for `qsave()` through the `resources` argument, e.g.
#' `tar_target(..., resources = list(preset = "archive"))`.
#' * `"fst"`: Uses `fst::write_fst()` and `fst::read_fst()`.
#' Much faster than `"rds"`, but the value must be
#' a data frame. Optionally set the compression level for
#' `fst::write_fst()` through the `resources` argument, e.g.
#' `tar_target(..., resources = list(compress = 100))`.
#' * `"fst_dt"`: Same as `"fst"`, but the value is a `data.table`.
#' Optionally set the compression level the same way as for `"fst"`.
#' * `"fst_tbl"`: Same as `"fst"`, but the value is a `tibble`.
#' Optionally set the compression level the same way as for `"fst"`.
#' * `"keras"`: Uses `keras::save_model_hdf5()` and
#' `keras::load_model_hdf5()`. The value must be a Keras model.
#' * `"torch"`: Uses `torch::torch_save()` and `torch::torch_load()`.
#' The value must be an object from the `torch` package
#' such as a tensor or neural network module.
#' * `"file"`: A dynamic file. To use this format,
#' the target needs to manually identify or save some data
#' and return a character vector of paths
#' to the data. Then, `targets` automatically checks those files and cues
#' the appropriate build decisions if those files are out of date.
#' Those paths must point to files or directories,
#' and they must not contain characters `|` or `*`.
#' All the files and directories you return must actually exist,
#' or else `targets` will throw an error. (And if `storage` is `"worker"`,
#' `targets` will first stall out trying to wait for the file
#' to arrive over a network file system.)
#' * `"url"`: A dynamic input URL. It works like `format = "file"`
#' except the return value of the target is a URL that already exists
#' and serves as input data for downstream targets. Optionally
#' supply a custom `curl` handle through the `resources` argument, e.g.
#' `tar_target(..., resources = list(handle = curl::new_handle()))`.
#' The data file at the URL needs to have an ETag or a Last-Modified
#' time stamp, or else the target will throw an error because
#' it cannot track the data. Also, use extreme caution when
#' trying to use `format = "url"` to track uploads. You must be absolutely
#' certain the ETag and Last-Modified time stamp are fully updated
#' and available by the time the target's command finishes running.
#' `targets` makes no attempt to wait for the web server.
#' * `"aws_rds"`, `"aws_qs"`, `"aws_fst"`, `"aws_fst_dt"`,
#' `"aws_fst_tbl"`, `"aws_keras"`: AWS-powered versions of the
#' respective formats `"rds"`, `"qs"`, etc. The only difference
#' is that the data file is uploaded to the AWS S3 bucket
#' you supply to `resources`. See the cloud computing chapter
#' of the manual for details.
#' * `"aws_file"`: arbitrary dynamic files on AWS S3. The target
#' should return a path to a temporary local file, then
#' `targets` will automatically upload this file to an S3
#' bucket and track it for you. Unlike `format = "file"`,
#' `format = "aws_file"` can only handle one single file,
#' and that file must not be a directory.
#' [tar_read()] and downstream targets
#' download the file to `_targets/scratch/` locally and return the path.
#' `_targets/scratch/` gets deleted at the end of [tar_make()].
#' Requires the same `resources` and other configuration details
#' as the other AWS-powered formats. See the cloud computing
#' chapter of the manual for details.
#' @param iteration Character of length 1, name of the iteration mode
#' of the target. Choices:
#' * `"vector"`: branching happens with `vctrs::vec_slice()` and
#' aggregation happens with `vctrs::vec_c()`.
#'   * `"list"`: branching happens with `[[]]` and aggregation happens with
#'     `list()`.
#' * `"group"`: `dplyr::group_by()`-like functionality to branch over
#' subsets of a data frame. The target's return value must be a data
#' frame with a special `tar_group` column of consecutive integers
#' from 1 through the number of groups. Each integer designates a group,
#' and a branch is created for each collection of rows in a group.
#' See the [tar_group()] function to see how you can
#' create the special `tar_group` column with `dplyr::group_by()`.
#' @param error Character of length 1, what to do if the target
#' runs into an error. If `"stop"`, the whole pipeline stops
#' and throws an error. If `"continue"`, the error is recorded,
#' but the pipeline keeps going.
#' @param memory Character of length 1, memory strategy.
#' If `"persistent"`, the target stays in memory
#' until the end of the pipeline (unless `storage` is `"worker"`,
#' in which case `targets` unloads the value from memory
#' right after storing it in order to avoid sending
#' copious data over a network).
#' If `"transient"`, the target gets unloaded
#' after every new target completes.
#' Either way, the target gets automatically loaded into memory
#' whenever another target needs the value.
#' For cloud-based dynamic files such as `format = "aws_file"`,
#' this memory policy applies to
#' temporary local copies of the file in `_targets/scratch/"`:
#' `"persistent"` means they remain until the end of the pipeline,
#' and `"transient"` means they get deleted from the file system
#' as soon as possible. The former conserves bandwidth,
#' and the latter conserves local storage.
#' @param garbage_collection Logical, whether to run `base::gc()`
#' just before the target runs.
#' @param deployment Character of length 1, only relevant to
#' [tar_make_clustermq()] and [tar_make_future()]. If `"worker"`,
#' the target builds on a parallel worker. If `"main"`,
#' the target builds on the host machine / process managing the pipeline.
#' @param priority Numeric of length 1 between 0 and 1. Controls which
#' targets get deployed first when multiple competing targets are ready
#' simultaneously. Targets with priorities closer to 1 get built earlier.
#' Only applies to [tar_make_future()] and [tar_make_clustermq()]
#' (not [tar_make()]). [tar_make_future()] with no extra settings is
#' a drop-in replacement for [tar_make()] in this case.
#' @param resources A named list of computing resources. Uses:
#' * Template file wildcards for `future::future()` in [tar_make_future()].
#' * Template file wildcards `clustermq::workers()` in [tar_make_clustermq()].
#' * Custom target-level `future::plan()`, e.g.
#' `resources = list(plan = future.callr::callr)`.
#' * Custom `curl` handle if `format = "url"`,
#' e.g. `resources = list(handle = curl::new_handle())`.
#' * Custom preset for `qs::qsave()` if `format = "qs"`, e.g.
#' `resources = list(handle = "archive")`.
#' * Custom compression level for `fst::write_fst()` if
#' `format` is `"fst"`, `"fst_dt"`, or `"fst_tbl"`, e.g.
#' `resources = list(compress = 100)`.
#' * AWS bucket and prefix for the `"aws_"` formats, e.g.
#' `resources = list(bucket = "your-bucket", prefix = "folder/name")`.
#' `bucket` is required for AWS formats. See the cloud computing chapter
#' of the manual for details.
#' @param storage Character of length 1, only relevant to
#' [tar_make_clustermq()] and [tar_make_future()].
#' If `"main"`, the target's return value is sent back to the
#' host machine and saved locally. If `"worker"`, the worker
#' saves the value.
#' @param retrieval Character of length 1, only relevant to
#' [tar_make_clustermq()] and [tar_make_future()].
#' If `"main"`, the target's dependencies are loaded on the host machine
#' and sent to the worker before the target builds.
#'   If `"worker"`, the worker loads the target's dependencies.
#' @param cue An optional object from `tar_cue()` to customize the
#' rules that decide whether the target is up to date.
#' @examples
#' # Defining targets does not run them.
#' data <- tar_target(target_name, get_data(), packages = "tidyverse")
#' analysis <- tar_target(analysis, analyze(x), pattern = map(x))
#' # Pipelines accept targets.
#' pipeline <- list(data, analysis)
#' # Tidy evaluation
#' tar_option_set(envir = environment())
#' n_rows <- 30L
#' data <- tar_target(target_name, get_data(!!n_rows))
#' print(data)
#' # Disable tidy evaluation:
#' data <- tar_target(target_name, get_data(!!n_rows), tidy_eval = FALSE)
#' print(data)
#' tar_option_reset()
#' # In a pipeline:
#' if (identical(Sys.getenv("TARGETS_LONG_EXAMPLES"), "true")) {
#' tar_dir({
#' tar_script(tar_target(x, 1 + 1))
#' tar_make()
#' tar_read(x)
#' })
#' }
tar_target <- function(
name,
command,
pattern = NULL,
tidy_eval = targets::tar_option_get("tidy_eval"),
packages = targets::tar_option_get("packages"),
library = targets::tar_option_get("library"),
format = targets::tar_option_get("format"),
iteration = targets::tar_option_get("iteration"),
error = targets::tar_option_get("error"),
memory = targets::tar_option_get("memory"),
garbage_collection = targets::tar_option_get("garbage_collection"),
deployment = targets::tar_option_get("deployment"),
priority = targets::tar_option_get("priority"),
resources = targets::tar_option_get("resources"),
storage = targets::tar_option_get("storage"),
retrieval = targets::tar_option_get("retrieval"),
cue = targets::tar_option_get("cue")
) {
name <- deparse_language(substitute(name))
assert_chr(name, "name arg of tar_target() must be a symbol")
assert_lgl(tidy_eval, "tidy_eval in tar_target() must be logical.")
assert_chr(packages, "packages in tar_target() must be character.")
assert_chr(
library %||% character(0),
"library in tar_target() must be NULL or character."
)
assert_format(format)
iteration <- match.arg(iteration, c("vector", "list", "group"))
error <- match.arg(error, c("stop", "continue", "workspace"))
memory <- match.arg(memory, c("persistent", "transient"))
assert_lgl(garbage_collection, "garbage_collection must be logical.")
assert_scalar(garbage_collection, "garbage_collection must be a scalar.")
deployment <- match.arg(deployment, c("worker", "main"))
assert_dbl(priority)
assert_scalar(priority)
assert_ge(priority, 0)
assert_le(priority, 1)
assert_list(resources, "resources in tar_target() must be a named list.")
storage <- match.arg(storage, c("main", "worker"))
retrieval <- match.arg(retrieval, c("main", "worker"))
if (!is.null(cue)) {
cue_validate(cue)
}
envir <- tar_option_get("envir")
expr <- as.expression(substitute(command))
pattern <- as.expression(substitute(pattern))
target_init(
name = name,
expr = tidy_eval(expr, envir, tidy_eval),
pattern = tidy_eval(pattern, envir, tidy_eval),
packages = packages,
library = library,
envir = envir,
format = format,
iteration = iteration,
error = error,
memory = memory,
garbage_collection = garbage_collection,
deployment = deployment,
priority = priority,
resources = resources,
storage = storage,
retrieval = retrieval,
cue = cue
)
}
|
/R/tar_target.R
|
permissive
|
tjmahr/targets
|
R
| false
| false
| 13,123
|
r
|
#' @title Declare a target.
#' @export
#' @description A target is a single step of computation in a pipeline.
#' It runs an R command and returns a value.
#' This value gets treated as an R object that can be used
#' by the commands of targets downstream. Targets that
#' are already up to date are skipped. See the user manual
#' for more details.
#' @return A target object. Users should not modify these directly,
#' just feed them to [list()] in your `_targets.R` file.
#' @param name Symbol, name of the target.
#' @param command R code to run the target.
#' @param pattern Language to define branching for a target.
#' For example, in a pipeline with numeric vector targets `x` and `y`,
#' `tar_target(z, x + y, pattern = map(x, y))` implicitly defines
#' branches of `z` that each compute `x[1] + y[1]`, `x[2] + y[2]`,
#' and so on. See the user manual for details.
#' @param tidy_eval Logical, whether to enable tidy evaluation
#' when interpreting `command` and `pattern`. If `TRUE`, you can use the
#' "bang-bang" operator `!!` to programmatically insert
#' the values of global objects.
#' @param packages Character vector of packages to load right before
#' the target builds. Use `tar_option_set()` to set packages
#' globally for all subsequent targets you define.
#' @param library Character vector of library paths to try
#' when loading `packages`.
#' @param format Optional storage format for the target's return value.
#' With the exception of `format = "file"`, each target
#' gets a file in `_targets/objects`, and each format is a different
#' way to save and load this file.
#' Possible formats:
#' * `"rds"`: Default, uses `saveRDS()` and `readRDS()`. Should work for
#' most objects, but slow.
#' * `"qs"`: Uses `qs::qsave()` and `qs::qread()`. Should work for
#' most objects, much faster than `"rds"`. Optionally set the
#' preset for `qsave()` through the `resources` argument, e.g.
#' `tar_target(..., resources = list(preset = "archive"))`.
#' * `"fst"`: Uses `fst::write_fst()` and `fst::read_fst()`.
#' Much faster than `"rds"`, but the value must be
#' a data frame. Optionally set the compression level for
#' `fst::write_fst()` through the `resources` argument, e.g.
#' `tar_target(..., resources = list(compress = 100))`.
#' * `"fst_dt"`: Same as `"fst"`, but the value is a `data.table`.
#' Optionally set the compression level the same way as for `"fst"`.
#' * `"fst_tbl"`: Same as `"fst"`, but the value is a `tibble`.
#' Optionally set the compression level the same way as for `"fst"`.
#' * `"keras"`: Uses `keras::save_model_hdf5()` and
#' `keras::load_model_hdf5()`. The value must be a Keras model.
#' * `"torch"`: Uses `torch::torch_save()` and `torch::torch_load()`.
#' The value must be an object from the `torch` package
#' such as a tensor or neural network module.
#' * `"file"`: A dynamic file. To use this format,
#' the target needs to manually identify or save some data
#' and return a character vector of paths
#' to the data. Then, `targets` automatically checks those files and cues
#' the appropriate build decisions if those files are out of date.
#' Those paths must point to files or directories,
#' and they must not contain characters `|` or `*`.
#' All the files and directories you return must actually exist,
#' or else `targets` will throw an error. (And if `storage` is `"worker"`,
#' `targets` will first stall out trying to wait for the file
#' to arrive over a network file system.)
#' * `"url"`: A dynamic input URL. It works like `format = "file"`
#' except the return value of the target is a URL that already exists
#' and serves as input data for downstream targets. Optionally
#' supply a custom `curl` handle through the `resources` argument, e.g.
#' `tar_target(..., resources = list(handle = curl::new_handle()))`.
#' The data file at the URL needs to have an ETag or a Last-Modified
#' time stamp, or else the target will throw an error because
#' it cannot track the data. Also, use extreme caution when
#' trying to use `format = "url"` to track uploads. You must be absolutely
#' certain the ETag and Last-Modified time stamp are fully updated
#' and available by the time the target's command finishes running.
#' `targets` makes no attempt to wait for the web server.
#' * `"aws_rds"`, `"aws_qs"`, `"aws_fst"`, `"aws_fst_dt"`,
#' `"aws_fst_tbl"`, `"aws_keras"`: AWS-powered versions of the
#' respective formats `"rds"`, `"qs"`, etc. The only difference
#' is that the data file is uploaded to the AWS S3 bucket
#' you supply to `resources`. See the cloud computing chapter
#' of the manual for details.
#' * `"aws_file"`: arbitrary dynamic files on AWS S3. The target
#' should return a path to a temporary local file, then
#' `targets` will automatically upload this file to an S3
#' bucket and track it for you. Unlike `format = "file"`,
#' `format = "aws_file"` can only handle one single file,
#' and that file must not be a directory.
#' [tar_read()] and downstream targets
#' download the file to `_targets/scratch/` locally and return the path.
#' `_targets/scratch/` gets deleted at the end of [tar_make()].
#' Requires the same `resources` and other configuration details
#' as the other AWS-powered formats. See the cloud computing
#' chapter of the manual for details.
#' @param iteration Character of length 1, name of the iteration mode
#' of the target. Choices:
#' * `"vector"`: branching happens with `vctrs::vec_slice()` and
#' aggregation happens with `vctrs::vec_c()`.
#' * `"list"`, branching happens with `[[]]` and aggregation happens with
#' `list()`.
#' * `"group"`: `dplyr::group_by()`-like functionality to branch over
#' subsets of a data frame. The target's return value must be a data
#' frame with a special `tar_group` column of consecutive integers
#' from 1 through the number of groups. Each integer designates a group,
#' and a branch is created for each collection of rows in a group.
#' See the [tar_group()] function to see how you can
#' create the special `tar_group` column with `dplyr::group_by()`.
#' @param error Character of length 1, what to do if the target
#' runs into an error. If `"stop"`, the whole pipeline stops
#' and throws an error. If `"continue"`, the error is recorded,
#' but the pipeline keeps going.
#' @param memory Character of length 1, memory strategy.
#' If `"persistent"`, the target stays in memory
#' until the end of the pipeline (unless `storage` is `"worker"`,
#' in which case `targets` unloads the value from memory
#' right after storing it in order to avoid sending
#' copious data over a network).
#' If `"transient"`, the target gets unloaded
#' after every new target completes.
#' Either way, the target gets automatically loaded into memory
#' whenever another target needs the value.
#' For cloud-based dynamic files such as `format = "aws_file"`,
#' this memory policy applies to
#' temporary local copies of the file in `_targets/scratch/"`:
#' `"persistent"` means they remain until the end of the pipeline,
#' and `"transient"` means they get deleted from the file system
#' as soon as possible. The former conserves bandwidth,
#' and the latter conserves local storage.
#' @param garbage_collection Logical, whether to run `base::gc()`
#' just before the target runs.
#' @param deployment Character of length 1, only relevant to
#' [tar_make_clustermq()] and [tar_make_future()]. If `"worker"`,
#' the target builds on a parallel worker. If `"main"`,
#' the target builds on the host machine / process managing the pipeline.
#' @param priority Numeric of length 1 between 0 and 1. Controls which
#' targets get deployed first when multiple competing targets are ready
#' simultaneously. Targets with priorities closer to 1 get built earlier.
#' Only applies to [tar_make_future()] and [tar_make_clustermq()]
#' (not [tar_make()]). [tar_make_future()] with no extra settings is
#' a drop-in replacement for [tar_make()] in this case.
#' @param resources A named list of computing resources. Uses:
#' * Template file wildcards for `future::future()` in [tar_make_future()].
#' * Template file wildcards `clustermq::workers()` in [tar_make_clustermq()].
#' * Custom target-level `future::plan()`, e.g.
#' `resources = list(plan = future.callr::callr)`.
#' * Custom `curl` handle if `format = "url"`,
#' e.g. `resources = list(handle = curl::new_handle())`.
#'   * Custom preset for `qs::qsave()` if `format = "qs"`, e.g.
#'     `resources = list(preset = "archive")`.
#' * Custom compression level for `fst::write_fst()` if
#' `format` is `"fst"`, `"fst_dt"`, or `"fst_tbl"`, e.g.
#' `resources = list(compress = 100)`.
#' * AWS bucket and prefix for the `"aws_"` formats, e.g.
#' `resources = list(bucket = "your-bucket", prefix = "folder/name")`.
#' `bucket` is required for AWS formats. See the cloud computing chapter
#' of the manual for details.
#' @param storage Character of length 1, only relevant to
#' [tar_make_clustermq()] and [tar_make_future()].
#' If `"main"`, the target's return value is sent back to the
#' host machine and saved locally. If `"worker"`, the worker
#' saves the value.
#' @param retrieval Character of length 1, only relevant to
#' [tar_make_clustermq()] and [tar_make_future()].
#' If `"main"`, the target's dependencies are loaded on the host machine
#' and sent to the worker before the target builds.
#'   If `"worker"`, the worker loads the target's dependencies.
#' @param cue An optional object from `tar_cue()` to customize the
#' rules that decide whether the target is up to date.
#' @examples
#' # Defining targets does not run them.
#' data <- tar_target(target_name, get_data(), packages = "tidyverse")
#' analysis <- tar_target(analysis, analyze(x), pattern = map(x))
#' # Pipelines accept targets.
#' pipeline <- list(data, analysis)
#' # Tidy evaluation
#' tar_option_set(envir = environment())
#' n_rows <- 30L
#' data <- tar_target(target_name, get_data(!!n_rows))
#' print(data)
#' # Disable tidy evaluation:
#' data <- tar_target(target_name, get_data(!!n_rows), tidy_eval = FALSE)
#' print(data)
#' tar_option_reset()
#' # In a pipeline:
#' if (identical(Sys.getenv("TARGETS_LONG_EXAMPLES"), "true")) {
#' tar_dir({
#' tar_script(tar_target(x, 1 + 1))
#' tar_make()
#' tar_read(x)
#' })
#' }
tar_target <- function(
name,
command,
pattern = NULL,
tidy_eval = targets::tar_option_get("tidy_eval"),
packages = targets::tar_option_get("packages"),
library = targets::tar_option_get("library"),
format = targets::tar_option_get("format"),
iteration = targets::tar_option_get("iteration"),
error = targets::tar_option_get("error"),
memory = targets::tar_option_get("memory"),
garbage_collection = targets::tar_option_get("garbage_collection"),
deployment = targets::tar_option_get("deployment"),
priority = targets::tar_option_get("priority"),
resources = targets::tar_option_get("resources"),
storage = targets::tar_option_get("storage"),
retrieval = targets::tar_option_get("retrieval"),
cue = targets::tar_option_get("cue")
) {
name <- deparse_language(substitute(name))
assert_chr(name, "name arg of tar_target() must be a symbol")
assert_lgl(tidy_eval, "tidy_eval in tar_target() must be logical.")
assert_chr(packages, "packages in tar_target() must be character.")
assert_chr(
library %||% character(0),
"library in tar_target() must be NULL or character."
)
assert_format(format)
iteration <- match.arg(iteration, c("vector", "list", "group"))
error <- match.arg(error, c("stop", "continue", "workspace"))
memory <- match.arg(memory, c("persistent", "transient"))
assert_lgl(garbage_collection, "garbage_collection must be logical.")
assert_scalar(garbage_collection, "garbage_collection must be a scalar.")
deployment <- match.arg(deployment, c("worker", "main"))
assert_dbl(priority)
assert_scalar(priority)
assert_ge(priority, 0)
assert_le(priority, 1)
assert_list(resources, "resources in tar_target() must be a named list.")
storage <- match.arg(storage, c("main", "worker"))
retrieval <- match.arg(retrieval, c("main", "worker"))
if (!is.null(cue)) {
cue_validate(cue)
}
envir <- tar_option_get("envir")
expr <- as.expression(substitute(command))
pattern <- as.expression(substitute(pattern))
target_init(
name = name,
expr = tidy_eval(expr, envir, tidy_eval),
pattern = tidy_eval(pattern, envir, tidy_eval),
packages = packages,
library = library,
envir = envir,
format = format,
iteration = iteration,
error = error,
memory = memory,
garbage_collection = garbage_collection,
deployment = deployment,
priority = priority,
resources = resources,
storage = storage,
retrieval = retrieval,
cue = cue
)
}
# This is the server logic for a Shiny web application.
# You can find out more about building applications with Shiny here:
#
# http://shiny.rstudio.com
#
library(shiny)
library(ggplot2)
buoydata <- read.csv('data/FormattedBuoyData.csv')
buoydata$DateTime <- as.Date(buoydata$DateTime,format="%m/%d/%Y")
shinyServer(function(input, output, session) {
output$selected_var <- renderText({
paste('Viewing water quality data at',input$site,'between',input$dates[1],'and',input$dates[2])
})
output$wqplot <- renderPlot({
plotdata <- subset(buoydata,SiteName==input$site &
DateTime >= input$dates[1] &
DateTime<= input$dates[2])
    ggplot(data = plotdata, aes(x = DateTime, y = .data[[input$param]])) +
      geom_line() + xlab("Date") + ylab(input$param) + theme_bw()
})
chlmod <- reactive({
plotdata <- subset(buoydata,SiteName==input$site &
DateTime >= input$dates[1] &
DateTime<= input$dates[2])
mod <- lm(plotdata$Chlorophyll_ug_L~plotdata[,input$param])
modsummary <- summary(mod)
return(modsummary)
})
output$modelresults <- renderText({
paste("R-Squared between Chlorophyll and",input$param," during this timeframe:",chlmod()$r.squared)
})
})
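# A minimal ui.R sketch to pair with the server above (hypothetical: only the
# input IDs `site`, `dates`, and `param` come from the server code; the widget
# choices, labels, and layout are assumptions, not from the original app):
#
# shinyUI(fluidPage(
#   titlePanel("Buoy Water Quality"),
#   sidebarLayout(
#     sidebarPanel(
#       selectInput("site", "Site", choices = unique(buoydata$SiteName)),
#       dateRangeInput("dates", "Date range"),
#       selectInput("param", "Parameter", choices = names(buoydata))
#     ),
#     mainPanel(
#       textOutput("selected_var"),
#       plotOutput("wqplot"),
#       textOutput("modelresults")
#     )
#   )
# ))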
# source: /ExampleWaterQualityApp/server.R (repo: UtahHydroinformatics/ExampleWQAppSolution, no license)
## ----echo = FALSE-------------------------------------------------------------
knitr::opts_chunk$set(collapse = TRUE, warning = FALSE, comment = "#>")
suppressPackageStartupMessages(library(sjmisc))
## ----message=FALSE------------------------------------------------------------
library(sjmisc)
data(efc)
## -----------------------------------------------------------------------------
# age, ranged from 65 to 104, in this output
# grouped to get a shorter table
frq(efc, e17age, auto.grp = 5)
# splitting is done at the median by default:
median(efc$e17age, na.rm = TRUE)
# the recoded variable is now named "e17age_d"
efc <- dicho(efc, e17age)
frq(efc, e17age_d)
## -----------------------------------------------------------------------------
x <- dicho(efc$e17age, val.labels = c("young age", "old age"))
frq(x)
## -----------------------------------------------------------------------------
# split at upper quartile
x <- dicho(
efc$e17age,
dich.by = quantile(efc$e17age, probs = .75, na.rm = TRUE),
val.labels = c("younger three quarters", "oldest quarter")
)
frq(x)
## -----------------------------------------------------------------------------
data(efc)
x1 <- dicho(efc$e17age)
x2 <- efc %>%
dplyr::group_by(c161sex) %>%
dicho(e17age) %>%
dplyr::pull(e17age_d)
# median age of total sample
frq(x1)
# median age of total sample, with median-split applied
# to distribution of age by subgroups of gender
frq(x2)
## -----------------------------------------------------------------------------
x <- split_var(efc$e17age, n = 3)
frq(x)
## -----------------------------------------------------------------------------
x <- dplyr::ntile(efc$neg_c_7, n = 3)
# for some cases, value "10" is recoded into category "1",
# for other cases into category "2". Same is true for value "13"
table(efc$neg_c_7, x)
x <- split_var(efc$neg_c_7, n = 3)
# no separation of cases with identical values.
table(efc$neg_c_7, x)
## -----------------------------------------------------------------------------
x <- dplyr::ntile(efc$neg_c_7, n = 3)
frq(x)
x <- split_var(efc$neg_c_7, n = 3)
frq(x)
## -----------------------------------------------------------------------------
set.seed(123)
x <- round(runif(n = 150, 1, 10))
frq(x)
frq(group_var(x, size = 5))
group_labels(x, size = 5)
dummy <- group_var(x, size = 5, as.num = FALSE)
levels(dummy) <- group_labels(x, size = 5)
frq(dummy)
dummy <- group_var(x, size = 3, as.num = FALSE)
levels(dummy) <- group_labels(x, size = 3)
frq(dummy)
## -----------------------------------------------------------------------------
dummy <- group_var(x, size = 4, as.num = FALSE)
levels(dummy) <- group_labels(x, size = 4)
frq(dummy)
dummy <- group_var(x, size = 4, as.num = FALSE, right.interval = TRUE)
levels(dummy) <- group_labels(x, size = 4, right.interval = TRUE)
frq(dummy)
## -----------------------------------------------------------------------------
frq(efc$e42dep)
# replace NA with 5
frq(rec(efc$e42dep, rec = "NA=5;else=copy"))
# recode 1 to 2 into 1 and 3 to 4 into 2
frq(rec(efc$e42dep, rec = "1,2=1; 3,4=2"))
# recode min to 3 into 1 and 4 into 2
frq(rec(efc$e42dep, rec = "min:3=1; 4=2"))
# recode numeric to character, and remaining values
# into the highest value (="hi") of e42dep
frq(rec(efc$e42dep, rec = "1=first;2=2nd;else=hi"))
data(iris)
frq(rec(iris, Species, rec = "setosa=huhu; else=copy", append = FALSE))
# works with mutate
efc %>%
dplyr::select(e42dep, e17age) %>%
dplyr::mutate(dependency_rev = rec(e42dep, rec = "rev")) %>%
head()
# recode multiple variables and set value labels via recode-syntax
dummy <- rec(
efc, c160age, e17age,
rec = "15:30=1 [young]; 31:55=2 [middle]; 56:max=3 [old]",
append = FALSE
)
frq(dummy)
# source: /packrat/lib/x86_64-apple-darwin19.4.0/4.0.4/sjmisc/doc/recodingvariables.R (repo: marilotte/Pregancy_Relapse_Count_Simulation, no license)
##
## hierstrauss.R
##
## $Revision: 1.4 $ $Date: 2015/01/08 07:34:30 $
##
## The hierarchical Strauss process
##
## HierStrauss() create an instance of the hierarchical Strauss process
## [an object of class 'interact']
##
## -------------------------------------------------------------------
##
HierStrauss <- local({
# ......... define interaction potential
HSpotential <- function(d, tx, tu, par) {
# arguments:
# d[i,j] distance between points X[i] and U[j]
# tx[i] type (mark) of point X[i]
# tu[j] type (mark) of point U[j]
#
# get matrix of interaction radii r[ , ]
r <- par$radii
#
# get possible marks and validate
if(!is.factor(tx) || !is.factor(tu))
stop("marks of data and dummy points must be factor variables")
lx <- levels(tx)
lu <- levels(tu)
if(length(lx) != length(lu) || any(lx != lu))
stop("marks of data and dummy points do not have same possible levels")
if(!identical(lx, par$types))
stop("data and model do not have the same possible levels of marks")
if(!identical(lu, par$types))
stop("dummy points and model do not have the same possible levels of marks")
# ensure factor levels are acceptable for column names (etc)
lxname <- make.names(lx, unique=TRUE)
## list all ordered pairs of types to be checked
uptri <- par$archy$relation & !is.na(r)
mark1 <- (lx[row(r)])[uptri]
mark2 <- (lx[col(r)])[uptri]
## corresponding names
mark1name <- (lxname[row(r)])[uptri]
mark2name <- (lxname[col(r)])[uptri]
vname <- apply(cbind(mark1name,mark2name), 1, paste, collapse="x")
vname <- paste("mark", vname, sep="")
npairs <- length(vname)
## create logical array for result
z <- array(FALSE, dim=c(dim(d), npairs),
dimnames=list(character(0), character(0), vname))
# go....
if(length(z) > 0) {
## assemble the relevant interaction distance for each pair of points
rxu <- r[ tx, tu ]
## apply relevant threshold to each pair of points
str <- (d <= rxu)
## score
for(i in 1:npairs) {
# data points with mark m1
Xsub <- (tx == mark1[i])
# quadrature points with mark m2
Qsub <- (tu == mark2[i])
# assign
z[Xsub, Qsub, i] <- str[Xsub, Qsub]
}
}
return(z)
}
#### end of 'pot' function ####
# ........ auxiliary functions ..............
delHS <- function(which, types, radii, archy) {
radii[which] <- NA
if(all(is.na(radii))) return(Poisson())
return(HierStrauss(types=types, radii=radii, archy=archy))
}
# Set up basic object except for family and parameters
BlankHSobject <-
list(
name = "Hierarchical Strauss process",
creator = "HierStrauss",
family = "hierpair.family", # evaluated later
pot = HSpotential,
par = list(types=NULL, radii=NULL, archy=NULL), # filled in later
parnames = c("possible types",
"interaction distances",
"hierarchical order"),
selfstart = function(X, self) {
      types <- if(is.null(self$par$types)) levels(marks(X)) else self$par$types
      archy <- if(is.null(self$par$archy)) types else self$par$archy
      HierStrauss(types=types, radii=self$par$radii, archy=archy)
},
init = function(self) {
types <- self$par$types
if(!is.null(types)) {
radii <- self$par$radii
nt <- length(types)
MultiPair.checkmatrix(radii, nt, sQuote("radii"), asymmok=TRUE)
if(length(types) == 0)
stop(paste("The", sQuote("types"),"argument should be",
"either NULL or a vector of all possible types"))
if(any(is.na(types)))
stop("NA's not allowed in types")
if(is.factor(types)) {
types <- levels(types)
} else {
types <- levels(factor(types, levels=types))
}
}
},
update = NULL, # default OK
print = function(self) {
radii <- self$par$radii
types <- self$par$types
archy <- self$par$archy
splat(nrow(radii), "types of points")
if(!is.null(types) && !is.null(archy)) {
splat("Possible types and ordering:")
print(archy)
} else if(!is.null(types)) {
splat("Possible types:")
print(types)
} else splat("Possible types:\t not yet determined")
splat("Interaction radii:")
print(hiermat(radii, self$par$archy))
invisible(NULL)
},
interpret = function(coeffs, self) {
# get possible types
typ <- self$par$types
ntypes <- length(typ)
# get matrix of Strauss interaction radii
r <- self$par$radii
# list all unordered pairs of types
uptri <- self$par$archy$relation & !is.na(r)
index1 <- (row(r))[uptri]
index2 <- (col(r))[uptri]
npairs <- length(index1)
# extract canonical parameters; shape them into a matrix
gammas <- matrix(NA, ntypes, ntypes)
dimnames(gammas) <- list(typ, typ)
gammas[ cbind(index1, index2) ] <- exp(coeffs)
#
return(list(param=list(gammas=gammas),
inames="interaction parameters gamma_ij",
printable=hiermat(round(gammas, 4), self$par$archy)))
},
valid = function(coeffs, self) {
# interaction parameters gamma[i,j]
gamma <- (self$interpret)(coeffs, self)$param$gammas
# interaction radii
radii <- self$par$radii
# parameters to estimate
required <- !is.na(radii) & self$par$archy$relation
# all required parameters must be finite
if(!all(is.finite(gamma[required]))) return(FALSE)
# DIAGONAL interaction parameters must be non-explosive
d <- diag(rep(TRUE, nrow(radii)))
return(all(gamma[required & d] <= 1))
},
project = function(coeffs, self) {
# interaction parameters gamma[i,j]
gamma <- (self$interpret)(coeffs, self)$param$gammas
      # interaction radii, types and hierarchical order
      radii <- self$par$radii
      types <- self$par$types
      archy <- self$par$archy
# problems?
uptri <- self$par$archy$relation
required <- !is.na(radii) & uptri
okgamma <- !uptri | (is.finite(gamma) & (gamma <= 1))
naughty <- required & !okgamma
#
if(!any(naughty))
return(NULL)
if(spatstat.options("project.fast")) {
# remove ALL naughty terms simultaneously
return(delHS(naughty, types, radii, archy))
} else {
# present a list of candidates
rn <- row(naughty)
cn <- col(naughty)
ord <- self$par$archy$ordering
uptri <- (ord[rn] <= ord[cn])
upn <- uptri & naughty
rowidx <- as.vector(rn[upn])
colidx <- as.vector(cn[upn])
mats <- lapply(as.data.frame(rbind(rowidx, colidx)),
matrix, ncol=2)
inters <- lapply(mats, delHS, types=types, radii=radii, archy=archy)
return(inters)
}
},
irange = function(self, coeffs=NA, epsilon=0, ...) {
r <- self$par$radii
active <- !is.na(r) & self$par$archy$relation
if(any(!is.na(coeffs))) {
gamma <- (self$interpret)(coeffs, self)$param$gammas
gamma[is.na(gamma)] <- 1
active <- active & (abs(log(gamma)) > epsilon)
}
if(any(active)) return(max(r[active])) else return(0)
},
version=NULL # to be added
)
class(BlankHSobject) <- "interact"
# finally create main function
HierStrauss <- function(radii, types=NULL, archy=NULL) {
if(!is.null(types)) {
if(is.null(archy)) archy <- seq_len(length(types))
archy <- hierarchicalordering(archy, types)
}
radii[radii == 0] <- NA
out <- instantiate.interact(BlankHSobject,
list(types=types,
radii=radii,
archy=archy))
if(!is.null(types))
dimnames(out$par$radii) <- list(types, types)
return(out)
}
HierStrauss <- intermaker(HierStrauss, BlankHSobject)
HierStrauss
})
hierarchicalordering <- function(i, s) {
s <- as.character(s)
n <- length(s)
possible <- if(is.character(i)) s else seq_len(n)
j <- match(i, possible)
if(any(uhoh <- is.na(j)))
stop(paste("Unrecognised",
ngettext(sum(uhoh), "level", "levels"),
sQuote(i[uhoh]),
"amongst possible levels",
commasep(sQuote(s))))
if(length(j) < n)
stop("Ordering is incomplete")
ord <- order(j)
m <- matrix(, n, n)
rel <- matrix(ord[row(m)] <= ord[col(m)], n, n)
dimnames(rel) <- list(s, s)
x <- list(indices=j, ordering=ord, labels=s, relation=rel)
class(x) <- "hierarchicalordering"
x
}
print.hierarchicalordering <- function(x, ...) {
splat(x$labels[x$indices], collapse=" ~> ")
invisible(NULL)
}
hiermat <- function (x, h)
{
stopifnot(is.matrix(x))
isna <- is.na(x)
x[] <- as.character(x)
x[isna] <- ""
if(inherits(h, "hierarchicalordering")) ## allows h to be NULL, etc
x[!(h$relation)] <- ""
return(noquote(x))
}
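# Usage sketch (hedged: mirrors the style of spatstat's multitype-interaction
# examples; the radii values and type labels are illustrative only):
#
# r <- matrix(c(30, NA, 40, 30), nrow = 2, ncol = 2)
# H <- HierStrauss(radii = r, types = c("A", "B"), archy = c(1, 2))
# print(H)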
## source: /R/hierstrauss.R (repo: jmetz/spatstat, no license)
sqndwdecomp <-
function (x, J, filter.number, family)
{
lx <- length(x)
ans <- matrix(0, nrow = J, ncol = length(x))
dw <- hwwn.dw(J, filter.number, family)
longest.support <- length(dw[[J]])
scale.shift <- 0
for (j in 1:J) {
l <- length(dw[[j]])
init <- (filter.number - 1) * (lx - 2^j)
for (k in 1:lx) {
yix <- seq(from = k, by = 1, length = l)
yix <- ((yix - 1)%%lx) + 1
ans[j, k] <- sum(x[yix] * dw[[j]]^2)
}
if (filter.number == 1)
scale.shift <- 0
else {
scale.shift <- (filter.number - 1) * 2^j
}
ans[j, ] <- guyrot(ans[j, ], scale.shift)
}
return(ans)
}
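# Usage sketch (hedged: assumes the companion functions hwwn.dw() and
# guyrot() from this package are available on the search path):
#
# x <- rnorm(64)
# S <- sqndwdecomp(x, J = 3, filter.number = 1, family = "DaubExPhase")
# dim(S)  # J rows, length(x) columns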
# source: /R/sqndwdecomp.R (repo: cran/hwwntest, no license)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/plots.R
\name{plot_branches_method1}
\alias{plot_branches_method1}
\title{Plot a tree with branches colored according to molecular data, method 1}
\usage{
plot_branches_method1(
x,
tip_label = "otu",
drop_outgroup = TRUE,
ladderize_tree = TRUE,
color = "red",
...
)
}
\arguments{
\item{x}{A list from get_tip_values}
\item{tip_label}{A character vector. Can be one of "otu" or "taxon"}
\item{drop_outgroup}{Boolean}
\item{ladderize_tree}{Boolean}
\item{color}{Color used for the branches.}
\item{...}{Additional arguments passed to the plotting function.}
}
\value{
a plot
}
\description{
Plot a tree with branches colored according to molecular data, method 1
}
\examples{
treefile = 'data/pg_2827_tree6577/run_pg_2827tree6577_run4/RAxML_bestTree.2020-07-31'
otufile = 'data/pg_2827_tree6577/outputs_pg_2827tree6577/otu_info_pg_2827tree6577.csv'
}
\author{
Emily Jane McTavish
}
% source: /man/plot_branches_method1.Rd (repo: McTavishLab/physcraperex, no license)
At the end of the dada2 tutorial (http://benjjneb.github.io/dada2/tutorial.html), you end up with chimera-removed variant tables which can be saved as .rds, piped directly into phyloseq, or converted into biom tables for qiime. In the scripts below, I'm saving one of my libraries as seqtabG_nochim.rds (1). In the next step, I'm combining the seqtabs of all of my libraries into one large table (2). In step 3, I am generating the fasta file that qiime1 used to call rep_set.fna. In steps 4-6, I'm making the soon-to-be biom table.
In R (dada2)
1) >saveRDS(seqtabG.nochim, "/home/ubuntu/data/dada2_16/dada2_16_output/seqtabG_nochim.rds")
2) > wild_nochim <- mergeSequenceTables(seqtabA.nochim, seqtabB.nochim, seqtabC.nochim, seqtabD.nochim, seqtabE.nochim, seqtabF.nochim, seqtabG.nochim)
3) > uniquesToFasta(getUniques(wild_nochim), fout = "/home/ubuntu/data/dada2_16/dada2_16_output/uniques_wild.fasta",ids=paste0("Seq",seq(length(getUniques(wild_nochim)))))
4) > wild.nochimt <- t(wild_nochim)
5) > wildt.nochimt <- cbind('#OTUID' = rownames(wild.nochimt), wild.nochimt)
6) > write.table(wildt.nochimt, "/home/ubuntu/data/dada2_16/dada2_16_output/wild_nochim.txt", sep='\t', row.names=FALSE, quote=FALSE)
At this point, you have a tab-separated text file with #OTUID in the top left corner cell (required to convert to .biom format); samples go across the top and sequences go down the left column. The left column has to be renamed from actual sequences to Seq1, Seq2, Seq3, etc. Now you can run qiime2. biom is a module that comes with qiime2. If you have qiime2 installed, you also have biom.
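The renaming step described above can also be done in R before writing the table; here is a minimal sketch (the toy matrix stands in for the transposed sequence table from step 4 — treat it as illustrative):

```r
# Toy stand-in for the transposed sequence table from step 4, with raw
# sequences as rownames
wild.nochimt <- matrix(1:4, nrow = 2,
                       dimnames = list(c("ACGT", "TTGA"), c("sampleA", "sampleB")))
# Rename rows from raw sequences to Seq1, Seq2, ... so they match the ids
# written by uniquesToFasta() in step 3
rownames(wild.nochimt) <- paste0("Seq", seq_len(nrow(wild.nochimt)))
```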
In qiime2
7) $ biom convert -i wild_nochim_rmseq.txt -o wild_nochim_rmseq.biom --table-type "OTU table" --to-hdf5
8) $ qiime tools import --input-path wild_nochim_rmseq.biom --type "FeatureTable[Frequency]" --output-path freq_table
9) $ qiime tools import --input-path uniques_wild.fasta --output-path rep_seqs-table --type "FeatureData[Sequence]"
It seems like a lot, but it works pretty well. Let me know if you have any questions.
Cheers,
Betsy
|
/scripts/create_otu.R
|
permissive
|
megaptera-helvetiae/SalmoTruttaVals
|
R
| false
| false
| 2,077
|
r
|
# Copyright 2019 Battelle Memorial Institute; see the LICENSE file.
#' module_socio_L180.GDP_macro
#'
#' National accounts information for GDP macro.
#'
#' @param command API command to execute
#' @param ... other optional parameters, depending on command
#' @return Depends on \code{command}: either a vector of required inputs,
#' a vector of output names, or (if \code{command} is "MAKE") all
#' the generated outputs: \code{L180.nationalAccounts}, \code{L180.laborForceSSP}.
#' There is no corresponding file in the original data system.
#' @details Select national accounts data from Penn World Table for all countries.
#' @importFrom assertthat assert_that
#' @importFrom dplyr filter lag mutate mutate_at select rename
#' @author SHK October 2020
#'
module_socio_L180.GDP_macro <- function(command, ...) {
if(command == driver.DECLARE_INPUTS) {
return(c(FILE = "common/iso_GCAM_regID",
FILE = "common/GCAM_region_names",
FILE = "socioeconomics/SSP_database_v9",
FILE = "socioeconomics/pwt91",
FILE = "socioeconomics/pwt91_na",
"L100.GTAP_capital_stock"))
} else if(command == driver.DECLARE_OUTPUTS) {
return(c("L180.nationalAccounts",
"L180.laborForceSSP"))
} else if(command == driver.MAKE) {
# silence package checks
scenario <- year <- gdp <- GCAM_region_ID <- account <- Region <- Units <- growth <- timestep <- region <-
GDP <- pop <- laborproductivity <- NULL
# -----------------------------------------------------------------------------
# 1. Read data
all_data <- list(...)[[1]]
# note this data is based on market exchange rate (mer)
# macroeconomic data from the Penn World Table
PWT91.raw <- get_data(all_data, "socioeconomics/pwt91")
PWT91.supplemental <- get_data(all_data, "socioeconomics/pwt91_na")
# population by cohort for determining labor force
pop.cohort.ssp.data <- get_data(all_data, "socioeconomics/SSP_database_v9")
gcam.reg.iso <- get_data(all_data, "common/iso_GCAM_regID", strip_attributes = TRUE)
GCAM_region_names <- get_data(all_data, "common/GCAM_region_names", strip_attributes = TRUE)
L100.GTAP_capital_stock <- get_data(all_data, "L100.GTAP_capital_stock", strip_attributes = TRUE)
# -----------------------------------------------------------------------------
# Data ID Info for Penn World Table used here (for complete list, see PWT91 in raw data)
# Variable name Variable definition
# countrycode 3-letter ISO country code
# country Country name
# currency_unit Currency unit
# year Year
# rgdpo Output-side real GDP at chained PPPs (in mil. 2011US$)
# pop Population (in millions)
# emp Number of persons engaged (in millions)
# avh Average annual hours worked by persons engaged
# hc Index of human capital per person, based on years of schooling (see PWT9)
# rgdpna RealGDP at constant 2011 national prices (in mil. 2011US$)
# rconna Real consumption at constant 2011 national prices (in mil. 2011US$)
# rdana Real domestic absorption at constant 2011 national prices (in mil. 2011US$)
## Real domestic absorption = consumption (HH+G) + investment
# rnna Capital stock at constant 2011 national prices (in mil. 2011US$)
# rtfpna TFP at constant national prices (2011=1)
# labsh Share of labour compensation in GDP at current national prices
# delta Average depreciation rate of the capital stock
#
PWT91.raw %>% select(countrycode, country, year, pop, emp, avh, rgdpna, rconna, rdana,
rnna, labsh, delta) %>%
rename(iso = countrycode,
labor.force = emp,
hrs.worked.annual = avh,
pop.pwt = pop,
gdp.pwt = rgdpna,
consumption = rconna,
cons.plus.invest = rdana,
capital.stock = rnna,
labor.share.gdp = labsh,
depreciation.rate = delta) %>%
mutate(iso = tolower(iso)) %>%
gather(var, value, -iso, -country, -year) %>%
filter(!is.na(value)) -> pwt
# process supplemental data to be consistent with main pwt data, namely:
# 1. convert from constant local currency to constant USD using the provided
# exchange rate in the base dollar year
# 2. ensure export and imports balance globally (they are off by ~ 1-2%)
PWT91.supplemental %>%
rename(iso = countrycode) %>%
mutate(iso = tolower(iso)) %>%
filter(iso %in% unique(pwt$iso)) ->
PWT91.supplemental
PWT91.supplemental %>%
# get the exchange rate of the base dollar year
filter(year == socioeconomics.PWT_CONSTANT_CURRENCY_YEAR) %>%
select(iso, xr) %>%
# join that exchange rate back onto the base q_x, q_m (exports and imports)
# which are already in the base currency year so that we can jump from
# local currency to USD
left_join_error_no_match(PWT91.supplemental %>% select(iso, year, q_x, q_m), ., by=c("iso")) %>%
mutate(exports = q_x / xr,
imports = q_m / xr) %>%
select(iso, year, exports, imports) %>%
# scale imports so that they exactly match exports globally
group_by(year) %>%
mutate(imports = imports * sum(exports, na.rm=T) / sum(imports, na.rm = T)) %>%
ungroup() %>%
gather(var, value, -iso, -year) %>%
filter(!is.na(value)) -> pwt_supp
pwt %>%
select(-country) %>%
bind_rows(pwt_supp) ->
pwt
# replace iso:sxm with iso:nld for Dutch part of Saint Maarten
pwt %>% mutate(iso = gsub("sxm", "nld", iso)) %>%
group_by(iso, var, year) %>%
summarise_all(sum, na.rm = TRUE) %>%
ungroup() -> pwt
## Process and aggregate data to GCAM inputs
## Check for iso errors, do not include.
pwt %>% filter(!(iso %in% gcam.reg.iso$iso)) -> pwt.no.iso
assertthat::assert_that(nrow(pwt.no.iso) == 0)
pwt %>%
left_join_error_no_match(gcam.reg.iso, by = "iso") %>%
spread(var, value) %>%
mutate(hrs.worked.annual = if_else(is.na(hrs.worked.annual), socioeconomics.DEFAULT_MEDIAN_HOURS_WORKED, hrs.worked.annual), # Replace missing with median average hours
wages = gdp.pwt * labor.share.gdp, #million 2011US$
wage.rate = wages / labor.force / hrs.worked.annual, #wages/worker-hr
labor.force.share = labor.force / pop.pwt,
depreciation = capital.stock * depreciation.rate,
investment = cons.plus.invest - consumption,
net.export = exports - imports,
capital.net.export = -net.export,
savings = investment - capital.net.export,
savings.rate = savings / gdp.pwt) -> nationalAccounts.2011dollar
# Convert all US dollar year from 2011 to 1990 for consistency with GCAM input
nationalAccounts.2011dollar %>% mutate( gdp.pwt = gdp.pwt * gdp_deflator(1990, 2011),
consumption = consumption * gdp_deflator(1990, 2011),
cons.plus.invest = cons.plus.invest * gdp_deflator(1990, 2011),
wages = wages * gdp_deflator(1990, 2011),
wage.rate = wage.rate * gdp_deflator(1990, 2011),
capital.stock = capital.stock * gdp_deflator(1990, 2011),
depreciation = depreciation * gdp_deflator(1990, 2011),
investment = investment * gdp_deflator(1990, 2011),
savings = savings * gdp_deflator(1990, 2011),
capital.net.export = capital.net.export * gdp_deflator(1990, 2011)) -> L180.nationalAccounts
# We want to partition the total capital stock and investment to split out energy capital usage
# The Penn World table does not include this level of detail so we will need to utilize GTAP
# capital data to do this. The two datasets will not agree on total capital stock values so
# instead we will compute shares from the GTAP data then apply that to the Penn World table values
# Note: we will have fewer countries in the GTAP databases. We will give the missing countries
# the GCAM regional shares so first compute that.
L100.GTAP_capital_stock %>%
group_by(region_GCAM, year) %>%
mutate(invest_total = sum(CapitalCost), stock_total = sum(VKE)) %>%
group_by(region_GCAM, year, GCAM_sector) %>%
summarize(gtap_ene_inv_share = sum(CapitalCost) / mean(invest_total),
gtap_ene_stock_share = sum(VKE) / mean(stock_total)) %>%
ungroup() ->
GTAP_inv_share.regional
# Compute shares for the countries we have
L100.GTAP_capital_stock %>%
group_by(region_GTAP, year) %>%
mutate(invest_total = sum(CapitalCost), stock_total = sum(VKE)) %>%
group_by(region_GTAP, year, GCAM_sector) %>%
summarize(gtap_ene_inv_share = sum(CapitalCost) / mean(invest_total),
gtap_ene_stock_share = sum(VKE) / mean(stock_total)) %>%
ungroup() ->
GTAP_inv_share.ctry
# Set the shares for the countries not included in GTAP
L180.nationalAccounts %>%
select(iso) %>%
distinct() %>%
left_join_error_no_match(gcam.reg.iso %>% select(iso, GCAM_region_ID), by=c("iso")) %>%
filter(!iso %in% unique(L100.GTAP_capital_stock$region_GTAP)) %>%
left_join_error_no_match(GCAM_region_names, by=c("GCAM_region_ID")) %>%
select(region_GTAP = iso, region_GCAM = region) %>%
# note: we are using left_join since we use it to filter the ISOs which
# are not available in GTAP (these will then get assigned the regional
# average shares)
left_join(GTAP_inv_share.regional, by=c("region_GCAM")) %>%
select(-region_GCAM) %>%
# combine with the GTAP ctry shares to now have coverage across all ISOs
bind_rows(GTAP_inv_share.ctry) %>%
rename(iso = region_GTAP) ->
GTAP_inv_share
# filter for just Energy as that is what we are interested in at the moment, then fill out
# all historical years, holding values constant beyond the endpoints of the GTAP data
GTAP_inv_share %>%
filter(GCAM_sector == "Energy") %>%
select(-GCAM_sector) %>%
full_join(tibble(year = unique(L180.nationalAccounts$year)), by=c("year")) %>%
complete(year, nesting(iso)) %>%
group_by(iso) %>%
mutate(gtap_ene_inv_share = approx_fun(year, gtap_ene_inv_share, rule = 2),
gtap_ene_stock_share = approx_fun(year, gtap_ene_stock_share, rule = 2)) %>%
ungroup() ->
GTAP_inv_share_complete
# Finally, adjust the Penn World table values with the GTAP capital shares
L180.nationalAccounts %>%
left_join_error_no_match(GTAP_inv_share_complete, by=c("iso", "year")) %>%
mutate(capital.stock = capital.stock * (1.0 - gtap_ene_stock_share),
depreciation = depreciation * (1.0 - gtap_ene_stock_share),
energy.investment = investment * gtap_ene_inv_share) %>%
select(-gtap_ene_inv_share, -gtap_ene_stock_share) ->
L180.nationalAccounts
#Future labor force share of population from SSP population by cohort
#Clean up dataset for processing.
pop.cohort.ssp.data %>%
rename(model = MODEL, scenario = SCENARIO, iso = REGION, var = VARIABLE, unit = UNIT) %>%
filter(model == "IIASA-WiC POP") %>%
mutate(var = gsub("\\|", "_", var),
scenario = tolower(substr(scenario, 1, 4)),
iso = tolower(iso),
var = gsub("Population", "pop", var),
var = gsub("\\-", "_", var),
var = gsub("Female", "n", var),
var = gsub("Male", "n", var),
var = gsub("Aged", "", var)) -> pop.cohort.ssp #gender neutral for total
#Filter working age population, excluding some cohorts
#Working age population = ages 15-64. Don't include ages 15-19 in HS and 20-24 in college.
pop.cohort.ssp %>% filter(var %in% c("pop", "pop_n_15_19_No Education", "pop_n_15_19_Primary Education",
"pop_n_20_24_No Education", "pop_n_20_24_Primary Education",
"pop_n_20_24_Secondary Education",
"pop_n_25_29", "pop_n_30_34", "pop_n_35_39" , "pop_n_40_44",
"pop_n_45_49", "pop_n_50_54", "pop_n_55_59" , "pop_n_60_64")) %>%
select(-model) -> pop.labor.force.ssp
pop.labor.force.ssp %>% filter(var %in% "pop") %>%
gather(year, value, -scenario, -var, -iso, -unit) %>%
mutate(year = as.integer(year),
value = as.numeric(value)) -> pop.ssp
pop.labor.force.ssp %>% filter(!(var %in% "pop")) %>%
gather(year, value, -scenario, -var, -iso, -unit) %>%
mutate(year = as.integer(year),
value = as.numeric(value)) -> labor.force.cohort.ssp
labor.force.cohort.ssp %>% select(-var) %>%
group_by(scenario, iso, unit, year) %>%
summarise_all(sum, na.rm = TRUE) %>%
ungroup() %>%
mutate(var ="labor.force") -> labor.force.ssp
#include total SSP population (pop) in table
labor.force.ssp %>%
bind_rows(pop.ssp) %>%
left_join_error_no_match(gcam.reg.iso, by = "iso") %>%
select(scenario, iso, GCAM_region_ID, var, year, value, unit) %>%
arrange(scenario, iso, var, year) ->
L180.laborForceSSP
# WARNING!!!
# Not all data is available for every country and year.
# Check for missing national accounts data by country before using for GCAM region.
# All rates and shares should be recalculated based on total values for GCAM aggregate regions.
L180.nationalAccounts %>% select(iso, country_name, GCAM_region_ID, year,
pop.pwt, labor.force, gdp.pwt, consumption,
cons.plus.invest, capital.stock, depreciation,
savings, wages, hrs.worked.annual, wage.rate,
labor.force.share, depreciation.rate, savings.rate,
energy.investment, capital.net.export ) -> L180.nationalAccounts
# ===================================================
# Produce outputs
L180.laborForceSSP %>%
add_title("Labor Force and Pop by SSP Scenarios") %>%
add_units("millions") %>%
add_comments("Total pop and working age population less ages 15-19 in HS and 20-24 in college") %>%
add_legacy_name("NA") %>%
add_precursors("common/iso_GCAM_regID", "socioeconomics/SSP_database_v9") ->
L180.laborForceSSP
L180.nationalAccounts %>%
add_title("Processed National Accounts Data from Penn World Table") %>%
add_units("million 1990US$") %>%
add_comments("National accounts data: GDP, capital, depreciation, savings rate,
labor wages, labor productivity, labor force, and labor force share, energy investment") %>%
add_legacy_name("NA") %>%
add_precursors("common/iso_GCAM_regID", "common/GCAM_region_names",
"socioeconomics/pwt91", "socioeconomics/pwt91_na",
"L100.GTAP_capital_stock") ->
L180.nationalAccounts
return_data(L180.nationalAccounts, L180.laborForceSSP)
} else {
stop("Unknown command")
}
}
|
/input/gcamdata/R/zsocio_L180.GDP_macro.R
|
permissive
|
JGCRI/gcam-core
|
R
| false
| false
| 15,661
|
r
|
#'A dataset containing NO2 data for 2010
#'
#'This dataset contains smoothed NO2 data from March to September 2010
#'
#'@format An array of 7 x 179 x 360 dimensions.
#'\describe{
#' \item{Dimension 1}{Each \code{NO2_2010[t, , ]} contains NO2 data for a given month with \code{t=1} corresponding to March and \code{t=7} corresponding to September}
#' \item{Dimensions 2,3}{Each \code{NO2_2010[ ,x, y]} contains NO2 concentration for a given position in the world map.}
#'
#'}
#'
#'@source \url{https://neo.sci.gsfc.nasa.gov/view.php?datasetId=AURA_NO2_M}
"NO2_2010"
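The indexing layout described above can be sketched as follows (a stand-in array is used here, assuming seven monthly slices for t = 1..7, March through September, per the \describe block; the real NO2_2010 object ships with the package):

```r
# Illustrative only: NO2_demo mimics the documented month x lat x lon layout
NO2_demo <- array(runif(7 * 179 * 360), dim = c(7, 179, 360))
march <- NO2_demo[1, , ]   # t = 1 corresponds to March per the documentation
dim(march)                 # one 179 x 360 world grid per month
```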
|
/R/NO2_2010-data.R
|
permissive
|
battyone/eventstream
|
R
| false
| false
| 567
|
r
|
## ----setup, include=FALSE------------------------------------------------
require(knitr)
knitr::opts_chunk$set(echo = TRUE)
library(mvITR)
## ----Generate data set, results='markup', echo=TRUE----------------------
set.seed(123)
dat <- generateData(n = 1000)
str(dat)
## ----Summary plots, results='markup', echo=FALSE, results='show', fig.cap="Figure 1. Risk score distribution for simulated data", fig.width=7----
boxplot(dat$r ~ dat$trt, boxwex = 0.25,
xlab = "Original Treatment Group",
main = "Risk Distribution by Treatment Group",
ylab = "Risk Score", axes = FALSE)
axis(1, at = 1:2, labels = c("Control", "Treated"),
col = "white"); axis(2, las = 2)
## ----Grow a tree tau 2.75 - lambda 1, results='markup', echo=TRUE--------
tre1 <- grow.ITR(data = dat,
split.var = 1:10,
risk.threshold = 2.75,
lambda = 1,
efficacy = "y",
risk = "r",
col.trt = "trt",
col.prtx = "prtx")
tre1$tree
## ----Grow a tree tau 2.75 - lambda 2, results='markup', echo=TRUE--------
tre2 <- grow.ITR(data = dat,
split.var = 1:10,
risk.threshold = 2.75,
lambda = 2,
efficacy = "y",
risk = "r",
col.trt = "trt",
col.prtx = "prtx")
tre2$tree
## ----Code Tree Pruning, results='markup'---------------------------------
pruned1 <- prune(tre1, a = 0, risk.threshold = 2.75, lambda = 1)
pruned.display <- pruned1$result[,c(1:6,10,11)]
pruned.display$alpha <- sprintf("%.3f", as.numeric(pruned.display$alpha))
pruned.display$V <- sprintf("%.3f", as.numeric(pruned.display$V))
pruned.display$Benefit <- sprintf("%.3f", as.numeric(pruned.display$Benefit))
pruned.display$Risk <- sprintf("%.3f", as.numeric(pruned.display$Risk))
pruned.display
## ---- Cross Validated Pruning Model, results='hide', echo=TRUE-----------
rcDT.fit <- treeCV(dat = dat,
split.var = 1:10,
lambda = 1,
risk.threshold = 2.75,
efficacy = "y",
risk = "r",
col.trt = "trt",
col.prtx = "prtx",
nfolds = 5)
## ---- Code Forest Growth, results='markup'-------------------------------
set.seed(2)
rcRF.fit <- Build.RF.ITR(dat,
split.var = 1:10,
efficacy = "y",
risk = "r",
col.trt = "trt",
col.prtx = "prtx",
risk.threshold = 2.75,
ntree = 100,
lambda = 0.5)
## ---- Treatment Prediction, results='markup', echo=TRUE------------------
preds.rcDT <- predict.ITR(rcDT.fit$best.tree.alpha,
new.data = dat,
split.var = 1:10)$trt.pred
preds.rcRF <- predict.ITR(rcRF.fit,
new.data = dat,
split.var = 1:10)$trt.pred
## ---- Treatment Prediction Plot, results='markup', echo=FALSE, fig.cap="Figure 2. Prediction Comparisons for rcDT and rcRF Models.", fig.width=10, fig.height=6----
par(mfrow = c(1,2))
par(mar = c(5,5,5,1))
plot(dat$x1, dat$x2, pch = 16,
cex = 100, col = "lightgray",
xlab = expression(X[1]),
ylab = expression(X[2]),
main = paste0("Predictions from rcDT (Tree) Model\n",
"Efficacy = ",
sprintf("%.2f", mean(dat$y * (dat$trt == preds.rcDT) / 0.5)),
"\nRisk = ",
sprintf("%.2f", mean(dat$r * (dat$trt == preds.rcDT) / 0.5))),
axes = FALSE)
points(dat$x1, dat$x2, pch = 16,
cex = 0.8,
col = ifelse(preds.rcDT == 1, "forestgreen", "hotpink"))
axis(1); axis(2, las = 2)
plot(dat$x1, dat$x2, pch = 16,
cex = 100, col = "lightgray",
xlab = expression(X[1]),
ylab = expression(X[2]),
main = paste0("Predictions from rcRF (Forest) Model\n",
"Efficacy = ",
sprintf("%.2f", mean(dat$y * (dat$trt == preds.rcRF) / 0.5)),
"\nRisk = ",
sprintf("%.2f", mean(dat$r * (dat$trt == preds.rcRF) / 0.5))),
axes = FALSE)
points(dat$x1, dat$x2, pch = 16,
cex = 0.8,
col = ifelse(preds.rcRF == 1, "forestgreen", "hotpink"))
axis(1); axis(2, las = 2)
## ---- Code Variable Importance, results='markup'-------------------------
VI <- Variable.Importance.ITR(rcRF.fit, sort = FALSE)
do.call(cbind, VI)
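The "Efficacy" and "Risk" values in the plot titles above are inverse-probability-weighted means under the estimated rule; with a known randomization probability of 0.5, the estimator can be factored into a small helper (illustrative only, not part of `mvITR`):
```r
# IPW value estimate matching the expressions in the plot titles above;
# `prtx` is the known treatment probability, 0.5 in this simulation.
ipw_value <- function(outcome, trt, trt.pred, prtx = 0.5) {
  mean(outcome * (trt == trt.pred) / prtx)
}
# ipw_value(dat$y, dat$trt, preds.rcDT)  # efficacy under the rcDT rule
# ipw_value(dat$r, dat$trt, preds.rcDT)  # risk under the rcDT rule
```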
|
/inst/doc/mvITR-vignette.R
|
no_license
|
kdoub5ha/mvITR
|
R
| false
| false
| 4,758
|
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Case1.r
\docType{data}
\name{Case1}
\alias{Case1}
\title{Virtual dataset Case 1}
\format{
A data frame with 1200 rows and 30 variables:
\describe{
\item{SR_ 0.1}{Empty column denoting the start of the record sampled at a
sampling resolution of 0.1 mm}
\item{Tnew}{Age, in years relative to the start of the record}
\item{D}{Depth, in mm along the virtual record}
\item{d18Oc}{stable oxygen isotope value, in permille VPDB}
\item{D47}{clumped isotope value, in permille}
\item{SR_ 0.2}{Empty column denoting the start of the record sampled at a
sampling resolution of 0.2 mm}
\item{Tnew}{Age, in years relative to the start of the record}
\item{D}{Depth, in mm along the virtual record}
\item{d18Oc}{stable oxygen isotope value, in permille VPDB}
\item{D47}{clumped isotope value, in permille}
\item{SR_ 0.45}{Empty column denoting the start of the record sampled at a
sampling resolution of 0.45 mm}
\item{Tnew}{Age, in years relative to the start of the record}
\item{D}{Depth, in mm along the virtual record}
\item{d18Oc}{stable oxygen isotope value, in permille VPDB}
\item{D47}{clumped isotope value, in permille}
\item{SR_ 0.75}{Empty column denoting the start of the record sampled at a
sampling resolution of 0.75 mm}
\item{Tnew}{Age, in years relative to the start of the record}
\item{D}{Depth, in mm along the virtual record}
\item{d18Oc}{stable oxygen isotope value, in permille VPDB}
\item{D47}{clumped isotope value, in permille}
\item{SR_ 1.55}{Empty column denoting the start of the record sampled at a
sampling resolution of 1.55 mm}
\item{Tnew}{Age, in years relative to the start of the record}
\item{D}{Depth, in mm along the virtual record}
\item{d18Oc}{stable oxygen isotope value, in permille VPDB}
\item{D47}{clumped isotope value, in permille}
\item{SR_ 3.25}{Empty column denoting the start of the record sampled at a
sampling resolution of 3.25 mm}
\item{Tnew}{Age, in years relative to the start of the record}
\item{D}{Depth, in mm along the virtual record}
\item{d18Oc}{stable oxygen isotope value, in permille VPDB}
\item{D47}{clumped isotope value, in permille}
...
}
}
\source{
See code to generate data in \code{data-raw}
Details on how these example cases are defined is provided in:
de Winter, N. J., Agterhuis, T., Ziegler, M., Optimizing sampling strategies
in high-resolution paleoclimate records, \emph{Climate of the Past Discussions}
\strong{2020}, 1–52.
\url{https://doi.org/fpc4}
}
\usage{
Case1
}
\description{
A dataset containing ages (\code{Tnew}), depth values (\code{D}), stable
oxygen isotope values (\eqn{\delta^{18}O}{δ18O}) and clumped isotope values
\eqn{\Delta_{47}}{Δ47} of a simulated carbonate record based on environmental
parameters following Case 1 and employing a sampling resolution of
\code{0.1 mm}, \code{0.2 mm}, \code{0.45 mm}, \code{0.75 mm}, \code{1.55 mm}
and \code{3.25 mm}.
}
\details{
Case 1 describes an ideal temperature sinusoid without distortion by either
changes in growth rate or changes in \eqn{\delta^{18}O_{w}}{δ18Ow}.
Generated using the code in "Generate_Case1.r" in \code{data-raw}
}
\keyword{datasets}
|
/man/Case1.Rd
|
no_license
|
cran/seasonalclumped
|
R
| false
| true
| 3,180
|
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/partydf.R
\name{partition}
\alias{partition}
\title{Partition data across workers in a cluster}
\usage{
partition(data, cluster)
}
\arguments{
\item{data}{Dataset to partition, typically grouped. When grouped, all
observations in a group will be assigned to the same cluster.}
\item{cluster}{Cluster to use.}
}
\value{
A [party_df].
}
\description{
Partitioning ensures that all observations in a group end up on the same
worker. To try and keep the observations on each worker balanced,
`partition()` uses a greedy algorithm that iteratively assigns each group to
the worker that currently has the fewest rows.
}
\examples{
library(dplyr)
cl <- default_cluster()
cluster_library(cl, "dplyr")
mtcars2 <- partition(mtcars, cl)
mtcars2 \%>\% mutate(cyl2 = 2 * cyl)
mtcars2 \%>\% filter(vs == 1)
mtcars2 \%>\% group_by(cyl) \%>\% summarise(n())
mtcars2 \%>\% select(-cyl)
}
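The greedy balancing rule described above is easy to sketch on its own; this illustrative version (not multidplyr's internals) assigns each group to the worker that currently holds the fewest rows:
```r
# Illustrative greedy assignment; multidplyr's implementation may differ.
greedy_assign <- function(group_sizes, n_workers) {
  load <- rep(0, n_workers)                  # rows currently on each worker
  assignment <- integer(length(group_sizes))
  for (g in order(group_sizes, decreasing = TRUE)) {
    w <- which.min(load)                     # least-loaded worker
    assignment[g] <- w
    load[w] <- load[w] + group_sizes[g]
  }
  assignment
}
greedy_assign(c(100, 40, 60, 30), n_workers = 2)
```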
|
/multidplyr/man/partition.Rd
|
permissive
|
yp1227/Multiplier
|
R
| false
| true
| 984
|
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/transfer_coda.R
\name{clrvar2phi}
\alias{clrvar2phi}
\title{Calculate Phi Statistics (Proportionality) from CLR Covariances}
\usage{
clrvar2phi(Sigma)
}
\arguments{
\item{Sigma}{Covariance matrix Px(PN) where N is number of
covariance matrices in CLR space}
}
\value{
Array (same dimension as Sigma) but elements represent phi statistics
}
\description{
Assumes parts are the first two dimensions of Sigma
}
\references{
Lovell, David, Vera Pawlowsky-Glahn, Juan Jose Egozcue,
Samuel Marguerat, and Jurg Bahler. 2015. Proportionality: A Valid Alternative
to Correlation for Relative Data. PLoS Computational Biology 11 (3).
Public Library of Science: e1004075.
}
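For a single CLR covariance matrix the conversion can be sketched directly: under one common formulation, phi(i, j) = var(clr_i - clr_j) / var(clr_i), with var(clr_i - clr_j) = Sigma_ii + Sigma_jj - 2*Sigma_ij. The normalization convention below is an assumption and may differ from \code{clrvar2phi()}:
```r
# Illustrative sketch for one P x P CLR covariance matrix;
# dividing by var(clr_i) is an assumption about the convention used.
phi_from_clrvar <- function(Sigma) {
  v <- diag(Sigma)                       # var(clr_i)
  num <- outer(v, v, "+") - 2 * Sigma    # var(clr_i - clr_j)
  sweep(num, 1, v, "/")                  # divide row i by var(clr_i)
}
```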
|
/man/clrvar2phi.Rd
|
no_license
|
jsilve24/RcppCoDA
|
R
| false
| true
| 752
|
rd
|
# Congratulations on learning GitHub!
# Make any edits you like here:
jbkku j
|
/practicescript.R
|
no_license
|
garciajj/github-intro-2
|
R
| false
| false
| 80
|
r
|
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/boynton.R
\name{show_boynton}
\alias{show_boynton}
\title{Show the colors in a palette view}
\usage{
show_boynton()
}
\description{
Show the colors in a palette view
}
|
/man/show_boynton.Rd
|
no_license
|
btupper/catecolors
|
R
| false
| false
| 255
|
rd
|
#------------------------------------------------------------------------------#
# Replication of Brookhart M.A. et al. (2006)
# Variable Selection for Propensity Score Models.
# American Journal of Epidemiology, 163(12), 1149–1156.
#
# Replicator: K Luijken
# Co-pilot: B B L Penning de Vries
#
# Helper function to estimate outcome model including the ps based on subclassification
#------------------------------------------------------------------------------#
#' Estimating the propensity score model based on subclassification for replication Brookhart et al. (2006)
#'
#' @param PS The propensity score, as estimated by the function estimate_ps().
#' @param data The dataset containing the exposure variable A, outcome variable Y and relevant covariates.
#'
#' @return Returns the exposure effect, estimated within strata defined by quintiles of the propensity score and then averaged across strata. Note that NAs are removed in computing the mean;
#' NAs occur in subsets of the data that include non-exposed or exposed only, i.e., in separated datasets. Stores potential warnings for all models fitted in data subsets.
#'
#' @import data.table
#' @import simsalapar
estimate_effect_quintiles <- function(PS, data){
df <- data.frame(Y = data$Y, A = data$A, PS = PS)
# Split propensity score into quintiles
PS_quint <- quantile(df$PS, probs = seq(from = 0,to = 1, by = 0.2))
# Subset dataset accordingly
df$pscoreq <- cut(df$PS, breaks = PS_quint, labels = 1:5, include.lowest = T)
# Create output matrices to store exposure effect in each quintile and potential warnings
quintile_effect <- matrix(NA, nrow=5, ncol = 1)
warnings <- data.table::data.table(matrix(NA, nrow=5, ncol = 1))
for(i in 1:5){
quintile <- df[df$pscoreq == i,]
quintile_mod <- simsalapar::tryCatch.W.E(
      glm(Y ~ A, data = quintile, family = poisson)) # no "+ quintile$PS" term, right? Agreed!
quintile_effect[i,] <- quintile_mod$value$coefficients["A"]
    warnings[i,] <- ifelse(is.null(quintile_mod$warning), NA, conditionMessage(quintile_mod$warning))
}
# Return mean exposure effect (note that NAs are removed in computing the mean!
# NAs occur in subsets of the data that include non-exposed or exposed only,
# i.e., in separated datasets)
return(list(effect_A = mean(quintile_effect, na.rm = T), warning = warnings))
}
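A minimal call might look like the following; the simulated data and variable names are illustrative, not taken from the replication itself:
```r
# Illustrative usage with simulated data (names are assumptions).
set.seed(1)
n <- 1000
X <- rnorm(n)
A <- rbinom(n, 1, plogis(0.5 * X))
Y <- rpois(n, exp(0.2 + 0.3 * A + 0.4 * X))
dat <- data.frame(Y = Y, A = A)
PS <- fitted(glm(A ~ X, family = binomial))
res <- estimate_effect_quintiles(PS, dat)
res$effect_A  # average log rate ratio across propensity score quintiles
```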
|
/Replication.Brookhart.2006/R/estimate_effect_quintiles.R
|
permissive
|
replisims/Brookhart_MA-2006
|
R
| false
| false
| 2,352
|
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/rls_update.R
\name{rls_update}
\alias{rls_update}
\title{Updates the model fits}
\usage{
rls_update(model, datatr = NA, y = NA, runcpp = TRUE)
}
\arguments{
\item{model}{A model object}
\item{datatr}{a data.list with transformed data (from model$transform_data(D))}
\item{y}{A vector of the model output for the corresponding time steps in \code{datatr}}
\item{runcpp}{Optional, default = TRUE. If TRUE, a c++ implementation of the update is run, otherwise a slower R implementation is used.}
}
\value{
Returns a named list for each horizon (\code{model$kseq}) containing the variables needed for the RLS fit (for each horizon, which is saved in model$Lfits):
It will update variables in the forecast model object.
}
\description{
Calculates the RLS update of the model coefficients with the provided data.
}
\details{
See vignette ??ref(recursive updating, not yet finished) on how to use the function.
}
\examples{
# See rls_predict examples
}
\seealso{
See \code{\link{rls_predict}}.
}
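For orientation, a single textbook recursive least squares step with forgetting factor lambda looks as follows; this is only a sketch, not the multi-horizon implementation in onlineforecast:
```r
# One textbook RLS step (illustrative; not onlineforecast's implementation).
rls_step <- function(theta, P, x, y, lambda = 0.99) {
  x <- matrix(x, ncol = 1)
  k <- (P %*% x) / as.numeric(lambda + t(x) %*% P %*% x)  # gain vector
  err <- y - as.numeric(t(x) %*% theta)                   # prediction error
  theta <- as.numeric(theta + k * err)                    # coefficient update
  P <- (P - k %*% t(x) %*% P) / lambda                    # covariance update
  list(theta = theta, P = P)
}
```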
|
/onlineforecast/man/rls_update.Rd
|
no_license
|
akhikolla/updatedatatype-list2
|
R
| false
| true
| 1,073
|
rd
|
pollutantmean <- function(directory, pollutant, id = 1:332){
  data <- numeric(0)  # accumulator for pollutant readings across monitors
result <- 0
# the id vector is a reference to the monitor data we need to read.
# read in the monitors specified by the callee
files = file.path(directory, paste0(formatC(id, width=3, format="d", flag="0"), ".csv"))
# print(files)
for(item in files)
{
# print(paste0("Processing File:", item))
data <- c(data, read.csv(item)[[pollutant]])
}
# handle excluding NA rows here
result <- mean(data, na.rm=TRUE)
return(result)
}
# check against provided example output
answer <- pollutantmean("/home/user/Downloads/specdata", "sulfate", 1:10)
print(answer)
answer <- pollutantmean("/home/user/Downloads/specdata", "nitrate", 70:72)
print(answer)
answer <- pollutantmean("/home/user/Downloads/specdata", "nitrate", 23)
print(answer)
|
/Week2/pollutantmean.R
|
no_license
|
XAGV1YBGAdk34WDPVVLn/datasciencecoursera
|
R
| false
| false
| 869
|
r
|
#!/usr/bin/env Rscript
library(TitanCNA)
version <- "0.1.2"
args <- commandArgs(TRUE)
tumWig <- args[1]
normWig <- args[2]
gc <- args[3]
map <- args[4]
target_list <- args[5]
outfile <- args[6]
genometype <- args[7]
message('titan: Correcting GC content and mappability biases...')
if( target_list!="NULL" ){
target_list_data <- read.table(target_list, sep="\t", header=F, stringsAsFactors=F)
cnData <- correctReadDepth(tumWig, normWig, gc, map, genomeStyle=genometype, targetedSequence = target_list_data)
} else {
cnData <- correctReadDepth(tumWig, normWig, gc, map, genomeStyle=genometype)
}
write.table(cnData, file = outfile, col.names = TRUE, row.names = FALSE, quote = FALSE, sep ="\t")
|
/dockerfiles/titan/correctReads.R
|
no_license
|
shahcompbio/wgspipeline_docker_containers
|
R
| false
| false
| 712
|
r
|
#######################################################################
# Mike Safar - CorpusSummary
# ----------------------------------------------
# Copyright (C) 2018. All Rights Reserved.
########################################################################
library(R6)
library(logging)
library(tm)
library(scales)
library(dplyr)
CorpusSummary <- R6Class("CorpusSummary",
                         public = list(
#constructor
initialize = function(corpus,
pre.stemmed.corpus = NULL,
weight.terms = T,
sparse.maximal = 0.5,
method = "euclidian",
k.clusters = 5,
k.rounds = 20,
min.words.per.doc = NULL,
...) {
private$corpus <- corpus
private$weight_DTM <- weight.terms
private$method <- method
private$pre_stemmed_corpus <- pre.stemmed.corpus
private$sparse_maximal <- sparse.maximal
private$kClusters <- k.clusters
private$kRounds <- k.rounds
private$min_words_per_doc <- min.words.per.doc
# Not sure how to handle this, but seems like i have to ensure it's there
addHandler(writeToConsole)
private$processCorpus(corpus, ...)
},
#getCorpus
getCorpus = function() {private$corpus},
#getDtm
getDTM = function(sparse = TRUE) {
if (sparse)
private$dtm_sparse
else
private$dtm
},
#getMostFrequentTerms
getMostFrequentTerms = function(sparse = TRUE) {
private$getMostFreqTerms(self$getDTM(sparse))
},
#getKmeansResults
getKmeansResults = function() {private$kResults},
#getDist
getDist = function(completions = FALSE) {
dist <- private$dist
if (completions & !is.null(private$stem_completions)) {
ct <- private$stem_completions
labels <- attr(dist, "Labels")
new.labels <- as.vector(ct[ct$stem %in% labels,]$completion)
attr(dist, "Labels") <- new.labels
}
return(dist)
},
#getSummary
getSummary = function(min.words.per.doc = NULL) {
private$summary
},
getStemCompletionTable = function() {private$stem_completions},
getTermsFromDoc = function(doc.number) {
if (is.na(doc.number) || !is.numeric(doc.number) || is.null(doc.number) || identical(doc.number, integer(0)) || !(doc.number > 0))
return(NULL)
dtm <- self$getDTM(sparse = FALSE)
#assumes tfidf
pruned.dtm <- dtm[doc.number,as.vector(dtm[doc.number,] > 0)]
weighted.terms <- private$getMostFreqTerms(pruned.dtm)
if (!is.null(private$stem_completions)) {
weighted.terms <- merge(weighted.terms, private$stem_completions, by.x = "word", by.y = "stem")
rownames(weighted.terms) <- weighted.terms$word
weighted.terms$word <- weighted.terms$completion
weighted.terms$completion <- NULL
} else {
rownames(weighted.terms) <- weighted.terms$word
}
weighted.terms <- weighted.terms[order(-weighted.terms$freq),]
return(weighted.terms)
}
),
                         private = list(
#summary
corpus = "SimpleCorpus",
pre_stemmed_corpus = "SimpleCorpus",
dtm = "DocumentTermMatrix",
dtm_sparse = "DocumentTermMatrix",
dist = "dist",
method = "character",
weight_DTM = "logical",
word_frequencies = NULL,
stem_completions = NULL,
sparse_maximal = "numeric",
kClusters = "numeric",
kRounds = "numeric",
kResults = "kmeans",
min_words_per_doc = "numeric",
summary = "list",
summaryCompleted = FALSE,
#processCorpus
processCorpus = function(corpus, ...) {
      loginfo("PROCESSING CORPUS WITH DOCUMENTS: %d", length(corpus))
loginfo("...creating Matrix...")
private$dtm <- DocumentTermMatrix(corpus, ...)
loginfo("......found %s terms...", comma_format()(length(private$dtm$dimnames$Terms)))
loginfo("...getting stem completions...")
private$stem_completions <- private$getCompletionTable()
loginfo("...weighting the matrix...")
if (private$weight_DTM) {
suppressWarnings(private$dtm <- weightTfIdf(private$dtm))
}
loginfo("...removing sparse terms at maximal of %f...", private$sparse_maximal)
private$dtm_sparse <- removeSparseTerms(private$dtm, private$sparse_maximal)
loginfo("......reduced to %s terms...", comma_format()(length(private$dtm_sparse$dimnames$Terms)))
loginfo("...creating %s distance matrix...", private$method)
private$dist <- dist(t(private$dtm_sparse), private$method)
loginfo("...finding %d kmeans clusters over %d rounds...", private$kClusters, private$kRounds)
private$kResults <- kmeans(private$dist, private$kClusters, private$kRounds)
loginfo("...composing summary...")
private$composeSummary(private$min_words_per_doc)
loginfo("...DONE processing.")
},
#getMostFreqTerms
getMostFreqTerms = function(dtm = NULL, wordList = NULL) {
if (is.null(dtm))
m <- private$dtm_sparse
else
m <- dtm
ft.words <- colnames(m)
ft <- data.frame(word=colnames(m), freq=col_sums(m), row.names=ft.words, stringsAsFactors = F)
if (!is.null(wordList) && length(wordList) > 0) {
ft <- ft[which(ft$word %in% wordList),]
}
if (!is.null(private$stem_completions)) {
ft <- private$getCompletedWords(ft)
}
ft <- ft[order(ft$freq, decreasing=TRUE),]
return(ft)
},
#getCompletedWords -- WILL RETURN FREQUENCIES FROM PRE-STEMMED CORPUS
getCompletedWords = function(stems) {
completions = private$stem_completions
stopifnot(!is.null(completions))
cw <- merge(x=stems, y=completions, by.x="word", by.y="stem")
rownames(cw) <- cw$word
cw$word <- cw$completion
cw$completion <- NULL
return(cw)
},
#getCompletionTable
getCompletionTable = function(corpus = NULL) {
c <- NULL
if (is.null(corpus))
c <- private$pre_stemmed_corpus
else
c <- corpus
if (!is.null(c)) {
originals <- DocumentTermMatrix(c)
completion <- col_sums(originals) %>% sort(decreasing=T)
stem <- stemDocument(names(completion), language = "en")
completionTable <- as.data.frame(cbind(stem, names(completion)), row.names=stem, stringsAsFactors = T)
colnames(completionTable) <- c("stem", "completion")
completionTable <- completionTable[!duplicated(completionTable$stem),]
return(completionTable)
}
return(NULL)
},
#composeSummary
composeSummary = function(min.words.per.doc = NULL) {
stopifnot(!is.null(min.words.per.doc) || !is.numeric(min.words.per.doc))
results <- private$kResults
clusterSummaries <- list()
for (i in 1:private$kClusters) {
clusterSummaries[[i]] <- private$composeClusterSummary(i, min.words.per.doc)
}
private$summary <- clusterSummaries
private$summaryCompleted <- TRUE
return(private$summary)
},
#composeClusterSummary
composeClusterSummary = function(clusterNumber, min.words.per.doc = NULL) {
stopifnot(!is.null(min.words.per.doc) || !is.numeric(min.words.per.doc))
results <- private$kResults
dtm <- as.matrix(private$dtm_sparse)
rownames(dtm) <- 1:nrow(dtm)
#order of operations important here!
termList <- names(which(results$cluster == clusterNumber))
termFreqTable <- as.data.frame(private$getMostFreqTerms(dtm, termList))
min <- min.words.per.doc
if (is.null(min)) min <- 0
relevant.docs <- which(rowSums(as.matrix(dtm[,termList]) > 0) > min)
docList <- rowSums(as.matrix(dtm[relevant.docs,termList])>0)
docList <- as.integer(names(which(docList[order(docList, decreasing=TRUE)] > 0)))
loginfo("--- *** Cluster %d: %s Documents, %d Terms", clusterNumber, comma_format()(length(docList)), length(termFreqTable$word))
return(
list(
"docList" = docList,
"termList" = termFreqTable
)
)
}
)
)
# ======== File: /corpusSummary.R | Repo: mikesafar/signal-boost | License: none | Language: R | Size: 9,808 bytes ========
#######################################################################
# Mike Safar - CorpusSummary
# ----------------------------------------------
# Copyright (C) 2018. All Rights Reserved.
########################################################################
library(R6)
library(logging)
library(tm)
library(scales)
library(dplyr)
CorpusSummary <- R6Class("CorpusSummary",
  public = list(
#constructor
initialize = function(corpus,
pre.stemmed.corpus = NULL,
weight.terms = T,
sparse.maximal = 0.5,
                          method = "euclidean",
k.clusters = 5,
k.rounds = 20,
min.words.per.doc = NULL,
...) {
private$corpus <- corpus
private$weight_DTM <- weight.terms
private$method <- method
private$pre_stemmed_corpus <- pre.stemmed.corpus
private$sparse_maximal <- sparse.maximal
private$kClusters <- k.clusters
private$kRounds <- k.rounds
private$min_words_per_doc <- min.words.per.doc
      # Not sure how to handle this, but it seems like I have to ensure it's there
addHandler(writeToConsole)
private$processCorpus(corpus, ...)
},
#getCorpus
getCorpus = function() {private$corpus},
#getDtm
getDTM = function(sparse = TRUE) {
if (sparse)
private$dtm_sparse
else
private$dtm
},
#getMostFrequentTerms
getMostFrequentTerms = function(sparse = TRUE) {
private$getMostFreqTerms(self$getDTM(sparse))
},
#getKmeansResults
getKmeansResults = function() {private$kResults},
#getDist
getDist = function(completions = FALSE) {
dist <- private$dist
if (completions & !is.null(private$stem_completions)) {
ct <- private$stem_completions
labels <- attr(dist, "Labels")
new.labels <- as.vector(ct[ct$stem %in% labels,]$completion)
attr(dist, "Labels") <- new.labels
}
return(dist)
},
#getSummary
getSummary = function(min.words.per.doc = NULL) {
private$summary
},
getStemCompletionTable = function() {private$stem_completions},
getTermsFromDoc = function(doc.number) {
      if (is.null(doc.number) || !is.numeric(doc.number) || length(doc.number) == 0 || is.na(doc.number) || doc.number <= 0)
return(NULL)
dtm <- self$getDTM(sparse = FALSE)
#assumes tfidf
pruned.dtm <- dtm[doc.number,as.vector(dtm[doc.number,] > 0)]
weighted.terms <- private$getMostFreqTerms(pruned.dtm)
if (!is.null(private$stem_completions)) {
weighted.terms <- merge(weighted.terms, private$stem_completions, by.x = "word", by.y = "stem")
rownames(weighted.terms) <- weighted.terms$word
weighted.terms$word <- weighted.terms$completion
weighted.terms$completion <- NULL
} else {
rownames(weighted.terms) <- weighted.terms$word
}
weighted.terms <- weighted.terms[order(-weighted.terms$freq),]
return(weighted.terms)
}
),
  private = list(
#summary
corpus = "SimpleCorpus",
pre_stemmed_corpus = "SimpleCorpus",
dtm = "DocumentTermMatrix",
dtm_sparse = "DocumentTermMatrix",
dist = "dist",
method = "character",
weight_DTM = "logical",
word_frequencies = NULL,
stem_completions = NULL,
sparse_maximal = "numeric",
kClusters = "numeric",
kRounds = "numeric",
kResults = "kmeans",
min_words_per_doc = "numeric",
summary = "list",
summaryCompleted = FALSE,
#processCorpus
processCorpus = function(corpus, ...) {
      loginfo("PROCESSING CORPUS WITH DOCUMENTS: %d", length(corpus))
loginfo("...creating Matrix...")
private$dtm <- DocumentTermMatrix(corpus, ...)
loginfo("......found %s terms...", comma_format()(length(private$dtm$dimnames$Terms)))
loginfo("...getting stem completions...")
private$stem_completions <- private$getCompletionTable()
loginfo("...weighting the matrix...")
if (private$weight_DTM) {
suppressWarnings(private$dtm <- weightTfIdf(private$dtm))
}
loginfo("...removing sparse terms at maximal of %f...", private$sparse_maximal)
private$dtm_sparse <- removeSparseTerms(private$dtm, private$sparse_maximal)
loginfo("......reduced to %s terms...", comma_format()(length(private$dtm_sparse$dimnames$Terms)))
loginfo("...creating %s distance matrix...", private$method)
private$dist <- dist(t(private$dtm_sparse), private$method)
loginfo("...finding %d kmeans clusters over %d rounds...", private$kClusters, private$kRounds)
private$kResults <- kmeans(private$dist, private$kClusters, private$kRounds)
loginfo("...composing summary...")
private$composeSummary(private$min_words_per_doc)
loginfo("...DONE processing.")
},
#getMostFreqTerms
getMostFreqTerms = function(dtm = NULL, wordList = NULL) {
if (is.null(dtm))
m <- private$dtm_sparse
else
m <- dtm
ft.words <- colnames(m)
ft <- data.frame(word=colnames(m), freq=col_sums(m), row.names=ft.words, stringsAsFactors = F)
if (!is.null(wordList) && length(wordList) > 0) {
ft <- ft[which(ft$word %in% wordList),]
}
if (!is.null(private$stem_completions)) {
ft <- private$getCompletedWords(ft)
}
ft <- ft[order(ft$freq, decreasing=TRUE),]
return(ft)
},
#getCompletedWords -- WILL RETURN FREQUENCIES FROM PRE-STEMMED CORPUS
getCompletedWords = function(stems) {
completions = private$stem_completions
stopifnot(!is.null(completions))
cw <- merge(x=stems, y=completions, by.x="word", by.y="stem")
rownames(cw) <- cw$word
cw$word <- cw$completion
cw$completion <- NULL
return(cw)
},
#getCompletionTable
getCompletionTable = function(corpus = NULL) {
c <- NULL
if (is.null(corpus))
c <- private$pre_stemmed_corpus
else
c <- corpus
if (!is.null(c)) {
originals <- DocumentTermMatrix(c)
completion <- col_sums(originals) %>% sort(decreasing=T)
stem <- stemDocument(names(completion), language = "en")
completionTable <- as.data.frame(cbind(stem, names(completion)), row.names=stem, stringsAsFactors = T)
colnames(completionTable) <- c("stem", "completion")
completionTable <- completionTable[!duplicated(completionTable$stem),]
return(completionTable)
}
return(NULL)
},
#composeSummary
composeSummary = function(min.words.per.doc = NULL) {
      stopifnot(is.null(min.words.per.doc) || is.numeric(min.words.per.doc))
results <- private$kResults
clusterSummaries <- list()
for (i in 1:private$kClusters) {
clusterSummaries[[i]] <- private$composeClusterSummary(i, min.words.per.doc)
}
private$summary <- clusterSummaries
private$summaryCompleted <- TRUE
return(private$summary)
},
#composeClusterSummary
composeClusterSummary = function(clusterNumber, min.words.per.doc = NULL) {
      stopifnot(is.null(min.words.per.doc) || is.numeric(min.words.per.doc))
results <- private$kResults
dtm <- as.matrix(private$dtm_sparse)
rownames(dtm) <- 1:nrow(dtm)
#order of operations important here!
termList <- names(which(results$cluster == clusterNumber))
termFreqTable <- as.data.frame(private$getMostFreqTerms(dtm, termList))
min <- min.words.per.doc
if (is.null(min)) min <- 0
relevant.docs <- which(rowSums(as.matrix(dtm[,termList]) > 0) > min)
docList <- rowSums(as.matrix(dtm[relevant.docs,termList])>0)
docList <- as.integer(names(which(docList[order(docList, decreasing=TRUE)] > 0)))
loginfo("--- *** Cluster %d: %s Documents, %d Terms", clusterNumber, comma_format()(length(docList)), length(termFreqTable$word))
return(
list(
"docList" = docList,
"termList" = termFreqTable
)
)
}
)
)
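For reference, the steps that `processCorpus()` chains together can be exercised standalone. A minimal sketch on a toy corpus (the documents and parameters here are illustrative, not from the repo):

```r
library(tm)    # DocumentTermMatrix, weightTfIdf, removeSparseTerms

docs <- c("cats chase mice", "dogs chase cats", "mice eat cheese",
          "stocks fell sharply", "markets fell hard", "stocks and markets rose")
corp <- VCorpus(VectorSource(docs))

dtm <- DocumentTermMatrix(corp)          # documents in rows, terms in columns
dtm <- weightTfIdf(dtm)                  # down-weight terms common to all docs
dtm <- removeSparseTerms(dtm, 0.9)       # drop terms missing from >90% of docs

d  <- dist(t(as.matrix(dtm)), method = "euclidean")  # term-term distances
km <- kmeans(as.matrix(d), centers = 2, nstart = 5)  # cluster the terms
split(names(km$cluster), km$cluster)     # inspect which terms group together
```

In the class, `kmeans()` is called on the `dist` object directly; `stats::kmeans` coerces its input with `as.matrix()`, so the two forms are equivalent.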
# ======== File: /NNSCRIP.R | Repo: stuarstuar/modelospredictivos2 | License: none | Language: R | Size: 1,402 bytes ========
library(ISLR)
library(tidyverse)
library(caret)
library(keras)
library(neuralnet)
library(Hmisc)
data = read.csv("Datos.csv") %>%
filter(G3 != 0)
data_nn= data %>%
  dplyr::select(c("G2","G1","age","studytime","failures",
"G3","traveltime","absences"))#,"Medu",
#"Fedu","famrel","freetime","goout"
#,"Dalc","Walc","health"))
# NOTE: test_data (a held-out split with the same columns) is assumed to be defined upstream
test_data_nn = test_data %>%
  dplyr::select(c("G2","G1","age","studytime","failures",
"traveltime","absences"))#,"Medu",
#"Fedu","famrel","freetime","goout"
#,"Dalc","Walc","health"))
x = data.frame(data_nn) %>%
dplyr::select(-c("G3"))
y = data.frame(data_nn) %>%
dplyr::select(c("G3"))
mean_x = apply(x, 2, mean)
mean_y = apply(y, 2, mean)
sd_x = apply(x, 2, sd)
sd_y = apply(y, 2, sd)
x_scaled = scale(x, center = mean_x, scale = sd_x) %>%
data.frame()
y_scaled = scale(y, center = mean_y, scale = sd_y) %>%
data.frame()
data_scaled = cbind(x_scaled, y_scaled)
#test_data
set.seed(123)
nn=neuralnet(G3 ~ .,data=data_scaled, hidden=10,act.fct = "logistic",
linear.output = TRUE,stepmax=10^5,threshold = 0.01)
Predict=compute(nn,x_scaled)
pp = Predict$net.result*sd_y+mean_y
nn_vs = data.frame("Real" = y,"NN"= pp)
# rmse2() and err_por() are user-defined error metrics assumed to be defined elsewhere
print(rmse2(nn_vs$G3,nn_vs$NN))
err_por(nn_vs$G3,nn_vs$NN) # the "Real" column keeps y's original name, G3
plot(nn)
featurePlot(x=x,y=y, plot="box")
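The prediction un-scaling above (`Predict$net.result*sd_y+mean_y`) simply inverts the standardization done with `scale()`. A quick self-contained check with made-up numbers:

```r
y  <- c(10, 12, 15, 20)
mu <- mean(y)
s  <- sd(y)
y_scaled <- as.vector(scale(y, center = mu, scale = s))  # (y - mu) / s
stopifnot(all.equal(y_scaled * s + mu, y))               # inverse recovers y
```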
% ======== File: /man/antsImageIteratorIsAtEnd.Rd | Repo: alainlompo/ANTsR | License: permissive | Language: R (Rd, roxygen-generated) | Size: 525 bytes ========
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/antsImageIterator_class.R
\name{antsImageIteratorIsAtEnd}
\alias{antsImageIteratorIsAtEnd}
\title{antsImageIteratorIsAtEnd}
\usage{
antsImageIteratorIsAtEnd(x)
}
\arguments{
\item{x}{antsImageIterator}
}
\value{
boolean indicating position
}
\description{
test if iterator is at end of data
}
\examples{
img <- makeImage(c(5,5), rnorm(25))
it <- antsImageIterator( img )
it <- antsImageIteratorNext( it )
flag <- antsImageIteratorIsAtEnd( it )
}
# ======== File: /Figure1.R | Repo: singhal/hz_metaanalysis | License: none | Language: R | Size: 3,225 bytes ========
library(plotrix)
library(cowplot)
library(gridGraphics)
space = seq(0, 1, 0.1)
star = 0.8 * space + 0.1
circle = -0.8 * space + 0.9
d1 = data.frame(space = c(space, space),
fitness = c(star, circle),
type = c(rep("star", length(star)),
rep("circle", length(circle))))
pts1 = data.frame(x = c(-0.075, -0.075),
y = c(star[1], circle[1]),
type = c("star", "circle"))
star = rep(0.75, length(space))
circle = rep(0.25, length(space))
d2 = data.frame(space = c(space, space),
fitness = c(star, circle),
type = c(rep("star", length(star)),
rep("circle", length(circle))))
pts2 = data.frame(x = c(-0.12, -0.025, -0.075, -0.075),
y = c(star[1], star[1], circle[1], circle[1]),
type = c("star", "circle", "star", "circle"))
plot_graph <- function(dd, ptsxx, xlabtxt, ylabtxt) {
ggplot(dd, aes(space, fitness)) + xlim(-0.15, 1) +
ylim(0, 1) + geom_line(aes(color = type)) +
xlab(xlabtxt) + ylab(ylabtxt) +
scale_color_manual(values=c("black", "black")) +
theme(axis.text.x=element_blank(),
axis.ticks.x=element_blank(),
axis.text.y=element_blank(),
axis.ticks.y=element_blank()) +
geom_point(data = ptsxx,
mapping = aes(x = x, y = y,
shape = type, color = type, cex = 0.3)) +
scale_shape_manual(values=c(8, 1)) +
theme(legend.position="none") +
theme(plot.margin = unit(c(0.1, 0.5, 0.1, 0.95), units = "cm"))
}
b = plot_graph(d1, pts1, "", "Fitness")
c = plot_graph(d2, pts2, "", "Fitness")
c = c + draw_label(expression(paste(sigma ^ o, "= ", sigma ^"*")), 0.9, 0.05)
d = plot_graph(d1, pts1, "Space", expression("Settlement \nprobability"))
pts = data.frame(x = runif(50, 0.05, 0.95),
y = runif(50, 0.05, 0.95),
type = rep(NA, 50))
b1 = 0.37
b2 = 0.63
pts[pts$x < b1, "type"] = 8
pts[pts$x > b2, "type"] = 1
for (i in 1:nrow(pts)) {
if (is.na(pts[i, "type"])) {
rand = runif(1, 0, 1)
if (rand < 0.25) {
pts[i, "type"] = 1
} else if (rand > 0.75) {
pts[i, "type"] = 8
} else {
pts[i, "type"] = 20
}
}
}
dev.new()
par(xpd = NA, bg = "transparent", mar = c(0, 3, 0, 0))
plot(NULL, xlim=c(0, 1), ylim=c(0, 1), axes = F, xlab = "", ylab = "")
gradient.rect(0, 0, 1, 1,
col = gray.colors(100, start = 0.1, end = 0.9, alpha = 0.9),
border = NA)
lines.default(x = c(b1, b1), y = c(0, 1), lty = 2, lwd = 2)
lines.default(x = c(b2, b2), y = c(0, 1), lty = 2, lwd = 2)
points(pts[pts$type != 20, ]$x,
pts[pts$type != 20, ]$y,
pch = pts[pts$type != 20, ]$type,
cex = 1)
points(pts[pts$type == 20, ]$x,
pts[pts$type == 20, ]$y,
pch = 8,
cex = 1)
points(pts[pts$type == 20, ]$x,
pts[pts$type == 20, ]$y,
pch = 1,
cex = 1)
a <- recordPlot()
dev.off()
xx = plot_grid(a, b, c, d, nrow = 4, labels = c("A", "B", "C", "D"),
rel_heights = c(1.5, 1, 1, 1))
save_plot("~/Desktop/figure1.pdf", xx, nrow=4, base_height = 1.8, base_width = 4)
# ======== File: /tests/testthat.R | Repo: pmarjora/sluRm | License: permissive | Language: R | Size: 87 bytes ========
library(testthat)
suppressPackageStartupMessages(library(sluRm))
test_check("sluRm")
# ======== File: /R/body.R | Repo: dickoa/crul | License: permissive | Language: R | Size: 2,217 bytes ========
make_type <- function(x) {
if (is.null(x)) {
return()
}
if (substr(x, 1, 1) == ".") {
x <- mime::guess_type(x, empty = NULL)
}
list(`Content-Type` = x)
}
# adapted from https://github.com/hadley/httr
raw_body <- function(body, type = NULL) {
if (is.character(body)) {
body <- charToRaw(paste(body, collapse = "\n"))
}
stopifnot(is.raw(body))
list(
opts = list(
post = TRUE,
postfieldsize = length(body),
postfields = body
),
type = make_type(type %||% "")
)
}
# adapted from https://github.com/hadley/httr
prep_body <- function(body, encode, type = NULL) {
if (identical(body, FALSE)) {
return(list(opts = list(post = TRUE, nobody = TRUE)))
}
if (is.character(body) || is.raw(body)) {
return(raw_body(body, type = type))
}
if (inherits(body, "form_file")) {
con <- file(body$path, "rb")
size <- file.info(body$path)$size
return(
list(
opts = list(
post = TRUE,
readfunction = function(nbytes, ...) {
if (is.null(con)) return(raw())
bin <- readBin(con, "raw", nbytes)
if (length(bin) < nbytes) {
close(con)
con <<- NULL
}
bin
},
postfieldsize_large = size
),
type = make_type(body$type)
)
)
}
if (is.null(body)) {
return(raw_body(raw()))
}
if (!is.list(body)) {
stop("Unknown type of `body`: must be NULL, FALSE, character, raw or list",
call. = FALSE)
}
body <- ccp(body)
if (!encode %in% c('raw', 'form', 'json', 'multipart')) {
stop("encode must be one of raw, form, json, or multipart", call. = FALSE)
}
if (encode == "raw") {
raw_body(body)
} else if (encode == "form") {
raw_body(make_query(body), "application/x-www-form-urlencoded")
} else if (encode == "json") {
raw_body(jsonlite::toJSON(body, auto_unbox = TRUE), "application/json")
} else if (encode == "multipart") {
if (!all(has_name(body))) {
stop("All components of body must be named", call. = FALSE)
}
list(
opts = list(
post = TRUE
),
fields = lapply(body, as.character)
)
}
}
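A simplified, self-contained sketch of the `encode` dispatch above, stripped of the curl-option plumbing (the helper name and the exact form-encoding here are illustrative; the real `make_query()` lives elsewhere in the package):

```r
library(jsonlite)

encode_body <- function(body, encode = c("raw", "form", "json")) {
  encode <- match.arg(encode)
  switch(encode,
    raw  = charToRaw(paste(body, collapse = "\n")),
    form = charToRaw(paste(
      sprintf("%s=%s", names(body),
              vapply(body, function(v) utils::URLencode(as.character(v),
                                                        reserved = TRUE), "")),
      collapse = "&")),
    json = charToRaw(jsonlite::toJSON(body, auto_unbox = TRUE))
  )
}

rawToChar(encode_body(list(a = 1, b = "x y"), "form"))  # "a=1&b=x%20y"
rawToChar(encode_body(list(a = 1, b = "x"), "json"))    # '{"a":1,"b":"x"}'
```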
# ======== File: /scQTL.R | Repo: yelabucsf/sceQTL | License: none | Language: R | Size: 1,903 bytes ========
library(lme4)
library(data.table)
library(ggplot2) # needed for the density plot at the end
cd52.expr=read.table('cd52.txt')
cd52.covs=fread('cd52.covs.txt', sep=',',header=T)
#lets put all of our variables in here
df=data.frame(cd52.covs, cd52=cd52.expr)
#just get the monocytes for now, and just SLE
df.use=df[intersect(grep('CD14', df$ct_cov), which(df$disease_cov=='sle')), ]
genos=read.table('cd52.geno.txt')
inds=sapply(strsplit(colnames(genos), 'X'), '[',2)
genos=as.numeric(genos)
df.use$geno=genos[match(df.use$ind_cov, inds)]
#lm is working.. thats a good sign! but it probably works tooooo well because we're not accounting for effects w/in an individual
print('Linear Model')
print(coef(summary(lm(V1 ~ geno , data=df.use))))
print('Linear Mixed Model')
print(coef(summary(lmer(V1 ~ geno + (1|ind_cov), data=df.use))))
#try a number of group sizes for pseudobulk to see how t value changes this needs the raw counts.. maybe try this later
#how many cells do you need??
cells=seq(1000, nrow(df.use), by=1000)
tvals=c()
for(c in cells){
print(c)
	t=coef(summary(lmer(V1 ~ geno + (1|ind_cov), data=df.use[sample(nrow(df.use), c), ])))[2, 3]
tvals=c(tvals, t)
}
pdf('tvals.cells.pdf')
plot(cells, tvals)
abline(h=16.5754120606288, col='red')
dev.off()
#sample cells to get an estimate of the std error of the variance or t value estimate)
#if you seq'd 40,000 cells
n.iter=100
tvals.50k=c()
for(i in 1:n.iter){
if(i %%10==0){print(i)}
	t=coef(summary(lmer(V1 ~ geno + (1|ind_cov), data=df.use[sample(nrow(df.use), 50000),])))[2, 3]
tvals.50k=c(tvals.50k, t)
}
n.iter=100
tvals.80k=c()
for(i in 1:n.iter){
if(i %%10==0){print(i)}
	t=coef(summary(lmer(V1 ~ geno + (1|ind_cov), df.use[sample(nrow(df.use), 80000),])))[2, 3]
tvals.80k=c(tvals.80k, t)
}
df=data.frame(vals=c(tvals.50k, tvals.80k), cells=c(rep('50k', 100), rep('80k', 100)))
pdf('dist.cells.pdf')
ggplot(df, aes(vals, fill=cells))+ geom_density(alpha=0.5)
dev.off()
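The lm-vs-lmer contrast above can be illustrated on simulated data (toy numbers, not the study data): when cells from the same individual share a random intercept, the naive `lm()` overstates certainty relative to the mixed model.

```r
library(lme4)

set.seed(1)
n_ind <- 20; n_cells <- 200
ind  <- factor(rep(seq_len(n_ind), each = n_cells))
geno <- rep(sample(0:2, n_ind, replace = TRUE), each = n_cells)  # one genotype per individual
u    <- rnorm(n_ind, sd = 1)[as.integer(ind)]                    # individual-level random intercept
expr <- 0.3 * geno + u + rnorm(n_ind * n_cells)

t_lm   <- coef(summary(lm(expr ~ geno)))[2, 3]               # treats cells as independent
t_lmer <- coef(summary(lmer(expr ~ geno + (1 | ind))))[2, 3] # models within-individual correlation
c(lm = t_lm, lmer = t_lmer)   # the lm t-statistic is typically much larger
```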
# ======== File: /R/getval.R | Repo: cran/Chaos01 | License: none | Language: R | Size: 1,770 bytes ========
getVal <- function(x, vars = "both"){
#' Get the vector of Kc/c values from the chaos01.res object.
#'
#' This function allows easy extraction of Kc/c values from the chaos01.res object.
#' @param x the object of "chaos01.res" class, produced by testChaos01 function when parameter out = "TRUE". Subset the output of the function to get the results for the concrete c. See the example.
  #' @param vars list/vector defining which values should be returned.
#' \itemize{
#' \item "both" - both variables "Kc" and "c" will be returned in data.frame
#' \item "Kc" - vector of "Kc" values will be returned
#' \item "c" - vector of "c" values will be returned
#' }
#' Default is "both").
#' @keywords results test chaos
#' @export
#' @seealso \code{\link{testChaos01}}
#' @examples
#' vec.x <- gen.logistic(mu = 3.55, iter = 2000)
#'
#' #Kc for each value of c
#' res2 <- testChaos01(vec.x, out = TRUE)
#'
#' results <- getVal(res2, vars = "both")
#' print(head(results))
#'
#' #Get results of 0-1 test for Chaos when out = TRUE
#' K <- median(getVal(res2, vars = "Kc"))
#' @references
#' Gottwald G.A. and Melbourne I. (2004) On the implementation of the 0-1 Test for Chaos, SIAM J. Appl. Dyn. Syst., 8(1), 129–145.
#' @return
#' Vector of Kc or c values, or data.frame including both vectors if vars = "both".
  if (!inherits(x, "chaos01.res")) {
stop("Input variable is not of class 'chaos01.res' (list of results of class 'chaos01').")
}
switch(vars,
"Kc" = return(sapply(x, function(y)y$Kc)),
"c" = return(sapply(x, function(y)y$c)),
"both" = {
kc <- sapply(x, function(y)y$Kc)
c <- sapply(x, function(y)y$c)
return(data.frame(c = c, kc = kc))
}
)
}
# ================================================================
path <- "~/Desktop/PhD/GitKraken/gmse_fork_RQ1/batch6-perfObs/"
setwd(dir = path)
# get the directory content
content <- dir()
# order alphabetically
content <- content[order(content)]
#### sim results ####
# initialize a table with the UT0
at0 <- grep(pattern = c("UT0pt0-"), x = content, fixed = T, value = T)
at0.tab <- grep(pattern = "flw", x = at0, fixed = T, value = T, invert = T)
tab <- read.csv(file = paste(path,at0.tab, sep = ""), sep = "\t", header = F)
tab.names <- c("rep", "budget", "at", "bb", "extinct", "act_dev", "abs_act_dev", "fin_yield", "max_diff_yield", "inac_ts", "overK")
colnames(tab) <- tab.names
# remove them from content
from0pt1 <- content[-grep(pattern = c("UT0pt0-"), x = content, fixed = T, value = F)]
# select the sim results only
from0pt1.sim <- grep(pattern = c("flw_"), x = from0pt1, fixed = T, value = T, invert = T)
# loop over the from0pt1.sim and rbind to tab
for (i in 1:length(from0pt1.sim)) {
zz <- read.csv(file = paste(path,from0pt1.sim[i], sep = ""), sep = "\t", header = F)
colnames(zz) <- tab.names
tab <- rbind(tab, zz)
}
# export table
write.csv(tab, file = paste(path, "ATI-sim-merged-results.csv", sep = ""))
#### follow up over population ####
# initialize a table with the AT0 and AT0.1
at0.pop <- grep(pattern = "pop", x = at0, fixed = T, value = T)
pop <- read.csv(file = paste(path,at0.pop, sep = ""), sep = "\t", header = F)#[,-1]
t <- rep(NA, 20)
for (i in 1:20) {t[i] <- paste("t",i, sep = "")}
pop.names <- c("budget", "UT", "BB", "Extinct", "rep", "target", "popInit", t)
colnames(pop) <- pop.names
# select the sim results only
from0pt1.pop <- grep(pattern = c("flw_pop"), x = from0pt1, fixed = T, value = T, invert = F)
# loop over the from0pt1.sim and rbind to pop
for (i in 1:length(from0pt1.pop)) {
zz <- read.csv(file = paste(path,from0pt1.pop[i], sep = ""), sep = "\t", header = F)
colnames(zz) <- pop.names
pop <- rbind(pop, zz)
}
# export table
write.csv(pop, file = paste(path, "pop-ATI-res-merged.csv", sep = ""))
#### follow up over costs ####
# initialize a table with the AT0 and AT0.1
at0.cos <- grep(pattern = "cos", x = at0, fixed = T, value = T)
cos <- read.csv(file = paste(path,at0.cos, sep = ""), sep = "\t", header = F)#[,-1]
colnames(cos) <- pop.names
# select the sim results only
from0pt1.cos <- grep(pattern = c("flw_cos"), x = from0pt1, fixed = T, value = T, invert = F)
# loop over the from0pt1.sim and rbind to cos
for (i in 1:length(from0pt1.cos)) {
zz <- read.csv(file = paste(path,from0pt1.cos[i], sep = ""), sep = "\t", header = F)
colnames(zz) <- pop.names
cos <- rbind(cos, zz)
}
# export table
write.csv(cos, file = paste(path, "cos-ATI-res-merged.csv", sep = ""))
#### follow up over actions ####
# initialize a table with the AT0 and AT0.1
at0.act <- grep(pattern = "act", x = at0, fixed = T, value = T)
act <- read.csv(file = paste(path,at0.act, sep = ""), sep = "\t", header = F) #[,-1]
colnames(act) <- pop.names
# select the sim results only
from0pt1.act <- grep(pattern = c("flw_act"), x = from0pt1, fixed = T, value = T, invert = F)
# loop over the from0pt1.sim and rbind to act
for (i in 1:length(from0pt1.act)) {
zz <- read.csv(file = paste(path,from0pt1.act[i], sep = ""), sep = "\t", header = F)
colnames(zz) <- pop.names
act <- rbind(act, zz)
}
# export table
write.csv(act, file = paste(path, "act-ATI-res-merged.csv", sep = ""))
#### follow up over budget ####
# initialize a table with the AT0 and AT0.1
at0.bgt <- grep(pattern = "bgt", x = at0, fixed = T, value = T)
bgt <- read.csv(file = paste(path,at0.bgt, sep = ""), sep = "\t", header = F)#[,-1]
colnames(bgt) <- pop.names
# select the sim results only
from0pt1.bgt <- grep(pattern = c("flw_bgt"), x = from0pt1, fixed = T, value = T, invert = F)
# loop over the from0pt1.sim and rbind to bgt
for (i in 1:length(from0pt1.bgt)) {
zz <- read.csv(file = paste(path,from0pt1.bgt[i], sep = ""), sep = "\t", header = F)
colnames(zz) <- pop.names
bgt <- rbind(bgt, zz)
}
# export table
write.csv(bgt, file = paste("~/Desktop/PhD/GitKraken/gmse_fork_RQ1/", "bgt-ATI-res-merged.csv", sep = ""))
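The list-grep-read-rbind pattern repeated in each section above can be sketched in a self-contained form (temporary files stand in for the batch output; file names are illustrative only):

```r
# Write two tab-separated result files into a scratch directory,
# then discover, read, and row-bind them -- same shape as the loops above.
tmp <- file.path(tempdir(), "merge_demo")
dir.create(tmp, showWarnings = FALSE)
write.table(data.frame(a = 1, b = 2), file.path(tmp, "UT0pt1-res.txt"),
            sep = "\t", row.names = FALSE, col.names = FALSE)
write.table(data.frame(a = 3, b = 4), file.path(tmp, "UT0pt2-res.txt"),
            sep = "\t", row.names = FALSE, col.names = FALSE)

files  <- list.files(tmp, pattern = "^UT0pt", full.names = TRUE)
tabs   <- lapply(files, read.csv, sep = "\t", header = FALSE)
merged <- do.call(rbind, tabs)   # replaces the explicit for/rbind loop
```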
|
/merge-results.R
|
no_license
|
AdrianBach/gmse
|
R
| false
| false
| 4,133
|
r
|
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/function.R
\name{distr}
\alias{distr}
\title{Length distribution}
\usage{
distr(mu, sigma, l, par = data.frame())
}
\arguments{
\item{mu}{mean length for all ages}
\item{sigma}{standard deviation of length for all ages}
\item{l}{lengthgroups}
\item{par}{gadget parameters objects}
}
\value{
a matrix of dimension length(mu) X (length(l)-1)
}
\description{
This is a helper function for the firststep function. It defines the
length distribution for each age group.
}
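A length distribution of this shape can be sketched in base R (a hedged illustration of the documented return dimensions, not the package implementation): for each age, take the probability mass of a normal distribution falling in each length interval.

```r
# Per-age probability mass over length groups, assuming normal lengths.
mu    <- c(10, 20)             # mean length per age
sigma <- c(2, 3)               # sd of length per age
l     <- c(5, 10, 15, 20, 25)  # lengthgroup boundaries
P <- t(sapply(seq_along(mu), function(a)
  diff(pnorm(l, mean = mu[a], sd = sigma[a]))))
dim(P)  # length(mu) x (length(l) - 1)
```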
|
/man/distr.Rd
|
no_license
|
milokmilo/rgadget
|
R
| false
| false
| 558
|
rd
|
#' Bayes Factor Calculation Scheme for META prior
#'
#' A function that calculates the Bayes factor for each data pair on each grid point,
#' in log scale.
#'
#' @param data A dataset constructed from pairs of coefficient
#'    values \eqn{ \beta } and standard errors \eqn{ se(\beta)}.
#' @param hyperparam A two-column matrix denoting all the grid points,
#'    namely, \eqn{\phi} x \eqn{\omega}.
#' @param bf.only A boolean denoting whether this function is called to calculate
#'    the Bayes factor for the META prior only. Usually used when publication bias is the target issue.
#'
#' @return A vector recording all the log-scale Bayes factor values, or a vector recording the log-scale
#'    Bayes factors for the null, reproducible and irreproducible models (when bf.only=TRUE).
#'
#' @export
bf.cal.meta<-function(data,hyperparam=NULL,bf.only=FALSE){
if (bf.only==TRUE){
if (is.null(hyperparam)){
rcd_phi<-c()
for (k in 1:(ncol(data)/2)){
rcd_phi<-rbind(rcd_phi,data[,2*(k-1)+1]^2+data[,2*(k-1)+2]^2)
}
phi_min<-sqrt(mean(rcd_phi))
phi_max<-sqrt(quantile(rcd_phi,0.99))
philist<-phi_max
med<-phi_max
#### start from maxphi and go down
while (med>phi_min){
med<-med/sqrt(2)
philist<-c(philist,med)
}
phi<-philist^2
rep=c(0,6e-3,0.024)
irre<-c(0.500,0.655,0.795)
r<-c(rep,irre)
#### Compute k values
hyperparam<-c()
for (i in 1:length(r)){
kk<-phi*r[i]
oa<-phi*(1-r[i])
param<-cbind(kk,oa)
hyperparam<-rbind(hyperparam,param)
}
}
}
param<-rbind(c(0,0),hyperparam)
n<-nrow(data)
m<-ncol(data)/2
K<-nrow(param)
log10_bf<-rep(0,n*K)
for (i in 1:n){
for (k in 1:K) {
bm=0
vm2=0
sumw=0
oa2<-param[k,2]
phi2<-param[k,1]
for (j in 1:m){
beta<-data[i,2*(j-1)+1]
ds2<-data[i,2*(j-1)+2]^2
w<-1/(ds2+phi2)
bm<-bm+beta*w
sumw<-sumw+w
vm2=vm2+w
log10_bf[(i-1)*K+k]<-log10_bf[(i-1)*K+k]+0.5*log(ds2/(ds2+phi2))+0.5*(beta^2/ds2)*(phi2/(ds2+phi2))
}
bm<-bm/sumw
vm2<-1/vm2
log10_bf[(i-1)*K+k]<-log10_bf[(i-1)*K+k]+0.5*log(vm2/(oa2+vm2))+0.5*(bm^2/vm2)*(oa2/(vm2+oa2))
}
}
if (bf.only==TRUE){
rval<-hyperparam[,1]/(hyperparam[,1]+hyperparam[,2])
bf.null<-log10_bf[1]
bf.rep<-sum(log10_bf[which(rval<0.050)+1])
bf.irr<-sum(log10_bf[which(rval>0.050)+1])
return(c(bf.null,bf.rep,bf.irr))
}
else{
return(log10_bf)}
}
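The inner loop above is an inverse-variance (fixed-effect) pooling across the m studies; the core arithmetic, isolated as a sketch with made-up numbers (the `phi2 = 0` case):

```r
# Inverse-variance pooling of two study estimates (toy numbers).
beta <- c(0.20, 0.30)            # per-study effect estimates
se   <- c(0.10, 0.15)            # their standard errors
w    <- 1 / se^2                 # inverse-variance weights
bm   <- sum(beta * w) / sum(w)   # pooled estimate
vm2  <- 1 / sum(w)               # pooled variance
round(bm, 3)                     # 0.231
```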
|
/R/bf.cal.meta.R
|
no_license
|
cran/INTRIGUE
|
R
| false
| false
| 2,634
|
r
|
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/parameterEstimation.R
\name{Cvb}
\alias{Cvb}
\title{Cvb function}
\usage{
Cvb(xyt, spatial.intensity, N = 100, spatial.covmodel, covpars)
}
\arguments{
\item{xyt}{object of class stppp}
\item{spatial.intensity}{bivariate density estimate of lambda, an object of class im (produced from density.ppp for example)}
\item{N}{number of integration points}
\item{spatial.covmodel}{spatial covariance model}
\item{covpars}{additional covariance parameters}
}
\value{
a function, see below.
Computes a Monte Carlo estimate of the function C(v;beta) in Brix and Diggle 2001 pp 829 (note: a later corrigendum (2003) corrects the expression given in that paper)
}
\description{
This function is used in \code{thetaEst} to estimate the temporal correlation parameter, theta.
}
\references{
\enumerate{
\item Benjamin M. Taylor, Tilman M. Davies, Barry S. Rowlingson, Peter J. Diggle (2013). Journal of Statistical Software, 52(4), 1-40. URL http://www.jstatsoft.org/v52/i04/
\item Brix A, Diggle PJ (2001). Spatiotemporal Prediction for log-Gaussian Cox processes. Journal of the Royal Statistical Society, Series B, 63(4), 823-841.
}
}
\seealso{
\link{thetaEst}
}
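The Monte Carlo estimation scheme this documentation refers to reduces to averaging a function over random integration points; a generic self-contained sketch of that idea (not the package's internal code):

```r
# Monte Carlo estimate of integral_0^1 exp(-x^2) dx (true value ~0.7468).
set.seed(1)
f   <- function(x) exp(-x^2)
N   <- 1e5            # number of integration points
x   <- runif(N)       # uniform draws on [0, 1]
est <- mean(f(x))     # sample mean approximates the integral
```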
|
/man/Cvb.Rd
|
no_license
|
bentaylor1/lgcp
|
R
| false
| false
| 1,256
|
rd
|
# List of sample names to combine each person's sequencing for CAZyme analysis
map <- read.delim(file = "~/Documents/Projects/dietstudy/data/maps/SampleID_map.txt")
map$X.SampleID <- gsub("\\.", "_", map$X.SampleID)
run1names <- read.delim(file = "~/Documents/Projects/dietstudy/data/run1names.txt", header = F, col.names = "Run1_name")
run2names <- read.delim(file = "~/Documents/Projects/dietstudy/data/run2names.txt", header = F, col.names = "Run2_name")
# add a column that contains just the first part of each sample's name
run1names$X.SampleID <- run1names$Run1_name
run1names$X.SampleID <- gsub("_S.*", "", run1names$X.SampleID)
run2names$X.SampleID <- run2names$Run2_name
run2names$X.SampleID <- gsub("_S.*", "", run2names$X.SampleID)
run2names$X.SampleID <- gsub("-", "_", run2names$X.SampleID)
require(dplyr)
all <- full_join(run1names,run2names)
all <- full_join(all, map)
all$Run1_name <- as.character(all$Run1_name)
all$Run2_name <- as.character(all$Run2_name)
# drop lines that contain the word Blank
all <- all[grep("Blank", all$X.SampleID, invert = T),]
# drop the NA's
all <- all[!is.na(all$Run1_name),]
all$Seq_name_used <- ifelse(is.na(all$Run2_name)==TRUE, all$Run1_name, all$Run2_name)
length(all$Seq_name_used)
write.table(all$Seq_name_used, file = "Documents/Projects/dietstudy/data/maps/seq_name_used.txt", sep = "\t", row.names = F, col.names = F, quote = F)
# TODO: export all the names per person for CAZyme analysis
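The suffix-stripping and name-coalescing steps above reduce to a two-liner; a self-contained sketch with made-up sample names:

```r
run1 <- c("ID1_S1", "ID2_S2")
run2 <- c(NA, "ID-2_S9")                 # run 2 resequenced only one sample
ids  <- gsub("_S.*", "", run1)           # strip sequencer suffix -> "ID1" "ID2"
used <- ifelse(is.na(run2), run1, run2)  # prefer the run-2 name when present
```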
|
/lib/processing_scripts/sequencingrunnames.R
|
no_license
|
knights-lab/dietstudy_analyses
|
R
| false
| false
| 1,460
|
r
|
# plot of all 2017 balls and strikes
library(dplyr)
library(tibble)
library(ggplot2)
#pitches <- as_data_frame(readRDS("pitches2017.Rda"))
# Rule book zone: up/down pz's have been normalized to go from
# 1.5 to 3.5. Width of baseball is 0.245 feet, so we add 1/2 of
# a baseball's width to each edge. Width of plate is 17 inches.
# (17/12)/2+0.245/2 = 0.8308333
rbzoneX <- c(-0.8308333, 0.8308333, 0.8308333, -0.8308333)
rbzoneY <- c(1.3775, 1.3775, 3.6225, 3.6225)
rbpoly <- data.frame(x=rbzoneX, y=rbzoneY)
calledPitches <- pitches[pitches$des=="Ball" |
pitches$des=="Ball In Dirt" |
pitches$des=="Called Strike", c("px","pz","des","stand")]
calledPitches <- calledPitches[!is.na(calledPitches[,1]),]
npitch <- nrow(calledPitches)
balls <- calledPitches[calledPitches$des=="Ball" | calledPitches$des=="Ball In Dirt",
c("px", "pz", "stand")]
strikes <- calledPitches[calledPitches$des=="Called Strike", c("px", "pz", "stand")]
stk <- list(L=data_frame(), R=data_frame())
bll <- list(L=data_frame(), R=data_frame())
for(s in c("L", "R")) {
stk[[s]] <- strikes[strikes$stand==s,c("px","pz")]
bll[[s]] <- balls[balls$stand==s,c("px","pz")]
}
strikePlot <- list(L=list(), R=list())
ballPlot <- list(L=list(), R=list())
s <- "L"
strikePlot[[s]] <- ggplot() +
geom_point(data=stk[[s]], aes(x=px,y=pz), alpha=0.06, color="red", size=1, shape=20) +
geom_polygon(data=rbpoly, aes(x=x, y=y), color="black", fill=NA, linetype="solid") +
coord_fixed(xlim=c(-1.5,1.5), ylim=c(1.0,4.0)) +
theme_bw() + theme(axis.title.x=element_blank(),axis.title.y=element_blank()) +
ggtitle("Called Strikes, 2017", subtitle="vs. left-handed batters")
ballPlot[[s]] <- ggplot() +
geom_point(data=bll[[s]], aes(x=px,y=pz), alpha=0.06, color="blue", size=1, shape=20) +
geom_polygon(data=rbpoly, aes(x=x, y=y), color="black", fill=NA, linetype="solid") +
coord_fixed(xlim=c(-1.5,1.5), ylim=c(1.0,4.0)) +
theme_bw() + theme(axis.title.x=element_blank(),axis.title.y=element_blank()) +
ggtitle("Called Balls, 2017", subtitle="vs. left-handed batters")
s <- "R"
strikePlot[[s]] <- ggplot() +
geom_point(data=stk[[s]], aes(x=px,y=pz), alpha=0.06, color="red", size=1, shape=20) +
geom_polygon(data=rbpoly, aes(x=x, y=y), color="black", fill=NA, linetype="solid") +
coord_fixed(xlim=c(-1.5,1.5), ylim=c(1.0,4.0)) +
theme_bw() + theme(axis.title.x=element_blank(),axis.title.y=element_blank()) +
ggtitle(" ", subtitle="vs. right-handed batters")
ballPlot[[s]] <- ggplot() +
geom_point(data=bll[[s]], aes(x=px,y=pz), alpha=0.06, color="blue", size=1, shape=20) +
geom_polygon(data=rbpoly, aes(x=x, y=y), color="black", fill=NA, linetype="solid") +
coord_fixed(xlim=c(-1.5,1.5), ylim=c(1.0,4.0)) +
theme_bw() + theme(axis.title.x=element_blank(),axis.title.y=element_blank()) +
ggtitle(" ", subtitle="vs. right-handed batters")
require(gridExtra)
bscloud <- grid.arrange(strikePlot$L, strikePlot$R, ballPlot$L, ballPlot$R, ncol=4)
ggsave("figures/ball_strike_cloud.pdf", plot = bscloud, width = 10, height = 3.3, dpi = 300)
# to reduce size:
# pdf2ps ball_strike_cloud.pdf ball_strike_cloud.eps
# ps2pdf -dPDFSETTINGS=/printer ball_strike_cloud.eps ball_strike_cloud_printer.pdf
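The rule-book half-width quoted in the comment block can be checked directly:

```r
# Half the 17-inch plate (converted to feet) plus half a
# baseball's 0.245 ft diameter gives the zone half-width.
half_plate <- (17 / 12) / 2           # 0.7083333 ft
half_ball  <- 0.245 / 2               # 0.1225 ft
half_width <- half_plate + half_ball  # 0.8308333 -> |rbzoneX|
```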
|
/figures/ball_strike_cloud.R
|
no_license
|
djhunter/inconsistency
|
R
| false
| false
| 3,485
|
r
|
raw.data = read.csv(file.choose())
Vigilance.Calc.Block.Function = function(dataset,binsize){
acc.start = 15
omission.start = 65
total.bins = 50 / binsize
new.data = as.data.frame(matrix(nrow=nrow(dataset),ncol=(total.bins * 2)))
bin.num = 1
for(a in 1:total.bins){
acc.col = a
colnames(new.data)[acc.col] = paste("Accuracy.Block",a,sep=".")
om.col = a + total.bins
colnames(new.data)[om.col] = paste("Omission.Block",a,sep=".")
for(b in 1:nrow(dataset)){
acc.list = as.vector(dataset[b,c(acc.start:(acc.start + binsize - 1))])
om.list = as.vector(dataset[b,c(omission.start:(omission.start + binsize - 1))])
acc.bin = 0
om.bin = 0
na.count = 0
if(a == 1){
start.acc = 0
start.om = 0
}else{
start.acc = as.numeric(dataset[b,(acc.start - 1)])
start.om = as.numeric(dataset[b,(omission.start - 1)])
}
acc.track = c()
om.track = c()
for(c in 1:(binsize)){
curr.acc = as.numeric(acc.list[c])
curr.om = as.numeric(om.list[c])
if(isTRUE(is.na(curr.acc) & is.na(curr.om))){
acc.bin = acc.bin
om.bin = om.bin
na.count = na.count + 1
}
if(isTRUE(((curr.om > start.om) | ((curr.om == 100) & (start.om == 100))))){
om.bin = om.bin + 1
}else if(isTRUE((curr.acc > start.acc) | ((curr.acc == 100) & (start.acc == 100)))){
acc.bin = acc.bin + 1
}
start.acc = curr.acc
start.om = curr.om
}
acc.bin = (acc.bin / (binsize - om.bin)) * 100
om.bin = (om.bin / binsize) * 100
if(na.count == binsize){
acc.bin = NA
om.bin = NA
}
new.data[b,acc.col] = acc.bin
new.data[b,om.col] = om.bin
}
acc.start = (acc.start + binsize)
omission.start = omission.start + binsize
}
final.data = cbind(dataset[ ,c(1:14)],new.data)
return(final.data)
}
vig.data = Vigilance.Calc.Block.Function(raw.data,10)
write.csv(vig.data,'VAChT-KD 5-CSRTT Data.csv',row.names=FALSE)
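The core of the per-trial decoding inside the loop — detecting a correct trial from a rising cumulative percentage — can be sketched in isolation (the `100 == 100` edge case handled above is omitted here):

```r
# Cumulative accuracy after each of 4 trials: wrong, correct, correct, correct.
cumacc  <- c(0, 50, 66.7, 75)
prev    <- c(0, head(cumacc, -1))   # running value before each trial
correct <- cumacc > prev            # TRUE where the trial was correct
sum(correct)                        # 3
```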
|
/Chris Vigilance.R
|
no_license
|
dpalmer9/Weston_QC_Processor
|
R
| false
| false
| 2,117
|
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/xls-find.R
\name{xls_find}
\alias{xls_find}
\title{Find the pair of Excel files from an automatic weather station}
\usage{
xls_find(file.name, verbose = TRUE)
}
\arguments{
\item{file.name}{character vector with paths to Excel files;
in general a list with path to file 2 (for more details see the vignette:
\code{vignette("data-org", package = "rinmetxls")})}
\item{verbose}{logical, should print messages? Default: TRUE}
}
\value{
character vector of file paths; it is expected two file paths
}
\description{
Find the pair of Excel files from an automatic weather station
}
\examples{
\dontrun{
if(interactive()){
p <- system.file("extdata/dvd_xls_files", package = "rinmetxls")
pf <- list.files(
p,
pattern = ".*[[:punct:]]_\\\\.xls",
recursive = TRUE,
full.names = TRUE
)
xls_find(pf[1])
xls_find(pf[2])
xls_find(pf[3])
}
}
}
\concept{file functions}
|
/man/xls_find.Rd
|
permissive
|
lhmet/rinmetxls
|
R
| false
| true
| 958
|
rd
|
\name{runShiny}
\alias{runShiny}
\title{
Run Shiny Web interface
}
\description{
Web interface for univariate, bivariate and multivariate breakeven analysis
}
\usage{
runShiny(...,name="shiny_perctolimit")
}
\arguments{
\item{\dots}{
Arguments to pass to \code{\link{runApp}}
}
\item{name}{
User interface to use: \code{shiny1} or \code{shiny_perctolimit} (default)
}
}
\details{
Starts a webserver in R and a browser session
}
\value{
None. Run for its side effect.
}
\author{
Joseph Guillaume
}
\examples{
data(ranges)
runShiny()
}
|
/man/runShiny.Rd
|
no_license
|
josephguillaume/cost_benefit_breakeven
|
R
| false
| false
| 550
|
rd
|
# Install Requried Packages
install.packages("SnowballC")
install.packages("tm")
install.packages("twitteR")
install.packages("syuzhet")
# Load Requried Packages
library("SnowballC")
library("tm")
library("twitteR")
library("syuzhet")
library(data.table)
data_republicans <- fread("C:/Users/uia91182/Desktop/aba project/datasets/republicans_tweets (1).csv", sep=",", header=T,
strip.white = T, na.strings = c("NA","NaN","","?"))
# install.packages("tidytext", repos = "https://cran.r-project.org")
# install.packages("dplyr", repos = "https://cran.r-project.org")
library(dplyr)
library(tidytext)
text_df <- data_frame(line = 1:38415, text = data_republicans$Content)
head(text_df)
# remove hashtags, URLs and other special characters
tweets.df2 <- gsub("http.*","",text_df$text)
tweets.df2 <- gsub("https.*","",tweets.df2)
tweets.df2 <- gsub("#.*","",tweets.df2)
tweets.df2 <- gsub("@.*","",tweets.df2)
head(tweets.df2)
text = data_republicans$Content
text_df %>%
  unnest_tokens(word, text) # tokenize the "text" column of text_df into one word per row
republicans_with_hcr <- text[grep(pattern = "hcr", text, ignore.case = T)]
getwd()
write.csv(republicans_with_hcr, "republicans_with_hcr.csv")
# grep within each subset (not the full text vector), so indices match the vector being subset
republicans_with_hcrjobs <- republicans_with_hcr[grep(pattern = "jobs", republicans_with_hcr, ignore.case = T)]
write.csv(republicans_with_hcrjobs, "republicans_with_hcrjobs.csv")
republicans_with_obama <- text[grep(pattern = "Obama", text, ignore.case = T)]
write.csv(republicans_with_obama, "republicans_with_obama.csv")
republicans_with_obamacare <- republicans_with_obama[grep(pattern = "care", republicans_with_obama, ignore.case = T)]
write.csv(republicans_with_obamacare, "republicans_with_obamacare.csv")
republicans_with_energy <- text[grep(pattern = "energy", text, ignore.case = T)]
write.csv(republicans_with_energy, "republicans_with_energy.csv")
republicans_with_energytax <- republicans_with_energy[grep(pattern = "tax", republicans_with_energy, ignore.case = T)]
write.csv(republicans_with_energytax, "republicans_with_energytax.csv")
republicans_with_house <- text[grep(pattern = "house", text, ignore.case = T)]
write.csv(republicans_with_house, "republicans_with_house.csv")
republicans_with_energyhouse <- republicans_with_house[grep(pattern = "energy", republicans_with_house, ignore.case = T)]
write.csv(republicans_with_energyhouse, "republicans_with_energyhouse.csv")
republicans_with_energyhousecommerce <- republicans_with_energyhouse[grep(pattern = "commerce", republicans_with_energyhouse, ignore.case = T)]
write.csv(republicans_with_energyhousecommerce, "republicans_with_energyhousecommerce.csv")
republicans_with_energyhousecommercecommittee <- republicans_with_energyhousecommerce[grep(pattern = "committee", republicans_with_energyhousecommerce, ignore.case = T)]
write.csv(republicans_with_energyhousecommercecommittee, "republicans_with_energyhousecommercecommittee.csv")
|
/scripts/wordseperation_republicans.R
|
no_license
|
sakethg/Sentiment-Analysis-on-political-twitter-data
|
R
| false
| false
| 2,836
|
r
|
# Install Required Packages
install.packages("SnowballC")
install.packages("tm")
install.packages("twitteR")
install.packages("syuzhet")
# Load Required Packages
library("SnowballC")
library("tm")
library("twitteR")
library("syuzhet")
library(data.table)
data_republicans <- fread("C:/Users/uia91182/Desktop/aba project/datasets/republicans_tweets (1).csv", sep=",", header=T,
strip.white = T, na.strings = c("NA","NaN","","?"))
# install.packages("tidytext", repos = "https://cran.r-project.org")
# install.packages("dplyr", repos = "https://cran.r-project.org")
library(dplyr)
library(tidytext)
text_df <- data_frame(line = 1:38415, text = data_republicans$Content)
head(text_df)
# remove hashtags, URLs and other special characters
tweets.df2 <- gsub("http.*","",text_df$text)
tweets.df2 <- gsub("https.*","",tweets.df2)
tweets.df2 <- gsub("#.*","",tweets.df2)
tweets.df2 <- gsub("@.*","",tweets.df2)
head(tweets.df2)
text = data_republicans$Content
text_df %>%
  unnest_tokens(word, text) # tokenize the "text" column of text_df into one word per row
republicans_with_hcr <- text[grep(pattern = "hcr", text, ignore.case = T)]
getwd()
write.csv(republicans_with_hcr, "republicans_with_hcr.csv")
# grep within each subset (not the full text vector), so indices match the vector being subset
republicans_with_hcrjobs <- republicans_with_hcr[grep(pattern = "jobs", republicans_with_hcr, ignore.case = T)]
write.csv(republicans_with_hcrjobs, "republicans_with_hcrjobs.csv")
republicans_with_obama <- text[grep(pattern = "Obama", text, ignore.case = T)]
write.csv(republicans_with_obama, "republicans_with_obama.csv")
republicans_with_obamacare <- republicans_with_obama[grep(pattern = "care", republicans_with_obama, ignore.case = T)]
write.csv(republicans_with_obamacare, "republicans_with_obamacare.csv")
republicans_with_energy <- text[grep(pattern = "energy", text, ignore.case = T)]
write.csv(republicans_with_energy, "republicans_with_energy.csv")
republicans_with_energytax <- republicans_with_energy[grep(pattern = "tax", republicans_with_energy, ignore.case = T)]
write.csv(republicans_with_energytax, "republicans_with_energytax.csv")
republicans_with_house <- text[grep(pattern = "house", text, ignore.case = T)]
write.csv(republicans_with_house, "republicans_with_house.csv")
republicans_with_energyhouse <- republicans_with_house[grep(pattern = "energy", republicans_with_house, ignore.case = T)]
write.csv(republicans_with_energyhouse, "republicans_with_energyhouse.csv")
republicans_with_energyhousecommerce <- republicans_with_energyhouse[grep(pattern = "commerce", republicans_with_energyhouse, ignore.case = T)]
write.csv(republicans_with_energyhousecommerce, "republicans_with_energyhousecommerce.csv")
republicans_with_energyhousecommercecommittee <- republicans_with_energyhousecommerce[grep(pattern = "committee", republicans_with_energyhousecommerce, ignore.case = T)]
write.csv(republicans_with_energyhousecommercecommittee, "republicans_with_energyhousecommercecommittee.csv")
|
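The chained `grep` subsets in the script above (e.g. tweets mentioning both "energy" and "tax") can also be collapsed into a single vectorized filter with `grepl`. A standalone sketch on a hypothetical toy vector, not part of the original script:

```r
# Toy stand-in for data_republicans$Content (hypothetical sample tweets)
text <- c("Energy tax vote today", "HCR and jobs debate", "Obama care town hall")

# grepl() returns a logical vector, so multiple keyword conditions
# can be combined with & in one pass instead of nested grep() subsets
energy_tax <- text[grepl("energy", text, ignore.case = TRUE) &
                   grepl("tax",    text, ignore.case = TRUE)]
print(energy_tax)  # "Energy tax vote today"
```

Because `grepl` keeps a logical mask per element, the combined condition is evaluated against the same vector that gets subset, avoiding the index mismatch that chained `grep` calls can introduce.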
#Load library
library(plyr)
url<-"https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip"
file <- "dataset.zip"
baseDir <-"UCI HAR Dataset"
# Download the files if required
if(!file.exists(file)){
download.file(url, file, method="curl")
unzip(file, list = FALSE, overwrite = TRUE)
}
# Load activity and feature mapping tables
activityLabelF <- file.path(baseDir,"activity_labels.txt" )
featureLabelsF <- file.path(baseDir, "features.txt")
activityLabels <-read.table(activityLabelF)
featureLabels <- read.table(featureLabelsF)
# Read the values in the table
# type - train or test data
# name - name of the file
readMain <- function (type, name) {
fName <- file.path(baseDir,type, paste0(name,"_",type,".txt"))
read.table (fName)
}
# Read the tables for training/test data sets
readTable <- function (type) {
dataSet <- list()
dataSet$subject <- readMain(type,"subject")
dataSet$X <- readMain(type,"X")
names(dataSet$X)<-featureLabels$V2
dataSet$Y <- readMain(type,"y")
dataSet
}
#Load training and test data set.
#The training and test data set follow the naming convention
#Leverage the naming convention
trainSet <- readTable("train")
testSet <- readTable("test")
#merge the training and test data set
#Filter the metrics that deal with mean and standard deviation.
#Any metrics with keyword "std" or "mean" will be considered for now
mergeTables <- function (a,b){
trY <- merge (a$Y,activityLabels)
teY <- merge (b$Y,activityLabels)
trX <- a$X[,c(grep("std",x=featureLabels$V2,ignore.case = TRUE),
grep("mean",x=featureLabels$V2,ignore.case = TRUE))]
teX <- b$X[,c(grep("std",x=featureLabels$V2,ignore.case = TRUE),
grep("mean",x=featureLabels$V2,ignore.case = TRUE))]
trX<- cbind (person=a$subject$V1, activity=trY$V2,trX)
teX<- cbind (person=b$subject$V1,activity=teY$V2,teX)
rbind (trX,teX )
}
mT <- mergeTables (trainSet,testSet)
#Split the result data by person and activity and find the mean
result_data <- ddply(mT, .(person, activity), .fun=function(row){ colMeans(row[,-c(1,2)]) })
# Change the headers and write the results in a file
headToBeChanged <- colnames(result_data)[-(1:2)]
newHead <- c("person","activity",paste0("mean-",headToBeChanged))
names(result_data)<-newHead
result_file <- "result.csv"
write.table(result_data,file = result_file,row.names = FALSE)
|
/course-project/run_analysis.R
|
no_license
|
senthil69/datasciencecoursera
|
R
| false
| false
| 2,402
|
r
|
#Load library
library(plyr)
url<-"https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip"
file <- "dataset.zip"
baseDir <-"UCI HAR Dataset"
# Download the files if required
if(!file.exists(file)){
download.file(url, file, method="curl")
unzip(file, list = FALSE, overwrite = TRUE)
}
# Load activity and feature mapping tables
activityLabelF <- file.path(baseDir,"activity_labels.txt" )
featureLabelsF <- file.path(baseDir, "features.txt")
activityLabels <-read.table(activityLabelF)
featureLabels <- read.table(featureLabelsF)
# Read the values in the table
# type - train or test data
# name - name of the file
readMain <- function (type, name) {
fName <- file.path(baseDir,type, paste0(name,"_",type,".txt"))
read.table (fName)
}
# Read the tables for training/test data sets
readTable <- function (type) {
dataSet <- list()
dataSet$subject <- readMain(type,"subject")
dataSet$X <- readMain(type,"X")
names(dataSet$X)<-featureLabels$V2
dataSet$Y <- readMain(type,"y")
dataSet
}
#Load training and test data set.
#The training and test data set follow the naming convention
#Leverage the naming convention
trainSet <- readTable("train")
testSet <- readTable("test")
#merge the training and test data set
#Filter the metrics that deal with mean and standard deviation.
#Any metrics with keyword "std" or "mean" will be considered for now
mergeTables <- function (a,b){
trY <- merge (a$Y,activityLabels)
teY <- merge (b$Y,activityLabels)
trX <- a$X[,c(grep("std",x=featureLabels$V2,ignore.case = TRUE),
grep("mean",x=featureLabels$V2,ignore.case = TRUE))]
teX <- b$X[,c(grep("std",x=featureLabels$V2,ignore.case = TRUE),
grep("mean",x=featureLabels$V2,ignore.case = TRUE))]
trX<- cbind (person=a$subject$V1, activity=trY$V2,trX)
teX<- cbind (person=b$subject$V1,activity=teY$V2,teX)
rbind (trX,teX )
}
mT <- mergeTables (trainSet,testSet)
#Split the result data by person and activity and find the mean
result_data <- ddply(mT, .(person, activity), .fun=function(row){ colMeans(row[,-c(1,2)]) })
# Change the headers and write the results in a file
headToBeChanged <- colnames(result_data)[-(1:2)]
newHead <- c("person","activity",paste0("mean-",headToBeChanged))
names(result_data)<-newHead
result_file <- "result.csv"
write.table(result_data,file = result_file,row.names = FALSE)
|
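The loader above leans entirely on the dataset's naming convention (`train/X_train.txt`, `test/y_test.txt`, ...). The path that `readMain` builds for, say, the training features can be checked directly; a standalone sketch:

```r
baseDir <- "UCI HAR Dataset"
type <- "train"
name <- "X"
# Same construction as readMain(): <baseDir>/<type>/<name>_<type>.txt
fname <- file.path(baseDir, type, paste0(name, "_", type, ".txt"))
print(fname)  # "UCI HAR Dataset/train/X_train.txt"
```

`file.path` joins components with `/` on every platform, so the same code resolves both the train and test variants of each file.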
/Practica 8/ref_reto1.R
|
no_license
|
DavidM0413/Simulacion_Sistemas
|
R
| false
| false
| 3,625
|
r
| ||
#' Power Generation
#'
#' This function computes instantaneous power generation
#' from a reservoir given its height and flow rate into turbines
#' @param rho Density of water (kg/m3) Default is 1000
#' @param g Acceleration due to gravity (m/sec2) Default is 9.8
#' @param Keff Turbine efficiency (0-1) Default is 0.8
#' @param height height of water in reservoir (m)
#' @param flow flow rate (m3/sec)
#' @author Naomi
#' @examples power_gen(20, 1)
#' @return Power generation (W/s)
power_gen = function(height, flow, rho=1000, g=9.8, Keff=0.8) {
# calculate power
result = rho * height * flow * g * Keff
return(result)
}
|
/power_gen.R
|
no_license
|
tcobian/ESM_232
|
R
| false
| false
| 649
|
r
|
#' Power Generation
#'
#' This function computes instantaneous power generation
#' from a reservoir given its height and flow rate into turbines
#' @param rho Density of water (kg/m3) Default is 1000
#' @param g Acceleration due to gravity (m/sec2) Default is 9.8
#' @param Keff Turbine efficiency (0-1) Default is 0.8
#' @param height height of water in reservoir (m)
#' @param flow flow rate (m3/sec)
#' @author Naomi
#' @examples power_gen(20, 1)
#' @return Power generation (W/s)
power_gen = function(height, flow, rho=1000, g=9.8, Keff=0.8) {
# calculate power
result = rho * height * flow * g * Keff
return(result)
}
|
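As a quick check of the documented example (not part of the package itself), `power_gen(20, 1)` with the default parameters evaluates to 1000 * 20 * 1 * 9.8 * 0.8:

```r
power_gen <- function(height, flow, rho = 1000, g = 9.8, Keff = 0.8) {
  # P = rho * g * h * Q * efficiency
  rho * height * flow * g * Keff
}
power_gen(20, 1)  # 156800 W
```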
#NAOC 2020 count data workshop distance probability distribution lesson
#By Evan Adams and Beth Ross
###########################################################
#Not a lot of code here, just showing how we created the distributions shown in the presentation
#coin flipping
hist(rbinom(10, 1, 0.5))
#probability density of the binomial distribution
plot(dbinom(0:10, 10, 0.5))
#distribution types
#gaussian
hist(rnorm(100, 0, 1))
#lognormal
hist(rlnorm(100, 0, 1))
#gamma
hist(rgamma(100, 1, 1))
#binomial
hist(rbinom(100, 10, 0.5))
#poisson
hist(rpois(100, 3))
#probability mixtures
#showing two unmixed gaussians
hist(c(rnorm(100, -25, 3), rnorm(100, 25, 7)), freq = FALSE, breaks = -50:50)
#then combining two gaussians
hist(c(rnorm(100, -25, 3) + rnorm(100, 25, 7)), freq = FALSE, breaks = -50:50)
#now let's create a zero-inflated poisson
hist(rpois(100, 4))
hist(rbinom(100, 1, 0.1))
rzipois <- function(n,p,lambda) {
z <- rbinom(n,size=1,prob=p)
y <- (1-z)*rpois(n,lambda)
return(y)
}
hist(rzipois(100, 0.9, 4))
#negative binomial
hist(rnbinom(100, mu = 4, size = 1.4))
|
/lessons/probability_distribution_lesson_code.R
|
no_license
|
dubrewer92/NAOC2020-Count-Data-Workshop
|
R
| false
| false
| 1,171
|
r
|
#NAOC 2020 count data workshop distance probability distribution lesson
#By Evan Adams and Beth Ross
###########################################################
#Not a lot of code here, just showing how we created the distributions shown in the presentation
#coin flipping
hist(rbinom(10, 1, 0.5))
#probability density of the binomial distribution
plot(dbinom(0:10, 10, 0.5))
#distribution types
#gaussian
hist(rnorm(100, 0, 1))
#lognormal
hist(rlnorm(100, 0, 1))
#gamma
hist(rgamma(100, 1, 1))
#binomial
hist(rbinom(100, 10, 0.5))
#poisson
hist(rpois(100, 3))
#probability mixtures
#showing two unmixed gaussians
hist(c(rnorm(100, -25, 3), rnorm(100, 25, 7)), freq = FALSE, breaks = -50:50)
#then combining two gaussians
hist(c(rnorm(100, -25, 3) + rnorm(100, 25, 7)), freq = FALSE, breaks = -50:50)
#now let's create a zero-inflated poisson
hist(rpois(100, 4))
hist(rbinom(100, 1, 0.1))
rzipois <- function(n,p,lambda) {
z <- rbinom(n,size=1,prob=p)
y <- (1-z)*rpois(n,lambda)
return(y)
}
hist(rzipois(100, 0.9, 4))
#negative binomial
hist(rnbinom(100, mu = 4, size = 1.4))
|
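The `rzipois` mixture in the lesson above has a closed-form mean, E[y] = (1 - p) * lambda, which a large sample should approximate. A sketch (seed and sample size are my own choices, not part of the lesson):

```r
rzipois <- function(n, p, lambda) {
  z <- rbinom(n, size = 1, prob = p)  # z = 1 with probability p -> structural zero
  (1 - z) * rpois(n, lambda)
}
set.seed(1)
y <- rzipois(1e5, 0.9, 4)
mean(y)       # close to (1 - 0.9) * 4 = 0.4
mean(y == 0)  # zero fraction exceeds p, since the Poisson part also produces zeros
```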
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/clustering.R
\name{cluster_pathways}
\alias{cluster_pathways}
\title{Cluster Pathways}
\usage{
cluster_pathways(enrichment_res, method = "hierarchical",
kappa_threshold = 0.35, plot_clusters_graph = TRUE,
use_names = FALSE, use_active_snw_genes = FALSE,
hclu_method = "average", plot_hmap = FALSE, plot_dend = FALSE)
}
\arguments{
\item{enrichment_res}{data frame of pathway enrichment results (result of `run_pathfindR`)}
\item{method}{Either "hierarchical" or "fuzzy". Details of clustering are
provided in the corresponding functions.}
\item{kappa_threshold}{threshold for kappa statistics, defining strong relation (default = 0.35)}
\item{plot_clusters_graph}{boolean value indicating whether or not to plot the graph
diagram of clustering results (default = TRUE)}
\item{use_names}{boolean to indicate whether to use pathway names instead of IDs (default = FALSE, i.e. use IDs)}
\item{use_active_snw_genes}{boolean to indicate whether or not to use non-input active subnetwork genes
in the calculation of kappa statistics (default = FALSE, i.e. use only affected genes)}
\item{hclu_method}{the agglomeration method to be used (default = "average", see `?hclust`)}
\item{plot_hmap}{boolean to indicate whether to plot the kappa statistics
heatmap or not (default = FALSE)}
\item{plot_dend}{boolean to indicate whether to plot the clustering dendrogram
partitioned into the optimal number of clusters (default = FALSE)}
}
\value{
a data frame of clustering results. For "hierarchical", the cluster assignments
(Cluster) and whether the term is representative of its cluster (Status) is added as columns.
For "fuzzy", terms that are in multiple clusters are provided for each cluster. The cluster
assignments (Cluster) and whether the term is representative of its cluster (Status) is
added as columns.
}
\description{
Cluster Pathways
}
\examples{
example_clustered <- cluster_pathways(RA_output[1:3,], plot_clusters_graph = FALSE)
example_clustered <- cluster_pathways(RA_output[1:3,],
method = "fuzzy", plot_clusters_graph = FALSE)
}
\seealso{
See \code{\link{hierarchical_pw_clustering}} for hierarchical clustering of enriched terms.
See \code{\link{fuzzy_pw_clustering}} for fuzzy clustering of enriched terms.
}
|
/man/cluster_pathways.Rd
|
no_license
|
KUNJU-PITT/pathfindR
|
R
| false
| true
| 2,311
|
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/clustering.R
\name{cluster_pathways}
\alias{cluster_pathways}
\title{Cluster Pathways}
\usage{
cluster_pathways(enrichment_res, method = "hierarchical",
kappa_threshold = 0.35, plot_clusters_graph = TRUE,
use_names = FALSE, use_active_snw_genes = FALSE,
hclu_method = "average", plot_hmap = FALSE, plot_dend = FALSE)
}
\arguments{
\item{enrichment_res}{data frame of pathway enrichment results (result of `run_pathfindR`)}
\item{method}{Either "hierarchical" or "fuzzy". Details of clustering are
provided in the corresponding functions.}
\item{kappa_threshold}{threshold for kappa statistics, defining strong relation (default = 0.35)}
\item{plot_clusters_graph}{boolean value indicating whether or not to plot the graph
diagram of clustering results (default = TRUE)}
\item{use_names}{boolean to indicate whether to use pathway names instead of IDs (default = FALSE, i.e. use IDs)}
\item{use_active_snw_genes}{boolean to indicate whether or not to use non-input active subnetwork genes
in the calculation of kappa statistics (default = FALSE, i.e. use only affected genes)}
\item{hclu_method}{the agglomeration method to be used (default = "average", see `?hclust`)}
\item{plot_hmap}{boolean to indicate whether to plot the kappa statistics
heatmap or not (default = FALSE)}
\item{plot_dend}{boolean to indicate whether to plot the clustering dendrogram
partitioned into the optimal number of clusters (default = FALSE)}
}
\value{
a data frame of clustering results. For "hierarchical", the cluster assignments
(Cluster) and whether the term is representative of its cluster (Status) is added as columns.
For "fuzzy", terms that are in multiple clusters are provided for each cluster. The cluster
assignments (Cluster) and whether the term is representative of its cluster (Status) is
added as columns.
}
\description{
Cluster Pathways
}
\examples{
example_clustered <- cluster_pathways(RA_output[1:3,], plot_clusters_graph = FALSE)
example_clustered <- cluster_pathways(RA_output[1:3,],
method = "fuzzy", plot_clusters_graph = FALSE)
}
\seealso{
See \code{\link{hierarchical_pw_clustering}} for hierarchical clustering of enriched terms.
See \code{\link{fuzzy_pw_clustering}} for fuzzy clustering of enriched terms.
}
|
# Read in data
library(data.table)
data_train <- fread("data/train.csv")
data_test <- fread("data/test.csv")
##### Preprocess
## For imputation, stack predictors in training and test together
allPred <- rbindlist(list(data_train,data_test),use.names=TRUE,fill=TRUE,idcol = "Source")
ol.mask <- allPred$Fare==0|allPred$Fare>500
allPred$Fare[ol.mask]<-NA
# Impute Fare NAs by mean per pclass
fna.mask <- is.na(allPred$Fare)
aggregate(allPred$Fare[!fna.mask], by=list(allPred$Pclass[!fna.mask]), mean)
allPred$Fare[fna.mask&allPred$Pclass==1]<-84.02592
allPred$Fare[fna.mask&allPred$Pclass==2]<-21.64811
allPred$Fare[fna.mask&allPred$Pclass==3]<-13.37847
# Impute Age NAs by median
ana.mask <- is.na(allPred$Age)
allPred$Age[ana.mask]<-median(allPred$Age[!ana.mask])
# Create New features
allPred$relative <- allPred$SibSp+allPred$Parch
allPred$ticket_dup <- as.numeric(duplicated(allPred$Ticket, fromLast=T)|duplicated(allPred$Ticket))
allPred$tckPal <- rep(0, nrow(allPred))
for (tckNum in unique(allPred$Ticket[allPred$ticket_dup==1])){
xxpal.mask <- allPred$Ticket == tckNum
allPred$tckPal[xxpal.mask] <- sum(xxpal.mask)-1
}
allPred$wCabin <- as.numeric(allPred$Cabin!="")
allPred$cabinL <- substr(allPred$Cabin,0,1)
allPred$cabinL[allPred$cabinL==""]<-"Unknown"
allPred$cabinL<-as.factor(allPred$cabinL)
# Transform
allPred$female <- as.numeric(allPred$Sex == "female")
allPred$Embarked[allPred$Embarked==""] <- "S" ##impute by mode
allPred$Embarked <- as.factor(allPred$Embarked)
# Recover training and test sets
dataTr <- allPred[Source==1,]
dataTe <- allPred[Source==2,]
dataTe <- dataTe[,Survived:=NULL]
##########################################################################
##### Logistic Regression with regularizer
library(dummies)
library(LiblineaR)
pred_tr <- dataTr[,.(Pclass,Fare,wCabin,SibSp,Parch,relative,ticket_dup,tckPal,female,Age)]
pred_tr <- data.matrix(pred_tr)
pred_tr <- cbind(pred_tr, dummy(dataTr$Embarked),dummy(dataTr$cabinL))
target <- as.factor(dataTr[, Survived])
tryCosts=c(200,150,100,50,30,1)
bestCost=NA
bestAcc=0
for(co in tryCosts){
acc=LiblineaR(data=pred_tr,target=target,type=0,cost=co,bias = F, wi = c("0"=0.6161616, "1"=0.3838384), cross = 100)
cat("Results for C=",co," : ",acc," accuracy.\n",sep="")
if(acc>bestAcc){
bestCost=co
bestAcc=acc
}
}
logit_fit <- LiblineaR(pred_tr, target, type = 0, cost = 50, bias = F, wi = c("0"=0.6161616, "1"=0.3838384), cross = 891)
## Train on the full training set
logit_fit_all <- LiblineaR(pred_tr, target, type = 0, cost = 50, bias = F, wi = c("0"=0.6161616, "1"=0.3838384), cross = 0)
# logit_fit_all <- LiblineaR(pred_tr, target, type = 0, cost = 50, bias = F, cross = 0)
logit_fit_all
## from W (weights), most influential predictors: Pclass, gender, Embarked, wCabin
## baseline: 0.8069585 (leave-one-out cv)
###############################################################
#### Use fewer predictor
pred_tr <- dataTr[,.(Pclass,wCabin,female)]
pred_tr <- data.matrix(pred_tr)
pred_tr <- cbind(pred_tr, dummy(dataTr$Embarked))
target <- as.factor(dataTr[, Survived])
tryCosts=c(200,150,100,50,30,1)
bestCost=NA
bestAcc=0
for(co in tryCosts){
acc=LiblineaR(data=pred_tr,target=target,type=0,cost=co,bias = F, wi = c("0"=0.6161616, "1"=0.3838384), cross = 100)
cat("Results for C=",co," : ",acc," accuracy.\n",sep="")
if(acc>bestAcc){
bestCost=co
bestAcc=acc
}
}
logit_fit <- LiblineaR(pred_tr, target, type = 0, cost = 50, bias = F, wi = c("0"=0.6161616, "1"=0.3838384), cross = 891)
## Train on the full training set
logit_fit_all <- LiblineaR(pred_tr, target, type = 0, cost = 50, bias = F, wi = c("0"=0.6161616, "1"=0.3838384), cross = 0)
# logit_fit_all <- LiblineaR(pred_tr, target, type = 0, cost = 50, bias = F, cross = 0)
logit_fit_all
## from W (weights), most influential predictors: Pclass, gender, Embarked, wCabin
## baseline: 0.8125701 (leave-one-out cv)
###############################################################
# Generate predictive result
pred_test <- dataTe[,.(Pclass,wCabin,female)]
pred_test <- data.matrix(pred_test)
pred_test <- cbind(pred_test, dummy(dataTe$Embarked))
result_test <- predict(logit_fit_all,pred_test,proba=T,decisionValues=TRUE)
# Output table
output_tb <- cbind(dataTe[,PassengerId], as.numeric(result_test$predictions)-1)
colnames(output_tb) <- c("PassengerId", "Survived")
# 0.77990 accuracy on leader board
write.csv(output_tb, file="logit_pred3.csv", row.names = F)
##########################################################################
##########################################################################
##########################################################################
#### Random Forest
library(randomForest)
#trainset <- dataTr[,.(Survived,Pclass,Fare,wCabin,SibSp,Parch,relative,ticket_dup,tckPal,female,Age)]
trainset <- dataTr[,.(Survived,Pclass,Fare,wCabin,relative,tckPal,female,Age)]
trainset$Survived<-as.factor(trainset$Survived)
### run model
rf1<-randomForest(Survived~.,data=trainset,ntree=2000, mtry=3)
print(rf1)
# OOB estimate of error rate: 16.61%
# importance of predictor
varImpPlot(rf1, main='') ## Importance: gender, age, pclass, fare; tckpal, relative, wcabin
impmx <- importance(rf1)
### predict using test set
# testset <- dataTe[,.(Pclass,Fare,wCabin,SibSp,Parch,relative,ticket_dup,tckPal,female,Age)]
testset <- dataTe[,.(Pclass,Fare,wCabin,relative,tckPal,female,Age)]
rf.pred1 <- predict( rf1, testset)
# Output table
output_tb <- cbind(dataTe[,PassengerId], as.numeric(rf.pred1)-1)
colnames(output_tb) <- c("PassengerId", "Survived")
# 0.76555 accuracy, worse than logistic and gender based model
write.csv(output_tb, file="rf1_pred4.csv", row.names = F)
|
/titanic2.R
|
no_license
|
chelsyx/titanic_kaggle
|
R
| false
| false
| 5,741
|
r
|
# Read in data
library(data.table)
data_train <- fread("data/train.csv")
data_test <- fread("data/test.csv")
##### Preprocess
## For imputation, stack predictors in training and test together
allPred <- rbindlist(list(data_train,data_test),use.names=TRUE,fill=TRUE,idcol = "Source")
ol.mask <- allPred$Fare==0|allPred$Fare>500
allPred$Fare[ol.mask]<-NA
# Impute Fare NAs by mean per pclass
fna.mask <- is.na(allPred$Fare)
aggregate(allPred$Fare[!fna.mask], by=list(allPred$Pclass[!fna.mask]), mean)
allPred$Fare[fna.mask&allPred$Pclass==1]<-84.02592
allPred$Fare[fna.mask&allPred$Pclass==2]<-21.64811
allPred$Fare[fna.mask&allPred$Pclass==3]<-13.37847
# Impute Age NAs by median
ana.mask <- is.na(allPred$Age)
allPred$Age[ana.mask]<-median(allPred$Age[!ana.mask])
# Create New features
allPred$relative <- allPred$SibSp+allPred$Parch
allPred$ticket_dup <- as.numeric(duplicated(allPred$Ticket, fromLast=T)|duplicated(allPred$Ticket))
allPred$tckPal <- rep(0, nrow(allPred))
for (tckNum in unique(allPred$Ticket[allPred$ticket_dup==1])){
xxpal.mask <- allPred$Ticket == tckNum
allPred$tckPal[xxpal.mask] <- sum(xxpal.mask)-1
}
allPred$wCabin <- as.numeric(allPred$Cabin!="")
allPred$cabinL <- substr(allPred$Cabin,0,1)
allPred$cabinL[allPred$cabinL==""]<-"Unknown"
allPred$cabinL<-as.factor(allPred$cabinL)
# Transform
allPred$female <- as.numeric(allPred$Sex == "female")
allPred$Embarked[allPred$Embarked==""] <- "S" ##impute by mode
allPred$Embarked <- as.factor(allPred$Embarked)
# Recover training and test sets
dataTr <- allPred[Source==1,]
dataTe <- allPred[Source==2,]
dataTe <- dataTe[,Survived:=NULL]
##########################################################################
##### Logistic Regression with regularizer
library(dummies)
library(LiblineaR)
pred_tr <- dataTr[,.(Pclass,Fare,wCabin,SibSp,Parch,relative,ticket_dup,tckPal,female,Age)]
pred_tr <- data.matrix(pred_tr)
pred_tr <- cbind(pred_tr, dummy(dataTr$Embarked),dummy(dataTr$cabinL))
target <- as.factor(dataTr[, Survived])
tryCosts=c(200,150,100,50,30,1)
bestCost=NA
bestAcc=0
for(co in tryCosts){
acc=LiblineaR(data=pred_tr,target=target,type=0,cost=co,bias = F, wi = c("0"=0.6161616, "1"=0.3838384), cross = 100)
cat("Results for C=",co," : ",acc," accuracy.\n",sep="")
if(acc>bestAcc){
bestCost=co
bestAcc=acc
}
}
logit_fit <- LiblineaR(pred_tr, target, type = 0, cost = 50, bias = F, wi = c("0"=0.6161616, "1"=0.3838384), cross = 891)
## Train on the full training set
logit_fit_all <- LiblineaR(pred_tr, target, type = 0, cost = 50, bias = F, wi = c("0"=0.6161616, "1"=0.3838384), cross = 0)
# logit_fit_all <- LiblineaR(pred_tr, target, type = 0, cost = 50, bias = F, cross = 0)
logit_fit_all
## from W (weights), most influential predictors: Pclass, gender, Embarked, wCabin
## baseline: 0.8069585 (leave-one-out cv)
###############################################################
#### Use fewer predictor
pred_tr <- dataTr[,.(Pclass,wCabin,female)]
pred_tr <- data.matrix(pred_tr)
pred_tr <- cbind(pred_tr, dummy(dataTr$Embarked))
target <- as.factor(dataTr[, Survived])
tryCosts=c(200,150,100,50,30,1)
bestCost=NA
bestAcc=0
for(co in tryCosts){
acc=LiblineaR(data=pred_tr,target=target,type=0,cost=co,bias = F, wi = c("0"=0.6161616, "1"=0.3838384), cross = 100)
cat("Results for C=",co," : ",acc," accuracy.\n",sep="")
if(acc>bestAcc){
bestCost=co
bestAcc=acc
}
}
logit_fit <- LiblineaR(pred_tr, target, type = 0, cost = 50, bias = F, wi = c("0"=0.6161616, "1"=0.3838384), cross = 891)
## Train on the full training set
logit_fit_all <- LiblineaR(pred_tr, target, type = 0, cost = 50, bias = F, wi = c("0"=0.6161616, "1"=0.3838384), cross = 0)
# logit_fit_all <- LiblineaR(pred_tr, target, type = 0, cost = 50, bias = F, cross = 0)
logit_fit_all
## from W (weights), most influential predictors: Pclass, gender, Embarked, wCabin
## baseline: 0.8125701 (leave-one-out cv)
###############################################################
# Generate predictive result
pred_test <- dataTe[,.(Pclass,wCabin,female)]
pred_test <- data.matrix(pred_test)
pred_test <- cbind(pred_test, dummy(dataTe$Embarked))
result_test <- predict(logit_fit_all,pred_test,proba=T,decisionValues=TRUE)
# Output table
output_tb <- cbind(dataTe[,PassengerId], as.numeric(result_test$predictions)-1)
colnames(output_tb) <- c("PassengerId", "Survived")
# 0.77990 accuracy on leader board
write.csv(output_tb, file="logit_pred3.csv", row.names = F)
##########################################################################
##########################################################################
##########################################################################
#### Random Forest
library(randomForest)
#trainset <- dataTr[,.(Survived,Pclass,Fare,wCabin,SibSp,Parch,relative,ticket_dup,tckPal,female,Age)]
trainset <- dataTr[,.(Survived,Pclass,Fare,wCabin,relative,tckPal,female,Age)]
trainset$Survived<-as.factor(trainset$Survived)
### run model
rf1<-randomForest(Survived~.,data=trainset,ntree=2000, mtry=3)
print(rf1)
# OOB estimate of error rate: 16.61%
# importance of predictor
varImpPlot(rf1, main='') ## Importance: gender, age, pclass, fare; tckpal, relative, wcabin
impmx <- importance(rf1)
### predict using test set
# testset <- dataTe[,.(Pclass,Fare,wCabin,SibSp,Parch,relative,ticket_dup,tckPal,female,Age)]
testset <- dataTe[,.(Pclass,Fare,wCabin,relative,tckPal,female,Age)]
rf.pred1 <- predict( rf1, testset)
# Output table
output_tb <- cbind(dataTe[,PassengerId], as.numeric(rf.pred1)-1)
colnames(output_tb) <- c("PassengerId", "Survived")
# 0.76555 accuracy, worse than logistic and gender based model
write.csv(output_tb, file="rf1_pred4.csv", row.names = F)
|
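The for-loop that counts ticket companions (`tckPal`) in the script above can be written as a single vectorized `ave()` call. A sketch on a hypothetical toy ticket vector, not the actual Titanic data:

```r
tickets <- c("A1", "A1", "B2", "A1", "C3")  # hypothetical ticket numbers

# For each row, count how many OTHER rows share the same ticket:
# ave() replaces each element with the size of its group, then subtract self
tckPal <- ave(seq_along(tickets), tickets, FUN = length) - 1
print(tckPal)  # 2 2 0 2 0
```

This also removes the need for the separate `ticket_dup` mask, since singleton tickets simply get a count of 0.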
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/readmsb.R
\name{ms2bed}
\alias{ms2bed}
\title{Import the output of the \code{ms} program in a \code{BED} object}
\usage{
ms2bed(fname)
}
\arguments{
\item{fname}{the name of the text file containing \code{ms} output}
}
\value{
a bed object
}
\description{
Import the output of the \code{ms} program into a \code{BED} object, as defined in the
\href{https://cran.r-project.org/package=gaston}{gaston} package
}
|
/man/ms2bed.Rd
|
no_license
|
plantarum/hierfstat
|
R
| false
| true
| 489
|
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/readmsb.R
\name{ms2bed}
\alias{ms2bed}
\title{Import the output of the \code{ms} program in a \code{BED} object}
\usage{
ms2bed(fname)
}
\arguments{
\item{fname}{the name of the text file containing \code{ms} output}
}
\value{
a bed object
}
\description{
Import the output of the \code{ms} program into a \code{BED} object, as defined in the
\href{https://cran.r-project.org/package=gaston}{gaston} package
}
|
setwd('~/Downloads/battleship/data_visualization')
# model 1
basic_model1 = read.csv('../results/basic_test_model1.csv')
b_model1_BT = subset(basic_model1, `propagation.type` == 'BT')
b_model1_FC = subset(basic_model1, `propagation.type` == 'FC')
b_model1_GAC = subset(basic_model1, `propagation.type` == 'GAC')
b_model1_BT_aggr = aggregate(b_model1_BT[, 7:9], list(b_model1_BT$`board.size`), mean)
b_model1_FC_aggr = aggregate(b_model1_FC[, 7:9], list(b_model1_FC$`board.size`), mean)
b_model1_GAC_aggr = aggregate(b_model1_GAC[, 7:9], list(b_model1_GAC$`board.size`), mean)
b_model1_GAC_val_dec = subset(b_model1_GAC, `value.ordering.type` == 'val_decreasing_order')
b_model1_GAC_val_dec_aggr = aggregate(b_model1_GAC_val_dec[, 7:9], list(b_model1_GAC_val_dec$`board.size`), mean)
b_model1_GAC_val_inc = subset(b_model1_GAC, `value.ordering.type` == 'val_increasing_order')
b_model1_GAC_val_inc_aggr = aggregate(b_model1_GAC_val_inc[, 7:9], list(b_model1_GAC_val_inc$`board.size`), mean)
b_model1_GAC_val_dec_lcv = subset(b_model1_GAC, `value.ordering.type` == 'val_decrease_lcv')
b_model1_GAC_val_dec_lcv_aggr = aggregate(b_model1_GAC_val_dec_lcv[, 7:9], list(b_model1_GAC_val_dec_lcv$`board.size`), mean)
# model 2
basic_model2 = read.csv('../results/basic_test_model2.csv')
b_model2_BT = subset(basic_model2, `propagation.type` == 'BT')
b_model2_FC = subset(basic_model2, `propagation.type` == 'FC')
b_model2_GAC = subset(basic_model2, `propagation.type` == 'GAC')
b_model2_BT_aggr = aggregate(b_model2_BT[, 7:9], list(b_model2_BT$`board.size`), mean)
b_model2_FC_aggr = aggregate(b_model2_FC[, 7:9], list(b_model2_FC$`board.size`), mean)
b_model2_GAC_aggr = aggregate(b_model2_GAC[, 7:9], list(b_model2_GAC$`board.size`), mean)
# model 3
basic_model3 = read.csv('../results/basic_test_model3.csv')
b_model3_BT = subset(basic_model3, `propagation.type` == 'BT')
b_model3_FC = subset(basic_model3, `propagation.type` == 'FC')
b_model3_GAC = subset(basic_model3, `propagation.type` == 'GAC')
b_model3_BT_aggr = aggregate(b_model3_BT[, 7:9], list(b_model3_BT$`board.size`), mean)
b_model3_FC_aggr = aggregate(b_model3_FC[, 7:9], list(b_model3_FC$`board.size`), mean)
b_model3_GAC_aggr = aggregate(b_model3_GAC[, 7:9], list(b_model3_GAC$`board.size`), mean)
# data - board size x runtime, for 3 models, for 3 propagation types
data = cbind(b_model1_BT_aggr[1:2],
b_model2_BT_aggr[2],
b_model3_BT_aggr[2],
b_model1_FC_aggr[2],
b_model2_FC_aggr[2],
b_model3_FC_aggr[2],
b_model1_GAC_aggr[2],
b_model2_GAC_aggr[2],
b_model3_GAC_aggr[2])
colnames(data) = c('board.size',
'bt.runtime1',
'bt.runtime2',
'bt.runtime3',
'fc.runtime1',
'fc.runtime2',
'fc.runtime3',
'gac.runtime1',
'gac.runtime2',
'gac.runtime3')
write.csv(data, file='../results/small_propagator_model.csv')
# data - board size x runtime/assignment/pruning, for 3 value orderings (model 1, GAC)
colnames(b_model1_GAC_val_dec_aggr) = c('board.size',
'runtime_dec',
'assignment_dec',
'pruning_dec')
colnames(b_model1_GAC_val_inc_aggr) = c('board.size',
'runtime_inc',
'assignment_inc',
'pruning_inc')
colnames(b_model1_GAC_val_dec_lcv_aggr) = c('board.size',
'runtime_dec_lcv',
'assignment_dec_lcv',
'pruning_dec_lcv')
data = merge(b_model1_GAC_val_dec_aggr, b_model1_GAC_val_inc_aggr, by='board.size')
data = merge(data, b_model1_GAC_val_dec_lcv_aggr, by='board.size')
write.csv(data, file='../results/small_val_ord.csv')
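A small follow-up sketch (not part of the original script; assumes ggplot2 is installed and that the script above has been run so the CSV exists) showing one way the propagator runtime table could be plotted:

```r
library(ggplot2)

# Hypothetical quick look at the per-propagator runtimes written above
plot_data <- read.csv('../results/small_propagator_model.csv')
ggplot(plot_data, aes(x = board.size)) +
  geom_line(aes(y = bt.runtime1, colour = 'BT, model 1')) +
  geom_line(aes(y = fc.runtime1, colour = 'FC, model 1')) +
  geom_line(aes(y = gac.runtime1, colour = 'GAC, model 1')) +
  labs(y = 'mean runtime', colour = 'propagator')
```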
/data_visualization/PD_board_size_basic.R
pyliaorachel/battleship-ai
while(TRUE){
print('Hello') # use print inside loops
}
counter <- 1
while(counter < 12){
print(counter)
counter <- counter + 1 # counter++ does not work
}
# Iterate i from 1 to 5
for(i in 1:5){
print("Hello R")
}
# Prints Hello 6 times
for(i in 5:10){
print("Hello R")
}
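As an illustrative addition (not in the original practice script), the same counting loop can be written with `repeat`/`break`, and `seq_len()` avoids the `1:0` pitfall when the upper bound can be zero:

```r
counter <- 1
repeat {
  print(counter)
  counter <- counter + 1
  if (counter >= 12) break  # repeat has no condition; exit explicitly
}

n <- 0
for (i in seq_len(n)) print(i)  # runs zero times; 1:n would iterate over 1 and 0
```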
/loops.R
rhymermj/R-practice
library(matrixcalc)  # provides is.positive.semi.definite(), used by the generators below

networkER <- function(p.){
# Input : p. - total dim
# Output : a matrix with Erdos-Renyi structure
A <- matrix(0,p.,p.)
for (i in 1:p.){
for (j in 1:p.) A[i,j] <- ifelse(rbinom(1, size=1, 0.3), ifelse(rbinom(1, size=1, 0.5),
runif(1, min = -0.4, max = -0.1),
runif(1, min = 0.1, max = 0.4)),0)
}
A <- (A+t(A))
for (i in 1:p.) A[i,i] <- sum(abs(A[i,]))+0.1
return(A)
}
networkBD <- function(p., rho.){ #this function needs to be updated according to the dim change
# Input : p. - total dim
# rho. - the parameter to generate band correlation
# Output : a matrix with banded structure
A <- diag(0.5, p.)
for (i in 2:p.){
for (j in 1:(i-1)) A[i,j] <- rho.^(abs(i-j))
}
A <- A + t(A)
return(A)
}
sigmaGenerator1 <- function(p.=50, block.=10){
# Output: a covariance matrix with #block. blocks and dim #p.
# submatrix is in ER structure
mat = matrix(0,nrow=p., ncol=p.)
sub.p = p./block.
for(i in 1:block.){
index1 = 1 + (i-1)*sub.p
index2 = sub.p + (i-1)*sub.p
mat[index1:index2 , index1:index2] <- networkER(sub.p)
}
print(is.positive.semi.definite(mat))
return(mat)
}
sigmaGenerator2 <- function(p.=100, bs.=rep(c(5,10,10),4)){
# Output: a covariance matrix with #block. blocks and dim #p.
# submatrix is in ER structure
mat = matrix(0,nrow=p., ncol=p.)
index1 = index2 <- 0
for(i in bs.){
index1 = 1 + index2
index2 = index2 + i
mat[index1:index2 , index1:index2] <- networkER(i)
}
print(is.positive.semi.definite(mat))
return(mat)
}
sigmaChange1 <- function(sigma., change., sub.p.){
# Input : sigma. - a matrix needs to be changed
# change. - a vector of the order of blocks that need to be changed
# sub.p. - the size of the block
# Output: a covariance modified matrix based on sigma.
# submatrix is in ER structure
for (i in change.){
index1 = 1 + (i-1)*sub.p.
index2 = sub.p. + (i-1)*sub.p.
sigma.[index1:index2 , index1:index2] <- networkER(sub.p.)
}
print(is.positive.semi.definite(sigma.))
return(sigma.)
}
sigmaChange2 <- function(sigma., change.){
# Input : sigma. - a matrix needs to be changed
# change. - a vector of the order of blocks that need to be changed
# sub.p. - the size of the block
# Output: a covariance modified matrix based on sigma.
# submatrix is in ER structure
sigma.[change., change.] <- networkER(length(change.))
print(is.positive.semi.definite(sigma.))
return(sigma.)
}
betaGenerator <- function(p.=50, q.=100, block., blockX., blockY., type.="A1C1"){
# Output : a coeff beta matrix in Y = X %*% Beta + W. It is block diag, with value 1
mat = matrix(0, nrow=q., ncol=p.)
sub.p = p./block.
sub.q = q./block.
if (type. == "A1C1"){
for (i in 1:block.){
index1 = 1 + (i-1)*sub.p
index2 = sub.p + (i-1)*sub.p
index3 = 1 + (i-1)*sub.q
index4 = sub.q + (i-1)*sub.q
mat[index3:index4 , index1:index2] <- runif(sub.q*sub.p, min = 0.9, max = 1)
}
}
if (type. == "A2C1" | type. == "A3C1"){
index1 = index2 = index3 = index4 <- 0
for(i in 1:block.){
index1 = 1 + index2
index2 = index2 + blockX.[i]
index3 = 1 + index4
index4 = index4 + blockY.[i]
mat[index1:index2 , index3:index4] <- runif(blockX.[i]*blockY.[i], min = 0.9, max = 1)
}
}
if (type. ==""){
}
return(mat)
}
betaChange <- function(Beta., r.){
p <- dim(Beta.)[1]
q <- dim(Beta.)[2]
mat <- matrix(nrow = p, ncol = q)
for (i in 1:p){
for (j in 1:q) mat[i,j] <- ifelse(rbinom(1, size=1, r.), runif(1, min = 0.9, max = 1),0)
}
res <- Beta.*(Beta.!=0) + mat*(Beta.==0)
return(res)
}
noiseGenerator <- function(p.=50, bs. = rep(5,10), rho.=0.3){
mat = matrix(0, nrow=p., ncol=p.)
index1 = index2 <- 0
for (i in bs.){
index1 = 1 + index2
index2 = index2 + i
if (i>1){
mat[index1:index2 , index1:index2] <- networkBD(i, rho.)
}
else{
mat[index1:index2 , index1:index2] <- runif(1,0.9, 1.1)
}
}
print(is.positive.semi.definite(mat))
return(mat)
} # Revised noiseGenerator() and made it more general
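A hedged usage sketch (illustrative only; assumes the matrixcalc package, which provides `is.positive.semi.definite()`, is installed and the functions above are sourced):

```r
library(matrixcalc)

set.seed(1)
Sigma  <- sigmaGenerator1(p. = 50, block. = 10)               # 10 ER blocks of size 5
Sigma2 <- sigmaChange1(Sigma, change. = c(1, 3), sub.p. = 5)  # regenerate blocks 1 and 3
Beta   <- betaGenerator(p. = 50, q. = 100, block. = 10, type. = "A1C1")
W      <- noiseGenerator(p. = 50, bs. = rep(5, 10), rho. = 0.3)
```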
/networks.R
DeniseYi/Assisted_Differential_Network
setMethod("initialize", signature(.Object="BeadStudioSetList"),
function(.Object,
assayDataList=AssayDataList(baf=baf, lrr=lrr),
lrr=list(),
baf=lapply(lrr, function(x) matrix(nrow=nrow(x), ncol=ncol(x))),
featureDataList=GenomeAnnotatedDataFrameFrom(assayDataList, annotation, genome),
chromosome=vector("list", length(lrr)),
phenoData,
annotation=character(),
genome=character(),
...){
if(missing(phenoData)){
if(length(lrr) > 0){
phenoData <- annotatedDataFrameFrom(lrr[[1]], byrow=FALSE)
} else {
phenoData <- new("AnnotatedDataFrame")
}
}
callNextMethod(.Object,
assayDataList=assayDataList,
featureDataList=featureDataList,
phenoData=phenoData,
chromosome=chromosome,
annotation=annotation,
genome=genome,
...)
})
setMethod("updateObject", signature(object="BeadStudioSetList"),
function(object, ..., verbose=FALSE) {
if (verbose) message("updateObject(object = 'BeadStudioSetList')")
obj <- tryCatch(callNextMethod(object), error=function(e) NULL)
if(is.null(obj)){
obj <- new("BeadStudioSetList",
assayDataList = assayDataList(object),
phenoData = phenoData(object),
annotation = updateObject(annotation(object),
..., verbose=verbose),
featureDataList=featureDataList(object),
chromosome=chromosome(object),
genome=genomeBuild(object),
...)
}
obj
})
setMethod("[[", signature(x="BeadStudioSetList"),
function(x, i, j, ..., exact=TRUE){
if(missing(i)) return(x)
ad <- assayDataList(x)
fdlist <- featureData(x)
adnew <- switch(storage.mode(ad),
lockedEnvironment =,
environment = new.env(parent=emptyenv()),
list = list())
nms <- ls(ad)
if(length(i) == 1){
for (nm in ls(ad)){
elt <- ad[[nm]][[i]]
dimnames(elt) <- lapply(dimnames(elt), unname)
adnew[[nm]] <- elt
}
}
x <- new("BeadStudioSet",
assayData=adnew,
phenoData=phenoData(x),
featureData=fdlist[[i]],
genome=genomeBuild(x),
annotation=annotation(x))
})
setMethod("[[", signature(x="BafLrrSetList"),
function(x, i, j, ..., exact=TRUE){
x <- callNextMethod()
new("BafLrrSet",
assayData=assayData(x),
phenoData=phenoData(x),
featureData=featureData(x),
genome=genomeBuild(x),
annotation=annotation(x))
})
setMethod("[", signature(x="gSetList"),
function(x, i, j, ..., drop=TRUE){
if(missing(i) && missing(j)) return(x)
ad <- assayDataList(x)
if(!missing(i)){
fdlist <- featureData(x)[i]
}
adnew <- switch(storage.mode(ad),
lockedEnvironment =,
environment = new.env(parent=emptyenv()),
list = list())
nms <- ls(ad)
if(!missing(i)){
for (nm in ls(ad)){
elt <- ad[[nm]][i]
adnew[[nm]] <- elt
}
ad <- adnew
}
if(missing(j)){
x@featureDataList <- fdlist
x@chromosome <- x@chromosome[i]
} else {
for (nm in ls(ad)){
elt <- lapply(ad[[nm]], function(y, j) y[, j, drop=FALSE], j=j)
adnew[[nm]] <- elt
}
phenoData(x) <- phenoData(x)[j, ]
x@protocolData <- x@protocolData[j, ]
}
x@assayDataList <- adnew
return(x)
})
setReplaceMethod("[[", signature(x="BafLrrSetList", value="BafLrrSet"),
function(x, i, j, ..., value){
fdl <- x@featureDataList
fdl[[i]] <- featureData(value)
adl <- x@assayDataList
r <- adl[["lrr"]]
r[[i]] <- lrr(value)
b <- adl[["baf"]]
b[[i]] <- baf(value)
adl <- AssayDataList(lrr=r, baf=b)
new("BafLrrSetList",
assayDataList=adl,
featureDataList=fdl,
phenoData=phenoData(x),
chromosome=chromosome(x),
annotation=annotation(x),
genome=genomeBuild(x))
})
setReplaceMethod("assayData", signature=signature(object="BeadStudioSetList",
value="AssayData"),
function(object, value) {
object@assayDataList <- value
object
})
setMethod("baf", signature(object="oligoSetList"),
function(object) assayData(object)[["baf"]])
setMethod("baf", signature(object="BeadStudioSetList"),
function(object){
##lapply(object, baf)
assayDataList(object)[["baf"]]
})
setMethod("baf", signature(object="BafLrrSetList"),
function(object){
assayDataList(object)[["baf"]]
})
setMethod("calls", signature(object="oligoSetList"),
function(object) assayData(object)[["call"]])
setMethod("copyNumber", signature(object="oligoSetList"),
function(object) assayData(object)[["copyNumber"]])
setMethod("lrr", signature(object="BeadStudioSetList"),
function(object){
##lapply(object, lrr)
assayDataList(object)[["lrr"]]
})
setMethod("lrr", signature(object="BafLrrSetList"),
function(object){
##lapply(object, lrr)
assayDataList(object)[["lrr"]]
})
setReplaceMethod("lrr", signature(object="BafLrrSetList", value="matrix"),
function(object, value){
## value can often be fewer columns than object
if(is.null(rownames(value))) stop("row.names is NULL")
if(is.null(colnames(value))) stop("col.names is NULL")
sample.index <- match(colnames(value), sampleNames(object))
for(j in seq_along(object)){
bset <- object[[j]]
k <- match(featureNames(bset), rownames(value))
lrr(bset)[, sample.index] <- value[k, , drop=FALSE]
object[[j]] <- bset
}
return(object)
})
##setMethod("ncol", signature(x="BeadStudioSetList"),
## function(x) ncol(assayDataList(x)[["lrr"]][[1]]))
setMethod("snpCallProbability", signature(object="oligoSetList"),
function(object) assayData(object)[["callProbability"]])
setMethod("clone2", "BafLrrSetList", function(object, id, prefix, ...){
duplicateBLList(object, ids=id, prefix=prefix, ...)
})
duplicateBLList <- function(object, ids, prefix="waveAdj", empty=FALSE){
##brList.copy <- object
## duplicate the lrr ff objects. Then do wave correction on the
## duplicated files.
if(missing(ids)) ids <- sampleNames(object)
ids <- as.character(ids)
r <- lrr(object)
b <- baf(object)
rcopy.list <- list()
bcopy.list <- list()
for(i in seq_along(r)){
x <- r[[i]]
y <- b[[i]]
rcopy <- initializeBigMatrix(paste(prefix, "lrr", sep="-"), nrow(x), length(ids), vmode="integer")
bcopy <- initializeBigMatrix(paste(prefix, "baf", sep="-"), nrow(x), length(ids), vmode="integer")
dimnames(rcopy) <- list(rownames(x),
ids)
dimnames(bcopy) <- dimnames(rcopy)
J <- match(ids, colnames(x))
if(!empty){
for(j in seq_along(J)){
k <- J[j]
rcopy[, j] <- x[, k]
bcopy[, j] <- y[, k]
}
}
rcopy.list[[i]] <- rcopy
bcopy.list[[i]] <- bcopy
}
adl <- AssayDataList(baf=bcopy.list, lrr=rcopy.list)
pd <- phenoData(object)[match(ids, sampleNames(object)), ]
new("BafLrrSetList",
assayDataList=adl,
featureDataList=featureData(object),
phenoData=pd,
chromosome=chromosome(object),
annotation=annotation(object),
genome=genomeBuild(object))
}
/R/methods-BeadStudioSetList.R
benilton/oligoClasses
\name{Principal coordinate analysis using the Jensen-Shannon divergence}
\alias{esov.mds}
\title{
Principal coordinate analysis using the Jensen-Shannon divergence
}
\description{
Principal coordinate analysis using the Jensen-Shannon divergence.
}
\usage{
esov.mds(x, k = 2, eig = TRUE)
}
\arguments{
\item{x}{
A matrix with the compositional data. Zero values are allowed.
}
\item{k}{
The maximum dimension of the space which the data are to be represented in. This can be a number between
1 and \eqn{D-1}, where \eqn{D} denotes the number of dimensions.
}
\item{eig}{
Should eigenvalues be returned? The default value is TRUE.
}
}
\details{
The function computes the Jensen-Shannon divergence matrix and then plugs it into the classical
multidimensional scaling function in the "cmdscale" function.
}
\value{
A list with the results of "cmdscale" function.
}
\references{
Aitchison J. (1986). The statistical analysis of compositional data. Chapman & Hall.
Cox, T. F. and Cox, M. A. A. (2001). Multidimensional Scaling. Second edition. Chapman and Hall.
Mardia, K. V., Kent, J. T. and Bibby, J. M. (1979). Chapter 14 of Multivariate Analysis, London: Academic Press.
Tsagris, Michail (2015). A novel, divergence based, regression for compositional data.
Proceedings of the 28th Panhellenic Statistics Conference, 15-18/4/2015, Athens, Greece.
https://arxiv.org/pdf/1511.07600.pdf
}
\author{
Michail Tsagris.
R implementation and documentation: Michail Tsagris \email{mtsagris@uoc.gr}.
}
%\note{
%% ~~further notes~~
%}
\seealso{
\code{\link{alfa.mds}, \link{alfa.pca}}
}
\examples{
x <- as.matrix(iris[, 1:4])
x <- x/ rowSums(x)
a <- esov.mds(x)
}
/man/esov.mds.Rd
cran/Compositional
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/explorePatentData.R
\name{summarizeColumns}
\alias{summarizeColumns}
\title{Summarize columns of a data frame}
\usage{
summarizeColumns(df, names, naOmit = FALSE)
}
\arguments{
\item{df}{A data frame of patent data.}
\item{names}{a character vector of header names that you want to summarize.}
\item{naOmit}{Logical. Optionally, remove NA values at the end of the summary.
Useful when comparing fields that have NA values, such as features.}
}
\value{
A data frame of summarized values.
}
\description{
Summarize a data frame \code{df} by a \code{names} character vector of
header names.
}
\examples{
sumo <- cleanPatentData(patentData = patentr::acars, columnsExpected = sumobrainColumns,
cleanNames = sumobrainNames,
dateFields = sumobrainDateFields,
dateOrders = sumobrainDateOrder,
deduplicate = TRUE,
cakcDict = patentr::cakcDict,
docLengthTypesDict = patentr::docLengthTypesDict,
keepType = "grant",
firstAssigneeOnly = TRUE,
assigneeSep = ";",
stopWords = patentr::assigneeStopWords)
# note that in reality, you need a patent analyst to carefully score
# these patents, the score here is for demonstration purposes
score <- round(rnorm(dim(sumo)[1],mean=1.4,sd=0.9))
score[score>3] <- 3
score[score<0] <- 0
sumo$score <- score
scoreSum <- summarizeColumns(sumo, "score")
scoreSum
# load library(ggplot2) for the below part to run
# ggplot(scoreSum, aes(x=score, y = total, fill=factor(score) )) + geom_bar(stat="identity")
nameAndScore <- summarizeColumns(sumo, c("assigneeClean","score"))
# tail(nameAndScore)
}
|
/man/summarizeColumns.Rd
|
no_license
|
lupok2001/patentr
|
R
| false
| true
| 1,640
|
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/explorePatentData.R
\name{summarizeColumns}
\alias{summarizeColumns}
\title{Summarize columns of a data frame}
\usage{
summarizeColumns(df, names, naOmit = FALSE)
}
\arguments{
\item{df}{A data frame of patent data.}
\item{names}{a character vector of header names that you want to summarize.}
\item{naOmit}{Logical. Optionally, remove NA values at the end of the summary.
Useful when comparing fields that have NA values, such as features.}
}
\value{
A dataframe of summarize values.
}
\description{
Summarize columns of a data frame.
Summarize a data frame \code{df} by a \code{names} character vector of
header names.
}
\examples{
sumo <- cleanPatentData(patentData = patentr::acars, columnsExpected = sumobrainColumns,
cleanNames = sumobrainNames,
dateFields = sumobrainDateFields,
dateOrders = sumobrainDateOrder,
deduplicate = TRUE,
cakcDict = patentr::cakcDict,
docLengthTypesDict = patentr::docLengthTypesDict,
keepType = "grant",
firstAssigneeOnly = TRUE,
assigneeSep = ";",
stopWords = patentr::assigneeStopWords)
# note that in reality, you need a patent analyst to carefully score
# these patents, the score here is for demonstrational purposes
score <- round(rnorm(dim(sumo)[1],mean=1.4,sd=0.9))
score[score>3] <- 3
score[score<0] <- 0
sumo$score <- score
scoreSum <- summarizeColumns(sumo, "score")
scoreSum
# load library(ggplot2) for the below part to run
# ggplot(scoreSum, aes(x=score, y = total, fill=factor(score) )) + geom_bar(stat="identity")
nameAndScore <- summarizeColumns(sumo, c("assigneeClean","score"))
# tail(nameAndScore)
}
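The documented interface can be illustrated with a minimal base-R sketch; the function body below is hypothetical (the real patentr implementation may differ), and `summarizeColumns_sketch` is a made-up name:

```r
## Hypothetical sketch of a count-style summarizer matching the interface
## documented above (df, names, naOmit); NOT the actual patentr code.
summarizeColumns_sketch <- function(df, names, naOmit = FALSE) {
  ## count rows for each combination of the requested columns
  out <- aggregate(list(total = rep(1, nrow(df))), by = df[names], FUN = sum)
  if (naOmit) out <- na.omit(out)
  out
}
## usage: summarizeColumns_sketch(sumo, c("assigneeClean", "score"))
```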
## Processing torpor data from 2015 field season to incorporate with previous torpor data
## Anusha Shankar
## Started February 22, 2016
##Packages
library(ggplot2)
library(reshape)
library(gridExtra)
library(grid)
library(wq)
library(gam)
library(foreign)
library(MASS)
library(devtools)
require(dplyr)
#library(plotflow) #Useful function reorder_by() might be useful for ordering variables by others
## Set working directory and read in .csv file
setwd("C:\\Users\\nushi\\Dropbox\\Hummingbird energetics\\Tables_for_paper")
torpor2015 <- read.csv("Torpor_METY_AGCU_2015_detailed_hourly.csv")
litstudy <- read.csv("LitStudy_combined.csv")
litnew <- read.csv("LitStudy_andKruger2.csv")
krugertab <- read.csv("Lit_Kruger1982.csv")
k_melt <- read.csv("Lit_Kruger1982_modified.csv")
##In Jan 2018, have to check with other Kruger files that this is complete
litjan <- read.csv("LitStudy_andKruger.csv")
## Code for Anita project - 2015 and 2016 torpor data
tor_am <- read.csv("C:\\Users\\ANUSHA\\Dropbox\\Data 2015\\all_torpor_data.csv")
tor_am$NEE_MC <- tor_am$NEE/(tor_am$Av_mass^(2/3))
n_fun <- function(Species){
  return(data.frame(y = median(Species), label = paste0("n = ", length(Species))))
}
## Note: my_theme (used below) is defined further down in this script
ggplot(tor_am, aes(Species, NEE_MC)) + geom_boxplot() + geom_point(aes(col=Species)) + geom_point(aes(Species, mean(NEE_MC))) +
  stat_summary(fun.data = n_fun, geom = "text", vjust = -2, size = 8) + my_theme
### end code for Anita project
## For Nat Geo demo
gcbnight <- read.csv("Plotting_DailyGraphs_torpor_in_R//E14_0720_GCB.csv")
gcbsumm <- read.csv("Plotting_DailyGraphs_torpor_in_R//E14_0720_GCB_summary.csv")
birdsumms <- read.csv("Plotting_DailyGraphs_torpor_in_R//E14_bird_summaries_toplot.csv")
m.krug <- melt(krugertab, id.vars = c("Species", "Sex", "Mean_mass_g", "Temp"),
measure.vars = c("MR_day_J_g_hr", "MR_night_J_g_hr", "MR_torpor_J_g_hr"))
names(m.krug) <- c("Species", "Sex", "Mass", "Temp", "Measure", "Value")
levels(litstudy$Torpid_not) <- c("Normothermic", "Torpid", "Unknown")
## Min chamber temp in deg. C axis label
Tc_min.xlab <- expression(atop(paste("Minimum Chamber Temperature (", degree,"C)")))
## Making my easy-theme
my_theme <- theme_classic(base_size = 30) +
theme(axis.title.y = element_text(vjust = 2),
panel.border = element_rect(colour = "black", fill=NA))
## Function to return sample sizes
give.n <- function(x){
return(c(y = mean(x), label = length(x)))
}
## Useful for introduction !!!!!!!!!!!!!!!! #####
## McKechnie, A.E. and B.G. Lovegrove. 2002. Avian Facultative Hypothermic Responses: a Review.
# The Condor 104: 705.
## The capacity for shallow hypothermia (rest-phase hypothermia) occurs throughout the avian phylogeny,
## but the capacity for pronounced hypothermia (torpor) appears to be restricted to certain taxa.
## Families in which torpor has been reported include the Todidae, Coliidae, Trochilidae, Apodidae,
## Caprimulgidae, and Columbidae.
## Subsetting files
#agcu <- torpor2015[torpor2015$Species=="AGCU",]
#mety <- torpor2015[torpor2015$Species=="METY",]
tor_sub <- torpor2015[torpor2015$Species=="AGCU" | torpor2015$Species=="METY",]
##### Set time as a factor ######
#agcu_indiv <- torpor2015[torpor2015$BirdID=="EG15_0104_AGCU",]
#agcu_indiv$Time <- factor(agcu_indiv$Time, levels=agcu_indiv$Time)
#mety$Time <- factor(mety$Time, levels=mety$Time)
#mety_indiv <- torpor2015[torpor2015$BirdID=="EG15_1028_METY",]
#mety_indiv$Time <- factor(mety_indiv$Time, levels=mety_indiv$Time)
##METY days - 0910, 1028, 1130, 1209, 1211, 1212, 1219
##AGCU days - 0826, 1023, 1220, 1223, 0104
#### Literature and some Lit-study plots #####
#### Kruger et al. 1982 study, plotting values for 22 species ####
krugerplot <- ggplot(m.krug, aes(Temp, Value, group=interaction(Measure,Species))) + my_theme +
geom_line(aes(col=Measure)) +
xlab("Ambient temperature (deg. C)") + ylab("Energy expenditure (J/g*hr)") +
scale_y_continuous(breaks=c(0,50,100,200,400,600)) + theme(panel.grid.major.y = element_line(size=.1, color="grey"))
krugerplot
## Kruger's data, selecting particular species
krugerplot_sp <- ggplot(m.krug[m.krug$Species=="Aglaectis cupripennis",], aes(Temp, Value)) + my_theme +
geom_point(aes(col=Measure, size=Mass)) + ylab("Energy expenditure (J/g*hr)") +
scale_y_continuous(breaks=c(0,50,100,200,400,600)) + theme(panel.grid.major.y = element_line(size=.1, color="grey"))
krugerplot_sp
### Plot literature review plot study values ######
litplotstudy <- ggplot(litstudy, aes(Tc_min, EE_J)) +
geom_point(aes(col=Torpid_not, shape=Study_lit), size=4) +
scale_shape_manual(values=c(3,20)) + facet_grid(~Mass_categ) + my_theme +
scale_color_manual(values=c('black', '#ff3333', '#9999ff')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J/h)") + ggtitle("Mass category")
litplotstudy
### Plot literature review plot study values, just 7.5g mass category ######
litplotstudy <- ggplot(litstudy[litstudy$Mass_categ == 7.5, ], aes(Tc_min, EE_J)) +
geom_point(aes(col=Torpid_not, shape=Study_lit), size=4) +
scale_shape_manual(values=c(3,20)) + facet_grid(~Mass_categ) + my_theme +
scale_color_manual(values=c('black', '#ff3333', '#9999ff')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J/h)") + ggtitle("Mass category")
litplotstudy
levels(litjan$Torpid_not) <- c("Normothermic", "N_day", "Normothermic", "Torpid", "Unknown")
old.lvl<-levels(litjan$Mass_categ)
litjan$Mass_categ<-factor(litjan$Mass_categ,
levels=c(sort(old.lvl[old.lvl!="20"], decreasing=F), "20"))
### Plot literature review plot study values with kruger ######
litplotstudy_jan <- ggplot(litjan[litjan$Torpid_not !="N_day" &
litjan$Mass_categ%in% c("2_4", "6_8","8_10"),], aes(Temp, EE_J)) +
geom_point(aes(col=Torpid_not, shape=Study_lit, alpha=Study_lit), size=4) +
scale_alpha_manual(values=c(0.3,1)) +
scale_shape_manual(values=c(20,3)) + facet_grid(~Mass_categ) + my_theme +
scale_color_manual(values=c('black', '#ff3333', '#0066CC','green')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) + ylim(0,3300) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J/h)") + ggtitle("Mass category (g)")
litplotstudy_jan
### Plot just literature review values with kruger, just 6-8g category ######
litplot_jan_6to8 <- ggplot(litjan[litjan$Torpid_not !="N_day" & litjan$Mass_categ=="6_8" &
litjan$Study_lit=="Lit",], aes(Temp, EE_J)) +
geom_point(aes(col=Torpid_not), shape=20, alpha=0.5, size=4) +
geom_smooth(data=litjan[litjan$Torpid_not=="Normothermic" & litjan$Mass_categ=="6_8" &
litjan$Study_lit=="Lit",], method="lm", col="black", alpha=0.3) +
geom_smooth(data=litjan[litjan$Torpid_not=="Torpid" & litjan$Mass_categ=="6_8" &
litjan$Study_lit=="Lit" & litjan$Temp<28,], method="loess", col="black", alpha=0.3) +
#scale_alpha_manual(values=c(0.3,1)) +
#scale_shape_manual(values=c(20,3)) +
my_theme + #facet_grid(~Mass_categ) +
scale_color_manual(values=c('black', '#ff3333', '#9999ff')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=20)) + ylim(0,3300) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J/h)") #+ ggtitle("Mass category (g)")
litplot_jan_6to8
### Plot literature review plot study values with kruger, just 6-8g category ######
litplotstudy_jan_6to8 <- ggplot(litjan[litjan$Torpid_not !="N_day" &
litjan$Mass_categ%in% c("6_8"),], aes(Temp, EE_J)) +
geom_point(aes(col=Torpid_not, shape=Study_lit, alpha=Study_lit), size=4) +
geom_smooth(data=litjan[litjan$Torpid_not=="Normothermic" & litjan$Mass_categ=="6_8" &
litjan$Study_lit=="Lit",], method="lm", col="black", alpha=0.3) +
geom_smooth(data=litjan[litjan$Torpid_not=="Torpid" & litjan$Mass_categ=="6_8" &
litjan$Study_lit=="Lit" & litjan$Temp<28,], method="loess", col="black", alpha=0.3) +
scale_alpha_manual(values=c(0.5,1)) +
scale_shape_manual(values=c(20,3)) + my_theme + #facet_grid(~Mass_categ) +
scale_color_manual(values=c('black', '#ff3333', '#9999ff')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) + ylim(0,3300) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J/h)") #+ ggtitle("Mass category (g)")
litplotstudy_jan_6to8
## Subsetting just lit vals, including Kruger
litplot_jan <- ggplot(litjan[litjan$Torpid_not %in% c("Normothermic","Torpid") &
litjan$Study_lit=="Lit",],
aes(Temp, EE_J)) +
geom_point(aes(shape=Study_lit, col=Torpid_not), size=4, alpha=0.7) +
facet_grid(~Mass_categ) + my_theme +
scale_shape_manual(values=c(20,3)) +
scale_color_manual(values=c('black', '#ff3333', '#9999ff')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) +
guides(colour = guide_legend(override.aes = list(size=4))) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J)") + ggtitle("Mass category (g)")
litplot_jan
## Subsetting just the known values; excluding uncategorized
## Jan vals with Kruger
litplotstudy_subset_NT_jan <- ggplot(litjan[litjan$Torpid_not %in% c("Normothermic","Torpid"),],
aes(Temp, EE_J)) +
geom_point(aes(col=Torpid_not, shape=Study_lit), size=4, alpha=0.7) +
facet_grid(~Mass_categ) + my_theme +
scale_shape_manual(values=c(20,3)) +
scale_color_manual(values=c('black', '#ff3333', '#9999ff')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J)") + ggtitle("Mass category (g)")
litplotstudy_subset_NT_jan
litplotstudy_subset_NT_jan <- ggplot(litjan[litjan$Torpid_not %in% c("Normothermic","Torpid") &
litjan$Mass_categ%in% c("2_4", "6_8","8_10"),],
aes(Temp, EE_J)) +
geom_point(aes(col=Torpid_not, shape=Study_lit), size=4, alpha=0.7) +
facet_grid(~Mass_categ) + my_theme +
scale_shape_manual(values=c(20,3)) +
scale_color_manual(values=c('black', '#ff3333', '#9999ff')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J)") + ggtitle("Mass category (g)")
litplotstudy_subset_NT_jan
## Subsetting three mass categories, just the known values; excluding uncategorized
litplotstudy_subset_NT <- ggplot(litstudy[litstudy$Mass_categ %in% c(3,6,7.5) &
litstudy$Torpid_not %in% c("Normothermic","Torpid"),],
aes(Tc_min, EE_J)) +
geom_point(aes(col=Torpid_not, shape=Study_lit), size=4) +
scale_shape_manual(values=c(20,3)) + facet_grid(~Mass_categ) + my_theme +
scale_color_manual(values=c('black', '#ff3333', '#9999ff')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J)") + ggtitle("Mass category")
litplotstudy_subset_NT
## Subsetting three mass categories
litplotstudy_subset <- ggplot(litstudy[litstudy$Mass_categ %in% c(3,6,7.5),], aes(Tc_min, EE_J)) +
geom_point(aes(col=Torpid_not, shape=Study_lit), size=4) +
scale_shape_manual(values=c(20,3)) + facet_grid(~Mass_categ) + my_theme +
scale_color_manual(values=c('black', '#ff3333', '#9999ff')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J)") + ggtitle("Mass category")
litplotstudy_subset
## Plot just lit values
litplot <- ggplot(litstudy[litstudy$Study_lit=="Lit",], aes(Tc_min, EE_J)) +
my_theme + geom_point(aes(col=Torpid_not), size=3) +
scale_color_manual(values=c('black', '#ff3333')) +
facet_grid(.~Mass_categ) + ylim(0,3300) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J)") + ggtitle("Mass category")
litplot
## With Kruger et al. 1982 data added in
litplotnew <- ggplot(litnew, aes(Temp, EE_J/Mass)) +
theme_bw(base_size = 20) + geom_point(aes(col=Torpid_not, shape=Study_lit), size=4) +
scale_shape_manual(values=c(20,3)) + facet_grid(~Mass) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) + xlab(Tc_min.xlab) +
ylab("Energy expenditure (J/g*hr)")
litplotnew
## All lit data, just plotting torpor points
litplotnew_T <- ggplot(litnew[litnew$Torpid_not=="T",], aes(Temp, EE_J/Mass)) +
theme_bw(base_size = 20) + geom_point(aes(col=Mass, shape=Study_lit), size=5) +
scale_shape_manual(values=c(20,3)) + my_theme + xlab(Tc_min.xlab) +
scale_colour_gradientn(colours=rainbow(3)) + ggtitle("Energy expenditure in torpor") +
ylab("Energy expenditure (J/g*hr)")
litplotnew_T
#Testing slopes
summary(lm(EE_J/Mass~Temp, data=litnew[litnew$Torpid_not=="T" & litnew$Temp > 18,]))
summary(lm(EE_J/Mass~Temp, data=litnew[litnew$Torpid_not=="T" & litnew$Temp < 18,]))
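The two piecewise regressions above can also be compared in one model with an interaction term; this is a sketch on the same torpid subset of `litnew` (the 18 deg. C breakpoint is taken from the lines above):

```r
## Sketch: test whether the EE~Temp slope differs above vs. below 18 deg. C
## with a single interaction model (same torpid subset as above)
litnew_T <- litnew[litnew$Torpid_not == "T", ]
litnew_T$above18 <- litnew_T$Temp > 18
summary(lm(EE_J / Mass ~ Temp * above18, data = litnew_T))
```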
## Without the Unknown points from La Paz
lits <- ggplot(litnew[litnew$Torpid_not!="UK",], aes(Temp, EE_J/Mass)) +
theme_bw(base_size = 20) + geom_point(aes(col=Torpid_not, shape=Study_lit), size=4) +
scale_shape_manual(values=c(20,3)) + #facet_grid(~Mass) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) + xlab(Tc_min.xlab) +
ylab("Energy expenditure (J/g*hr)")
lits
## Just La paz data
coir_plot <- ggplot(litstudy[litstudy$Species=="COIR",], aes(Tc_min, EE_J)) +
theme_bw(base_size = 20) + geom_point(aes(col=Species), size=4) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) + xlab(Tc_min.xlab) +
ylab("Energy expenditure (J)") + xlim(7,15)
coir_plot
agcu_plot <- ggplot(litstudy[litstudy$Species=="AGCU",], aes(Tc_min, EE_J)) +
theme_bw(base_size = 20) + geom_point(aes(col=Species), size=4) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) + xlab(Tc_min.xlab) +
ylab("Energy expenditure (J)") + xlim(7,15)
agcu_plot
hevi_plot <- ggplot(litstudy[litstudy$Species=="HEVI",], aes(Tc_min, EE_J)) +
theme_bw(base_size = 20) + geom_point(aes(col=Species), size=4) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) + xlab(Tc_min.xlab) +
ylab("Energy expenditure (J)") + xlim(7,15)
hevi_plot
mety_plot <- ggplot(litstudy[litstudy$Species=="METY",], aes(Tc_min, EE_J)) +
theme_bw(base_size = 20) + geom_point(aes(col=Species), size=4) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) + xlab(Tc_min.xlab) +
ylab("Energy expenditure (J)") + xlim(7,15)
mety_plot
grid.arrange(coir_plot, agcu_plot, mety_plot, hevi_plot,
nrow=2, ncol=2)
### TO DO use Hainsworth AZ birds and Lasiewski 1963 torpor cutoffs, color points by whether they're 1. presumed torpid,
## 2. presumed normothermic, and 3. the weirdos.
## Regressing species' energy expenditure vs. Tc_min
summary(lm(EE_J~Tc_min, data=litstudy[litstudy$Species=="AGCU",]))
grid.text(unit(0.5,"npc"),0.99,label = "Mass in grams", gp=gpar(fontsize=20))
litstudy_med <- litstudy[litstudy$Mass_categ==7.5,]
litstudy_sm <- litstudy[litstudy$Mass_categ==3,]
litplot_med <- ggplot(litstudy_med, aes(Tc_min, EE_J)) +
theme_bw(base_size = 30) + geom_point(aes(col=Torpid_not, shape=Study_lit), size=6) +
scale_shape_manual("Source\n", values=c(20,3), labels=c("Literature", "This Study")) +
scale_color_brewer("Energetic state", palette="Set1",
labels=c("Normothermic", "Torpid", "Shallow hypothermia?")) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA),
legend.key.height=unit(3,"line")) + xlab(Tc_min.xlab) +
ylab("Energy expenditure (J)")
litplot_med
litplot_sm <- ggplot(litstudy_sm, aes(Tc_min, EE_J)) +
theme_bw(base_size = 20) + geom_point(aes(col=Species, shape=Study_lit), size=4) +
scale_shape_manual(values=c(20,3)) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA))
litplot_sm
####### Subsetting some columns from tor_sub and ordering them) ######
o.tor_sub <- na.omit(tor_sub[, c("Hourly", "Time", "EE_J", "BirdID","Species", "Ta_day_min",
"Ta_day_avg", "Ta_day_max", "Ta_night_min", "Tc_avg", "Tc_min")])
o.tor_sub$Hourly <- factor(o.tor_sub$Hourly, levels=unique(o.tor_sub$Hourly))
o.tor_sub$BirdID <- factor(o.tor_sub$BirdID,
levels = c("EG15_0826_AGCU", "EG15_0910_METY", "EG15_1023_AGCU",
"EG15_1028_METY", "EG15_1130_METY", "EG15_1209_METY",
"EG15_1211_METY","EG15_1212_METY", "EG15_1219_METY",
"EG15_1220_AGCU", "EG15_1223_AGCU", "EG15_0104_AGCU"))
## Without subsetting out the METYs and AGCU's - all of La Paz data
o.tor <- na.omit(torpor2015[, c("Hourly", "Time", "EE_J", "BirdID","Species", "Ta_day_min",
"Ta_day_avg", "Ta_day_max", "Ta_night_min", "Tc_avg", "Tc_min")])
o.tor$Hourly <- factor(o.tor$Hourly, levels=unique(o.tor$Hourly))
o.tor$BirdID <- factor(o.tor$BirdID,
levels = c("EG15_0826_AGCU", "EG15_0910_METY", "EG15_1023_AGCU",
"EG15_1028_METY", "EG15_1130_METY", "EG15_1209_METY",
"EG15_1211_METY","EG15_1212_METY", "EG15_1219_METY",
"EG15_1220_AGCU", "EG15_1223_AGCU", "EG15_0104_AGCU"))
#### Nightly hourly EE plots #####
##### For EC2014 birds, making plots for Nat Geo demonstration ####
## Plotting EE per hour by time, and labeling with chamber temperature
energy_gcb <- ggplot(gcbnight, aes(SampleNo, EE_J)) + theme_bw() +
geom_line(aes(group=BirdID)) + facet_grid(TimeSlot~., scales = "free_x") +
ylab("Energy expenditure (J)")
energy_gcb
## Short code to produce and save multiple plots from one ggplot snippet, by factor TimeSlot
tryplot <- ggplot(data = gcbnight, aes(SampleNo, EE_J)) + theme_bw() +
  geom_line() + ylab("Energy expenditure (J)") + xlab("Time (seconds)") +
  scale_y_continuous(limits=c(-5,50), breaks=c(0,2,5,10,20,30,40,50)) + theme(panel.grid.major.y = element_line(size=.1, color="grey"))
plotbunch_gcb <- gcbnight %>%
group_by(TimeSlot) %>%
do(plots = tryplot %+% . + facet_wrap(~TimeSlot))
plotbunch_gcb$plots[1]
gcbplots <- ggplot(data = gcbnight, aes(SampleNo, EE_J)) + theme_bw() +
geom_line() + ylab("Energy expenditure (J)") + ylim(-5,50) + xlab("Time (seconds)") +
theme(panel.grid.major.y = element_line(size=.1, color="grey"))
gcbplots
plotbunch_gcb <- gcbnight %>%
group_by(TimeSlot) %>%
do(plots = gcbplots %+% . + facet_wrap(~TimeSlot))
pdf("EC14_GCB_0720.pdf", width=10, height = 7)
plotbunch_gcb$plots
dev.off()
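`dplyr::do()` used above is deprecated in current dplyr; the same per-TimeSlot export can be sketched with base `split()`/`lapply()` (assumes `gcbnight` and `gcbplots` as defined above; the output filename is new so the original PDF is not overwritten):

```r
## Sketch: per-TimeSlot plots without dplyr::do(), via base split()
plots_by_slot <- lapply(split(gcbnight, gcbnight$TimeSlot), function(d) {
  gcbplots %+% d + facet_wrap(~TimeSlot)  # %+% swaps in the subset's data
})
pdf("EC14_GCB_0720_base.pdf", width = 10, height = 7)
invisible(lapply(plots_by_slot, print))
dev.off()
```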
Two_gcb_slots <- gcbnight[gcbnight$TimeSlot==9,] # | gcbnight$TimeSlot==5 | gcbnight$TimeSlot==7,]
m.gcbslots <- melt(Two_gcb_slots, id.vars = c("BirdID", "TimeSlot"), measure.vars = "EE_J")
m.gcbslots$TimeSlot <- as.factor(m.gcbslots$TimeSlot)
gcbfacet <- ggplot(m.gcbslots, aes(x = seq_along(value), y = value)) +
my_theme +
geom_line(col="red") + ylab("Energy expenditure (J)") + ylim(-5,50) + xlab("Time (seconds)")
gcbfacet
gcbsumm$Hour <- factor(gcbsumm$Hour, levels=unique(gcbsumm$Hour))
gcbsummplot <- ggplot(gcbsumm, aes(HourID, EE_J)) + my_theme + geom_point(size=3) + geom_line(aes(group="identity")) +
ylab("Energy expenditure (J)") + xlab("Hour") + scale_x_continuous(breaks = 1:10) +
scale_y_continuous(breaks=c(-100,0,100,200,500,1000,1500)) +
theme(panel.grid.major.y = element_line(size=.1, color="grey"))
gcbsummplot
birdsumms$BirdID <- factor(birdsumms$BirdID, levels=unique(birdsumms$BirdID))
birdsummsplot <- ggplot(birdsumms, aes(HourID, EE_J)) + my_theme +
geom_point(size=3) + geom_line(aes(group="identity")) +
ylab("Energy expenditure (J)") + xlab("Hour") + ylim(-10,2800) + scale_x_continuous(breaks = 1:11)
birdbunch <- birdsumms %>%
group_by(BirdID) %>%
do(plots = birdsummsplot %+% . + facet_wrap(~BirdID))
pdf("EC14_birdsumms.pdf", width=10, height = 7)
birdbunch$plots
dev.off()
### For La Paz birds ####
lapaz_hourly <- ggplot(data = o.tor, aes(Hourly, EE_J)) + theme_bw() + geom_line(aes(group="identity")) +
geom_point() + ylab("Energy expenditure (J)") + xlab("Hour") #+ ylim(-10,3000)
plotbunch_lapaz <- o.tor %>%
group_by(BirdID) %>%
do(plots = lapaz_hourly %+% . + facet_wrap(~BirdID))
pdf("LaPaz_plotbunch.pdf", width=10, height = 7)
plotbunch_lapaz$plots
dev.off()
## Plotting EE per hour by time, and labeling with chamber temperature
energy_metyagcu <- ggplot(o.tor_sub, aes(Hourly, EE_J)) + theme_bw(base_size=18) +
geom_line(aes(group=BirdID, col=Species), size=1.5) + facet_wrap(~BirdID, scales="free_x") +
geom_point() + geom_text(aes(label=Tc_min), vjust=-1) +
#geom_text(aes(label=Ta_day_min), col="red", vjust=1) +
#annotate("text", x=7, y=2100, label= paste("Ta daytime min = ", o.tor_sub$Ta_day_min)) +
ylab("Hourly energy expenditure (J)") + scale_color_manual(values=c("#000080", "#ff0000")) +
scale_y_continuous(breaks=c(0,100,200,300,500,1000,1500,2000))+
theme(axis.text.x = element_text(angle=30, hjust=1),
panel.grid.major.x = element_blank(),
panel.grid.major.y = element_line(size=.1, color="grey"),
panel.grid.minor = element_blank(),
strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) +
xlab("Hour step (Birdno_ArmyTime)") # + scale_x_discrete(labels=o.tor_sub$Time)
energy_metyagcu
#For Nat His talk
energy_mety <- ggplot(o.tor_sub[o.tor_sub$Species=="METY",], aes(Hourly, EE_J)) + theme_bw(base_size=18) +
geom_line(aes(group=BirdID, col=Species), size=1.5) + facet_wrap(~BirdID, scales="free_x") +
geom_point() + #geom_text(aes(label=Tc_min), vjust=-1) +
#geom_text(aes(label=Ta_day_min), col="red", vjust=1) +
#annotate("text", x=7, y=2100, label= paste("Ta daytime min = ", o.tor_sub$Ta_day_min)) +
ylab("Hourly energy expenditure (J)") + scale_color_manual(values=c("#000080", "#ff0000")) +
scale_y_continuous(breaks=c(0,100,200,300,500,1000,1500,2000))+
theme(axis.text.x = element_blank(),
panel.grid.major.x = element_blank(),
panel.grid.major.y = element_line(size=.1, color="grey"),
panel.grid.minor = element_blank(),
strip.background = element_blank(), strip.text = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) +
xlab("Nighttime Hour") # + scale_x_discrete(labels=o.tor_sub$Time)
energy_mety
## just agcu
energy_agcu <- ggplot(o.tor_sub[o.tor_sub$Species=="AGCU",], aes(Hourly, EE_J)) + theme_bw(base_size=18) +
  geom_line(aes(group=BirdID, col=Species), size=1.5) + facet_wrap(~BirdID, scales="free_x") +
  geom_point() + geom_text(aes(label=Tc_min), vjust=-1) +
  geom_text(aes(label=Ta_day_min), col="red", vjust=1) +
  #annotate("text", x=7, y=2100, label= paste("Ta daytime min = ", o.tor_sub$Ta_day_min)) +
  ylab("Hourly energy expenditure (J)") + scale_color_manual(values=c("#000080", "#ff0000")) +
  scale_y_continuous(breaks=c(0,100,200,300,500,1000,1500,2000))+
  theme(axis.text.x = element_text(angle=30, hjust=1),
        panel.grid.major.x = element_blank(),
        panel.grid.major.y = element_line(size=.1, color="grey"),
        panel.grid.minor = element_blank(),
        strip.background = element_blank(),
        panel.border = element_rect(colour = "black", fill=NA)) +
  xlab("Hour step (Birdno_ArmyTime)") # + scale_x_discrete(labels=o.tor_sub$Time)
energy_agcu
## Plotting EE per hour by time, and labeling with chamber temperature
energy_all <- ggplot(o.tor, aes(Hourly, EE_J)) + theme_bw(base_size=18) +
geom_line(aes(group=BirdID, col=Species), size=1.5) + facet_wrap(~BirdID, scales="free_x") +
geom_point() + geom_text(aes(label=Tc_min), vjust=-1) +
geom_text(aes(label=Ta_day_min), col="red", vjust=1) +
#annotate("text", x=7, y=2100, label= paste("Ta daytime min = ", o.tor_sub$Ta_day_min)) +
ylab("Hourly energy expenditure (J)") + #scale_color_manual(values=c("#000080", "#ff0000")) +
scale_y_continuous(breaks=c(0,100,200,300,500,1000,1500,2000))+
theme(axis.text.x = element_blank(),
panel.grid.major.x = element_blank(),
panel.grid.major.y = element_line(size=.1, color="grey"),
panel.grid.minor = element_blank(),
strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) +
xlab("Hour step (Birdno_ArmyTime)") # + scale_x_discrete(labels=o.tor_sub$Time)
energy_all
#Plot EE over night for agcu (requires agcu_indiv from the commented-out "Set time as a factor" block above)
energy15_agcu <- ggplot(na.omit(agcu_indiv[, c("Time", "EE_J", "BirdID")]),aes(Time, EE_J)) +
theme_bw(base_size=30) + geom_line(aes(group=BirdID, col=BirdID), size=2) +
scale_color_manual(values="purple") +
ylab("Hourly energy expenditure (J)")
energy15_agcu
#Plot EE over night for mety (requires mety_indiv from the commented-out block above)
energy15_mety <- ggplot(na.omit(mety_indiv[, c("Time", "EE_J", "BirdID")]), aes(Time, EE_J)) +
theme_bw(base_size=30) + geom_line(aes(group=BirdID, col = BirdID), size=2) +
ylab("Hourly energy expenditure (J)")
energy15_mety
## Plot NEE (note: m.nee is not defined in this script)
energy_plot <- ggplot(m.nee, aes(Species, value)) + theme_bw(base_size = 30) +
geom_boxplot(size=2) + geom_point(aes(col=Species), size=6) +
ylab("Nighttime energy expenditure (kJ)")
energy_plot
###### For later #######
## Note: the 'torpor' data frame below is not created in this script
## Adding column dividing NEE by Mass^(2/3) to correct for mass with allometric scaling
torpor$NEE_MassCorrected <- torpor$NEE_kJ/(torpor$Mass^(2/3))
## Adding columns to correct for mass in Avg EE normo, Min EE normo, torpid, etc.
torpor$AvgEE_normo_MassCorrected <- torpor$Avg_EE_hourly_normo/(torpor$Mass^(2/3))
torpor$MinEE_normo_MassCorrected <- as.numeric(torpor$Min_EE_normo)/(torpor$Mass^(2/3))
torpor$AvgEE_torpid_MassCorrected <- torpor$Avg_EE_hourly_torpid/(torpor$Mass^(2/3))
torpor$MinEE_torpid_MassCorrected <- as.numeric(torpor$Min_EE_torpid)/(torpor$Mass^(2/3))
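Since the same correction is applied five times above, it could be factored into a small helper; `mass_correct` is a new (hypothetical) name, and the 2/3 exponent follows the allometric scaling used earlier in this script:

```r
## Sketch: helper for allometric mass correction, energy / mass^(2/3)
mass_correct <- function(energy, mass, exponent = 2/3) {
  energy / (mass ^ exponent)
}
## e.g. torpor$NEE_MassCorrected <- mass_correct(torpor$NEE_kJ, torpor$Mass)
```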
/Torpor1415/Torpor2015.R | no_license | nushiamme/AShankar_hummers | R | r
litplotstudy_jan_6to8 <- ggplot(litjan[litjan$Torpid_not !="N_day" &
litjan$Mass_categ%in% c("6_8"),], aes(Temp, EE_J)) +
geom_point(aes(col=Torpid_not, shape=Study_lit, alpha=Study_lit), size=4) +
geom_smooth(data=litjan[litjan$Torpid_not=="Normothermic" & litjan$Mass_categ=="6_8" &
litjan$Study_lit=="Lit",], method="lm", col="black", alpha=0.3) +
geom_smooth(data=litjan[litjan$Torpid_not=="Torpid" & litjan$Mass_categ=="6_8" &
litjan$Study_lit=="Lit" & litjan$Temp<28,], method="loess", col="black", alpha=0.3) +
scale_alpha_manual(values=c(0.5,1)) +
scale_shape_manual(values=c(20,3)) + my_theme + #facet_grid(~Mass_categ) +
scale_color_manual(values=c('black', '#ff3333', '#9999ff')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) + ylim(0,3300) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J/h)") #+ ggtitle("Mass category (g)")
litplotstudy_jan_6to8
## Subsetting just lit vals, including Kruger
litplot_jan <- ggplot(litjan[litjan$Torpid_not %in% c("Normothermic","Torpid") &
litjan$Study_lit=="Lit",],
aes(Temp, EE_J)) +
geom_point(aes(shape=Study_lit, col=Torpid_not), size=4, alpha=0.7) +
facet_grid(~Mass_categ) + my_theme +
scale_shape_manual(values=c(20,3)) +
scale_color_manual(values=c('black', '#ff3333', '#9999ff')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) +
guides(colour = guide_legend(override.aes = list(size=4))) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J)") + ggtitle("Mass category (g)")
litplot_jan
## Subsetting just the known values; excluding uncategorized
## Jan vals with Kruger
litplotstudy_subset_NT_jan <- ggplot(litjan[litjan$Torpid_not %in% c("Normothermic","Torpid"),],
aes(Temp, EE_J)) +
geom_point(aes(col=Torpid_not, shape=Study_lit), size=4, alpha=0.7) +
facet_grid(~Mass_categ) + my_theme +
scale_shape_manual(values=c(20,3)) +
scale_color_manual(values=c('black', '#ff3333', '#9999ff')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J)") + ggtitle("Mass category (g)")
litplotstudy_subset_NT_jan
litplotstudy_subset_NT_jan <- ggplot(litjan[litjan$Torpid_not %in% c("Normothermic","Torpid") &
litjan$Mass_categ%in% c("2_4", "6_8","8_10"),],
aes(Temp, EE_J)) +
geom_point(aes(col=Torpid_not, shape=Study_lit), size=4, alpha=0.7) +
facet_grid(~Mass_categ) + my_theme +
scale_shape_manual(values=c(20,3)) +
scale_color_manual(values=c('black', '#ff3333', '#9999ff')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J)") + ggtitle("Mass category (g)")
litplotstudy_subset_NT_jan
## Subsetting three mass categories, just the known values; excluding uncategorized
litplotstudy_subset_NT <- ggplot(litstudy[litstudy$Mass_categ %in% c(3,6,7.5) &
litstudy$Torpid_not %in% c("Normothermic","Torpid"),],
aes(Tc_min, EE_J)) +
geom_point(aes(col=Torpid_not, shape=Study_lit), size=4) +
scale_shape_manual(values=c(20,3)) + facet_grid(~Mass_categ) + my_theme +
scale_color_manual(values=c('black', '#ff3333', '#9999ff')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J)") + ggtitle("Mass category")
litplotstudy_subset_NT
## Subsetting three mass categories
litplotstudy_subset <- ggplot(litstudy[litstudy$Mass_categ %in% c(3,6,7.5),], aes(Tc_min, EE_J)) +
geom_point(aes(col=Torpid_not, shape=Study_lit), size=4) +
scale_shape_manual(values=c(20,3)) + facet_grid(~Mass_categ) + my_theme +
scale_color_manual(values=c('black', '#ff3333', '#9999ff')) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J)") + ggtitle("Mass category")
litplotstudy_subset
## Plot just lit values
litplot <- ggplot(litstudy[litstudy$Study_lit=="Lit",], aes(Tc_min, EE_J)) +
my_theme + geom_point(aes(col=Torpid_not), size=3) +
scale_color_manual(values=c('black', '#ff3333')) +
facet_grid(.~Mass_categ) + ylim(0,3300) +
theme(legend.key.height = unit(3,"line"), plot.title = element_text(hjust = 0.5, size=20),
axis.text.x = element_text(size=15)) +
xlab(Tc_min.xlab) + ylab("Energy expenditure (J)") + ggtitle("Mass category")
litplot
## With Kruger et al. 1982 data added in
litplotnew <- ggplot(litnew, aes(Temp, EE_J/Mass)) +
theme_bw(base_size = 20) + geom_point(aes(col=Torpid_not, shape=Study_lit), size=4) +
scale_shape_manual(values=c(20,3)) + facet_grid(~Mass) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) + xlab(Tc_min.xlab) +
ylab("Energy expenditure (J/g*hr)")
litplotnew
## All lit data, just plotting torpor points
litplotnew_T <- ggplot(litnew[litnew$Torpid_not=="T",], aes(Temp, EE_J/Mass)) +
theme_bw(base_size = 20) + geom_point(aes(col=Mass, shape=Study_lit), size=5) +
scale_shape_manual(values=c(20,3)) + my_theme + xlab(Tc_min.xlab) +
scale_colour_gradientn(colours=rainbow(3)) + ggtitle("Energy expenditure in torpor") +
ylab("Energy expenditure (J/g*hr)")
litplotnew_T
#Testing slopes
summary(lm(EE_J/Mass~Temp, data=litnew[litnew$Torpid_not=="T" & litnew$Temp > 18,]))
summary(lm(EE_J/Mass~Temp, data=litnew[litnew$Torpid_not=="T" & litnew$Temp < 18,]))
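The two `lm()` calls above split the torpid data at 18 deg C; the same breakpoint can be expressed in a single broken-stick model so the slope change itself gets a test. A minimal sketch (the 18 deg C knot and column names come from the subsets above; the `Temp_hi` helper column is illustrative, not from the original):

```r
# Single-model version of the two regressions above: one slope below
# 18 deg C, plus an explicit change in slope above the knot.
tdat <- litnew[litnew$Torpid_not == "T", ]
tdat$Temp_hi <- pmax(tdat$Temp - 18, 0)   # 0 below the knot, (Temp - 18) above
fit <- lm(EE_J / Mass ~ Temp + Temp_hi, data = tdat)
summary(fit)  # the Temp_hi coefficient is the slope change at 18 deg C
```

The `pmax()` trick is the usual base-R way to fit a piecewise-linear ("hockey stick") regression without extra packages.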
## Without the Unknown points from La Paz
lits <- ggplot(litnew[litnew$Torpid_not!="UK",], aes(Temp, EE_J/Mass)) +
theme_bw(base_size = 20) + geom_point(aes(col=Torpid_not, shape=Study_lit), size=4) +
scale_shape_manual(values=c(20,3)) + #facet_grid(~Mass) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) + xlab(Tc_min.xlab) +
ylab("Energy expenditure (J/g*hr)")
lits
## Just La paz data
coir_plot <- ggplot(litstudy[litstudy$Species=="COIR",], aes(Tc_min, EE_J)) +
theme_bw(base_size = 20) + geom_point(aes(col=Species), size=4) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) + xlab(Tc_min.xlab) +
ylab("Energy expenditure (J)") + xlim(7,15)
coir_plot
agcu_plot <- ggplot(litstudy[litstudy$Species=="AGCU",], aes(Tc_min, EE_J)) +
theme_bw(base_size = 20) + geom_point(aes(col=Species), size=4) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) + xlab(Tc_min.xlab) +
ylab("Energy expenditure (J)") + xlim(7,15)
agcu_plot
hevi_plot <- ggplot(litstudy[litstudy$Species=="HEVI",], aes(Tc_min, EE_J)) +
theme_bw(base_size = 20) + geom_point(aes(col=Species), size=4) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) + xlab(Tc_min.xlab) +
ylab("Energy expenditure (J)") + xlim(7,15)
hevi_plot
mety_plot <- ggplot(litstudy[litstudy$Species=="METY",], aes(Tc_min, EE_J)) +
theme_bw(base_size = 20) + geom_point(aes(col=Species), size=4) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) + xlab(Tc_min.xlab) +
ylab("Energy expenditure (J)") + xlim(7,15)
mety_plot
grid.arrange(coir_plot, agcu_plot, mety_plot, hevi_plot,
nrow=2, ncol=2)
### TO DO use Hainsworth AZ birds and Lasiewski 1963 torpor cutoffs, color points by whether they're 1. presumed torpid,
## 2. presumed normothermic, and 3. the weirdos.
## Regressing species' energy expenditure vs. Tc_min
summary(lm(EE_J~Tc_min, data=litstudy[litstudy$Species=="AGCU",]))
grid.text(unit(0.5,"npc"),0.99,label = "Mass in grams", gp=gpar(fontsize=20))
litstudy_med <- litstudy[litstudy$Mass_categ==7.5,]
litstudy_sm <- litstudy[litstudy$Mass_categ==3,]
litplot_med <- ggplot(litstudy_med, aes(Tc_min, EE_J)) +
theme_bw(base_size = 30) + geom_point(aes(col=Torpid_not, shape=Study_lit), size=6) +
scale_shape_manual("Source\n", values=c(20,3), labels=c("Literature", "This Study")) +
scale_color_brewer("Energetic state", palette="Set1",
labels=c("Normothermic", "Torpid", "Shallow hypothermia?")) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA),
legend.key.height=unit(3,"line")) + xlab(Tc_min.xlab) +
ylab("Energy expenditure (J)")
litplot_med
litplot_sm <- ggplot(litstudy_sm, aes(Tc_min, EE_J)) +
theme_bw(base_size = 20) + geom_point(aes(col=Species, shape=Study_lit), size=4) +
scale_shape_manual(values=c(20,3)) +
theme(strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA))
litplot_sm
####### Subsetting some columns from tor_sub and ordering them) ######
o.tor_sub <- na.omit(tor_sub[, c("Hourly", "Time", "EE_J", "BirdID","Species", "Ta_day_min",
"Ta_day_avg", "Ta_day_max", "Ta_night_min", "Tc_avg", "Tc_min")])
o.tor_sub$Hourly <- factor(o.tor_sub$Hourly, levels = unique(o.tor_sub$Hourly))
o.tor_sub$BirdID <- factor(o.tor_sub$BirdID,
levels = c("EG15_0826_AGCU", "EG15_0910_METY", "EG15_1023_AGCU",
"EG15_1028_METY", "EG15_1130_METY", "EG15_1209_METY",
"EG15_1211_METY","EG15_1212_METY", "EG15_1219_METY",
"EG15_1220_AGCU", "EG15_1223_AGCU", "EG15_0104_AGCU"))
## Without subsetting out the METYs and AGCU's - all of La Paz data
o.tor <- na.omit(torpor2015[, c("Hourly", "Time", "EE_J", "BirdID","Species", "Ta_day_min",
"Ta_day_avg", "Ta_day_max", "Ta_night_min", "Tc_avg", "Tc_min")])
o.tor$Hourly <- factor(o.tor$Hourly, levels = unique(o.tor$Hourly))
o.tor$BirdID <- factor(o.tor$BirdID,
levels = c("EG15_0826_AGCU", "EG15_0910_METY", "EG15_1023_AGCU",
"EG15_1028_METY", "EG15_1130_METY", "EG15_1209_METY",
"EG15_1211_METY","EG15_1212_METY", "EG15_1219_METY",
"EG15_1220_AGCU", "EG15_1223_AGCU", "EG15_0104_AGCU"))
#### Nightly hourly EE plots #####
##### For EC2014 birds, making plots for Nat Geo demonstration ####
## Plotting EE per hour by time, and labeling with chamber temperature
energy_gcb <- ggplot(gcbnight, aes(SampleNo, EE_J)) + theme_bw() +
geom_line(aes(group=BirdID)) + facet_grid(TimeSlot~., scales = "free_x") +
ylab("Energy expenditure (J)")
energy_gcb
## Short code to produce and save multiple plots from one ggplot snippet, by factor TimeSlot
tryplot <- ggplot(data = gcbnight, aes(SampleNo, EE_J)) + theme_bw() +
  geom_line() + ylab("Energy expenditure (J)") + xlab("Time (seconds)") +
  scale_y_continuous(limits = c(-5, 50), breaks=c(0,2,5,10,20,30,40,50)) + theme(panel.grid.major.y = element_line(size=.1, color="grey"))
plotbunch_gcb <- gcbnight %>%
group_by(TimeSlot) %>%
do(plots = tryplot %+% . + facet_wrap(~TimeSlot))
plotbunch_gcb$plots[1]
gcbplots <- ggplot(data = gcbnight, aes(SampleNo, EE_J)) + theme_bw() +
geom_line() + ylab("Energy expenditure (J)") + ylim(-5,50) + xlab("Time (seconds)") +
theme(panel.grid.major.y = element_line(size=.1, color="grey"))
gcbplots
plotbunch_gcb <- gcbnight %>%
group_by(TimeSlot) %>%
do(plots = gcbplots %+% . + facet_wrap(~TimeSlot))
pdf("EC14_GCB_0720.pdf", width=10, height = 7)
plotbunch_gcb$plots
dev.off()
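The `group_by() %>% do()` idiom above still works but is superseded in current dplyr; an equivalent sketch using base `split()` and `lapply()`, reusing the `gcbplots` template and ggplot2's `%+%` data-replacement operator from above (the output filename here is illustrative):

```r
# Alternative to the do()-based plot bunch: split the data by TimeSlot
# and rebuild the template plot for each subset.
plots_by_slot <- lapply(split(gcbnight, gcbnight$TimeSlot), function(d) {
  gcbplots %+% d + facet_wrap(~TimeSlot)
})
pdf("EC14_GCB_0720_alt.pdf", width = 10, height = 7)
invisible(lapply(plots_by_slot, print))  # one page per TimeSlot
dev.off()
```

This avoids storing ggplot objects inside a grouped data frame, which is the part of `do()` that newer dplyr discourages.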
Two_gcb_slots <- gcbnight[gcbnight$TimeSlot==9,] # | gcbnight$TimeSlot==5 | gcbnight$TimeSlot==7,]
m.gcbslots <- melt(Two_gcb_slots, id.vars = c("BirdID", "TimeSlot"), measure.vars = "EE_J")
m.gcbslots$TimeSlot <- as.factor(m.gcbslots$TimeSlot)
gcbfacet <- ggplot(m.gcbslots, aes(x = seq(1, length(m.gcbslots$value)), value)) +
my_theme +
geom_line(col="red") + ylab("Energy expenditure (J)") + ylim(-5,50) + xlab("Time (seconds)")
gcbfacet
gcbsumm$Hour <- factor(gcbsumm$Hour, levels = unique(gcbsumm$Hour))
gcbsummplot <- ggplot(gcbsumm, aes(HourID, EE_J)) + my_theme + geom_point(size=3) + geom_line(aes(group="identity")) +
ylab("Energy expenditure (J)") + xlab("Hour") + scale_x_continuous(breaks = 1:10) +
scale_y_continuous(breaks=c(-100,0,100,200,500,1000,1500)) +
theme(panel.grid.major.y = element_line(size=.1, color="grey"))
gcbsummplot
birdsumms$BirdID <- factor(birdsumms$BirdID, levels = unique(birdsumms$BirdID))
birdsummsplot <- ggplot(birdsumms, aes(HourID, EE_J)) + my_theme +
geom_point(size=3) + geom_line(aes(group="identity")) +
ylab("Energy expenditure (J)") + xlab("Hour") + ylim(-10,2800) + scale_x_continuous(breaks = 1:11)
birdbunch <- birdsumms %>%
group_by(BirdID) %>%
do(plots = birdsummsplot %+% . + facet_wrap(~BirdID))
pdf("EC14_birdsumms.pdf", width=10, height = 7)
birdbunch$plots
dev.off()
### For La Paz birds ####
lapaz_hourly <- ggplot(data = o.tor, aes(Hourly, EE_J)) + theme_bw() + geom_line(aes(group="identity")) +
geom_point() + ylab("Energy expenditure (J)") + xlab("Hour") #+ ylim(-10,3000)
plotbunch_lapaz <- o.tor %>%
group_by(BirdID) %>%
do(plots = lapaz_hourly %+% . + facet_wrap(~BirdID))
pdf("LaPaz_plotbunch.pdf", width=10, height = 7)
plotbunch_lapaz$plots
dev.off()
## Plotting EE per hour by time, and labeling with chamber temperature
energy_metyagcu <- ggplot(o.tor_sub, aes(Hourly, EE_J)) + theme_bw(base_size=18) +
geom_line(aes(group=BirdID, col=Species), size=1.5) + facet_wrap(~BirdID, scales="free_x") +
geom_point() + geom_text(aes(label=Tc_min), vjust=-1) +
#geom_text(aes(label=Ta_day_min), col="red", vjust=1) +
#annotate("text", x=7, y=2100, label= paste("Ta daytime min = ", o.tor_sub$Ta_day_min)) +
ylab("Hourly energy expenditure (J)") + scale_color_manual(values=c("#000080", "#ff0000")) +
scale_y_continuous(breaks=c(0,100,200,300,500,1000,1500,2000))+
theme(axis.text.x = element_text(angle=30, hjust=1),
panel.grid.major.x = element_blank(),
panel.grid.major.y = element_line(size=.1, color="grey"),
panel.grid.minor = element_blank(),
strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) +
xlab("Hour step (Birdno_ArmyTime)") # + scale_x_discrete(labels=o.tor_sub$Time)
energy_metyagcu
#For Nat His talk
energy_mety <- ggplot(o.tor_sub[o.tor_sub$Species=="METY",], aes(Hourly, EE_J)) + theme_bw(base_size=18) +
geom_line(aes(group=BirdID, col=Species), size=1.5) + facet_wrap(~BirdID, scales="free_x") +
geom_point() + #geom_text(aes(label=Tc_min), vjust=-1) +
#geom_text(aes(label=Ta_day_min), col="red", vjust=1) +
#annotate("text", x=7, y=2100, label= paste("Ta daytime min = ", o.tor_sub$Ta_day_min)) +
ylab("Hourly energy expenditure (J)") + scale_color_manual(values=c("#000080", "#ff0000")) +
scale_y_continuous(breaks=c(0,100,200,300,500,1000,1500,2000))+
theme(axis.text.x = element_blank(),
panel.grid.major.x = element_blank(),
panel.grid.major.y = element_line(size=.1, color="grey"),
panel.grid.minor = element_blank(),
strip.background = element_blank(), strip.text = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) +
xlab("Nighttime Hour") # + scale_x_discrete(labels=o.tor_sub$Time)
energy_mety
## Just AGCU, with daytime-min Ta labels added
energy_agcu <- ggplot(o.tor_sub[o.tor_sub$Species=="AGCU",], aes(Hourly, EE_J)) + theme_bw(base_size=18) +
  geom_line(aes(group=BirdID, col=Species), size=1.5) + facet_wrap(~BirdID, scales="free_x") +
  geom_point() + geom_text(aes(label=Tc_min), vjust=-1) +
  geom_text(aes(label=Ta_day_min), col="red", vjust=1) +
  #annotate("text", x=7, y=2100, label= paste("Ta daytime min = ", o.tor_sub$Ta_day_min)) +
  ylab("Hourly energy expenditure (J)") + scale_color_manual(values=c("#000080", "#ff0000")) +
  scale_y_continuous(breaks=c(0,100,200,300,500,1000,1500,2000))+
  theme(axis.text.x = element_text(angle=30, hjust=1),
        panel.grid.major.x = element_blank(),
        panel.grid.major.y = element_line(size=.1, color="grey"),
        panel.grid.minor = element_blank(),
        strip.background = element_blank(),
        panel.border = element_rect(colour = "black", fill=NA)) +
  xlab("Hour step (Birdno_ArmyTime)") # + scale_x_discrete(labels=o.tor_sub$Time)
energy_agcu
## Plotting EE per hour by time, and labeling with chamber temperature
energy_all <- ggplot(o.tor, aes(Hourly, EE_J)) + theme_bw(base_size=18) +
geom_line(aes(group=BirdID, col=Species), size=1.5) + facet_wrap(~BirdID, scales="free_x") +
geom_point() + geom_text(aes(label=Tc_min), vjust=-1) +
geom_text(aes(label=Ta_day_min), col="red", vjust=1) +
#annotate("text", x=7, y=2100, label= paste("Ta daytime min = ", o.tor_sub$Ta_day_min)) +
ylab("Hourly energy expenditure (J)") + #scale_color_manual(values=c("#000080", "#ff0000")) +
scale_y_continuous(breaks=c(0,100,200,300,500,1000,1500,2000))+
theme(axis.text.x = element_blank(),
panel.grid.major.x = element_blank(),
panel.grid.major.y = element_line(size=.1, color="grey"),
panel.grid.minor = element_blank(),
strip.background = element_blank(),
panel.border = element_rect(colour = "black", fill=NA)) +
xlab("Hour step (Birdno_ArmyTime)") # + scale_x_discrete(labels=o.tor_sub$Time)
energy_all
#Plot EE over night for agcu
energy15_agcu <- ggplot(na.omit(agcu_indiv[, c("Time", "EE_J", "BirdID")]),aes(Time, EE_J)) +
theme_bw(base_size=30) + geom_line(aes(group=BirdID, col=BirdID), size=2) +
scale_color_manual(values="purple") +
ylab("Hourly energy expenditure (J)")
energy15_agcu
#Plot EE over night for mety
energy15_mety <- ggplot(na.omit(mety_indiv[, c("Time", "EE_J", "BirdID")]), aes(Time, EE_J)) +
theme_bw(base_size=30) + geom_line(aes(group=BirdID, col = BirdID), size=2) +
ylab("Hourly energy expenditure (J)")
energy15_mety
## Plot NEE
energy_plot <- ggplot(m.nee, aes(Species, value)) + theme_bw(base_size = 30) +
geom_boxplot(size=2) + geom_point(aes(col=Species), size=6) +
ylab("Nighttime energy expenditure (kJ)")
energy_plot
###### For later #######
## Adding column dividing NEE by 2/3*Mass to correct for mass with allometric scaling
torpor$NEE_MassCorrected<- torpor$NEE_kJ/((2/3)*torpor$Mass)
## Adding columns to correct for mass in Avg EE normo, Min EE normo, torpid, etc.
torpor$AvgEE_normo_MassCorrected <- torpor$Avg_EE_hourly_normo/((2/3)*torpor$Mass)
torpor$MinEE_normo_MassCorrected <- as.numeric(torpor$Min_EE_normo)/((2/3)*torpor$Mass)
torpor$AvgEE_torpid_MassCorrected <- torpor$Avg_EE_hourly_torpid/((2/3)*torpor$Mass)
torpor$MinEE_torpid_MassCorrected <- as.numeric(torpor$Min_EE_torpid)/((2/3)*torpor$Mass)
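Note that dividing by `(2/3)*Mass` rescales by mass times a constant, which is equivalent to a plain per-gram correction. If the intent of "allometric scaling" was the usual metabolic exponent, the divisor would be `Mass^(2/3)` instead. A hedged sketch of that alternative (whether the exponent form was intended is an assumption; the `_Allo` column names are illustrative):

```r
# Possible allometric-exponent version of the corrections above:
# divide by Mass^(2/3) (surface-area scaling) rather than (2/3)*Mass.
torpor$NEE_Allo          <- torpor$NEE_kJ / (torpor$Mass^(2/3))
torpor$AvgEE_normo_Allo  <- torpor$Avg_EE_hourly_normo / (torpor$Mass^(2/3))
torpor$AvgEE_torpid_Allo <- torpor$Avg_EE_hourly_torpid / (torpor$Mass^(2/3))
```

The two corrections rank individuals differently whenever body mass varies, so the choice matters for between-species comparisons.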
### 0. preparation for the data cleaning
# call libraries (plyr is attached before dplyr so it does not mask dplyr verbs)
library(plyr)
library(dplyr)
library(reshape2)
# download raw data
fileURL <- "https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip"
download.file(fileURL, "rawdata.zip", method="curl")
# unzip raw data
unzip("rawdata.zip")
# read predefined data
acs <- read.table("UCI HAR Dataset/activity_labels.txt")
fes <- read.table("UCI HAR Dataset/features.txt")
# read train data
str <- read.table("UCI HAR Dataset/train/subject_train.txt", header=FALSE)
xtr <- read.table("UCI HAR Dataset/train/X_train.txt", header=FALSE)
ytr <- read.table("UCI HAR Dataset/train/y_train.txt", header=FALSE)
# read test data
ste <- read.table("UCI HAR Dataset/test/subject_test.txt", header=FALSE)
xte <- read.table("UCI HAR Dataset/test/X_test.txt", header=FALSE)
yte <- read.table("UCI HAR Dataset/test/y_test.txt", header=FALSE)
### 1. merges the training and the test sets to create one data set
trn <- cbind(str, ytr, xtr)
tst <- cbind(ste, yte, xte)
dat <- rbind(trn, tst)
colnames(dat) <- c("subject", "activity", as.character(fes[, 2]))
### 2. extracts only the measurements on the mean and standard deviation - 79
idx <- grep("subject|activity|mean|std", colnames(dat))
dat.msd <- dat[,idx]
### 3. Uses descriptive activity names to name the activities in the data set
colnames(acs) <- c("activity", "activityName")
dat.msd <- join(dat.msd, acs, by = "activity", match = "first")
dat.msd <- select(dat.msd, subject, activityName, 3:82)
### 4. appropriately labels the data set with descriptive variable names
# remove special characters
names(dat.msd) <- gsub("\\(|\\)", "", names(dat.msd), perl = TRUE)
# descriptive names
names(dat.msd) <- gsub("Acc", "Acceleration", names(dat.msd))
names(dat.msd) <- gsub("BodyBody", "Body", names(dat.msd))
names(dat.msd) <- gsub("mean", "Mean", names(dat.msd))
names(dat.msd) <- gsub("std", "Std", names(dat.msd))
names(dat.msd) <- gsub("Freq", "Frequency", names(dat.msd))
names(dat.msd) <- gsub("Mag", "Magnitude", names(dat.msd))
names(dat.msd) <- gsub("^t", "Time", names(dat.msd))
names(dat.msd) <- gsub("^f", "Frequency", names(dat.msd))
### 5. tidy data set with the average of each variable for each activity and each subject
# group and average
dat.msd.melted <- melt(dat.msd, id = c("subject", "activityName"))
dat.msd.mean <- dcast(dat.msd.melted, subject + activityName ~ variable, mean)
# write to txt file
write.table(dat.msd.mean, file="tidy_data.txt", row.names = FALSE)
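The melt/dcast aggregation in step 5 can be written with dplyr alone; a sketch assuming the same `dat.msd` columns (`subject`, `activityName`, plus numeric measurement columns):

```r
# dplyr-only equivalent of the melt()/dcast() averaging above:
# mean of every measurement column per subject and activity.
# (summarise_all() is superseded by across() in current dplyr but still works.)
dat.msd.mean2 <- dat.msd %>%
  group_by(subject, activityName) %>%
  summarise_all(mean)
```

Both routes produce one row per subject/activity pair; the reshape2 version is kept above because the script already loads that package.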
### source: Course 3 - Getting and Cleaning Data/Course 3 - Assignments/run_analysis.R
### repo: buihongthu/datasciencespecialization (no license; 2,535 bytes; language: R)
#
#
#
context("Test that all null model methods work")
# Here we just run the code to check that it works
test_that("All null model methods work", {
# We need to increase the max number of iterations otherwise warnings are produced
options(spatialwarnings.constants.maxit = 10^(8.5))
options(spatialwarnings.constants.reltol = 1e-8)
# Check that all methods run
all_methods <- c("perm", "intercept", "smooth")
testmat <- matrix(runif(30*30) > .7, ncol = 30)
testmat[1:15, ] <- TRUE
dat <- list(testmat, testmat)
a <- generic_sews(dat)
b <- patchdistr_sews(dat)
c <- suppressWarnings( spectral_sews(dat) )
null_control <- list(family = binomial())
for ( m in all_methods ) {
indictest(a, nulln = 3, null_method = m,
null_control = null_control)
indictest(b, nulln = 3, null_method = m,
null_control = null_control)
indictest(c, nulln = 3, null_method = m,
null_control = null_control)
expect_true(TRUE)
}
# Check the values returned by the null model
ictest <- indictest(a[[2]], 3, null_method = "intercept",
null_control = null_control)
nullmean <- mean(replicate(99, mean( ictest$get_nullmat() )))
expect_equal(mean(ictest[["orig_data"]]), nullmean,
tol = 0.01)
# Check the values returned by the null model
ictest <- indictest(a[[2]], 3, null_method = "smooth",
null_control = null_control)
nullmean <- mean(replicate(99, mean( ictest$get_nullmat() )))
expect_equal(mean(ictest[["orig_data"]]), nullmean,
tol = 0.01)
# Check that smoothed null model is closer to reality than the intercept
# model
ictest <- indictest(a[[2]], 3, null_method = "intercept",
null_control = null_control)
error_intercept <- replicate(49, {
mean( abs(ictest$get_nullmat() - a[[2]][["orig_data"]]) )
})
ictest <- indictest(a[[2]], 3, null_method = "smooth",
null_control = null_control)
error_smooth <- replicate(49, {
mean( abs(ictest$get_nullmat() - a[[2]][["orig_data"]]) )
})
expect_true({
mean(error_intercept) > mean(error_smooth)
})
# Check that we warn when the null method is a function and does not return
# logical values when the input matrix is logical
nullfun <- function(mat) { mat * rnorm(prod(dim(mat))) }
expect_warning({
indictest(compute_indicator(serengeti[[1]], raw_cg_moran), 3,
null_method = nullfun)
})
# Check that arguments are properly set
expect_warning({
null_control_set_args(serengeti[[1]], list(a = "opl"), "perm")
})
expect_true({
a <- null_control_set_args(serengeti[[1]],
list(family = "binomial"),
"intercept")[["family"]]
is.binomial(a)
})
expect_true({
a <- null_control_set_args(serengeti[[1]],
list(family = binomial()),
"intercept")[["family"]]
is.binomial(a)
})
# Model option are honored
expect_warning({
null_control_set_args(serengeti[[1]], list(), "intercept")
}, regexp = "using a binomial")
expect_warning({
null_control_set_args(coarse_grain(serengeti[[1]], 5),
list(), "intercept")
}, regexp = "using a gaussian")
# Set options to their default
options(spatialwarnings.constants.maxit = NULL)
options(spatialwarnings.constants.reltol = NULL)
# Test the simulation of new values
# # Intercept-only models
# mu <- 10
# dat <- rnorm(1000, mu, 2)
# gmod <- glm(y ~ 1, data = data.frame(y = dat), family = gaussian())
# newvals <- simulate_newdat(gmod, seq(0, 10000), family = gaussian())
# expect_equal(mu, mean(newvals), tol = 0.1)
#
# lambda <- 10
# dat <- rpois(1000, lambda)
# gmod <- glm(y ~ 1, data = data.frame(y = dat), family = poisson())
# newvals <- simulate_newdat(gmod, seq(0, 10000), family = poisson())
# expect_equal(lambda, mean(newvals), tol = 0.1)
#
# prob <- 0.4
# dat <- rbinom(1000, size = 1, prob = prob)
# gmod <- glm(y ~ 1, data = data.frame(y = dat), family = binomial())
# newvals <- simulate_newdat(gmod, seq(0, 10000), family = binomial())
# expect_equal(prob, mean(newvals), tol = 0.1)
#
#
# # Helper function for whats following
# matricize <- function(tab) {
# matrix(tab[ ,3], nrow = max(tab[ ,1]), ncol = max(tab[ ,1]))
# }
#
#
# # Smooth models: binomial
# input <- serengeti[[7]]
# # parameter selection for the spline.
# mat_tab <- data.frame(expand.grid(row = seq.int(nrow(input)),
# col = seq.int(ncol(input))),
# value = as.vector(input))
# null_mod <- mgcv::gam(value ~ s(row, col, bs = "tp"),
# data = mat_tab,
# family = binomial())
# ovals <- expand.grid(row = seq.int(nrow(input)),
# col = seq.int(ncol(input)))
# newvals <- simulate_newdat(null_mod,
# ovals,
# family = binomial())
# ovals[ ,"y"] <- newvals
# ovals[ ,"yref"] <- simulate(null_mod)[, 1]
# ggplot(ovals, aes(x = row, y = col)) +
# geom_raster(aes(fill = y > 0.5))
#
# m <- coarse_grain(matricize(ovals[ ,c("row", "col", "y")]), 20)
# mref <- coarse_grain(matricize(ovals[ ,c("row", "col", "yref")]), 20)
# expect_true({
# mean( abs(m - mref) ) < 0.05
# })
#
# if ( exists("VISUAL_TESTS") && VISUAL_TESTS ) {
# display_matrix(matricize(ovals[ ,c("row", "col", "y")])) +
# labs(title = "produced")
# dev.new()
# display_matrix(matricize(ovals[ ,c("row", "col", "yref")])) +
# labs(title = "ref")
#
# }
#
#
# # Smooth models: gaussian
# input <- matrix(ifelse(serengeti[[7]][],
# rnorm(prod(dim(serengeti[[7]])), mean = 20 , sd = 8),
# rnorm(prod(dim(serengeti[[7]])), mean = 100, sd = 8)),
# nrow = nrow(serengeti[[7]]),
# ncol = ncol(serengeti[[7]]))
# # parameter selection for the spline.
# mat_tab <- data.frame(expand.grid(row = seq.int(nrow(input)),
# col = seq.int(ncol(input))),
# value = as.vector(input))
# null_mod <- mgcv::gam(value ~ s(row, col, bs = "tp"),
# data = mat_tab,
# family = gaussian())
# ovals <- expand.grid(row = seq.int(nrow(input)),
# col = seq.int(ncol(input)))
# newvals <- simulate_newdat(null_mod,
# ovals,
# family = gaussian())
# ovals[ ,"y"] <- newvals
# ovals[ ,"yref"] <- simulate(null_mod)[, 1]
#
# m <- coarse_grain(matricize(ovals[ ,c("row", "col", "y")]), 20)
# mref <- coarse_grain(matricize(ovals[ ,c("row", "col", "yref")]), 20)
# expect_true({
# cor(as.vector(m), as.vector(mref)) > 0.95
# })
# expect_equal(var(ovals$y), var(ovals$yref), tol = 4)
#
# if ( exists("VISUAL_TESTS") && VISUAL_TESTS ) {
# display_matrix(matricize(ovals[ ,c("row", "col", "y")])) +
# labs(title = "produced")
# dev.new()
# display_matrix(matricize(ovals[ ,c("row", "col", "yref")])) +
# labs(title = "ref")
#
# }
#
#
#
#
# # Smooth models: poisson
# input <- matrix(ifelse(serengeti[[7]][],
# rpois(prod(dim(serengeti[[7]])), lambda = 20 ),
# rpois(prod(dim(serengeti[[7]])), lambda = 100)),
# nrow = nrow(serengeti[[7]]),
# ncol = ncol(serengeti[[7]]))
# # parameter selection for the spline.
# mat_tab <- data.frame(expand.grid(row = seq.int(nrow(input)),
# col = seq.int(ncol(input))),
# value = as.vector(input))
# null_mod <- mgcv::gam(value ~ s(row, col, bs = "tp"),
# data = mat_tab,
# family = gaussian())
# ovals <- expand.grid(row = seq.int(nrow(input)),
# col = seq.int(ncol(input)))
# newvals <- simulate_newdat(null_mod,
# ovals,
# family = gaussian())
# ovals[ ,"y"] <- newvals
# ovals[ ,"yref"] <- simulate(null_mod)[, 1]
#
# m <- coarse_grain(matricize(ovals[ ,c("row", "col", "y")]), 20)
# mref <- coarse_grain(matricize(ovals[ ,c("row", "col", "yref")]), 20)
# expect_true({
# cor(as.vector(m), as.vector(mref)) > 0.95
# })
# expect_equal(var(ovals$y), var(ovals$yref), tol = 4)
#
# if ( exists("VISUAL_TESTS") && VISUAL_TESTS ) {
# display_matrix(matricize(ovals[ ,c("row", "col", "y")])) +
# labs(title = "produced")
# dev.new()
# display_matrix(matricize(ovals[ ,c("row", "col", "yref")])) +
# labs(title = "ref")
# }
})
#
#
# img <- serengeti[[length(serengeti)]]
# img_coarse <- coarse_grain(img, 1)
# img_tab <- data.frame(expand.grid(row = seq.int(nrow(img_coarse)),
# col = seq.int(ncol(img_coarse))),
# value = as.vector(img_coarse))
#
#
# # Test if trend
# mod <- mgcv::gam(value ~ s(row, col, bs = "tp"), data = img_tab,
# family = binomial())
# img_tab[ ,"pred"] <- predict(mod, type = "response")
# img_tab[ ,"sim"] <- simulate(mod, type = "response") > .5
#
#
#
# plot1 <- ggplot(img_tab) +
# geom_raster(aes(x = col, y = row, fill = value)) +
# scale_fill_brewer(palette = "Spectral") +
# coord_fixed() +
# theme_minimal() +
# labs(x = "x", y = "y", title = "Observed matrix")
#
# plot2 <- ggplot(img_tab) +
# geom_raster(aes(x = col, y = row, fill = 1 - pred)) +
# scale_fill_distiller(palette = "Spectral",
# name = "Cover",
# direction = -1) +
# coord_fixed() +
# theme_minimal() +
# labs(x = "x", y = "y", title = "Fitted model")
#
# plot3 <- ggplot(img_tab) +
# geom_raster(aes(x = col, y = row, fill = sim)) +
# scale_fill_brewer(palette = "Spectral") +
# coord_fixed() +
# theme_minimal() +
# labs(x = "x", y = "y", title = "One null matrix")
#
#
#
# library(patchwork)
# plot1 + plot2 + plot3 +
# plot_layout(nrow = 1)
#
#
# # Create a null matrix
# library(mgcv)
# library(memoise)
# fit_model <- memoise(function(data) {
# mgcv::gam(value ~ s(row, col, bs = "tp"), data = data,
# family = binomial())
# })
# fnull <- function(mat) {
# mat_tab <- data.frame(expand.grid(row = seq.int(nrow(mat)),
# col = seq.int(ncol(mat))),
# value = as.vector(mat))
# mod <- fit_model(data = mat_tab)
# mat[ , ] <- simulate(mod) > .5
# return(mat)
# }
#
# indic <- compute_indicator(serengeti, raw_moran)
# test <- indictest(indic, null_method = fnull, nulln = 99)
# plot(test, along = serengeti.rain)
| path: /tests/testthat/test-nullfun.R | license_type: permissive | repo_name: spatial-ews/spatialwarnings | language: R | is_vendor: false | is_generated: false | length_bytes: 11,173 | extension: r |
#
#
#
context("Test that all null model methods work")
# Here we just run the code to check that it works
test_that("All null model methods work", {
# We need to increase the max number of iterations otherwise warnings are produced
options(spatialwarnings.constants.maxit = 10^(8.5))
options(spatialwarnings.constants.reltol = 1e-8)
# Check that all methods run
all_methods <- c("perm", "intercept", "smooth")
testmat <- matrix(runif(30*30) > .7, ncol = 30)
testmat[1:15, ] <- TRUE
dat <- list(testmat, testmat)
a <- generic_sews(dat)
b <- patchdistr_sews(dat)
c <- suppressWarnings( spectral_sews(dat) )
null_control <- list(family = binomial())
for ( m in all_methods ) {
indictest(a, nulln = 3, null_method = m,
null_control = null_control)
indictest(b, nulln = 3, null_method = m,
null_control = null_control)
indictest(c, nulln = 3, null_method = m,
null_control = null_control)
expect_true(TRUE)
}
# Check the values returned by the null model
ictest <- indictest(a[[2]], 3, null_method = "intercept",
null_control = null_control)
nullmean <- mean(replicate(99, mean( ictest$get_nullmat() )))
expect_equal(mean(ictest[["orig_data"]]), nullmean,
tol = 0.01)
# Check the values returned by the null model
ictest <- indictest(a[[2]], 3, null_method = "smooth",
null_control = null_control)
nullmean <- mean(replicate(99, mean( ictest$get_nullmat() )))
expect_equal(mean(ictest[["orig_data"]]), nullmean,
tol = 0.01)
# Check that smoothed null model is closer to reality than the intercept
# model
ictest <- indictest(a[[2]], 3, null_method = "intercept",
null_control = null_control)
error_intercept <- replicate(49, {
mean( abs(ictest$get_nullmat() - a[[2]][["orig_data"]]) )
})
ictest <- indictest(a[[2]], 3, null_method = "smooth",
null_control = null_control)
error_smooth <- replicate(49, {
mean( abs(ictest$get_nullmat() - a[[2]][["orig_data"]]) )
})
expect_true({
mean(error_intercept) > mean(error_smooth)
})
# Check that we warn when the null method is a function and does not return
# logical values when the input matrix is logical
nullfun <- function(mat) { mat * rnorm(prod(dim(mat))) }
expect_warning({
indictest(compute_indicator(serengeti[[1]], raw_cg_moran), 3,
null_method = nullfun)
})
# Check that arguments are properly set
expect_warning({
null_control_set_args(serengeti[[1]], list(a = "opl"), "perm")
})
expect_true({
a <- null_control_set_args(serengeti[[1]],
list(family = "binomial"),
"intercept")[["family"]]
is.binomial(a)
})
expect_true({
a <- null_control_set_args(serengeti[[1]],
list(family = binomial()),
"intercept")[["family"]]
is.binomial(a)
})
# Model option are honored
expect_warning({
null_control_set_args(serengeti[[1]], list(), "intercept")
}, regexp = "using a binomial")
expect_warning({
null_control_set_args(coarse_grain(serengeti[[1]], 5),
list(), "intercept")
}, regexp = "using a gaussian")
# Set options to their default
options(spatialwarnings.constants.maxit = NULL)
options(spatialwarnings.constants.reltol = NULL)
# Test the simulation of new values
# # Intercept-only models
# mu <- 10
# dat <- rnorm(1000, mu, 2)
# gmod <- glm(y ~ 1, data = data.frame(y = dat), family = gaussian())
# newvals <- simulate_newdat(gmod, seq(0, 10000), family = gaussian())
# expect_equal(mu, mean(newvals), tol = 0.1)
#
# lambda <- 10
# dat <- rpois(1000, lambda)
# gmod <- glm(y ~ 1, data = data.frame(y = dat), family = poisson())
# newvals <- simulate_newdat(gmod, seq(0, 10000), family = poisson())
# expect_equal(lambda, mean(newvals), tol = 0.1)
#
# prob <- 0.4
# dat <- rbinom(1000, size = 1, prob = prob)
# gmod <- glm(y ~ 1, data = data.frame(y = dat), family = binomial())
# newvals <- simulate_newdat(gmod, seq(0, 10000), family = binomial())
# expect_equal(prob, mean(newvals), tol = 0.1)
#
#
# # Helper function for whats following
# matricize <- function(tab) {
# matrix(tab[ ,3], nrow = max(tab[ ,1]), ncol = max(tab[ ,2]))
# }
#
#
# # Smooth models: binomial
# input <- serengeti[[7]]
# # parameter selection for the spline.
# mat_tab <- data.frame(expand.grid(row = seq.int(nrow(input)),
# col = seq.int(ncol(input))),
# value = as.vector(input))
# null_mod <- mgcv::gam(value ~ s(row, col, bs = "tp"),
# data = mat_tab,
# family = binomial())
# ovals <- expand.grid(row = seq.int(nrow(input)),
# col = seq.int(ncol(input)))
# newvals <- simulate_newdat(null_mod,
# ovals,
# family = binomial())
# ovals[ ,"y"] <- newvals
# ovals[ ,"yref"] <- simulate(null_mod)[, 1]
# ggplot(ovals, aes(x = row, y = col)) +
# geom_raster(aes(fill = y > 0.5))
#
# m <- coarse_grain(matricize(ovals[ ,c("row", "col", "y")]), 20)
# mref <- coarse_grain(matricize(ovals[ ,c("row", "col", "yref")]), 20)
# expect_true({
# mean( abs(m - mref) ) < 0.05
# })
#
# if ( exists("VISUAL_TESTS") && VISUAL_TESTS ) {
# display_matrix(matricize(ovals[ ,c("row", "col", "y")])) +
# labs(title = "produced")
# dev.new()
})
|
# EXERCISE: K-Means Clustering Example
library(sparklyr)
library(dplyr)
library(ggplot2)
# `sc` is the Spark connection assumed throughout the workshop; create a
# local one if it does not exist yet
if (!exists("sc")) sc <- spark_connect(master = "local")
iris_tbl <- copy_to(sc, iris, "iris", overwrite = TRUE)
kmeans_model <- iris_tbl %>%
ml_kmeans(k = 3, features = c("Petal_Length", "Petal_Width"))
print(kmeans_model)
# Predict associated class
predicted <- ml_predict(kmeans_model, iris_tbl) %>%
collect()
table(predicted$Species, predicted$prediction)
# EXERCISE: Plot cluster membership
ml_predict(kmeans_model) %>%
collect() %>%
ggplot(aes(Petal_Length, Petal_Width)) +
geom_point(aes(col = factor(prediction + 1)),
size = 2, alpha = 0.5) +
geom_point(data = kmeans_model$centers, aes(Petal_Length, Petal_Width),
col = scales::muted(c("red", "green", "blue")),
pch = 'x', size = 12) +
scale_color_discrete(name = "Predicted Cluster",
labels = paste("Cluster", 1:3)) +
labs(
x = "Petal Length",
y = "Petal Width",
title = "K-Means Clustering",
subtitle = "Use Spark.ML to predict cluster membership with the iris dataset."
)
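As a hypothetical extension of the exercise, the elbow method can sanity-check the choice of k = 3. This sketch uses base R kmeans() on the same two features so it runs without a Spark connection (note the base-R column names Petal.Length/Petal.Width rather than the Spark-sanitized ones):

```r
# Elbow-method sketch: total within-cluster sum of squares for k = 1..6.
set.seed(123)
wss <- vapply(1:6, function(k) {
  kmeans(iris[, c("Petal.Length", "Petal.Width")],
         centers = k, nstart = 10)$tot.withinss
}, numeric(1))
print(round(wss, 1))
# The drop in within-cluster sum of squares typically flattens near k = 3,
# consistent with the three iris species.
```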
| path: /reference/modeling-exercises.R | license_type: no_license | repo_name: BB1464/wsds_sparklyr_workshop | language: R | is_vendor: false | is_generated: false | length_bytes: 1,044 | extension: r |
|
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
context("Array")
test_that("Integer Array", {
ints <- c(1:10, 1:10, 1:5)
x <- expect_array_roundtrip(ints, int32())
})
test_that("binary Array", {
# if the type is given, we just need a list of raw vectors
bin <- list(as.raw(1:10), as.raw(1:10))
expect_array_roundtrip(bin, binary(), as = binary())
expect_array_roundtrip(bin, large_binary(), as = large_binary())
expect_array_roundtrip(bin, fixed_size_binary(10), as = fixed_size_binary(10))
bin[[1L]] <- as.raw(1:20)
expect_error(Array$create(bin, fixed_size_binary(10)))
# otherwise the arrow type is deduced from the R classes
bin <- vctrs::new_vctr(
list(as.raw(1:10), as.raw(11:20)),
class = "arrow_binary"
)
expect_array_roundtrip(bin, binary())
bin <- vctrs::new_vctr(
list(as.raw(1:10), as.raw(11:20)),
class = "arrow_large_binary"
)
expect_array_roundtrip(bin, large_binary())
bin <- vctrs::new_vctr(
list(as.raw(1:10), as.raw(11:20)),
class = "arrow_fixed_size_binary",
byte_width = 10L
)
expect_array_roundtrip(bin, fixed_size_binary(byte_width = 10))
# degenerate cases
bin <- vctrs::new_vctr(
list(1:10),
class = "arrow_binary"
)
expect_error(Array$create(bin))
bin <- vctrs::new_vctr(
list(1:10),
ptype = raw(),
class = "arrow_large_binary"
)
expect_error(Array$create(bin))
bin <- vctrs::new_vctr(
list(1:10),
class = "arrow_fixed_size_binary",
byte_width = 10
)
expect_error(Array$create(bin))
bin <- vctrs::new_vctr(
list(as.raw(1:5)),
class = "arrow_fixed_size_binary",
byte_width = 10
)
expect_error(Array$create(bin))
bin <- vctrs::new_vctr(
list(as.raw(1:5)),
class = "arrow_fixed_size_binary"
)
expect_error(Array$create(bin))
})
test_that("Slice() and RangeEquals()", {
ints <- c(1:10, 101:110, 201:205)
x <- Array$create(ints)
y <- x$Slice(10)
expect_equal(y$type, int32())
expect_equal(length(y), 15L)
expect_as_vector(y, c(101:110, 201:205))
expect_true(x$RangeEquals(y, 10, 24))
expect_false(x$RangeEquals(y, 9, 23))
expect_false(x$RangeEquals(y, 11, 24))
z <- x$Slice(10, 5)
expect_as_vector(z, c(101:105))
expect_true(x$RangeEquals(z, 10, 15, 0))
# Input validation
expect_error(x$Slice("ten"))
expect_error(x$Slice(NA_integer_), "Slice 'offset' cannot be NA")
expect_error(x$Slice(NA), "Slice 'offset' cannot be NA")
expect_error(x$Slice(10, "ten"))
expect_error(x$Slice(10, NA_integer_), "Slice 'length' cannot be NA")
expect_error(x$Slice(NA_integer_, NA_integer_), "Slice 'offset' cannot be NA")
expect_error(x$Slice(c(10, 10)))
expect_error(x$Slice(10, c(10, 10)))
expect_error(x$Slice(1000), "Slice 'offset' greater than array length")
expect_error(x$Slice(-1), "Slice 'offset' cannot be negative")
expect_error(z$Slice(10, 10), "Slice 'offset' greater than array length")
expect_error(x$Slice(10, -1), "Slice 'length' cannot be negative")
expect_error(x$Slice(-1, 10), "Slice 'offset' cannot be negative")
expect_warning(x$Slice(10, 15), NA)
expect_warning(
overslice <- x$Slice(10, 16),
"Slice 'length' greater than available length"
)
expect_equal(length(overslice), 15)
expect_warning(z$Slice(2, 10), "Slice 'length' greater than available length")
expect_error(x$RangeEquals(10, 24, 0), 'other must be a "Array"')
expect_error(x$RangeEquals(y, NA, 24), "'start_idx' cannot be NA")
expect_error(x$RangeEquals(y, 10, NA), "'end_idx' cannot be NA")
expect_error(x$RangeEquals(y, 10, 24, NA), "'other_start_idx' cannot be NA")
expect_error(x$RangeEquals(y, "ten", 24))
# TODO (if anyone uses RangeEquals)
# expect_error(x$RangeEquals(y, 10, 2400, 0)) # does not error
# expect_error(x$RangeEquals(y, 1000, 24, 0)) # does not error
# expect_error(x$RangeEquals(y, 10, 24, 1000)) # does not error
})
test_that("Double Array", {
dbls <- c(1, 2, 3, 4, 5, 6)
x_dbl <- expect_array_roundtrip(dbls, float64())
})
test_that("Array print method includes type", {
x <- Array$create(c(1:10, 1:10, 1:5))
expect_output(print(x), "Array\n<int32>\n[\n", fixed = TRUE)
})
test_that("Array supports NA", {
x_int <- Array$create(as.integer(c(1:10, NA)))
x_dbl <- Array$create(as.numeric(c(1:10, NA)))
expect_true(x_int$IsValid(0))
expect_true(x_dbl$IsValid(0L))
expect_true(x_int$IsNull(10L))
expect_true(x_dbl$IsNull(10))
expect_equal(as.vector(is.na(x_int)), c(rep(FALSE, 10), TRUE))
expect_equal(as.vector(is.na(x_dbl)), c(rep(FALSE, 10), TRUE))
# Input validation
expect_error(x_int$IsValid("ten"))
expect_error(x_int$IsNull("ten"))
expect_error(x_int$IsValid(c(10, 10)))
expect_error(x_int$IsNull(c(10, 10)))
expect_error(x_int$IsValid(NA), "'i' cannot be NA")
expect_error(x_int$IsNull(NA), "'i' cannot be NA")
expect_error(x_int$IsValid(1000), "subscript out of bounds")
expect_error(x_int$IsValid(-1), "subscript out of bounds")
expect_error(x_int$IsNull(1000), "subscript out of bounds")
expect_error(x_int$IsNull(-1), "subscript out of bounds")
})
test_that("Array support null type (ARROW-7064)", {
expect_array_roundtrip(vctrs::unspecified(10), null())
})
test_that("Array supports logical vectors (ARROW-3341)", {
# with NA
x <- sample(c(TRUE, FALSE, NA), 1000, replace = TRUE)
expect_array_roundtrip(x, bool())
# without NA
x <- sample(c(TRUE, FALSE), 1000, replace = TRUE)
expect_array_roundtrip(x, bool())
})
test_that("Array supports character vectors (ARROW-3339)", {
# without NA
expect_array_roundtrip(c("itsy", "bitsy", "spider"), utf8())
expect_array_roundtrip(c("itsy", "bitsy", "spider"), large_utf8(), as = large_utf8())
# with NA
expect_array_roundtrip(c("itsy", NA, "spider"), utf8())
expect_array_roundtrip(c("itsy", NA, "spider"), large_utf8(), as = large_utf8())
})
test_that("Character vectors > 2GB become large_utf8", {
skip_on_cran()
skip_if_not_running_large_memory_tests()
big <- make_big_string()
expect_array_roundtrip(big, large_utf8())
})
test_that("empty arrays are supported", {
expect_array_roundtrip(character(), utf8())
expect_array_roundtrip(character(), large_utf8(), as = large_utf8())
expect_array_roundtrip(integer(), int32())
expect_array_roundtrip(numeric(), float64())
expect_array_roundtrip(factor(character()), dictionary(int8(), utf8()))
expect_array_roundtrip(logical(), bool())
})
test_that("array with all nulls are supported", {
nas <- c(NA, NA)
expect_array_roundtrip(as.character(nas), utf8())
expect_array_roundtrip(as.integer(nas), int32())
expect_array_roundtrip(as.numeric(nas), float64())
expect_array_roundtrip(as.factor(nas), dictionary(int8(), utf8()))
expect_array_roundtrip(as.logical(nas), bool())
})
test_that("Array supports unordered factors (ARROW-3355)", {
# without NA
f <- factor(c("itsy", "bitsy", "spider", "spider"))
expect_array_roundtrip(f, dictionary(int8(), utf8()))
# with NA
f <- factor(c("itsy", "bitsy", NA, "spider", "spider"))
expect_array_roundtrip(f, dictionary(int8(), utf8()))
})
test_that("Array supports ordered factors (ARROW-3355)", {
# without NA
f <- ordered(c("itsy", "bitsy", "spider", "spider"))
arr_fac <- expect_array_roundtrip(f, dictionary(int8(), utf8(), ordered = TRUE))
expect_true(arr_fac$ordered)
# with NA
f <- ordered(c("itsy", "bitsy", NA, "spider", "spider"))
expect_array_roundtrip(f, dictionary(int8(), utf8(), ordered = TRUE))
})
test_that("array supports Date (ARROW-3340)", {
d <- Sys.Date() + 1:10
expect_array_roundtrip(d, date32())
d[5] <- NA
expect_array_roundtrip(d, date32())
})
test_that("array supports POSIXct (ARROW-3340)", {
times <- lubridate::ymd_hms("2018-10-07 19:04:05") + 1:10
expect_array_roundtrip(times, timestamp("us", "UTC"))
times[5] <- NA
expect_array_roundtrip(times, timestamp("us", "UTC"))
times2 <- lubridate::ymd_hms("2018-10-07 19:04:05", tz = "US/Eastern") + 1:10
expect_array_roundtrip(times2, timestamp("us", "US/Eastern"))
})
test_that("array supports POSIXct without timezone", {
# Make sure timezone is not set
withr::with_envvar(c(TZ = ""), {
times <- strptime("2019-02-03 12:34:56", format="%Y-%m-%d %H:%M:%S") + 1:10
expect_array_roundtrip(times, timestamp("us", ""))
# Also test the INTSXP code path
skip("Ingest_POSIXct only implemented for REALSXP")
times_int <- as.integer(times)
attributes(times_int) <- attributes(times)
expect_array_roundtrip(times_int, timestamp("us", ""))
})
})
test_that("Timezone handling in Arrow roundtrip (ARROW-3543)", {
# Write a feather file as that's what the initial bug report used
df <- tibble::tibble(
no_tz = lubridate::ymd_hms("2018-10-07 19:04:05") + 1:10,
yes_tz = lubridate::ymd_hms("2018-10-07 19:04:05", tz = "Asia/Pyongyang") + 1:10
)
if (!identical(Sys.timezone(), "Asia/Pyongyang")) {
# Confirming that the columns are in fact different
expect_false(any(df$no_tz == df$yes_tz))
}
feather_file <- tempfile()
on.exit(unlink(feather_file))
write_feather(df, feather_file)
expect_identical(read_feather(feather_file), df)
})
test_that("array supports integer64", {
x <- bit64::as.integer64(1:10) + MAX_INT
expect_array_roundtrip(x, int64())
x[4] <- NA
expect_array_roundtrip(x, int64())
# all NA int64 (ARROW-3795)
all_na <- Array$create(bit64::as.integer64(NA))
expect_type_equal(all_na, int64())
expect_true(as.vector(is.na(all_na)))
})
test_that("array supports difftime", {
time <- hms::hms(56, 34, 12)
expect_array_roundtrip(c(time, time), time32("s"))
expect_array_roundtrip(vctrs::vec_c(NA, time), time32("s"))
})
test_that("support for NaN (ARROW-3615)", {
x <- c(1, NA, NaN, -1)
y <- Array$create(x)
expect_true(y$IsValid(2))
expect_equal(y$null_count, 1L)
})
test_that("integer types casts (ARROW-3741)", {
# Defining some type groups for use here and in the following tests
int_types <- c(int8(), int16(), int32(), int64())
uint_types <- c(uint8(), uint16(), uint32(), uint64())
float_types <- c(float32(), float64()) # float16() not really supported in C++ yet
a <- Array$create(c(1:10, NA))
for (type in c(int_types, uint_types)) {
casted <- a$cast(type)
expect_equal(casted$type, type)
expect_identical(as.vector(is.na(casted)), c(rep(FALSE, 10), TRUE))
}
})
test_that("integer types cast safety (ARROW-3741, ARROW-5541)", {
a <- Array$create(-(1:10))
for (type in uint_types) {
expect_error(a$cast(type), regexp = "Integer value -1 not in range")
expect_error(a$cast(type, safe = FALSE), NA)
}
})
test_that("float types casts (ARROW-3741)", {
x <- c(1, 2, 3, NA)
a <- Array$create(x)
for (type in float_types) {
casted <- a$cast(type)
expect_equal(casted$type, type)
expect_identical(as.vector(is.na(casted)), c(rep(FALSE, 3), TRUE))
expect_identical(as.vector(casted), x)
}
})
test_that("cast to half float works", {
skip("Need halffloat support: https://issues.apache.org/jira/browse/ARROW-3802")
a <- Array$create(1:4)
a_f16 <- a$cast(float16())
expect_type_equal(a_f16$type, float16())
})
test_that("cast input validation", {
a <- Array$create(1:4)
expect_error(a$cast("not a type"), "type must be a DataType, not character")
})
test_that("Array$create() supports the type= argument. conversion from INTSXP and int64 to all int types", {
num_int32 <- 12L
num_int64 <- bit64::as.integer64(10)
types <- c(
int_types,
uint_types,
float_types,
double() # not actually a type, a base R function but should be alias for float64
)
for (type in types) {
expect_type_equal(Array$create(num_int32, type = type)$type, as_type(type))
expect_type_equal(Array$create(num_int64, type = type)$type, as_type(type))
}
# Input validation
expect_error(
Array$create(5, type = "not a type"),
"type must be a DataType, not character"
)
})
test_that("Array$create() aborts on overflow", {
expect_error(Array$create(128L, type = int8()))
expect_error(Array$create(-129L, type = int8()))
expect_error(Array$create(256L, type = uint8()))
expect_error(Array$create(-1L, type = uint8()))
expect_error(Array$create(32768L, type = int16()))
expect_error(Array$create(-32769L, type = int16()))
expect_error(Array$create(65536L, type = uint16()))
expect_error(Array$create(-1L, type = uint16()))
expect_error(Array$create(bit64::as.integer64(2^31), type = int32()))
expect_error(Array$create(bit64::as.integer64(2^32), type = uint32()))
})
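A side note on the boundaries exercised above: int32 tops out at 2^31 - 1, which is also R's `.Machine$integer.max`, and `bit64::integer64` extends the range. A quick check independent of arrow (assumes the bit64 package is installed):

```r
library(bit64)
# int32 maximum is 2^31 - 1, the same as R's native integer maximum
stopifnot(.Machine$integer.max == 2^31 - 1)
# 2^31 itself does not fit in an int32: as.integer() warns and yields NA
stopifnot(is.na(suppressWarnings(as.integer(2^31))))
# integer64 represents it exactly
stopifnot(as.integer64(2^31) == as.integer64("2147483648"))
```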
test_that("Array$create() does not convert doubles to integer", {
for (type in c(int_types, uint_types)) {
a <- Array$create(10, type = type)
expect_type_equal(a$type, type)
expect_true(as.vector(a) == 10L)
}
})
test_that("Array$create() converts raw vectors to uint8 arrays (ARROW-3794)", {
expect_type_equal(Array$create(as.raw(1:10))$type, uint8())
})
test_that("Array<int8>$as_vector() converts to integer (ARROW-3794)", {
i8 <- (-128):127
a <- Array$create(i8)$cast(int8())
expect_type_equal(a, int8())
expect_equal(as.vector(a), i8)
u8 <- 0:255
a <- Array$create(u8)$cast(uint8())
expect_type_equal(a, uint8())
expect_equal(as.vector(a), u8)
})
test_that("Arrays of {,u}int{32,64} convert to integer if they can fit", {
u32 <- Array$create(1L)$cast(uint32())
expect_identical(as.vector(u32), 1L)
u64 <- Array$create(1L)$cast(uint64())
expect_identical(as.vector(u64), 1L)
i64 <- Array$create(bit64::as.integer64(1:10))
expect_identical(as.vector(i64), 1:10)
})
test_that("Arrays of uint{32,64} convert to numeric if they can't fit integer", {
u32 <- Array$create(bit64::as.integer64(1) + MAX_INT)$cast(uint32())
expect_identical(as.vector(u32), 1 + MAX_INT)
u64 <- Array$create(bit64::as.integer64(1) + MAX_INT)$cast(uint64())
expect_identical(as.vector(u64), 1 + MAX_INT)
})
test_that("Array$create() recognise arrow::Array (ARROW-3815)", {
a <- Array$create(1:10)
expect_equal(a, Array$create(a))
})
test_that("Array$create() handles data frame -> struct arrays (ARROW-3811)", {
df <- tibble::tibble(x = 1:10, y = x / 2, z = letters[1:10])
a <- Array$create(df)
expect_type_equal(a$type, struct(x = int32(), y = float64(), z = utf8()))
expect_equivalent(as.vector(a), df)
df <- structure(
  list(col = structure(
    list(structure(list(list(structure(1))), class = "inner")),
    class = "outer"
  )),
  class = "data.frame", row.names = c(NA, -1L)
)
a <- Array$create(df)
expect_type_equal(a$type, struct(col = list_of(list_of(list_of(float64())))))
expect_equivalent(as.vector(a), df)
})
test_that("StructArray methods", {
df <- tibble::tibble(x = 1:10, y = x / 2, z = letters[1:10])
a <- Array$create(df)
expect_equal(a$x, Array$create(df$x))
expect_equal(a[["x"]], Array$create(df$x))
expect_equal(a[[1]], Array$create(df$x))
expect_identical(names(a), c("x", "y", "z"))
expect_identical(dim(a), c(10L, 3L))
})
test_that("Array$create() can handle data frame with custom struct type (not inferred)", {
df <- tibble::tibble(x = 1:10, y = 1:10)
type <- struct(x = float64(), y = int16())
a <- Array$create(df, type = type)
expect_type_equal(a$type, type)
type <- struct(x = float64(), y = int16(), z = int32())
expect_error(Array$create(df, type = type), regexp = "Number of fields in struct.* incompatible with number of columns in the data frame")
type <- struct(y = int16(), x = float64())
expect_error(Array$create(df, type = type), regexp = "Field name in position.*does not match the name of the column of the data frame")
type <- struct(x = float64(), y = utf8())
expect_error(Array$create(df, type = type), regexp = "Invalid")
})
test_that("Array$create() supports tibble with no columns (ARROW-8354)", {
df <- tibble::tibble()
expect_equal(Array$create(df)$as_vector(), df)
})
test_that("Array$create() handles vector -> list arrays (ARROW-7662)", {
# Should be able to create an empty list with a type hint.
expect_r6_class(Array$create(list(), list_of(bool())), "ListArray")
# logical
expect_array_roundtrip(list(NA), list_of(bool()))
expect_array_roundtrip(list(logical(0)), list_of(bool()))
expect_array_roundtrip(list(c(TRUE), c(FALSE), c(FALSE, TRUE)), list_of(bool()))
expect_array_roundtrip(list(c(TRUE), c(FALSE), NA, logical(0), c(FALSE, NA, TRUE)), list_of(bool()))
# integer
expect_array_roundtrip(list(NA_integer_), list_of(int32()))
expect_array_roundtrip(list(integer(0)), list_of(int32()))
expect_array_roundtrip(list(1:2, 3:4, 12:18), list_of(int32()))
expect_array_roundtrip(list(c(1:2), NA_integer_, integer(0), c(12:18, NA_integer_)), list_of(int32()))
# numeric
expect_array_roundtrip(list(NA_real_), list_of(float64()))
expect_array_roundtrip(list(numeric(0)), list_of(float64()))
expect_array_roundtrip(list(1, c(2, 3), 4), list_of(float64()))
expect_array_roundtrip(list(1, numeric(0), c(2, 3, NA_real_), 4), list_of(float64()))
# character
expect_array_roundtrip(list(NA_character_), list_of(utf8()))
expect_array_roundtrip(list(character(0)), list_of(utf8()))
expect_array_roundtrip(list("itsy", c("bitsy", "spider"), c("is")), list_of(utf8()))
expect_array_roundtrip(list("itsy", character(0), c("bitsy", "spider", NA_character_), c("is")), list_of(utf8()))
# factor
expect_array_roundtrip(list(factor(c("b", "a"), levels = c("a", "b"))), list_of(dictionary(int8(), utf8())))
expect_array_roundtrip(list(factor(NA, levels = c("a", "b"))), list_of(dictionary(int8(), utf8())))
# struct
expect_array_roundtrip(
list(tibble::tibble(a = integer(0), b = integer(0), c = character(0), d = logical(0))),
list_of(struct(a = int32(), b = int32(), c = utf8(), d = bool()))
)
expect_array_roundtrip(
list(tibble::tibble(a = list(integer()))),
list_of(struct(a = list_of(int32())))
)
# degenerated data frame
df <- structure(list(x = 1:2, y = 1), class = "data.frame", row.names = 1:2)
expect_error(Array$create(list(df)))
})
test_that("Array$create() handles vector -> large list arrays", {
# Should be able to create an empty list with a type hint.
expect_r6_class(Array$create(list(), type = large_list_of(bool())), "LargeListArray")
# logical
expect_array_roundtrip(list(NA), large_list_of(bool()), as = large_list_of(bool()))
expect_array_roundtrip(list(logical(0)), large_list_of(bool()), as = large_list_of(bool()))
expect_array_roundtrip(list(c(TRUE), c(FALSE), c(FALSE, TRUE)), large_list_of(bool()), as = large_list_of(bool()))
expect_array_roundtrip(list(c(TRUE), c(FALSE), NA, logical(0), c(FALSE, NA, TRUE)), large_list_of(bool()), as = large_list_of(bool()))
# integer
expect_array_roundtrip(list(NA_integer_), large_list_of(int32()), as = large_list_of(int32()))
expect_array_roundtrip(list(integer(0)), large_list_of(int32()), as = large_list_of(int32()))
expect_array_roundtrip(list(1:2, 3:4, 12:18), large_list_of(int32()), as = large_list_of(int32()))
expect_array_roundtrip(list(c(1:2), NA_integer_, integer(0), c(12:18, NA_integer_)), large_list_of(int32()), as = large_list_of(int32()))
# numeric
expect_array_roundtrip(list(NA_real_), large_list_of(float64()), as = large_list_of(float64()))
expect_array_roundtrip(list(numeric(0)), large_list_of(float64()), as = large_list_of(float64()))
expect_array_roundtrip(list(1, c(2, 3), 4), large_list_of(float64()), as = large_list_of(float64()))
expect_array_roundtrip(list(1, numeric(0), c(2, 3, NA_real_), 4), large_list_of(float64()), as = large_list_of(float64()))
# character
expect_array_roundtrip(list(NA_character_), large_list_of(utf8()), as = large_list_of(utf8()))
expect_array_roundtrip(list(character(0)), large_list_of(utf8()), as = large_list_of(utf8()))
expect_array_roundtrip(list("itsy", c("bitsy", "spider"), c("is")), large_list_of(utf8()), as = large_list_of(utf8()))
expect_array_roundtrip(list("itsy", character(0), c("bitsy", "spider", NA_character_), c("is")), large_list_of(utf8()), as = large_list_of(utf8()))
# factor
expect_array_roundtrip(list(factor(c("b", "a"), levels = c("a", "b"))), large_list_of(dictionary(int8(), utf8())), as = large_list_of(dictionary(int8(), utf8())))
expect_array_roundtrip(list(factor(NA, levels = c("a", "b"))), large_list_of(dictionary(int8(), utf8())), as = large_list_of(dictionary(int8(), utf8())))
# struct
expect_array_roundtrip(
list(tibble::tibble(a = integer(0), b = integer(0), c = character(0), d = logical(0))),
large_list_of(struct(a = int32(), b = int32(), c = utf8(), d = bool())),
as = large_list_of(struct(a = int32(), b = int32(), c = utf8(), d = bool()))
)
expect_array_roundtrip(
list(tibble::tibble(a = list(integer()))),
large_list_of(struct(a = list_of(int32()))),
as = large_list_of(struct(a = list_of(int32())))
)
})
test_that("Array$create() handles vector -> fixed size list arrays", {
# Should be able to create an empty list with a type hint.
expect_r6_class(Array$create(list(), type = fixed_size_list_of(bool(), 20)), "FixedSizeListArray")
# logical
expect_array_roundtrip(list(NA), fixed_size_list_of(bool(), 1L), as = fixed_size_list_of(bool(), 1L))
expect_array_roundtrip(list(c(TRUE, FALSE), c(FALSE, TRUE)), fixed_size_list_of(bool(), 2L), as = fixed_size_list_of(bool(), 2L))
expect_array_roundtrip(list(c(TRUE), c(FALSE), NA), fixed_size_list_of(bool(), 1L), as = fixed_size_list_of(bool(), 1L))
# integer
expect_array_roundtrip(list(NA_integer_), fixed_size_list_of(int32(), 1L), as = fixed_size_list_of(int32(), 1L))
expect_array_roundtrip(list(1:2, 3:4, 11:12), fixed_size_list_of(int32(), 2L), as = fixed_size_list_of(int32(), 2L))
expect_array_roundtrip(list(c(1:2), c(NA_integer_, 3L)), fixed_size_list_of(int32(), 2L), as = fixed_size_list_of(int32(), 2L))
# numeric
expect_array_roundtrip(list(NA_real_), fixed_size_list_of(float64(), 1L), as = fixed_size_list_of(float64(), 1L))
expect_array_roundtrip(list(c(1,2), c(2, 3)), fixed_size_list_of(float64(), 2L), as = fixed_size_list_of(float64(), 2L))
expect_array_roundtrip(list(c(1,2), c(NA_real_, 4)), fixed_size_list_of(float64(), 2L), as = fixed_size_list_of(float64(), 2L))
# character
expect_array_roundtrip(list(NA_character_), fixed_size_list_of(utf8(), 1L), as = fixed_size_list_of(utf8(), 1L))
expect_array_roundtrip(list(c("itsy", "bitsy"), c("spider", "is"), c(NA_character_, NA_character_), c("", "")), fixed_size_list_of(utf8(), 2L), as = fixed_size_list_of(utf8(), 2L))
# factor
expect_array_roundtrip(list(factor(c("b", "a"), levels = c("a", "b"))), fixed_size_list_of(dictionary(int8(), utf8()), 2L), as = fixed_size_list_of(dictionary(int8(), utf8()), 2L))
# struct
expect_array_roundtrip(
list(tibble::tibble(a = 1L, b = 1L, c = "", d = TRUE)),
fixed_size_list_of(struct(a = int32(), b = int32(), c = utf8(), d = bool()), 1L),
as = fixed_size_list_of(struct(a = int32(), b = int32(), c = utf8(), d = bool()), 1L)
)
expect_array_roundtrip(
list(tibble::tibble(a = list(1L))),
fixed_size_list_of(struct(a = list_of(int32())), 1L),
as = fixed_size_list_of(struct(a = list_of(int32())), 1L)
)
expect_array_roundtrip(
list(tibble::tibble(a = list(1L))),
list_of(struct(a = fixed_size_list_of(int32(), 1L))),
as = list_of(struct(a = fixed_size_list_of(int32(), 1L)))
)
})
test_that("Handling string data with embedded nuls", {
raws <- structure(list(
as.raw(c(0x70, 0x65, 0x72, 0x73, 0x6f, 0x6e)),
as.raw(c(0x77, 0x6f, 0x6d, 0x61, 0x6e)),
as.raw(c(0x6d, 0x61, 0x00, 0x6e)), # <-- there's your nul, 0x00
as.raw(c(0x66, 0x00, 0x00, 0x61, 0x00, 0x6e)), # multiple nuls
as.raw(c(0x63, 0x61, 0x6d, 0x65, 0x72, 0x61)),
as.raw(c(0x74, 0x76))),
class = c("arrow_binary", "vctrs_vctr", "list"))
expect_error(
rawToChar(raws[[3]]),
"embedded nul in string: 'ma\\0n'", # See?
fixed = TRUE
)
array_with_nul <- Array$create(raws)$cast(utf8())
expect_error(
as.vector(array_with_nul),
"embedded nul in string: 'ma\\0n'; to strip nuls when converting from Arrow to R, set options(arrow.skip_nul = TRUE)",
fixed = TRUE
)
withr::with_options(list(arrow.skip_nul = TRUE), {
expect_warning(
expect_identical(
as.vector(array_with_nul),
c("person", "woman", "man", "fan", "camera", "tv")
),
"Stripping '\\0' (nul) from character vector",
fixed = TRUE
)
})
})
test_that("Array$create() should have helpful error", {
expect_error(Array$create(list(numeric(0)), list_of(bool())), "Expecting a logical vector")
lgl <- logical(0)
int <- integer(0)
num <- numeric(0)
char <- character(0)
expect_error(Array$create(list()), "Requires at least one element to infer")
expect_error(Array$create(list(lgl, lgl, int)), "Expecting a logical vector")
expect_error(Array$create(list(char, num, char)), "Expecting a character vector")
})
test_that("Array$View() (ARROW-6542)", {
a <- Array$create(1:3)
b <- a$View(float32())
expect_equal(b$type, float32())
expect_equal(length(b), 3L)
# Input validation
expect_error(a$View("not a type"), "type must be a DataType, not character")
})
test_that("Array$Validate()", {
a <- Array$create(1:10)
expect_error(a$Validate(), NA)
})
test_that("is.Array", {
a <- Array$create(1, type = int32())
expect_true(is.Array(a))
expect_true(is.Array(a, "int32"))
expect_true(is.Array(a, c("int32", "int16")))
expect_false(is.Array(a, "utf8"))
expect_true(is.Array(a$View(float32()), "float32"))
expect_false(is.Array(1))
expect_true(is.Array(ChunkedArray$create(1, 2)))
})
test_that("Array$Take()", {
a <- Array$create(10:20)
expect_equal(as.vector(a$Take(c(4, 2))), c(14, 12))
})
test_that("[ method on Array", {
vec <- 11:20
a <- Array$create(vec)
expect_as_vector(a[5:9], vec[5:9])
expect_as_vector(a[c(9, 3, 5)], vec[c(9, 3, 5)])
expect_as_vector(a[rep(c(TRUE, FALSE), 5)], vec[c(1, 3, 5, 7, 9)])
expect_as_vector(a[rep(c(TRUE, FALSE, NA, FALSE, TRUE), 2)], c(11, NA, 15, 16, NA, 20))
expect_as_vector(a[-4], vec[-4])
expect_as_vector(a[-1], vec[-1])
})
test_that("[ accepts Arrays and otherwise handles bad input", {
vec <- 11:20
a <- Array$create(vec)
ind <- c(9, 3, 5)
expect_error(
a[Array$create(ind)],
"Cannot extract rows with an Array of type double"
)
expect_as_vector(a[Array$create(ind - 1, type = int8())], vec[ind])
expect_as_vector(a[Array$create(ind - 1, type = uint8())], vec[ind])
expect_as_vector(a[ChunkedArray$create(8, 2, 4, type = uint8())], vec[ind])
filt <- seq_along(vec) %in% ind
expect_as_vector(a[Array$create(filt)], vec[filt])
expect_error(
a["string"],
"Cannot extract rows with an object of class character"
)
})
test_that("%in% works on dictionary arrays", {
a1 <- Array$create(as.factor(c("A", "B", "C")))
a2 <- DictionaryArray$create(c(0L, 1L, 2L), c(4.5, 3.2, 1.1))
c1 <- Array$create(c(FALSE, TRUE, FALSE))
c2 <- Array$create(c(FALSE, FALSE, FALSE))
b1 <- Array$create("B")
b2 <- Array$create(5.4)
expect_equal(is_in(a1, b1), c1)
expect_equal(is_in(a2, b2), c2)
expect_error(is_in(a1, b2))
})
test_that("[ accepts Expressions", {
vec <- 11:20
a <- Array$create(vec)
b <- Array$create(1:10)
expect_as_vector(a[b > 4], vec[5:10])
})
test_that("Array head/tail", {
vec <- 11:20
a <- Array$create(vec)
expect_as_vector(head(a), head(vec))
expect_as_vector(head(a, 4), head(vec, 4))
expect_as_vector(head(a, 40), head(vec, 40))
expect_as_vector(head(a, -4), head(vec, -4))
expect_as_vector(head(a, -40), head(vec, -40))
expect_as_vector(tail(a), tail(vec))
expect_as_vector(tail(a, 4), tail(vec, 4))
expect_as_vector(tail(a, 40), tail(vec, 40))
expect_as_vector(tail(a, -40), tail(vec, -40))
})
test_that("Dictionary array: create from arrays, not factor", {
a <- DictionaryArray$create(c(2L, 1L, 1L, 2L, 0L), c(4.5, 3.2, 1.1))
expect_equal(a$type, dictionary(int32(), float64()))
})
test_that("Dictionary array: translate to R when dict isn't string", {
a <- DictionaryArray$create(c(2L, 1L, 1L, 2L, 0L), c(4.5, 3.2, 1.1))
expect_warning(
expect_identical(
as.vector(a),
factor(c(3, 2, 2, 3, 1), labels = c("4.5", "3.2", "1.1"))
)
)
})
test_that("Array$Equals", {
vec <- 11:20
a <- Array$create(vec)
b <- Array$create(vec)
d <- Array$create(3:4)
expect_equal(a, b)
expect_true(a$Equals(b))
expect_false(a$Equals(vec))
expect_false(a$Equals(d))
})
test_that("Array$ApproxEquals", {
vec <- c(1.0000000000001, 2.400000000000001)
a <- Array$create(vec)
b <- Array$create(round(vec, 1))
expect_false(a$Equals(b))
expect_true(a$ApproxEquals(b))
expect_false(a$ApproxEquals(vec))
})
test_that("auto int64 conversion to int can be disabled (ARROW-10093)", {
withr::with_options(list(arrow.int64_downcast = FALSE), {
a <- Array$create(1:10, int64())
expect_true(inherits(a$as_vector(), "integer64"))
batch <- RecordBatch$create(x = a)
expect_true(inherits(as.data.frame(batch)$x, "integer64"))
tab <- Table$create(x = a)
expect_true(inherits(as.data.frame(batch)$x, "integer64"))
})
})
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
context("Array")
test_that("Integer Array", {
ints <- c(1:10, 1:10, 1:5)
x <- expect_array_roundtrip(ints, int32())
})
test_that("binary Array", {
# if the type is given, we just need a list of raw vectors
bin <- list(as.raw(1:10), as.raw(1:10))
expect_array_roundtrip(bin, binary(), as = binary())
expect_array_roundtrip(bin, large_binary(), as = large_binary())
expect_array_roundtrip(bin, fixed_size_binary(10), as = fixed_size_binary(10))
bin[[1L]] <- as.raw(1:20)
expect_error(Array$create(bin, fixed_size_binary(10)))
# otherwise the arrow type is deduced from the R classes
bin <- vctrs::new_vctr(
list(as.raw(1:10), as.raw(11:20)),
class = "arrow_binary"
)
expect_array_roundtrip(bin, binary())
bin <- vctrs::new_vctr(
list(as.raw(1:10), as.raw(11:20)),
class = "arrow_large_binary"
)
expect_array_roundtrip(bin, large_binary())
bin <- vctrs::new_vctr(
list(as.raw(1:10), as.raw(11:20)),
class = "arrow_fixed_size_binary",
byte_width = 10L
)
expect_array_roundtrip(bin, fixed_size_binary(byte_width = 10))
# degenerate cases
bin <- vctrs::new_vctr(
list(1:10),
class = "arrow_binary"
)
expect_error(Array$create(bin))
bin <- vctrs::new_vctr(
list(1:10),
ptype = raw(),
class = "arrow_large_binary"
)
expect_error(Array$create(bin))
bin <- vctrs::new_vctr(
list(1:10),
class = "arrow_fixed_size_binary",
byte_width = 10
)
expect_error(Array$create(bin))
bin <- vctrs::new_vctr(
list(as.raw(1:5)),
class = "arrow_fixed_size_binary",
byte_width = 10
)
expect_error(Array$create(bin))
bin <- vctrs::new_vctr(
list(as.raw(1:5)),
class = "arrow_fixed_size_binary"
)
expect_error(Array$create(bin))
})
test_that("Slice() and RangeEquals()", {
ints <- c(1:10, 101:110, 201:205)
x <- Array$create(ints)
y <- x$Slice(10)
expect_equal(y$type, int32())
expect_equal(length(y), 15L)
expect_as_vector(y, c(101:110, 201:205))
expect_true(x$RangeEquals(y, 10, 24))
expect_false(x$RangeEquals(y, 9, 23))
expect_false(x$RangeEquals(y, 11, 24))
z <- x$Slice(10, 5)
expect_as_vector(z, c(101:105))
expect_true(x$RangeEquals(z, 10, 15, 0))
# Input validation
expect_error(x$Slice("ten"))
expect_error(x$Slice(NA_integer_), "Slice 'offset' cannot be NA")
expect_error(x$Slice(NA), "Slice 'offset' cannot be NA")
expect_error(x$Slice(10, "ten"))
expect_error(x$Slice(10, NA_integer_), "Slice 'length' cannot be NA")
expect_error(x$Slice(NA_integer_, NA_integer_), "Slice 'offset' cannot be NA")
expect_error(x$Slice(c(10, 10)))
expect_error(x$Slice(10, c(10, 10)))
expect_error(x$Slice(1000), "Slice 'offset' greater than array length")
expect_error(x$Slice(-1), "Slice 'offset' cannot be negative")
expect_error(z$Slice(10, 10), "Slice 'offset' greater than array length")
expect_error(x$Slice(10, -1), "Slice 'length' cannot be negative")
expect_error(x$Slice(-1, 10), "Slice 'offset' cannot be negative")
expect_warning(x$Slice(10, 15), NA)
expect_warning(
overslice <- x$Slice(10, 16),
"Slice 'length' greater than available length"
)
expect_equal(length(overslice), 15)
expect_warning(z$Slice(2, 10), "Slice 'length' greater than available length")
expect_error(x$RangeEquals(10, 24, 0), 'other must be a "Array"')
expect_error(x$RangeEquals(y, NA, 24), "'start_idx' cannot be NA")
expect_error(x$RangeEquals(y, 10, NA), "'end_idx' cannot be NA")
expect_error(x$RangeEquals(y, 10, 24, NA), "'other_start_idx' cannot be NA")
expect_error(x$RangeEquals(y, "ten", 24))
# TODO (if anyone uses RangeEquals)
# expect_error(x$RangeEquals(y, 10, 2400, 0)) # does not error
# expect_error(x$RangeEquals(y, 1000, 24, 0)) # does not error
# expect_error(x$RangeEquals(y, 10, 24, 1000)) # does not error
})
test_that("Double Array", {
dbls <- c(1, 2, 3, 4, 5, 6)
x_dbl <- expect_array_roundtrip(dbls, float64())
})
test_that("Array print method includes type", {
x <- Array$create(c(1:10, 1:10, 1:5))
expect_output(print(x), "Array\n<int32>\n[\n", fixed = TRUE)
})
test_that("Array supports NA", {
x_int <- Array$create(as.integer(c(1:10, NA)))
x_dbl <- Array$create(as.numeric(c(1:10, NA)))
expect_true(x_int$IsValid(0))
expect_true(x_dbl$IsValid(0L))
expect_true(x_int$IsNull(10L))
expect_true(x_dbl$IsNull(10))
expect_equal(as.vector(is.na(x_int)), c(rep(FALSE, 10), TRUE))
expect_equal(as.vector(is.na(x_dbl)), c(rep(FALSE, 10), TRUE))
# Input validation
expect_error(x_int$IsValid("ten"))
expect_error(x_int$IsNull("ten"))
expect_error(x_int$IsValid(c(10, 10)))
expect_error(x_int$IsNull(c(10, 10)))
expect_error(x_int$IsValid(NA), "'i' cannot be NA")
expect_error(x_int$IsNull(NA), "'i' cannot be NA")
expect_error(x_int$IsValid(1000), "subscript out of bounds")
expect_error(x_int$IsValid(-1), "subscript out of bounds")
expect_error(x_int$IsNull(1000), "subscript out of bounds")
expect_error(x_int$IsNull(-1), "subscript out of bounds")
})
test_that("Array support null type (ARROW-7064)", {
expect_array_roundtrip(vctrs::unspecified(10), null())
})
test_that("Array supports logical vectors (ARROW-3341)", {
# with NA
x <- sample(c(TRUE, FALSE, NA), 1000, replace = TRUE)
expect_array_roundtrip(x, bool())
# without NA
x <- sample(c(TRUE, FALSE), 1000, replace = TRUE)
expect_array_roundtrip(x, bool())
})
test_that("Array supports character vectors (ARROW-3339)", {
# without NA
expect_array_roundtrip(c("itsy", "bitsy", "spider"), utf8())
expect_array_roundtrip(c("itsy", "bitsy", "spider"), large_utf8(), as = large_utf8())
# with NA
expect_array_roundtrip(c("itsy", NA, "spider"), utf8())
expect_array_roundtrip(c("itsy", NA, "spider"), large_utf8(), as = large_utf8())
})
test_that("Character vectors > 2GB become large_utf8", {
skip_on_cran()
skip_if_not_running_large_memory_tests()
big <- make_big_string()
expect_array_roundtrip(big, large_utf8())
})
test_that("empty arrays are supported", {
expect_array_roundtrip(character(), utf8())
expect_array_roundtrip(character(), large_utf8(), as = large_utf8())
expect_array_roundtrip(integer(), int32())
expect_array_roundtrip(numeric(), float64())
expect_array_roundtrip(factor(character()), dictionary(int8(), utf8()))
expect_array_roundtrip(logical(), bool())
})
test_that("array with all nulls are supported", {
nas <- c(NA, NA)
expect_array_roundtrip(as.character(nas), utf8())
expect_array_roundtrip(as.integer(nas), int32())
expect_array_roundtrip(as.numeric(nas), float64())
expect_array_roundtrip(as.factor(nas), dictionary(int8(), utf8()))
expect_array_roundtrip(as.logical(nas), bool())
})
test_that("Array supports unordered factors (ARROW-3355)", {
# without NA
f <- factor(c("itsy", "bitsy", "spider", "spider"))
expect_array_roundtrip(f, dictionary(int8(), utf8()))
# with NA
f <- factor(c("itsy", "bitsy", NA, "spider", "spider"))
expect_array_roundtrip(f, dictionary(int8(), utf8()))
})
test_that("Array supports ordered factors (ARROW-3355)", {
# without NA
f <- ordered(c("itsy", "bitsy", "spider", "spider"))
arr_fac <- expect_array_roundtrip(f, dictionary(int8(), utf8(), ordered = TRUE))
expect_true(arr_fac$ordered)
# with NA
f <- ordered(c("itsy", "bitsy", NA, "spider", "spider"))
expect_array_roundtrip(f, dictionary(int8(), utf8(), ordered = TRUE))
})
test_that("array supports Date (ARROW-3340)", {
d <- Sys.Date() + 1:10
expect_array_roundtrip(d, date32())
d[5] <- NA
expect_array_roundtrip(d, date32())
})
test_that("array supports POSIXct (ARROW-3340)", {
times <- lubridate::ymd_hms("2018-10-07 19:04:05") + 1:10
expect_array_roundtrip(times, timestamp("us", "UTC"))
times[5] <- NA
expect_array_roundtrip(times, timestamp("us", "UTC"))
times2 <- lubridate::ymd_hms("2018-10-07 19:04:05", tz = "US/Eastern") + 1:10
expect_array_roundtrip(times2, timestamp("us", "US/Eastern"))
})
test_that("array supports POSIXct without timezone", {
# Make sure timezone is not set
withr::with_envvar(c(TZ = ""), {
times <- strptime("2019-02-03 12:34:56", format="%Y-%m-%d %H:%M:%S") + 1:10
expect_array_roundtrip(times, timestamp("us", ""))
# Also test the INTSXP code path
skip("Ingest_POSIXct only implemented for REALSXP")
times_int <- as.integer(times)
attributes(times_int) <- attributes(times)
expect_array_roundtrip(times_int, timestamp("us", ""))
})
})
test_that("Timezone handling in Arrow roundtrip (ARROW-3543)", {
# Write a feather file as that's what the initial bug report used
df <- tibble::tibble(
no_tz = lubridate::ymd_hms("2018-10-07 19:04:05") + 1:10,
yes_tz = lubridate::ymd_hms("2018-10-07 19:04:05", tz = "Asia/Pyongyang") + 1:10
)
if (!identical(Sys.timezone(), "Asia/Pyongyang")) {
# Confirming that the columns are in fact different
expect_false(any(df$no_tz == df$yes_tz))
}
feather_file <- tempfile()
on.exit(unlink(feather_file))
write_feather(df, feather_file)
expect_identical(read_feather(feather_file), df)
})
test_that("array supports integer64", {
x <- bit64::as.integer64(1:10) + MAX_INT
expect_array_roundtrip(x, int64())
x[4] <- NA
expect_array_roundtrip(x, int64())
# all NA int64 (ARROW-3795)
all_na <- Array$create(bit64::as.integer64(NA))
expect_type_equal(all_na, int64())
expect_true(as.vector(is.na(all_na)))
})
test_that("array supports difftime", {
time <- hms::hms(56, 34, 12)
expect_array_roundtrip(c(time, time), time32("s"))
expect_array_roundtrip(vctrs::vec_c(NA, time), time32("s"))
})
test_that("support for NaN (ARROW-3615)", {
x <- c(1, NA, NaN, -1)
y <- Array$create(x)
expect_true(y$IsValid(2))
expect_equal(y$null_count, 1L)
})
test_that("integer types casts (ARROW-3741)", {
# Defining some type groups for use here and in the following tests
int_types <- c(int8(), int16(), int32(), int64())
uint_types <- c(uint8(), uint16(), uint32(), uint64())
float_types <- c(float32(), float64()) # float16() not really supported in C++ yet
a <- Array$create(c(1:10, NA))
for (type in c(int_types, uint_types)) {
casted <- a$cast(type)
expect_equal(casted$type, type)
expect_identical(as.vector(is.na(casted)), c(rep(FALSE, 10), TRUE))
}
})
test_that("integer types cast safety (ARROW-3741, ARROW-5541)", {
a <- Array$create(-(1:10))
for (type in uint_types) {
expect_error(a$cast(type), regexp = "Integer value -1 not in range")
expect_error(a$cast(type, safe = FALSE), NA)
}
})
test_that("float types casts (ARROW-3741)", {
x <- c(1, 2, 3, NA)
a <- Array$create(x)
for (type in float_types) {
casted <- a$cast(type)
expect_equal(casted$type, type)
expect_identical(as.vector(is.na(casted)), c(rep(FALSE, 3), TRUE))
expect_identical(as.vector(casted), x)
}
})
test_that("cast to half float works", {
skip("Need halffloat support: https://issues.apache.org/jira/browse/ARROW-3802")
a <- Array$create(1:4)
a_f16 <- a$cast(float16())
expect_type_equal(a_f16$type, float16())
})
test_that("cast input validation", {
a <- Array$create(1:4)
expect_error(a$cast("not a type"), "type must be a DataType, not character")
})
test_that("Array$create() supports the type= argument. conversion from INTSXP and int64 to all int types", {
num_int32 <- 12L
num_int64 <- bit64::as.integer64(10)
types <- c(
int_types,
uint_types,
float_types,
double() # a base R function, not an Arrow type constructor, but should alias float64()
)
for (type in types) {
expect_type_equal(Array$create(num_int32, type = type)$type, as_type(type))
expect_type_equal(Array$create(num_int64, type = type)$type, as_type(type))
}
# Input validation
expect_error(
Array$create(5, type = "not a type"),
"type must be a DataType, not character"
)
})
test_that("Array$create() aborts on overflow", {
expect_error(Array$create(128L, type = int8()))
expect_error(Array$create(-129L, type = int8()))
expect_error(Array$create(256L, type = uint8()))
expect_error(Array$create(-1L, type = uint8()))
expect_error(Array$create(32768L, type = int16()))
expect_error(Array$create(-32769L, type = int16()))
expect_error(Array$create(65536L, type = uint16()))
expect_error(Array$create(-1L, type = uint16()))
expect_error(Array$create(bit64::as.integer64(2^31), type = int32()))
expect_error(Array$create(bit64::as.integer64(2^32), type = uint32()))
})
test_that("Array$create() does not convert doubles to integer", {
for (type in c(int_types, uint_types)) {
a <- Array$create(10, type = type)
expect_type_equal(a$type, type)
expect_true(as.vector(a) == 10L)
}
})
test_that("Array$create() converts raw vectors to uint8 arrays (ARROW-3794)", {
expect_type_equal(Array$create(as.raw(1:10))$type, uint8())
})
test_that("Array<int8>$as_vector() converts to integer (ARROW-3794)", {
i8 <- (-128):127
a <- Array$create(i8)$cast(int8())
expect_type_equal(a, int8())
expect_equal(as.vector(a), i8)
u8 <- 0:255
a <- Array$create(u8)$cast(uint8())
expect_type_equal(a, uint8())
expect_equal(as.vector(a), u8)
})
test_that("Arrays of {,u}int{32,64} convert to integer if they can fit", {
u32 <- Array$create(1L)$cast(uint32())
expect_identical(as.vector(u32), 1L)
u64 <- Array$create(1L)$cast(uint64())
expect_identical(as.vector(u64), 1L)
i64 <- Array$create(bit64::as.integer64(1:10))
expect_identical(as.vector(i64), 1:10)
})
test_that("Arrays of uint{32,64} convert to numeric if they can't fit integer", {
u32 <- Array$create(bit64::as.integer64(1) + MAX_INT)$cast(uint32())
expect_identical(as.vector(u32), 1 + MAX_INT)
u64 <- Array$create(bit64::as.integer64(1) + MAX_INT)$cast(uint64())
expect_identical(as.vector(u64), 1 + MAX_INT)
})
test_that("Array$create() recognise arrow::Array (ARROW-3815)", {
a <- Array$create(1:10)
expect_equal(a, Array$create(a))
})
test_that("Array$create() handles data frame -> struct arrays (ARROW-3811)", {
df <- tibble::tibble(x = 1:10, y = x / 2, z = letters[1:10])
a <- Array$create(df)
expect_type_equal(a$type, struct(x = int32(), y = float64(), z = utf8()))
expect_equivalent(as.vector(a), df)
df <- structure(list(col = structure(list(structure(list(list(structure(1))), class = "inner")), class = "outer")), class = "data.frame", row.names = c(NA, -1L))
a <- Array$create(df)
expect_type_equal(a$type, struct(col = list_of(list_of(list_of(float64())))))
expect_equivalent(as.vector(a), df)
})
test_that("StructArray methods", {
df <- tibble::tibble(x = 1:10, y = x / 2, z = letters[1:10])
a <- Array$create(df)
expect_equal(a$x, Array$create(df$x))
expect_equal(a[["x"]], Array$create(df$x))
expect_equal(a[[1]], Array$create(df$x))
expect_identical(names(a), c("x", "y", "z"))
expect_identical(dim(a), c(10L, 3L))
})
test_that("Array$create() can handle data frame with custom struct type (not inferred)", {
df <- tibble::tibble(x = 1:10, y = 1:10)
type <- struct(x = float64(), y = int16())
a <- Array$create(df, type = type)
expect_type_equal(a$type, type)
type <- struct(x = float64(), y = int16(), z = int32())
expect_error(Array$create(df, type = type), regexp = "Number of fields in struct.* incompatible with number of columns in the data frame")
type <- struct(y = int16(), x = float64())
expect_error(Array$create(df, type = type), regexp = "Field name in position.*does not match the name of the column of the data frame")
type <- struct(x = float64(), y = utf8())
expect_error(Array$create(df, type = type), regexp = "Invalid")
})
test_that("Array$create() supports tibble with no columns (ARROW-8354)", {
df <- tibble::tibble()
expect_equal(Array$create(df)$as_vector(), df)
})
test_that("Array$create() handles vector -> list arrays (ARROW-7662)", {
# Should be able to create an empty list with a type hint.
expect_r6_class(Array$create(list(), list_of(bool())), "ListArray")
# logical
expect_array_roundtrip(list(NA), list_of(bool()))
expect_array_roundtrip(list(logical(0)), list_of(bool()))
expect_array_roundtrip(list(c(TRUE), c(FALSE), c(FALSE, TRUE)), list_of(bool()))
expect_array_roundtrip(list(c(TRUE), c(FALSE), NA, logical(0), c(FALSE, NA, TRUE)), list_of(bool()))
# integer
expect_array_roundtrip(list(NA_integer_), list_of(int32()))
expect_array_roundtrip(list(integer(0)), list_of(int32()))
expect_array_roundtrip(list(1:2, 3:4, 12:18), list_of(int32()))
expect_array_roundtrip(list(c(1:2), NA_integer_, integer(0), c(12:18, NA_integer_)), list_of(int32()))
# numeric
expect_array_roundtrip(list(NA_real_), list_of(float64()))
expect_array_roundtrip(list(numeric(0)), list_of(float64()))
expect_array_roundtrip(list(1, c(2, 3), 4), list_of(float64()))
expect_array_roundtrip(list(1, numeric(0), c(2, 3, NA_real_), 4), list_of(float64()))
# character
expect_array_roundtrip(list(NA_character_), list_of(utf8()))
expect_array_roundtrip(list(character(0)), list_of(utf8()))
expect_array_roundtrip(list("itsy", c("bitsy", "spider"), c("is")), list_of(utf8()))
expect_array_roundtrip(list("itsy", character(0), c("bitsy", "spider", NA_character_), c("is")), list_of(utf8()))
# factor
expect_array_roundtrip(list(factor(c("b", "a"), levels = c("a", "b"))), list_of(dictionary(int8(), utf8())))
expect_array_roundtrip(list(factor(NA, levels = c("a", "b"))), list_of(dictionary(int8(), utf8())))
# struct
expect_array_roundtrip(
list(tibble::tibble(a = integer(0), b = integer(0), c = character(0), d = logical(0))),
list_of(struct(a = int32(), b = int32(), c = utf8(), d = bool()))
)
expect_array_roundtrip(
list(tibble::tibble(a = list(integer()))),
list_of(struct(a = list_of(int32())))
)
# degenerated data frame
df <- structure(list(x = 1:2, y = 1), class = "data.frame", row.names = 1:2)
expect_error(Array$create(list(df)))
})
test_that("Array$create() handles vector -> large list arrays", {
# Should be able to create an empty list with a type hint.
expect_r6_class(Array$create(list(), type = large_list_of(bool())), "LargeListArray")
# logical
expect_array_roundtrip(list(NA), large_list_of(bool()), as = large_list_of(bool()))
expect_array_roundtrip(list(logical(0)), large_list_of(bool()), as = large_list_of(bool()))
expect_array_roundtrip(list(c(TRUE), c(FALSE), c(FALSE, TRUE)), large_list_of(bool()), as = large_list_of(bool()))
expect_array_roundtrip(list(c(TRUE), c(FALSE), NA, logical(0), c(FALSE, NA, TRUE)), large_list_of(bool()), as = large_list_of(bool()))
# integer
expect_array_roundtrip(list(NA_integer_), large_list_of(int32()), as = large_list_of(int32()))
expect_array_roundtrip(list(integer(0)), large_list_of(int32()), as = large_list_of(int32()))
expect_array_roundtrip(list(1:2, 3:4, 12:18), large_list_of(int32()), as = large_list_of(int32()))
expect_array_roundtrip(list(c(1:2), NA_integer_, integer(0), c(12:18, NA_integer_)), large_list_of(int32()), as = large_list_of(int32()))
# numeric
expect_array_roundtrip(list(NA_real_), large_list_of(float64()), as = large_list_of(float64()))
expect_array_roundtrip(list(numeric(0)), large_list_of(float64()), as = large_list_of(float64()))
expect_array_roundtrip(list(1, c(2, 3), 4), large_list_of(float64()), as = large_list_of(float64()))
expect_array_roundtrip(list(1, numeric(0), c(2, 3, NA_real_), 4), large_list_of(float64()), as = large_list_of(float64()))
# character
expect_array_roundtrip(list(NA_character_), large_list_of(utf8()), as = large_list_of(utf8()))
expect_array_roundtrip(list(character(0)), large_list_of(utf8()), as = large_list_of(utf8()))
expect_array_roundtrip(list("itsy", c("bitsy", "spider"), c("is")), large_list_of(utf8()), as = large_list_of(utf8()))
expect_array_roundtrip(list("itsy", character(0), c("bitsy", "spider", NA_character_), c("is")), large_list_of(utf8()), as = large_list_of(utf8()))
# factor
expect_array_roundtrip(list(factor(c("b", "a"), levels = c("a", "b"))), large_list_of(dictionary(int8(), utf8())), as = large_list_of(dictionary(int8(), utf8())))
expect_array_roundtrip(list(factor(NA, levels = c("a", "b"))), large_list_of(dictionary(int8(), utf8())), as = large_list_of(dictionary(int8(), utf8())))
# struct
expect_array_roundtrip(
list(tibble::tibble(a = integer(0), b = integer(0), c = character(0), d = logical(0))),
large_list_of(struct(a = int32(), b = int32(), c = utf8(), d = bool())),
as = large_list_of(struct(a = int32(), b = int32(), c = utf8(), d = bool()))
)
expect_array_roundtrip(
list(tibble::tibble(a = list(integer()))),
large_list_of(struct(a = list_of(int32()))),
as = large_list_of(struct(a = list_of(int32())))
)
})
test_that("Array$create() handles vector -> fixed size list arrays", {
# Should be able to create an empty list with a type hint.
expect_r6_class(Array$create(list(), type = fixed_size_list_of(bool(), 20)), "FixedSizeListArray")
# logical
expect_array_roundtrip(list(NA), fixed_size_list_of(bool(), 1L), as = fixed_size_list_of(bool(), 1L))
expect_array_roundtrip(list(c(TRUE, FALSE), c(FALSE, TRUE)), fixed_size_list_of(bool(), 2L), as = fixed_size_list_of(bool(), 2L))
expect_array_roundtrip(list(c(TRUE), c(FALSE), NA), fixed_size_list_of(bool(), 1L), as = fixed_size_list_of(bool(), 1L))
# integer
expect_array_roundtrip(list(NA_integer_), fixed_size_list_of(int32(), 1L), as = fixed_size_list_of(int32(), 1L))
expect_array_roundtrip(list(1:2, 3:4, 11:12), fixed_size_list_of(int32(), 2L), as = fixed_size_list_of(int32(), 2L))
expect_array_roundtrip(list(c(1:2), c(NA_integer_, 3L)), fixed_size_list_of(int32(), 2L), as = fixed_size_list_of(int32(), 2L))
# numeric
expect_array_roundtrip(list(NA_real_), fixed_size_list_of(float64(), 1L), as = fixed_size_list_of(float64(), 1L))
expect_array_roundtrip(list(c(1,2), c(2, 3)), fixed_size_list_of(float64(), 2L), as = fixed_size_list_of(float64(), 2L))
expect_array_roundtrip(list(c(1,2), c(NA_real_, 4)), fixed_size_list_of(float64(), 2L), as = fixed_size_list_of(float64(), 2L))
# character
expect_array_roundtrip(list(NA_character_), fixed_size_list_of(utf8(), 1L), as = fixed_size_list_of(utf8(), 1L))
expect_array_roundtrip(list(c("itsy", "bitsy"), c("spider", "is"), c(NA_character_, NA_character_), c("", "")), fixed_size_list_of(utf8(), 2L), as = fixed_size_list_of(utf8(), 2L))
# factor
expect_array_roundtrip(list(factor(c("b", "a"), levels = c("a", "b"))), fixed_size_list_of(dictionary(int8(), utf8()), 2L), as = fixed_size_list_of(dictionary(int8(), utf8()), 2L))
# struct
expect_array_roundtrip(
list(tibble::tibble(a = 1L, b = 1L, c = "", d = TRUE)),
fixed_size_list_of(struct(a = int32(), b = int32(), c = utf8(), d = bool()), 1L),
as = fixed_size_list_of(struct(a = int32(), b = int32(), c = utf8(), d = bool()), 1L)
)
expect_array_roundtrip(
list(tibble::tibble(a = list(1L))),
fixed_size_list_of(struct(a = list_of(int32())), 1L),
as = fixed_size_list_of(struct(a = list_of(int32())), 1L)
)
expect_array_roundtrip(
list(tibble::tibble(a = list(1L))),
list_of(struct(a = fixed_size_list_of(int32(), 1L))),
as = list_of(struct(a = fixed_size_list_of(int32(), 1L)))
)
})
test_that("Handling string data with embedded nuls", {
raws <- structure(list(
as.raw(c(0x70, 0x65, 0x72, 0x73, 0x6f, 0x6e)),
as.raw(c(0x77, 0x6f, 0x6d, 0x61, 0x6e)),
as.raw(c(0x6d, 0x61, 0x00, 0x6e)), # <-- there's your nul, 0x00
as.raw(c(0x66, 0x00, 0x00, 0x61, 0x00, 0x6e)), # multiple nuls
as.raw(c(0x63, 0x61, 0x6d, 0x65, 0x72, 0x61)),
as.raw(c(0x74, 0x76))),
class = c("arrow_binary", "vctrs_vctr", "list"))
expect_error(
rawToChar(raws[[3]]),
"embedded nul in string: 'ma\\0n'", # See?
fixed = TRUE
)
array_with_nul <- Array$create(raws)$cast(utf8())
expect_error(
as.vector(array_with_nul),
"embedded nul in string: 'ma\\0n'; to strip nuls when converting from Arrow to R, set options(arrow.skip_nul = TRUE)",
fixed = TRUE
)
withr::with_options(list(arrow.skip_nul = TRUE), {
expect_warning(
expect_identical(
as.vector(array_with_nul),
c("person", "woman", "man", "fan", "camera", "tv")
),
"Stripping '\\0' (nul) from character vector",
fixed = TRUE
)
})
})
test_that("Array$create() should have helpful error", {
expect_error(Array$create(list(numeric(0)), list_of(bool())), "Expecting a logical vector")
lgl <- logical(0)
int <- integer(0)
num <- numeric(0)
char <- character(0)
expect_error(Array$create(list()), "Requires at least one element to infer")
expect_error(Array$create(list(lgl, lgl, int)), "Expecting a logical vector")
expect_error(Array$create(list(char, num, char)), "Expecting a character vector")
})
test_that("Array$View() (ARROW-6542)", {
a <- Array$create(1:3)
b <- a$View(float32())
expect_equal(b$type, float32())
expect_equal(length(b), 3L)
# Input validation
expect_error(a$View("not a type"), "type must be a DataType, not character")
})
test_that("Array$Validate()", {
a <- Array$create(1:10)
expect_error(a$Validate(), NA)
})
test_that("is.Array", {
a <- Array$create(1, type = int32())
expect_true(is.Array(a))
expect_true(is.Array(a, "int32"))
expect_true(is.Array(a, c("int32", "int16")))
expect_false(is.Array(a, "utf8"))
  expect_true(is.Array(a$View(float32()), "float32"))
expect_false(is.Array(1))
expect_true(is.Array(ChunkedArray$create(1, 2)))
})
test_that("Array$Take()", {
a <- Array$create(10:20)
expect_equal(as.vector(a$Take(c(4, 2))), c(14, 12))
})
test_that("[ method on Array", {
vec <- 11:20
a <- Array$create(vec)
expect_as_vector(a[5:9], vec[5:9])
expect_as_vector(a[c(9, 3, 5)], vec[c(9, 3, 5)])
expect_as_vector(a[rep(c(TRUE, FALSE), 5)], vec[c(1, 3, 5, 7, 9)])
expect_as_vector(a[rep(c(TRUE, FALSE, NA, FALSE, TRUE), 2)], c(11, NA, 15, 16, NA, 20))
expect_as_vector(a[-4], vec[-4])
expect_as_vector(a[-1], vec[-1])
})
test_that("[ accepts Arrays and otherwise handles bad input", {
vec <- 11:20
a <- Array$create(vec)
ind <- c(9, 3, 5)
expect_error(
a[Array$create(ind)],
"Cannot extract rows with an Array of type double"
)
expect_as_vector(a[Array$create(ind - 1, type = int8())], vec[ind])
expect_as_vector(a[Array$create(ind - 1, type = uint8())], vec[ind])
expect_as_vector(a[ChunkedArray$create(8, 2, 4, type = uint8())], vec[ind])
filt <- seq_along(vec) %in% ind
expect_as_vector(a[Array$create(filt)], vec[filt])
expect_error(
a["string"],
"Cannot extract rows with an object of class character"
)
})
test_that("%in% works on dictionary arrays", {
a1 <- Array$create(as.factor(c("A", "B", "C")))
a2 <- DictionaryArray$create(c(0L, 1L, 2L), c(4.5, 3.2, 1.1))
c1 <- Array$create(c(FALSE, TRUE, FALSE))
c2 <- Array$create(c(FALSE, FALSE, FALSE))
b1 <- Array$create("B")
b2 <- Array$create(5.4)
expect_equal(is_in(a1, b1), c1)
expect_equal(is_in(a2, b2), c2)
expect_error(is_in(a1, b2))
})
test_that("[ accepts Expressions", {
vec <- 11:20
a <- Array$create(vec)
b <- Array$create(1:10)
expect_as_vector(a[b > 4], vec[5:10])
})
test_that("Array head/tail", {
vec <- 11:20
a <- Array$create(vec)
expect_as_vector(head(a), head(vec))
expect_as_vector(head(a, 4), head(vec, 4))
expect_as_vector(head(a, 40), head(vec, 40))
expect_as_vector(head(a, -4), head(vec, -4))
expect_as_vector(head(a, -40), head(vec, -40))
expect_as_vector(tail(a), tail(vec))
expect_as_vector(tail(a, 4), tail(vec, 4))
expect_as_vector(tail(a, 40), tail(vec, 40))
expect_as_vector(tail(a, -40), tail(vec, -40))
})
test_that("Dictionary array: create from arrays, not factor", {
a <- DictionaryArray$create(c(2L, 1L, 1L, 2L, 0L), c(4.5, 3.2, 1.1))
expect_equal(a$type, dictionary(int32(), float64()))
})
test_that("Dictionary array: translate to R when dict isn't string", {
a <- DictionaryArray$create(c(2L, 1L, 1L, 2L, 0L), c(4.5, 3.2, 1.1))
expect_warning(
expect_identical(
as.vector(a),
factor(c(3, 2, 2, 3, 1), labels = c("4.5", "3.2", "1.1"))
)
)
})
test_that("Array$Equals", {
vec <- 11:20
a <- Array$create(vec)
b <- Array$create(vec)
d <- Array$create(3:4)
expect_equal(a, b)
expect_true(a$Equals(b))
expect_false(a$Equals(vec))
expect_false(a$Equals(d))
})
test_that("Array$ApproxEquals", {
vec <- c(1.0000000000001, 2.400000000000001)
a <- Array$create(vec)
b <- Array$create(round(vec, 1))
expect_false(a$Equals(b))
expect_true(a$ApproxEquals(b))
expect_false(a$ApproxEquals(vec))
})
test_that("auto int64 conversion to int can be disabled (ARROW-10093)", {
withr::with_options(list(arrow.int64_downcast = FALSE), {
a <- Array$create(1:10, int64())
expect_true(inherits(a$as_vector(), "integer64"))
batch <- RecordBatch$create(x = a)
expect_true(inherits(as.data.frame(batch)$x, "integer64"))
tab <- Table$create(x = a)
    expect_true(inherits(as.data.frame(tab)$x, "integer64"))
})
})
|
# Do transmission tree reconstruction & generate priors -----
source("R/utils.R")
# Packages
library(here)
library(treerabid) # devtools::install_github("mrajeev08/treerabid")
library(data.table)
library(lubridate)
library(dplyr)
library(magrittr)
library(foreach)
library(iterators)
library(doRNG)
library(simrabid)
# Data ----
load(file = "data/sd_case_data_trees.rda")
# clean up (no cases with NA location or time & filter to start/end dates)
sd_case_data_trees %>%
mutate(symptoms_started = dmy(symptoms_started)) %>%
dplyr::filter(!is.na(symptoms_started),
!is.na(utm_easting),
                !is.na(utm_northing),
symptoms_started >= ymd("2002-01-01"),
symptoms_started <= ymd("2020-12-31")) %>%
# get uncertainty in days
mutate(days_uncertain = case_when(symptoms_started_accuracy == "+/- 14 days" ~ 14L,
symptoms_started_accuracy == "+/- 7 days" ~ 7L,
symptoms_started_accuracy == "+/- 28 days" ~ 28L,
symptoms_started_accuracy == "0" ~ 0L,
TRUE ~ 0L)) -> case_dt
# Get reconstructed tree & incursions ----
cl <- parallel::makeCluster(parallel::detectCores() - 1)
doParallel::registerDoParallel(cl)
ttrees <-
boot_trees(id_case = case_dt$id,
id_biter = case_dt$biter_id,
x_coord = case_dt$utm_easting,
y_coord = case_dt$utm_northing,
owned = FALSE,
date_symptoms = case_dt$symptoms_started, # needs to be in a date class
days_uncertain = case_dt$days_uncertain,
use_known_source = TRUE,
prune = TRUE,
si_fun = treerabid::si_lnorm1,
dist_fun = treerabid::dist_lnorm1,
params = list(DK_meanlog = param_defaults$disp_meanlog,
DK_sdlog = param_defaults$disp_sdlog,
SI_meanlog = param_defaults$serial_meanlog,
SI_sdlog = param_defaults$serial_sdlog),
cutoff = 0.95,
N = 1000,
seed = 1345)
parallel::stopCluster(cl)
# Summarize the trees
links_all <- build_all_links(ttrees, N = 1000)
incs_all <- links_all[is.na(id_progen)]
links_consensus <- build_consensus_links(links_all)
tree_consensus <- build_consensus_tree(links_consensus, ttrees)
# Write out files of the trees and the links (consensus & all)
write_create(tree_consensus, here("analysis/out/ttrees/consensus_tree_ln0.95.csv"),
fwrite)
write_create(links_consensus, here("analysis/out/ttrees/consensus_links_ln0.95.csv"),
fwrite)
write_create(incs_all, here("analysis/out/ttrees/incs_all_ln0.95.csv"),
fwrite)
# write out incursions
load("data/sd_case_data.rda")
incs <- links_consensus[is.na(id_progen)]
incs %<>%
as_tibble() %>%
left_join(sd_case_data, by = c("id_case" = "id")) %>%
mutate(date = dmy(symptoms_started)) %>%
select(id = id_case, date,
x_coord = utm_easting, y_coord = utm_northing,
prob)
write_create(incs, here("analysis/out/incursions.csv"), fwrite)
# Get priors
incs %>%
mutate(year = year(date)) %>%
group_by(year) %>%
summarize(n = n()) %$% mean(n)/52 -> incs_per_week
sd_case_data %>%
mutate(year = year(dmy(symptoms_started))) %>%
group_by(year) %>% summarize(n = n()) %$% mean(n)/52 -> cases_per_week
sd_incs <- cases_per_week/ 0.85 * 0.5 / 1.96 # 0.96
priors <- list(R0 = function(n) exp(rnorm(n, mean = 0.2, sd = 0.3)), # centered around 1.2
iota = function(n) exp(rnorm(n, mean = 0, sd = 0.5)), # centered around 1
k = function(n) exp(rnorm(n, mean = 0.25, sd = 0.25))) # centered around 1.35
write_create(priors, here("analysis/out/priors.rds"), saveRDS)
/analysis/scripts/00_trees_priors.R | no_license | mrajeev08/dynamicSD | R
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/imports.most.R
\name{imports.most}
\alias{imports.most}
\title{Mostly imported packages}
\usage{
imports.most(imports, n = 10, year = FALSE)
}
\arguments{
\item{imports}{results of function imports()}
\item{n}{the most frequency number}
\item{year}{logical, default is FALSE. Whether to include years}
}
\value{
a data.frame contains package, frequency or year
}
\description{
Mostly imported packages
}
\examples{
\donttest{
d <- loadData()
i <- imports(d)
imt <- imports.most(imports = i,20)
library(ggplot2)
ggplot(imt,aes(Imports,Freq))+
geom_col()+
coord_flip()+
theme(axis.text = element_text(size=16))
imt <- imports.most(i,10,T)
imt <- imt[imt$year >= 2011 & imt$year <= 2020,]
# the latest 10 years
library(ggplot2)
library(tidytext)
ggplot(imt,aes(reorder_within(Imports,Freq,year),Freq))+
geom_col()+
scale_x_reordered() +
facet_wrap(~year,scales="free")+
coord_flip()+
theme(
axis.text = element_text(size=16),
strip.text.x = element_text(size = 18,
colour = "red")
)+
xlab(NULL)+ylab(NULL)
}
}
/man/imports.most.Rd | no_license | yikeshu0611/packagomatrics | R
library(coin)
absoluteMeanDifferences <- function(group1,group2){
return(abs(mean(group1)-mean(group2)))
}
permTest <- function(group1,group2,permutations=1000){
testStatistic <- absoluteMeanDifferences
observed <- testStatistic(group1,group2)
allValues <- c(group1,group2)
groupMemberShips <- c(rep(TRUE,length(group1)),rep(FALSE,length(group2)))
hypothetical <- rep(NA,permutations)
for (i in 1:permutations){
curGroupMembersShips <- sample(groupMemberShips)
curGroup1 <- allValues[curGroupMembersShips]
curGroup2 <- allValues[!curGroupMembersShips]
hypothetical[i] <- testStatistic(curGroup1,curGroup2)
}
return(sum(observed<=hypothetical)/permutations)
}
reps <- 1000
mean2 <- 20
mean1 <- 21
typeIerrorsTTest <- 0
typeIerrorsPermut <- 0
typeIIerrorsTTest <- 0
typeIIerrorsPermut <- 0
for (i in 1:reps){ #repeat 1000 times
  ## generate data according to a log-normal
  ## (the normal draws were immediately overwritten, so they are commented out)
  # group1 <- rnorm(n = 20, mean = mean2, sd = 1)
  # group2 <- rnorm(n = 20, mean = mean1, sd = 1)
  group1 <- rlnorm(n = 20, meanlog = mean2, sdlog = 1)
  group2 <- rlnorm(n = 20, meanlog = mean1, sdlog = 1)
test = t.test(group1,group2)
pvalue = test$p.value
#get p-value from t-test
permPValue = permTest(group1, group2)
#get p-value from permutation test
if (abs(mean1 - mean2) == 0){
if (pvalue < 0.05 ){
typeIerrorsTTest = typeIerrorsTTest + 1
}
if (permPValue < 0.05){
typeIerrorsPermut = typeIerrorsPermut + 1
}
}
else{
if (pvalue > 0.05 ){
typeIIerrorsTTest = typeIIerrorsTTest + 1
}
if (permPValue > 0.05){
typeIIerrorsPermut = typeIIerrorsPermut + 1
}
}
#check whether type I error for each method (no mean difference but test concludes difference)
#if yes, increase the respective counter
} #end of for
cat('t-test\n')
cat(sprintf('Type I error rate: %.2f\n',typeIerrorsTTest/reps))
cat(sprintf('Type II error rate: %.2f\n',typeIIerrorsTTest/reps))
cat('Permutation-test\n')
cat(sprintf('Type I error rate: %.2f\n',typeIerrorsPermut/reps))
cat(sprintf('Type II error rate: %.2f\n',typeIIerrorsPermut/reps))
/permutationTest.R | no_license | rushkock/simulation_practice | R
library(TMB)
compile("birthDist.cpp","-O1 -g",DLLFLAGS="")
dyn.load(dynlib("birthDist"))
load("birthidstData.RDat")
obj <- MakeADFun(data,parameters,DLL="birthDist",checkParameterOrder = FALSE)
obj$fn()
obj$gr()
system.time(opt <- nlminb(obj$par,obj$fn,obj$gr,control = list(eval.max = 1e6,maxit = 1e6)))
rep<-sdreport(obj, getJointPrecision=TRUE)
rep.matrix <- summary(rep)
rep.rnames = rownames(rep.matrix)
#Extract estimates
indQ = which(rep.rnames == "PropEst")
indmub = which(rep.rnames == "mub")
indsigmab = which(rep.rnames == "sigmab")
Q = rep.matrix[indQ,1]
Qsd = rep.matrix[indQ,2]
mub = rep.matrix[indmub,1]
mubsd = rep.matrix[indmub,2]
sigmab = rep.matrix[indsigmab,1]
sigmabsd = rep.matrix[indsigmab,2]
xax = seq(mub-3*sigmab,mub+3*sigmab,by = 0.1)
bdist = dnorm(xax,mub,sigmab)
#bdistmin = dnorm(xax,mybmin,sigmab)
#bdistmax = dnorm(xax,mybmax,sigmab)
############################
# Brukes ikke
############################
windows(width = 9,height = 6)
plot(xax,bdist,type = "l",
lwd = 4,
col = "royalblue",
xlab = "Dates in March",
ylab = "Density",
bty = "l")
mubmin = mub - 1.96*mubsd
mubmax = mub + 1.96*mubsd
sigmabmin = sigmab - 1.96*sigmabsd
sigmabmax = sigmab + 1.96*sigmabsd
bdistmin = dnorm(xax,mubmin,sigmab)
bdistmax = dnorm(xax,mubmax,sigmab)
bdistminsig = dnorm(xax,mub,sigmabmin)
bdistmaxsig = dnorm(xax,mub,sigmabmax)
bdistminnew = dnorm(xax,mubmin,sigmabmin)
bdistmaxnew = dnorm(xax,mubmax,sigmabmax)
windows(width = 9,height = 6)
par(mar = c(5.1, 5.1, 4.1, 2.1))
plot(xax,bdist,type = "n",
ylim = c(0,0.5),
lwd = 4,
col = "royalblue",
xlab = "Dates in March",
ylab = "Density",
bty = "l",
main = "Estimated birth distribution",
cex.axis = 1.5,
cex.lab = 1.5)
polygon(x = c(xax,rev(xax)),c(bdistminnew,rev(bdistmaxnew)),border = NA,
col = "lightblue")
polygon(x = c(xax,rev(xax)),c(bdistmin,rev(bdistmax)),
border = NA,
col = "lightblue")
polygon(x = c(xax,rev(xax)),c(bdistminsig,rev(bdistmaxsig)),
border = NA,
col = "lightblue")
lines(xax,bdist,col = "royalblue",lwd = 4)
lines(mub*rep(1,10),seq(0,0.5,length.out = 10),
lwd = 4,lty = 2,col = "black")
##################
# Not used
lines(xax,bdistmin,lwd = 4,lty = 2,col = "royalblue")
lines(xax,bdistmax,lwd = 4,lty = 2,col = "royalblue")
# Not used
lines(xax,bdistminsig,lwd = 4,lty = 2,col = "red")
lines(xax,bdistmaxsig,lwd = 4,lty = 2,col = "red")
lines(xax,bdistminnew,lwd = 4,lty = 2,col = "green")
lines(xax,bdistmaxnew,lwd = 4,lty = 2,col = "green")
###################
# Monte Carlo simulations
Nsim = 10000
muv = rnorm(Nsim,mub,mubsd)
sigmav = rnorm(Nsim,sigmab,sigmabsd)
BirthDistCurves = matrix(0,nrow = Nsim,ncol = length(xax))
windows(width = 9,height = 6)
par(mar = c(5.1, 5.1, 4.1, 2.1))
plot(xax,bdist,type = "n",
ylim = c(0,0.5),
lwd = 4,
col = "royalblue",
xlab = "Dates in March",
ylab = "Density",
bty = "l",
main = "Estimated birth distribution",
cex.axis = 1.5,
cex.lab = 1.5)
for(i in 1:Nsim){
BirthDistCurves[i,] = dnorm(xax,muv[i],sigmav[i])
#points(xax,BirthDistCurves[i,],lwd = 1,col = rgb(red = 0, green = 1, blue = 0, alpha = 0.1))
}
Bdistmin = rep(0,length(xax))
Bdistmax = Bdistmin
for(i in 1:length(xax)){
Bdistmin[i] = quantile(BirthDistCurves[,i],0.025)
Bdistmax[i] = quantile(BirthDistCurves[,i],0.975)
}
windows(width = 9,height = 6)
par(mar = c(5.1, 5.1, 4.1, 2.1))
plot(xax,bdist,type = "n",
ylim = c(0,0.5),
lwd = 4,
col = "royalblue",
xlab = "Dates in March",
ylab = "Density",
bty = "l",
main = "Estimated birth distribution",
cex.axis = 1.5,
cex.lab = 1.5)
polygon(x = c(xax,rev(xax)),c(Bdistmin,rev(Bdistmax)),border = NA,
col = "lightblue")
lines(xax,bdist,col = "royalblue",lwd = 4)
lines(mub*rep(1,10),seq(0,0.5,length.out = 10),
lwd = 4,lty = 2,col = "black")
Report = obj$report()
nn1 = Report$Nout[,1]
nn2 = Report$Nout[,2]
nn3 = Report$Nout[,3]
nn = Report$Nout[,4]
BirthDistCurves = matrix(0,nrow = Nsim,ncol = length(xax))
nn1min = rep(0,length(nn1))
nn1max = nn1min
nn2min = nn1min
nn2max = nn1min
nn3min = nn1min
nn3max = nn1min
nnmin = nn1min
nnmax = nn1min
for(i in 1:length(xax)){
Bdistmin[i] = quantile(BirthDistCurves[,i],0.025)
Bdistmax[i] = quantile(BirthDistCurves[,i],0.975)
}
t_tot = data$ttot
days = data$days
staging = data$staging
windows(height = 7,width = 9)
par(mar=c(6,5,4,5),bg = "white")
plot(t_tot,nn1/nn,type = "l",col = "red",lwd = 4,xlim = c(min(days)-5,max(days)+5),ylim = c(0,1),xlab = "Days since 1. March 2012",ylab = "Proportion",cex.lab = 1.5,cex.main = 1.5,bty = "l")
lines(t_tot,nn2/nn,col = "blue",lwd = 4)
lines(t_tot,nn3/nn,col= "green",lwd = 4)
points(days,staging[,1]/rowSums(staging),bg = "red",pch = 21,cex = 1.5)
points(days,staging[,2]/rowSums(staging),bg = "blue",pch = 22,cex = 1.5)
points(days,staging[,3]/rowSums(staging),bg = "green",pch = 24,cex = 1.5)
legend('right', lwd=2, col=c("red","blue","green"), cex=1.0, c('Newborn/Yellow', 'Thin', 'Fat/Grey'), bty='n')
/scripts/BirthDist3.R | no_license | NorskRegnesentral/pupR | R
#Levels of a given vector
v = c(1, 2, 3, 3, 4, NA, 3, 2, 4, 5, NA, 5)
print("Original vector:")
print(v)
print("Levels of factor of the said vector:")
print(levels(factor(v)))
/lvlofvector.r | permissive | maansisrivastava/Practice-code-R | R
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/SQLContext.R
\name{tableToDF}
\alias{tableToDF}
\title{Create a SparkDataFrame from a SparkSQL table or view}
\usage{
tableToDF(tableName)
}
\arguments{
\item{tableName}{the qualified or unqualified name that designates a table or view. If a database
is specified, it identifies the table/view from the database.
Otherwise, it first attempts to find a temporary view with the given name
and then match the table/view from the current database.}
}
\value{
SparkDataFrame
}
\description{
Returns the specified table or view as a SparkDataFrame. The table or view must already exist or
have already been registered in the SparkSession.
}
\note{
tableToDF since 2.0.0
}
\examples{
\dontrun{
sparkR.session()
path <- "path/to/file.json"
df <- read.json(path)
createOrReplaceTempView(df, "table")
new_df <- tableToDF("table")
}
}
/man/tableToDF.Rd | no_license | cran/SparkR | R
test_that("redist.plot.map works", {
out <- redist.plot.map(shp = iowa, plan = iowa$cd_2010)
expect_true('ggplot' %in% class(out))
iowa_map = redist_map(iowa, existing_plan = cd_2010, pop_tol=0.01)
out <- iowa_map %>% redist.plot.map(shp = ., plan = get_existing(.))
expect_true('ggplot' %in% class(out))
out <- iowa_map %>% redist.plot.map(shp = ., plan = cd_2010)
expect_true('ggplot' %in% class(out))
out <- iowa_map %>% redist.plot.map(shp = ., plan = cd_2010, fill = white)
expect_true('ggplot' %in% class(out))
})
test_that("redist.plot.adj works", {
out <- redist.plot.adj(shp = iowa, plan = iowa$cd_2010)
expect_true('ggplot' %in% class(out))
  iowa_map <- redist_map(iowa, existing_plan = cd_2010, pop_tol = 0.01)
out <- iowa_map %>% redist.plot.adj(shp = ., plan = get_existing(.))
expect_true('ggplot' %in% class(out))
  out <- iowa_map %>% redist.plot.adj(shp = ., plan = cd_2010)
expect_true('ggplot' %in% class(out))
})
--- /tests/testthat/test_plots.R (LiYao-sfu/redist, no license) ---
\name{houseVotes}
\alias{houseVotes}
\docType{data}
\title{
Congressional Voting Records Data}
\description{
1984 United States Congressional Voting Records for each of the U.S. House of
Representatives Congressmen on the 16 key votes identified by the
Congressional Quarterly Almanac.
}
\usage{data(houseVotes)}
\format{
A data.frame with 435 rows and 17 columns (16 qualitative variables and 1 classification variable).
}
\details{
The data collect 1984 United States Congressional Voting Records for each of the 435 U.S. House of Representatives Congressmen on the 16 key votes identified by the Congressional Quarterly Almanac (CQA). The variable \code{class} splits the observations into \code{democrat} and \code{republican}. The qualitative variables refer to the votes on \code{handicapped-infants}, \code{water-project-cost-sharing}, \code{adoption-of-the-budget-resolution}, \code{physician-fee-freeze}, \code{el-salvador-aid}, \code{religious-groups-in-schools}, \code{anti-satellite-test-ban}, \code{aid-to-nicaraguan-contras}, \code{mx-missile}, \code{immigration}, \code{synfuels-corporation-cutback}, \code{education-spending}, \code{superfund-right-to-sue}, \code{crime}, \code{duty-free-exports}, and \code{export-administration-act-south-africa}. All these 16 variables are objects of class \code{factor} with three levels according to the CQA scheme: \code{y} refers to the types of votes ''voted for'', ''paired for'' and ''announced for''; \code{n} to ''voted against'', ''paired against'' and ''announced against''; \code{yn} to ''voted present'', ''voted present to avoid conflict of interest'' and ''did not vote or otherwise make a position known''.}
\source{
https://archive.ics.uci.edu/ml/datasets/congressional+voting+records
}
\references{
Schlimmer, J.C., 1987. Concept acquisition through representational adjustment. Doctoral dissertation, Department of Information and Computer Science, University of California, Irvine, CA.
}
\author{Paolo Giordani, Maria Brigida Ferraro, Alessio Serafini}
\seealso{\code{\link{NEFRC}}, \code{\link{NEFRC.noise}}}
\examples{
data(houseVotes)
X <- houseVotes[, -1]
class <- houseVotes[, 1]
}
\keyword{data}
\keyword{multivariate}
--- /man/houseVotes.Rd (cran/fclust, no license) ---
library(nws)
### Name: batchNodeList
### Title: NodeList Functions
### Aliases: batchNodeList sgeNodeList lsfNodeList pbsNodeList
### Keywords: utilities
### ** Examples
Sys.setenv(LSB_HOSTS="node1 node2 node3")
batchNodeList()
--- /data/genthat_extracted_code/nws/examples/batchNodeList.Rd.R (surayaaramli/typeRrh, no license) ---
# Thin R wrapper around the compiled AA+ astronomy routine: returns Jupiter's
# ecliptic longitude for the given Julian Day number JD.
CAAJupiter_EclipticLongitude <-
function(JD){
    .Call("CAAJupiter_EclipticLongitude", JD)
}
--- /R/CAAJupiter_EclipticLongitude.R (helixcn/skycalc, no license) ---